Python is in the midst of a resurgence. It never went away, but usage now grows like never before. With machine learning developers and data scientists relying on Python, much of the web development ecosystem around the language continues to grow.
One thing all three of these specializations share is their reliance on APIs. Pulling in data and connecting to external services are essential parts of working in any language. In this article, we'll look at the primary libraries for making HTTP requests, along with some common use cases that allow you to connect to an API in Python. Before that, we should ask an important question: is Python a good language for making API calls?
It seems like a strange question, but given the large web presence of Node.js and Ruby, you may think that Python isn't well suited for making API calls. This isn't true. In fact, Python has had a long and dedicated presence on the web, most notably through its Flask and Django frameworks.
As Python is a powerful, accessible way to manipulate data, it makes sense to also use it to acquire the data sources. This is where API calls come in. Let's start with the most popular Python HTTP library for making API calls: Requests.
Requests is the leading library that developers use for making API calls in Python. It offers an interface to make HTTP requests synchronously. Let's get right into some common types of requests you can make with Requests. The following examples all assume that your project includes Requests. You can follow the official installation instructions, but the gist is:
Install it via pip or pipenv:
pip install requests
Then, make sure to import requests into your project:
import requests
The simplest GET request is intuitive:
response = requests.get('https://example.com')
As we can see with the get method above, Requests offers shortcut methods for the HTTP verbs, including POST, PUT, DELETE, HEAD, and OPTIONS.
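As a quick sketch, we can confirm the verb-to-method mapping without touching the network by building prepared requests instead of sending them (the URL below is a placeholder):

```python
import requests

# Each shortcut method issues a request with the matching HTTP verb.
# Preparing a Request (rather than sending it) lets us inspect the
# method that would go over the wire; no network call is made here.
for method in ['POST', 'PUT', 'DELETE', 'HEAD', 'OPTIONS']:
    prepared = requests.Request(method, 'https://example.com').prepare()
    print(prepared.method)  # POST, PUT, DELETE, HEAD, OPTIONS in turn
```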
The previous request is pretty simple. Let's look at more complex requests. Often, an API's documentation will require that you pass query parameters to a specific endpoint. To pass query parameters, we can hand them to get via the params argument.
response = requests.get('https://example.com', params={'name': 'Bearer'})
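To see how Requests encodes those parameters into the URL, we can again prepare the request without sending it; this is purely an illustrative sketch with a placeholder URL:

```python
import requests

# Preparing the request shows the final URL with the query string
# appended; nothing is sent over the network.
req = requests.Request('GET', 'https://example.com', params={'name': 'Bearer'})
prepared = req.prepare()
print(prepared.url)  # https://example.com/?name=Bearer
```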
The response variable contains the data returned by the API in our examples. There are four primary ways to access the data:
- As text, with response.text
- As bytes, with response.content
- As JSON, with response.json()
- Or as the raw response, with response.raw
In addition to the body of the response, we can also access the status code with response.status_code, the headers with response.headers, and so on. You can find a full list of properties and methods available on Response in the requests.Response documentation.
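To illustrate those accessors without a live endpoint, the sketch below builds a Response object by hand; note that setting these attributes directly is for demonstration only, since normally a Response comes back from a real request:

```python
import requests

# Construct a Response manually, purely to demonstrate the accessors.
# In real code these fields are populated by requests itself.
response = requests.models.Response()
response.status_code = 200
response._content = b'{"name": "Bearer"}'
response.headers['Content-Type'] = 'application/json'

print(response.status_code)              # 200
print(response.content)                  # b'{"name": "Bearer"}'
print(response.json())                   # {'name': 'Bearer'}
print(response.headers['Content-Type'])  # application/json
```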
As we saw with the params argument, we can also pass headers to the request.
response = requests.get('https://example.com', headers={'example-header': 'Bearer'})
Here, we pass the headers argument with a Python dictionary of headers.
The last common API call type we'll make is a full-featured POST, with authentication. This will combine the previous headers technique with the use of the data argument.
url = 'https://example.com'
headers = {'Authorization': 'Bearer example-auth-code'}
payload = {'name': 'Mark', 'email': '[email protected]'}
response = requests.post(url, headers=headers, data=payload)
This sends the payload as form-encoded data. Most modern APIs, however, expect data as JSON. In the next example, we use the built-in json argument from Requests.
url = 'https://example.com'
headers = {'Authorization': 'Bearer example-auth-code'}
payload = {'name': 'Mark', 'email': '[email protected]'}
response = requests.post(url, headers=headers, json=payload)
This will encode the payload as JSON and automatically set the Content-Type header to application/json.
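The difference between data and json is easy to see by preparing both requests without sending them (placeholder URL, illustrative only):

```python
import requests

payload = {'name': 'Mark', 'email': '[email protected]'}

# data= sends a form-encoded body with the matching content type.
form = requests.Request('POST', 'https://example.com', data=payload).prepare()
print(form.headers['Content-Type'])  # application/x-www-form-urlencoded
print(form.body)                     # name=Mark&email=mark%40bearer.sh

# json= serializes the dict and sets the JSON content type automatically.
jsn = requests.Request('POST', 'https://example.com', json=payload).prepare()
print(jsn.headers['Content-Type'])   # application/json
```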
Requests is excellent for synchronous API calls, but sometimes your app may depend on asynchronous requests. For this, we can use an asynchronous HTTP library like aiohttp.
When making asynchronous HTTP requests, you'll need to take advantage of some newer features in Python 3. While the requests library does have variations and plugins to handle asynchronous programming, one of the more popular libraries for async is aiohttp. Used together with asyncio from the standard library, aiohttp lets us make requests asynchronously. The code is a little more complex, but it provides all the additional freedom that async calls offer.
To get started, we'll need to install aiohttp:
pip install aiohttp
We will begin with the same GET request we saw earlier. To start, import both libraries and define an async function.
import asyncio # [1]
import aiohttp

async def main(): # [2]
    async with aiohttp.ClientSession() as session: # [3]
        async with session.get('http://example.com') as resp: # [4]
            response = await resp.read() # [5]
            print(response)

asyncio.run(main()) # [6]
In the code above, we perform the following:
1. Import asyncio, which we need in order to run the async function.
2. Define main as an async function using the async def syntax.
3. Open a client session, which manages the underlying connections.
4. Perform a GET request using the session.
5. Await the response body and assign it to a variable.
6. Run the main coroutine with asyncio.run.
If you haven't worked with async in Python before, this may look strange and complicated compared to the earlier examples. The makers of aiohttp recommend creating a single session per application and reusing it for all requests, opening and closing connections on that one session. To keep our examples self-contained, I have left them in the less efficient format.
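The payoff of the extra ceremony is concurrency. As a stdlib-only sketch (using asyncio.sleep to stand in for network latency, so no real requests are made), three simulated calls complete in roughly the time of one:

```python
import asyncio
import time

async def fake_fetch(url):
    # Stand-in for an aiohttp session.get call; sleeps to simulate latency.
    await asyncio.sleep(0.1)
    return f'response from {url}'

async def main():
    urls = ['https://example.com/a', 'https://example.com/b', 'https://example.com/c']
    # gather runs the coroutines concurrently, not one after another.
    return await asyncio.gather(*(fake_fetch(u) for u in urls))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(len(results))   # 3
print(elapsed < 0.3)  # True: roughly 0.1s total instead of 0.3s sequential
```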
Next, let's look at a full-featured POST with auth headers, like in the requests example.
# ...
async def main():
    async with aiohttp.ClientSession() as session:
        async with session.post('http://example.com',
                                headers={'Authorization': 'Bearer 123456', 'Content-Type': 'application/json'},
                                json={'title': 'Try Bearer'}) as resp: # [1]
            response = await resp.json() # [2]
            print(response)

asyncio.run(main())
There are a few differences between this example and the previous one:
1. We call session.post and pass the auth header and the JSON payload directly through the headers and json arguments.
2. We await resp.json() instead of resp.read(), parsing the response body as JSON.
With these two snippets, we're able to perform the majority of common API-related tasks. For additional features like file uploading and form data, take a look at aiohttp's developer documentation.
While Requests is the most popular Python HTTP library, you may find value in alternative libraries for more specialized use cases.
With Python's power in data processing and its recent resurgence, thanks in part to the ML and data science communities, it is a great option for interacting with APIs. It is important to remember that even the most battle-tested and popular third-party APIs and services still suffer problems and outages. At Bearer, we're building tools to help manage these problems and better monitor your third-party APIs.