Why Python devs love Requests

Requests is the HTTP client that I, and every Python developer I know, reach for whenever we need to pull data from remote sources. It saw 121M downloads in June 2021 alone according to PePy and has 45.5k GitHub stars at the time of writing. Developers love making requests with Requests!

At the time of Requests' original release in 2011, urllib3 was the HTTP library of choice, and Requests actually uses it as a dependency. Given urllib3 has only a measly 2.7k stars, let's take a look at why a Python developer would choose Requests over it through some code examples.

We’ll base our examples on those found in the urllib3 documentation and recreate each one using Requests.

Making a GET request

urllib3:

import urllib3
http = urllib3.PoolManager()
resp = http.request("GET", "http://httpbin.org/robots.txt")
resp.status
200
resp.data
b'User-agent: *\nDisallow: /deny\n'

Requests:

import requests

resp = requests.get("http://httpbin.org/robots.txt")

resp.status_code
200
resp.text
'User-agent: *\nDisallow: /deny\n'

Making a POST Request (Form Data)

urllib3:

import urllib3

http = urllib3.PoolManager()
resp = http.request(
    "POST",
    "https://httpbin.org/post",
    fields={"hello": "world"}
)

print(resp.data)

Requests:

import requests

resp = requests.post(
    "https://httpbin.org/post",
    data={"hello": "world"}
)

print(resp.text)

Making a POST Request (JSON Data)

urllib3:

import json
import urllib3

data = {"attribute": "value"}

# Encoding the data in JSON format.
encoded_data = json.dumps(data).encode("utf-8")

http = urllib3.PoolManager()
resp = http.request(
    "POST",
    "https://httpbin.org/post",
    body=encoded_data, # Embedding JSON data into request body.
    headers={"Content-Type": "application/json"}
)

json.loads(resp.data.decode("utf-8"))

Requests:

import requests

resp = requests.post(
    "https://httpbin.org/post",
    json={"attribute": "value"}
)

resp.json()

Differences to urllib3

One obvious difference here is that we no longer need to create a PoolManager to make our requests. Having to define this in every script could get tedious, so Requests does away with it. Oddly, the urllib3 docs state there is a top-level urllib3.request() to save us doing this, though the examples didn't work when I tested with version 1.26.6.
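For reference, that module-level helper does appear to work in later urllib3 releases; a minimal sketch, assuming a version that ships urllib3.request() (2.0+):

import urllib3

# Uses a shared, module-level PoolManager under the hood (urllib3 >= 2.0).
resp = urllib3.request("GET", "https://httpbin.org/robots.txt")

resp.status
200
resp.data
b'User-agent: *\nDisallow: /deny\n'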

HTTP Methods

The major difference here is simple, but it's ultimately what makes Requests so much easier to work with - the HTTP method interface. There's only a finite number of HTTP request methods, so they might as well all be functions of our client. It also makes the library very Pythonic to work with - I want to call a GET on this URL, so I supply the URL as an argument to .get(). Simple. There's also less potential for mistakes, since we call a named method rather than passing the method as a string argument. It is this interface that makes the code so beautifully simple when compared to urllib3.
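To illustrate (this isn't from the urllib3 docs examples - just the standard Requests API), the common verbs are all top-level functions:

import requests

# One function per HTTP verb - no method strings to mistype.
requests.get("https://httpbin.org/get")
requests.post("https://httpbin.org/post", data={"hello": "world"})
requests.put("https://httpbin.org/put", data={"hello": "world"})
requests.delete("https://httpbin.org/delete")
requests.head("https://httpbin.org/get")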

Built-In JSON Encoding/Decoding

Another difference is that Requests handles JSON encoding and decoding for us - common operations we're going to be doing a lot when talking to JSON services. They're simple enough for any dev to write, but it's annoying that every dev who uses the library has to write them themselves. All those utils.py files littering our projects will have a couple fewer functions in them. Look how much simpler the final example becomes as a result. All the cruft is gone!
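The sort of helper that ends up in those utils.py files (a hypothetical sketch, not code from any particular project) boils down to what resp.json() already does:

import json
import urllib3

def get_json(http, url):
    # Hypothetical helper: fetch a URL and decode its JSON body.
    resp = http.request("GET", url)
    return json.loads(resp.data.decode("utf-8"))

http = urllib3.PoolManager()
data = get_json(http, "https://httpbin.org/json")

# With Requests, the helper isn't needed:
import requests
data = requests.get("https://httpbin.org/json").json()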

Conclusion

Of course, other HTTP clients are available - and depending on your particular use case, one of them may well be more suitable. Since Requests came along, other libraries (such as the asynchronous httpx) have adopted the same simple, elegant API.
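For instance, the synchronous httpx interface is close to a drop-in for the Requests examples above (a quick sketch, assuming httpx is installed):

import httpx

resp = httpx.get("https://httpbin.org/robots.txt")

resp.status_code
200
resp.text
'User-agent: *\nDisallow: /deny\n'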

Requests has definitely won the HTTP client game for Python and it’s likely that we’ll be using its familiar simple interface for many more years - even if it isn’t Requests that’s providing it.
