Process raw HTTP request to python requests - python-3.x

I'd like to pass a raw HTTP request like:
GET /foo/bar HTTP/1.1
Host: example.org
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; fr; rv:1.9.2.8) Gecko/20100722 Firefox/3.6.8
Accept: */*
Accept-Language: fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded
X-Requested-With: XMLHttpRequest
Referer: http://example.org/test
Cookie: foo=bar; lorem=ipsum;
and generate the corresponding Python requests code, such as:
import requests
burp0_url = "http://example.org:80/foo/bar"
burp0_cookies = {"foo": "bar", "lorem": "ipsum"}
burp0_headers = {"User-Agent": "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; fr; rv:1.9.2.8) Gecko/20100722 Firefox/3.6.8", "Accept": "*/*", "Accept-Language": "fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3", "Accept-Encoding": "gzip,deflate", "Accept-Charset": "ISO-8859-1,utf-8;q=0.7,*;q=0.7", "Keep-Alive": "115", "Connection": "keep-alive", "Content-Type": "application/x-www-form-urlencoded", "X-Requested-With": "XMLHttpRequest", "Referer": "http://example.org/test"}
requests.get(burp0_url, headers=burp0_headers, cookies=burp0_cookies)
Is there a library for that?

I could not find an existing library that does this conversion, but there is a Python library to convert curl commands to Python requests code.
https://github.com/spulec/uncurl
e.g.
import uncurl
print(uncurl.parse('curl --header "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7" --compressed --header "Accept-Language: fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3" --header "Connection: keep-alive" --header "Content-Type: application/x-www-form-urlencoded" --cookie "foo=bar; lorem=ipsum;" --header "Keep-Alive: 115" --header "Referer: http://example.org/test" --user-agent "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; fr; rv:1.9.2.8) Gecko/20100722 Firefox/3.6.8" --header "X-Requested-With: XMLHttpRequest" https://example.org/foo/bar '))
I haven't found a Python library to transform raw HTTP into such a curl command. However, the h2c Perl program does it. For example:
$ cat basic
GET /index.html HTTP/2
Host: example.com
Authorization: Basic aGVsbG86eW91Zm9vbA==
Accept: */*
$ ./h2c < basic
curl --http2 --header User-Agent: --user "hello:youfool" https://example.com/index.html
You could either call it from your Python script (a sketch follows), use a Python-Perl bridge, or try to port it.
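For instance, a minimal sketch of shelling out to the h2c script from Python, assuming the script is executable in the current directory (the path and the sample request are placeholders):
import subprocess

# Raw HTTP request to convert; taken from the h2c example above.
raw_request = (
    "GET /index.html HTTP/2\r\n"
    "Host: example.com\r\n"
    "Authorization: Basic aGVsbG86eW91Zm9vbA==\r\n"
    "Accept: */*\r\n"
    "\r\n"
)

# h2c reads the raw request on stdin and prints a curl command on stdout.
result = subprocess.run(
    ["./h2c"],             # path to the Perl script; adjust as needed
    input=raw_request,
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)       # the generated curl command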
Postman also allows you to convert raw HTTP requests directly to Python requests code, using its code snippet generator, although it seems this can only be done via the GUI. It's also not open source, so you can't access the code that does the transformation.

I needed something that can parse a raw request into a requests call and couldn't find it, so I ended up writing it in a gist:
import requests

CRLF = '\r\n'
DEFAULT_HTTP_VERSION = 'HTTP/1.1'

class RequestParser(object):
    def __parse_request_line(self, request_line):
        request_parts = request_line.split(' ')
        self.method = request_parts[0]
        self.url = request_parts[1]
        self.protocol = request_parts[2] if len(request_parts) > 2 else DEFAULT_HTTP_VERSION

    def __init__(self, req_text):
        req_lines = req_text.split(CRLF)
        self.__parse_request_line(req_lines[0])
        ind = 1
        self.headers = dict()
        while ind < len(req_lines) and len(req_lines[ind]) > 0:
            colon_ind = req_lines[ind].find(':')
            header_key = req_lines[ind][:colon_ind]
            header_value = req_lines[ind][colon_ind + 1:].strip()
            self.headers[header_key] = header_value
            ind += 1
        ind += 1
        self.data = req_lines[ind:] if ind < len(req_lines) else None
        self.body = CRLF.join(self.data) if self.data is not None else ''

    def __str__(self):
        headers = CRLF.join(f'{key}: {self.headers[key]}' for key in self.headers)
        return f'{self.method} {self.url} {self.protocol}{CRLF}' \
               f'{headers}{CRLF}{CRLF}{self.body}'

    def to_request(self):
        # Pass the joined body string rather than the raw list of lines.
        req = requests.Request(method=self.method,
                               url=self.url,
                               headers=self.headers,
                               data=self.body)
        return req
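A quick usage sketch, assuming the request line carries an absolute URL (the parser stores the request-target verbatim, so a bare path such as /foo/bar would need the Host header prepended before requests could send it):
raw = (
    "GET http://example.org/foo/bar HTTP/1.1\r\n"
    "Host: example.org\r\n"
    "Accept: */*\r\n"
    "\r\n"
)
parsed = RequestParser(raw)
prepared = parsed.to_request().prepare()
with requests.Session() as session:
    response = session.send(prepared)
print(response.status_code)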

Related

upload audio file with python

I need to upload an audio file with Python, but I'm having difficulties.
If I ngrep the upload, I see something like
T 172.31.41.159:15561 -> 172.31.39.127:80 [AP] #324
POST /v1/wizard-campaign/18cb83c1-b6a3-11ec-a986-02ca52d15783/media-audio HTTP/1.1.
Host: whatever.
Content-Length: 70535.
authorization: Bearer XXXXXXX.
user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.87 Safari/537.36.
content-type: multipart/form-data; boundary=----WebKitFormBoundaryWVhfRY6Fs7rKntcI.
origin: https://whatever.
referer: https://whatever/.
accept-encoding: gzip, deflate, br.
accept-language: en-US,en;q=0.9.
.
T 172.31.41.159:15561 -> 172.31.39.127:80 [A] #326
------WebKitFormBoundaryWVhfRY6Fs7rKntcI.
Content-Disposition: form-data; name="mediaFile"; filename="mensaje-de-saludo.mp3".
Content-Type: audio/mpeg.
.
ID3......!TXXX......
I tried with
audioDataJSON = {
    "originalName": filename,
    "time": math.ceil(audio.info.length)
}
file = { open(filename, 'rb') }
audioData = json.dumps(audioDataJSON)
resp = requests.post(base_url + "/v1/wizard-campaign/" + id + "/media-audio", headers=headers, data=audioData, files=file)
But it's failing with "too many values to unpack".
Any help is greatly appreciated!
David
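For what it's worth, the "too many values to unpack" here comes from files being a set rather than a mapping: requests expects files to map form-field names to file objects or (filename, fileobj, content_type) tuples. A hedged sketch, using the mediaFile field name and MIME type visible in the ngrep capture (base_url, campaign_id, and headers are placeholders standing in for the question's values):
import requests

base_url = "https://whatever"          # placeholder host from the capture
campaign_id = "18cb83c1-b6a3-11ec-a986-02ca52d15783"
headers = {"authorization": "Bearer XXXXXXX"}

files = {
    # field name -> (filename, file object, content type), matching the capture
    "mediaFile": ("mensaje-de-saludo.mp3",
                  open("mensaje-de-saludo.mp3", "rb"),
                  "audio/mpeg"),
}
resp = requests.post(
    base_url + "/v1/wizard-campaign/" + campaign_id + "/media-audio",
    headers=headers,
    files=files,   # requests builds the multipart/form-data body itself
)
Note that requests will not accept a string data body together with files; if the originalName/time metadata must be sent too, whether it belongs in extra form fields depends on the API.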

Curl gives response but python does not and the request call does not terminate?

I am trying the following curl request:
curl 'https://www.nseindia.com/api/historical/cm/equity?symbol=COALINDIA&series=\[%22EQ%22\]&from=03-05-2020&to=03-05-2021&csv=true' \
-H 'authority: www.nseindia.com' \
-H 'accept: */*' \
-H 'user-agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.182 Safari/537.36' \
-H 'x-requested-with: XMLHttpRequest' \
-H 'sec-gpc: 1' \
-H 'sec-fetch-site: same-origin' \
-H 'sec-fetch-mode: cors' \
-H 'sec-fetch-dest: empty' \
-H 'referer: https://www.nseindia.com/get-quotes/equity?symbol=COALINDIA' \
-H 'accept-language: en-GB,en-US;q=0.9,en;q=0.8' \
-H 'cookie: ak_bmsc=2D5CCD6F330B77016DD02ADFD8BADB8A58DDD69E733C0000451A9060B2DF0E5C~pllIy1yQvFABwPqSfaqwV4quP8uVOfZBlZe9dhyP7+7vCW/YfXy32hQoUm4wxCSxUjj8K67PiZM+8wE7cp0WV5i3oFyw7HRmcg22nLtNY4Wb4xn0qLv0kcirhiGKsq4IO94j8oYTZIzN227I73UKWQBrCSiGOka/toHASjz/R10sX3nxqvmMSBlWvuuHkgKOzrkdvHP1YoLPMw3Cn6OyE/Z2G3oc+mg+DXe8eX1j8b9Hc=; nseQuoteSymbols=[{"symbol":"COALINDIA","identifier":null,"type":"equity"}]; nsit=X5ZCfROTTuLVwZzLBn7OOtf0; AKA_A2=A; bm_mi=6CE0B82205ACE5A1F72250ACDDFF563E~LZ4/HQ257rSMBPCrxy0uSDvrSxj4hHpLQqc8R5JZOzUZYo1OqZg5Q/GOt88XNtMbsWM8bB22vtCXzvksGwPcC/bH2nPFEZr0ci6spQ4GOpCa/TM7soc02HVf0tyDTkmg/ZdLZlWzond4r0vn+QpSB7f3fiVza1Gdx9OaFL1i3rvqe1OKmFONreHEue20PL0hlREVWeLcFM/5DxKArPwzCSopPp62Eea1510iivl7GmY=; nseappid=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJhcGkubnNlIiwiYXVkIjoiYXBpLm5zZSIsImlhdCI6MTYyMDA2MTQ5OSwiZXhwIjoxNjIwMDY1MDk5fQ.YBTQ0MqRayD3QBM3V6zUt5zbRRICkbIhWWNedkDYrdU; bm_sv=C49B743B48F174C77F3DDAD188AA6D87~bm5TD36snlaRLx9M5CS+FOUicUcbVV3OIKjZU2WLwd1PtHYUum7hnBfYeUCDv+5Xdb9ADklnmm1cwZGJJbiBstcA6c5vju53C7aTFBorl8SJZjBN/4ku61oz0ncrQYCaSxkFGkRRY9VMWm6SpQwHXfMsUzc/Qk7301zs7KZuGCY=' \
--compressed
This gives us the required response (example below):
"Date ","series ","OPEN ","HIGH ","LOW ","PREV. CLOSE ","ltp ","close ","vwap ","52W H","52W L ","VOLUME ","VALUE ","No of trades "
"03-May-2021","EQ","133.00","133.45","131.20","133.05","132.20","132.20","132.21","163.00","109.55",10262391,"1,356,811,541.80",59409
But if I use the following Python script to get the data:
import requests

headers = {
    'authority': 'www.nseindia.com',
    'accept': '*/*',
    'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.182 Safari/537.36',
    'x-requested-with': 'XMLHttpRequest',
    'sec-gpc': '1',
    'sec-fetch-site': 'same-origin',
    'sec-fetch-mode': 'cors',
    'sec-fetch-dest': 'empty',
    'referer': 'https://www.nseindia.com/get-quotes/equity?symbol=COALINDIA',
    'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8',
    'cookie': 'ak_bmsc=2D5CCD6F330B77016DD02ADFD8BADB8A58DDD69E733C0000451A9060B2DF0E5C~pllIy1yQvFABwPqSfaqwV4quP8uVOfZBlZe9dhyP7+7vCW/YfXy32hQoUm4wxCSxUjj8K67PiZM+8wE7cp0WV5i3oFyw7HRmcg22nLtNY4Wb4xn0qLv0kcirhiGKsq4IO94j8oYTZIzN227I73UKWQBrCSiGOka/toHASjz/R10sX3nxqvmMSBlWvuuHkgKOzrkdvHP1YoLPMw3Cn6OyE/Z2G3oc+mg+DXe8eX1j8b9Hc=; nseQuoteSymbols=[{"symbol":"COALINDIA","identifier":null,"type":"equity"}]; nsit=X5ZCfROTTuLVwZzLBn7OOtf0; AKA_A2=A; bm_mi=6CE0B82205ACE5A1F72250ACDDFF563E~LZ4/HQ257rSMBPCrxy0uSDvrSxj4hHpLQqc8R5JZOzUZYo1OqZg5Q/GOt88XNtMbsWM8bB22vtCXzvksGwPcC/bH2nPFEZr0ci6spQ4GOpCa/TM7soc02HVf0tyDTkmg/ZdLZlWzond4r0vn+QpSB7f3fiVza1Gdx9OaFL1i3rvqe1OKmFONreHEue20PL0hlREVWeLcFM/5DxKArPwzCSopPp62Eea1510iivl7GmY=; nseappid=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJhcGkubnNlIiwiYXVkIjoiYXBpLm5zZSIsImlhdCI6MTYyMDA2MTQ5OSwiZXhwIjoxNjIwMDY1MDk5fQ.YBTQ0MqRayD3QBM3V6zUt5zbRRICkbIhWWNedkDYrdU; bm_sv=C49B743B48F174C77F3DDAD188AA6D87~bm5TD36snlaRLx9M5CS+FOUicUcbVV3OIKjZU2WLwd1PtHYUum7hnBfYeUCDv+5Xdb9ADklnmm1cwZGJJbiBstcA6c5vju53C7aTFBorl8SJZjBN/4ku61oz0ncrQYCaSxkFGkRRY9VMWm6SpQwHXfMsUzc/Qk7301zs7KZuGCY=',
}
params = (
    ('symbol', 'COALINDIA'),
    ('series', '/["EQ"/]'),
    ('from', '30-04-2021'),
    ('to', '03-05-2021'),
    ('csv', 'true'),
)
response = requests.get('https://www.nseindia.com/api/historical/cm/equity', headers=headers, params=params)
It gets stuck on the last line. I am using Python 3.9 and urllib3. I'm not sure what the problem is. This URL downloads a CSV file from the website.
You have to jump through some hoops with Python to get the file you're after. Mainly, you need to get the cookie part of the request header right, otherwise you'll keep getting a 401 code.
First, you need to get the regular cookies from the authority www.nseindia.com. Then, you need to get the bm_sv cookie from https://www.nseindia.com/json/quotes/equity-historical.json. Finally, add something called nseQuoteSymbols.
Glue all that together and make the request to get the file. Here's how:
from urllib.parse import urlencode
import requests

headers = {
    'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) '
                  'AppleWebKit/537.36 (KHTML, like Gecko) '
                  'Chrome/88.0.4324.182 Safari/537.36',
    'x-requested-with': 'XMLHttpRequest',
    'referer': 'https://www.nseindia.com/get-quotes/equity?symbol=COALINDIA',
}

payload = {
    "symbol": "COALINDIA",
    "series": '["EQ"]',
    "from": "04-04-2021",
    "to": "04-05-2021",
    "csv": "true",
}

api_endpoint = "https://www.nseindia.com/api/historical/cm/equity?"
nseQuoteSymbols = 'nseQuoteSymbols=[{"symbol":"COALINDIA","identifier":null,"type":"equity"}]; '

def make_cookies(cookie_dict: dict) -> str:
    return "; ".join(f"{k}={v}" for k, v in cookie_dict.items())

with requests.Session() as connection:
    authority = connection.get("https://www.nseindia.com", headers=headers)
    historical_json = connection.get("https://www.nseindia.com/json/quotes/equity-historical.json", headers=headers)
    bm_sv_string = make_cookies(historical_json.cookies.get_dict())
    # Note the explicit "; " separator so the three cookie chunks stay valid.
    cookies = make_cookies(authority.cookies.get_dict()) + "; " + nseQuoteSymbols + bm_sv_string
    connection.headers.update({**headers, **{"cookie": cookies}})
    the_real_slim_shady = connection.get(f"{api_endpoint}{urlencode(payload)}")
    csv_file = the_real_slim_shady.headers["Content-disposition"].split("=")[-1]
    with open(csv_file, "wb") as f:
        f.write(the_real_slim_shady.content)
Output -> a .csv file containing the requested historical data.

How can I specify the exact http method with python requests?

In Burp Suite the first line of a captured request is usually GET / HTTP/1.1. However, I am currently practicing Host header injection using the method of supplying an absolute URL, in order to send something like this:
GET https://vulnerable-website.com/ HTTP/1.1
Host: bad-stuff-here
In Python I am using the requests library and am unable to specify the exact GET request I need.
import requests
burp0_url = "https://vulnerable-website.com:443/"
burp0_cookies = {[redacted]}
burp0_headers = {"Host": "bad-stuff-here", "User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language": "en-US,en;q=0.5", "Accept-Encoding": "gzip, deflate", "Referer": "https://vulnerable-website.com/", "Connection": "close", "Upgrade-Insecure-Requests": "1"}
output = requests.get(burp0_url, headers=burp0_headers, cookies=burp0_cookies)
print(output, output.text)
I have tried specifying the GET request in the header dictionary (header = {"GET": " / HTTP/1.1", ...}), however this only results in a "GET: /" header on the 6th line of the request being sent, not a changed request line:
GET / HTTP/1.1
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0
Accept-Encoding: gzip, deflate
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Connection: close
GET: /
Host: bad-stuff-here
Accept-Language: en-US,en;q=0.5
Referer: https://vulnerable-website.com/
Upgrade-Insecure-Requests: 1
Cookie: [redacted]
This is a very specific problem and I'm not sure if anyone has had the same issues but any help is appreciated. Maybe a workaround with urllib or something I'm missing. Thanks.
requests uses urllib3 under the hood.
You have to craft the request yourself, because none of the clients (urllib, requests, http.client) will allow you to insert control characters, by design.
You can use a plain socket for this:
import socket
from contextlib import closing
from functools import partial

msg = 'GET / HTTP/1.1\r\n\r\n'
s = socket.create_connection(("vulnerable-website.com", 80))
with closing(s):
    s.send(msg.encode())                                # sockets take bytes in Python 3
    buf = b''.join(iter(partial(s.recv, 4096), b''))    # read until the server closes
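For the question's test case, only the raw message needs to change; a hypothetical payload with the absolute URL in the request line and the injected Host header, reusing the socket code above:
msg = (
    'GET https://vulnerable-website.com/ HTTP/1.1\r\n'
    'Host: bad-stuff-here\r\n'
    'Connection: close\r\n'
    '\r\n'
)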

How to send http 2.0 pseudo headers in python3

Basically, I tried sending HTTP/2.0 headers with hyper for Python:
https://hyper.readthedocs.io/en/latest/
https://github.com/python-hyper/hyper
I mounted HTTP20Adapter on my requests.Session, but it didn't work as expected.
First, let me explain that the "from tls_version import MyAdapter" used later in the main code refers to this tls_version.py file:
from requests.adapters import HTTPAdapter
from urllib3.poolmanager import PoolManager
import ssl

class MyAdapter(HTTPAdapter):
    def init_poolmanager(self, connections, maxsize, block=False):
        self.poolmanager = PoolManager(num_pools=connections,
                                       maxsize=maxsize,
                                       block=block,
                                       ssl_version=ssl.PROTOCOL_TLSv1_2)
It just forces TLS 1.2, nothing more.
The main code is below. Basically, I'm trying to send a GET call with HTTP/2.0 pseudo-headers by mounting the hyper adapter on a requests.Session, with control over header order via collections.OrderedDict:
import requests
from tls_version import MyAdapter
import json
import collections
from userdata import UserData
from hyper.contrib import HTTP20Adapter

headers2 = [('Upgrade-Insecure-Requests', '1'),
            ('User-Agent', 'Mozilla/5.0 (Linux; Android 5.1.1; google Pixel 2 Build/LMY47I; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/74.0.3729.136 Mobile Safari/537.36'),
            ('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3'),
            ('Accept-Encoding', 'gzip, deflate'),
            ('Accept-Language', 'es-ES,es;q=0.9,en-US;q=0.8,en;q=0.7'),
            ]

class post_calls():
    def start(self, headers_pass, body_pass, params, url, method):
        proxies = {
            'http': ip,
            'https': ip
        }
        body = str(body_pass)
        # send the request
        session = requests.session()
        session.mount('https://', MyAdapter())
        session.headers = collections.OrderedDict(headers_pass)
        if method == 'get':
            q = 'https://' + server + '.' + host
            q = q.replace('.www.', '.')
            session.mount('https://', HTTP20Adapter())
            print('q=' + q)
            response = session.get(url, proxies=proxies, params=params, verify=charlesproxy)

def login_world2(sid):
    a = post_calls()
    q = 'https://' + server + '.' + host + '/login.php?mobile&sid=' + sid + '&2'
    q = q.replace('.www.', '.')
    params = {}
    url = q
    body = '0'
    login = a.start(headers2, body, params, url, 'get')
    return login

if __name__ == "__main__":
    login_get = login_world2(sid)   # note: was login_world(sid), which is undefined
    print(login_get)
These are the headers this script sends:
:method: GET
:scheme: https
:authority: server.url.com
:path: /login.php?mobile&sid=577f0967545d6acec94716d265dd4867fa4db4a446326ecde7486a97feede14702f4911438f4a4cd097677f0dd962786ef14b3f16f1184ee63a506155d522f53&2
upgrade-insecure-requests: 1
user-agent: Mozilla/5.0 (Linux; Android 5.1.1; google Pixel 2 Build/LMY47I; wv) AppleWebKit/537.36 (KHTML
user-agent: like Gecko) Version/4.0 Chrome/74.0.3729.136 Mobile Safari/537.36
accept: text/html
accept: application/xhtml+xml
accept: application/xml;q=0.9
accept: image/webp
accept: image/apng
accept: */*;q=0.8
accept: application/signed-exchange;v=b3
accept-encoding: gzip
accept-encoding: deflate
accept-language: es-ES
accept-language: es;q=0.9
accept-language: en-US;q=0.8
accept-language: en;q=0.7
And this is what I need to send, because if I send them as above, like this script does, the server rejects my GET requests:
:method: GET
:authority: server.url.com
:scheme: https
:path: /login.php?mobile&sid=2ea530a62cb63af6c14be116b7df86ad85cd77c9a11aa3c881b3a460e6c14fbd1fd8b79bd66c9782073705cdff25e890e65b5aeb852fde24c2d54a6e4ee49890&2
upgrade-insecure-requests: 1
user-agent: Mozilla/5.0 (Linux; Android 5.1.1; google Pixel 2 Build/LMY47I; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/74.0.3729.136 Mobile Safari/537.36
accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3
accept-encoding: gzip, deflate
accept-language: es-ES,es;q=0.9,en-US;q=0.8,en;q=0.7
It seems that for each ',' I put in the headers dict, hyper creates a new header instead of sending them in the same value. Without hyper it works normally, but I need to send those HTTP/2.0 headers, and without hyper or some other alternative I cannot: requests doesn't have support for HTTP/2.
Set the headers on the HTTP20Adapter instead of on the session, and it should work:
adapter = HTTP20Adapter(headers=headers)
session.mount(prefix='https://', adapter=adapter)
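A minimal sketch along those lines, assuming (as the answer does) that HTTP20Adapter accepts a headers mapping, and using a placeholder URL for the question's server:
import collections
import requests
from hyper.contrib import HTTP20Adapter

ordered_headers = collections.OrderedDict([
    ('Upgrade-Insecure-Requests', '1'),
    ('Accept-Encoding', 'gzip, deflate'),
    ('Accept-Language', 'es-ES,es;q=0.9,en-US;q=0.8,en;q=0.7'),
])

session = requests.session()
adapter = HTTP20Adapter(headers=ordered_headers)   # headers on the adapter, not the session
session.mount(prefix='https://', adapter=adapter)
response = session.get('https://server.example.com/login.php?mobile')  # placeholder URL
print(response.status_code)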

Trouble converting a curl command to a request-promise command in nodejs

So I know that this curl command works; when I run it, I get the HTML document that should appear after login:
curl 'https://login.url'
-H 'Pragma: no-cache'
-H 'Origin: https://blah'
-H 'Accept-Encoding: gzip, deflate, br'
-H 'Accept-Language: en-US,en;q=0.8'
-H 'Upgrade-Insecure-Requests: 1'
-H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.76 Safari/537.36'
-H 'Content-Type: application/x-www-form-urlencoded'
-H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8'
-H 'Cache-Control: no-cache'
-H 'Referer: https://blash.jsp'
-H 'Connection: keep-alive'
--data 'Username=mary&Password=<PASSWORD>&Go=Login&Action=Srh-1-1' --compressed ;
This is my attempt at converting it to a request-promise call in Node. When I run it, I get some weird characters back.
var url = 'https://login.url';
var headers = {
    'Pragma': 'no-cache',
    'Origin': 'https://blah',
    'Accept-Encoding': 'gzip, deflate, br',
    'Accept-Language': 'en-US,en;q=0.8',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.76 Safari/537.36',
    'Content-Type': 'application/x-www-form-urlencoded',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'Cache-Control': 'no-cache',
    'Referer': 'https://blash.jsp',
    'Connection': 'keep-alive',
};
var data = {
    'Username': 'mary',
    'Password': '<Password>',
    'Go': 'Login',
    'Action': 'Srh-1-1'
};
var html = yield request.post({url: url, form: data, headers: headers});
Here's an example of the weird characters:
�X}o�6����4\��b�N��%
What am I doing incorrectly?
You need to tell request that you accept compression by setting the gzip option to true.
Be aware that depending on how/where you get the data, you may get the compressed or uncompressed response. Check the documentation for request for details (search for "compr" on the page).
