Python3 send requests cookies from previous call

I want to resend the cookies initialized in the first call with the second call, so that the session does not change. This is not working.
Why, and how can I solve it? Sorry, I'm new to Python.
https_url = "www.google.com"
r = requests.get(https_url)
print(r.cookies.get_dict())
#cookie = {id: abc}
response = requests.get(https_url, cookies=response.cookies.get_dict())
print(response.cookies.get_dict())
#cookie = {id: def}

You aren't necessarily doing it wrong in the way you're passing the cookies from the last response to the next request, except that:
1. "www.google.com" is not a valid URL.
2. Even if you had used http://www.google.com as the URL, the cookies returned by Google for such a GET request aren't session cookies and won't persist across requests.
3. You used the variable r to receive the return value of the first requests.get, yet you used response.cookies when making the second requests.get. A possible typo?
If all of the above are just artifacts of mocking up your real code, you should really consider using requests.Session to avoid micro-managing session cookies.
Please read requests.Session's documentation for more details.
import requests

https_url = "https://www.google.com"  # a full, scheme-qualified URL
with requests.Session() as s:
    r = s.get(https_url)
    # cookies from the first s.get are automatically passed on to the second s.get
    r = s.get(https_url)
    ...
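If you want to confirm that the cookies really persist across requests, here is a small hedged check against httpbin.org (an assumption on my part; any cookie-setting endpoint works the same way):
import requests

with requests.Session() as s:
    # httpbin sets a cookie named "sessioncookie" on this response
    s.get('https://httpbin.org/cookies/set/sessioncookie/abc123')
    # the same Session sends it back automatically on the next request
    r = s.get('https://httpbin.org/cookies')
    print(r.json())  # {'cookies': {'sessioncookie': 'abc123'}}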

Related

How to validate a Xero webhook payload with HMAC-SHA256 in Python 3

Based on the instructions here (https://developer.xero.com/documentation/webhooks/configuring-your-server) for setting up and validating the "intent to receive" for the Xero webhook:
The computed signature should match the signature in the header for a correctly signed payload.
But, using Python 3, the computed signature doesn't match the signature in the header at all. Xero sends numerous requests to the subscribing webhook URL, both correctly and incorrectly signed. In my log, all those requests returned 401. Below is my test code, which also fails to match. I don't know what is missing or what I did wrong.
Don't worry about the key being shown here; I have since generated another one, but this was the key assigned to me for hashing at the time.
Based on their instructions, running this code should make the signature match one of the headers, but it's not even close.
import base64
import hashlib
import hmac
import json

XERO_KEY = "lyXWmXrha5MqWWzMzuX8q7aREr/sCWyhN8qVgrW09OzaqJvzd1PYsDAmm7Au+oeR5AhlpHYalba81hrSTBeKAw=="

def create_sha256_signature(key, message):
    message = bytes(message, 'utf-8')
    return base64.b64encode(hmac.new(key.encode(), message,
                            digestmod=hashlib.sha256).digest()).decode()

# first request header (possibly the incorrect one)
header = "onoTrUNvGHG6dnaBv+JBJxFod/Vp0m0Dd/B6atdoKpM="
# second request header (possibly the correct one)
header = "onoTrUNvGHG6dnaBv+JBJxFodKVp0m0Dd/B6atdoKpM="

payload = {
    'events': [],
    'firstEventSequence': 0,
    'lastEventSequence': 0,
    'entropy': 'YSXCMKAQBJOEMGUZEPFZ'
}
payload = json.dumps(payload, separators=(",", ":")).strip()

signature = create_sha256_signature(XERO_KEY, payload)
if hmac.compare_digest(header, signature):
    print(True)   # the webhook view would return 200
else:
    print(False)  # the webhook view would return 401
The problem was that when I was receiving the request payload, I was using
# flask request
request.get_json()
This automatically parses the request data as JSON, which is why the calculated signature never matched.
So what I did was change the way I receive the request payload:
request.get_data()
This gets the raw data.
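To see why a parsed-then-reserialized body breaks the HMAC, compare the bytes directly; a minimal illustration with a made-up payload:
import json

raw = b'{"events":[],"firstEventSequence":0,"lastEventSequence":0,"entropy":"YSXCMKAQBJOEMGUZEPFZ"}'
reserialized = json.dumps(json.loads(raw)).encode()  # default separators insert spaces
print(raw == reserialized)  # False: the signature is computed over different bytes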
I still could not get this to work even with the OP's answer, which I found a little vague.
The method I found which worked was:
# runs inside a Flask request handler; needs: import base64, hashlib, hmac
# and: from flask import request
key = ...  # {your key}
provided_signature = request.headers.get('X-Xero-Signature')
hashed = hmac.new(bytes(key, 'utf8'), request.data, hashlib.sha256)
generated_signature = base64.b64encode(hashed.digest()).decode('utf-8')
if provided_signature != generated_signature:
    return '', 401
else:
    return '', 200
found on https://github.com/davidvartanian/xero_api_test/blob/master/webhook_server.py#L34
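Putting the pieces together, here is a minimal self-contained sketch of a Flask webhook view along those lines; the route path and the key value are placeholders of mine, not Xero's:
import base64
import hashlib
import hmac

from flask import Flask, request

app = Flask(__name__)
XERO_KEY = "..."  # your webhook signing key

@app.route('/xero-webhook', methods=['POST'])
def xero_webhook():
    provided = request.headers.get('X-Xero-Signature', '')
    # sign the raw request bytes, not a re-serialized JSON parse
    digest = hmac.new(XERO_KEY.encode(), request.get_data(), hashlib.sha256).digest()
    generated = base64.b64encode(digest).decode()
    # compare_digest avoids leaking timing information
    if hmac.compare_digest(provided, generated):
        return '', 200
    return '', 401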

Python 3 Requests: construct URI without calling GET method?

So requests is a great library, and I often use it like so:
payload = {
    # ...
}
results = requests.get(some_url, params=payload)
and requests encodes all the key-value pairs into the URI and goes ahead and makes the GET request.
Is there a way to construct the URL of results.url without having to call .get?
Yes, but you will need to use the "raw" Request object and call its prepare method. Then you will be able to grab the prepared request's url attribute.
r = requests.Request('GET', 'http://url', params={'a': 1, 'b': 2})
prepared_r = r.prepare()
print(prepared_r.url)
# http://url/?a=1&b=2
To make the request you will need a Session object:
s = requests.Session()
s.send(prepared_r)
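As a lighter-weight alternative, you can build just the URL without constructing a full Request; a small sketch relying on requests' requests.models.PreparedRequest class:
from requests.models import PreparedRequest

p = PreparedRequest()
p.prepare_url('http://url', params={'a': 1, 'b': 2})
print(p.url)
# http://url/?a=1&b=2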

Python3 http request close connection when nothing is sent

I have an API that behaves as follows:
You make a POST request and it returns "n" lines of data:{json}.
It keeps the connection open for at least 300 seconds without sending anything.
As this is very slow, I want to find a way to close the connection when nothing is being sent, or after a timer.
So, yes, it was easier than I thought; I'm going to copy-paste my code here, using the http.client library:
import http.client
import json

def asyncCall(url, data=None, timeout=300):
    conn = http.client.HTTPConnection(IP, timeout=timeout)  # IP is the API host
    conn.request("POST", url, bytes(json.dumps(data), encoding="utf-8"))
    r1 = conn.getresponse()
    while not r1.closed:
        l = r1.readline().decode("utf-8")
        yield l
This way, each line can be passed to a callback (that runs in a separate Process), and the connection closes after the timeout.
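For comparison, here is a rough equivalent using requests with stream=True; the read timeout fires when the server goes quiet, which matches the "close after a timer" requirement (the URL and payload are placeholders):
import requests

def stream_lines(url, data=None, timeout=30):
    # timeout here acts as a read timeout: it triggers when the server
    # sends nothing for `timeout` seconds, closing the idle connection
    with requests.post(url, json=data, stream=True, timeout=timeout) as r:
        try:
            for line in r.iter_lines(decode_unicode=True):
                yield line
        except (requests.exceptions.Timeout,
                requests.exceptions.ConnectionError):
            pass  # server went quiet; stop reading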

Python 3.6 Downloading .csv files from finance.yahoo.com using requests module

I was trying to download a .csv file from this url for the history of a stock. Here's my code:
import requests
r = requests.get("https://query1.finance.yahoo.com/v7/finance/download/CHOLAFIN.BO?period1=1514562437&period2=1517240837&interval=1d&events=history&crumb=JaCfCutLNr7")
file = open(r"history_of_stock.csv", 'w')
file.write(r.text)
file.close()
But when I opened the file history_of_stock.csv, this is what I found:
{
    "finance": {
        "error": {
            "code": "Unauthorized",
            "description": "Invalid cookie"
        }
    }
}
I couldn't find anything that could fix my problem. I found this thread in which someone has the same problem except that it is in C#: C# Download price data csv file from https instead of http
To complement the earlier answer and provide concrete, complete code, I wrote a script which accomplishes the task of getting historical stock prices from Yahoo Finance. I tried to write it as simply as possible. To summarize: when you use requests to fetch a URL, in many cases you don't need to worry about crumbs or cookies. With Yahoo Finance, however, you need to get both the crumb and the cookies. Once you have the cookies, you are good to go! Make sure to set a timeout on the requests.get call.
import re
import requests
import sys

symbol = sys.argv[-1]
start_date = '1442203200'  # start date timestamp
end_date = '1531800000'    # end date timestamp
crumble_link = 'https://finance.yahoo.com/quote/{0}/history?p={0}'
crumble_regex = r'CrumbStore":{"crumb":"(.*?)"}'
quote_link = ('https://query1.finance.yahoo.com/v7/finance/download/'
              '{}?period1={}&period2={}&interval=1d&events=history&crumb={}')

session = requests.Session()
response = session.get(crumble_link.format(symbol), timeout=5)

# get the crumb from the history page
match = re.search(crumble_regex, str(response.content))
crumb = match.group(1)

# the cookies set by the first request are now stored on the session
url = quote_link.format(symbol, start_date, end_date, crumb)
r = requests.get(url, cookies=session.cookies.get_dict(), timeout=5, stream=True)

filename = '{}.csv'.format(symbol)
with open(filename, 'w') as f:
    f.write(r.text)
There was a service for exactly this, but it was discontinued.
You can still do what you intend, but first you need to get a cookie. On this post there is an example of how to do it.
Basically, you first make a "useless" request to get the cookie, and later, with that cookie in place, you can query whatever you actually need.
There's also a post about another service which might make your life easier.
There's also a Python module to work around this inconvenience, and code showing how to do it without it.
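In requests terms, the two-step dance described above looks roughly like this; the crumb still has to be scraped as in the script above, so it stays elided as "...":
import requests

s = requests.Session()
# the "useless" request: Yahoo sets its cookies on this response
s.get('https://finance.yahoo.com/quote/CHOLAFIN.BO/history?p=CHOLAFIN.BO', timeout=5)
# subsequent requests on the same Session carry those cookies automatically
r = s.get('https://query1.finance.yahoo.com/v7/finance/download/CHOLAFIN.BO'
          '?period1=1514562437&period2=1517240837&interval=1d&events=history'
          '&crumb=...', timeout=5)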

Facebook login - Python 3 Requests module

Why does this script still bring me to the main page (not logged in)?
Assume the email and password are valid:
import requests
par = {'email':'addddd','Pass':'ggdddssd'}
r = requests.post('https://www.facebook.com',data=par)
print (r.text)
Or even trying to search with the search bar on YouTube:
<input id="masthead-search-term" name="search_query" value="" type="text" tabindex="1"title="Rechercher">
import requests
par = {'search_query':'good_songs_on_youtube'}
r = requests.post('https://www.youtube.com',data=par)
print (r.text)
it doesn't perform any search. Any ideas why?
You won't be able to login like this for two reasons:
When you post an HTML form - you should use the form's "action" as the url. For example:
import requests
url = 'https://www.facebook.com/login.php?login_attempt=1' #the form's action url
par = {"email": "********", "password": "********"} # the forms parameters
r = requests.post(url, data=par)
print (r.content) # you can't use r.text because of encoding
In your case, this is also not good enough because FB requires cookies; you might be able to program your way around it, but it won't be easy or straightforward.
So, how can you login (programmatically) to FB and post queries ?
You can install and use their SDK, and there are also open source projects such as this one that uses FB API and supports oauth as well.
You will have to go to https://developers.facebook.com/tools/explorer/ and get your API token in order to use it with the oauth authentication.
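As a sketch of what the SDK route looks like, assuming you've installed the facebook-sdk package and grabbed a token from the Graph API Explorer:
import facebook  # pip install facebook-sdk

token = "..."  # your access token from the Graph API Explorer
graph = facebook.GraphAPI(access_token=token)
me = graph.get_object('me')  # the authenticated user's profile
print(me['name'])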
For YouTube, you should use a GET request instead of POST to get a valid response. Here's a demo using your search query "good_songs", showing how to fetch the results:
import requests
from lxml import html

url = 'https://www.youtube.com/results?search_query='
search = 'good_songs'
response = requests.get(url + search).text
tree = html.fromstring(response)
for item in tree.xpath("//a[contains(concat(' ', @class, ' '), ' yt-uix-tile-link ')]/text()"):
    print(item)
