Python3 http request close connection when nothing is sent - python-3.x

I have an API that works as follows:
You make a POST request and it returns "n" lines of data:{json}
It keeps the connection open for at least 300 seconds without sending anything.
As this is very slow, I want to find a way to close the connection when it is not sending anything, or after a timer.

So, yes, it was easier than I thought. I'm going to paste my code here, using the http.client library:
import http.client
import json

def asyncCall(url, data=None, timeout=300):
    # IP is assumed to hold the API server's host
    conn = http.client.HTTPConnection(IP, timeout=timeout)
    conn.request("POST", url, bytes(json.dumps(data), encoding="utf-8"))
    r1 = conn.getresponse()
    while not r1.closed:
        l = r1.readline().decode("utf-8")
        yield l
This way, it can pass each line of data to a callback (that runs in a separate process) and close the connection after the timeout.
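For illustration only, a rough sketch of how that generator might be consumed, handing each line to a callback in a separate process; handle_line, /stream and the payload are made-up placeholders, and the socket.timeout handling is an assumption about how the read will fail once the server goes quiet:

import socket
from multiprocessing import Process

def handle_line(line):
    # hypothetical callback: process one "data:{json}" line
    print(line)

def consume(url, data):
    try:
        for line in asyncCall(url, data, timeout=30):
            if line:
                Process(target=handle_line, args=(line,)).start()
    except socket.timeout:
        # the server stayed silent longer than the timeout; stop reading
        pass

if __name__ == "__main__":
    consume("/stream", {"query": "example"})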

Related

How to run a while loop to run a REST API call until no more results come back in Python

I'm writing a short Python program to request a JSON file using a Rest API call. The API limits me to a relatively small results set (50 or so) and I need to retrieve several thousand result sets. I've implemented a while loop to achieve this and it's working fairly well but I can't figure out the logic for 'continuing the while loop' until there are no more results to retrieve. Right now I've implemented a hard number value but would like to replace it with a conditional that stops the loop if no more results come back. The 'offset' field is the parameter that the API forces you to use to specify which set of results you want in your 50. My logic looks something like...
import requests
import json
from time import sleep

url = "https://someurl"
offsetValue = 0
headers = {
    "Accept": "application/json"
}

while offsetValue <= 1000:
    # rebuild PARAMS each iteration so the updated offset is actually sent
    PARAMS = {'limit': 50, 'offset': offsetValue}
    response = requests.request(
        "GET",
        url,
        headers=headers,
        params=PARAMS
    )
    testfile = open("testfile.txt", "a")
    testfile.write(json.dumps(json.loads(response.text), sort_keys=True, indent=4, separators=(",", ": ")))
    testfile.close()
    offsetValue = offsetValue + 1
    sleep(1)
So I want to change the conditional the controls the while loop from a fixed number to a check to see if the result set for the getRequest is empty. Hopefully this makes sense.
Your loop can be while True. After each fetch, convert the payload to a dict. If the number of results is 0, break.
Depending on how the API works, there may be other signals that there is nothing more to fetch (for example, a particular HTTP error rather than the result count); you'll have to discover the API's logic for that.
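As a rough sketch of that idea, assuming the API returns a JSON body whose "results" key holds the list of items (the key name and the offset step of 50 are assumptions about your API):

import requests

url = "https://someurl"
headers = {"Accept": "application/json"}
offsetValue = 0

while True:
    params = {"limit": 50, "offset": offsetValue}
    response = requests.get(url, headers=headers, params=params)
    results = response.json().get("results", [])  # key name is an assumption
    if not results:
        break  # empty result set: nothing more to fetch
    # ... append the results to the output file here ...
    offsetValue += 50  # advance by the page size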

How to check if there is no message in RabbitMQ with Pika and Python

I read messages from RabbitMQ with the pika Python library. Reading messages in a loop is done by:
connection = rpc.connect()
channel = connection.channel()
channel.basic_consume(rpc.consumeCallback, queue=FromQueue, no_ack=Ack)
channel.start_consuming()
This works fine.
But I also have the need to read one single message, which I do with:
method, properties, body = channel.basic_get(queue=FromQueue)
rpc.consumeCallback(Channel=channel,Method=method, Properties=properties,Body=body)
But when there is no message in the queue, the script crashes. How do I implement the get_empty() method described here?
I solved it temporarily with a check on the response, like:
method, properties, body = channel.basic_get(queue=FromQueue)
if method is None:
    # queue is empty
    pass
You can check for an empty body like this:
def callback(ch, method, properties, body):
    decodeBodyInfo = body.decode('utf-8')
    if decodeBodyInfo != '':
        cacheResult = decodeBodyInfo
        ch.stop_consuming()
It's that simple and easy to use :D
In case you're using the channel.consume generator in a for loop, you can set the inactivity_timeout parameter.
From the pika docs,
:param float inactivity_timeout: if a number is given (in seconds), will cause the
method to yield (None, None, None) after the given period of inactivity; this
permits for pseudo-regular maintenance activities to be carried out by the user
while waiting for messages to arrive. If None is given (default), then the method
blocks until the next event arrives. NOTE that timing granularity is limited by the
timer resolution of the underlying implementation. NEW in pika 0.10.0.
so changing your code to something like this might help
for method_frame, properties, body in channel.consume(queue, inactivity_timeout=120):
    # break out of the loop after 2 min of inactivity (no new item fetched)
    if method_frame is None:
        break
Don't forget to properly clean up the channel and the connection after exiting the loop.
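A minimal cleanup sketch along those lines, assuming the queue and connection objects from the question (channel.cancel() is pika's way to cancel the consumer generator created by channel.consume):

for method_frame, properties, body in channel.consume(FromQueue, inactivity_timeout=120):
    if method_frame is None:
        break  # no message for 2 minutes
    # ... process body here ...
    channel.basic_ack(method_frame.delivery_tag)

# cancel the consumer generator, then close the channel and connection
channel.cancel()
channel.close()
connection.close()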

Python3 send requests cookies from previous call

I want to resend the cookies initialized by the first call with the second call, so that the session does not change. This is not working.
Why? And how can I solve it? Sorry, I'm new to Python.
https_url = "www.google.com"
r = requests.get(https_url)
print(r.cookies.get_dict())
#cookie = {id: abc}
response = requests.get(https_url, cookies=response.cookies.get_dict())
print(response.cookies.get_dict())
#cookie = {id: def}
You aren't necessarily doing it wrong with the way you're passing the cookies from the last response to the next request, except that:
"www.google.com" is not a valid URL.
Even if you had used http://www.google.com as the URL, the cookies returned by Google in such a GET request aren't meant to be session cookies and won't be persistent across requests.
You used the variable r to receive the returning value from the first requests.get, and yet you used response.cookies when you make the second requests.get. A possible typo?
If all of the above are due to your trying to mock up your real code, you should really consider using requests.Session to avoid micro-managing session cookies.
Please read requests.Session's documentation for more details.
import requests

https_url = "https://www.google.com/"  # assumed; the question's "www.google.com" lacks a scheme

with requests.Session() as s:
    r = s.get(https_url)
    # cookies from the first s.get are automatically passed on to the second s.get
    r = s.get(https_url)
    ...

Python asyncio Protocol behaviour with multiple clients and infinite loop

I'm having difficulty understanding the behaviour of my altered echo server, which attempts to take advantage of Python 3's asyncio module.
Essentially I have an infinite loop (let's say I want to stream some data from the server to the client indefinitely while the connection is open), e.g. MyServer.py:
#! /usr/bin/python3
import asyncio
import os
import time

class MyProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        peername = transport.get_extra_info('peername')
        print('Connection from {}'.format(peername))
        self.transport = transport

    def connection_lost(self, exc):
        asyncio.get_event_loop().stop()

    def data_received(self, data):
        i = 0
        while True:
            self.transport.write(b'>> %i' % i)
            time.sleep(2)
            i += 1

loop = asyncio.get_event_loop()
coro = loop.create_server(MyProtocol,
                          os.environ.get('MY_SERVICE_ADDRESS', 'localhost'),
                          os.environ.get('MY_SERVICE_PORT', 8100))
server = loop.run_until_complete(coro)
try:
    loop.run_forever()
except:
    loop.run_until_complete(server.wait_closed())
finally:
    loop.close()
Next when I connect with nc ::1 8100 and send some text (e.g. "testing") I get the following:
user#machine$ nc ::1 8100
*** Connection from('::1', 58503, 0, 0) ***
testing
>> 1
>> 2
>> 3
^C
Now when I attempt to connect using nc again, I do not get any welcome message and after I attempt to send some new text to the server I get an endless stream of the following error:
user#machine$ nc ::1 8100
Is there anybody out there?
socket.send() raised exception
socket.send() raised exception
...
^C
Just to add salt to the wound, the socket.send() raised exception message continues to spam my terminal until I kill the python server process...
As I'm new to web technologies (been a desktop dinosaur for far too long!), I'm not sure why I'm getting the above behaviour, and I haven't got a clue how to produce the intended behaviour, which loosely looks like this:
1. server starts
2. client 1 connects to server
3. server sends welcome message to client
4. client 1 sends an arbitrary message
5. server sends messages back to client 1 for as long as the client is connected
6. client 1 disconnects (let's say the cable is pulled out)
7. client 2 connects to server
8. repeat steps 3-6 for client 2
Any enlightenment would be extremely welcome!
There are multiple problems with the code.
First and foremost, data_received never returns. At the transport/protocol level, asyncio programming is single-threaded and callback-based. Application code is scattered across callbacks like data_received, and the event loop runs the show, monitoring file descriptors and invoking the callbacks as needed. Each callback is only allowed to perform a short calculation, invoke methods on transport, and arrange for further callbacks to be executed. What the callback cannot do is take a lot of time to complete or block waiting for something. A while loop that never exits is especially bad because it doesn't allow the event loop to run at all.
This is why the code only spits out exceptions once the client disconnects: connection_lost is never called. It's supposed to be called by the event loop, and the never-returning data_received is not giving the event loop a chance to resume. With the event loop blocked, the program is unable to respond to other clients, and data_received keeps trying to send data to the disconnected client, and logs its failure to do so.
The correct way to express the idea can look like this:
def data_received(self, data):
    self.i = 0
    loop.call_soon(self.write_to_client)

def write_to_client(self):
    self.transport.write(b'>> %i' % self.i)
    self.i += 1
    loop.call_later(2, self.write_to_client)
Note how both data_received and write_to_client do very little work and quickly return. No calls to time.sleep(), and definitely no infinite loops - the "loop" is hidden inside the kind-of-recursive call to write_to_client.
This change reveals the second problem in the code. Its MyProtocol.connection_lost stops the whole event loop and exits the program. This renders the program unable to respond to the second client. The fix could be to replace loop.stop() with setting a flag in connection_lost:
def data_received(self, data):
    self._done = False
    self.i = 0
    loop.call_soon(self.write_to_client)

def write_to_client(self):
    if self._done:
        return
    self.transport.write(b'>> %i' % self.i)
    self.i += 1
    loop.call_later(2, self.write_to_client)

def connection_lost(self, exc):
    self._done = True
This allows multiple clients to connect.
Unrelated to the above issues, the callback-based code is a bit tiresome to write, especially when taking into account complicated code paths and exception handling. (Imagine trying to express nested loops with callbacks, or propagating an exception occurring inside a deeply embedded callback.) asyncio supports coroutine-based streams as an alternative to callback-based transports and protocols.
Coroutines allow writing natural-looking code that contains loops and looks like it contains blocking calls, which under the hood are converted into suspension points that enable the event loop to resume. Using streams the code from the question would look like this:
async def talk_to_client(reader, writer):
    peername = writer.get_extra_info('peername')
    print('Connection from {}'.format(peername))
    data = await reader.read(1024)
    i = 0
    while True:
        writer.write(b'>> %i' % i)
        await writer.drain()
        await asyncio.sleep(2)
        i += 1

loop = asyncio.get_event_loop()
coro = asyncio.start_server(talk_to_client,
                            os.environ.get('MY_SERVICE_ADDRESS', 'localhost'),
                            os.environ.get('MY_SERVICE_PORT', 8100))
server = loop.run_until_complete(coro)
loop.run_forever()
talk_to_client looks very much like the original implementation of data_received, but without the drawbacks. At each point where it uses await the event loop is resumed if the data is not available. time.sleep(n) is replaced with await asyncio.sleep(n) which does the equivalent of loop.call_later(n, <resume current coroutine>). Awaiting writer.drain() ensures that the coroutine pauses when the peer cannot process the output it gets, and that it raises an exception when the peer has disconnected.
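If you want to test the stream-based server without nc, a minimal client sketch using asyncio streams could look like this (host/port mirror the server defaults above; this is an illustration, not part of the original answer):

import asyncio

async def client():
    reader, writer = await asyncio.open_connection('localhost', 8100)
    writer.write(b'testing')       # any message triggers the server's send loop
    await writer.drain()
    for _ in range(3):             # read a few of the ">> i" replies
        data = await reader.read(100)
        print(data.decode())
    writer.close()

asyncio.get_event_loop().run_until_complete(client())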

Perform handshake only once

I use urllib.request.urlopen to fetch data from a server over HTTPS. The function is called a lot against the same server, often for the exact same URL. However, unlike standard web browsers, which perform a handshake on the initial request and then reuse the connection, separate urlopen(url) calls result in a new handshake for each call. This is very slow on high-latency networks. Is there a way to perform the handshake once and reuse the existing connection for further communication?
I cannot modify server code to utilise sockets or other protocols.
You are opening a new connection for every request. To reuse the connection, you either need to use http.client directly:
>>> import http.client
>>> conn = http.client.HTTPSConnection("www.python.org")
>>> conn.request("GET", "/")
>>> r1 = conn.getresponse()
>>> print(r1.status, r1.reason)
200 OK
>>> data1 = r1.read() # This will return entire content.
>>> # The following example demonstrates reading data in chunks.
>>> conn.request("GET", "/")
>>> r1 = conn.getresponse()
>>> while not r1.closed:
... print(r1.read(200)) # 200 bytes
b'<!doctype html>\n<!--[if"...
...
>>> # Example of an invalid request
>>> conn.request("GET", "/parrot.spam")
>>> r2 = conn.getresponse()
>>> print(r2.status, r2.reason)
404 Not Found
>>> data2 = r2.read()
>>> conn.close()
Or use the recommended Requests package, which has Session objects that make use of persistent connections (via urllib3).
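As a rough sketch of the Session approach (https://www.python.org is just a placeholder host), a Session keeps the underlying connection in a pool, so the TLS handshake happens only once per host:

import requests

with requests.Session() as s:
    # the first request performs the TLS handshake and caches the connection
    r1 = s.get("https://www.python.org/")
    # further requests to the same host reuse that connection
    r2 = s.get("https://www.python.org/about/")
    print(r1.status_code, r2.status_code)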
HTTP(S) is stateless, so by default a new socket is opened to the server for each request. There is no way around that with plain urlopen calls, but I searched around for how to open a persistent connection and found the following, which mentions urllib2 and may help:
Persistent HTTPS Connections in Python
