Python Eve: gracefully exit from a callback - python-3.x

I'm wondering if it's possible to update an item without completely processing the PATCH request.
What I'm trying to do is to randomly generate and insert a value into the db when a user sends a PATCH request to the accounts/ endpoint. If I don't exit from the PATCH request early, I get an error because the endpoint expects a value that I cannot supply in advance, since it will be randomly generated.
def pre_accounts_patch_callback(request, lookup):
    if not my_func():
        abort(401)
    else:
        return HTTP 201 OK  # pseudocode: stop processing and reply 201 OK
What can I do?

Not sure I get what you want to achieve; however, keep in mind that you can actually update lookup within your callback, so the API will get the updated version back and process it, with validation and all.
from eve import Eve  # missing import added for completeness
import random

def pre_accounts_patch_callback(request, lookup):
    lookup['random_field'] = random.randint(0, 10)

app = Eve()
app.on_pre_PATCH_accounts += pre_accounts_patch_callback

if __name__ == '__main__':
    app.run()

Related

Can You Retry/Loop inside a Try/Except?

I'm trying to understand whether it's possible to put a loop inside a try/except call, or if I'd need to restructure the code to use functions. Long story short, after spending a few hours learning Python and BeautifulSoup, I managed to frankenstein some code together to scrape a list of URLs, pull that data out to CSV (and now update it to a MySQL db). The code is now working as planned, except that I occasionally run into a 10054 error (connection reset by peer), either because my VPN hiccups or possibly because the source host server occasionally bounces me (I have a 30-second delay in my loop, but it still kicks me out on occasion).
I get the general idea of the try/except structure, but I'm not quite sure how I would (or if I could) loop inside it to try again. My base code to grab the URL, clean it, and parse the table I need looks like this:
for url in contents:
    print('Processing record', (num+1), 'of', len(contents))
    if url:
        print('Retrieving data from ', url[0])
        html = requests.get(url[0]).text
        soup = BeautifulSoup(html, 'html.parser')
        for span in soup('span'):
            span.decompose()
        trs = soup.select('div#collapseOne tr')
        if trs:
            print('Processing')
            for t in trs:
                for header, value in zip(t.select('td')[0], t.select('td:nth-child(2)')):
                    if num == 0:
                        headers.append(' '.join(header.split()))
                    values.append(re.sub(' +', ' ', value.get_text(' ', strip=True)))
After that it's just processing the data to CSV and running an update SQL statement.
What I'd like to do is: if the HTML request fails, wait 30 seconds, try the request again, and then continue processing; and if the retry fails X number of times, go ahead and exit the script (assuming at that point I have a full connection failure).
Is it possible to do something like that inline, or would I need to make the request statement into a function and set up a loop to call it? I have to admit I'm not familiar with how Python handles function returns yet.
You can add an inner loop for the retries and put your try/except block inside it; here is a sketch of what that would look like. You could put all of this into a function and wrap that function call in its own try/except block to catch other errors that cause the loop to exit; a sketch of that follows after the code below.
Looking at the requests exception hierarchy, Timeout covers multiple recoverable exceptions and is a good starting point for everything you may want to catch. Other things like SSLError aren't going to get better just because you retry, so skip them. You can go through the list to see what is reasonable for you.
import itertools
import time
import requests

# requests exceptions at
# https://requests.readthedocs.io/en/master/_modules/requests/exceptions/
for url in contents:
    print('Processing record', (num+1), 'of', len(contents))
    if url:
        print('Retrieving data from ', url[0])
        retry_count = itertools.count()
        # loop for retries
        while True:
            try:
                # get with timeout and convert http errors to exceptions
                resp = requests.get(url[0], timeout=10)
                resp.raise_for_status()
            # the things you want to recover from
            except requests.Timeout as e:
                if next(retry_count) <= 5:
                    print("timeout, wait and retry:", e)
                    time.sleep(30)
                    continue
                else:
                    print("timeout, exiting")
                    raise  # reraise exception to exit
            except Exception as e:
                print("unrecoverable error", e)
                raise
            break
        html = resp.text
        # etc…
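As mentioned above, the whole retry loop can be pulled into a function, and the calling code can then wrap that call in its own try/except to decide whether to skip the record or exit the script. A minimal sketch of that idea (the name fetch_with_retries and its parameters are illustrative, not from the original answer):

import time
import requests

def fetch_with_retries(url, retries=5, delay=30, timeout=10):
    # retry transient timeouts; let anything unrecoverable propagate
    for attempt in range(retries + 1):
        try:
            resp = requests.get(url, timeout=timeout)
            resp.raise_for_status()
            return resp
        except requests.Timeout:
            if attempt == retries:
                raise  # retries exhausted, let the caller handle it
            print("timeout, wait and retry")
            time.sleep(delay)

# usage inside the scraping loop:
# html = fetch_with_retries(url[0]).text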
I've put together a little example myself to illustrate this, and yes, you can put loops inside try/except blocks.
from sys import exit

def example_func():
    try:
        while True:
            num = input("> ")
            try:
                int(num)
                if num == "10":
                    print("Let's go!")
                else:
                    print("Not 10")
            except ValueError:
                exit(0)
    except:
        exit(0)

example_func()
This is a fairly simple program that takes input; if it's 10, it says "Let's go!", otherwise it tells you it's not 10 (and if it's not a valid value at all, it just kicks you out).
Notice that inside the while loop I put a try/except block, taking the necessary indentation into account. You can take this program as a model and use it in your favor.

How to get the processed results from dramatiq python?

import dramatiq
from dramatiq.brokers.redis import RedisBroker
from dramatiq.results import Results
from dramatiq.results.backends import RedisBackend

broker = RedisBroker(host="127.0.0.1", port=6379)
broker.declare_queue("default")
dramatiq.set_broker(broker)
# backend = RedisBackend()
# broker.add_middleware(Results(backend=backend))

@dramatiq.actor()
def print_words(text):
    print('This is ' + text)

print_words('sync')
a = print_words.send('async')
a.get_results()
I was checking out alternatives to Celery and found Dramatiq. I'm just getting started with Dramatiq and I'm unable to retrieve results. I even tried setting the backend and 'save_results' to True, but I always get AttributeError: 'Message' object has no attribute 'get_results'.
Any idea how to get the result?
You were on the right track with adding a result backend. The way to instruct an actor to store results is store_results=True, not save_results, and the method to retrieve results is get_result(), not get_results().
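Putting both fixes together, a minimal sketch of a working setup (assuming a Redis result backend and a separately running dramatiq worker; this is illustrative, not the exact original code):

import dramatiq
from dramatiq.brokers.redis import RedisBroker
from dramatiq.results import Results
from dramatiq.results.backends import RedisBackend

backend = RedisBackend()
broker = RedisBroker(host="127.0.0.1", port=6379)
broker.add_middleware(Results(backend=backend))
dramatiq.set_broker(broker)

@dramatiq.actor(store_results=True)
def print_words(text):
    print('This is ' + text)
    return text  # the actor must return a value for the backend to store

a = print_words.send('async')
# block until a worker has executed the message and stored the result
print(a.get_result(backend=backend, block=True))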
When you call get_result() with block=False, you have to wait until the worker has stored the result, like this:
import time

while True:
    try:
        res = a.get_result(backend=backend)
        break
    except dramatiq.results.errors.ResultMissing:
        # e.g. retry up to N times before giving up
        time.sleep(1)

print(res)

Flask: delete file from server after send_file() is completed

I have a Flask backend which generates an image based on some user input, and sends this image to the client side using the send_file() function of Flask.
This is the Python server code:
@app.route('/image', methods=['POST'])
def generate_image():
    cont = request.get_json()
    t = cont['text']
    print(cont['text'])
    name = pic.create_image(t)  # a different function which generates the image
    time.sleep(0.5)
    return send_file(f"{name}.png", as_attachment=True, mimetype="image/png")
I want to delete this image from the server after it has been sent to the client.
How do I achieve it?
OK, I solved it. I used @app.after_request with an if condition to check the endpoint, and then deleted the image:
@app.after_request
def delete_image(response):
    global image_name  # assumed to be set inside generate_image()
    if request.endpoint == "generate_image":  # the endpoint at which the image gets generated
        os.remove(image_name)
    return response
Another way would be to include the decorator in the route itself; that way you do not need to check for the endpoint. Just import after_this_request from flask:
from flask import after_this_request

@app.route('/image', methods=['POST'])
def generate_image():
    @after_this_request
    def delete_image(response):
        try:
            os.remove(image_name)
        except Exception as ex:
            print(ex)
        return response

    cont = request.get_json()
    t = cont['text']
    print(cont['text'])
    name = pic.create_image(t)  # a different function which generates the image
    time.sleep(0.5)
    return send_file(f"{name}.png", as_attachment=True, mimetype="image/png")
You could also have a separate delete_image() function and call it at the bottom of the generate_image() function.

Asyncio shared object at the same address does not hold the same values

Okay, so I created a DataStream object which is just a wrapper class around asyncio.Queue. I am passing it around all over the place, and everything works fine up until the following functions. I am calling ensure_future to run two infinite loops: one that replicates the data in one DataStream object, and one that sends data to a websocket. Here is that code:
def start(self):
    # make sure that we set the event loop before we run our async requests
    print("Starting WebsocketProducer on ", self.host, self.port)
    RUNTIME_LOGGER.info(
        "Starting WebsocketProducer on %s:%i", self.host, self.port)
    # get the event loop and add a task to it
    asyncio.set_event_loop(self.loop)
    asyncio.get_event_loop().create_task(self._mirror_stream(self.data_stream))
    asyncio.ensure_future(self._serve(self.ssl_context))
And here is the method that is failing with the error 'Task was destroyed but it is pending!'. Keep in mind that if I do not include the lines with data_stream.get(), the function runs fine. I made sure the objects in both locations have the same memory address and the same id() value. If I print the data that comes from await self.data_stream.get(), I get the correct data; however, after that the task seems to just return and break. Here is the code:
async def _mirror_stream(self):
    while True:
        stream_length = self.data_stream.length
        try:
            if stream_length > 1:
                for _ in range(0, stream_length):
                    data = await self.data_stream.get()
            else:
                data = await self.data_stream.get()
        except Exception as e:
            print(str(e))
        # if the data is null, keep the last known value
        if self._is_json_serializable(data) and data is not None:
            self.payload = json.dumps(data)
        else:
            RUNTIME_LOGGER.warning(
                "Mirroring stream encountered a Null payload in WebsocketProducer!")
        await asyncio.sleep(self.poll_rate)
The issue has been resolved by implementing my own async Queue built on a normal queue.Queue object. For some reason the application would only work if I awaited queue.get(), even though it wasn't an asyncio.Queue object... Not entirely sure why this behavior occurred, but the application is running well and still performing as if the Queue were from the asyncio lib. Thanks to those who looked!
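For anyone curious what that workaround might look like, here is a rough sketch of an awaitable wrapper around a plain queue.Queue (my reconstruction under stated assumptions, not the author's actual code):

import asyncio
import queue

class ThreadSafeDataStream:
    # hypothetical wrapper: a plain queue.Queue that coroutines can await
    def __init__(self):
        self._queue = queue.Queue()

    @property
    def length(self):
        return self._queue.qsize()

    def put(self, item):
        self._queue.put(item)  # safe to call from any thread

    async def get(self, poll_rate=0.05):
        # poll with get_nowait() so the event loop is never blocked
        while True:
            try:
                return self._queue.get_nowait()
            except queue.Empty:
                await asyncio.sleep(poll_rate)

Unlike asyncio.Queue, a plain queue.Queue is thread-safe, which may be why this kind of wrapper behaves more predictably when producers run outside the event loop.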

Get a user's keyboard input that was requested by another function

I am using a Python package for database management. The provided class has a delete() method that deletes a record from the database. Before deleting, it asks the user to verify the operation from the console, e.g. Proceed? [yes, No]:
My function needs to perform other actions depending on whether the user chose to delete the record. Can I get the user's input that was requested by the package's function?
Toy example:
def ModuleFunc():
    while True:
        a = input('Proceed? [yes, No]:')
        if a in ['yes', 'No']:
            # perform some actions under the hood
            return
This function will wait for one of the two responses and return None once it gets either. After calling this function, can I determine the user's response (without modifying the function)? I think modifying the package's source code is not a good idea in general.
Why not just patch the class at runtime? Say you had a file ./lib/db.py defining a class DB like this:
class DB:
    def __init__(self):
        pass

    def confirm(self, msg):
        a = input(msg + ' [Y, N]:')
        if a == 'Y':
            return True
        return False

    def delete(self):
        if self.confirm('Delete?'):
            print('Deleted!')
        return
Then in main.py you could do:
from lib.db import DB

def newDelete(self):
    if self.confirm('Delete?'):
        print('Do some more stuff!')
        print('Deleted!')
    return

DB.delete = newDelete

test = DB()
test.delete()
I would save key events somewhere (a file, or memory) with something like a keylogger; then you would be able to reuse the last one.
However, if you can modify the package and redistribute it, that would be easier.
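A lighter-weight variant of that idea, sketched here as an assumption rather than as part of the original suggestion: instead of a keylogger, wrap the built-in input() at runtime so the last response is recorded in memory.

import builtins

last_response = None
_original_input = builtins.input

def recording_input(prompt=""):
    # remember the last answer typed at any input() prompt
    global last_response
    last_response = _original_input(prompt)
    return last_response

builtins.input = recording_input

# after the package's function runs, last_response holds what the user typed

This works because the package looks up input through builtins at call time, so the wrapper is picked up without touching the package's source.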