Flask: delete file from server after send_file() is completed - python-3.x

I have a Flask backend which generates an image based on some user input, and sends this image to the client side using the send_file() function of Flask.
This is the Python server code:
@app.route('/image', methods=['POST'])
def generate_image():
    cont = request.get_json()
    t = cont['text']
    print(cont['text'])
    name = pic.create_image(t)  # a separate function which generates the image
    time.sleep(0.5)
    return send_file(f"{name}.png", as_attachment=True, mimetype="image/png")
I want to delete this image from the server after it has been sent to the client.
How do I achieve it?

OK, I solved it. I used @app.after_request with an if condition to check the endpoint, and then deleted the image:
@app.after_request
def delete_image(response):
    global image_name  # set by generate_image() when the file is created
    if request.endpoint == "generate_image":  # the endpoint at which the image gets generated
        os.remove(image_name)
    return response

Another way would be to include the decorator inside the route itself, so you do not need to check the endpoint. Just import after_this_request from flask:
from flask import after_this_request

@app.route('/image', methods=['POST'])
def generate_image():
    @after_this_request
    def delete_image(response):
        try:
            os.remove(image_name)
        except Exception as ex:
            print(ex)
        return response

    cont = request.get_json()
    t = cont['text']
    print(cont['text'])
    name = pic.create_image(t)  # a separate function which generates the image
    time.sleep(0.5)
    return send_file(f"{name}.png", as_attachment=True, mimetype="image/png")

You could also have a separate delete_image() function and call it at the bottom of the generate_image() function, as in the sketch below.
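Since nothing runs after the return statement, one way to make that work is to read the finished image into memory, delete the file, and then send the in-memory copy. A minimal sketch, reusing app, request, and pic.create_image() from the question (download_name is the Flask 2.x parameter; older versions call it attachment_filename):

import io
import os
from flask import request, send_file

def delete_image(path):
    os.remove(path)

@app.route('/image', methods=['POST'])
def generate_image():
    cont = request.get_json()
    name = pic.create_image(cont['text'])
    path = f"{name}.png"
    # Load the finished image into memory so the file on disk
    # can be removed before the response is returned.
    with open(path, 'rb') as fh:
        buffer = io.BytesIO(fh.read())
    delete_image(path)
    return send_file(buffer, as_attachment=True,
                     download_name=path, mimetype="image/png")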

Related

How to get the processed results from dramatiq python?

import dramatiq
from dramatiq.brokers.redis import RedisBroker
from dramatiq.results import Results
from dramatiq.results.backends import RedisBackend

broker = RedisBroker(host="127.0.0.1", port=6379)
broker.declare_queue("default")
dramatiq.set_broker(broker)
# backend = RedisBackend()
# broker.add_middleware(Results(backend=backend))

@dramatiq.actor()
def print_words(text):
    print('This is ' + text)

print_words('sync')
a = print_words.send('async')
a.get_results()
I was checking alternatives to Celery and found Dramatiq. I'm just getting started with Dramatiq and I'm unable to retrieve results. I even tried setting the backend and 'save_results' to True. I always get this error: AttributeError: 'Message' object has no attribute 'get_results'.
Any idea on how to get the result?
You were on the right track with adding a result backend. The way to instruct an actor to store results is store_results=True, not save_results, and the method to retrieve results is get_result(), not get_results().
When you call get_result() with block=False, you need to wait until the worker has stored the result, for example:
while True:
    try:
        res = a.get_result(backend=backend)
        break
    except dramatiq.results.errors.ResultMissing:
        # the result isn't ready yet; retry (ideally only N times)
        time.sleep(1)
print(res)
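Putting those pieces together, a minimal sketch of the corrected setup might look like this (the return value is added so there is actually a result to store; the Redis host/port and the blocking timeout are assumptions):

import dramatiq
from dramatiq.brokers.redis import RedisBroker
from dramatiq.results import Results
from dramatiq.results.backends import RedisBackend

backend = RedisBackend()
broker = RedisBroker(host="127.0.0.1", port=6379)
broker.add_middleware(Results(backend=backend))
dramatiq.set_broker(broker)

@dramatiq.actor(store_results=True)
def print_words(text):
    print('This is ' + text)
    return text  # the actor must return something for a result to be stored

message = print_words.send('async')
# block=True waits for the worker to store the result; timeout is in milliseconds
print(message.get_result(block=True, timeout=10_000))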

Tornado HTTPServer that adds objects to a queue upon receiving a POST request

I want to create a web server, that automatically handles "orders" when receiving a POST request.
My code so far looks like this:
from json import loads
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from tornado.web import Application, url, RequestHandler

order_list = list()

class MainHandler(RequestHandler):
    def get(self):
        pass

    def post(self):
        if self.__authenticate_incoming_request():
            payload = loads(self.request.body)
            order_list.append(payload)
        else:
            pass

    def __authenticate_incoming_request(self) -> bool:
        # some request authentication code
        return True

def start_server():
    application = Application([
        url(r"/", MainHandler)
    ])
    server = HTTPServer(application)
    server.listen(8080)
    IOLoop.current().start()

if __name__ == '__main__':
    start_server()
Here is what I want to achieve:
Receive a POST request with information about incoming "orders"
Perform an action A n times, based on a value defined in the request body (concurrently, if possible)
Previously, to perform action A n times, I used a concurrent.futures.ThreadPoolExecutor, but I am not sure how I should handle this correctly with a web server running in parallel.
My idea was something like this:
start_server()
tpe = ThreadPoolExecutor(max_workers=10)
while True:
    if order_list:
        new_order = order_list.pop(0)
        tpe.submit(my_action, new_order)  # my_action is my desired function
    sleep(5)
Now this piece of code is of course blocking, and I was hoping that the web server would continue running in parallel, while I am running my while-loop.
Is a setup like this possible? Do I maybe need to utilize other modules? Any help greatly appreciated!
It's not working as expected because time.sleep is a blocking function.
Instead of using a list, a while loop, and sleep to check for new items, use Tornado's queues.Queue, which lets you wait for new items asynchronously.
from tornado.queues import Queue

order_queue = Queue()
tpe = ThreadPoolExecutor(max_workers=10)

async def queue_consumer():
    # The old while-loop is now converted into a coroutine
    # and an async for loop is used instead
    async for new_order in order_queue:
        # run your function in the thread pool
        IOLoop.current().run_in_executor(tpe, my_action, new_order)

def start_server():
    # ...
    # schedule the queue_consumer coroutine before starting the loop
    IOLoop.current().spawn_callback(queue_consumer)
    IOLoop.current().start()
Put items in the queue like this:
order_queue.put(payload)
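For example, inside the POST handler from the question (a minimal sketch; put_nowait is used because the handler is synchronous, and it never blocks on an unbounded Queue):

class MainHandler(RequestHandler):
    def post(self):
        if self.__authenticate_incoming_request():
            payload = loads(self.request.body)
            # hand the order off to the consumer coroutine
            order_queue.put_nowait(payload)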

Using Flask Routing in GCP Function?

I am looking to serve multiple routes from a single GCP Cloud Function using Python. While GCP Functions actually use Flask under the hood, I can't seem to figure out how to use the Flask routing system to serve multiple routes from a single cloud function.
I was working on a very small project, so I wrote a quick router of my own, which is working well. Now that I'm using GCP Functions more, I'd like to either figure out how to use the Flask router or invest more time in my hand-rolled version and perhaps open source it, though that would seem redundant since it would be a very close copy of Flask's routing; perhaps it would be best to add it directly to Flask if this functionality doesn't exist.
Does anyone have any experience with this issue? I'm guessing I'm missing a simple function hidden somewhere in Flask, but if not, this seems like a pretty big/common problem, though I guess GCP Functions for Python is in beta for a reason?
Edit:
Abridged example of my hand rolled version that I'd like to use Flask for if possible:
router = MyRouter()

@router.add('some/path', RouteMethod.GET)
def handle_this(req):
    ...

@router.add('some/other/path', RouteMethod.POST)
def handle_that(req):
    ...

# main entry point for the cloud function
def main(request):
    return router.handle(request)
The following solution works for me:
import flask
import werkzeug.datastructures

app = flask.Flask(__name__)

@app.route('/some/path')
def handle_this():
    ...

@app.route('/some/other/path', methods=['POST'])
def handle_that():
    ...

def main(request):
    with app.app_context():
        headers = werkzeug.datastructures.Headers()
        for key, value in request.headers.items():
            headers.add(key, value)
        with app.test_request_context(method=request.method, base_url=request.base_url, path=request.path, query_string=request.query_string, headers=headers, data=request.data):
            try:
                rv = app.preprocess_request()
                if rv is None:
                    rv = app.dispatch_request()
            except Exception as e:
                rv = app.handle_user_exception(e)
            response = app.make_response(rv)
            return app.process_response(response)
Based on http://flask.pocoo.org/snippets/131/
Thanks to inspiration from Guillaume Blaquiere's article and some tweaking, I have an approach that lets me use ngrok to generate a public URL for local testing and development of Google Cloud Functions.
I have two key files, app.py and main.py.
I am using VS Code, and can now open app.py, press F5, and select "Debug the current file". Now I can set breakpoints in my function in main.py. I have the 'REST Client' extension installed, which lets me configure GET and POST calls to run against my local and ngrok URLs.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# app.py
import os
from flask import Flask, request, Response
from main import callback

app = Flask(__name__)

@app.route('/', methods=['GET', 'POST'])
def test_function():
    return callback(request)

def start_ngrok():
    from pyngrok import ngrok
    ngrok_tunnel = ngrok.connect(5000)
    print(' * Tunnel URL:', ngrok_tunnel.public_url)

if __name__ == '__main__':
    if os.environ.get('WERKZEUG_RUN_MAIN') != 'true':
        start_ngrok()
    app.run(debug=True)
#!/usr/bin/env python3
# This file main.py can be run as a Google Cloud Function and deployed with:
# gcloud functions deploy callback --runtime python38 --trigger-http --allow-unauthenticated
from flask import Response
import datetime

now = datetime.datetime.now()

def callback(request):
    if request.method == 'POST':  # this block handles only POST requests
        print(request.json)
        return Response(status=200)
    return Response(f'''
        <!doctype html><title>Hello from webhook</title>
        <body><h1>Hello!</h1><p>{now:%Y-%m-%d %H:%M}</p>
        </body></html>
        ''', status=200)
A simplified version of @rabelenda's answer that also works for me:
def main(request):
    with app.request_context(request.environ):
        try:
            rv = app.preprocess_request()
            if rv is None:
                rv = app.dispatch_request()
        except Exception as e:
            rv = app.handle_user_exception(e)
        response = app.make_response(rv)
        return app.process_response(response)
The solution by Martin worked for me until I tried calling request.get_json() in one of my routes. The end result was the response being blocked at a lower level because the data stream had already been consumed.
I came across this question looking for a solution using functions_framework in Google Cloud Run. It already sets up a Flask app, which you can get by importing current_app from flask.
from flask import current_app
app = current_app
I believe functions_framework is also used by Google Cloud Functions, so it should work there too.
Thanks to @rabelenda's answer above for inspiring my answer, which just tweaks the data/json parameters and adds support for an unhandled-exception (InternalServerError) handler:
import werkzeug.datastructures

def process_request_in_app(request, app):
    # source: https://stackoverflow.com/a/55576232/1237919
    with app.app_context():
        headers = werkzeug.datastructures.Headers()
        for key, value in request.headers.items():
            headers.add(key, value)
        data = None if request.is_json else (request.form or request.data or None)
        with app.test_request_context(method=request.method,
                                      base_url=request.base_url,
                                      path=request.path,
                                      query_string=request.query_string,
                                      headers=headers,
                                      data=data,
                                      json=request.json if request.is_json else None):
            try:
                rv = app.preprocess_request()
                if rv is None:
                    rv = app.dispatch_request()
            except Exception as e:
                try:
                    rv = app.handle_user_exception(e)
                except Exception as e:
                    # Fall back to the unhandled exception handler for InternalServerError.
                    rv = app.handle_exception(e)
            response = app.make_response(rv)
            return app.process_response(response)
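A hypothetical usage from the Cloud Function entry point, assuming the same app and route setup as in the earlier answers:

def main(request):
    # delegate routing to the Flask app defined above
    return process_request_in_app(request, app)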

How to upload an image with flask and store in couchdb?

A previous question asks how to retrieve an attachment from CouchDB and display it in a Flask application.
This question asks how to do the opposite, i.e. how an image can be uploaded using Flask and saved as a CouchDB attachment.
Take a look at the example from Flask-WTF:
from werkzeug.utils import secure_filename
from flask_wtf.file import FileField

class PhotoForm(FlaskForm):
    photo = FileField('Your photo')

@app.route('/upload/', methods=('GET', 'POST'))
def upload():
    form = PhotoForm()
    if form.validate_on_submit():
        filename = secure_filename(form.photo.data.filename)
        form.photo.data.save('uploads/' + filename)
    else:
        filename = None
    return render_template('upload.html', form=form, filename=filename)
Take a look at the FileField API docs. There you have a stream attribute giving you access to the uploaded data. Instead of using the save method as in the example, you can read the bytes from the stream, base64 encode them, and save them as an attachment in CouchDB, e.g. using put_attachment. Alternatively, the FileStorage API docs suggest you can use read() to retrieve the data.
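A minimal sketch of the upload route adapted to store the image in CouchDB, assuming the couchdb-python package, a database named 'photos', and the same Flask app and render_template import as in the example above (all assumptions, not part of the original answer). Note that Database.put_attachment() sends the raw bytes via the standalone attachment API, so manual base64 encoding is only needed if you inline the attachment in the document body:

import couchdb
from flask_wtf import FlaskForm
from flask_wtf.file import FileField
from werkzeug.utils import secure_filename

couch = couchdb.Server('http://localhost:5984/')
db = couch['photos']  # hypothetical database name

class PhotoForm(FlaskForm):
    photo = FileField('Your photo')

@app.route('/upload/', methods=('GET', 'POST'))
def upload():
    form = PhotoForm()
    filename = None
    if form.validate_on_submit():
        filename = secure_filename(form.photo.data.filename)
        data = form.photo.data.read()  # raw bytes from the uploaded FileStorage
        doc_id, _rev = db.save({'filename': filename})
        db.put_attachment(db[doc_id], data,
                          filename=filename,
                          content_type=form.photo.data.mimetype)
    return render_template('upload.html', form=form, filename=filename)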

python eve gracefully exit from callback

I'm wondering if it's possible to update an item without completely processing the PATCH request.
What I'm trying to do is to randomly generate and insert a value into the db when a user sends a PATCH request to the accounts/ endpoint. If I don't exit from the PATCH request, I will get an error because it expects a value, but I cannot give it in advance because it will be randomly generated.
def pre_accounts_patch_callback(request, lookup):
    if not my_func():
        abort(401)
    else:
        return HTTP 201 OK  # pseudocode: exit here with a 201 instead of processing the PATCH
What can I do?
I'm not sure I get what you want to achieve; however, keep in mind that you can actually update lookup within your callback, so the API will pick up and process the updated version, with validation and all.
import random

def pre_accounts_patch_callback(request, lookup):
    lookup['random_field'] = random.randint(0, 10)

app = Eve()
app.on_pre_PATCH_accounts += pre_accounts_patch_callback

if __name__ == '__main__':
    app.run()
