I am looking to serve multiple routes from a single GCP Cloud Function using Python. While GCP Functions actually use Flask under the hood, I can't figure out how to use the Flask routing system to serve multiple routes from a single cloud function.
I was working on a very small project, so I wrote a quick router of my own, which is working well. Now that I'm using GCP Functions more, I'd like to either figure out how to use the Flask router or invest more time in my hand-rolled version and perhaps open source it, though that seems redundant when it would be a very close copy of Flask's routing; perhaps it would be best to add the functionality directly to Flask if it doesn't already exist.
Does anyone have any experience with this issue? I'm guessing I'm missing a simple function hidden somewhere in Flask, but if not, this seems like a pretty big/common problem, though I guess GCP Functions' Python support is in beta for a reason?
Edit:
Abridged example of my hand-rolled version that I'd like to replace with Flask if possible:
router = MyRouter()

@router.add('some/path', RouteMethod.GET)
def handle_this(req):
    ...

@router.add('some/other/path', RouteMethod.POST)
def handle_that(req):
    ...

# main entry point for the cloud function
def main(request):
    return router.handle(request)
The following solution is working for me:
import flask
import werkzeug.datastructures

app = flask.Flask(__name__)

# Note: Flask URL rules need a leading slash, and view functions take no
# request argument (use flask.request inside the view instead).
@app.route('/some/path')
def handle_this():
    ...

@app.route('/some/other/path', methods=['POST'])
def handle_that():
    ...

def main(request):
    with app.app_context():
        headers = werkzeug.datastructures.Headers()
        for key, value in request.headers.items():
            headers.add(key, value)
        with app.test_request_context(method=request.method,
                                      base_url=request.base_url,
                                      path=request.path,
                                      query_string=request.query_string,
                                      headers=headers,
                                      data=request.data):
            try:
                rv = app.preprocess_request()
                if rv is None:
                    rv = app.dispatch_request()
            except Exception as e:
                rv = app.handle_user_exception(e)
            response = app.make_response(rv)
            return app.process_response(response)
Based on http://flask.pocoo.org/snippets/131/
Thanks to inspiration from Guillaume Blaquiere's article and some tweaking, I have an approach that enables me to use ngrok to generate a public URL for local testing and development of Google Cloud Functions.
I have two key files, app.py and main.py.
I am using VS Code and can now open app.py, press F5, and select "Debug the current file". Now I can set breakpoints in my function in main.py. I have the 'REST Client' extension installed, which lets me configure GET and POST calls that I can run against my local and ngrok URLs (a small requests-based sketch of such calls is shown after the code below).
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# app.py
import os
from flask import Flask, request, Response
from main import callback

app = Flask(__name__)

@app.route('/', methods=['GET', 'POST'])
def test_function():
    return callback(request)

def start_ngrok():
    from pyngrok import ngrok
    ngrok_tunnel = ngrok.connect(5000)
    print(' * Tunnel URL:', ngrok_tunnel.public_url)

if __name__ == '__main__':
    if os.environ.get('WERKZEUG_RUN_MAIN') != 'true':
        start_ngrok()
    app.run(debug=True)
#!/usr/bin/env python3
# This file main.py can be run as a Google Cloud Function and deployed with:
# gcloud functions deploy callback --runtime python38 --trigger-http --allow-unauthenticated
from flask import Response
import datetime

now = datetime.datetime.now()

def callback(request):
    if request.method == 'POST':  # Block is only for POST requests
        print(request.json)
        return Response(status=200)
    return Response(f'''
        <!doctype html><title>Hello from webhook</title>
        <body><h1>Hello!</h1><p>{now:%Y-%m-%d %H:%M}</p>
        </body></html>
    ''', status=200)
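The REST Client extension uses its own request files; as an alternative sketch, the same GET and POST calls can be made from plain Python with the requests library (the ngrok URL below is a placeholder for the one printed at startup):

import requests

# Exercise the function through the local Flask server started by app.py.
local = requests.get('http://localhost:5000/')
print(local.status_code)

# Exercise the same function through the ngrok tunnel (replace with the printed URL).
tunnel = requests.post('https://your-tunnel-id.ngrok.io/', json={'example': 'payload'})
print(tunnel.status_code)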
Simplified version of @rabelenda's answer that also works for me:
def main(request):
    with app.request_context(request.environ):
        try:
            rv = app.preprocess_request()
            if rv is None:
                rv = app.dispatch_request()
        except Exception as e:
            rv = app.handle_user_exception(e)
        response = app.make_response(rv)
        return app.process_response(response)
The solution by Martin worked for me until I tried calling request.get_json() in one of my routes. The end result was the response being blocked at a lower level because the data stream had already been consumed.
I came across this question looking for a solution using functions_framework in Google Cloud Run. It already sets up a Flask app, which you can get by importing current_app from flask.
from flask import current_app
app = current_app
I believe functions_framework is used by Google Cloud Functions, so it should also work there.
Thanks to @rabelenda's answer above for inspiring my answer, which tweaks the data/json parameters and adds support for an unhandled-exception (InternalServerError) handler:
import werkzeug.datastructures

def process_request_in_app(request, app):
    # source: https://stackoverflow.com/a/55576232/1237919
    with app.app_context():
        headers = werkzeug.datastructures.Headers()
        for key, value in request.headers.items():
            headers.add(key, value)
        data = None if request.is_json else (request.form or request.data or None)
        with app.test_request_context(method=request.method,
                                      base_url=request.base_url,
                                      path=request.path,
                                      query_string=request.query_string,
                                      headers=headers,
                                      data=data,
                                      json=request.json if request.is_json else None):
            try:
                rv = app.preprocess_request()
                if rv is None:
                    rv = app.dispatch_request()
            except Exception as e:
                try:
                    rv = app.handle_user_exception(e)
                except Exception as e:
                    # Fall back to the unhandled exception handler for InternalServerError.
                    rv = app.handle_exception(e)
            response = app.make_response(rv)
            return app.process_response(response)
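Usage from the Cloud Function entry point might then look like this (a minimal sketch, assuming app is the module-level Flask app that has the routes registered on it):

import flask

app = flask.Flask(__name__)
# ... register @app.route(...) handlers here ...

def main(request):
    # Delegate the incoming Cloud Functions request to the Flask routing machinery.
    return process_request_in_app(request, app)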
I create an API with FastAPI like the following:
import os
import shutil

from fastapi import FastAPI, File, UploadFile

app = FastAPI()

@app.post('/uploadFile/')
async def uploadFile(file: UploadFile = File(...)):
    file_name, extension_file_name = os.path.splitext(file.filename)
    base_path = os.path.join(os.getcwd(), 'static')
    path_cheked_exists = os.path.join(base_path, extension_file_name[1:])
    if not os.path.exists(path_cheked_exists):
        os.makedirs(path_cheked_exists)
    try:
        shutil.copy(file, path_cheked_exists)
    except Exception as exp:
        print("can not copy that file")
    return {'file': 'okay'}
The API returned the result but didn't save the file.
Thanks for any help with the shutil module.
Instead of using the shutil module, you can use a simple context manager:
with open('filename.ext', 'wb+') as file_obj:
    file_obj.write(file.file.read())
Also, your problem could be that you passed file (the UploadFile object) rather than its underlying file object.
Try shutil.copy(file.file, path_cheked_exists).
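Note that shutil.copy() expects filesystem paths rather than file objects, so another option is shutil.copyfileobj() into an explicitly opened destination. A sketch, reusing path_cheked_exists from the question and assuming the original filename should be kept:

import os
import shutil

# Build the destination path inside the already-created directory,
# then stream the uploaded bytes into it.
destination = os.path.join(path_cheked_exists, file.filename)
with open(destination, 'wb') as out_file:
    shutil.copyfileobj(file.file, out_file)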
I have a Flask backend which generates an image based on some user input, and sends this image to the client side using the send_file() function of Flask.
This is the Python server code:
@app.route('/image', methods=['POST'])
def generate_image():
    cont = request.get_json()
    t = cont['text']
    print(cont['text'])
    name = pic.create_image(t)  # A different function which generates the image
    time.sleep(0.5)
    return send_file(f"{name}.png", as_attachment=True, mimetype="image/png")
I want to delete this image from the server after it has been sent to the client.
How do I achieve it?
OK, I solved it. I used @app.after_request with an if condition to check the endpoint, and then deleted the image:
@app.after_request
def delete_image(response):
    global image_name
    if request.endpoint == "generate_image":  # this is the endpoint at which the image gets generated
        os.remove(image_name)
    return response
Another way would be to include the decorator in the route, so you do not need to check for the endpoint. Just import after_this_request from the flask library.
from flask import after_this_request

@app.route('/image', methods=['POST'])
def generate_image():
    @after_this_request
    def delete_image(response):
        try:
            # The closure reads `name`, which is bound below before the response is sent.
            os.remove(f"{name}.png")
        except Exception as ex:
            print(ex)
        return response

    cont = request.get_json()
    t = cont['text']
    print(cont['text'])
    name = pic.create_image(t)  # A different function which generates the image
    time.sleep(0.5)
    return send_file(f"{name}.png", as_attachment=True, mimetype="image/png")
You could also have another function, delete_image(), and call it at the bottom of the generate_image() function.
I want to create a web server, that automatically handles "orders" when receiving a POST request.
My code so far looks like this:
from json import loads

from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from tornado.web import Application, url, RequestHandler

order_list = list()

class MainHandler(RequestHandler):
    def get(self):
        pass

    def post(self):
        if self.__authenticate_incoming_request():
            payload = loads(self.request.body)
            order_list.append(payload)
        else:
            pass

    def __authenticate_incoming_request(self) -> bool:
        # some request authentication code
        return True

def start_server():
    application = Application([
        url(r"/", MainHandler)
    ])
    server = HTTPServer(application)
    server.listen(8080)
    IOLoop.current().start()

if __name__ == '__main__':
    start_server()
Here is what I want to achieve:
Receive a POST request with information about incoming "orders"
Perform an action A n times based on a value defined in the request.body (concurrently, if possible)
Previously, to perform action A n times, I used a ThreadPoolExecutor from concurrent.futures, but I am not sure how to handle this correctly with a web server running in parallel.
My idea was something like this:
start_server()
tpe = ThreadPoolExecutor(max_workers=10)
while True:
    if order_list:
        new_order = order_list.pop(0)
        tpe.submit(my_action, new_order)  # my_action is my desired function
    sleep(5)
Now this piece of code is of course blocking, and I was hoping that the web server would continue running in parallel, while I am running my while-loop.
Is a setup like this possible? Do I maybe need to utilize other modules? Any help greatly appreciated!
It's not working as expected because time.sleep is a blocking function.
Instead of using a list and a while loop and sleeping to check for new items in a list, use Tornado's queues.Queue which will allow you to check for new items asynchronously.
from tornado.queues import Queue

order_queue = Queue()
tpe = ThreadPoolExecutor(max_workers=10)

async def queue_consumer():
    # The old while-loop is now converted into a coroutine
    # and an async for loop is used instead
    async for new_order in order_queue:
        # run your function in the thread pool
        IOLoop.current().run_in_executor(tpe, my_action, new_order)

def start_server():
    # ...
    # run queue_consumer before starting the loop
    IOLoop.current().spawn_callback(queue_consumer)
    IOLoop.current().start()
Put items in the queue like this:
order_queue.put(payload)
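In the POST handler, that might look roughly like this (a sketch; making post a coroutine lets you await the put, and put_nowait would also work for an unbounded queue):

class MainHandler(RequestHandler):
    # (imports, get, and __authenticate_incoming_request are as in the question)
    async def post(self):
        if self.__authenticate_incoming_request():
            payload = loads(self.request.body)
            # Hand the order to the queue_consumer coroutine via the queue.
            await order_queue.put(payload)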
I want to unit test some code that calls a method of the boto3 s3 client.
I can't use moto because this particular method (put_bucket_lifecycle_configuration) is not yet implemented in moto.
I want to mock the S3 client and assure that this method was called with specific parameters.
The code I want to test looks something like this:
# sut.py
import boto3

class S3Bucket(object):
    def __init__(self, name, lifecycle_config):
        self.name = name
        self.lifecycle_config = lifecycle_config

    def create(self):
        client = boto3.client("s3")
        client.create_bucket(Bucket=self.name)
        rules = ...  # some code that computes rules from self.lifecycle_config
        # I want to test that `rules` is correct in the following call:
        client.put_bucket_lifecycle_configuration(
            Bucket=self.name,
            LifecycleConfiguration={"Rules": rules})

def create_a_bucket(name):
    lifecycle_policy = ...  # a dict with a bunch of key/value pairs
    bucket = S3Bucket(name, lifecycle_policy)
    bucket.create()
    return bucket
In my test, I'd like to call create_a_bucket() (though instantiating an S3Bucket directly is also an option) and make sure that the call to put_bucket_lifecycle_configuration was made with the correct parameters.
I have messed around with unittest.mock and botocore.stub.Stubber but have not managed to crack this. Unless otherwise urged, I am not posting my attempts since they have not been successful so far.
I am open to suggestions on restructuring the code I'm trying to test in order to make it easier to test.
Got the test to work with the following, where ... is the remainder of the arguments that are expected to be passed to s3.put_bucket_lifecycle_configuration().
# test.py
from unittest.mock import patch
import unittest

import sut

class MyTestCase(unittest.TestCase):
    @patch("sut.boto3")
    def test_lifecycle_config(self, cli):
        s3 = cli.client.return_value
        sut.create_a_bucket("foo")
        s3.put_bucket_lifecycle_configuration.assert_called_once_with(Bucket="foo", ...)

if __name__ == '__main__':
    unittest.main()
My code creates an engine first:
from sqlalchemy import create_engine

def createEngine(connectionstring):
    engine = create_engine(connectionstring,
                           # pool_size = DEFAULT_POOL_SIZE,
                           # max_overflow = DEFAULT_MAX_OVERFLOW,
                           echo=False)
    return engine
Then get a session from the engine:
from contextlib import contextmanager

@contextmanager
def getOrmSession(engine):
    try:
        Session.configure(bind=engine)
        session = Session()
        yield session
    finally:
        pass
The client code is as follows:
def composeItems(keyword, itemList):
    with getOrmSession(engine) as session:
        for i in itemList:
            item = QueryItem(query=keyword,
                             ......
                             active=0)
            session.add(item)
        session.commit()
Then, when I call composeItems within gevent spawn, MySQL deadlocks. What happened? What is wrong with the above usage?
Found the answer by myself.
I need to patch threading when importing gevent, so that scoped_session is able to use greenlet's thread-local storage. After changing the patching, everything works fine now.
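For reference, the patching referred to is gevent's monkey patching, which has to run before threading and SQLAlchemy are imported (a minimal sketch; patch_all() is the broad option, patch_thread() the narrower one):

# Must run before anything else imports threading / SQLAlchemy.
from gevent import monkey
monkey.patch_all()  # or monkey.patch_thread() to patch only the thread primitives

import gevent
from sqlalchemy.orm import scoped_session, sessionmaker

# scoped_session now keys sessions off greenlet-local storage instead of OS threads.
Session = scoped_session(sessionmaker())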