In order to implement a system that exposes a REST API and runs other functionality simultaneously, I tried to use Flask like the following:
from flask import Flask

app = Flask(__name__)
app.run(threaded=True)  # blocks here, so the next line is never reached
foo()
but the foo function never starts.
I would like to understand how to solve this problem, or to get an alternative way to implement it.
Thanks!
Going to the documentation for Flask.run() we see that the options provided (such as threaded) are forwarded to the underlying function werkzeug.run_simple(). The documentation for werkzeug.run_simple() says the following about the threaded parameter:
threaded – should the process handle each request in a separate thread?
This means that each REST call will be handled in a separate thread, but it does not start the server in the background (which seems to be what you want). Instead, you can use the Process class from multiprocessing:
from multiprocessing import Process

from flask import Flask

app = Flask(__name__)

if __name__ == "__main__":
    # Run the development server in a background process so that the
    # main process is free to continue with foo().
    p = Process(target=app.run)
    p.start()
    foo()
This will start your Flask app in the background, letting you run other functions after starting the app.
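If foo() eventually returns and the program should then exit, the background server can also be stopped explicitly; a small sketch reusing p from above:
# once the foreground work in foo() is done:
p.terminate()  # stop the server process
p.join()       # wait for it to shut down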
I have a very simple Flask application, like:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def sth():
    return 'STH'
I'd like to profile it (with cProfile or profile) in a way that collects the profiling information from the start of the webserver until the end of its run, and writes the data only at the end of the run. I need the results written into a file, not into any database.
By starting the webserver with
/path/venv/bin/python -m cProfile -m flask --debug --app entry.py run
command, it works as expected; however, for reasons of my own I'd like to add this logic into the entry.py file itself. (Note the -m cProfile switch.)
What'd be the best way to achieve this?
I tried to play around with cProfile.run() and app.run(), but that did not really work, and it is also not advised for use in production.
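For reference, the kind of combination being described might look like this sketch inside entry.py (the output filename is an illustrative placeholder, and the debug/reloader options are omitted):
import cProfile

from flask import Flask

app = Flask(__name__)

@app.route('/')
def sth():
    return 'STH'

if __name__ == '__main__':
    # Profile everything from server start-up until the process exits,
    # then dump the collected stats to a file.
    cProfile.run("app.run()", "entry_profile.prof")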
I also tried the ProfilerMiddleware from werkzeug.middleware.profiler and flask_profiler, but had no luck with my goal.
Thanks.
I have some integration tests that make a few web requests per test, using the aiohttp package.
In each integration test, I have the following code to create a persistent session variable:
async with aiohttp.ClientSession() as session:
    # ...make web call 1
    # ...make web call 2
This all works fine, but I'd like to move the 'async with aiohttp.ClientSession() as session' out of the test case and into the base class, so that it becomes a variable that all test cases can access. Despite my best efforts, I can only get to a point where the first call in the test is made successfully, but then the session is closed, causing the second call to fail. Does anyone know how to set up the ClientSession(), perhaps in the setUp() declaration?
Ref: https://docs.aiohttp.org/en/latest/http_request_lifecycle.html
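For reference, the kind of base-class setup being asked about might look like this sketch (it assumes unittest's IsolatedAsyncioTestCase, available since Python 3.8; the class names and URL are placeholders):
import unittest

import aiohttp


class BaseIntegrationTest(unittest.IsolatedAsyncioTestCase):
    async def asyncSetUp(self):
        # One session per test, available to every test method via self.session.
        self.session = aiohttp.ClientSession()

    async def asyncTearDown(self):
        # Close the session only after the whole test has finished.
        await self.session.close()


class ExampleTest(BaseIntegrationTest):
    async def test_two_calls(self):
        async with self.session.get("https://example.com") as resp1:
            assert resp1.status == 200
        async with self.session.get("https://example.com") as resp2:
            assert resp2.status == 200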
I am fairly new to Flask applications, but well versed in Python. I have recently begun making web applications instead of regular applications, and I'm trying to gather some of them on a single server. Enter "Application Dispatching".
I was hoping to be able to develop an app locally and then deploy it on the server using dispatching. This means that locally I will have a script that launches the Flask app (wsgi.py), which imports things from the application. Now, once I add it to the dispatcher, I import the new application. This means that wsgi.py, which used to be a script, is now a module, and all hell breaks loose.
dispatcher.py:
from flask import Flask
from werkzeug.middleware.dispatcher import DispatcherMiddleware
from werkzeug.exceptions import NotFound

from app1 import app as app1
from app2 import app as app2
from app3 import app as app3
from app4 import app as app4

app = Flask(__name__)
app.wsgi_app = DispatcherMiddleware(NotFound(), {
    "/app1": app1,
    "/app2": app2,
    "/app3": app3,
    "/app4": app4,
})

if __name__ == "__main__":
    app.run()
app1\__init__.py: (works like this, but merely a proof of concept / simple app)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index_one():
    return "Hi im 1"

if __name__ == "__main__":
    app.run()
Both of these work - the dispatcher and the app1 can be run independently. Now, let's say I need to import a module to make app1 more functional:
from flask import Flask
import db
...
Since the dispatcher is in a parent directory, in order for it to work I need to do something like this:
from . import db
# or
from app1 import db
Now the application doesn't work independently anymore. I would like to avoid either having to refactor the application every time it needs to be deployed or adding a lot of boilerplate code like this:
if __name__ == "__main__":
    import db
else:
    from . import db
In any case, this doesn't work when configuring the app with app.config.from_object("config.Config"), since that string can't be forced into a relative import (?) and otherwise the module can't be found without explicitly spelling out which package it resides in.
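For illustration, the two variants being described might look like this (assuming a config.py with a Config class sitting next to app1's __init__.py; the exact layout is an assumption):
# Works when app1 is started on its own, with its own directory on the path:
app.config.from_object("config.Config")

# When imported through the dispatcher in the parent directory,
# the package has to be spelled out explicitly:
app.config.from_object("app1.config.Config")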
From the tutorial, I got the sense that I could isolate the applications from each other:
Application dispatching is the process of combining multiple Flask
applications on the WSGI level. You can combine not only Flask
applications but any WSGI application. This would allow you to run a
Django and a Flask application in the same interpreter side by side if
you want. The usefulness of this depends on how the applications work
internally.
The fundamental difference from Large Applications as Packages is that
in this case you are running the same or different Flask applications
that are entirely isolated from each other. They run different
configurations and are dispatched on the WSGI level.
What am I doing wrong, and can I get this working the way I describe: being able to launch the applications either in isolation or through the dispatcher, without changing my code or adding a bunch of unrelated boilerplate to make it work?
I figured it out myself, and I was indeed using the application dispatcher wrong. It always integrates the different applications into one server instance.
Instead, I found that nginx can be used to forward requests to different server instances, thus completely separating the applications, each in its own virtual environment.
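To sketch what that looks like (the file name and port are illustrative assumptions), each application keeps its own independent entry point, and nginx simply proxies a path such as /app1 to the port that instance listens on:
# wsgi_app1.py -- illustrative: run app1 as its own server instance,
# in its own virtualenv; nginx proxies /app1 to this port.
from app1 import app

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5001)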
OK, so I have been through some tutorials to get a Flask app onto Google Cloud, which is fine.
I have also been through the flask tutorial to build a flaskr blog:
http://flask.pocoo.org/docs/1.0/tutorial/
It occurred to me that a sensible thing to do would be to create a database (MySQL in my case) on Google and then modify the code so that it uses that. This is fine, and I can get it to work on my local machine.
However, now that I am coming to deploying this, I have hit a problem.
The Google Cloud tutorials tend to use a Flask app that is initiated in a single file such as main.py, e.g.:
from flask import Flask, render_template
app = Flask(__name__)
....
The Flask tutorial mentioned above uses a package and puts the create_app() code in the __init__.py file, and at present I cannot get this to start in the same way (see the sample code below).
from flask import Flask

def create_app(test_config=None):
    # create and configure the app
    app = Flask(__name__, instance_relative_config=True)
    app.config.from_mapping(
        SECRET_KEY='dev'
    )
    # ... rest of the factory from the tutorial ...
    return app
Are there some adjustments that I need to make, to something like the app.yaml file, to get it to recognise the flaskr package, or do I need to rewrite the whole thing so that it uses a main.py file?
I feel that this is one of those points in time where I could really pick up a bad habit. What, in general, is the preferred way to write Flask apps on Google Cloud?
I am using the standard environment on Google App Engine.
Thanks for your advice.
Mark
Since you have an application factory, you can create the app anywhere. Just create it in main.py, since this is what App Engine expects:
from my_package import create_app
app = create_app()
I want to run a Bokeh interactive application without using the "bokeh serve --show" command. Instead, I want to use the 'python script_name.py' syntax.
Is there any way to do this?
bigreddot is correct, but the code given there won't start a Bokeh server by itself; it assumes you already have a Tornado server running. So here is the standalone solution given in Bokeh's docs in the same section.
Here is the relevant part that starts the server; for the complete example, refer to the example code in Bokeh's documentation:
from bokeh.server.server import Server

# bkapp is the Bokeh application function defined in the full example
server = Server({'/': bkapp}, num_procs=4)
server.start()

if __name__ == '__main__':
    print('Opening Bokeh application on http://localhost:5006/')
    server.io_loop.add_callback(server.show, "/")
    server.io_loop.start()
This is covered in the project documentation:
https://docs.bokeh.org/en/latest/docs/user_guide/server.html#embedding-bokeh-server-as-a-library
from bokeh.server.server import Server

server = Server(
    bokeh_applications,  # list of Bokeh applications
    io_loop=loop,        # Tornado IOLoop
    **server_kwargs      # port, num_procs, etc.
)

# start timers and services and immediately return
server.start()