Handling atexit for multiple app objects with Flask dev server reloader

This is yet another Flask dev server reloader question. There are a million questions asking why it loads everything twice, and this is not one of them. I understand that it loads everything twice; my question is about dealing with this reality, and I haven't found an answer that addresses what I'm trying to do.
My question is: how can I clean up all app objects at exit?
My current approach is shown below. In this example I run my cleanup code using an atexit function.
from flask import Flask

app = Flask(__name__)
print("start_app_id: ", '{}'.format(id(app)))

import atexit

@atexit.register
def shutdown():
    print("AtExit_app_id: ", '{}'.format(id(app)))
    # do some cleanup on the app object here

if __name__ == "__main__":
    import os
    if os.environ.get('WERKZEUG_RUN_MAIN') == "true":
        print("reloaded_main_app_id: ", '{}'.format(id(app)))
    else:
        print("first_main_app_id: ", '{}'.format(id(app)))
    app.run(host='0.0.0.0', debug=True)
The output of this code is as follows:
start_app_id: 140521561348864
first_main_app_id: 140521561348864
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
* Restarting with stat
start_app_id: 140105598483312
reloaded_main_app_id: 140105598483312
* Debugger is active!
* Debugger pin code: xxx-xxx-xxx
^CAtExit_app_id: 140521561348864
Note that when first loaded, an app object with ID '864 is created. During the automatic reloading, a new app object with ID '312 is created. Then when I hit Ctrl-C (last line), the atexit routine is called and the original '864 app object is the one that is accessible using the app variable -- not the newer '312 app object.
I want to be able to run cleanup on all app objects floating around when the server closes or is Ctrl-C'd (in this case both '864 and '312). Any recommendations on how to do this?
Alternatively, if I could just run the cleanup on the newer '312 object created after reloading, I could also make that work; however, my current approach only lets me clean up the original app object.
Thanks.
UPDATE1: I found a link that suggested using try/finally instead of the atexit hook to accomplish what I set out to do above. Switching to this results in exactly the same behavior as atexit, and therefore doesn't help with my issue:
from flask import Flask

app = Flask(__name__)
print("start_app_id: ", '{}'.format(id(app)))

if __name__ == "__main__":
    import os
    if os.environ.get('WERKZEUG_RUN_MAIN') == "true":
        print("reloaded_main_app_id: ", '{}'.format(id(app)))
    else:
        print("first_main_app_id: ", '{}'.format(id(app)))
    try:
        app.run(host='0.0.0.0', debug=True)
    finally:
        print("Finally_app_id: ", '{}'.format(id(app)))
        # do app cleanup code here

After some digging through the werkzeug source I found the answer: it isn't possible to do what I wanted, and this is by design.
When using the Flask dev server (werkzeug), it isn't possible to clean up all existing app objects upon termination (e.g. Ctrl-C), because the werkzeug server catches the KeyboardInterrupt exception and "passes" on it. You can see this in the last lines of werkzeug's _reloader.py, in the run_with_reloader function:
def run_with_reloader(main_func, extra_files=None, interval=1,
                      reloader_type='auto'):
    """Run the given function in an independent python interpreter."""
    import signal
    reloader = reloader_loops[reloader_type](extra_files, interval)
    signal.signal(signal.SIGTERM, lambda *args: sys.exit(0))
    try:
        if os.environ.get('WERKZEUG_RUN_MAIN') == 'true':
            t = threading.Thread(target=main_func, args=())
            t.setDaemon(True)
            t.start()
            reloader.run()
        else:
            sys.exit(reloader.restart_with_reloader())
    except KeyboardInterrupt:
        pass
If you replace the above "except KeyboardInterrupt:" with "finally:", and then run the second code snippet in the original question, you observe that both of the created app objects are cleaned up as desired. Interestingly, the first code snippet (the one using @atexit.register) still doesn't work as desired after making these changes.
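For reference, the patched tail of run_with_reloader would look like this (a local edit to werkzeug's _reloader.py, shown only to illustrate the change described above, not something to rely on across werkzeug upgrades):

    try:
        if os.environ.get('WERKZEUG_RUN_MAIN') == 'true':
            t = threading.Thread(target=main_func, args=())
            t.setDaemon(True)
            t.start()
            reloader.run()
        else:
            sys.exit(reloader.restart_with_reloader())
    finally:
        # was: except KeyboardInterrupt: pass
        # with no handler the interrupt now propagates, so try/finally
        # blocks around app.run() get a chance to execute their cleanup
        pass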
So in conclusion: you can clean up all existing app objects when using the Flask dev server, but you need to modify the werkzeug source to do so.

Related

What is the best practice to interact with subprocesses in Python

I'm building an application which is intended to do bulk-job processing of data within another piece of software. To control the other software automatically I'm using pyautoit, and everything works fine, except for application errors caused by the external software, which occur from time to time.
To handle those cases, I built a watchdog:
It starts the script with the bulk job within a subprocess
process = subprocess.Popen(['python', job_script, src_path],
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE, shell=True)
It listens for system events using the winevt.EventLog module
EventLog.Subscribe('System', 'Event/System[Level<=2]', handle_event)
If an error occurs, it shuts everything down and restarts the script.
OK, so if a system error event occurs, it should be handled in a way that notifies the subprocess. That notification should then trigger the following action within the subprocess: inside the subprocess there's an object controlling everything and continuously collecting generated data. In order not to have to start the whole job from the beginning after restarting the script, this object has to be dumped using pickle (which isn't the problem here!).
Listening for system events from inside the subprocess didn't work; it resulted in a continuous loop when calling subprocess.Popen().
So, my question is how I can either subscribe to system events from inside a child process, or communicate between the parent and child process -- that is, send a message like "hey, an error occurred", listen for it within the subprocess, and then create the dump.
I'm really sorry that I'm not allowed to post any code in this case, but I hope (and actually think) that my description is understandable. My question is just about which module to use to accomplish this in the best way.
Would be really happy if somebody could point me in the right direction...
Br,
Mic
I believe the best answer may lie here: https://docs.python.org/3/library/subprocess.html#subprocess.Popen.stdin
These attributes should allow for proper communication between the different processes fairly easily, and without any other dependencies.
Note that Popen.communicate() may suit better if other processes may cause issues.
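For a one-shot exchange, a minimal sketch using Popen.communicate() might look like this (child.py is a hypothetical script, assumed to read one line from stdin and print a reply):

import sys
from subprocess import Popen, PIPE

# child.py is hypothetical: assume it reads one line from stdin and replies
p = Popen([sys.executable, 'child.py'], stdin=PIPE, stdout=PIPE)
out, err = p.communicate(input=b'hey, an error occurred\n', timeout=30)
print(out)

Note that communicate() waits for the process to terminate, so it suits a single send/receive rather than the ongoing dialogue in the scripts below.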
EDIT to add example scripts:
main.py
from subprocess import Popen, PIPE, STDOUT
import sys

def check_output(p):
    out = p.stdout.readline()
    return out

def send_data(p, data):
    p.stdin.write(bytes(f'{data}\r\n', 'utf8'))  # auto newline
    p.stdin.flush()

def initiate(p):
    # p.stdin.write(bytes('init\r\n', 'utf8'))  # function to send first communication
    # p.stdin.flush()
    send_data(p, 'init')
    return check_output(p)

def test(p, data):
    send_data(p, data)
    return check_output(p)

def main():
    exe_name = 'Doc2.py'
    p = Popen([sys.executable, exe_name], stdout=PIPE, stderr=STDOUT, stdin=PIPE)
    print(initiate(p))
    print(test(p, 'test'))
    print(test(p, 'test2'))  # testing responses
    print(test(p, 'test3'))

if __name__ == '__main__':
    main()
Doc2.py
import sys, time, random

def recv_data():
    return sys.stdin.readline()

def send_data(data):
    # flush so the reply isn't stuck in the pipe's buffer,
    # which would block the parent's readline()
    print(data, flush=True)

while 1:
    d = recv_data()
    # print(f'd: {d}')
    if d.strip() == 'test':
        send_data('return')
    elif d.strip() == 'init':
        send_data('Acknowledge')
    else:
        send_data('Failed')
This is the best method I could come up with for cross-process communication. Also make sure all requests and responses don't contain newlines, or the code will break.

What's the proper way to test a MongoDB connection with motor io?

I've got a simple FastAPI web app going, and I'd like to be able to check the database connection on startup (and retry the connection if it fails).
I've got the following code, but it doesn't feel right:
# main.py
import uvicorn
from backend.app import app

if __name__ == "__main__":
    uvicorn.run(app, port=8001)

# app.py
# ... omitted for brevity
from backend.database import notes, tags
# ... omitted for brevity

# database.py
from motor.motor_asyncio import AsyncIOMotorClient
from asyncio import get_event_loop

client = AsyncIOMotorClient("localhost", 27027)
loop = get_event_loop()
data = loop.run_until_complete(client.server_info())
db = client.notes_db
notes = db.notes
tags = db.tags
Without get_event_loop() and the subsequent loop.run_until_complete() call, it won't test the database connection until you actually try to access or write to it.
My goal is to be able to halt the startup process until it can successfully connect to the database. Is there any clean way to do this with Python and Motor (https://motor.readthedocs.io/; sorry, there's no tag for it)?
The startup event in FastAPI is the deal here, I guess. In addition, this repository is a nice example, and this thread could provide you with even more information. You could execute your tests within the startup event. This means the application won't start until the startup event has been successfully executed.
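A minimal sketch of that approach, reusing the client settings from the question (the retry count and delay are arbitrary choices, not FastAPI or Motor requirements):

import asyncio

from fastapi import FastAPI
from motor.motor_asyncio import AsyncIOMotorClient
from pymongo.errors import ServerSelectionTimeoutError

app = FastAPI()
client = AsyncIOMotorClient("localhost", 27027)

@app.on_event("startup")
async def check_db_connection():
    # halt startup until the server responds, retrying a few times
    for attempt in range(5):
        try:
            await client.server_info()
            return
        except ServerSelectionTimeoutError:
            await asyncio.sleep(2)  # wait before the next attempt
    raise RuntimeError("could not connect to MongoDB")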

Python RQ-Scheduler not giving any output

I am unable to get rq_scheduler working. Here is a simple example:
app.py
from flask import Flask
import datetime
from redis import Redis
from rq import Queue
from rq_scheduler import Scheduler
from tasks import example

app = Flask(__name__)
app.secret_key = 'abc'
app.redis = Redis.from_url('redis://')
app.task_queue = Queue('test', connection=app.redis)
scheduler = Scheduler(queue=app.task_queue, connection=app.redis)
# app.task_queue.enqueue('tasks.example', 2)
# scheduler.enqueue_at(datetime.datetime(2020, 4, 16, 10, 46), example, 2)
scheduler.enqueue_in(datetime.timedelta(seconds=1), example, 2)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, debug=True)
tasks.py
import time

def example(seconds):
    print('Starting task')
    for i in range(seconds):
        print(i)
        time.sleep(1)
    print('Task completed')
In the app directory in terminal, I start the following in separate tabs:
$redis-server
$rq worker test
$rqscheduler
$python app.py
The first queue.enqueue works fine. Both scheduler tasks do nothing. What is wrong?
I suspect that you may be getting confused because rqscheduler by default checks for new jobs only once every minute. You can tweak this with the -i flag to set the interval in seconds, and also add the -v flag for more verbose output:
rqscheduler -i 1 -v
However, I also noticed another issue with the above Flask code.
Probably due to the dev server spawning a separate process, I was finding that the scheduler.enqueue_in function was enqueuing the job twice. This probably wouldn't be an issue if enqueue_in were called inside a view function; however, where you have placed it, it actually runs when the application is started.
So when launching with the dev server this gets executed twice. It will then run once more every time the autoreloader senses a code change: after starting the dev server and then saving a change to the code, 3 jobs in total have been enqueued.
For the purpose of testing this, it may be advisable just to have a simple Python script which doesn't actually run the Flask app:
# enqueue_test.py
import datetime

from redis import Redis
from rq import Queue
from rq_scheduler import Scheduler
from tasks import example

r = Redis.from_url('redis://localhost:6379')
q = Queue('test', connection=r)
scheduler = Scheduler(queue=q, connection=r)
scheduler.enqueue_in(datetime.timedelta(seconds=1), example, 2)
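With the scheduler polling every second and a worker listening on the test queue (the same terminals as in the question), running the script once should then result in exactly one job execution:
$rq worker test
$rqscheduler -i 1 -v
$python enqueue_test.py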

Configuring my Python Flask app to call a function every 30 seconds

I have a Flask app that returns a JSON response. However, I want it to call that function every 30 seconds without clicking the refresh button in the browser. Here is what I did, using apscheduler. This code is in application.py:
import redis
from flask import Flask, jsonify
from flask_cors import CORS, cross_origin
from apscheduler.schedulers.background import BackgroundScheduler

def create_app(config_filname):
    con = redis.StrictRedis(host="localhost", port=6379, charset="utf-8",
                            decode_responses=True, db=0)
    application = Flask(__name__)
    CORS(application)
    sched = BackgroundScheduler()

    @application.route('/users')
    @cross_origin()
    @sched.scheduled_job('interval', seconds=20)
    def get_users():
        # some code...
        return jsonify(users)

    sched.start()
    return application
Then in my wsgi.py
from application import create_app

application = create_app('application.cfg')

with application.app_context():
    if __name__ == "__main__":
application.run()
When I run this application, I get the JSON output, but it does not refresh; instead, after 20 seconds it throws:
RuntimeError: Working outside of application context.
This typically means that you attempted to use functionality that needed
to interface with the current application object in some way. To solve
this, set up an application context with app.app_context(). See the
documentation for more information.
What am I doing wrong? I would appreciate any advice.
Apologies if this is in a way subverting the question, but if you want the users to be sent every 30 seconds, this probably shouldn't be done in the backend. The backend should only ever send out data when a request is made. For the data to be sent at regular intervals, the frontend needs to be configured to make requests at regular intervals.
Personally I'd recommend doing this with a combination of iframes and JavaScript, as described in this Stack Overflow question:
Auto Refresh IFrame HTML
Lastly, when it comes to your actual code, it seems like there is an error here:
if __name__ == "__main__":
application.run()
The "application.run()" line should be indented as it is inside the if statement

Trying to combine flask and guizero

So I was trying Flask when I got a funny idea: if I could combine guizero with my server, I could make something like a console for my simple server. So I began working on it, when I stumbled over two problems.
Here's my code:
from flask import Flask, render_template
from guizero import App, PushButton, Text, TextBox

app = Flask(__name__)
app.debug = True

console = App(title='web server')
text_input = "no message"

def message():
    text_input = textbox.get
    textbox.clear

header = Text(console, text="Web server console", size=50)
description = Text(console, text="type message here:")
textbox = TextBox(console)
button = PushButton(console, text="send", command=message)

@app.route('/')
def index():
    return render_template('index.html', text=text_input)

@app.route('/next')
def next():
    return render_template('game.html')

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
    app.display()
The template index.html is just a simple paragraph with {{text}}. It does show the "no message" string.
Now I'm experiencing two problems with this code:
1: If I run it, it only starts the server; but when I run it again, it gives the "already in use" error and then opens the GUI.
2: If I use the GUI, the website won't update when I push the button -- I think because the GUI doesn't run in the same instance of the script as the server. And even if it does, I don't think the debug function works with variables in the script.
I'm running the server on a Raspberry Pi 3B over Ethernet, if that is important.
I'm very new to Flask and HTML, so maybe I won't understand your answer, but I'd be glad if you could help.
I am also new to Flask, but I think you have to make a new thread (either for the Flask app or for the GUI). This makes sure that both Flask and the GUI can run simultaneously. Right now you are trying to run two loops at the same time, and that doesn't work.
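A minimal sketch of that idea, reusing names from the question (the reloader is disabled because it only works in the main thread; treat this as a starting point rather than a drop-in fix):

import threading

from flask import Flask
from guizero import App, Text

app = Flask(__name__)

@app.route('/')
def index():
    return "hello"

def run_flask():
    # use_reloader=False: the reloader must run in the main thread
    app.run(host='0.0.0.0', use_reloader=False)

if __name__ == '__main__':
    # Flask serves from a daemon thread so it exits with the GUI
    threading.Thread(target=run_flask, daemon=True).start()
    console = App(title='web server')
    Text(console, text="Web server console")
    console.display()  # blocks: the guizero event loop owns the main thread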
