Run a scheduled task with APScheduler in Flask (using mod_wsgi) - multithreading

I am trying to send an email in the background at a specific time every day in Flask. The app hangs as soon as I add a job, and I think I am having a threading issue. The config looks like this:
from pytz import utc
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore
from apscheduler.executors.pool import ThreadPoolExecutor, ProcessPoolExecutor

jobstores = {
    'default': SQLAlchemyJobStore(url='path_to_my_db')
}
executors = {
    'default': ThreadPoolExecutor(5),
    'processpool': ProcessPoolExecutor(3)
}
job_defaults = {
    'coalesce': False,
    'max_instances': 3
}
scheduler = BackgroundScheduler(jobstores=jobstores, executors=executors,
                                job_defaults=job_defaults, timezone=utc)
scheduler.start()
Then I am adding my job:
from flask_mail import Message

def send_reports():
    msg = Message("Microphone testing 1, 2",
                  recipients=["me@mycompany.com"])
    mail.send(msg)

scheduler.add_job(send_reports, 'cron', hour=8, minute=23)
If I comment out the scheduler.add_job line, the app runs normally.
In the virtual host I have the lines:
WSGIDaemonProcess www.mycompany.com processes=2 threads=5
WSGIScriptAlias / /var/www/html/myapp.wsgi
Will appreciate your assistance

I finally managed to send emails with APScheduler.
Here are my settings in the Apache virtual host to allow multiple threads (I am using mod_wsgi):
WSGIDaemonProcess app threads=15 maximum-requests=10000
WSGIScriptAlias / /var/www/html/myapp.wsgi
WSGIProcessGroup app
WSGIApplicationGroup %{GLOBAL}
Then in my app, I first import BackgroundScheduler:
from apscheduler.schedulers.background import BackgroundScheduler
I instantiate my scheduler with a timezone but otherwise use the default configuration:
scheduler = BackgroundScheduler(timezone='Africa/Nairobi')
Then, before the first request, I start the scheduler and add the send_reports job:
@app.before_first_request
def initialize():
    scheduler.start()
    scheduler.add_job(send_reports, 'cron', hour=10, minute=10, end_date='2055-05-30')
Sending the reports as a PDF attachment using pdfkit and Flask-Mail was another matter, but the gist of it is installing the right version of wkhtmltopdf and having the right env path, plus making sure you pass the app context to Flask-Mail when sending mail from a background thread, as sketched below.
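For the background-thread part, here is a minimal sketch of the pieces described above, assuming Flask-Mail (the mail object), pdfkit, and an existing Flask app object; generate_report_html is a hypothetical placeholder for whatever builds the report:

import pdfkit
from flask_mail import Message

def send_reports():
    # The scheduler runs this in a background thread, so push the
    # application context explicitly before using Flask-Mail.
    with app.app_context():
        html = generate_report_html()  # hypothetical report builder
        pdf = pdfkit.from_string(html, False)  # output_path=False returns the PDF as bytes
        msg = Message("Daily report", recipients=["me@mycompany.com"])
        msg.attach("report.pdf", "application/pdf", pdf)
        mail.send(msg)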
So this sends reports to the specified emails every day at 10:10 AM EAT. I hope someone finds this helpful.

Related

Telethon doesn't send messages when running as a service

I've created a script which sends messages using Telethon. The receivers are not always the same: the number of receivers and their IDs are taken from a MySQL table. The multiprocessing script runs okay in the expected loop when started from the command prompt, but as soon as it's started as a service the messages are not sent.
Please see the code below which includes the function to send out the messages. This function is called by another function which loops over the result of a MySQL query.
Can someone shed some light on why the function runs fine from the prompt but not as a service?
import configparser

from telethon import TelegramClient

# get configuration
config = configparser.ConfigParser()
config.read('/etc/p2000.cfg')
telegram_api_id = config.get('telegram', 'api_id')
telegram_api_hash = config.get('telegram', 'api_hash')
telegram_bot_name = config.get('telegram', 'bot_name')

client = TelegramClient(telegram_bot_name, telegram_api_id, telegram_api_hash)

def p2k_send_telegram(PeerID, Message):
    async def main():
        await client.send_message(int(PeerID), Message)
    with client:
        client.loop.run_until_complete(main())
Okay, the answer was easy and right in front of me! The issue could be isolated to the client variable: when running as a service under systemd, the session file has to be defined with its full path!
Something like this:
client = TelegramClient('/full/path/to/my.session', telegram_api_id, telegram_api_hash)
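The underlying reason is that systemd starts services from a different working directory than your shell, so Telethon's relative session name resolves to a different location. If you would rather not hard-code the path, a sketch like this (with my.session as a placeholder name) builds it from the script's own location:

import os
from telethon import TelegramClient

# Resolve the session file relative to this script, so it no longer depends
# on the working directory the service happens to be started from.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
client = TelegramClient(os.path.join(BASE_DIR, 'my.session'),
                        telegram_api_id, telegram_api_hash)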

How to keep a Python client program which uses Twilio running

I have deployed a Flask application on an Ubuntu server. To keep a check on the Flask application, I use Twilio: the client sends data to the server every 5 minutes, and in case something goes wrong, I should get a text message on my phone. Right now I am running this on my local machine, but I want to know how I can make it run permanently. Do I have to run the client code below on the Ubuntu server, or how else could it be done?
import time

import requests
from twilio.rest import Client

def localClient():
    try:
        data = {"inputData": "Bank of America", "dataId": 12345}
        response = requests.post("http://12.345.567.890/inputData", json=data).json()
    except Exception:
        # The POST failed or timed out, so send an SMS alert
        account_sid = "XXXXXXXXXXXXXXX"
        auth_token = "XXXXXXXXX"
        client = Client(account_sid, auth_token)
        message = client.messages.create(
            body='Server is down',
            from_='+12345678901',
            to='+19876543210')

while True:
    localClient()
    time.sleep(300)  # check every 5 minutes
Use Supervisor on Ubuntu. It will automatically restart your code whenever the server restarts, so you don't need to start it manually every time; it will run forever until you stop it yourself.
Refer to the Supervisor documentation to set it up.
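For reference, a minimal Supervisor program section might look something like this (a sketch with placeholder names and paths, assuming the script is saved as /home/ubuntu/localClient.py):

[program:localclient]
; run the client script and keep it alive across crashes and reboots
command=/usr/bin/python /home/ubuntu/localClient.py
autostart=true
autorestart=true
stdout_logfile=/var/log/localclient.out.log
stderr_logfile=/var/log/localclient.err.log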

How to trigger a function before Gunicorn starts its main process

I'm setting up a Flask app with Gunicorn in a Docker environment.
When I spin up my containers, I want my Flask container to create the database tables (based on my models) if the database is empty. I included a function in my wsgi.py file, but that seems to trigger the function each time a worker is initialized. After that I tried to use server hooks in my gunicorn.py config file, like below.
"""gunicorn WSGI server configuration."""
from multiprocessing import cpu_count
from setup import init_database
def on_starting(server):
"""Executes code before the master process is initialized"""
init_database()
def max_workers():
"""Returns an amount of workers based on the number of CPUs in the system"""
return 2 * cpu_count() + 1
bind = '0.0.0.0:8000'
worker_class = 'eventlet'
workers = max_workers()
I expect Gunicorn to trigger the on_starting function automatically, but the hook never seems to fire. The app seems to start up normally, but when I make a request that should insert a database entry, it says that the table doesn't exist. How do I trigger the on_starting hook?
I fixed my issue by preloading the app before creating the workers that serve it. I did this by adding this line to my gunicorn.py config file:
...
preload_app = True
This way the app is already running and can accept commands to create the necessary database tables.
Gunicorn imports a module in order to get at app (or whatever other name you tell Gunicorn the WSGI application object lives at). During that import, which happens before Gunicorn starts directing traffic to the app, code is executed. Put your startup code there, after you've created db (assuming you're using SQLAlchemy) and imported your models (so that SQLAlchemy will know about them and hence what tables to create), as in the sketch below.
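A minimal sketch of such a wsgi.py, assuming Flask-SQLAlchemy and hypothetical module names (a myapp package exposing app and db):

# wsgi.py - this runs at import time, before Gunicorn directs traffic to the app
from myapp import app, db  # hypothetical: your Flask app and SQLAlchemy instance
import myapp.models  # noqa: F401  (import models so SQLAlchemy knows the tables)

with app.app_context():
    db.create_all()  # creates missing tables; existing tables are left untouched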
Alternatively, populate your container with a pre-created database.

How can I execute asynchronous tasks in the background as scheduled in Google Cloud Platform?

Problem
I want to fetch a lot of game data at 9 o'clock every morning, so I use App Engine and a cron job. However, I would like to add Cloud Tasks, and I don't know how to do it.
Question
How can I execute asynchronous tasks in the background as scheduled in Google Cloud Platform?
Which is more natural to implement: (Cloud Scheduler + Cloud Tasks) or (cron job + Cloud Tasks)?
Development Environment
App Engine Python (Flexible environment).
Python 3.6
Best regards,
Cloud Tasks are asynchronous by design. As you mentioned, the best way would be to pair them with Cloud Scheduler.
First of all, since Cloud Scheduler needs either a Pub/Sub or an HTTP endpoint to call once it runs a job, I recommend creating an App Engine handler, which Cloud Scheduler will call, that creates and sends the task.
You can do so by following this documentation. First you will have to create a queue, and afterwards I recommend deploying a simple application that has a handler to create the tasks. A small example:
from google.cloud import tasks_v2beta3
from flask import Flask

app = Flask(__name__)

@app.route('/createTask', methods=['POST'])
def create_task_handler():
    client = tasks_v2beta3.CloudTasksClient()
    parent = client.queue_path('YOUR_PROJECT', 'PROJECT_LOCATION', 'YOUR_QUEUE_NAME')
    task = {
        'app_engine_http_request': {
            'http_method': 'POST',
            'relative_uri': '/handler_to_call'
        }
    }
    response = client.create_task(parent, task)
    # Flask cannot serialize the Task protobuf directly, so return its name
    return 'Created task: {}'.format(response.name)
Where 'relative_uri' is the path of the handler that the task will call to process your data.
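For completeness, a minimal sketch of that worker handler could look like this (the route name matches the relative_uri above; the body is a placeholder). Cloud Tasks treats a 2xx response as success and retries on anything else:

@app.route('/handler_to_call', methods=['POST'])
def handle_task():
    # fetch and process the game data here
    return 'OK', 200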
Once that is done, follow the Cloud Scheduler documentation to create jobs: specify the target as App Engine HTTP, set the URL to '/createTask', set the service to whichever one handles that URL, and set the HTTP method to POST. Fill in the rest of the parameters as required, and you can set the frequency to 'every monday 09:00'.

bottle.py WSGI server stops responding

I'm trying to build a simple API with the bottle.py (Bottle v0.11.4) web framework. To 'daemonize' the app on my server (Ubuntu 10.04.4), I'm running the shell command
nohup python test.py &
where test.py is the following Python script:
import sys
import bottle
from bottle import route, run, request, response, abort, hook

@hook('after_request')
def enable_cors():
    response.headers['Access-Control-Allow-Origin'] = '*'

@route('/')
def ping():
    return 'Up and running!'

if __name__ == '__main__':
    run(host=<my_ip>, port=3000)
I'm running into the following issue:
This works initially, but the server stops responding after some time (~24 hours). Unfortunately, the logs don't contain any revealing error messages.
The only way I have been able to reproduce the issue is by running a second script on my Ubuntu server that creates another server listening on a different port (i.e. exactly the same script as above but with port=3001). If I send a request to the newly created server, I also do not get a response and the connection eventually times out.
Any suggestions are greatly appreciated. I'm new to this, so if there's something fundamentally wrong with this approach, any links to reference guides would also be appreciated. Thank you!
Can you make sure the server isn't sleeping?
If it is, try enabling Wake-on-LAN: http://ubuntuforums.org/showthread.php?t=234588
