apscheduler: how to prevent console printing of job misfire warning message? - python-3.x

How can I prevent apscheduler from printing a job misfire (error) warning to the console?
As you can see in the console output, the job misfire event is captured and handled in a proper way.
But the red message from apscheduler scares normal users; they think the program has crashed, while nothing is wrong at all.
Why print this to the console if an event listener is defined? After defining a scheduler (EVENT_JOB_MISSED) event listener, the programmer is responsible for the console output.
Apscheduler is a great module, but this is a small annoyance.
def SetScheduler():
    global shedul
    from apscheduler.schedulers.background import BackgroundScheduler
    from apscheduler.events import EVENT_JOB_ERROR, EVENT_JOB_MISSED
    shedul = BackgroundScheduler()
    shedul.add_listener(shed_listener, EVENT_JOB_MISSED | EVENT_JOB_ERROR)
Console output:

You have to adjust your logging configuration to filter out these messages.
If you don't need to display any apscheduler logging, you could do:
import logging
logging.getLogger("apscheduler").propagate = False
If you want to display other messages but not these specific ones, you may need to add a filter to that logger; I have no experience with that, but you can check out the logging documentation yourself.
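For what it's worth, here is a minimal, untested sketch of such a filter. The logger name and the "was missed by" substring are assumptions about where and how apscheduler emits the misfire warning, so adjust them to the exact message you see:
import logging

class SuppressMisfireFilter(logging.Filter):
    """Drop the misfire warning, keep every other record."""
    def filter(self, record):
        # Returning False drops the record; the substring is an assumption
        # about the wording of the misfire warning.
        return "was missed by" not in record.getMessage()

# Filters only apply to records logged directly on the logger they are
# attached to, so attach it to the executor logger (name assumed here;
# check record.name if your warning comes from a different logger).
logging.getLogger("apscheduler.executors.default").addFilter(SuppressMisfireFilter())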


What is best practice to interact with subprocesses in python

I'm building an application which is intended to do a bulk job, processing data within another piece of software. To control the other software automatically I'm using pyautoit, and everything works fine, except for application errors caused by the external software, which occur from time to time.
To handle those cases, I built a watchdog:
It starts the script with the bulk job within a subprocess
process = subprocess.Popen(['python', job_script, src_path], stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE, shell=True)
It listens to system events using the winevt.EventLog module
EventLog.Subscribe('System', 'Event/System[Level<=2]', handle_event)
If an error occurs, it shuts everything down and restarts the script.
OK, if a system error event occurs, this event should be handled in such a way that the subprocess gets notified. This notification should then lead to the following action within the subprocess:
Within the subprocess there's an object controlling everything and continuously collecting generated data. In order not to have to start the whole job from the beginning after restarting the script, this object has to be dumped using pickle (which isn't the problem here!).
Listening to the system event from inside the subprocess didn't work. It results in a continuous loop when calling subprocess.Popen().
So, my question is how I can either subscribe to system events from inside a child process, or communicate between the parent and child process, i.e. send a message like "hey, an error occurred", listen for it within the subprocess, and then create the dump?
I'm really sorry I'm not allowed to post any code in this case. But I hope (and actually think) that my description is understandable. My question is just about which module to use to accomplish this in the best way.
I'd be really happy if somebody could point me in the right direction...
Br,
Mic
I believe the best answer may lie here: https://docs.python.org/3/library/subprocess.html#subprocess.Popen.stdin
These attributes should allow for proper communication between the different processes fairly easily, and without any other dependencies.
Note that Popen.communicate() may suit better if other processes may cause issues.
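For a one-shot exchange (rather than the ongoing dialogue in the scripts below), a minimal sketch with communicate() could look like this; the child script name and the message are placeholders:
from subprocess import Popen, PIPE
import sys

# communicate() writes the input, closes stdin and waits for the child to
# exit, so it fits a single request/response rather than a running dialogue.
p = Popen([sys.executable, 'child.py'], stdin=PIPE, stdout=PIPE)
out, err = p.communicate(input=b'dump\r\n')
print(out)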
EDIT to add example scripts:
main.py
from subprocess import *
import sys

def check_output(p):
    out = p.stdout.readline()
    return out

def send_data(p, data):
    p.stdin.write(bytes(f'{data}\r\n', 'utf8'))  # auto newline
    p.stdin.flush()

def initiate(p):
    #p.stdin.write(bytes('init\r\n', 'utf8'))  # function to send first communication
    #p.stdin.flush()
    send_data(p, 'init')
    return check_output(p)

def test(p, data):
    send_data(p, data)
    return check_output(p)

def main():
    exe_name = 'Doc2.py'
    p = Popen([sys.executable, exe_name], stdout=PIPE, stderr=STDOUT, stdin=PIPE)
    print(initiate(p))
    print(test(p, 'test'))
    print(test(p, 'test2'))  # testing responses
    print(test(p, 'test3'))

if __name__ == '__main__':
    main()
Doc2.py
import sys, time, random

def recv_data():
    return sys.stdin.readline()

def send_data(data):
    print(data)
    sys.stdout.flush()  # flush so the parent's readline() isn't stuck behind the pipe buffer

while 1:
    d = recv_data()
    #print(f'd: {d}')
    if d.strip() == 'test':
        send_data('return')
    elif d.strip() == 'init':
        send_data('Acknowledge')
    else:
        send_data('Failed')
This is the best method I could come up with for cross-process communication. Also make sure all requests and responses don't contain newlines, or the code will break.

Azure Function Python Exceptions not logging correctly to App Insights

I am seeing some unexpected behaviour when using logging.error(..) in a Python Azure Function: the error is getting logged as a trace, which seems a bit odd. It's the same with logging.exception(..) too.
Some simple sample code:
import logging
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    logging.error('[Error] I am an error.')
    return func.HttpResponse()
When this is executed, the following winds up in Application Insights.. why is this not available under exceptions?
Query for ref:
union *
| where operation_Id == 'ca8f0c7d7798a54996b486940641b159'
| project operation_Id, message, severityLevel, itemType, customDimensions.['LogLevel']
Any clues as to what I am doing wrong?? 👀
I am seeing some unexpected behaviour when using logging.error(..) in a Python Azure Function.. essentially, the error is getting logged as a trace, which seems a bit odd.. it's the same with logging.exception(..) too.
These calls are classified as traces in Application Insights; the only difference is the severity level (at least that is how Application Insights handles it). If you need it to be classified under exceptions in Application Insights, you have to raise the exception unhandled, as below (and if you wrap it in try/except or other processing, it lands back in traces):
raise Exception('Boom!')
But this will stop the function, so it doesn't make much sense. I suggest you query the traces table and filter on the severity level to extract errors and exceptions.
Currently, the only way to log messages to the exceptions table in Application Insights is to call the logger.exception() method inside a try/except block, like below:
try:
    result = 1/0
except Exception:
    logger.exception("it is an exception 88 which is in the try block...")
Then in Application Insights you can find the messages in the exceptions table:
There is an open issue about this; let us wait for feedback from MS.
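Applied to the original trigger, a minimal sketch of that pattern might look like the following. The failing statement and message are placeholders, and it assumes the module-level logging.exception() call behaves the same way as the named logger above:
import logging
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    try:
        result = 1 / 0  # placeholder for the work that can fail
    except Exception:
        # Called inside try/except, .exception() should end up in the exceptions table.
        logging.exception('[Error] I am an error.')
    return func.HttpResponse()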

What's the proper way to test a MongoDB connection with motor io?

I've got a simple FastAPI webapp going and I'd like to be able to check the database connection on startup (and retry the connection if it fails).
I've got the following code, but it doesn't feel right:
# main.py
import uvicorn
from backend.app import app

if __name__ == "__main__":
    uvicorn.run(app, port=8001)

# app.py
# ... omitted for brevity
from backend.database import notes, tags
# ... omitted for brevity

# database.py
from motor.motor_asyncio import AsyncIOMotorClient
from asyncio import get_event_loop

client = AsyncIOMotorClient("localhost", 27027)

loop = get_event_loop()
data = loop.run_until_complete(client.server_info())

db = client.notes_db
notes = db.notes
tags = db.tags
Without get_event_loop() and the subsequent loop.run_until_complete() call it won't test the database connection until you actually try to access / write to it.
My goal is to be able to halt the startup process until it can successfully connect to the database. Is there any clean way to do this with Python and Motor (https://motor.readthedocs.io/, sorry there's no tag for it)?
The startup event in FastAPI is the key here, I guess. In addition, this repository is a nice example and this thread could provide you with even more information. You could execute your connection check within the startup event; the application won't start serving until the startup event has been executed successfully.
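A minimal, untested sketch of that idea, reusing the module-level client from the question's database.py; the retry count and delay are arbitrary choices:
import asyncio
from fastapi import FastAPI
from motor.motor_asyncio import AsyncIOMotorClient

app = FastAPI()
client = AsyncIOMotorClient("localhost", 27027)  # port as in the question

@app.on_event("startup")
async def check_database_connection():
    # Block startup until the server answers; retry a few times before giving up.
    for attempt in range(5):
        try:
            await client.server_info()  # cheap round trip to the server
            return
        except Exception:
            await asyncio.sleep(2)
    raise RuntimeError("Could not connect to MongoDB during startup")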

Dynamically set Sanic log level in Python

I have exposed a route in my Sanic app to set the log-level based on the client call. E.g.
from sanic.log import logger, logging
from sanic.response import json

@route("/main")
async def sanic_main(request):
    logger.info("Info mesg")
    logger.debug("Debug mesg")
    return json("processed")

@route("/setlevel")
async def setlevel(request):
    level = request.json["level"]
    if level == "info":
        loglevel = logging.INFO
    elif level == "debug":
        loglevel = logging.DEBUG
    logger.setLevel(loglevel)
    return json("done")
On setting log levels between DEBUG and INFO, however, I am observing flaky behavior where the DEBUG messages (from "/main") get printed only sometimes, and vice versa.
NOTE: I am running multiple Sanic workers
How should I go about dynamically setting the log level?
I have never done anything like this, but the sanic.log.logger is just an instance of <class 'logging.Logger'>. So, using setLevel should be fine.
The question is how are you running your app, and how many workers are you using? If you are in a situation where you have multiple processes, then using /setlevel would only change the logger for that one process.
One way to do this is using aioredis for pub/sub between the workers, as captured in this blog post:
https://medium.com/@sandeepvaday/route-triggered-state-update-across-python-sanic-workers-a0f7ab0f6e4
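A rough, untested sketch of that pub/sub idea, assuming aioredis 2.x and a Redis instance on localhost; the channel name is made up. Each worker subscribes after it starts and applies whatever level the /setlevel route publishes:
import aioredis  # assumes aioredis >= 2.0
from sanic import Sanic
from sanic.log import logger, logging
from sanic.response import json

app = Sanic("levels")
CHANNEL = "log-level"  # hypothetical channel name

@app.route("/setlevel")
async def setlevel(request):
    redis = aioredis.from_url("redis://localhost")
    # Publish to all workers instead of changing only this process's logger.
    await redis.publish(CHANNEL, request.json["level"].upper())
    return json("done")

@app.listener("after_server_start")
async def subscribe_log_level(app, loop):
    redis = aioredis.from_url("redis://localhost")
    pubsub = redis.pubsub()
    await pubsub.subscribe(CHANNEL)

    async def reader():
        async for message in pubsub.listen():
            if message["type"] == "message":
                logger.setLevel(getattr(logging, message["data"].decode()))

    app.add_task(reader())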

Python logging, suppressing output

Here is a simple example that gets a new logger and attempts to log two messages:
import logging
log = logging.getLogger("MyLog")
log.setLevel(logging.INFO)
log.info("hello")
log.debug("world")
If I call logging.basicConfig(level=logging.INFO) right after importing, "hello" will print but not "world" (which seems strange since I set the level to debug).
How can the logging API be adjusted so all built-in levels are printed to the stdout?
If you call basicConfig with level X, none of the log messages that are not covered by X will be printed.
You called logging.basicConfig(level=logging.INFO). Here, logging.INFO doesn't cover logging.DEBUG.
Maybe you wanted it the other way round?
logging.basicConfig(level=logging.DEBUG)
This prints both info and debug output:
import logging
logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("MyLog")
log.setLevel(logging.DEBUG)
log.info("hello")
log.debug("world")
