Logging module to send all unsent messages to another file (Python) - python-3.x

I am trying to log messages to a log file, then shut down logging and send all remaining messages to a new log file.
However, I am observing that the messages keep going to the previously created log file only. How can I fix this?
Below is my code:
import logging

logger_name = 'create_request'
request_Create_log = Code_vars.Requests_path + 'Request_creation.log'
formatter = "%(asctime)s %(levelname)s %(message)s"
logging.basicConfig(filename=request_Create_log, filemode='a', level=logging.DEBUG, format=formatter)
req_logger = logging.getLogger(logger_name)
logging.shutdown()

# new log file creation
logger_name = 'request_' + str(req_id)
formatter = "%(asctime)s %(levelname)s %(message)s"
logging.basicConfig(filename=log_file, filemode='a', level=logging.DEBUG, format=formatter)
logger = logging.getLogger(logger_name)

From logging.shutdown() documentation:
Informs the logging system to perform an orderly shutdown by flushing and closing all handlers. This should be called at application exit and no further use of the logging system should be made after this call.
So first, you're using it wrong if you just want to switch your handlers/config: it is not intended for that. Second, notice that it says it closes handlers; it does not remove them, so they are still attached to their loggers. What you might want instead is to clean up the old handlers:
logger = logging.getLogger()       # get the root logger
handlers = logger.handlers[:]      # make a copy of the current handlers
for handler in handlers:           # go through all attached handlers and...
    logger.removeHandler(handler)  # ...remove them
And then rebuild your loggers to do what you want.
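For example, here is a minimal sketch of switching the root logger from one log file to another (the file names are illustrative). The key detail is that basicConfig() silently does nothing while the root logger still has handlers, which is why the second basicConfig in the question had no effect:

import logging

logging.basicConfig(filename='first.log', filemode='a', level=logging.DEBUG,
                    format="%(asctime)s %(levelname)s %(message)s")
logging.info('goes to first.log')

# detach (and close) the old handlers; basicConfig() is a no-op
# as long as the root logger still has handlers attached
root = logging.getLogger()
for handler in root.handlers[:]:
    root.removeHandler(handler)
    handler.close()

logging.basicConfig(filename='second.log', filemode='a', level=logging.DEBUG,
                    format="%(asctime)s %(levelname)s %(message)s")
logging.info('goes to second.log')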

Related

Dynamically set Sanic log level in Python

I have exposed a route in my Sanic app to set the log level based on the client call. E.g.
from sanic import Sanic
from sanic.log import logger, logging
from sanic.response import json

app = Sanic(__name__)

@app.route("/main")
async def sanic_main(request):
    logger.info("Info mesg")
    logger.debug("Debug mesg")
    return json("processed")

@app.route("/setlevel")
async def setlevel(request):
    level = request.json["level"]
    if level == "info":
        loglevel = logging.INFO
    elif level == "debug":
        loglevel = logging.DEBUG
    logger.setLevel(loglevel)
    return json("done")
On switching log levels between DEBUG and INFO, however, I am observing flaky behavior where the DEBUG messages (from "/main") get printed only sometimes, and vice versa.
NOTE: I am running multiple Sanic workers
How should I go about dynamically setting the log level?
I have never done anything like this, but sanic.log.logger is just an instance of <class 'logging.Logger'>, so using setLevel should be fine.
The question is how you are running your app and how many workers you are using. If you have multiple processes, then calling /setlevel would only change the logger in the one worker process that happened to handle that request.
One way to handle this is with aioredis, as described in this blog post:
https://medium.com/@sandeepvaday/route-triggered-state-update-across-python-sanic-workers-a0f7ab0f6e4
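For illustration, here is a minimal stdlib sketch (no Sanic involved, logger name arbitrary) of why a per-process setLevel diverges across workers:

import logging
import multiprocessing

def worker():
    # a spawned process re-imports the module and builds its own
    # logging state, so the parent's setLevel() is not visible here
    level = logging.getLogger("app").level
    print("child sees level:", logging.getLevelName(level))  # prints NOTSET

if __name__ == "__main__":
    multiprocessing.set_start_method("spawn")  # the default on Windows and newer macOS
    logging.getLogger("app").setLevel(logging.DEBUG)  # only affects this process
    p = multiprocessing.Process(target=worker)
    p.start()
    p.join()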

Capture logging messages in Python 3

I would like to capture the logs printed by a Python program and save them to either a variable or a file. Is there any way to achieve this without adding handlers or modifying the logger config? (This is because the logger will be used by many other modules, and we want to keep it generic.)
Code snippet:
import logging
from io import StringIO
from contextlib import redirect_stdout

logging.basicConfig(level=logging.DEBUG, format="%(asctime)s %(levelname)s %(message)s")
logger = logging.getLogger(__name__)

with StringIO() as buf, redirect_stdout(buf), open("test.txt", "w+") as f:
    logger.debug("Debug message")
    logger.info("Info message")
    logger.error("Error message")
    buf.flush()
    f.write(buf.getvalue())
Console output:
xxxx-xx-xx xx:xx:xx,xxx DEBUG Debug message
xxxx-xx-xx xx:xx:xx,xxx INFO Info message
xxxx-xx-xx xx:xx:xx,xxx ERROR Error message
What confuses me is this: since the logger prints all the logs to stdout by default, redirecting stdout using the context manager should do the trick. However, the logs are still printed to the console, and nothing is written to the file. Any idea?
The logging library already has a utility for that.
Suppose you want to start logging events to a file, e.g. example.log; then:
import logging
logging.basicConfig(filename='example.log', level=logging.DEBUG)
logging.info('Some info')
logging.warning('Some warning')
logging.debug('Debug messages')
For more, check the Python logging documentation.
Edit: the OP asked if there is an alternative way to do this without using basicConfig. Here is another approach that uses a file handler: declare a FileHandler separately and then attach it to a logger.
import logging

logging.basicConfig(level=logging.DEBUG, format="%(asctime)s %(levelname)s %(message)s")
logger = logging.getLogger(__name__)  # basicConfig() returns None, so fetch a logger separately

# create a file handler and attach it to the logger
fh = logging.FileHandler('my.log')
fh.setLevel(logging.DEBUG)
logger.addHandler(fh)
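As to why the redirect_stdout approach in the question has no effect: logging's default StreamHandler writes to sys.stderr, not stdout, and it binds the stream object at the moment the handler is created. A minimal sketch of capturing into a StringIO by running basicConfig inside a redirect_stderr block (this only works in a fresh interpreter where logging is not yet configured):

import logging
from io import StringIO
from contextlib import redirect_stderr

buf = StringIO()
with redirect_stderr(buf):
    # basicConfig must run *inside* the redirect so its default
    # StreamHandler binds to the replaced sys.stderr (our buffer)
    logging.basicConfig(level=logging.DEBUG,
                        format="%(asctime)s %(levelname)s %(message)s")
    logging.getLogger(__name__).error("Error message")

print(buf.getvalue())  # the captured log line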

Python Logging leaving file empty

I know there are a lot of questions related to this but I haven't found the one that applies to my case.
I'm running a script (Python 3.7.0) on Windows that should log some events, but it only creates the empty file log_minera.log.
The logging level seems to be OK, as do the write mode and the handler attached to the logger... I suspect that closing the window just kills unflushed streams so they never get written, but adding a line to flush doesn't work either, whether I hit ENTER or close the window. Help, please!
import logging

logger = logging.getLogger(__name__)
handler = logging.FileHandler('log_minera.log', mode='w')
formatter = logging.Formatter('* %(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
handler.setLevel(logging.INFO)
logger.addHandler(handler)

while True:
    logger.info('info to be logged')
    # code...
    logger.error('other info related to errors')
    # more code
    # logger.handlers[0].flush()  <- does nothing
    answer = input('Press ENTER to repeat or close the window to exit.')
You do not set the level of your custom logger, so it inherits the effective level of its parent - in your case, probably the root logger.
I tried your code, and in my setup the logger has an effective level of WARNING. That means "info to be logged" does not get logged to the file.
In your setup, the root logger's level may be set to something greater than ERROR, maybe CRITICAL, and your logger inherits that level. In that case, nothing below CRITICAL would be written.
Either set the root logger's level to INFO or set the logger's level to INFO explicitly, like this:
import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO) # <- HERE
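Putting it together, here is a minimal corrected sketch of the script above, with the logger level set explicitly and a clean shutdown so the file is flushed before exit:

import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)  # without this, the effective level is inherited (often WARNING)

handler = logging.FileHandler('log_minera.log', mode='w')
handler.setFormatter(logging.Formatter('* %(asctime)s - %(name)s - %(levelname)s - %(message)s'))
handler.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info('info to be logged')  # now passes the logger-level check
logging.shutdown()                # flush and close all handlers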

Info logger is not printing

Here is my code
import logging
logger = logging.getLogger('test')
logger.setLevel(level=logging.INFO)
logger.info('Hello World')
I expect it to print 'Hello World', but it does not.
Could someone help me understand why the message is not printed?
You haven't specified a handler for your logger, so the message propagates to the root logger, which has no handler either; it is then picked up by logging's last-resort handler, which only emits messages of level WARNING and above.
The root logger can be configured as follows:
logging.basicConfig(level=logging.INFO)
Alternatively, you can add a handler that forwards the messages to stderr:
logger.addHandler(logging.StreamHandler())
This behavior is documented here.
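Combining both fixes, a minimal sketch that does print the message:

import logging

logger = logging.getLogger('test')
logger.setLevel(logging.INFO)               # let INFO records through
logger.addHandler(logging.StreamHandler())  # and send them to stderr

logger.info('Hello World')                  # now printed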

Python logging, suppressing output

Here is a simple example that gets a new logger and attempts to log two messages:
import logging
log = logging.getLogger("MyLog")
log.setLevel(logging.INFO)
log.info("hello")
log.debug("world")
If I call logging.basicConfig(level=logging.INFO) right after importing, "hello" will print, but not "world" (which seems strange, since I set the level to debug).
How can the logging API be adjusted so that all built-in levels are printed to stdout?
If you call basicConfig with level X, log messages below level X will not be printed.
You called logging.basicConfig(level=logging.INFO); logging.INFO does not cover logging.DEBUG.
Maybe you wanted it the other way around:
logging.basicConfig(level=logging.DEBUG)
This prints both the info and the debug output:
import logging
logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("MyLog")
log.setLevel(logging.DEBUG)
log.info("hello")
log.debug("world")
