Python logging, suppressing output - python-3.x

Here is a simple example that gets a new logger and attempts to log an info and a debug message:
import logging
log = logging.getLogger("MyLog")
log.setLevel(logging.INFO)
log.info("hello")
log.debug("world")
If I call logging.basicConfig(level=logging.INFO) right after importing, "hello" will print, but "world" will not (which seems strange, since I call log.debug explicitly).
How can the logging API be adjusted so that all built-in levels are printed to stdout?

If you call basicConfig with level X, none of the log messages below level X will be printed.
You called logging.basicConfig(level=logging.INFO), and logging.INFO doesn't cover logging.DEBUG.
Maybe you wanted it the other way round:
logging.basicConfig(level=logging.DEBUG)
This prints both info and debug output:
import logging
logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("MyLog")
log.setLevel(logging.DEBUG)
log.info("hello")
log.debug("world")
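The underlying rule: a record first has to pass the logger's own level, then each handler's level; basicConfig sets the root logger's level and installs a stream handler. A quick way to inspect what is actually in effect (logger name taken from the question):
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("MyLog")
log.setLevel(logging.DEBUG)

# Both gates are now open for DEBUG records.
print(log.getEffectiveLevel())          # 10, i.e. logging.DEBUG
print(logging.getLogger().level)        # root logger level, also 10
print(log.isEnabledFor(logging.DEBUG))  # True, so log.debug() will emit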

Related

How to view logging.info from Google App Engine?

I have the following function:
@app.route('/db')
def db():
    _, cursor = get_db()
    print('11111111111')
    logging.info('2222222222')
    return jsonify(x="new")
For whatever reason, the logging items don't show up in Logs Explorer; I just see the print statements, which show up with the default severity. Where are my logging logs going, and how can I view them?
Did you set the level? Remember the default is WARNING, so you have to do something like logging.basicConfig(level=logging.INFO) before you start using logging.info(), as in the sketch below.
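A minimal standalone illustration of that fix (nothing here is GAE-specific; the message string is the one from the question):
import logging

# The root logger defaults to WARNING, so logging.info() is silently
# dropped unless the level is lowered before the first logging call.
logging.basicConfig(level=logging.INFO)

logging.info('2222222222')  # now emitted, with INFO severity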

apscheduler: how to prevent console printing of job misfire warning message?

How can I prevent apscheduler from printing a job misfire (error) warning to the console?
As you can see in the console output, the job misfire event is captured and handled properly.
But the red message from apscheduler scares normal users: they think the program has crashed, while nothing is wrong at all.
Why print this to the console if an event listener is defined? After defining a scheduler (EVENT_JOB_MISSED) event listener, the programmer is responsible for the console output.
Apscheduler is a great module, but this issue is a minor annoyance.
def SetScheduler():
    global shedul
    from apscheduler.schedulers.background import BackgroundScheduler
    from apscheduler.events import EVENT_JOB_ERROR, EVENT_JOB_MISSED

    shedul = BackgroundScheduler()
    shedul.add_listener(shed_listener, EVENT_JOB_MISSED | EVENT_JOB_ERROR)
Console output: (screenshot showing the red apscheduler misfire warning)
You have to adjust your logging configuration to filter out these messages.
If you don't need to display any apscheduler logging, you could do:
import logging
logging.getLogger("apscheduler").propagate = False
If you want to display other apscheduler messages but not these specific ones, you need to attach a filter to that logger; I have no experience with that myself, but the documentation covers it, and a sketch follows below.
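A minimal sketch of such a filter. The logger name apscheduler.scheduler and the "was missed by" message fragment are assumptions based on apscheduler 3.x; check which logger and wording your version uses before relying on this:
import logging

class MisfireFilter(logging.Filter):
    # Drop the "Run time of job ... was missed by ..." warnings
    # (assumed wording); keep every other record untouched.
    def filter(self, record):
        return "was missed by" not in record.getMessage()

# Filters only apply to records logged directly on the logger they are
# attached to, so target the specific apscheduler logger (assumed name).
logging.getLogger("apscheduler.scheduler").addFilter(MisfireFilter())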

Capture logging message in python 3

I would like to capture the logs printed by a Python program and save them to either a variable or a file. Is there any way to achieve this without adding handlers or modifying the logger config? (This is because the logger class will be used by many other modules, and we want it to be generic.)
Code snippet:
import logging
from io import StringIO
from contextlib import redirect_stdout

logging.basicConfig(level=logging.DEBUG, format="%(asctime)s %(levelname)s %(message)s")
logger = logging.getLogger(__name__)

with StringIO() as buf, redirect_stdout(buf), open("test.txt", "w+") as f:
    logger.debug("Debug message")
    logger.info("Info message")
    logger.error("Error message")
    buf.flush()
    f.write(buf.getvalue())
Console output:
xxxx-xx-xx xx:xx:xx,xxx DEBUG Debug message
xxxx-xx-xx xx:xx:xx,xxx INFO Info message
xxxx-xx-xx xx:xx:xx,xxx ERROR Error message
What confuses me is: since the logger prints all the logs to stdout by default, redirecting stdout with a context manager should do the trick. However, the logs are still printed to the console, and nothing is written to the file. Any idea?
The reason the redirect does nothing: the logging module's default StreamHandler writes to sys.stderr, not sys.stdout, so redirect_stdout never sees the messages.
That said, the logging library already has a utility for writing to a file.
Suppose you want to log events to the file example.log:
import logging
logging.basicConfig(filename='example.log',level=logging.DEBUG)
logging.info('Some info')
logging.warning('Some warning')
logging.debug('Debug messages')
For more, check the Python documentation.
Edit: the OP asked if there is an alternative way to do this without using the basicConfig method. Here is another approach that uses a file handler: declare the file handler separately, then attach it to a logger.
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)  # note: basicConfig returns None, so never assign its result to logger

# create a file handler and attach it to the logger
fh = logging.FileHandler('my.log')
fh.setLevel(logging.DEBUG)
fh.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(fh)
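If the goal is a variable rather than a file, the same handler approach works with an in-memory buffer; a minimal sketch (the logger name capture-demo is just illustrative):
import logging
from io import StringIO

buf = StringIO()
logger = logging.getLogger("capture-demo")
logger.setLevel(logging.DEBUG)

# StreamHandler accepts any file-like object, including an in-memory buffer.
sh = logging.StreamHandler(buf)
sh.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(sh)

logger.info("Info message")
captured = buf.getvalue()  # the formatted log output, as a string
print(captured)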

Python Logging leaving file empty

I know there are a lot of questions related to this but I haven't found the one that applies to my case.
I'm running a script (Python 3.7.0) on Windows that should log some events, but it only creates the empty file log_minera.log.
The logging level seems to be OK, same as the write mode, and the handler is connected to the logger. I suspect closing the window just kills unflushed streams so they never get written, but adding a line to flush doesn't work either, whether I hit ENTER or close the window. Help please!
import logging

logger = logging.getLogger(__name__)
handler = logging.FileHandler('log_minera.log', mode='w')
formatter = logging.Formatter('* %(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
handler.setLevel(logging.INFO)
logger.addHandler(handler)

while True:
    logger.info('info to be logged')
    # code...
    logger.error('other info related to errors')
    # more code
    # logger.handlers[0].flush()  # <- does nothing
    answer = input('Press ENTER to repeat or close the window to exit.')
You never set the level of your custom logger, so it inherits its effective level from the parent logger, in your case probably the root logger.
I tried your code, and in my setup the logger's effective level is WARNING. That means "info to be logged" never gets written to the file.
On your setup, the root logger level may even be set to something above ERROR, such as CRITICAL, in which case your logger inherits that level and nothing below logger.critical gets written.
Either set the root logger level to INFO, or set the logger level to INFO explicitly, like this:
import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO) # <- HERE
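Putting that fix into the question's script (file name, format, and messages all taken from the question):
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)  # the missing line; otherwise the inherited level applies

handler = logging.FileHandler('log_minera.log', mode='w')
handler.setFormatter(logging.Formatter('* %(asctime)s - %(name)s - %(levelname)s - %(message)s'))
handler.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info('info to be logged')              # both records now pass the INFO threshold
logger.error('other info related to errors')  # and reach the file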

Info logger is not printing

Here is my code
import logging
logger = logging.getLogger('test')
logger.setLevel(level=logging.INFO)
logger.info('Hello World')
I expect it to print 'Hello World', but it does not.
Could someone help me understand why it does not print the message?
You haven't specified a handler for your logger, so the message propagates to the root logger, which has no handler either; Python then falls back to the "handler of last resort", which only emits messages of WARNING level and above.
The root logger can be configured as follows:
logging.basicConfig(level=logging.INFO)
Alternatively you can add a handler that forwards the messages to stderr:
logger.addHandler(logging.StreamHandler())
This behavior is documented in the Python logging documentation.
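For example, with an explicit handler (StreamHandler defaults to sys.stderr; any stream would do):
import logging

logger = logging.getLogger('test')
logger.setLevel(logging.INFO)

# Without a handler, only the last-resort handler (WARNING and above) runs.
logger.addHandler(logging.StreamHandler())

logger.info('Hello World')  # now printed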
