Python logging does not log when used inside a Pytest fixture - python-3.x

I have a Pytest + Selenium project and I would like to use the logging module.
However, when I set up logging in conftest.py like this
@pytest.fixture(params=["chrome"], scope="class")
def init_driver(request):
    start = datetime.now()
    logging.basicConfig(filename='.\\test.log', level=logging.INFO)
    if request.param == "chrome":
        options = ChromeOptions()
        options.add_argument("--start-maximized")
        web_driver = webdriver.Chrome(ChromeDriverManager().install(), options=options)
    if request.param == "firefox":
        web_driver = webdriver.Firefox(GeckoDriverManager().install())
    request.cls.driver = web_driver
    yield
    end = datetime.now()
    logging.info(f"{end}: --- DURATION: {end - start}")
    web_driver.close()
it looks like test.log is not created at all, and there are no error messages or other indications that something went wrong.
How can I make this work?

Two facts first:
logging.basicConfig() only has an effect if no logging configuration was done before invoking it, i.e. if the target logger has no handlers registered yet.
pytest registers custom handlers on the root logger so it can capture log records emitted in your code, letting you test whether your program's logging behaviour is correct.
This means that calling logging.basicConfig(filename='.\\test.log', level=logging.INFO) in a fixture does nothing: by the time the fixture runs, the test session has already started and the root logger already has handlers attached by pytest. You thus have two options:
Disable the built-in logging plugin completely. This stops all log record capturing; if you have tests that analyze emitted logs (e.g. via the caplog fixture), those will stop working. Invocation:
$ pytest -p no:logging ...
You can persist the flag in pyproject.toml so it is applied automatically:
[tool.pytest.ini_options]
addopts = "-p no:logging"
Or in pytest.ini:
[pytest]
addopts = -p no:logging
Configure and use live logging. The equivalent of your logging.basicConfig() call, in pyproject.toml:
[tool.pytest.ini_options]
log_file = "test.log"
log_file_level = "INFO"
In pytest.ini:
[pytest]
log_file = test.log
log_file_level = INFO
Of course, the logging.basicConfig() line can be removed from the init_driver fixture in this case.

Related

Uvicorn + logging basic config example

I have a logger that is used by my app:
import logging
logging.basicConfig(format='%(asctime)s [%(name)s]: %(levelname)s : %(message)s', level=logging.INFO)
In addition, in the same __main__.py I'm setting up my uvicorn logs:
if __name__ == "__main__":
    LOGGING_CONFIG["formatters"]["default"]["fmt"] = "%(asctime)s [%(name)s] %(levelprefix)s %(message)s"
    LOGGING_CONFIG["formatters"]["access"]["fmt"] = '%(asctime)s [%(name)s] %(levelprefix)s %(client_addr)s - "%(request_line)s" %(status_code)s'
    uvicorn.run(app, host="0.0.0.0", port=3001)
The problem is that uvicorn.error log records are printed twice.
I understand why it happens, but I don't understand the right way to change it:
Can I define a basicConfig for all the logs except uvicorn.error? (If I don't use basicConfig, none of my app logs are printed.)
Should I set propagate = False on uvicorn.error? If so, I didn't find a way to do it.
Is there a way to make my app use the uvicorn logger?
So, the basic question is: what is the right way to avoid the duplicate logs?
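One common remedy, sketched here under the assumption that uvicorn's standard logger names (uvicorn, uvicorn.error, uvicorn.access) are in play: keep basicConfig for your app's own loggers, and switch off propagation for the uvicorn loggers so that records already printed by uvicorn's handlers don't also reach the root handler that basicConfig installed.

```python
import logging

# App-wide config, as in the question.
logging.basicConfig(format='%(asctime)s [%(name)s]: %(levelname)s : %(message)s',
                    level=logging.INFO)

# Assumption: uvicorn's own handlers already print these records, so stop
# them from bubbling up to the root handler as well.
for name in ("uvicorn", "uvicorn.error", "uvicorn.access"):
    logging.getLogger(name).propagate = False
```

Other loggers are unaffected, so your application's records still flow to the root handler configured by basicConfig.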

How to prevent Python Tornado from logging to stdout/console?

I need to disable Tornado's logging to STDOUT. I am using Python 3.8 and running on Ubuntu 18.04. I want my log statements to be handled by a rotating file logger only. The issue is that logged statements are written to the file and also to the console:
import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger("ex_logger")
nh = logging.NullHandler()
rfh = RotatingFileHandler(filename="./logs/process.log", mode='a', maxBytes=50000000,
                          backupCount=25, encoding=None, delay=False)
rfh.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
rfh.setFormatter(formatter)
logger.handlers = []
logger.propagete = False  # note: typo in the original -- should be "propagate"
logger.addHandler(rfh)

logging.getLogger("tornado.access").handlers = []
logging.getLogger("tornado.application").handlers = []
logging.getLogger("tornado.general").handlers = []
logging.getLogger("tornado.access").addHandler(nh)
logging.getLogger("tornado.application").addHandler(nh)
logging.getLogger("tornado.general").addHandler(nh)
logging.getLogger("tornado.access").propagate = False
logging.getLogger("tornado.application").propagate = False
logging.getLogger("tornado.general").propagate = False

....

def main():
    ######
    # this message appears in both the output log file and stdout
    ######
    logger.info(" application init ... ")
    asyncio.set_event_loop_policy(tornado.platform.asyncio.AnyThreadEventLoopPolicy())
    tornado.options.parse_command_line()
    app = Application()
    app.listen(options.port)
    tornado.ioloop.IOLoop.current().start()

if __name__ == "__main__":
    main()
The problem is that from the moment you start your IDE, logging.getLogger("tornado") may have a StreamHandler attached. This doesn't happen with every IDE, but it does happen with Spyder. So that is the logger whose handlers you have to replace with a NullHandler:
import logging

nh = logging.NullHandler()
tornado_logger = logging.getLogger("tornado")
tornado_logger.handlers.clear()
# tornado_logger.handlers = [] would work instead of .clear(), but it is best
# practice to mutate the existing list rather than replace it, in case some
# other variable somewhere else still refers to it.
tornado_logger.addHandler(nh)
You don't need to do anything with the children of the "tornado" logger, e.g. "tornado.access", et cetera.
You also need to define a logging policy for the root logger (logging.getLogger("")). Tornado looks at the root logger's handlers to decide whether logging has already been configured or needs a default setup.
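A sketch of such a policy, adapted from the question's handler setup (the temp-file path is illustrative): configure the root logger before tornado.options.parse_command_line() runs, so Tornado sees logging as already set up and never installs its console handler.

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

# Give the ROOT logger the rotating file handler up front; per the answer
# above, Tornado only adds its default stdout handler when the root logger
# is still unconfigured.
log_path = os.path.join(tempfile.gettempdir(), "process.log")
root = logging.getLogger()
root.setLevel(logging.DEBUG)
rfh = RotatingFileHandler(log_path, maxBytes=50_000_000, backupCount=25)
rfh.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
root.addHandler(rfh)

# Named loggers now propagate into the file; nothing goes to stdout.
logging.getLogger("ex_logger").info(" application init ... ")
rfh.flush()
```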

Why don't my lower level logs write to the file, but the error and above do?

I created a small function to set up logging, with a FileHandler for 'everything' and an SMTPHandler for ERROR and above. ERROR logs write to the log file and send correctly to email, but DEBUG, INFO, and NOTSET records don't reach the file, even though the file handler's level is set to 0. Why is that? Code below:
#logsetup.py
import logging
import logging.handlers

def _setup_logger(name, log_file):
    """Function to setup logger"""
    logger = logging.getLogger(name)
    # Create Formatters
    file_formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    mail_formatter = logging.Formatter('%(name)s - %(message)s')
    # Create handler, set formatting, set log level
    file_handler_obj = logging.FileHandler(log_file)
    file_handler_obj.setFormatter(file_formatter)
    file_handler_obj.setLevel(0)
    # Create handler, set formatting, set log level
    smtp_handler_obj = logging.handlers.SMTPHandler(
        mailhost=('smtp.gmail.com', 587),
        fromaddr='mymail@example.com',
        toaddrs='mymail@example.com',
        subject='Error in Script',
        credentials=('mymail@example.com', 'pwexample'),  # username, password
        secure=())
    smtp_handler_obj.setFormatter(mail_formatter)
    smtp_handler_obj.setLevel(logging.ERROR)
    # add the handlers to the logger
    logger.addHandler(smtp_handler_obj)
    logger.addHandler(file_handler_obj)
    return logger
#mytest.py
import time
import logsetup
import config_funcs  # assumed; the question uses it without showing the import

if __name__ == '__main__':
    TEST_SETTINGS = config_funcs._get_config('TEST_SETTINGS')
    logtime = time.strftime('%Y%m%d')  # -%H%M%S
    log = logsetup._setup_logger('TEST', TEST_SETTINGS['logging_dir'] + 'Py_Log_%s.log' % logtime)
    log.error('Writes to log file and sends email')
    log.debug('Supposed to write to log file, does nothing.')
Apparently, the logger needs its own level set, separate from the levels on its handlers. Setting logger.setLevel(logging.DEBUG) right before returning the logger causes it to work correctly. The documentation says:
When a logger is created, the level is set to NOTSET (which causes all messages to be processed when the logger is the root logger, or delegation to the parent when the logger is a non-root logger). Note that the root logger is created with level WARNING.
This means the named logger, having level NOTSET, delegates the level check to the root logger, whose default level is WARNING; records below WARNING (such as DEBUG) are therefore discarded before the handlers ever see them, no matter how low the handlers' levels are set. Setting the logger's own level 'fixes' it, in case anyone comes to this later.
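The mechanism can be shown in a few lines (the logger name here is made up for the demo):

```python
import logging

logger = logging.getLogger('leveldemo')        # fresh named logger
assert logger.level == logging.NOTSET          # no level of its own...
assert logger.getEffectiveLevel() == logging.WARNING  # ...so root's WARNING applies

# DEBUG records are rejected before any handler is consulted:
assert not logger.isEnabledFor(logging.DEBUG)

logger.setLevel(logging.DEBUG)                 # the one-line fix
assert logger.isEnabledFor(logging.DEBUG)
```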

logging disabled when used with pytest

I am having a problem when using pytest and logging together. When I run a program on its own, I can see its messages printed on screen as well as in the file test.log.
python3 main.py -> prints on terminal, and also in test.log
However, when I am running the same program with pytest, I am seeing the messages only on screen, but the file test.log is not being created.
pytest -vs test -> prints only on terminal, but not in test.log
Why is pytest interfering with the logging utility, and what should I do to create these log files when using pytest?
My versions are the following:
platform linux -- Python 3.6.7, pytest-4.0.2, py-1.7.0, pluggy-0.8.0 -- /usr/bin/python3
The directory structure is the following:
├── logger.py
├── main.py
└── test
    ├── __init__.py
    └── test_module01.py
The code for these files is given below:
# logger.py ===================================
import logging

def logconfig(logfile, loglevel):
    print('logconfig: logfile={} loglevel={}..'.format(logfile, loglevel))
    logging.basicConfig(filename=logfile, level=logging.INFO, format='%(asctime)s :: %(message)s')

def logmsg(log_level, msg):
    print(log_level, ': ', msg)
    logging.info('INFO: ' + msg)

# main.py =====================================
from datetime import datetime
from logger import *

def main(BASE_DIR):
    LOG_FILE = BASE_DIR + 'test.log'
    logconfig(LOG_FILE, 'INFO')
    logmsg('INFO', "Starting PROGRAM#[{}] at {}=".format(BASE_DIR, datetime.now()))
    logmsg('INFO', "Ending PROGRAM at {}=".format(datetime.now()))

if __name__ == "__main__":
    main('./')

# __init__.py =================================
__all__ = ["test_module01"]

# test_module01.py ============================
import pytest
import main

class TestClass01:
    def test_case01(self):
        print("In test_case01()")
        main.main('./test/')
By default, pytest captures all log records emitted by your program. This means that the logging handlers configured in your code are replaced with pytest's internal capturing handler; if you pass -s, the captured records are printed to the terminal, otherwise nothing further is output (and in particular, no log file is written). To access the captured records in your tests, use the caplog fixture. Example: imagine you need to test the following program:
import logging
import time

def spam():
    logging.basicConfig(level=logging.CRITICAL)
    logging.debug('spam starts')
    time.sleep(1)
    logging.critical('Oopsie')
    logging.debug('spam ends')

if __name__ == '__main__':
    spam()
If you run the program, you'll see the output
CRITICAL:root:Oopsie
but there's no obvious way to access the debug messages. No problem when using caplog:
def test_spam(caplog):
    with caplog.at_level(logging.DEBUG):
        spam()
    assert len(caplog.records) == 3
    assert caplog.records[0].message == 'spam starts'
    assert caplog.records[-1].message == 'spam ends'
If you don't need log capturing (for example, when writing system tests with pytest), you can turn it off by disabling the logging plugin:
$ pytest -p no:logging
Or persist it in pyproject.toml so you don't have to type it each time:
[tool.pytest.ini_options]
addopts = "-p no:logging"
The same configuration in a (legacy) pytest.ini:
[pytest]
addopts = -p no:logging
Of course, once the log capturing is explicitly disabled, you can't rely on caplog anymore.

Python logging printing double to console

It seems that on a couple machines I'm getting double output like this:
INFO LED NOTIFICATION STARTED
INFO:output_logger:LED NOTIFICATION STARTED
This is the function I'm using:
def setup_logger(name, log_file, level=logging.INFO,
                 ContentFormat='%(asctime)s %(levelname)s %(message)s',
                 DateTimeFormat="%Y-%m-%d %H:%M:%S",
                 CreateConsoleLogger=False):
    """Set up as many loggers as you want"""
    logger = logging.getLogger(name)
    logger.setLevel(level)
    if CreateConsoleLogger:
        # create console handler
        handler = logging.StreamHandler()
        handler.setLevel(level)
        formatter = logging.Formatter("%(levelname)s %(message)s")
        handler.setFormatter(formatter)
        logger.addHandler(handler)
    # create a file handler
    handler = RotatingFileHandler(log_file, maxBytes=2000000, backupCount=5)
    handler.setLevel(level)
    formatter = logging.Formatter(ContentFormat, DateTimeFormat)
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    return logger
This is how I'm creating the logger:
output_logger = setup_logger('output_logger', 'log/autofy.log', level=logging.DEBUG, CreateConsoleLogger=True)
And this is how I call it:
output_logger.info("LED NOTIFICATION STARTED")
On most computers I just see the same message printed to the console that's saved to the file, as expected ("INFO LED NOTIFICATION STARTED"), but on other computers it's doing this weird double-output thing. My code is exactly the same from one computer to another, so any idea what could be causing this on some computers and not others?
EDIT
I'm writing the script using notepad++ and running it in a terminal window on an Ubuntu 16.04 machine. I'm using python3.
Try adding this to your code (note: the standard library has no logging._get_logger() function; set the flag on the logger you created):
output_logger.propagate = False
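The likely mechanism can be reproduced with a small sketch (the Recorder handler is a stand-in for real console handlers): on the affected machines the root logger has picked up a handler of its own (the "INFO:output_logger:" line is the stock root-handler format), so each record is handled once by the named logger and once more after propagating to the root. Setting propagate = False stops the second pass.

```python
import logging

messages = []

class Recorder(logging.Handler):
    # Collects messages so the duplication is easy to count.
    def emit(self, record):
        messages.append(record.getMessage())

root = logging.getLogger()
root.addHandler(Recorder())       # stands in for the stray root handler

out = logging.getLogger("output_logger")
out.setLevel(logging.INFO)
out.addHandler(Recorder())        # the logger's own console handler

out.info("LED NOTIFICATION STARTED")
print(len(messages))              # 2 -- handled by both loggers

messages.clear()
out.propagate = False             # the fix
out.info("LED NOTIFICATION STARTED")
print(len(messages))              # 1
```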
