logging disabled when used with pytest - python-3.x

I am having a problem when using pytest and logging together. When I run a program on its own, I can see its messages printed on screen as well as in the file test.log.
python3 main.py -> prints on terminal, and also in test.log
However, when I am running the same program with pytest, I am seeing the messages only on screen, but the file test.log is not being created.
pytest -vs test -> prints only on terminal, but not in test.log
Why is pytest interfering with the logging utility, and what should I do to create these log files when using pytest?
My versions are the following:
platform linux -- Python 3.6.7, pytest-4.0.2, py-1.7.0, pluggy-0.8.0 -- /usr/bin/python3
The directory structure is the following:
├── logger.py
├── main.py
└── test
    ├── __init__.py
    └── test_module01.py
The code for these files is given below:
# logger.py ===================================
import logging

def logconfig(logfile, loglevel):
    print('logconfig: logfile={} loglevel={}..'.format(logfile, loglevel))
    logging.basicConfig(filename=logfile, level=logging.INFO, format='%(asctime)s :: %(message)s')

def logmsg(log_level, msg):
    print(log_level, ': ', msg)
    logging.info('INFO: ' + msg)
# main.py =====================================
from datetime import datetime
from logger import *

def main(BASE_DIR):
    LOG_FILE = BASE_DIR + 'test.log'
    logconfig(LOG_FILE, 'INFO')
    logmsg('INFO', "Starting PROGRAM#[{}] at {}=".format(BASE_DIR, datetime.now()))
    logmsg('INFO', "Ending PROGRAM at {}=".format(datetime.now()))

if __name__ == "__main__":
    main('./')
# __init__.py =================================
__all__ = ["test_module01"]
# test_module01.py ============================
import pytest
import main

class TestClass01:
    def test_case01(self):
        print("In test_case01()")
        main.main('./test/')

By default, pytest captures all log records emitted by your program: any logging handlers defined in your code are replaced with the custom handler pytest uses internally. If you pass -s, pytest prints the captured records to the terminal; otherwise nothing further is printed. To access the captured records in your tests, use the caplog fixture. Example: imagine you need to test the following program:
import logging
import time

def spam():
    logging.basicConfig(level=logging.CRITICAL)
    logging.debug('spam starts')
    time.sleep(1)
    logging.critical('Oopsie')
    logging.debug('spam ends')

if __name__ == '__main__':
    spam()
If you run the program, you'll see the output
CRITICAL:root:Oopsie
but there's no obvious way to access the debug messages. No problem when using caplog:
def test_spam(caplog):
    with caplog.at_level(logging.DEBUG):
        spam()
    assert len(caplog.records) == 3
    assert caplog.records[0].message == 'spam starts'
    assert caplog.records[-1].message == 'spam ends'
If you don't need log capturing (for example, when writing system tests with pytest), you can turn it off by disabling the logging plugin:
$ pytest -p no:logging
Or persist it in pyproject.toml so you don't have to type it each time:
[tool.pytest.ini_options]
addopts = "-p no:logging"
The same configuration in a (legacy) pytest.ini:
[pytest]
addopts = -p no:logging
Of course, once the log capturing is explicitly disabled, you can't rely on caplog anymore.
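If you want the original program to keep writing test.log even while pytest's capturing is active, a further workaround is to attach a FileHandler to the root logger explicitly instead of relying on logging.basicConfig(), which is skipped once pytest has installed its own handlers. A minimal sketch of a logconfig replacement; the filename and format string mirror the question's code, the getattr level lookup is my own addition:

```python
import logging

def logconfig(logfile, loglevel):
    # Attach a file handler directly; unlike logging.basicConfig(),
    # this is not skipped when other handlers (e.g. pytest's) are
    # already registered on the root logger.
    handler = logging.FileHandler(logfile)
    handler.setFormatter(logging.Formatter('%(asctime)s :: %(message)s'))
    root = logging.getLogger()
    root.addHandler(handler)
    # Accept the level as a string such as 'INFO', as in the question's code.
    root.setLevel(getattr(logging, loglevel, logging.INFO))
```

With this variant, both python3 main.py and pytest -vs test write to test.log, because the file handler simply coexists with whatever handlers pytest installs.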


rumps.notification not working - silently fails to show notification

I have a simple python3.9 rumps app, roughly following the documented example https://rumps.readthedocs.io/en/latest/examples.html.
main.py:
import rumps
class SiMenuBarApp(rumps.App):
def __init__(self):
super(SiMenuBarApp, self).__init__("SiProdHacks")
self.menu = ["Item1"]
#rumps.clicked("Item1")
def item_one(self, _):
print("Hi Si!")
rumps.notification("SiProdHacks", "Keeping Si's Brain Free since 2021", "KAPOWIE!")
if __name__ == '__main__':
app = SiMenuBarApp()
app.icon = "happyicon.png"
app.run()
It runs fine, but when I click the menu bar item1, it prints my console message, but no notification occurs.
I am using Python 3.9.0, rumps 0.3.0, iTerm, and macOS 10.15.7 (Catalina).
Console output is:
❯ pipenv run python main.py
Hi Si!
Ok, I did some more digging on this, and discovered the debug mode of rumps:
import rumps
rumps.debug_mode(True)
which added the following to the output:
In this case there is no file at "/Users/simonrowland/.local/share/virtualenvs/si-menu-productivity-mlyLc7OG/bin/Info.plist"
Running the following command should fix the issue:
/usr/libexec/PlistBuddy -c 'Add :CFBundleIdentifier string "rumps"' /Users/simonrowland/.local/share/virtualenvs/si-menu-productivity-mlyLc7OG/bin/Info.plist
Running the suggested command:
/usr/libexec/PlistBuddy -c 'Add :CFBundleIdentifier string "rumps"' ${PATH_TO_VENV_BIN_DIR}/bin/Info.plist
Made it work!

Python logging does not log when used inside a Pytest fixture

I have a Pytest + Selenium project and I would like to use the logging module.
However, when I set up logging in conftest.py like this
# conftest.py
import logging
from datetime import datetime

import pytest
from selenium import webdriver
from selenium.webdriver import ChromeOptions
from webdriver_manager.chrome import ChromeDriverManager
from webdriver_manager.firefox import GeckoDriverManager

@pytest.fixture(params=["chrome"], scope="class")
def init_driver(request):
    start = datetime.now()
    logging.basicConfig(filename='.\\test.log', level=logging.INFO)
    if request.param == "chrome":
        options = ChromeOptions()
        options.add_argument("--start-maximized")
        web_driver = webdriver.Chrome(ChromeDriverManager().install(), options=options)
    if request.param == "firefox":
        web_driver = webdriver.Firefox(GeckoDriverManager().install())
    request.cls.driver = web_driver
    yield
    end = datetime.now()
    logging.info(f"{end}: --- DURATION: {end - start}")
    web_driver.close()
it looks like test.log is not created at all, and there are no error messages or other indications that something went wrong.
How can I make this work?
Two facts first:
logging.basicConfig() only has an effect if no logging configuration was done before invoking it (i.e. the root logger has no handlers registered).
pytest registers custom handlers on the root logger to be able to capture log records emitted by your code, so you can test whether your program's logging behaviour is correct.
This means that calling logging.basicConfig(filename='.\\test.log', level=logging.INFO) in a fixture does nothing, since the test run has already started and the root logger already has handlers attached by pytest. You thus have two options:
Disable the built-in logging plugin completely. This will stop all log record capturing; if you have tests that analyze emitted logs (e.g. using the caplog fixture), those will stop working. Invocation:
$ pytest -p no:logging ...
You can persist the flag in pyproject.toml so it is applied automatically:
[tool.pytest.ini_options]
addopts = "-p no:logging"
Or in pytest.ini:
[pytest]
addopts = -p no:logging
Configure and use live logging. The configuration in pyproject.toml, equivalent to your logging.basicConfig() call:
[tool.pytest.ini_options]
log_file = "test.log"
log_file_level = "INFO"
In pytest.ini:
[pytest]
log_file = test.log
log_file_level = INFO
Of course, the logging.basicConfig() line can be removed from the init_driver fixture in this case.
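The first fact above is easy to verify in isolation. The sketch below simulates pytest's situation by pre-registering a handler on the root logger; the handler setup is purely illustrative, not pytest's actual internals. It also shows the force=True escape hatch added in Python 3.8, which clears existing handlers before applying the configuration:

```python
import logging

root = logging.getLogger()
root.handlers.clear()  # start from a clean slate for the demonstration

# Simulate pytest having already attached a handler to the root logger.
pytest_like_handler = logging.StreamHandler()
root.addHandler(pytest_like_handler)

logging.basicConfig(level=logging.DEBUG)
assert root.level == logging.WARNING  # no-op: a handler was already present

# force=True (Python 3.8+) removes existing handlers and reapplies the config.
logging.basicConfig(level=logging.DEBUG, force=True)
assert root.level == logging.DEBUG
assert pytest_like_handler not in root.handlers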

Why is pytest giving me an "unrecognized option" error when I try and create a custom command line option?

I'm using Python 3.8 and pytest 6.0.1. How do I create a custom command line option for pytest? I thought it was as simple as adding this to conftest.py ...
def pytest_addoption(parser):
    parser.addoption('--option1', action='store_const', const=True)
but when I pytest, I get an unrecognized option error
# pytest --option1=Y -c tests/my_test.py
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --option1=Y
What's the right way to add a custom option?
Edit: I tried the answer given. I included some other things in my tests/conftest.py file in case those are the reason the answer isn't working. File contains
def pytest_generate_tests(metafunc):
    option1value = metafunc.config.getoption("--option1")
    print(f'Option1 Value = {option1value}')

def pytest_configure(config):
    use_docker = False
    try:
        use_docker = config.getoption("--docker-compose-remove-volumes")
    except:
        pass
    plugin_name = 'wait_for_docker' if use_docker else 'wait_for_server'
    if not config.pluginmanager.has_plugin(plugin_name):
        config.pluginmanager.import_plugin("tests.plugins.{}".format(plugin_name))
But output when running is
$ pytest -s --option1 tests/shared/model/test_crud_functions.py
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --option1
inifile: /Users/davea/Documents/workspace/my_project/pytest.ini
rootdir: /Users/davea/Documents/workspace/my_project
As already mentioned in a comment, action='store_const' makes your option a flag: it takes no value on the command line, and when the flag is given, reading the option returns the value specified by const, i.e. True in your case.
Try this:
add below function to conftest.py
def pytest_generate_tests(metafunc):
    option1value = metafunc.config.getoption("--option1")
    print(f'Option1 Value = {option1value}')
pytest invoked with the option (pytest -s --option1): the output will contain Option1 Value = True
pytest invoked without the option (pytest -s): the output will contain Option1 Value = None
action='store' should give you the desired behavior.
Solution:
# Change the action associated with your option to action='store'
def pytest_addoption(parser):
    parser.addoption('--option1', action='store')

def pytest_configure(config):
    x = config.getoption('option1')
    print(x)  # Any logic that uses the option value
Output:
pytest -s --option1=Y -c=test.py
Y
============================================================================= test session starts ==============================================================================
platform darwin -- Python 3.8.5, pytest-6.0.1, py-1.9.0, pluggy-0.13.1
You can find details on available action(s) and more here: https://docs.python.org/3/library/argparse.html#action
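Since pytest's command-line parser is built on argparse, the difference between the two actions can be illustrated with plain argparse. This is a standalone sketch, not pytest's actual parsing code:

```python
import argparse

# action='store_const' defines a value-less flag; it stores `const` when the
# flag is present, and passing an explicit value (--option1=Y) is rejected
# with a parse error.
flag_parser = argparse.ArgumentParser()
flag_parser.add_argument('--option1', action='store_const', const=True)
assert flag_parser.parse_args(['--option1']).option1 is True
assert flag_parser.parse_args([]).option1 is None

# action='store' takes a value from the command line, so --option1=Y works.
value_parser = argparse.ArgumentParser()
value_parser.add_argument('--option1', action='store')
assert value_parser.parse_args(['--option1=Y']).option1 == 'Y'
```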
I think order matters in this case.
Try pytest -c tests/my_test.py --option1=Y

How to redirect abseil logging messages to stackdriver using google.cloud.logging without having duplicate with wrong "label"?

I am using AI Platform training to run an ML training job with Python 3.7.6. I am using the abseil module for logging messages, with absl-py 0.9.0. I followed the instructions in a Medium article on how to direct Python logging messages to Stackdriver, and I am using google-cloud-logging 1.15.0. I wrote some very basic code to understand the issue with my configuration.
from absl import logging
from absl import flags
from absl import app
import logging as logger
import google.cloud.logging
import sys
import os

FLAGS = flags.FLAGS

def main(argv):
    logging.get_absl_handler().python_handler.stream = sys.stdout

    # Instantiates a client
    client = google.cloud.logging.Client()
    # Connects the logger to the root logging handler; by default this captures
    # all logs at INFO level and higher
    client.setup_logging()

    fmt = "[%(levelname)s %(asctime)s %(filename)s:%(lineno)s] %(message)s"
    formatter = logger.Formatter(fmt)
    logging.get_absl_handler().setFormatter(formatter)

    # set level of verbosity
    logging.set_verbosity(logging.DEBUG)

    print(' 0 print --- ')
    logging.info(' 1 logging:')
    logging.info(' 2 logging:')
    print(' 3 print --- ')
    logging.debug(' 4 logging-test-debug')
    logging.info(' 5 logging-test-info')
    logging.warning(' 6 logging-test-warning')
    logging.error(' 7 logging test-error')
    print(' 8 print --- ')
    print(' 9 print --- ')

if __name__ == '__main__':
    app.run(main)
First, abseil sends all logs to stderr; I am not sure whether this is expected. In the screenshot below we see:
messages printed with print are displayed (later in the logfile from Stackdriver)
abseil logging messages appear twice: once with the right label in Stackdriver (DEBUG, INFO, WARNING or ERROR), and a second time with the special formatting [%(levelname)s %(asctime)s %(filename)s:%(lineno)s] %(message)s but always with the ERROR label in Stackdriver.
When I run the code locally I don't see duplicates.
Any idea how to set this up properly so the logging messages (using abseil) appear once, with the proper label, in Stackdriver?
----- EDIT --------
I am seeing the issue locally as well, not only when running on GCP.
The duplicate log messages appear when I add this line: client.setup_logging(). Without it, there are no duplicates and all log messages go to the stdout stream.
If I look at the logger logger.root.manager.loggerDict.keys(), I see a lot of them:
dict_keys(['absl', 'google.auth.transport._http_client', 'google.auth.transport', 'google.auth', 'google','google.auth._default', 'grpc._cython.cygrpc', 'grpc._cython', 'grpc', 'google.api_core.retry', 'google.api_core', 'google.auth.transport._mtls_helper', 'google.auth.transport.grpc', 'urllib3.util.retry', 'urllib3.util', 'urllib3', 'urllib3.connection', 'urllib3.response', 'urllib3.connectionpool', 'urllib3.poolmanager', 'urllib3.contrib.pyopenssl', 'urllib3.contrib', 'socks', 'requests', 'google.auth.transport.requests', 'grpc._common', 'grpc._channel', 'google.cloud.logging.handlers.transports.background_thread', 'google.cloud.logging.handlers.transports', 'google.cloud.logging.handlers', 'google.cloud.logging', 'google.cloud', 'google_auth_httplib2'])
If I look at:
root_logger = logger.getLogger()
for handler in root_logger.handlers:
    print("handler ", handler)
I see:
handler <ABSLHandler (NOTSET)>
handler <CloudLoggingHandler <stderr> (NOTSET)>
handler <StreamHandler <stderr> (NOTSET)>
and we can see that the stream is stderr, not stdout. I didn't manage to change it.
I saw this discussion in a Stack Overflow thread and tried the last solution by @Andy Carlson, but then all my logging messages are gone.

pytest and Failed: Database access not allowed, use the "django_db" mark, or the "db" or "transactional_db" fixtures to enable it

When invoking pytest from the shell I get the following output. My test is stored in apps.business.metrics.tools.tests.py, and during import the module
apps/business/metrics/widgets/employees/utilization.py
makes a live SQL call at import time. This is done by
get_metric_columns('EmployeeUtilization', shapers=SHAPERS)
and pytest complains:
➜ pytest
=========================================================================== test session starts ===========================================================================
platform linux -- Python 3.6.8, pytest-4.0.0, py-1.7.0, pluggy-0.8.0
Django settings: config.settings.local (from ini file)
rootdir: /home/dmitry/Projects/analytics/backend, inifile: pytest.ini
plugins: django-3.4.7, pylama-7.6.6, django-test-plus-1.1.1, celery-4.2.1
collected 60 items / 1 errors
================================================================================= ERRORS ==================================================================================
__________________________________________________________ ERROR collecting apps/business/metrics/tools.tests.py __________________________________________________________
../../../.pyenv/versions/3.6.8/envs/cam/lib/python3.6/site-packages/py/_path/local.py:668: in pyimport
__import__(modname)
apps/business/metrics/__init__.py:3: in <module>
from .widgets import * # noqa
apps/business/metrics/widgets/__init__.py:1: in <module>
from . import help # noqa
apps/business/metrics/widgets/help.py:1: in <module>
from .employees.utilization import EmployeeSwarmUtilization
apps/business/metrics/widgets/employees/utilization.py:19: in <module>
get_metric_columns('EmployeeUtilization', shapers=SHAPERS)
apps/business/metrics/tools.py:132: in get_metric_columns
m = get_metric(metric, period=p, shapers=shapers)
apps/business/metrics/data/__init__.py:23: in get_metric
return metrics[name](*args, **kwargs)
apps/business/metrics/data/abstract.py:441: in __init__
self._to_dataframe(self.sql or self._ingest())
apps/business/metrics/data/abstract.py:472: in _to_dataframe
source, connection, params=query_params, index_col=self.index
../../../.pyenv/versions/3.6.8/envs/cam/lib/python3.6/site-packages/pandas/io/sql.py:381: in read_sql
chunksize=chunksize)
../../../.pyenv/versions/3.6.8/envs/cam/lib/python3.6/site-packages/pandas/io/sql.py:1413: in read_query
cursor = self.execute(*args)
../../../.pyenv/versions/3.6.8/envs/cam/lib/python3.6/site-packages/pandas/io/sql.py:1373: in execute
cur = self.con.cursor()
../../../.pyenv/versions/3.6.8/envs/cam/lib/python3.6/site-packages/django/db/backends/base/base.py:255: in cursor
return self._cursor()
../../../.pyenv/versions/3.6.8/envs/cam/lib/python3.6/site-packages/django/db/backends/base/base.py:232: in _cursor
self.ensure_connection()
E Failed: Database access not allowed, use the "django_db" mark, or the "db" or "transactional_db" fixtures to enable it.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
========================================================================= 1 error in 2.43 seconds =========================================================================
Is there a way to handle such situation with pytest?
I understand I can convert the get_metric_columns('EmployeeUtilization', shapers=SHAPERS) call into a partial function and change the implementation, but is there another way around it?
Solution:
import pytest

@pytest.mark.django_db
class TestExample:
    def test_one(self):
        ...
Assume that you've created a TestExample class inside your test file; it should be decorated with @pytest.mark.django_db. That should solve your problem.
Another way to solve this is to inherit from the Django TestCase in your test class:
from django.test import TestCase

class TestExampleTestCase(TestCase):
    def test_one(self):
        ...
Make sure you import django.test.TestCase and not unittest.TestCase.
The accepted answer should also work, but this will give you additional tooling provided by the Django test framework and is the standard way of writing tests according to the official Django docs on testing.
