In the process of developing a Pyramid web application, I've found it very useful to use the command-line pshell to load the application and interact with various code. However, log statements are not echoed on the console, and I'm not sure why.
For instance, let's say in common.utilities.general I have a function:
import logging

log = logging.getLogger(__name__)

def my_util():
    log.debug("Executing utility.")
    return "Utility was executed."
Then in my command line:
(pyenv)rook:swap nateford$ pshell src/local.ini
2015-10-08 14:44:01,081 INFO [common.orm.pymongo_core][MainThread] PyMongo Connection to replica set successful: localhost:27017
2015-10-08 14:44:01,082 INFO [common.orm.pymongo_core][MainThread] Connected to Mongo Database = turnhere
Python 3.4.3 (default, Mar 10 2015, 14:53:35)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.56)] on darwin
Type "help" for more information.
Environment:
app The WSGI application.
registry Active Pyramid registry.
request Active request object.
root Root of the default resource tree.
root_factory Default root factory used to create `root`.
>>> from common.utilities.general import my_util
>>> my_util()
'Utility was executed.'
>>>
As you can see, nothing is logged to the console. I would expect:
>>> from common.utilities.general import my_util
>>> my_util()
[some date/server info][DEBUG]: Executing utility.
'Utility was executed.'
>>>
Here are the (relevant) contents of my local.ini file:
<Various elided application settings>
###
# logging configuration
# http://docs.pylonsproject.org/projects/pyramid/en/1.5-branch/narr/logging.html
###
[loggers]
keys = root, common, webapp, services, sqlalchemy
[handlers]
keys = console, applog
[formatters]
keys = generic
[logger_root]
level = INFO
handlers = console, applog
[logger_common]
level = DEBUG
handlers =
qualname = common
[logger_services]
level = DEBUG
handlers =
qualname = common.services
[logger_webapp]
level = DEBUG
handlers =
qualname = webapp
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
# "level = INFO" logs SQL queries.
# "level = DEBUG" logs SQL queries and results.
# "level = WARN" logs neither. (Recommended for production systems.)
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = DEBUG
formatter = generic
[handler_applog]
class = FileHandler
args = (r'%(here)s/log/app.log','a')
level = NOTSET
formatter = generic
[formatter_generic]
format = %(asctime)s %(levelname)-5.5s [%(name)s][%(threadName)s] %(message)s
Your root logger's level is set to INFO, which is a higher level than DEBUG, the level you are logging your messages at. Changing the root logger's level to DEBUG should help.
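In the local.ini above, that is the [logger_root] section; changing it to

[logger_root]
level = DEBUG
handlers = console, applog

should let DEBUG records through to the console handler.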
Related
I have a Pytest + Selenium project and I would like to use the logging module.
However, when I set up logging in conftest.py like this
import logging
from datetime import datetime

import pytest
from selenium import webdriver
from selenium.webdriver import ChromeOptions
from webdriver_manager.chrome import ChromeDriverManager
from webdriver_manager.firefox import GeckoDriverManager

@pytest.fixture(params=["chrome"], scope="class")
def init_driver(request):
    start = datetime.now()
    logging.basicConfig(filename='.\\test.log', level=logging.INFO)
    if request.param == "chrome":
        options = ChromeOptions()
        options.add_argument("--start-maximized")
        web_driver = webdriver.Chrome(ChromeDriverManager().install(), options=options)
    if request.param == "firefox":
        web_driver = webdriver.Firefox(GeckoDriverManager().install())
    request.cls.driver = web_driver
    yield
    end = datetime.now()
    logging.info(f"{end}: --- DURATION: {end - start}")
    web_driver.close()
It looks like test.log is not created at all, and there are no error messages or other indications that something went wrong.
How can I make this work?
Two facts first:
logging.basicConfig() only has an effect if no logging configuration was done before invoking it, i.e. the target logger has no handlers registered yet.
pytest registers custom handlers on the root logger so it can capture log records emitted in your code, letting you test whether your program's logging behaviour is correct.
This means that calling logging.basicConfig(filename='.\\test.log', level=logging.INFO) in a fixture will do nothing, since the test run has already started and the root logger already has handlers attached by pytest (a short demonstration follows the two options below). You thus have two options:
Disable the builtin logging plugin completely. This will stop log record capturing - if you have tests that analyze emitted logs (e.g. using the caplog fixture), those will stop working. Invocation:
$ pytest -p no:logging ...
You can persist the flag in pyproject.toml so it is applied automatically:
[tool.pytest.ini_options]
addopts = "-p no:logging"
Or in pytest.ini:
[pytest]
addopts = -p no:logging
Configure and use live logging. The configuration in pyproject.toml, equivalent to your logging.basicConfig() call:
[tool.pytest.ini_options]
log_file = "test.log"
log_file_level = "INFO"
In pytest.ini:
[pytest]
log_file = test.log
log_file_level = INFO
Of course, the logging.basicConfig() line can be removed from the init_driver fixture in this case.
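As a quick demonstration of the first fact, here is a minimal sketch (plain Python, outside pytest) showing that logging.basicConfig() silently does nothing once the root logger already has a handler:

import logging

root = logging.getLogger()
root.addHandler(logging.StreamHandler())  # stand-in for pytest's pre-registered handlers

# No-op: the root logger already has a handler, so no FileHandler is added
# and no level is set.
logging.basicConfig(filename='test.log', level=logging.INFO)

print(root.handlers)  # still only the StreamHandler

On Python 3.8+, passing force=True to logging.basicConfig() makes it remove the existing root handlers and reconfigure anyway, which is a third option if you cannot change the pytest configuration.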
I need to disable Tornado's logging to STDOUT. I am using Python 3.8 running on Ubuntu 18.04. I want my log statements to be handled by a rotating file logger only. The issue is that logged statements are written to the file and also to the console:
import logging
from logging.handlers import RotatingFileHandler
logger = logging.getLogger("ex_logger")
nh = logging.NullHandler()
rfh = RotatingFileHandler(filename="./logs/process.log", mode='a', maxBytes=50000000, backupCount=25, encoding=None, delay=False)
rfh.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
rfh.setFormatter(formatter)
logger.handlers = []
logger.propagate = False
logger.addHandler(rfh)
logging.getLogger("tornado.access").handlers = []
logging.getLogger("tornado.application").handlers = []
logging.getLogger("tornado.general").handlers = []
logging.getLogger("tornado.access").addHandler(nh)
logging.getLogger("tornado.application").addHandler(nh)
logging.getLogger("tornado.general").addHandler(nh)
logging.getLogger("tornado.access").propagate = False
logging.getLogger("tornado.application").propagate = False
logging.getLogger("tornado.general").propagate = False
....
def main():
    ######
    # this message appears in both the output log file and stdout
    ######
    logger.info(" application init ... ")
    asyncio.set_event_loop_policy(tornado.platform.asyncio.AnyThreadEventLoopPolicy())
    tornado.options.parse_command_line()
    app = Application()
    app.listen(options.port)
    tornado.ioloop.IOLoop.current().start()

if __name__ == "__main__":
    main()
The problem is that from the moment you start up your IDE, logging.getLogger("tornado") may have a StreamHandler attached. This doesn't happen with every IDE, but it does happen with Spyder. So that is the one you have to replace with a NullHandler:
import logging

nh = logging.NullHandler()
tornado_logger = logging.getLogger("tornado")
tornado_logger.handlers.clear()
# tornado_logger.handlers = [] would also work, but it is best practice to
# mutate the existing list rather than replace it, in case some other variable
# elsewhere still refers to it.
tornado_logger.addHandler(nh)
You don't need to do anything with the children of the "tornado" logger, e.g. "tornado.access", et cetera.
You also need to define a logging policy for the root logger (logging.getLogger("")). Tornado inspects the root logger's handlers to decide whether logging has already been configured or needs a default setup.
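A minimal sketch of that, reusing the rotating file handler from the question: configure the root logger before tornado.options.parse_command_line() runs, so that when Tornado checks, the root logger already has a handler and Tornado skips attaching its default stderr handler.

import logging
from logging.handlers import RotatingFileHandler

# Configure the root logger first; Tornado's pretty logging only adds its own
# stderr handler when the root logger has no handlers (and --log-to-stderr is
# not explicitly set).
root = logging.getLogger("")
rfh = RotatingFileHandler(filename="./logs/process.log", mode='a',
                          maxBytes=50000000, backupCount=25)
rfh.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
root.addHandler(rfh)
root.setLevel(logging.DEBUG)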
I'd like to set different logging levels per handler and logger in python. I'd like debug level logs for all loggers to be sent to their filehandlers, but be able to individually select the log level for each logger for what's printed to the console.
import logging
loggerA = logging.getLogger('a')
fhA = logging.FileHandler(filename='a.log', mode='w')
fhA.setLevel(logging.DEBUG)
loggerA.addHandler(fhA)
loggerB = logging.getLogger('b')
fhB = logging.FileHandler(filename='b.log', mode='w')
fhB.setLevel(logging.DEBUG)
loggerB.addHandler(fhB)
logging.basicConfig(level=logging.INFO)
logging.getLogger('a').setLevel(logging.INFO)
logging.getLogger('b').setLevel(logging.WARN)
loggerA.info("TEST a")
loggerB.info("TEST b")
I'd expect all logs to be sent to the files, with the console showing only WARNING and above from 'b' and INFO and above from 'a'. With the above code, 'b' doesn't have any logs in its file.
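A sketch of one way to get that behavior, assuming the goal is everything in the files with per-logger console thresholds. The key point is that a record is dropped at the logger's own level check before any handler sees it, which is why loggerB.setLevel(logging.WARN) keeps the INFO record out of b.log entirely. Keep both loggers at DEBUG and put the per-logger filtering on individual console handlers instead:

import logging
import sys

loggerA = logging.getLogger('a')
loggerA.setLevel(logging.DEBUG)      # let everything through to the handlers
loggerA.propagate = False            # avoid duplicates via the root logger
fhA = logging.FileHandler(filename='a.log', mode='w')
fhA.setLevel(logging.DEBUG)
chA = logging.StreamHandler(sys.stdout)
chA.setLevel(logging.INFO)           # console threshold for 'a'
loggerA.addHandler(fhA)
loggerA.addHandler(chA)

loggerB = logging.getLogger('b')
loggerB.setLevel(logging.DEBUG)
loggerB.propagate = False
fhB = logging.FileHandler(filename='b.log', mode='w')
fhB.setLevel(logging.DEBUG)
chB = logging.StreamHandler(sys.stdout)
chB.setLevel(logging.WARNING)        # console threshold for 'b'
loggerB.addHandler(fhB)
loggerB.addHandler(chB)

loggerA.info("TEST a")  # written to a.log and shown on the console
loggerB.info("TEST b")  # written to b.log only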
After connecting to a Gremlin Server, all of my log messages are duplicated. I am using the following code to connect.
graph = anonymous_traversal.traversal().withRemote(DriverRemoteConnection("ws://localhost:8182/gremlin", "g"))
I guess that the gremlin-python API somehow enables the root logger, but I cannot find where. Maybe I overlooked some setting. Any advice on overcoming this issue is very welcome.
Here is the whole sample code which I used to replicate this problem.
import logging
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process import anonymous_traversal
I test this as the main entry point of the script:

if __name__ == '__main__':
    # Setting up my local logger instance
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.getLevelName('DEBUG'))
    log_format = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
    log_file_handler = logging.FileHandler('foo.log', mode='w')
    log_file_handler.setFormatter(log_format)
    log_file_handler.setLevel(logging.DEBUG)
    logger.addHandler(log_file_handler)
    console_log_handler = logging.StreamHandler()
    console_log_handler.setLevel(logging.getLevelName('INFO'))
    console_log_handler.setFormatter(log_format)
    logger.addHandler(console_log_handler)

    # Sending some messages to log before the connection is established
    logger.debug("foo")
    logger.info("foo")
    logger.warning("and")
    logger.error("foo again")

    # Comment/uncomment the following line to test without a gremlin connection
    graph = anonymous_traversal.traversal().withRemote(DriverRemoteConnection("ws://localhost:8182/gremlin", "g"))

    # Finally, sending some more messages after the connection
    logger.debug("after foo")
    logger.info("after foo")
    logger.warning("after and")
    logger.error("after foo again")
The result is something like this.
2019-04-18 16:09:57,746 - INFO - foo
2019-04-18 16:09:57,746 - WARNING - and
2019-04-18 16:09:57,746 - ERROR - foo again
DEBUG:__main__:after foo
2019-04-18 16:09:59,107 - INFO - after foo
INFO:__main__:after foo
2019-04-18 16:09:59,107 - WARNING - after and
WARNING:__main__:after and
2019-04-18 16:09:59,108 - ERROR - after foo again
ERROR:__main__:after foo again
I think gremlin should use its own logger and should not cause side effects in other logger instances.
I found a very simple workaround. It is not pretty.
def non_root_logger():
    # Iterate over a copy: removing handlers from the live list while
    # iterating over it would skip entries.
    for handler in list(logging.root.handlers):
        logging.root.removeHandler(handler)
    return logging.getLogger(__name__)
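An alternative sketch that avoids touching the root logger's handler list at all: stop records from this logger from propagating to the root logger, so whatever handler gremlin-python's import chain installed there never sees them.

logger = logging.getLogger(__name__)
logger.propagate = False  # root handlers (and their plain format) are bypassed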
I created a small function to set up logging, with a FileHandler for 'everything' and an SMTPHandler for ERROR and above. Error logs write to the log file and send correctly to email, but debug, info, and notset messages don't, even though setLevel is set to 0 for the FileHandler. Why is that? Code below:
#logsetup.py
import logging
import logging.handlers

def _setup_logger(name, log_file):
    """Function to setup logger"""
    logger = logging.getLogger(name)
    #Create Formatters
    file_formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    mail_formatter = logging.Formatter('%(name)s - %(message)s')
    #Create handler, set formatting, set log level
    file_handler_obj = logging.FileHandler(log_file)
    file_handler_obj.setFormatter(file_formatter)
    file_handler_obj.setLevel(0)
    #Create handler, set formatting, set log level
    smtp_handler_obj = logging.handlers.SMTPHandler(mailhost=('smtp.gmail.com', 587),
                                                    fromaddr='mymail@example.com',
                                                    toaddrs='mymail@example.com',
                                                    subject='Error in Script',
                                                    credentials=('mymail@example.com', 'pwexample'),  # username, password
                                                    secure=())
    smtp_handler_obj.setFormatter(mail_formatter)
    smtp_handler_obj.setLevel(logging.ERROR)
    # add the handlers to logger
    logger.addHandler(smtp_handler_obj)
    logger.addHandler(file_handler_obj)
    return logger
#mytest.py
import time

import config_funcs
import logsetup

if __name__ == '__main__':
    TEST_SETTINGS = config_funcs._get_config('TEST_SETTINGS')
    logtime = time.strftime('%Y%m%d')  # -%H%M%S
    log = logsetup._setup_logger('TEST', TEST_SETTINGS['logging_dir'] + 'Py_Log_%s.log' % logtime)
    log.error('Writes to log file and sends email')
    log.debug('Supposed to write to log file, does nothing.')
Apparently, the logger needs its own logging level aside from the handlers' levels. Setting logger.setLevel(logging.DEBUG) right before returning the logger causes it to work correctly. The documentation says:
When a logger is created, the level is set to NOTSET (which causes all
messages to be processed when the logger is the root logger, or
delegation to the parent when the logger is a non-root logger). Note
that the root logger is created with level WARNING.
This means that because I am getting a named (non-root) logger, its level stays NOTSET and the level check is delegated up to the root logger, whose default level is WARNING. ERROR clears that bar but DEBUG does not, so DEBUG records are discarded before they ever reach the handlers, no matter how low the handlers' levels are set. Setting the logger's own level explicitly 'fixes' it, in case anyone comes to this later.
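Concretely, the one-line fix at the end of _setup_logger:

    # add the handlers to logger
    logger.addHandler(smtp_handler_obj)
    logger.addHandler(file_handler_obj)
    # Named loggers default to NOTSET and defer to the root logger's WARNING
    # level, so give this logger an explicit level; each handler then still
    # applies its own level (0 for the file, ERROR for the email).
    logger.setLevel(logging.DEBUG)
    return logger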