How do I format the error message and stack trace when logging an exception in a Python 3 logging setup with dictConfig? - python-3.x

I'm using logging.config.dictConfig() and my formatter is:
'formatters': {
    'logfileFormat': {
        'format': '%(asctime)s %(name)-12s: %(levelname)s %(message)s'
    }
}
When there's an exception, I would like the formatter to show the error message as well as print out the exc_info. How do I alter the format to do this? Or do I need to implement my own formatter in code and reference it here?

I may be missing something here, but calling logger.exception(e) already adds the stack trace and error message.
(While logger.error(e) only shows the message)
See this POC script:
import logging
import logging.config

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'logfileFormat': {
            'format': '%(asctime)s %(name)-12s: %(levelname)s %(message)s'
        }
    },
    'handlers': {
        'logfile': {
            'level': 'INFO',
            'formatter': 'logfileFormat',
            'class': 'logging.StreamHandler',
            'stream': 'ext://sys.stdout',  # Default is stderr
        },
    },
    'loggers': {
        '': {  # root logger
            'handlers': ['logfile'],
            'level': 'INFO',
            'propagate': False
        },
    }
})


def foo():
    logging.info("Normal stuff")
    logging.warning("More stuff")
    try:
        # ... something causing an exception ...
        raise Exception("Something something, Exception")
    except Exception as e:
        logging.exception(e)
    try:
        raise Exception("The stacktrace of this is suppressed")
    except Exception as e:
        logging.error(e)
    logging.info("More normal stuff")


foo()
Which results in the following output:
2020-07-26 20:54:27,621 root : INFO Normal stuff
2020-07-26 20:54:27,621 root : WARNING More stuff
2020-07-26 20:54:27,621 root : ERROR Something something, Exception
Traceback (most recent call last):
File "sol.py", line 34, in foo
raise Exception("Something something, Exception")
Exception: Something something, Exception
2020-07-26 20:54:27,622 root : ERROR The stacktrace of this is suppressed
2020-07-26 20:54:27,622 root : INFO More normal stuff
I fail to see in what way your existing formatter needs to be changed.
If you wanted your stack trace to be indented or otherwise formatted differently, see this SO answer for creating a custom formatter.
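For example, a minimal sketch of such a custom formatter (the four-space indent and the class name are arbitrary), which you could register in the dictConfig 'formatters' section through the '()' factory key:

import logging

class IndentedTracebackFormatter(logging.Formatter):
    # Indent only the exception text; the regular '%(message)s' formatting is unchanged.
    def formatException(self, exc_info):
        text = super().formatException(exc_info)
        return "\n".join("    " + line for line in text.splitlines())

With that in place, the existing formatter entry could become {'()': IndentedTracebackFormatter, 'format': '%(asctime)s %(name)-12s: %(levelname)s %(message)s'}.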

logging automatically adds the traceback to your message if you use log.exception(e).
If you want to log the traceback with other logging levels, you can use the exc_info=True kwarg like log.error("message", exc_info=True).
You don't need to modify the formatter for the traceback to show up.
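For instance, a minimal sketch (the logger name and messages here are made up):

import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("myapp")

try:
    1 / 0
except ZeroDivisionError:
    logger.exception("division failed")               # ERROR level, traceback appended
    logger.warning("division failed", exc_info=True)  # any level, traceback appended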

Related

Python logging with dictConfig and using GMT / UTC zone time

I'm trying to configure some logging across multiple modules within a package. These modules may be executed in different containers, possibly in different regions, so I wanted to explicitly use GMT / UTC time for logging.
Reading about the Formatter class in logging, it indicates you can specify converter to use either local time or GMT. I'd like to use this feature, in conjunction with dictConfig (or possibly fileConfig), to specify the configurations for the different modules, but the documentation is sparse with respect to this feature. Everything specified in the config is working except for the timezone; the log always uses local time. I can include the '%z' formatting specification in datefmt to specify the offset from GMT, but that breaks the .%(msecs)03d formatting.
Below is my code using a defined dictionary and dictConfig. Has anyone had any success specifying the timezone in the config? Is this possible?
import json
import logging
from logging.config import dictConfig
import time

DEFAULT_CONFIG = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'standard': {
            'format': '%(levelname)s : %(name)s : %(asctime)s.%(msecs)03d : %(message)s',
            'datefmt': '%Y-%m-%d %H.%M.%S',
            'converter': time.gmtime  # also fails with 'time.gmtime', 'gmtime'
        }
    },
    'handlers': {
        'default': {
            'level': 'NOTSET',
            'class': 'logging.StreamHandler',
            'formatter': 'standard',
        }
    },
    'loggers': {
        'TEST': {  # logging from this module should be logged in DEBUG level
            'handlers': ['default'],
            'level': 'INFO',
            'propagate': False,
        },
    },
    'root': {
        'level': 'INFO',
        'handlers': ['default']
    },
    'incremental': False
}

if __name__ == '__main__':
    dictConfig(DEFAULT_CONFIG)
    logger = logging.getLogger('TEST')
    logger.debug('debug message')  # this should not be displayed if level==INFO
    logger.info('info message')
    logger.warning('warning message')
    logger.error('error message')
Output:
INFO : TEST : 2022-07-07 17.20.13.434 : info message
WARNING : TEST : 2022-07-07 17.20.13.435 : warning message
ERROR : TEST : 2022-07-07 17.20.13.435 : error message
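For what it's worth, a minimal sketch of one common workaround (assuming nothing beyond the standard library; the subclass name is made up): instead of a plain 'converter' key, define a small Formatter subclass whose converter is time.gmtime and point the formatter entry at it through the '()' factory key, which dictConfig does understand.

import logging
import time
from logging.config import dictConfig

class UTCFormatter(logging.Formatter):
    # Formatter.converter turns record.created into a time tuple;
    # time.gmtime yields UTC instead of local time.
    converter = time.gmtime

dictConfig({
    'version': 1,
    'formatters': {
        'standard': {
            '()': UTCFormatter,
            'format': '%(levelname)s : %(name)s : %(asctime)s.%(msecs)03d : %(message)s',
            'datefmt': '%Y-%m-%d %H.%M.%S',
        }
    },
    'handlers': {'default': {'class': 'logging.StreamHandler', 'formatter': 'standard'}},
    'root': {'level': 'INFO', 'handlers': ['default']},
})

logging.getLogger('TEST').info('info message')  # asctime should now be in UTC

The '()' key tells dictConfig to build the formatter with that callable, so the subclass's converter attribute is honored while 'format' and 'datefmt' keep working.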

youtube_dl KeyError 'key'

Hello, I am trying to download YouTube videos with the following code:
import youtube_dl
import tempfile

youtube_links = ['https://www.youtube.com/watch?v=668nUCeBHyY']
with tempfile.TemporaryDirectory() as tempdir:
    opts = {
        'format': 'best',
        'outtmpl': f'{tempdir}/%(id)s.%(ext)s',
        'noplaylist': True,
        'postprocessors': [{
            'preferredcodec': 'mp4'
        }]
    }
    ydl = youtube_dl.YoutubeDL(opts)
    try:
        meta = ydl.extract_info(
            youtube_links,
            download=True
        )
    except Exception as e:
        raise e
    else:
        print(f"Downloaded to {tempdir}/{meta['id']}.{meta['ext']}")
However, this raises an error:
<path>>py -3.8 test.py
Traceback (most recent call last):
File "test.py", line 14, in <module>
ydl = youtube_dl.YoutubeDL(opts)
File "C:\Users\User\AppData\Local\Programs\Python\Python38\lib\site-packages\youtube_dl\YoutubeDL.py", line 429, in __init__
pp_class = get_postprocessor(pp_def_raw['key'])
KeyError: 'key'
None of the examples told me anything about a key I have to pass, nor can I find anything about it. What am I doing wrong?
In a nutshell, try this:
'postprocessors': [{
    'key': 'FFmpegMetadata',
    'preferredcodec': 'mp4'
}]
There are lots of keys, so if it's not working, check out this: https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/postprocessor/__init__.py
The trick is that the class names listed inside that file are not usable directly: you have to drop the "PP" part!
FYI:
It seems you are trying to download a video? Then you should try the documented style: https://github.com/ytdl-org/youtube-dl/
You have to look at the functions yourself and work out which parameters to pass here: https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/YoutubeDL.py In fact, somewhere in that file it tells you to check out the __init__.py for key information.
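For reference, a hedged sketch in the documented style from the youtube-dl README (audio extraction; swap in whichever postprocessor key you actually need):

import youtube_dl

# Each postprocessor entry names its class via 'key'
# (the class name from postprocessor/__init__.py minus the trailing "PP").
ydl_opts = {
    'format': 'bestaudio/best',
    'postprocessors': [{
        'key': 'FFmpegExtractAudio',
        'preferredcodec': 'mp3',
        'preferredquality': '192',
    }],
}

with youtube_dl.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://www.youtube.com/watch?v=668nUCeBHyY'])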

Google Cloud logging, Python3.8 standard environment, group request related logs by trace id

I'm stuck on a problem with the Google Cloud Logging setup for Python 3.8 in the Google App Engine Standard environment.
I'm using FastAPI with uvicorn. My logging configuration code:
import logging.config
import sys

from google.cloud import logging as google_logging

from app.settings import ENV, _settings

if _settings.ENV == ENV.LOCAL:
    MAIN_LOGGER = 'console'
    LOGGER_CONF_DICT = {
        'class': 'logging.StreamHandler',
        'formatter': 'verbose',
        'stream': sys.stdout,
        'level': _settings.LOG_LEVEL.upper(),
    }
else:
    log_client = google_logging.Client()
    MAIN_LOGGER = 'stackdriver_logging'
    LOGGER_CONF_DICT = {
        'class': 'app.gcloud_logs.GCLHandler',
        'client': log_client,
        'name': 'appengine.googleapis.com%2Frequest_log'
        # I've tried other names: stdout, %2FA instead of / symbol, appengine.googleapis.com/stdout
        # the same result or no logs at all
    }

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': '%(log_color)s%(asctime)s [%(levelname)s] [%(name)s] %(message)s (%(filename)s:%(lineno)d)',
            '()': 'colorlog.ColoredFormatter',
            'log_colors': {
                'DEBUG': 'cyan',
                'INFO': 'green',
                'WARNING': 'yellow',
                'ERROR': 'red',
                'CRITICAL': 'bold_red',
            },
        }
    },
    'handlers': {
        MAIN_LOGGER: {**LOGGER_CONF_DICT},
        'blackhole': {'level': 'DEBUG', 'class': 'logging.NullHandler'},
    },
    'loggers': {
        'fastapi': {'level': 'INFO', 'handlers': [MAIN_LOGGER]},
        'uvicorn.error': {'level': 'INFO', 'handlers': [MAIN_LOGGER], 'propagate': False},
        'uvicorn.access': {'level': 'INFO', 'handlers': [MAIN_LOGGER], 'propagate': False},
        'uvicorn': {'level': 'INFO', 'handlers': [MAIN_LOGGER], 'propagate': False},
        'google.cloud.logging.handlers.transports.background_thread': {
            'level': 'DEBUG', 'handlers': ['blackhole'], 'propagate': False
        },
        '': {
            'level': _settings.LOG_LEVEL.upper(),
            'handlers': [MAIN_LOGGER],
            'propagate': True,
        },
    }
}

logging.config.dictConfig(LOGGING)
And my logging handler code:
import os
from typing import Any, Dict, Optional

from google.cloud.logging.handlers import CloudLoggingHandler
from google.cloud.logging.resource import Resource
from starlette.requests import Request
from starlette_context import context

from app.settings import _settings


class GCLHandler(CloudLoggingHandler):

    def emit(self, record):
        message = super(GCLHandler, self).format(record)
        request: Optional[Request] = None
        trace: Optional[str] = None
        span_id: Optional[str] = None
        user_id: Optional[int] = None
        resource = Resource(
            type='gae_app',
            labels={
                'module_id': os.environ['GAE_SERVICE'],
                'project_id': _settings.PROJECT_NAME,
                'version_id': os.environ['GAE_VERSION'],
                'zone': 'us16'  # tried without zone - the same result
            }
        )
        labels: Dict[str, Any] = {}
        if context.exists():  # I'm sure that it works
            request = context.get('request')  # I'm sure that it works
            user_id = context.get('user_id')  # I'm sure that it works
        if user_id is not None:
            labels['user_id'] = user_id
        if request:
            if request.headers.get('X-Cloud-Trace-Context'):
                cloud_trace = request.headers.get('X-Cloud-Trace-Context').split('/')
                if len(cloud_trace) > 1:
                    span_id = cloud_trace[1].split(';')[0]
                trace = f'projects/{_settings.PROJECT_NAME}/traces/{cloud_trace[0]}'
                labels['logging.googleapis.com/trace'] = cloud_trace[0]  # Found in some guides, not sure that it's necessary
                labels['appengine.googleapis.com/trace_id'] = cloud_trace[0]  # Found in some guides, not sure that it's necessary
        self.transport.send(
            record,
            message,
            resource=resource,
            labels=labels,
            trace=trace,
            span_id=span_id
        )
I've got some strange results in the Logs Viewer: my log entry has the same trace as the request log, but they're not grouped.
Any ideas?
There are 2 types of logs in App Engine:
Request log: A log of the requests that are sent to your app. App Engine automatically creates entries in the request log.
App log: log entries that you write to a supported framework or file as described on this page.
Both logs are sent to the Cloud Logging agent automatically by App Engine Standard.
On the first request, app logs and request logs are not correlated, which is why they are not shown in a group; this is a known issue stated in the official App Engine documentation. However, on the second request you can see that the logs are shown in a group.
A feature request has already been created in the Public Issue Tracker for this behavior, where you will get all the updates regarding a fix.
log_client = google_logging.Client()
MAIN_LOGGER = 'stackdriver_logging'
LOGGER_CONF_DICT = {
    'class': 'app.gcloud_logs.GCLHandler',
    'client': log_client,
    'name': 'app'
}
Changing the name to 'app' helped.

How to configure 'dictConfig' properly to use a different logging format?

I am trying to configure my logger using dictConfig to use a different format, but it does not seem to be taking effect. Following is the code that I have (I am also trying to suppress the logs from imported modules):
import logging.config
import requests

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {'standard': {'format': "[%(asctime)s] [%(levelname)8s] - %(message)s", 'datefmt': "%d-%b-%Y %I:%M:%S %p"}},
    'handlers': {'default': {'level': 'DEBUG', 'formatter': 'standard', 'class': 'logging.StreamHandler', 'stream': 'ext://sys.stdout'}},
    'loggers': {'__main__': {'handlers': ['default'], 'level': 'DEBUG', 'propagate': False}}
})

req = requests.get('https://www.google.com')
logging.debug("Only thing that should be printed")
Output -
DEBUG:root:Only thing that should be printed
Expected Output -
[2020-04-04 22:46:24,866] [ DEBUG] - Only thing that should be printed
I learnt how to use dictConfig from this SO post.
If you look at the post that you've mentioned, you will see that you forgot a line :)
log = logging.getLogger(__name__)
log.debug("Only thing that should be printed")
The logger hierarchy must be defined explicitly in the logger name, using dot-notation.
When using __name__:
This means that logger names track the package/module hierarchy, and it's intuitively obvious where events are logged just from the logger name.
For more explanations, the docs are pretty complete.
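Alternatively, if you really do want the bare logging.debug(...) call to pick up the format, a minimal sketch (assuming the same formatter and handler as above) is to attach the handler to the root logger via the '' key, since the module-level logging functions log through the root logger:

import logging.config

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {'standard': {'format': "[%(asctime)s] [%(levelname)8s] - %(message)s", 'datefmt': "%d-%b-%Y %I:%M:%S %p"}},
    'handlers': {'default': {'level': 'DEBUG', 'formatter': 'standard', 'class': 'logging.StreamHandler', 'stream': 'ext://sys.stdout'}},
    # '' is the root logger, so logging.debug(...) at module level also uses this handler
    'loggers': {'': {'handlers': ['default'], 'level': 'DEBUG'}}
})

logging.debug("Only thing that should be printed")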

Python ValueError while attempting a dictionary based logging configuration

I'm trying to code a dictionary based logging configuration and have been stumped by a ValueError that occurs when I run the program. I've stripped it down to the essentials and the problem remains. I've read the 3.5 docs, logging HOWTO, Logging Cookbook, etc. but unfortunately, the solution has not presented itself. Any help would be appreciated.
Also, I'm only 3 weeks into python so I may just be out of my depth at this point. Here's the code...
import logging.config

log_config = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose_formatter': {
            'format': '%(levelname)s: %(name)s: %(asctime)s.%(msecs).03d : '
                      '%(message)s: %(process)s: %(processName)s',
            'datefmt': '%Y-%m-%d %H:%M:%S'
        },
        'precise_formatter': {
            'format': '%(levelname)s: %(name)s: %(asctime)s.%(msecs).03d : '
                      '%(message)s',
            'datefmt': '%Y-%m-%d %H:%M:%S'
        },
        'brief_formatter': {
            'format': '%(levelname)s: %(message)s'
        }
    },
    'handlers': {
        'con_handler': {
            'class': 'logging.StreamHandler',
            'level': 'DEBUG',
            'formatter': 'precise_formatter',
            'stream': 'ext://sys.stdout'
        },
        'file_handler': {
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': 'logger.log',
            'maxBytes': 1048576,
            'backupCount': 4,
            'level': 'DEBUG',
            'formatter': 'precise_formatter',
            'encoding': 'utf8'
        }
    },
    'loggers': {
        'level': 'DEBUG',
        'handlers': ['con_handler', 'file_handler']
    }
}

logging.config.dictConfig(log_config)
logger = logging.getLogger(__name__)
logger.critical('This should always be seen!')
When run, I receive the following:
ValueError was unhandled by user code
Message: Unable to configure logger 'handlers': 'ConvertingList' object has no attribute 'get'
or sometimes this...
ValueError was unhandled by user code
Message: Unable to configure logger 'level': 'str' object has no attribute 'get'
I suspect that the different errors may have to do with the sometimes changing order of the dictionary?
Change the loggers section to
'loggers': {
    '': {
        'level': 'DEBUG',
        'handlers': ['con_handler', 'file_handler']
    }
}
The '' (empty string) refers to the root logger. You can add more loggers for different components:
'loggers': {
    '': {
        'level': 'DEBUG',
        'handlers': ['con_handler', 'file_handler']
    },
    'bottle': {  # I only want error level from bottle :)
        'level': 'ERROR',
        'handlers': ['con_handler', 'file_handler']
    }
}
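For illustration, a small hedged usage sketch of how those entries behave once the config is applied ('myapp' is just a made-up name with no entry of its own):

import logging

logging.getLogger('bottle').info('suppressed: below the ERROR level configured for bottle')
logging.getLogger('bottle').error('emitted: at or above ERROR')
logging.getLogger('myapp').debug('emitted: unconfigured loggers propagate to the root logger, which is at DEBUG')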
To configure the root logger, use a root key in your log_config dictionary.
root - this will be the configuration for the root logger.
Source: Dictionary Schema Details
Following this description your config should look something like this:
log_config = {
    ...
    'handlers': {
        'con_handler': ...,
        'file_handler': ...
    },
    'loggers': {
        'other_logger': ...
    },
    'root': {
        'level': 'DEBUG',
        'handlers': ['con_handler', 'file_handler']
    }
}
