Logging in Python Using a JSON Config File - python-3.x

I am trying to log from Python using a JSON config file which controls the configuration of the logger. Below is a sample config I am using:
{
    "version": 1,
    "disable_existing_loggers": true,
    "formatters": {
        "json": {
            "format": "%(asctime)s %(levelname)s %(filename)s %(lineno)s %(message)s"
        }
    },
    "handlers": {
        "json": {
            "class": "logging.StreamHandler",
            "formatter": "json"
        }
    },
    "loggers": {
        "": {
            "handlers": ["json"],
            "level": 20
        }
    }
}
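For reference, I load this file by parsing it with json.load and passing the result to logging.config.dictConfig, roughly like this (the filename logging_config.json is just what I happen to use):

import json
import logging
import logging.config

# Parse the JSON file and hand the resulting dict to dictConfig
with open("logging_config.json") as f:
    config = json.load(f)
logging.config.dictConfig(config)

logger = logging.getLogger(__name__)
logger.info("configured from JSON")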
This works fine, but is there any documentation that lists the different formatters that are available, or similarly the different handlers that can be used?

Handlers
The different handlers you can choose from are listed in the docs. Among the different handlers, there are:
StreamHandler that "sends logging output to streams such as sys.stdout, sys.stderr or any file-like object"
FileHandler that "sends logging output to a disk file"
SocketHandler that "sends logging output to a network socket"
SysLogHandler that "supports sending logging messages to a remote or local Unix syslog".
...
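Any of these can be swapped into the JSON config from the question by changing the handler's class and supplying that handler's constructor arguments as extra keys (dictConfig passes keys other than class, level, formatter, and filters through to the handler's constructor). A sketch, not from the original answer, using a rotating file handler; the filename and size values are made up:

"handlers": {
    "rotating_file": {
        "class": "logging.handlers.RotatingFileHandler",
        "formatter": "json",
        "filename": "app.log",
        "maxBytes": 1048576,
        "backupCount": 3
    }
}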
Formatters
A logging.Formatter can use any of the LogRecord attributes (listed in the logging docs) inside its fmt string.
For example,
"%(asctime)s %(levelname)s %(name)s %(message)s"
# gives
2023-02-07 17:24:07,981 INFO foobar This is a message.
----
"%(name)s - %(levelname)s - %(pathname)s - %(filename)s - %(funcName)s - %(asctime)s - %(process)d - %(message)s"
# gives
loggername - INFO - default_formatter.py - default_formatter.py - my_func_name - 2023-02-07 17:25:02,734 - 380334 - This is an info message.
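The same fmt strings work when building a Formatter directly in code rather than through dictConfig; a minimal sketch (the logger name foobar is purely illustrative):

import logging

# Attach a formatter to a stream handler by hand
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)s %(name)s %(message)s"))

logger = logging.getLogger("foobar")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("This is a message.")
# -> 2023-02-07 17:24:07,981 INFO foobar This is a message.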

Related

Python logging with dictConfig and using GMT / UTC zone time

I'm trying to configure some logging across multiple modules within a package. These modules may be executed in different containers, possibly in different regions, so I wanted to explicitly use GMT / UTC time for logging.
When reading about the Formatter class in logging, the docs indicate you can specify a converter to use either local time or GMT. I'd like to use this feature in conjunction with dictConfig (or possibly fileConfig) to specify the configurations for the different modules, but the documentation is sparse with respect to this feature. Everything specified in the config works except for the timezone: the log always uses local time. I can include the '%z' formatting specification in datefmt to show the offset from GMT, but that breaks the .%(msecs)03d formatting.
Below is my code using a defined dictionary and dictConfig. Has anyone had any success specifying the timezone in the config? Is this possible?
import json
import logging
from logging.config import dictConfig
import time

DEFAULT_CONFIG = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'standard': {
            'format': '%(levelname)s : %(name)s : %(asctime)s.%(msecs)03d : %(message)s',
            'datefmt': '%Y-%m-%d %H.%M.%S',
            'converter': time.gmtime  # also fails with 'time.gmtime', 'gmtime'
        }
    },
    'handlers': {
        'default': {
            'level': 'NOTSET',
            'class': 'logging.StreamHandler',
            'formatter': 'standard',
        }
    },
    'loggers': {
        'TEST': {  # logging from this module should be logged in DEBUG level
            'handlers': ['default'],
            'level': 'INFO',
            'propagate': False,
        },
    },
    'root': {
        'level': 'INFO',
        'handlers': ['default']
    },
    'incremental': False
}

if __name__ == '__main__':
    dictConfig(DEFAULT_CONFIG)
    logger = logging.getLogger('TEST')
    logger.debug('debug message')  # this should not be displayed if level==INFO
    logger.info('info message')
    logger.warning('warning message')
    logger.error('error message')
Output:
INFO : TEST : 2022-07-07 17.20.13.434 : info message
WARNING : TEST : 2022-07-07 17.20.13.435 : warning message
ERROR : TEST : 2022-07-07 17.20.13.435 : error message
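One approach that works (a sketch based on the UTCFormatter example in the logging cookbook, not an answer from this thread): subclass Formatter with converter = time.gmtime and reference the subclass through the special "()" factory key. Note that with "()", the remaining keys are passed to the factory as constructor keyword arguments, so the format string goes under fmt rather than format:

import logging
import time
from logging.config import dictConfig


class UTCFormatter(logging.Formatter):
    # Formatter.converter controls how %(asctime)s is computed;
    # time.gmtime renders timestamps in UTC instead of local time.
    converter = time.gmtime


UTC_CONFIG = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'standard': {
            '()': UTCFormatter,  # custom formatter factory
            'fmt': '%(levelname)s : %(name)s : %(asctime)s.%(msecs)03d : %(message)s',
            'datefmt': '%Y-%m-%d %H.%M.%S',
        }
    },
    'handlers': {
        'default': {
            'class': 'logging.StreamHandler',
            'formatter': 'standard',
        }
    },
    'root': {'level': 'INFO', 'handlers': ['default']},
}

if __name__ == '__main__':
    dictConfig(UTC_CONFIG)
    logging.getLogger('TEST').info('the asctime above is now in UTC')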

Adding Python logging to FastAPI endpoints, hosted on Docker, doesn't display API endpoint logs

I have a FastAPI app to which I want to add Python logging. I followed the basic tutorial and added this; however, it only adds gunicorn logging, not logs from the API endpoints.
The server is hosted locally with Docker (built with docker build, started with docker-compose up), and I test my endpoints with an API client (Insomnia, similar to Postman).
Below is the code; no log file is created, and hence no log statements are added.
My project structure is as follows:
project/
    src/
        api/
            models/
                users.py
            routers/
                users.py
        main.py
        logging.conf
"""
main.py Main is the starting point for the app.
"""
import logging
import logging.config
from fastapi import FastAPI
from msgpack_asgi import MessagePackMiddleware
import uvicorn
from api.routers import users
logger = logging.getLogger(__name__)
app = FastAPI(debug=True)
app.include_router(users.router)
#app.get("/check")
async def check():
"""Simple health check endpoint."""
logger.info("logging from the root logger")
return {"success": True}
Also, I am using a gunicorn.conf that looks like this:
[program:gunicorn]
command=poetry run gunicorn -c /etc/gunicorn/gunicorn.conf.py foodgame_api.main:app
directory=/var/www/
autostart=true
autorestart=true
redirect_stderr=true
And gunicorn.conf.py as:
import multiprocessing
bind = "unix:/tmp/gunicorn.sock"
workers = multiprocessing.cpu_count() * 2 + 1
worker_class = "uvicorn.workers.UvicornWorker"
loglevel = "debug"
errorlog = "-"
capture_output = True
chdir = "/var/www"
reload = True
reload_engine = "auto"
accesslog = "-"
access_log_format = '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"'
My terminal output for the above API endpoint on Docker was attached as a screenshot (not reproduced here).
Could anyone please guide me here? I am new to FastAPI, so some help will be appreciated.
Inspired by @JPG's answer, but using a pydantic model looked cleaner.
You might want to expose more variables. This config worked well for me.
from pydantic import BaseModel


class LogConfig(BaseModel):
    """Logging configuration to be set for the server"""

    LOGGER_NAME: str = "mycoolapp"
    LOG_FORMAT: str = "%(levelprefix)s | %(asctime)s | %(message)s"
    LOG_LEVEL: str = "DEBUG"

    # Logging config
    version: int = 1
    disable_existing_loggers: bool = False
    formatters: dict = {
        "default": {
            "()": "uvicorn.logging.DefaultFormatter",
            "fmt": LOG_FORMAT,
            "datefmt": "%Y-%m-%d %H:%M:%S",
        },
    }
    handlers: dict = {
        "default": {
            "formatter": "default",
            "class": "logging.StreamHandler",
            "stream": "ext://sys.stderr",
        },
    }
    loggers: dict = {
        LOGGER_NAME: {"handlers": ["default"], "level": LOG_LEVEL},
    }
Then import it into your main.py file as:
from logging.config import dictConfig
import logging
from .config import LogConfig
dictConfig(LogConfig().dict())
logger = logging.getLogger("mycoolapp")
logger.info("Dummy Info")
logger.error("Dummy Error")
logger.debug("Dummy Debug")
logger.warning("Dummy Warning")
Which gives the output shown in the original answer's screenshot (not reproduced here).
I would use a dict log config.
Create a logger config as below:
# my_log_conf.py

log_config = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "default": {
            "()": "uvicorn.logging.DefaultFormatter",
            "fmt": "%(levelprefix)s %(asctime)s %(message)s",
            "datefmt": "%Y-%m-%d %H:%M:%S",
        },
    },
    "handlers": {
        "default": {
            "formatter": "default",
            "class": "logging.StreamHandler",
            "stream": "ext://sys.stderr",
        },
    },
    "loggers": {
        "foo-logger": {"handlers": ["default"], "level": "DEBUG"},
    },
}
Then load the config using the dictConfig function:
from logging.config import dictConfig
from fastapi import FastAPI
from some.where.my_log_conf import log_config
dictConfig(log_config)
app = FastAPI(debug=True)
Note: It is recommended to call the dictConfig(...) function before the FastAPI initialization.
After the initialization, you can use the logger named foo-logger anywhere in your code:
import logging
logger = logging.getLogger('foo-logger')
logger.debug('This is test')

Unable to send newly updated events from filebeat to logstash

filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - C:\Office\Work\processed_fileChaser_2020-05-22_12_56.log
  close_eof: true
  close_inactive: 10s
  scan_frequency: 15s
  tail_files: true

output.logstash:
  # The Logstash hosts
  hosts: ["XX.XXX.XXX.XX:5044"]
------------------------ logstash config -----------------------------------
input {
  beats {
    port => 5044
  }
}

filter {
  json {
    source => "message"
  }
  mutate {
    remove_field => ["tags", "ecs", "log", "agent", "message", "input"]
  }
}

output {
  file {
    path => "C:/Softwares/logstash-7.5.2/output/dm_filechaser.json"
  }
}
Sample input:
{"fileName": "DM.YYYYMMDD.TXT", "country": "TW", "type": "TW ", "app_code": "DM", "upstream": "XXXX", "insertionTime": "2020-05-20T13:01:04+08:00"}
{"fileName": "TRANSACTION.YYYYMMDD.TXT", "country": "TW", "type": "TW", "app_code": "DM", "upstream": "XXXX", "insertionTime": "2020-05-22T13:01:04+08:00"} ```
Versions:
Logstash: 7.5.2
Filebeat: 7.5.2
Filebeat is unable to send new data and always sends the same data to Logstash.
Can you please help with how to send newly updated data from the log file to Logstash?
Filebeat logs:
2020-05-22T15:39:30.630+0530 ERROR logstash/async.go:256 Failed to publish events caused by: read tcp 10.10.242.48:53400->10.10.242.48:5044: i/o timeout
2020-05-22T15:39:30.632+0530 ERROR logstash/async.go:256 Failed to publish events caused by: client is not connected
2020-05-22T15:39:32.605+0530 ERROR pipeline/output.go:121 Failed to publish events: client is not connected
2020-05-22T15:39:32.605+0530 INFO pipeline/output.go:95 Connecting to backoff(async(tcp://10.10.242.48:5044))
2020-05-22T15:39:32.606+0530 INFO pipeline/output.go:105 Connection to backoff(async(tcp://10.10.242.48:5044)) established
2020-05-22T15:39:44.588+0530 INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":1203,"time":{"ms":32}},"total":{"ticks":1890,"time":{"ms":48},"value":1890},"user":{"ticks":687,"time":{"ms":16}}},"handles":{"open":280},"info":{"ephemeral_id":"970ad9cc-16a8-4bf1-85ed-4dd774f0df42","uptime":{"ms":61869}},"memstats":{"gc_next":11787664,"memory_alloc":8709864,"memory_total":16065440,"rss":2510848},"runtime":{"goroutines":26}},"filebeat":{"events":{"active":2,"added":2},"harvester":{"closed":1,"open_files":0,"running":0,"started":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"batches":2,"failed":4,"total":4},"read":{"errors":1},"write":{"bytes":730}},"pipeline":{"clients":1,"events":{"active":2,"filtered":2,"retry":6,"total":2}}},"registrar":{"states":{"current":1}}}}}
2020-05-22T15:39:44.657+0530 INFO log/harvester.go:251 Harvester started for file: C:\Office\Work\processed_fileChaser_2020-05-22_12_56.log

Python ValueError while attempting a dictionary-based logging configuration

I'm trying to code a dictionary-based logging configuration and have been stumped by a ValueError that occurs when I run the program. I've stripped it down to the essentials and the problem remains. I've read the 3.5 docs, logging HOWTO, Logging Cookbook, etc., but unfortunately the solution has not presented itself. Any help would be appreciated.
Also, I'm only three weeks into Python, so I may just be out of my depth at this point. Here's the code...
import logging.config

log_config = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose_formatter': {
            'format': '%(levelname)s: %(name)s: %(asctime)s.%(msecs).03d : '
                      '%(message)s: %(process)s: %(processName)s',
            'datefmt': '%Y-%m-%d %H:%M:%S'
        },
        'precise_formatter': {
            'format': '%(levelname)s: %(name)s: %(asctime)s.%(msecs).03d : '
                      '%(message)s',
            'datefmt': '%Y-%m-%d %H:%M:%S'
        },
        'brief_formatter': {
            'format': '%(levelname)s: %(message)s'
        }
    },
    'handlers': {
        'con_handler': {
            'class': 'logging.StreamHandler',
            'level': 'DEBUG',
            'formatter': 'precise_formatter',
            'stream': 'ext://sys.stdout'
        },
        'file_handler': {
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': 'logger.log',
            'maxBytes': 1048576,
            'backupCount': 4,
            'level': 'DEBUG',
            'formatter': 'precise_formatter',
            'encoding': 'utf8'
        }
    },
    'loggers': {
        'level': 'DEBUG',
        'handlers': ['con_handler', 'file_handler']
    }
}

logging.config.dictConfig(log_config)
logger = logging.getLogger(__name__)
logger.critical('This should always be seen!')
logging.config.dictConfig(log_config)
logger = logging.getLogger(__name__)
logger.critical('This should always be seen!')
When run, I receive the following:
ValueError was unhandled by user code
Message: Unable to configure logger 'handlers': 'ConvertingList' object has no attribute 'get'
or sometimes this...
ValueError was unhandled by user code
Message: Unable to configure logger 'level': 'str' object has no attribute 'get'
I suspect that the different errors may have to do with the dictionary's iteration order sometimes changing?
Change the loggers section to
'loggers': {
    '': {
        'level': 'DEBUG',
        'handlers': ['con_handler', 'file_handler']
    }
}
The '' (empty string) refers to the root logger. You can add more loggers for different components:
'loggers': {
    '': {
        'level': 'DEBUG',
        'handlers': ['con_handler', 'file_handler']
    },
    'bottle': {  # I only want error level from bottle :)
        'level': 'ERROR',
        'handlers': ['con_handler', 'file_handler']
    }
}
To configure the root logger, use a root key in your log_config dictionary.
root - this will be the configuration for the root logger.
Source: Dictionary Schema Details
Following this description your config should look something like this:
log_config = {
...
'handlers': {
'con_handler': ...,
'file_handler': ...
},
'loggers': {
'other_logger': ...
},
'root': {
'level': 'DEBUG',
'handlers': ['con_handler', 'file_handler']
}
}
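Putting the two answers together, a minimal runnable sketch (trimmed from the question's config down to a single console handler):

import logging
import logging.config

log_config = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'brief_formatter': {
            'format': '%(levelname)s: %(message)s'
        }
    },
    'handlers': {
        'con_handler': {
            'class': 'logging.StreamHandler',
            'level': 'DEBUG',
            'formatter': 'brief_formatter',
            'stream': 'ext://sys.stdout'
        }
    },
    # 'root' (or a '' entry under 'loggers') configures the root logger
    'root': {
        'level': 'DEBUG',
        'handlers': ['con_handler']
    }
}

logging.config.dictConfig(log_config)
logger = logging.getLogger(__name__)
logger.critical('This should always be seen!')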

How can I make log4js on cygwin write exactly what I want to file?

I have a node.js project on cygwin, which I test with mocha.
My logger is log4js.
I have an appender to get log4js to write to file, but I don't know how to get it to write exactly what I want.
Must-haves:
A different file for each time mocha runs (it's OK if mocha clobbers the old log files)
Unix line endings
Nice-to-have:
No color encodings in the file (I can get around this when mocha is run with -C, but ideally there would be something in log4js-config.js.)
How do I do this?
My current setup is as follows.
My log4js-config.js:
var log4js = require('log4js');
var log4js_config = {
"appenders": [{
"type": "console",
"layout": {
"type": "pattern",
"pattern": "%[%d{yyyy-MM-ddThh:mm:ss.SSS} [pid=%x{pid}] %p %c -%] %m",
"tokens": {pid: function(){return process.pid;}}
}
},{
"type": "file",
"filename": "jxg_log.log",
"layout": {
"type": "pattern",
"pattern": "%d{yyyy-MM-ddThh:mm:ss.SSS} [pid=%x{pid}] %p %c - %m",
"tokens": {pid: function(){return process.pid;}}
}
}],
replaceConsole: true
};
log4js.configure(log4js_config, {});
exports.logging = log4js;
My test file is:
var assert = require("assert");
var logger = require('../src/log4js-config').logging.getLogger('Mocha-Test');
describe('Array', function(){
describe('#indexOf()', function(){
it('should return -1 when the value is not present', function(){
logger.debug("logger debug line 1");
assert.equal(-1, [1,2,3].indexOf(5));
assert.equal(-1, [1,2,3].indexOf(0));
logger.debug("logger debug last line");
})
})
})
(For the purposes of this question, there is no code under test. I run mocha with ./node_modules/mocha/bin/mocha.)
The log file that is output is this:
2014-11-24T13:45:00.012 [pid=90652] INFO console - %m
2014-11-24T13:45:00.018 [pid=90652] INFO console - %m
2014-11-24T13:45:00.019 [pid=90652] INFO console - Array
2014-11-24T13:45:00.020 [pid=90652] INFO console - #indexOf()
2014-11-24T13:45:00.021 [pid=90652] DEBUG Mocha-Test - logger debug line 1
2014-11-24T13:45:00.021 [pid=90652] DEBUG Mocha-Test - logger debug last line
2014-11-24T13:45:19.090 [pid=90180] INFO console - %m
2014-11-24T13:45:19.093 [pid=90180] INFO console - %m
2014-11-24T13:45:19.095 [pid=90180] INFO console - Array
2014-11-24T13:45:19.095 [pid=90180] INFO console - #indexOf()
2014-11-24T13:45:19.096 [pid=90180] DEBUG Mocha-Test - logger debug line 1
2014-11-24T13:45:19.096 [pid=90180] DEBUG Mocha-Test - logger debug last line
2014-11-24T13:50:13.716 [pid=88476] INFO console - %m
2014-11-24T13:50:13.720 [pid=88476] INFO console - [0m[0m
2014-11-24T13:50:13.721 [pid=88476] INFO console - [0m Array[0m
2014-11-24T13:50:13.722 [pid=88476] INFO console - [0m #indexOf()[0m
2014-11-24T13:50:13.722 [pid=88476] DEBUG Mocha-Test - logger debug line 1
2014-11-24T13:50:13.723 [pid=88476] DEBUG Mocha-Test - logger debug last line
Note three mocha runs here; the third time, I left out the -C to show the color encoding.
I can't show the line endings directly in the question, but there's this:
$ file jxg_log.log
jxg_log.log: ASCII text, with CRLF line terminators, with escape sequences
So, again: how can I overwrite the log file on subsequent runs, output unix newlines, and suppress the color codes?
