Info logger is not printing - python-3.x

Here is my code
import logging
logger = logging.getLogger('test')
logger.setLevel(level=logging.INFO)
logger.info('Hello World')
I expect it to print 'Hello World', but it does not.
Could someone help me understand why the message is not printed?

You haven't specified a handler for your logger, so the record is passed to the root logger's last-resort handler, which only emits messages at level WARNING and above.
The root logger can be configured as follows:
logging.basicConfig(level=logging.INFO)
Alternatively you can add a handler that forwards the messages to stderr:
logger.addHandler(logging.StreamHandler())
This behavior is documented in the Python logging documentation.
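Putting both pieces together, here is a minimal sketch of the question's snippet with a handler attached (same logger name and level as the question):
import logging

logger = logging.getLogger('test')
logger.setLevel(logging.INFO)

# A StreamHandler created with no arguments writes to sys.stderr.
logger.addHandler(logging.StreamHandler())

logger.info('Hello World')  # now emits: Hello World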

Related

Capture logging messages in Python 3

I would like to capture the logs printed by a Python program and save them to either a variable or a file. Is there any way to achieve this without adding handlers or modifying the logger config? (This is because the logger class will be used by many other modules, and we want it to stay generic.)
Code snippet:
import logging
from io import StringIO
from contextlib import redirect_stdout

logging.basicConfig(level=logging.DEBUG, format="%(asctime)s %(levelname)s %(message)s")
logger = logging.getLogger(__name__)

with StringIO() as buf, redirect_stdout(buf), open("test.txt", "w+") as f:
    logger.debug("Debug message")
    logger.info("Info message")
    logger.error("Error message")
    buf.flush()
    f.write(buf.getvalue())
Console output:
xxxx-xx-xx xx:xx:xx,xxx DEBUG Debug message
xxxx-xx-xx xx:xx:xx,xxx INFO Info message
xxxx-xx-xx xx:xx:xx,xxx ERROR Error message
What confuses me is: since the logger prints all logs to stdout by default, redirecting stdout using a context manager should do the trick. However, the logs are still printed to the console, and nothing is written to the file. Any ideas?
The logging library already has a utility for that.
Suppose you want to start logging events to the file my.log:
import logging
logging.basicConfig(filename='my.log', level=logging.DEBUG)
logging.info('Some info')
logging.warning('Some warning')
logging.debug('Debug messages')
For more, check the Python documentation.
Edit: OP asked if there is an alternative way to do this without the basicConfig method. Here is another approach that uses a file handler: declare the handler separately, then attach it to a logger. (Note that basicConfig returns None, so its result must not be assigned to the logger.)
logging.basicConfig(level=logging.DEBUG, format="%(asctime)s %(levelname)s %(message)s")
logger = logging.getLogger(__name__)
# create a file handler and attach it to the logger
fh = logging.FileHandler('my.log')
fh.setLevel(logging.DEBUG)
logger.addHandler(fh)
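As an aside, the reason the redirect_stdout approach in the question captures nothing: the default StreamHandler that basicConfig installs writes to sys.stderr, not sys.stdout, and it stores a reference to that stream when it is created, so swapping sys.stdout (or even sys.stderr) afterwards has no effect on it. If passing a stream at configuration time is acceptable, here is a minimal sketch that captures the output in a StringIO using basicConfig's standard stream parameter:
import logging
from io import StringIO

buf = StringIO()
# Point the root handler at the buffer from the start, instead of
# trying to redirect the stream after the handler has been created.
logging.basicConfig(level=logging.DEBUG,
                    format="%(asctime)s %(levelname)s %(message)s",
                    stream=buf)
logger = logging.getLogger(__name__)

logger.info("Info message")

# The captured log lines can now be saved to a variable or a file.
with open("test.txt", "w") as f:
    f.write(buf.getvalue())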

Python Logging leaving file empty

I know there are a lot of questions related to this but I haven't found the one that applies to my case.
I'm running a script (Python 3.7.0) on Windows that should log some events, but it is only creating the empty file log_minera.log.
The logging level seems to be OK, as do the write mode and the handler attached to the logger... I suspect that closing the window just kills unflushed streams so nothing ever gets written, but adding a line to flush doesn't help either, whether I hit ENTER or close the window. Help please!
import logging

logger = logging.getLogger(__name__)
handler = logging.FileHandler('log_minera.log', mode='w')
formatter = logging.Formatter('* %(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
handler.setLevel(logging.INFO)
logger.addHandler(handler)

while True:
    logger.info('info to be logged')
    # code...
    logger.error('other info related to errors')
    # more code
    # logger.handlers[0].flush()  # <- does nothing
    answer = input('Press ENTER to repeat or close the window to exit.')
You do not set the level of your custom logger, so it inherits its effective level from its parent, in your case probably the root logger.
I tried your code, and in my setup the logger has an effective level of WARNING. That means "info to be logged" will not get logged to the file.
On your setup, the root logger's level may be set to something higher than ERROR, perhaps CRITICAL, in which case nothing below logger.critical would be written at all.
Either set the root logger's level to INFO or set the logger's level to INFO explicitly, like this:
import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO) # <- HERE
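For completeness, a sketch of the question's script with that one line added (same handler, file name, and format as the question):
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)  # without this, the effective level is WARNING

handler = logging.FileHandler('log_minera.log', mode='w')
formatter = logging.Formatter('* %(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
handler.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info('info to be logged')  # now reaches the file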

Proper error logging for node applications

I am developing an express project which will have multiple modules/services in it. The folder structure looks mostly like this:
-- app.js
-- payment_service
   -- routes.js
   -- index.js
   -- models
      -- model_1.js
      -- model_2.js
APIs in index.js are the only exposed APIs and they work as a gateway for all requests coming for this module/service.
Most of the services can throw operational errors under many circumstances, so manual intervention may be needed to fix things. So I need to:
Log errors with enough context that a person or script can act on them.
Figure out the reason for the failure.
There will be a dedicated team owning each service, so I should be able to differentiate between the error logs of each service so that they can be aggregated and forwarded to the person concerned.
I decided to go with the ELK stack so that I can generate reports by script.
The main problem I am facing is that I can't maintain correlation between logs. For example, if a request travels through five functions and each function logs something, I can't relate those logs to one another.
One way is to create a child logger for each request and pass it to all the functions, but passing a logger instance into every function seems like extra overhead.
Another option is to use something like verror and do the logging only at the entry point of the service/module, so that the whole context is contained in the log. This approach looks fine for error logging, but it can't help with info and debug logs, which help me a lot during development and testing.
For the sake of differentiating between error logs, I am going to create
A dedicated logger for each service with log level error.
An application wide generic logger for info and debug purpose.
Is this the correct approach?
What will be the best way so that I can achieve all the requirements in simplest way?
I'd recommend you use a logger and you don't need anything too complex. For example:
npm install 12factor-log
Then create a file in your root folder near app.js (or in a /lib folder, which is where I'd place libraries):
logger.js
const Log = require('12factor-log');
module.exports = (params) => {
  return new Log(params);
};
Then in your modules, import your logger and pass in the module name when you instantiate it so you can track where statements come from...
model_1.js
var log = require('./logger')({name: 'model_1'});
// ...
log.info("Something happened here");
// ...
try {
  // ...
} catch (error) {
  const message = `Error doing x, y, z with value ${val}`;
  log.error(message);
  throw new Error(message);
}
Then handle the error gracefully at your controller -> view layer for a user-friendly experience.
Your logs would print something like this:
{"ts":"2018-04-27T16:37:24.914Z","msg":"Something happened here","name":"model_1","type":"info","level":3,"hostname":"localhost","pid":18}
As for correlating logs: the output above includes the hostname of the machine it runs on, along with the module name and severity level. You can import this JSON into Logstash and load it into Elasticsearch, which stores JSON for easy search and indexing.
See: https://www.elastic.co/guide/en/logstash/current/plugins-filters-json.html
Logging is complex and many people have worked on it. I would suggest not doing so yourself.
So, not following my own advice, I created my own logging package:
https://www.npmjs.com/package/woveon-logger
npm install woveon-logger
This prints file and line numbers of errors and messages, has logging levels and aspect-oriented logging, and can dump a stack trace in one call. It even has color coding options. If you get stuck and need some feature in logging, let me know.
let log1 = new Logger('log1', {level : 'info', debug : true, showname : true});
let log2 = new Logger('log2', {level : 'verbose', debug : true, showname : true});
...
log1.info('Here is a log message, that is on line 23.');
log1.verbose('Does not show');
log2.verbose('Shows because log2 is verbose logging');
log2.setAspect('IO', true);
log2.aspect('IO', 'Showing aspect IO logging, for logs for IO related operations');
[2018-06-10T10:43:20.692Z] [INFO--] [log1 ] [path/to/myfile:23] Here is a log message, that is on line 23.
[2018-06-10T10:43:20.792Z] [VERBOS] [log2 ] [path/to/myfile:25] Shows because log2 is verbose logging
[2018-06-10T10:43:20.892Z] [IO----] [log2 ] [path/to/myfile:27] Showing aspect IO logging, for logs for IO related operations
Also, some other features like:
log1.throwError('Logs this as both a line of logging, and throws the error with the same message');
log1.printStack('Prints this label next to the stack trace.');
Hope it helps!
You can use the grackle_tracking library: https://www.getgrackle.com/analytics_and_tracking
It logs errors and traffic to your database.

Logging module to send all unsent messages to another file (Python)

I am trying to log messages to a log file, then shut logging down and send all subsequent messages to a new log file.
But I am observing that messages still go only to the previously created log file. How can I fix this?
Below is my code:
logger_name = 'create_request'
request_Create_log = Code_vars.Requests_path+'Request_creation.log'
formatter = "%(asctime)s %(levelname)s %(message)s"
logging.basicConfig(filename=request_Create_log,filemode='a',level=logging.DEBUG,format=formatter)
req_logger = logging.getLogger(logger_name)
logging.shutdown()
#new logfile creation
logger_name = 'request_'+str(req_id)
formatter = "%(asctime)s %(levelname)s %(message)s"
logging.basicConfig(filename=log_file,filemode='a',level=logging.DEBUG,format=formatter)
logger = logging.getLogger(logger_name)
From logging.shutdown() documentation:
Informs the logging system to perform an orderly shutdown by flushing and closing all handlers. This should be called at application exit and no further use of the logging system should be made after this call.
So first, you're using it wrong if you just want to switch your handlers/config; it's not intended for that. Second, notice that it says it closes handlers; it does not remove them, so they are still registered with the system. What you might want instead is to clean up the old handlers:
logger = logging.getLogger()   # get the root logger
handlers = logger.handlers[:]  # make a copy of the current handlers
for handler in handlers:       # go through all attached handlers and...
    logger.removeHandler(handler)  # ...remove them
And then rebuild your loggers to do what you want.
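On Python 3.8 and later there is also a shortcut for exactly this: basicConfig(force=True) removes and closes any existing root handlers before applying the new configuration. A sketch using the question's first file name (the second file name is made up for illustration):
import logging

fmt = "%(asctime)s %(levelname)s %(message)s"
logging.basicConfig(filename='Request_creation.log', filemode='a',
                    level=logging.DEBUG, format=fmt)
logging.info('goes to the first file')

# force=True closes and removes the old root handlers first,
# so this record goes to the new file.
logging.basicConfig(filename='request_new.log', filemode='a',
                    level=logging.DEBUG, format=fmt, force=True)
logging.info('goes to the second file')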

Python logging, suppressing output

Here is a simple example that gets a new logger and attempts to log at two levels:
import logging
log = logging.getLogger("MyLog")
log.setLevel(logging.INFO)
log.info("hello")
log.debug("world")
If I call logging.basicConfig(level=logging.INFO) right after importing, "hello" will print, but not "world" (which seems strange, since I logged "world" with a debug call and expected it to appear).
How can the logging API be adjusted so that all built-in levels are printed to stdout?
If you call basicConfig with level X, log messages at levels not covered by X will not be printed.
You called logging.basicConfig(level=logging.INFO). Here, logging.INFO does not cover logging.DEBUG.
Maybe you wanted it the other way around:
logging.basicConfig(level=logging.DEBUG)
This prints both info and debug output:
import logging
logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("MyLog")
log.setLevel(logging.DEBUG)
log.info("hello")
log.debug("world")
