python logging - is there any way to pass the log as an argument? - python-3.x

In my project I am using the logging module to write logs into a local .log file. In addition, I want to pass the same log records to another function that documents them locally with a circular-queue algorithm.
Is it possible to configure the logger to do this?
Thanks.
The current logging config:
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
formatter = logging.Formatter("<SOME FORMAT>")
file_handler = logging.FileHandler('logfile.log')
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)
UPDATE: SOLVED - imriqwe's answer here https://stackoverflow.com/a/36408692 helped me figure it out.

I think the thread Python logging to multiple handlers, at different log levels? answers your question; it shows how to add multiple handlers, e.g. one file handler and one stream handler.
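For the circular-queue part specifically, here is a minimal sketch, assuming you are happy to keep the newest records in memory (the CircularQueueHandler class and the maxlen value are my own illustration, not part of the linked answer): subclass logging.Handler and push each formatted record into a collections.deque, alongside the existing file handler.
import logging
from collections import deque

class CircularQueueHandler(logging.Handler):
    """Keep only the most recent records; old ones drop off automatically."""
    def __init__(self, maxlen=100):
        super().__init__()
        self.records = deque(maxlen=maxlen)  # bounded deque acts as a circular buffer

    def emit(self, record):
        self.records.append(self.format(record))

queue_handler = CircularQueueHandler(maxlen=100)
queue_handler.setFormatter(formatter)   # reuse the formatter defined above
logger.addHandler(queue_handler)        # the logger keeps its FileHandler too
Because both handlers are attached to the same logger, every record goes to the file and to the in-memory queue, and another function can read queue_handler.records.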

Related

Python logging, get DEBUG message from only my code

You know when you set the root logger in Python to DEBUG, and it annoyingly prints debug messages from whatever libraries you're using (PyTorch, NumPy, Matplotlib, whatever)? And then you have to manually turn them off in code, only for more debug logs from other libraries that you install later to appear?
What I want is a switch that turns all of my loggers (that I got from logging.getLogger(__name__)) to level DEBUG, but not anything else. I initially thought about having a "root" logger for all of my code, named my_root, say, and defining my own get_logger() that wraps around logging.getLogger() by prepending my_root to the names of all the loggers. But then, my "root" logger my_root is still the child of the real root logger, and I can't switch my_root to DEBUG without also switching the root logger to DEBUG (as far as I understand), which I don't want to do. Is there a (preferably elegant) way to get what I want?
I guess the equivalent question is, can I set a child logger's level lower than its parent logger?
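For what it's worth, the answer to that last question is yes: a logger's effective level is the first explicitly set level found walking up the tree, and when a record propagates upward, the levels of ancestor loggers are not consulted (only their handlers' levels are). So the wrapper idea should work; a minimal sketch, using the question's own hypothetical my_root and get_logger names:
import logging

def get_logger(name):
    # Every project logger becomes a child of the "my_root" logger.
    return logging.getLogger("my_root." + name)

logging.basicConfig(level=logging.WARNING)            # third-party code stays quiet
logging.getLogger("my_root").setLevel(logging.DEBUG)  # project loggers get DEBUG

log = get_logger(__name__)
log.debug("visible")                               # passes my_root's DEBUG level
logging.getLogger("some.library").debug("hidden")  # falls back to root's WARNING
One caveat: any handler attached at the root must itself accept DEBUG records; the handler created by basicConfig has no level set, so this works out of the box.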

Structlog different ways to log: msg versus info and debug

I see different ways to use Structlog and I was wondering what the exact difference is.
Let's say I want to log something using Structlog, you could for example use:
logger.msg("My log message")
But there are other ways to log, like info and debug (as in the standard Python logging library), which let you indicate the importance of a message (and filter on it by log level):
logger.info("This is an info message")
logger.debug("This is a debug message")
The question is: what is the advantage of using logger.msg as compared to the other ways to log like info and debug? Why would I choose logger.msg?
msg() is a remnant from the original generic BoundLogger that tried to have both stdlib and Twisted log methods (msg() hailing from the Twisted end).
If you use structlog's internal filtering system via structlog.make_filtering_bound_logger(), it's equivalent to the info log level.
You can safely ignore it.
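To make the equivalence concrete, a small sketch (the configuration values are illustrative):
import logging
import structlog

# With the filtering bound logger, msg() behaves like info().
structlog.configure(
    wrapper_class=structlog.make_filtering_bound_logger(logging.INFO),
)
log = structlog.get_logger()

log.msg("My log message")             # emitted, at the info level
log.info("This is an info message")   # emitted
log.debug("This is a debug message")  # below INFO, so dropped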

Node.js - write CSV file creates empty file in production, while OK in Mocha testing

This gist shows a code snippet that dumps an object into a CSV file.
File writing is done using the csv-write-stream module, and it returns a promise.
This code works flawlessly in all the Mocha tests that I have written.
When the code is invoked by the main Node.js app (a server-side REPL application involving raw process.stdin and console.log invocations as interaction with the user), the CSV file is created, but it's always empty and no error/warning seems to be thrown.
I have debugged the REPL code extensively with node-debug and the Chrome dev tools: I am sure that the event handlers of the WriteStream are working properly: no 'error', all 'data' seems to be handled, 'close' runs, the promise resolves as expected.
Nevertheless, in the latter case the file is always 0 bytes. I have checked several Q&A's and cannot find anything as specific as this.
My questions are:
Can I be missing some errors? How can I be sure to track all communications about the file write?
In which direction could I intensify my investigation? Which other setup could help me isolate the problem?
Since the problem may be due to the presence of process.stdin in the equation, what is a way to create a simple, lightweight interaction with the user without having to write a webapp?
I am working on Windows 7. Node 6.9.4, npm 3.5.3, csv-write-stream 2.0.0.
I managed to fix this issue in two ways, either by:
resolving the promise upon the 'finish' event of the FileWriteStream rather than on the 'end' event of the CSVWriteStream
removing the process.exit() I was using at the end of my operations with process.stdin (this implies that this tutorial page may be in need of some corrections)

Best way to manually periodically import log files into Graylog using logstash

I'm currently using logstash to import dozens of log files from different webapps into Graylog. It works great; the files are tagged so I know from which webapp they originate.
I can't change the webapps, so I can't add a GELF appender to their log4j conf. The idea is to periodically retrieve the log files, parse them, and import them with logstash into Graylog.
My problem is how to make sure I don't import a log event I've already imported.
For example, I have a log file that has a log pattern that increments: log.1, log.2, etc. So I'll have log events that could be in log.1 the first time and 2 weeks later when I reimport them they'll maybe be in log.3.
I'm afraid I can't handle that with logstash's file input "sincedb_path" and "start_position".
So here are a few options I've gathered and I'd like your input about them, if anyone encountered the same issue:
Use a logstash filter dropping all events before a certain date; this requires keeping an index of the last log date of every file imported (potentially 50+) and a lot of configuration writing.
Use a Drools rule in Graylog to refuse logs with timestamps prior to the last log received for a given type.
Ask to change the log pattern to something like log.date instead of a log pattern that renames files (but I'd rather avoid this one).
Any other idea?
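For reference, a rough sketch of the per-file bookkeeping the first option implies, done as a Python preprocessing step before logstash rather than inside it (the index file name and the event format are my own assumptions):
import json
from pathlib import Path

INDEX_FILE = Path("last_imported.json")  # per-source high-water marks

def load_index():
    return json.loads(INDEX_FILE.read_text()) if INDEX_FILE.exists() else {}

def new_events(source, events, index):
    """Yield only events newer than the last timestamp seen for this source."""
    last = index.get(source, "")
    for ts, line in events:    # events: (ISO-8601 timestamp, raw line) pairs
        if ts > last:          # ISO-8601 strings compare chronologically
            index[source] = ts
            yield line

index = load_index()
# ... feed new_events(...) output to logstash, then persist the index:
INDEX_FILE.write_text(json.dumps(index))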

How to truncate org.jasig.cas.authentication.handler.BadCredentialsAuthenticationException Stack trace in log file?

I'm using CAS for a single sign-on solution. My log file (log4j version 1.2.15) completely fills with the org.jasig.cas.authentication.handler.BadCredentialsAuthenticationException stack trace when a user enters invalid login credentials.
Is there a solution to trim the stack trace in CAS or Java?
I can't use log4j's EnhancedPatternLayout to achieve this, as it requires log4j version 1.2.16.
Any suggestions around this problem would be appreciated.
Thanks
I haven't used CAS. However, the way that I've gotten around similar problems in the past is by suppressing log messages from the offending class. For example, if you're using a log4j.properties file, insert this line:
log4j.logger.org.jasig.cas.WhateverClassLogsTheException=OFF
Note that you will need to suppress messages from the class that logs the exception, not the exception class itself. You can also use FATAL or other values to ensure that only log messages at or above the given level are logged. See the log4j docs for more information.
Note that this will suppress all messages from that class, not just the particular log message that produces that exception.
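For example, to keep only the most severe messages from that class instead of silencing it entirely (the class name is a placeholder):
log4j.logger.org.jasig.cas.WhateverClassLogsTheException=FATAL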
The problem is that CAS is passing the Exception object to log4j, so I commented out that line in my overlaid class, BindLdapAuthenticationHandler.java.
