Hello folks!
import getpass
import logging

# These values come from the XML file.
clientID = logObject['meta']['clientID']
authToken = logObject['meta']['authToken']
logType = logObject['logType']
FORMAT = '%(asctime)-15s %(logType)s %(process)d %(user)-8s %(message)s'
d = {'logType': logType, 'user': getpass.getuser()}
# Intended to create two log files (access.log and error.log).
logging.basicConfig(filename = 'access.log', filemode = 'w', format=FORMAT)
logging.basicConfig(filename = 'error.log', filemode = 'w', format=FORMAT)
if(clientID == ""):
# logger = setup_logger('first_logger', 'access.log',logType)
logger.warning('Please Enter clientID', extra=d)
This is my sample code.
What I need is to create the two files mentioned above, but the problem is that only a single file is created every time, and all messages go to that file.
So I want logger.error("msg") and logger.warning("msg") to each go to their own log file.
When you use plain logging calls, you are actually using the single root logger that is created during import logging. You can attach several handlers to it. For example:
import datetime as dt
import logging
import os

# 1. logging to a file
filename = (
    'log_file{}.log'
    .format(dt.datetime.today().strftime("_date_%Y-%m-%d_time_%H-%M-%S")))
# path to the log folder
path = os.path.join(os.getcwd(), 'logs')
# create the log folder if it does not exist
os.makedirs(path, exist_ok=True)
to_file = logging.FileHandler(os.path.join(path, filename))
to_file.addFilter(lambda x: x.levelno in [20, 40])  # 20=INFO, 40=ERROR

# 2. logging to the console
to_console = logging.StreamHandler()
to_console.addFilter(lambda x: x.levelno in [20, 21, 40])  # INFO, custom level 21, ERROR

# 3. root logger configuration
logging.basicConfig(
    level=10,  # DEBUG
    datefmt='%Y-%m-%d %H:%M:%S',
    format='[%(asctime)s]:%(threadName)s:%(levelname)s:%(message)s',
    handlers=[to_console, to_file])
If you want to log into two files, just create two logging.FileHandler(...) handlers, register them, and use the newly configured root logger as usual:
logging.info('Some info')
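Applied to the two files from the question, a minimal sketch (assuming access.log should receive everything below ERROR and error.log only ERROR and above) could look like this:

import logging

# access.log gets everything below ERROR, error.log gets ERROR and above
to_access = logging.FileHandler('access.log', mode='w')
to_access.addFilter(lambda record: record.levelno < logging.ERROR)

to_error = logging.FileHandler('error.log', mode='w')
to_error.setLevel(logging.ERROR)

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s %(levelname)s %(message)s',
    handlers=[to_access, to_error])

logging.warning('ends up in access.log')
logging.error('ends up in error.log')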
Another option is to create two loggers. Usually you need to do so when you want to keep several sources of log messages separate.
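A minimal sketch of that second option (the logger names 'access' and 'errors' are only illustrative), with each named logger writing to its own file:

import logging

def setup_logger(name, filename, level=logging.DEBUG):
    """Create a named logger that writes to its own file."""
    handler = logging.FileHandler(filename, mode='w')
    handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
    logger = logging.getLogger(name)
    logger.setLevel(level)
    logger.addHandler(handler)
    logger.propagate = False  # keep these records away from the root logger's handlers
    return logger

access_logger = setup_logger('access', 'access.log')
error_logger = setup_logger('errors', 'error.log', level=logging.ERROR)

access_logger.warning('ends up in access.log')
error_logger.error('ends up in error.log')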
I recently set up some logging on a new program I'm writing. I'm using a config file (logging.conf) to configure it and then calling it using logging.config.fileConfig(). It was working fine logging to the console, but I also wanted to start writing to a log file. I made the necessary updates and it worked. Then I noticed my log files were in the application working directory. No big deal; I had neglected to prepend the log file name with the path to the logs directory. Once I did that, it refused to generate a file or write to it. No idea why. I put in a logger statement to output the path to the log file and it's valid. If I remove the path, it writes to the program directory again. Any ideas?
Here's how I'm invoking logging.conf:
working_dir = "/app_2/TSM_data_collector"
log_dir = working_dir + "/logs"
raw_data_dir = working_dir + "/rawdata"
clean_data_dir = working_dir + "/cleandata"
log_file_name = log_dir + "/" + datetime.strftime(datetime.now(), '%Y-%m-%d') + "_tsm_orchestrator.log"
logging.config.fileConfig('logging.conf', defaults={'logfilename': log_file_name})
logger = logging.getLogger('TSM_orchestrator')
Here's the contents of logging.conf:
[loggers]
keys=root,TSM_orchestrator
[handlers]
keys=consoleHandler,fileHandler
[formatters]
keys=baseFormatter
[logger_root]
level=DEBUG
handlers=consoleHandler
[logger_TSM_orchestrator]
level=DEBUG
handlers=consoleHandler, fileHandler
qualname=TSM_orchestrator
propagate=0
[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=baseFormatter
args=(sys.stdout,)
[handler_fileHandler]
class=FileHandler
level=DEBUG
formatter=baseFormatter
args=('%(logfilename)s','a')
[formatter_baseFormatter]
format=%(asctime)s,%(levelname)s,%(name)s,%(message)s
datefmt=%Y-%m-%d %H:%M:%S
Finally figured it out...
I had to move all of this:
working_dir = "/app_2/TSM_data_collector"
log_dir = working_dir + "/logs"
raw_data_dir = working_dir + "/rawdata"
clean_data_dir = working_dir + "/cleandata"
log_file_name = log_dir + "/" + datetime.strftime(datetime.now(), '%Y-%m-%d') + "_tsm_orchestrator.log"
logging.config.fileConfig('logging.conf', defaults={'logfilename': log_file_name})
logger = logging.getLogger('TSM_orchestrator')
Below this:
if __name__ == '__main__':
I never thought to even try this because, in the past, I did my own logging and opened the log file above __main__, just below the import statements, which always worked. I thought of trying this today and it finally writes to the log file.
Why would it work for logging to the console when the config was above __main__? If I didn't put in any path, it would write to a file just fine in the main app directory. Even with the path, if I deleted the ../logs/ folder it would complain about the folder missing.
However, I could never get it to actually write output to a file in the ../logs/ directory. Feels like a bug to me, but I'll pin it on my own ignorance for now =^.^=
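For reference, a minimal sketch of the layout that ended up working (reusing the names from the snippets above), with the fileConfig() call moved under the __main__ guard:

import logging
import logging.config
from datetime import datetime

def run():
    logger = logging.getLogger('TSM_orchestrator')
    logger.info('orchestrator starting')

if __name__ == '__main__':
    working_dir = "/app_2/TSM_data_collector"
    log_dir = working_dir + "/logs"
    log_file_name = (log_dir + "/" +
                     datetime.strftime(datetime.now(), '%Y-%m-%d') +
                     "_tsm_orchestrator.log")
    logging.config.fileConfig('logging.conf', defaults={'logfilename': log_file_name})
    run()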
I am trying to create two log files in a Python script:
one for capturing normal log messages
one for capturing exception messages
I have implemented the Python code as below:
Create a log file for logging normal messages
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s-----%(levelname)s-----%(filename)s-----%(funcName)s-----%(message)s', datefmt='%Y-%m-%d %H:%M:%S')
file_handler = logging.FileHandler(r'C:\RAVI\PYTHON_DOCS\Migration_Docs\Logfile.txt', mode='a')
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)
Create an exception log file for logging exception messages in the program
exception_logger = logging.getLogger(__name__)
exception_logger.setLevel(logging.ERROR)
exception_formatter = logging.Formatter('%(asctime)s-----%(levelname)s-----%(filename)s-----%(funcName)s-----%(message)s', datefmt='%Y-%m-%d %H:%M:%S')
exception_file_handler = logging.FileHandler(r'C:\RAVI\PYTHON_DOCS\Migration_Docs\Exception.txt', mode='a')
exception_file_handler.setFormatter(exception_formatter)
exception_logger.addHandler(exception_file_handler)
Below is the code snippet of the method where the above loggers are called:
logger.info('DDLLOAD METHOD START')
file_ddl = file_destination + '*.txt'
try:
    logger.info('Connecting to Snowflake.....')
    ctx = snow.connect(
        user='xdfz',              # Snowflake username
        password='Xyz234',        # Snowflake password
        account='us-east-2.aws',  # https://JZ51866.us-east-2.aws.snowflakecomputing.com
        database='EDW',
        schema='DDL_LOADER'
    )
    logger.info('Connected to Snowflake successfully')
except Exception as ex:
    exception_logger.error("Error encountered while establishing connection to Snowflake: %s", ex)
Currently we use property.basePath = ${spark.yarn.app.container.log.dir}
We have a use case where we are thinking of using a common log4j2.properties file. I wanted to know if there is support for passing app_name as a parameter/argument to be substituted in the log4j2.properties file.
Sample lines in the log4j2.properties file:
appender.console.type = Console
appender.console.name = consoleLogger
logger.app.name = ${appName}
logger.app.appenderRef.console.ref = consoleLogger
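If the application name can be handed to the JVM as a system property, log4j2 can resolve it in the properties file with a ${sys:...} lookup; a rough sketch (the property name appName is an assumption, not something from the question):

# log4j2.properties - appName is supplied at launch time via -DappName=<name>
appender.console.type = Console
appender.console.name = consoleLogger
logger.app.name = ${sys:appName}
logger.app.appenderRef.console.ref = consoleLogger

With Spark, the system property could be passed through spark.driver.extraJavaOptions (and spark.executor.extraJavaOptions for the executors), e.g. -DappName=my_app.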
I need to stop Tornado from logging to STDOUT. I am using Python 3.8 and running on Ubuntu 18.04. I want my log statements to be handled by a rotating file logger only. The issue is that logged statements are written to the file and also to the console:
import logging
from logging.handlers import RotatingFileHandler
logger = logging.getLogger("ex_logger")
nh = logging.NullHandler()
rfh = RotatingFileHandler(filename="./logs/process.log", mode='a', maxBytes=50000000, backupCount=25, encoding=None, delay=False)
rfh.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
rfh.setFormatter(formatter)
logger.handlers = []
logger.propagate = False
logger.addHandler(rfh)
logging.getLogger("tornado.access").handlers = []
logging.getLogger("tornado.application").handlers = []
logging.getLogger("tornado.general").handlers = []
logging.getLogger("tornado.access").addHandler(nh)
logging.getLogger("tornado.application").addHandler(nh)
logging.getLogger("tornado.general").addHandler(nh)
logging.getLogger("tornado.access").propagate = False
logging.getLogger("tornado.application").propagate = False
logging.getLogger("tornado.general").propagate = False
....
def main():
    ######
    # this message appears in both the output log file and stdout
    ######
    logger.info(" application init ... ")
    asyncio.set_event_loop_policy(tornado.platform.asyncio.AnyThreadEventLoopPolicy())
    tornado.options.parse_command_line()
    app = Application()
    app.listen(options.port)
    tornado.ioloop.IOLoop.current().start()

if __name__ == "__main__":
    main()
The problem is that, from the moment you start up your IDE, logging.getLogger("tornado") may already have a StreamHandler attached. This doesn't happen with every IDE, but it does happen with Spyder. So that is the handler you have to replace with a NullHandler:
import logging
nh = logging.NullHandler()
tornado_logger = logging.getLogger("tornado")
tornado_logger.handlers.clear()
# tornado_logger.handlers = [] # would work instead of .clear() but it's best practice to change the list and not replace it, just in case some other variable name somewhere else still refers to it.
tornado_logger.addHandler(nh)
You don't need to do anything with the children of the "tornado" logger, e.g. "tornado.access", et cetera.
You also need to define a logging policy for the root logger (logging.getLogger("")). Tornado looks at the root logger's handlers to decide whether logging has already been configured or needs a default setup.
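A minimal sketch of that last point, reusing the rotating handler rfh from the question: give the root logger an explicit configuration (file handler only) before Tornado starts, so Tornado treats logging as already configured. Alternatively, running with Tornado's --logging=none command-line option tells it not to touch logging at all.

import logging
from logging.handlers import RotatingFileHandler

rfh = RotatingFileHandler(filename="./logs/process.log", mode='a',
                          maxBytes=50000000, backupCount=25)
rfh.setFormatter(logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s'))

# root logger: file handler only, no StreamHandler -> nothing goes to stdout
root = logging.getLogger("")
root.setLevel(logging.DEBUG)
root.handlers.clear()
root.addHandler(rfh)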
I am trying to set up a logger that writes a new timestamped log file to a specific directory every time the application is run.
For example, this is what I am trying to do:
timestampFilename = time.strftime("runtimelog%b_%d_%Y_%H:%M:%S.txt")
fh = logging.FileHandler('C:\\my\\folder\\logs\\' + timestampFilename, mode='w')
An example tweaked from the Logging Cookbook:
import logging
import os
from datetime import datetime
# create a logger named 'MYAPP'
logger = logging.getLogger('MYAPP')
logger.setLevel(logging.DEBUG)
# create file handler which logs in a specific directory
logdir = '.'
if 'APP_LOG_DIR' in os.environ:
    logdir = os.environ['APP_LOG_DIR']
logfile = datetime.now().strftime("run_%b_%d_%Y_%H_%M_%S.log")
fh = logging.FileHandler(os.path.join(logdir, logfile))
fh.setLevel(logging.DEBUG)
# create formatter and add it to the handlers
formatter = logging.Formatter('[%(asctime)s][%(name)s][%(levelname)s] %(message)s')
fh.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(fh)
logger.debug("my first log line")
The log directory can be configured with the environment variable APP_LOG_DIR, and path names are built in a platform-independent way thanks to os.path.