I am trying to create two log files in a Python script:
one for capturing normal log messages
one for capturing exception messages
I have implemented the Python code as below:
# Create a log file for logging normal messages
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s-----%(levelname)s-----%(filename)s-----%(funcName)s-----%(message)s', datefmt='%Y-%m-%d %H:%M:%S')
file_handler = logging.FileHandler(r'C:\RAVI\PYTHON_DOCS\Migration_Docs\Logfile.txt', mode='a')
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)
# Create an exception log file for logging exception messages in the program
exception_logger = logging.getLogger(__name__)
exception_logger.setLevel(logging.ERROR)
exception_formatter = logging.Formatter('%(asctime)s-----%(levelname)s-----%(filename)s-----%(funcName)s-----%(message)s', datefmt='%Y-%m-%d %H:%M:%S')
exception_file_handler = logging.FileHandler(r'C:\RAVI\PYTHON_DOCS\Migration_Docs\Exception.txt', mode='a')
exception_file_handler.setFormatter(exception_formatter)
exception_logger.addHandler(exception_file_handler)
Below is the code snippet of the method where the above loggers are called:
logger.info('DDLLOAD METHOD START')
file_ddl = file_destination + '*.txt'
try:
    logger.info('Connecting to Snowflake.....')
    ctx = snow.connect(
        user='xdfz',              # Snowflake username
        password='Xyz234',        # Snowflake password
        account='us-east-2.aws',  # from https://JZ51866.us-east-2.aws.snowflakecomputing.com
        database='EDW',
        schema='DDL_LOADER')
    logger.info('Connected to Snowflake successfully')
except Exception as ex:
    exception_logger.error("Error encountered while establishing connection to Snowflake: %s", ex)
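One likely problem with the setup above (an assumption, since the rest of the script isn't shown): `logging.getLogger(__name__)` returns the *same* logger object both times, so both file handlers end up attached to one logger and every message lands in both files. A minimal sketch using two distinct logger names instead, with relative paths for brevity:

```python
import logging

def make_logger(name, path, level):
    """Build an independent logger that writes to its own file."""
    lg = logging.getLogger(name)
    lg.setLevel(level)
    handler = logging.FileHandler(path, mode='a')
    handler.setFormatter(logging.Formatter(
        '%(asctime)s-----%(levelname)s-----%(message)s',
        datefmt='%Y-%m-%d %H:%M:%S'))
    lg.addHandler(handler)
    return lg

# distinct names -> distinct logger objects -> separate files
logger = make_logger('app', 'Logfile.txt', logging.INFO)
exception_logger = make_logger('app_errors', 'Exception.txt', logging.ERROR)

logger.info('normal message')                # written to Logfile.txt
exception_logger.error('exception message')  # written to Exception.txt
```

Because the two names differ, `getLogger` hands back two independent logger objects and the streams stay separate.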
I need to stop Tornado from logging to STDOUT. I am using Python 3.8 running on Ubuntu 18.04. I want my log statements to be handled by a rotating file logger only. The issue is that logged statements go to the file and also to the console:
import logging
from logging.handlers import RotatingFileHandler
logger = logging.getLogger("ex_logger")
nh = logging.NullHandler()
rfh = RotatingFileHandler(filename="./logs/process.log", mode='a', maxBytes=50000000, backupCount=25, encoding=None, delay=False)
rfh.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
rfh.setFormatter(formatter)
logger.handlers = []
logger.propagate = False
logger.addHandler(rfh)
logging.getLogger("tornado.access").handlers = []
logging.getLogger("tornado.application").handlers = []
logging.getLogger("tornado.general").handlers = []
logging.getLogger("tornado.access").addHandler(nh)
logging.getLogger("tornado.application").addHandler(nh)
logging.getLogger("tornado.general").addHandler(nh)
logging.getLogger("tornado.access").propagate = False
logging.getLogger("tornado.application").propagate = False
logging.getLogger("tornado.general").propagate = False
....
def main():
    ######
    # this message appears in both the output log file and stdout
    ######
    logger.info(" application init ... ")
    asyncio.set_event_loop_policy(tornado.platform.asyncio.AnyThreadEventLoopPolicy())
    tornado.options.parse_command_line()
    app = Application()
    app.listen(options.port)
    tornado.ioloop.IOLoop.current().start()

if __name__ == "__main__":
    main()
The problem is that from the moment you start up your IDE, logging.getLogger("tornado") may have a StreamHandler attached. This doesn't happen with every IDE, but it does happen with Spyder. So that is the handler you have to replace with a NullHandler:
import logging
nh = logging.NullHandler()
tornado_logger = logging.getLogger("tornado")
tornado_logger.handlers.clear()
# tornado_logger.handlers = [] # would work instead of .clear() but it's best practice to change the list and not replace it, just in case some other variable name somewhere else still refers to it.
tornado_logger.addHandler(nh)
You don't need to do anything with the children of the "tornado" logger, e.g. "tornado.access", et cetera.
You also need to define a logging policy for the root logger (logging.getLogger("")). Tornado looks at the root logger to decide whether logging has already been configured or needs a default setup.
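A minimal sketch of that root-logger setup (the file name and rotation sizes are placeholders, not from the question): if the root logger already has a handler before tornado.options.parse_command_line() runs, Tornado treats logging as configured and does not install its own stderr handler.

```python
import logging
from logging.handlers import RotatingFileHandler

root = logging.getLogger()   # the root logger, i.e. logging.getLogger("")
root.setLevel(logging.DEBUG)

rfh = RotatingFileHandler("process.log", maxBytes=50_000_000, backupCount=25)
rfh.setFormatter(logging.Formatter(
    "%(asctime)s - %(name)s - %(levelname)s - %(message)s"))
root.addHandler(rfh)  # root is now "configured" as far as Tornado is concerned

# records from any logger propagate to root and end up in the file
logging.getLogger("ex_logger").info("application init ...")
```

Do this before parse_command_line() (or app startup) so Tornado never gets the chance to attach its default console handler.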
I am trying to set up a logger that writes a new timestamped log file into a specific directory every time the application is run.
For example, what I am trying to do is:
timestampFilename = time.strftime("runtimelog%b_%d_%Y_%H-%M-%S.txt")
fh = logging.FileHandler(r'C:\my\folder\logs' + '\\' + timestampFilename, mode='w')
An example tweaked from the Logging Cookbook:
import logging
import os
from datetime import datetime
# create a logger named 'MYAPP'
logger = logging.getLogger('MYAPP')
logger.setLevel(logging.DEBUG)
# create file handler which logs in a specific directory
logdir = '.'
if 'APP_LOG_DIR' in os.environ:
    logdir = os.environ['APP_LOG_DIR']
logfile = datetime.now().strftime("run_%b_%d_%Y_%H_%M_%S.log")
fh = logging.FileHandler(os.path.join(logdir, logfile))
fh.setLevel(logging.DEBUG)
# create formatter and add it to the handlers
formatter = logging.Formatter('[%(asctime)s][%(name)s][%(levelname)s] %(message)s')
fh.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(fh)
logger.debug("my first log line")
The log directory may be configured with the environment variable APP_LOG_DIR, and path names are built in a platform-independent way thanks to os.path.
Hello folks!
# these values come from an XML file
clientID = logObject['meta']['clientID']
authToken = logObject['meta']['authToken']
logType = logObject['logType']
FORMAT = '%(asctime)-15s %(logType)s %(process)d %(user)-8s %(message)s'
d = {'logType': logType ,'user': getpass.getuser()}
# lines intended to create two log files (access.log and error.log)
logging.basicConfig(filename = 'access.log', filemode = 'w', format=FORMAT)
logging.basicConfig(filename = 'error.log', filemode = 'w', format=FORMAT)
if clientID == "":
    # logger = setup_logger('first_logger', 'access.log', logType)
    logger.warning('Please Enter clientID', extra=d)
This is my sample code.
What I need is to create the two files I mentioned, but the problem is that only a single file is created every time and all messages go to that file.
So I want logger.error("msg") to go to error.log and logger.warning("msg") to go to access.log.
When you use logging directly like this, you are actually using the single root logger that is created during import logging. You can use it with several handlers. For example:
import datetime as dt
import logging
import os

# 1. logging to file
filename = (
    'log_file{}.log'
    .format(
        dt.datetime.today().strftime("_date_%Y-%m-%d_time_%H-%M-%S")))
# path to log folder; create it if it does not exist.
path = os.path.join(os.getcwd(), 'logs')
os.makedirs(path, exist_ok=True)
to_file = logging.FileHandler(os.path.join(path, filename))
to_file.addFilter(lambda x: x.levelno in [20, 40])  # INFO and ERROR only
# 2. logging to console
to_console = logging.StreamHandler()
to_console.addFilter(lambda x: x.levelno in [20, 21, 40])  # INFO, custom level 21, ERROR
# 3. root logger configuration
logging.basicConfig(
    level=10,  # DEBUG
    datefmt='%Y-%m-%d %H:%M:%S',
    format='[%(asctime)s]:%(threadName)s:%(levelname)s:%(message)s',
    handlers=[to_console, to_file])
If you want to log into 2 files, just create 2 handlers with logging.FileHandler(...), register them, and use the newly configured root logger as usual:
logging.info('Some info')
Another option is to create 2 loggers. Usually you do that when you want to separate several sources of log messages.
It seems that on a couple machines I'm getting double output like this:
INFO LED NOTIFICATION STARTED
INFO:output_logger:LED NOTIFICATION STARTED
This is the function I'm using:
def setup_logger(name, log_file, level=logging.INFO,
                 ContentFormat='%(asctime)s %(levelname)s %(message)s',
                 DateTimeFormat="%Y-%m-%d %H:%M:%S",
                 CreateConsoleLogger=False):
    """Set up as many loggers as you want"""
    logger = logging.getLogger(name)
    logger.setLevel(level)
    if CreateConsoleLogger:
        # create console handler
        handler = logging.StreamHandler()
        handler.setLevel(level)
        formatter = logging.Formatter("%(levelname)s %(message)s")
        handler.setFormatter(formatter)
        logger.addHandler(handler)
    # create a file handler
    handler = RotatingFileHandler(log_file, maxBytes=2000000, backupCount=5)
    handler.setLevel(level)
    formatter = logging.Formatter(ContentFormat, DateTimeFormat)
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    return logger
This is how I'm creating the logger:
output_logger = setup_logger('output_logger', 'log/autofy.log', level=logging.DEBUG, CreateConsoleLogger=True)
And this is how I call it:
output_logger.info("LED NOTIFICATION STARTED")
On most computers I just see the same message printed to the console that is saved to the file, as expected ("INFO LED NOTIFICATION STARTED"), but on other computers it does this weird double-output thing. My code is exactly the same from one computer to another, so any ideas what could be causing this on some computers and not others?
EDIT
I'm writing the script using Notepad++ and running it in a terminal window on an Ubuntu 16.04 machine. I'm using Python 3.
Try adding this to your code:
output_logger.propagate = False
That stops records from propagating up to the root logger, whose handlers would otherwise print them a second time.
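To see why propagation causes the doubling, here is a self-contained sketch (StringIO streams stand in for the console): a handler on the root logger, such as one installed by an IDE or an earlier basicConfig call, receives every record that propagates up from "output_logger", so each message is printed once per handler, each with its own format.

```python
import logging
from io import StringIO

# simulate an environment where root already has a console handler
root_out = StringIO()
root_handler = logging.StreamHandler(root_out)
root_handler.setFormatter(logging.Formatter('%(levelname)s:%(name)s:%(message)s'))
logging.getLogger().addHandler(root_handler)

own_out = StringIO()
logger = logging.getLogger('output_logger')
logger.setLevel(logging.INFO)
handler = logging.StreamHandler(own_out)
handler.setFormatter(logging.Formatter('%(levelname)s %(message)s'))
logger.addHandler(handler)

logger.info('LED NOTIFICATION STARTED')  # handled twice: own handler + root's

logger.propagate = False
logger.info('SECOND MESSAGE')            # handled once: own handler only
```

This also explains why only some machines are affected: the doubling appears only where something has already attached a handler to the root logger.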
First of all, sorry for my English.
I'm using Python 3.x. I want to create a handler that stores the current logging information in a table in SQLite3.
I have a handler that shows the current logging on the console and another that writes the same to a txt file.
I leave you my example:
import logging
class Logger(object):
    '''test logger'''

    def __init__(self):
        '''constructor'''
        self.logger = logging.getLogger('CIRE')
        self.logger.setLevel(logging.DEBUG)
        ch = logging.StreamHandler()
        ch.setLevel(logging.DEBUG)
        formatter = '%(asctime)s. %(name)s|%(levelname)s| %(message)s'
        dateFormat = '%m/%d/%Y %I:%M:%S %p'
        logging.basicConfig(format=formatter, datefmt=dateFormat, filename='Logger.log')
        formatter = logging.Formatter(formatter, dateFormat)
        ch.setFormatter(formatter)
        self.logger.addHandler(ch)
l = Logger()
l.logger.debug("test")
OUTPUT:
02/07/2017 12:54:13 PM. CIRE|DEBUG| test
thanks!!
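For the SQLite part of the question, the standard library has no ready-made handler, but you can subclass logging.Handler and do the INSERT in emit(). A minimal sketch; the table name and columns are my own choice:

```python
import logging
import sqlite3

class SQLiteHandler(logging.Handler):
    """Log handler that writes each record as a row in an SQLite table."""

    def __init__(self, db_path):
        super().__init__()
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            'CREATE TABLE IF NOT EXISTS log '
            '(created REAL, name TEXT, level TEXT, message TEXT)')
        self.conn.commit()

    def emit(self, record):
        try:
            self.conn.execute(
                'INSERT INTO log VALUES (?, ?, ?, ?)',
                (record.created, record.name,
                 record.levelname, record.getMessage()))
            self.conn.commit()
        except Exception:
            self.handleError(record)

handler = SQLiteHandler('Logger.db')
logger = logging.getLogger('CIRE')
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)
logger.debug('test')
```

Each emitted record becomes one row, so you can query the log back with plain SQL, e.g. SELECT * FROM log WHERE level = 'ERROR'.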