Using a config file for logging in Python - Linux

I wrote a function for logging in Python:
import logging
import os

import closehandler  # custom module providing ClosingHandler

def go_logger(name_of_file):
    formatter = logging.Formatter('%(asctime)s - %(message)s')
    logging.basicConfig(filemode='a', datefmt='%m-%d-%Y')
    logger = logging.getLogger(name_of_file)
    logger.setLevel(logging.DEBUG)
    handler = closehandler.ClosingHandler(os.path.join('/path/to/logs', name_of_file), mode='a')
    handler.setLevel(logging.DEBUG)
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    return logger
And it works: I can call the function like this:
LOG = host_utils.go_logger('wwe.log')
As you can see, I can call my function to write into different log files.
But I want to use a config file. From the official Python documentation:
logging.config.fileConfig(fname, defaults=None, disable_existing_loggers=True)
OK, fname is the name of the config file, but how can I use a placeholder for the log file name?
The part of the config that writes to a file:
[handler_handler]
class=handlers.FileHandler
level=DEBUG
formatter=Formatter
args=('wwe.log','a')
Notice the args=('wwe.log','a') line. How can I put a placeholder there instead of the log file name? To repeat, I want to call a function, as I did with my own method:
LOG = host_utils.go_logger('wwe.log')
But using a config file. What can you advise?

You can use the keys of the defaults dictionary as placeholders in the configuration file.
Since your other question uses placeholders, I assume you already figured that out, but here's a full, runnable example in case others have the same problem:
import logging.config

def get_logger(logfilename):
    config_file = 'config.txt'
    logging.config.fileConfig(config_file, defaults={'logfilename': logfilename}, disable_existing_loggers=False)
    logger = logging.getLogger("main")
    return logger

logger = get_logger('scratch.log')
logger.info('Hello, World!')
Here's the config file with the logfilename placeholder:
[loggers]
keys=root
[handlers]
keys=fileHandler
[formatters]
keys=Formatter
[logger_root]
level=DEBUG
handlers=fileHandler
qualname=main
[handler_fileHandler]
class=FileHandler
level=DEBUG
formatter=Formatter
args=('%(logfilename)s', 'a', 'utf8')
[formatter_Formatter]
format=%(asctime)s - %(levelname)s - %(message)s
datefmt="%Y-%m-%d %H:%M:%S"

Related

Sending Log Data to Splunk using Python

I have an app that detects file changes, backs them up, and syncs files to Azure.
I currently have a logger set up that writes log events to a file called log.log, and event data also streams to stdout. This is my current working code.
I’d like to send log data to Splunk via requests.post() or logging.handlers.HTTPHandler.
Question: How do I set up an HTTP Handler in Python logging?
(I need to become more familiar with the advanced features of logging in Python.)
import logging

def setup_logger(logger_name: str = __name__, logfile: str = 'log.log'):
    """Standard logging: stdout and log file.

    Args:
        logger_name (str, optional): Logger name. Defaults to __name__.
        logfile (str, optional): Log file name. Defaults to 'log.log'.

    Returns:
        logging.Logger: the configured logger object.
    """
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.INFO)
    fh = logging.FileHandler(logfile)
    fh.setLevel(logging.INFO)
    ch = logging.StreamHandler()
    ch.setLevel(logging.INFO)
    formatter = logging.Formatter(
        '%(asctime)s | %(name)s | %(levelname)s | %(message)s',
        '%m-%d-%Y %H:%M:%S')
    fh.setFormatter(formatter)
    ch.setFormatter(formatter)
    logger.addHandler(fh)
    logger.addHandler(ch)
    return logger

if __name__ == "__main__":
    logger = setup_logger('logger', 'log-sample.log')  # creates a test file instead of the default log.log
    logger.info("My Logger has been initialized")
Currently I’m trying to send test data to Splunk via this code example (before I figure out the logging issue):
import requests

# Set up the Splunk HEC URL and token
splunk_url = "http://127.0.0.1:8088/services/collector/event"
splunk_token = "57489f00-605e-4f2a-8df3-123456789abcdef="

# Set up the log event data
log_data = {
    "event": "This is a test log event",
    "sourcetype": "my_sourcetype",
    "index": "test_index"
}

# Send the log event to Splunk
response = requests.post(splunk_url, json=log_data, headers={
    "Authorization": f"Splunk {splunk_token}"
})

# Check the response status code to make sure the request was successful
if response.status_code == 200:
    print("Log event sent to Splunk successfully")
else:
    print(f"Error sending log event to Splunk: {response.text}")
I found the solution myself.
import logging
import requests
import urllib3

urllib3.disable_warnings()  # using the default self-signed cert

url = "https://127.0.0.1:8088/services/collector/event"
headers = {"Authorization": "Splunk 09584dbe-183b-4d14-9ee9-be66a37b331a"}
index = 'test_index'

class CustomHttpHandler(logging.Handler):
    def __init__(self, url: str, headers: dict, index: str) -> None:
        self.url = url
        self.headers = headers
        self.index = index
        super().__init__()

    def emit(self, record: logging.LogRecord) -> None:
        """
        Called for every log record that gets emitted. It receives a record,
        formats it and sends it to the URL.

        Parameters:
            record: a log record (created by the logging module)
        """
        log_entry = self.format(record)
        requests.post(
            url=self.url, headers=self.headers,
            json={"index": self.index, "event": log_entry},
            verify=False)

def setup_logger(logger_name: str = __name__, logfile: str = 'log.log'):
    """Standard logging: stdout, log file and Splunk HEC.

    1. creates a file handler which logs even debug messages: fh
    2. creates a console handler with a higher log level: ch
    3. creates a formatter and adds it to the handlers: formatter, setFormatter
    4. adds the handlers to the logger: addHandler

    Args:
        logger_name (str, optional): Logger name. Defaults to __name__.
        logfile (str, optional): Log file name. Defaults to 'log.log'.

    Returns:
        logging.Logger: the configured logger object.
    """
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.INFO)
    fh = logging.FileHandler(logfile)
    fh.setLevel(logging.INFO)
    ch = logging.StreamHandler()
    ch.setLevel(logging.INFO)
    formatter = logging.Formatter(
        '%(asctime)s | %(name)s | %(levelname)s | %(message)s',
        '%m-%d-%Y %H:%M:%S')
    splunk_handler = CustomHttpHandler(url=url, headers=headers, index=index)
    fh.setFormatter(formatter)
    ch.setFormatter(formatter)
    splunk_handler.setFormatter(formatter)
    logger.addHandler(fh)
    logger.addHandler(ch)
    logger.addHandler(splunk_handler)
    return logger

if __name__ == "__main__":
    logger = setup_logger('logger', 'app.log')
    logger.info("My Logger has been initialized")

Python logging to file but disable on CloudWatch

In a library, I have declared a custom logger like this in the file log.py:
import os
import sys
import logging
LOG_LEVEL = os.getenv('LOG_LEVEL', 'INFO')
# disable root logger
root_logger = logging.getLogger()
root_logger.disabled = True
# create custom logger
logger = logging.getLogger('my-handle')
logger.removeHandler(sys.stdout)
logger.setLevel(logging.getLevelName(LOG_LEVEL))
formatter = logging.Formatter(
    '{"timestamp": "%(asctime)s", "level": "%(levelname)s", "logger": "%(name)s", '
    '"filename": "%(filename)s", "message": "%(message)s"}',
    '%Y-%m-%dT%H:%M:%S%z')
handler = logging.FileHandler('file.log', encoding='utf-8')
handler.setFormatter(formatter)
handler.setLevel(logging.getLevelName(LOG_LEVEL))
logger.addHandler(handler)
I then call this logger from other files doing:
from log import logger
logger.debug('something')
When running the code above on my computer, it only sends the logs to the file.log file. But when running the exact same code in a Lambda function, all the logs also appear in CloudWatch. What is needed to disable the CloudWatch logs? It costs a lot for output I don't want and already log somewhere else (in S3, for that matter, which is much cheaper).
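A likely explanation, for anyone hitting the same thing: the AWS Lambda Python runtime attaches its own handler to the root logger, and records logged on 'my-handle' propagate up to the root, whose handler ships them to CloudWatch. Disabling the root logger, or calling removeHandler(sys.stdout) (sys.stdout is a stream, not a handler, so that call is a no-op), does not stop this propagation. A minimal sketch of the usual fix:
import logging

logger = logging.getLogger('my-handle')
logger.propagate = False  # keep records from reaching the root logger's CloudWatch handler

# Alternative: strip whatever handlers the Lambda runtime attached to the root logger.
root = logging.getLogger()
for handler in root.handlers[:]:
    root.removeHandler(handler)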

logging | error() gets sent to FileHandler... but not other Logging Levels

I am able to store error logs to a file... but not info() or any other logging level.
What am I doing wrong?
How can I store logs of any level with the FileHandler?
code.py
import sys
import logging

def setup_logging():
    global logger
    logger = logging.getLogger()
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')

    open('data_module.log', 'w').close()  # empty logs

    global fileHandler
    fileHandler = logging.FileHandler('data_module.log')
    fileHandler.setFormatter(formatter)
    fileHandler.setLevel(logging.DEBUG)
    logger.addHandler(fileHandler)

    logger.error('Started')      # error
    logger.info('information')   # info
test.py:
import code as c

c.setup_logging()

with open('data_module.log', 'r') as fileHandler:
    logs = [l.rstrip() for l in fileHandler.readlines()]
open('data_module.log', 'w').close()  # empty logs

assert len(logs) == 2
Error:
AssertionError: assert 1 == 2
Please let me know if there's anything else I should add to the post.
You need to set the level for the logger itself:
logger.setLevel(logging.DEBUG)
The default level of the root logger is WARNING: when you write a DEBUG- or INFO-level message, the logger does not handle it (i.e. pass it to its handlers), so the handler you added is never invoked.
The handler can have its own level, but that is consulted only after the handler is invoked: if a logger passes a DEBUG message to a handler that is only interested in INFO+ messages, the handler drops it.
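A quick way to see the default in action (run in a fresh interpreter):
import logging

logger = logging.getLogger()
print(logging.getLevelName(logger.getEffectiveLevel()))  # WARNING by default
logger.setLevel(logging.DEBUG)
print(logging.getLevelName(logger.getEffectiveLevel()))  # DEBUG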

How to log everything into a log file using logger Python

I'm trying to duplicate everything that could be displayed on the console to a log file. This includes unhandled exceptions and test results from unittest.
Currently the following is not being done:
1. Unhandled exceptions are NOT being logged to the log file.
2. Test results from unittest are NOT being logged to the log file.
#2 is more important for me.
I work in Linux/Unix, but it would be great if the solution worked on Windows as well.
Any help will be appreciated. Below is my code:
import logging
import pathlib
import sys
import unittest
from datetime import datetime

logger = logging.getLogger(__name__)
logfile = datetime.now().strftime(pathlib.PurePath(__file__).stem + '_%H_%M_%d_%m_%Y')

class TestClass(unittest.TestCase):
    logFormatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
    rootLogger = logging.getLogger()

    fileHandler = logging.FileHandler("{0}.log".format(logfile))
    fileHandler.setFormatter(logFormatter)
    rootLogger.addHandler(fileHandler)
    rootLogger.setLevel(logging.DEBUG)

    consoleHandler = logging.StreamHandler(sys.stderr)
    consoleHandler.setFormatter(logFormatter)
    rootLogger.addHandler(consoleHandler)

    def test_logging(self):
        self.fail("Why is this not being logged into the log file?")

logging getLogger does not output

I am trying to set up a logging mechanism for a Python module.
The following is the example code that I have written to set up logging:
import logging

def init_logger(logger):
    formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(filename)s - %(funcName)s - %(message)s')

    ch = logging.StreamHandler()
    ch.setFormatter(formatter)
    ch.setLevel(logging.INFO)
    logger.addHandler(ch)

    file_handler = logging.FileHandler('test_logging.log')
    file_handler.setFormatter(formatter)
    file_handler.setLevel(logging.DEBUG)
    logger.addHandler(file_handler)

def foo1():
    logger = logging.getLogger(__name__)
    logger.info('Test Info')
    logger.debug('Test Debug')
    logger.error('Test Error')

def foo2():
    logger = logging.getLogger(__name__)
    logger.info('Test Info')
    logger.debug('Test Debug')
    logger.error('Test Error')

if __name__ == '__main__':
    logger = logging.getLogger(__name__)
    init_logger(logger)
    foo1()
    foo2()
I expect info level and above to be printed to stdout and debug level and above to be written to the log file, but what I see is that only error-level messages are output to both stdout and the log file.
2019-08-13 11:20:07,775 - ERROR - test_logger.py - foo1 - Test Error
2019-08-13 11:20:07,776 - ERROR - test_logger.py - foo2 - Test Error
As per the documentation, getLogger should return the same logger instance each time. I even tried creating a new instance the first time with logger = logging.Logger(__name__), but no luck with that either. I don't understand what I am missing here.
Short answer: you must use logging.basicConfig(level=...) or logger.setLevel in your code.
When you use logging.getLogger('some_name') for the first time, you create a new logger with level = NOTSET = 0.
# logging module source code
class Logger(Filterer):
    def __init__(self, name, level=NOTSET):
        ...
logging.NOTSET looks like a valid level value, but it is special: it means the logger has no level of its own and is forced to use the level of its parent (ultimately the root logger). This logic is defined in the Logger.getEffectiveLevel method:
# logging module source code
def getEffectiveLevel(self):
    logger = self
    while logger:
        if logger.level:  # 0 gives False here
            return logger.level
        logger = logger.parent  # 0 makes this line reachable
    return NOTSET
The root logger has level=WARNING, so newly created loggers inherit this level:
# logging module source code
root = RootLogger(WARNING)
logging.getLogger does not let you specify a logging level, so you have to use logging.basicConfig to configure the root logger, or logger.setLevel on the newly created logger, near the very beginning of the script.
Arguably this behaviour deserves a more prominent mention in the logging module guides/documentation.
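Applied to the script in the question, a minimal sketch of the fix, setting the level before foo1() and foo2() run:
if __name__ == '__main__':
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG)  # let the handlers' own levels do the filtering
    init_logger(logger)
    foo1()
    foo2()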
