Python 3 logging - MemoryHandler and flushOnClose behaviour - python-3.x

If I forget to close the MemoryHandler before the end of the script, the log message 'debug' is written to the file even though flushOnClose=False (Python 3.6).
Am I doing something wrong or is this the expected behaviour? I would have thought flushOnClose would be obeyed regardless of how the handler is closed (i.e. when the script ends).
import logging.config
import logging.handlers
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
# file handler, triggered by the memory handler
fh = logging.FileHandler('log.txt')
# set the logging level
fh.setLevel(logging.DEBUG)
# capacity is the number of records
mh = logging.handlers.MemoryHandler(5, flushLevel=logging.ERROR, target=fh, flushOnClose=False)
logger.addHandler(mh)
logger.debug('debug')
# mh.close()
For the arguments 5, flushLevel=logging.ERROR, target=fh, flushOnClose=False, the 'debug' message should not be written, because:
I have not added 5 messages to the buffer,
flushOnClose=False, so there should be no flush when the script ends, and
'debug' is below flushLevel, so it does not trigger a flush.
I find that when I use mh.close() the message does not flush, as expected. However, when the script ends without mh.close() (commented out), the single debug message seems to get flushed despite the settings suggesting that it shouldn't.

I faced this issue too, where the logger was not supposed to print anything unless an 'error' event was encountered.
I had to manually call close() on all MemoryHandlers for my Logger instance via atexit:
import atexit
import logging
import logging.handlers

def _close_all_memory_handlers():
    # `Logger` here is the logging.Logger instance used by the application.
    for handler in Logger.handlers:
        if isinstance(handler, logging.handlers.MemoryHandler):
            handler.close()

atexit.register(_close_all_memory_handlers)
This should work as long as you register this atexit handler after the logging module is initialized.

I think this is the correct behaviour:
logger.debug('debug') --> this will write 'debug' to your file without waiting for any flush.
Sorry... yes, the default is True. I saw the addition above and in my opinion the behaviour is normal, in the sense that if you do NOT close the handler yourself, everything is flushed at the end of execution (this is typical, in order to debug what went wrong). If you do close it, the message that was appended to the buffer is discarded because of the False. Isn't that the right behaviour?
In addition, flushOnClose does not exist in the version of the handler class I have, shown below:
class MemoryHandler(BufferingHandler):
    """
    A handler class which buffers logging records in memory, periodically
    flushing them to a target handler. Flushing occurs whenever the buffer
    is full, or when an event of a certain severity or greater is seen.
    """
    def __init__(self, capacity, flushLevel=logging.ERROR, target=None):
        """
        Initialize the handler with the buffer size, the level at which
        flushing should occur and an optional target.

        Note that without a target being set either here or via setTarget(),
        a MemoryHandler is no use to anyone!
        """
        BufferingHandler.__init__(self, capacity)
        self.flushLevel = flushLevel
        self.target = target

    def shouldFlush(self, record):
        """
        Check for buffer full or a record at the flushLevel or higher.
        """
        return (len(self.buffer) >= self.capacity) or \
                (record.levelno >= self.flushLevel)

    def setTarget(self, target):
        """
        Set the target handler for this handler.
        """
        self.target = target

    def flush(self):
        """
        For a MemoryHandler, flushing means just sending the buffered
        records to the target, if there is one. Override if you want
        different behaviour.

        The record buffer is also cleared by this operation.
        """
        self.acquire()
        try:
            if self.target:
                for record in self.buffer:
                    self.target.handle(record)
                self.buffer = []
        finally:
            self.release()

    def close(self):
        """
        Flush, set the target to None and lose the buffer.
        """
        try:
            self.flush()
        finally:
            self.acquire()
            try:
                self.target = None
                BufferingHandler.close(self)
            finally:
                self.release()
Anyway, the behaviour is normal, in the sense that even when you open a file you can decide whether or not to close it at the end; the file will be closed anyway, in order not to lose information :-)
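For reference, a minimal sketch of the two cases discussed in this thread (Python 3.6+). The reason the unclosed case still writes is that the logging module registers logging.shutdown() with atexit, and shutdown() calls flush() on every handler before closing it, so flushOnClose only decides what close() does with records that are still buffered:

import logging
import logging.handlers

logger = logging.getLogger('demo')
logger.setLevel(logging.DEBUG)

fh = logging.FileHandler('log.txt')
mh = logging.handlers.MemoryHandler(5, flushLevel=logging.ERROR,
                                    target=fh, flushOnClose=False)
logger.addHandler(mh)
logger.debug('debug')

# Case 1: explicit close -- flushOnClose=False is honoured, the buffer is
# discarded and nothing is written to log.txt.
#mh.close()

# Case 2: no explicit close -- at interpreter exit logging.shutdown() (registered
# via atexit) calls mh.flush() before mh.close(), so 'debug' ends up in log.txt.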

Related

Python logger always outputting debug messages regardless of filter

I'm currently creating a subclass of logging.Logger, which has a filter based on level, and this level can change between logging calls (hence why I'm doing this with a filter, as opposed to setLevel()). However, it seems that my logger always prints out messages with level DEBUG, regardless of the filter. Here's my code below
import logging

class _LevelFilter(logging.Filter):
    def filter(self, record):
        SimpleLogger.setLevel(_DEFAULT_LEVEL)
        return 1 if SimpleLogger.isEnabledFor(record.levelno) else 0

class _SimpleLogger(logging.getLoggerClass()):
    def __init__(self, name=None, level=logging.DEBUG):
        super().__init__(name, level)
        self.setLevel(logging.DEBUG)
        _handler = logging.StreamHandler()
        _handler.setLevel(logging.DEBUG)
        self.addHandler(_handler)
        self.addFilter(_LevelFilter())

_DEFAULT_LEVEL = 'WARNING'
SimpleLogger = _SimpleLogger()

if __name__ == '__main__':
    SimpleLogger.debug('testing debug')
    SimpleLogger.info('testing info')
    SimpleLogger.warning('testing warning')
    SimpleLogger.critical('testing critical')
    SimpleLogger.debug('testing debug')
The code above gives the following output:
testing debug
testing warning
testing critical
testing debug
I know that if I declare SimpleLogger as a separate variable instead of a subclass, it works, but I need to use the subclass for various reasons. For reference, here is a version not using the subclass that works.
SimpleLogger = logging.getLogger()
SimpleLogger.setLevel(logging.DEBUG)
_handler = logging.StreamHandler()
_handler.setLevel(logging.DEBUG)
SimpleLogger.addHandler(_handler)
SimpleLogger.addFilter(_LevelFilter())
_DEFAULT_LEVEL = 'WARNING'
I cannot figure out for the life of me why the debug messages are always printing. The differences between the subclass and non-subclass versions are not very big, and setting the level should cause debug and info messages to not appear. Any help would be appreciated, thanks!
I finally found the solution! So it turns out that Python 3.7 introduced a cache in the logger that stores the results of isEnabledFor(). Since the filter runs after the initial isEnabledFor() check, the old result was still cached, which led to this weird behavior. The solution is not to use a filter in this way. Instead, I really wanted an alternative way to get the effective level of the logger. Here is the code for the fixed logger:
Edit: It turns out that my original solution still doesn't work, and the cache problem is still there. It seems this is specific to loggers that subclass from logging.getLoggerClass(), so you need to clear the cache every time. The new solution is below. (Also, I simplified it a lot to only include the necessary stuff.)
class _SimpleLogger(logging.getLoggerClass()):
    def __init__(self, name=None, level=logging.DEBUG):
        super().__init__(name, level)
        _handler = logging.StreamHandler()
        self.addHandler(_handler)

    def isEnabledFor(self, level):
        # Clears logging cache introduced in Python 3.7.
        # Clear here since this is called by all logging methods that write.
        self._cache = {}  # Set instead of calling clear() for compatibility with Python <3.7
        return super().isEnabledFor(level)

# Confirm that this works
if __name__ == "__main__":
    logger = _SimpleLogger()
    logger.setLevel(logging.DEBUG)
    # Next 4 logs should print
    logger.debug('d')
    logger.info('i')
    logger.warning('w')
    logger.error('e')

    # Only warning and error logs should print
    logger.setLevel(logging.WARNING)
    logger.debug('d')
    logger.info('i')
    logger.warning('w')
    logger.error('e')
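For what it's worth, a small sketch of the cache behaviour described above. This relies on CPython 3.7+ implementation details (the non-public _cache attribute, and the fact that setLevel() only clears the caches of loggers registered with the logging Manager), so treat it as an illustration rather than documented API. A logger instantiated directly, as in the question, is not registered with the Manager, which appears to be why its cache has to be cleared by hand:

import logging

# A logger created directly is not in logging.Logger.manager.loggerDict, so the
# cache invalidation that setLevel() triggers never reaches it (CPython 3.7+).
standalone = logging.getLoggerClass()('standalone')
standalone.setLevel(logging.DEBUG)
print(standalone.isEnabledFor(logging.DEBUG))   # True, and now cached

standalone.setLevel(logging.WARNING)
print(standalone.isEnabledFor(logging.DEBUG))   # still True -- served from the stale cache

# A logger obtained via getLogger() is registered, so its cache is cleared.
managed = logging.getLogger('managed')
managed.setLevel(logging.DEBUG)
print(managed.isEnabledFor(logging.DEBUG))      # True
managed.setLevel(logging.WARNING)
print(managed.isEnabledFor(logging.DEBUG))      # False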

Asyncio shared object at the same address does not hold same values

Okay, so I have created a DataStream object, which is just a wrapper class around asyncio.Queue. I am passing it around all over and everything works fine up until the following functions. I am calling ensure_future to run 2 infinite loops: one that replicates the data in one DataStream object, and one that sends data to a websocket. Here is that code:
def start(self):
    # make sure that we set the event loop before we run our async requests
    print("Starting WebsocketProducer on ", self.host, self.port)
    RUNTIME_LOGGER.info(
        "Starting WebsocketProducer on %s:%i", self.host, self.port)
    # Get the event loop and add a task to it.
    asyncio.set_event_loop(self.loop)
    asyncio.get_event_loop().create_task(self._mirror_stream(self.data_stream))
    asyncio.ensure_future(self._serve(self.ssl_context))
And here is the method that is failing with the error 'Task was destroyed but it is pending!'. Keep in mind that if I do not include the lines with data_stream.get(), the function runs fine. I made sure the objects in both locations have the same memory address AND the same value for id(). If I print the data that comes from await self.data_stream.get(), I get the correct data. However, after that it seems to just return and break. Here is the code:
async def _mirror_stream(self):
    while True:
        stream_length = self.data_stream.length
        try:
            if stream_length > 1:
                for _ in range(0, stream_length):
                    data = await self.data_stream.get()
            else:
                data = await self.data_stream.get()
        except Exception as e:
            print(str(e))
        # If the data is null, keep the last known value
        if self._is_json_serializable(data) and data is not None:
            self.payload = json.dumps(data)
        else:
            RUNTIME_LOGGER.warning(
                "Mirroring stream encountered a Null payload in WebsocketProducer!")
        await asyncio.sleep(self.poll_rate)
The issue has been resolved by implementing my own async Queue utilizing the normal queue.Queue object. For some reason the application would only work if I 'await'ed queue.get(), even though it wasn't an asyncio.Queue object... I'm not entirely sure why this behavior occurred, but the application is running well and still performing as if the Queue were from the asyncio lib. Thanks to those who looked!
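For reference, a minimal sketch of one way to do this without hand-rolling the whole queue. The interface (.length and an awaitable .get()) is guessed from the question's DataStream usage; the trick is to push the blocking queue.Queue.get() into an executor so the event loop is not blocked while waiting:

import asyncio
import queue

class DataStream:
    """Thread-safe queue wrapper that asyncio code can still await."""

    def __init__(self):
        self._queue = queue.Queue()

    @property
    def length(self):
        return self._queue.qsize()

    def put(self, item):
        # Safe to call from any thread (e.g. the producer side).
        self._queue.put(item)

    async def get(self):
        # Run the blocking get() in the default executor so the event
        # loop keeps running while we wait for data.
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(None, self._queue.get)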

How to check if there is no message in RabbitMQ with Pika and Python

I read messages from RabbitMQ with the pika python library. Reading messages in the loop is done by
connection = rpc.connect()
channel = connection.channel()
channel.basic_consume(rpc.consumeCallback, queue=FromQueue, no_ack=Ack)
channel.start_consuming()
This works fine.
But I also have the need to read one single message, which I do with:
method, properties, body = channel.basic_get(queue=FromQueue)
rpc.consumeCallback(Channel=channel,Method=method, Properties=properties,Body=body)
But when there is no message in the queue, the script crashes. How do I implement the get_empty() method described here?
I solved it temporarily with a check on the response, like this:
method, properties, body = channel.basic_get(queue=FromQueue)
if method is None:
    pass  # queue is empty
You can check for an empty body in the callback like this:
def callback(ch, method, properties, body):
    decodeBodyInfo = body.decode('utf-8')
    if decodeBodyInfo != '':
        cacheResult = decodeBodyInfo
        ch.stop_consuming()
It's that simple and easy to use :D
In case you're using the channel.consume generator in a for loop, you can set the inactivity_timeout parameter.
From the pika docs,
:param float inactivity_timeout: if a number is given (in seconds), will cause the
method to yield (None, None, None) after the given period of inactivity; this
permits for pseudo-regular maintenance activities to be carried out by the user
while waiting for messages to arrive. If None is given (default), then the method
blocks until the next event arrives. NOTE that timing granularity is limited by the
timer resolution of the underlying implementation. NEW in pika 0.10.0.
so changing your code to something like this might help
for method_frame, properties, body in channel.consume(queue, inactivity_timeout=120):
    # break out of the loop after 2 min of inactivity (no new item fetched)
    if method_frame is None:
        break
Don't forget to properly handle the channel and the connection after exiting the loop.
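Putting it together, a minimal sketch of the generator approach (the queue name, host and the 30-second timeout are placeholders, not from the question):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Drain the queue and stop once nothing new has arrived for 30 seconds.
for method_frame, properties, body in channel.consume('my_queue', inactivity_timeout=30):
    if method_frame is None:
        break  # no message within the timeout -> queue is (currently) empty
    print(body)
    channel.basic_ack(method_frame.delivery_tag)

channel.cancel()      # cancel the consumer generator
connection.close()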

python3/logging: Optionally write to more than one stream

I'm successfully using the logging module in my python3 program to send log messages to a log file, for example, /var/log/myprogram.log. In certain cases, I want a subset of those messages to also go to stdout, with them formatted through my logging.Logger instance in the same way that they are formatted when they go to the log file.
Assuming that my logger instance is called loginstance, I'd like to put some sort of wrapper around loginstance.log(level, msg) to let me choose whether the message only goes to /var/log/myprogram.log, or whether it goes there and also to stdout, as follows:
# Assume `loginstance` has already been instantiated
# as a global, and that it knows to send logging info
# to `/var/log/myprogram.log` by default.

def mylogger(level, msg, with_stdout=False):
    if with_stdout:
        # Somehow send `msg` through `loginstance` so
        # that it goes BOTH to `/var/log/myprogram.log`
        # AND to `stdout`, with identical formatting.
        pass
    else:
        # Send only to `/var/log/myprogram.log` by default.
        loginstance.log(level, msg)
I'd like to manage this with one, single logging.Logger instance, so that if I want to change the format or other logging behavior, I only have to do this in one place.
I'm guessing that this involves subclassing logging.Logger and/or logging.Formatter, but I haven't figured out how to do this.
Thank you in advance for any suggestions.
I figured out how to do it. It simply requires that I use a FileHandler subclass and pass an extra argument to log() ...
import logging
import sys

class MyFileHandler(logging.FileHandler):
    def emit(self, record):
        super().emit(record)
        also_use_stdout = getattr(record, 'also_use_stdout', False)
        if also_use_stdout:
            # Temporarily point the handler's stream at stdout and emit again,
            # so the exact same formatting is reused.
            savestream = self.stream
            self.stream = sys.stdout
            try:
                super().emit(record)
            finally:
                self.stream = savestream
When instantiating my logger, I do this ...
logger = logging.getLogger('myprogram')
logger.addHandler(MyFileHandler('/var/log/myprogram.log'))
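Since there is only one handler, a formatter also only needs to be attached once and then applies to both destinations (the format string below is just an example):

formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
handler = MyFileHandler('/var/log/myprogram.log')
handler.setFormatter(formatter)   # used for the file and, when requested, stdout
logger.addHandler(handler)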
Then, the mylogger function that I described above will look like this:
def mylogger(level, msg, with_stdout=False):
    loginstance.log(level, msg, extra={'also_use_stdout': with_stdout})
This works because anything passed to the log function within the optional extra dictionary becomes an attribute of the record object that ultimately gets passed to emit.
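For example, calling the wrapper then looks like this:

mylogger(logging.INFO, 'this line goes to /var/log/myprogram.log only')
mylogger(logging.WARNING, 'this line also goes to stdout', with_stdout=True)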

Gif shown inside a label doesn't update itself regularly in PyQt5

I have a loading widget that consists of two labels: one is the status label and the other is the label the animated gif is shown in. If I call the show() method before the heavy stuff gets processed, the gif in the loading widget doesn't update itself at all. There's nothing wrong with the gif, btw (looping problems etc.). The main code (caller) looks like this:
self.loadingwidget = LoadingWidgetForm()
self.setCentralWidget(self.loadingwidget)
self.loadingwidget.show()
...
...
heavy stuff
...
...
self.loadingwidget.hide()
The widget class:
from threading import Thread

from PyQt5.QtCore import Qt, QByteArray, QSize
from PyQt5.QtGui import QMovie
from PyQt5.QtWidgets import QApplication, QWidget

# LoadingWidget (the generated UI class) and SysUtils come from my own project.

class LoadingWidgetForm(QWidget, LoadingWidget):
    def __init__(self, parent=None):
        super().__init__(parent=parent)
        self.setupUi(self)
        self.setWindowFlags(self.windowFlags() | Qt.FramelessWindowHint)
        self.setAttribute(Qt.WA_TranslucentBackground)
        pince_directory = SysUtils.get_current_script_directory()  # returns current working directory
        self.movie = QMovie(pince_directory + "/media/loading_widget_gondola.gif", QByteArray())
        self.label_Animated.setMovie(self.movie)
        self.movie.setScaledSize(QSize(50, 50))
        self.movie.setCacheMode(QMovie.CacheAll)
        self.movie.setSpeed(100)
        self.movie.start()
        self.not_finished = True
        self.update_thread = Thread(target=self.update_widget)
        self.update_thread.daemon = True

    def showEvent(self, QShowEvent):
        QApplication.processEvents()
        self.update_thread.start()

    def hideEvent(self, QHideEvent):
        self.not_finished = False

    def update_widget(self):
        while self.not_finished:
            QApplication.processEvents()
As you can see, I tried to create a separate thread to avoid the workload, but it didn't make any difference. Then I tried my luck with the QThread class by overriding the run() method, but that also didn't work. But executing QApplication.processEvents() inside the heavy stuff works well. I also think I shouldn't be using separate threads; I feel like there should be a more elegant way to do this. The widget itself just shows the animated gif next to a 'Processing...' status label.
Thanks in advance! Have a good day.
Edit: I can't move the heavy stuff to a different thread due to bugs in pexpect. Pexpect's spawn() method requires the spawned object and any operations related to the spawned object to be in the same thread. I don't want to change the working flow of the whole program.
In order to update GUI animations, the main Qt loop (located in the main GUI thread) has to be running and processing events. The Qt event loop can only process a single event at a time, however because handling these events typically takes a very short time control is returned rapidly to the loop. This allows the GUI updates (repaints, including animation etc.) to appear smooth.
A common example is having a button to initiate loading of a file. The button press creates an event which is handled, and passed off to your code (either via events directly, or via signals). Now the main thread is in your long-running code, and the event loop is stalled — and will stay stalled until the long-running job (e.g. file load) is complete.
You're correct that you can solve this with threads, but you've gone about it backwards. You want to put your long-running code in a thread (not your call to processEvents). In fact, calling (or interacting with) the GUI from another thread is a recipe for a crash.
The simplest way to work with threads is to use QRunnable and QThreadPool. This allows for multiple execution threads. The following wall of code gives you a custom Worker class that makes it simple to handle this. I normally put this in a file threads.py to keep it out of the way:
import sys
import traceback

from PyQt5.QtCore import QObject, QRunnable, pyqtSignal, pyqtSlot

class WorkerSignals(QObject):
    '''
    Defines the signals available from a running worker thread.

    error
        `tuple` (exctype, value, traceback.format_exc() )

    result
        `dict` data returned from processing
    '''
    finished = pyqtSignal()
    error = pyqtSignal(tuple)
    result = pyqtSignal(dict)


class Worker(QRunnable):
    '''
    Worker thread

    Inherits from QRunnable to handle worker thread setup, signals and wrap-up.

    :param callback: The function callback to run on this worker thread. Supplied args and
                     kwargs will be passed through to the runner.
    :type callback: function
    :param args: Arguments to pass to the callback function
    :param kwargs: Keywords to pass to the callback function
    '''
    def __init__(self, fn, *args, **kwargs):
        super(Worker, self).__init__()
        # Store constructor arguments (re-used for processing)
        self.fn = fn
        self.args = args
        self.kwargs = kwargs
        self.signals = WorkerSignals()

    @pyqtSlot()
    def run(self):
        '''
        Initialise the runner function with passed args, kwargs.
        '''
        # Retrieve args/kwargs here; and fire processing using them
        try:
            result = self.fn(*self.args, **self.kwargs)
        except:
            traceback.print_exc()
            exctype, value = sys.exc_info()[:2]
            self.signals.error.emit((exctype, value, traceback.format_exc()))
        else:
            self.signals.result.emit(result)  # Return the result of the processing
        finally:
            self.signals.finished.emit()  # Done
To use the above, you need a QThreadPool to handle the threads. You only need to create this once, for example during application initialisation.
threadpool = QThreadPool()
Now, create a worker by passing in the Python function to execute:
from .threads import Worker # our custom worker Class
worker = Worker(fn=<Python function>) # create a Worker object
Now attach signals to get back the result, or be notified of an error:
worker.signals.error.connect(<Python function error handler>)
worker.signals.result.connect(<Python function result handler>)
Then, to execute this Worker, you can just pass it to the QThreadPool.
threadpool.start(worker)
Everything will take care of itself, with the result of the work returned to the connected signal... and the main GUI loop will be free to do its thing!
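To tie this back to the question, a minimal runnable sketch of the pattern (heavy_stuff and the signal wiring are illustrative; in the real program the result handler would hide the loading widget instead of printing):

import sys
import time

from PyQt5.QtCore import QThreadPool
from PyQt5.QtWidgets import QApplication

from threads import Worker   # the Worker/WorkerSignals classes shown above

def heavy_stuff():
    time.sleep(5)                  # stand-in for the long-running work (no GUI calls in here!)
    return {'status': 'done'}      # must be a dict, since result is pyqtSignal(dict)

app = QApplication(sys.argv)
threadpool = QThreadPool()

worker = Worker(fn=heavy_stuff)
worker.signals.result.connect(print)         # e.g. self.loadingwidget.hide() in the real app
worker.signals.finished.connect(app.quit)    # end the demo once the work is done
threadpool.start(worker)                     # the Qt event loop below stays free, so animations keep running

sys.exit(app.exec_())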
