Python Logging: Exception clears log file - python-3.x

Looking for a hint of what to look for:
I inherited some code and am going through the process of reviewing and debugging it. The author put lots of logging.info calls throughout the code, which looked promising. The main.py has the following:
logging.basicConfig(filename='struc2vec.log', filemode='w', level=logging.DEBUG, format='%(asctime)s %(message)s')
When I monitor the size of the log file while the program runs, it increases and decreases, e.g. 5k...6k...2k...6k... If I randomly stop the program I can see that some things have been written to the file, but if an exception crashes the program the log file is cleared.
It is as if logging is being reinitialized over and over. The logging docs indicate the file should just keep growing (at DEBUG level) indefinitely. There are a number of modules and each one imports logging; does that make a difference? What should I look for that might be clearing or overwriting the log file repeatedly?
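One pattern worth comparing against the codebase, as a sketch: it assumes the truncation comes from basicConfig's filemode='w' being run in a fresh process (e.g. a multiprocessing worker, or main.py being re-executed), since 'w' truncates the file every time the root logger is configured.
import logging

# Hedged sketch: configure the root logger exactly once, in the entry point only.
# filemode='w' reopens and truncates the file whenever basicConfig configures the
# root logger in a new process; 'a' appends instead.
if __name__ == '__main__':
    logging.basicConfig(
        filename='struc2vec.log',
        filemode='a',  # append instead of truncating on every (re)initialization
        level=logging.DEBUG,
        format='%(asctime)s %(message)s',
    )

# Every other module should only do this; importing logging and calling
# getLogger never reopens or truncates the log file.
logger = logging.getLogger(__name__)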

Related

How to initiate a logfile and change handler.suffix in a TimedRotatingFileHandler

I have a script that monitors a platform's chat and logs it to a file. Here is the code to set up my logger:
import logging
import logging.handlers as handlers
import time
from datetime import datetime
chatlogger = logging.getLogger("chatlog")
chatlogger.setLevel(logging.INFO)
logHandler = handlers.TimedRotatingFileHandler('chatlog_', when='midnight', interval=1, encoding='utf-8')
logHandler.setLevel(logging.INFO)
logHandler.suffix = "%Y%m%d.log"
chatlogger.addHandler(logHandler)
logHandler.doRollover()  # this line is needed if when='midnight', otherwise it does not create the proper file
This works: chatlog_.yyyymmdd.log gets created and it rolls over when it should. However, there are two small issues I'd like to address, or address differently.
The first is that the very first log file the script creates does not have the suffix; it is just 'chatlog_' and nothing else. I added the doRollover() call to correct this; is there a different or better way to handle initiating the log file? The script will be run 24/7 (or as close to that as possible), being restarted with the machine.
The second issue is more of an aesthetic thing. The logHandler.suffix adds a '.' between the filename and the suffix. Is there something I can do to stop that from happening?
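For the '.' between the base name and the suffix, one option is the namer hook available on the rotating handlers (Python 3.3+): it receives the default rotated filename and can return anything. A minimal sketch, assuming the same setup as above:
import logging
import logging.handlers as handlers

chatlogger = logging.getLogger("chatlog")
chatlogger.setLevel(logging.INFO)

logHandler = handlers.TimedRotatingFileHandler(
    'chatlog_', when='midnight', interval=1, encoding='utf-8')
logHandler.suffix = "%Y%m%d.log"

# The rotated name is built as baseFilename + "." + suffix; namer gets that
# default name and may rewrite it, so stripping the extra dot removes the
# separator. (If you also set backupCount, check that old-file cleanup still
# matches the renamed files.)
logHandler.namer = lambda name: name.replace("_.", "_")

chatlogger.addHandler(logHandler)
As for the first issue: the live file is always named with just the base filename until a rollover happens, so calling doRollover() once at startup, as you already do, is a reasonable workaround.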

Papermill prints everything on the console

I am working on adding new features to a project. Within the project I am logging stdout to a file, because some components print information that is useful for debugging. I recently added a new feature that uses papermill to run a Jupyter notebook. The problem I am having is that papermill prints everything to the console, even if I redirect stdout to a temporary variable.
Below is a sample of the code:
import io
from contextlib import redirect_stdout

import papermill as pm

# path and params are defined elsewhere in the project
with io.StringIO() as buf, redirect_stdout(buf):
    pm.execute_notebook(
        path,
        path,
        progress_bar=False,
        stdout_file=buf,
        stderr_file=buf,
        parameters=dict(**params)
    )
    print("!!! redirected !!!")
print("!!! redirected !!!")
The first print statement successfully gets redirected to buf, while everything pm.execute_notebook prints goes to the console. The last print statement prints to the console, as expected.
To solve the problem I had to change the handler and logging level of the logger.
To get the logger:
logger = logging.getLogger('papermill')
To change the logging level:
logger.setLevel('WARNING')
To remove the stream handler:
logger.removeHandler(logging.StreamHandler())
Removing the stream handler and setting the right level solved my problem. Here is a link to the Python logging documentation.
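One thing worth noting: removeHandler only removes the exact handler object that is already attached, so logger.removeHandler(logging.StreamHandler()) constructs a brand-new handler and is effectively a no-op. A sketch that instead removes whatever stream handlers the papermill logger actually has (assuming it attaches them itself at import or run time):
import logging

logger = logging.getLogger('papermill')
logger.setLevel(logging.WARNING)

# Remove the handler instances that are really attached to the logger,
# rather than a freshly constructed StreamHandler that was never added.
for handler in list(logger.handlers):
    if isinstance(handler, logging.StreamHandler):
        logger.removeHandler(handler)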

File watcher in Python 3.5 using the watchgod library

Hi everyone, I am trying to build a file watcher in Python 3.5 using watchgod. I want to continuously watch a directory, and if any file is added I want to send a list of the added files to another program which will perform a series of tasks. Following is my code in Python:
print("execution of main file begins !!!!")
import os
from watchgod import watch
#changes gives a set object when watch finds any kind of changes in directory
for changes in watch(r'C:\Users\Rajat.Malik\Desktop\Requests'):
fileStatus = [obj[0] for obj in list(changes) ] #converting set to list which gives file status as added, changed or modified
fileLocation = [obj[1] for obj in list(changes) ] #similarly getting list of location of files added
var2 = 0
for var1 in fileLocation:
if fileStatus[var2] == 1: #if file is added then passing all files to another code which will work on the list of files added
os.system('python split_thread_module.py '+var1) #now this code will start executing
var2 = var2 + 1
So the problem I am having is that while split_thread_module.py is executing, the watcher is not watching the directory. Any file that arrives while split_thread_module.py is executing is not reflected in changes. How can I watch for changes in the directory and pass them to the other program on the fly, even while the other program is executing? I am not a Python programmer. Can anyone help me in this regard?
Thanks in advance!
Sorry for the delayed reply, I'm the developer of watchgod. I've added a python-watchgod tag to your question which I'll watch (no pun intended) in future so I can answer such questions more quickly.
To answer your question, watchgod will not miss changes which occur in the filesystem while other code is running. They'll just be reported as changes next time watch iterates.
More generally the best approach would be to run the other code asynchronously so the main process can get back to watching the directory.
A few other hints for neater Python:
- There is no need to call list(changes) in the comprehension.
- os.system is discouraged; better to use subprocess.run (a quick sketch follows this list).
- Since split_thread_module.py is also Python, do you really need to run it in a separate process? Even if you do, you might have more luck with Python multiprocessing than starting a new process via the system shell.
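A quick sketch of the subprocess.run suggestion, keeping the rest of the loop unchanged:
import subprocess

# instead of os.system('python split_thread_module.py ' + var1):
subprocess.run(['python', 'split_thread_module.py', var1], check=True)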
Overall you might try something like:
from concurrent.futures import ProcessPoolExecutor
from time import sleep

from watchgod import watch


def slow_job(status, location):
    print(f'status: {status}, location: {location}, starting...')
    sleep(10)
    print(f'status: {status}, location: {location}, done')


with ProcessPoolExecutor() as executor:
    for changes in watch('./tests'):
        for status, location in changes:
            executor.submit(slow_job, status, location)

Plugin to open file at eof

I'm trying to create a plugin that opens a .log file associated with a file I'm editing. I was able to open the file but could not make the cursor move to the end of the file, unless I run the code again when the file is already open.
import sublime
import sublime_plugin
import os  # needed for os.path.isfile below


class OpenlogCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        if os.path.isfile(self.view.file_name()[:-3] + "log"):
            a = sublime.active_window().open_file(self.view.file_name()[:-3] + "log")
            a.run_command("move_to", {"to": "eof"})
Does anybody know how to do it?
The reason that this doesn't work unless the file is already open is because the loading of a file is asynchronous; the command to open the file returns right away and the file is loaded in the background if it's not already open.
Thus the first time you run the command, the move_to command does nothing because it's already at the end of an empty buffer, but when the file has already been loaded it does what you expect.
To get around that, you need to detect if the file is still loading and defer the call to jump to the end of the file until after it's finished. An example of that is the following:
import sublime
import sublime_plugin
import os


class OpenLogCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        log_name = self.view.file_name()[:-3] + "log"
        log_view = self.view.window().open_file(log_name)
        if log_view.is_loading():
            log_view.settings().set("_goto_eol", True)
        else:
            log_view.run_command("move_to", {"to": "eof"})

    def is_enabled(self):
        fname = self.view.file_name()
        if fname is not None and not fname.endswith(".log"):
            return os.path.isfile(fname[:-3] + "log")
        return False


class OpenLogListener(sublime_plugin.EventListener):
    def on_load(self, view):
        if view.settings().get("_goto_eol", False):
            view.settings().erase("_goto_eol")
            view.run_command("move_to", {"to": "eof"})
An issue with your existing version of this is that the file_name() method returns None if the file has not been saved to disk yet. Thus if you run that command on an unsaved file it will generate an error in the console. This is harmless, but a little unclean since it might be a red herring if you have other problems and happen to see those errors in the console.
Here the command will only enable itself if the file has been saved to stop that kind of problem. It will also only enable itself if it's not already a log file (since that would be redundant), and where an associated log file actually exists.
When a command is disabled, you can't execute it. That means that it also won't show up in the Command Palette and it will appear grayed out in menus (assuming you've added it to either of those).
When you run the command, it first calls open_file to open the associated log file, and then asks the view "Are you still loading?". When the view says NO, it means that the file is already open, and so we can immediately jump to the end of the file.
If the view says YES to this question, we set a temporary setting on the view so that we know, once the view's content has finished loading, that we want to jump to the end of the buffer.
The event listener asks every view, as it finishes loading, whether it has this setting set; when it does, it removes the setting and then jumps to the end of the file.
[edit]
As mentioned in the comments below, the move_to command behaves slightly differently for a file that's already open versus a file that has just finished loading.
I'm not entirely sure why that's the case, but I suspect there is some subtle interplay with the on_load notification being delivered right when the file content has been loaded but not yet displayed, or something along those lines; this is just a guess.
In any case, the most expedient fix would be to make a slight modification to the event listener by replacing that part of the code above with this instead:
class OpenLogListener(sublime_plugin.EventListener):
    def on_load(self, view):
        if view.settings().get("_goto_eol", False):
            view.settings().erase("_goto_eol")
            sublime.set_timeout(lambda: view.run_command("move_to", {"to": "eof"}), 1)
This changes things up a bit so that the call to the move_to command effectively happens after all event handling has been completed. That seems to resolve the issue on my test machine, at least.

File Handles and Logging in Node JS

I have an application that will log data to a text file anytime the application is used. It may be used once a day or maybe once every second.
Is there anything wrong with opening a handle to the text file and keeping it open for as long as my application is running, never closing it? That way I can just append data to the log without reopening the file handle. Are there consequences to this decision I should be aware of?
fs.open returns a file descriptor, and if you don't close it you may end up with a file descriptor leak; the descriptor won't be closed automatically, even if an error occurs.
On the other hand, fs.readFile, fs.writeFile, fs.appendFile, fs.createReadStream and fs.createWriteStream don't return a file descriptor. They open the file, operate on it and then close it for you.
