I have a simple script that logs to a logfile. This is the core of my script:
with open('/var/log/mail/mail.log', 'a', buffering=1) as f:
    for line in sys.stdin:
        f.write(line)
The file being written, /var/log/mail/mail.log, has to be rotated regularly by logrotate. At the moment, when logrotate rotates the file, my script does not notice and keeps writing to the old (now renamed) file.
logrotate can execute a command after the file has been rotated. Normally, when rsyslog is doing the logging, the command would be:
postrotate
    invoke-rc.d rsyslog rotate > /dev/null
endscript
But in my case, I need to send some signal to my script, and handle the signal in my script.
Also, I don't know in advance the PID my script is running as.
How can I best implement this?
One way to solve this is to check whether the inode of the open log file still matches the inode of the path. If it does not, reopen the file. This only works on Unix.
import os, sys, stat

logfile = '/var/log/mail/mail.log'

while True:
    with open(logfile, 'a', buffering=1) as f:
        # Remember which file we actually opened.
        f_ino = os.stat(logfile)[stat.ST_INO]
        f_dev = os.stat(logfile)[stat.ST_DEV]
        for line in sys.stdin:
            f.write(line)
            try:
                # If the path now points to a different file, logrotate has
                # replaced it, so leave the with block and reopen.
                if (os.stat(logfile)[stat.ST_INO] != f_ino or
                        os.stat(logfile)[stat.ST_DEV] != f_dev):
                    break
            except OSError:  # Was IOError with Python < 3.4
                pass
        else:
            # stdin reached EOF: stop instead of spinning on an exhausted stdin
            break
Closing the file is not required as it’s handled by the with context manager.
The try..except OSError block catches errors raised by the system call os.stat. While the file is being rotated, os.stat may raise an OSError (for example a FileNotFoundError); in that case the exception is ignored and the inode check simply runs again on the next line. Without this try..except block the program might terminate unexpectedly.
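For completeness, the signal-based approach the question asks about could look roughly like the sketch below. It assumes the script writes its own PID to a pidfile (the path /var/run/maillog-writer.pid is made up) so that logrotate's postrotate script can send it SIGHUP, and it reopens the log file whenever that signal arrives:
import os
import signal
import sys

logfile = '/var/log/mail/mail.log'
# Made-up pidfile path; postrotate would then run something like:
#   kill -HUP "$(cat /var/run/maillog-writer.pid)"
pidfile = '/var/run/maillog-writer.pid'

with open(pidfile, 'w') as p:
    p.write(str(os.getpid()))

f = open(logfile, 'a', buffering=1)

def reopen(signum, frame):
    # Called on SIGHUP: switch to the freshly created log file.
    global f
    f.close()
    f = open(logfile, 'a', buffering=1)

signal.signal(signal.SIGHUP, reopen)

for line in sys.stdin:
    f.write(line)
f.close()
The inode check above avoids all of this bookkeeping, which is why it is usually the simpler option.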
I imagine there may be existing questions similar to mine, but I have not found any with a satisfactory answer.
When I open Jupyter Lab and execute a cell with the code below (code01), the cell keeps the * status (see figure01 below), meaning it is still running the code, yet the output is written to the out1.txt file correctly.
I would like to know whether it is normal for the cell to keep running under the circumstances described in code01.
code01:
import sys
file = open('out1.txt', 'a')
sys.stdout = file
print("house")
file.close()
figure01: (screenshot of the notebook cell still showing the * running indicator)
Because you redirect stdout to a file and then close it, you break the IPython kernel underneath: there is no way for any stdout to be correctly processed by the kernel afterwards (it cannot write to a closed file). You can reproduce this by executing your code in the IPython console rather than in a notebook. To fix it, you could rebind the original stdout back:
import sys
file = open('out1.txt', 'a')
stdout = sys.stdout
sys.stdout = file
print("house")
# before close, not after!
sys.stdout = stdout
file.close()
But this is still not 100% safe; you should ideally use context managers instead:
from contextlib import redirect_stdout
with open('out1.txt', 'a') as f:
    with redirect_stdout(f):
        print('house')
But for this particular case, why not make use of the file argument of the print() function?
with open('out1.txt', 'a') as f:
    print('house', file=f)
I have a very simple (test) piece of code which I'm running either from a Linux shell or in interactive mode, and I get two different behaviours whose cause I cannot figure out.
I have a file, generated previously by a Popen call, where each line is a file path. This is the code used to generate the file:
with open('find.txt','w') as f:
    find = subprocess.Popen(["find",".","-name","myfile.out"],stdout=f)
(Incidentally, I was originally trying to build a pipe, feeding the output of this command into a grep command; since I wasn't successful in any way, I decided to break the problem down and just read the file paths from a file and process them one by one. So maybe there is a common issue blocking me somewhere in this procedure.)
Since in this second step I wasn't even able to open and process the files at the paths contained in each line of find.txt, I just tried to print the lines out, because they are certainly in there:
with open('find.txt','r') as g:
    for l in g.readlines():
        print(l)
Now, the interesting part:
If I paste the lines above into a Python shell, everything works fine and I get my output as expected.
If, on the other hand, I run python test.py, where test.py is the name of the file containing the lines above, no output appears in the shell's stdout.
I've tried sys.stdout.flush() to no avail. I've also inserted some dummy print() statements along the way: everything gets printed except what comes after the g.readlines() statement.
Here's the full script I'm trying to make work (a pre-precursor of what I'm actually after, tbh).
#!/usr/bin/env python3
import subprocess
import sys
with open('find.txt','w') as f:
    find = subprocess.Popen(["find",".","-name","myfile.out"],stdout=f)
print('hello')
with open('find.txt','r') as g:
    print('hello?')
    for l in g.readlines():
        print('help me!')
        print(l)
sys.stdout.flush()
output being:
{ancis:>106> python test.py
hello
hello?
{ancis:>106>
EDIT
I've quickly tried the very same lines (but without the call to find, which isn't available) with my Python installation on Windows: it works as expected.
Based on that, I've tried to run the simpler code below:
print('hello')
with open('find.txt','r') as g:
    print('hello?')
    for l in g.readlines():
        print('help me!')
        print(l)
sys.stdout.flush()
as a script, on Linux. This also works without problems.
This should mean that somehow I'm messing things up with the call to Popen... But what?
This is a race condition.
Your call to
find = subprocess.Popen(["find",".","-name","myfile.out"],stdout=f)
spawns another process and runs your find command, which takes a bit of time to fully execute.
Python then continues on and reaches the file-reading part before the command has finished executing and the file has been fully generated.
Want to test it out?
Add a time.sleep(1) just before the opening of the file.
Full test script:
#!/usr/bin/env python3
import subprocess
import time
with open('find.txt','w') as f:
    find = subprocess.Popen(["find",".","-name","myfile.out"],stdout=f)
time.sleep(1)
with open('find.txt','r') as g:
    for l in g:
        print(l)
To block until the process is complete, you can use find.communicate().
With this you can also optionally set a timeout, if that's something you want.
#!/usr/bin/env python3
import subprocess
with open('find.txt','w') as f:
    find = subprocess.Popen(["find",".","-name","myfile.out"],stdout=f)
    find.communicate()
with open('find.txt','r') as g:
    for l in g:
        print(l)
Source:
https://docs.python.org/3/library/subprocess.html#subprocess.Popen.communicate
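If you want the optional timeout mentioned above, here is a minimal sketch (the 30-second value is arbitrary):
#!/usr/bin/env python3
import subprocess

with open('find.txt','w') as f:
    find = subprocess.Popen(["find",".","-name","myfile.out"],stdout=f)
    try:
        find.communicate(timeout=30)   # wait at most 30 seconds
    except subprocess.TimeoutExpired:
        find.kill()                    # stop the process...
        find.communicate()             # ...and reap it, as the docs recommend
with open('find.txt','r') as g:
    for l in g:
        print(l)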
Hey, I'm trying to print the output of an interactive command to a file inside a Python script and then move on to the next line.
I am not sure how to achieve this. I have tried:
os.system("mnamer foo.mkv > mnamer.txt")
FYI mnamer can be imported and called from inside the script with "mnamer"
The above command logs the info I need to a file, but I need the script to move past the prompt and continue with the next line of code.
Is there a Python-specific way of doing this?
If you can import mnamer as a Python module, do that, use it this way, and log its output to a file by temporarily assigning sys.stdout and sys.stderr to a file:
import mnamer
import sys
logfile = open("/path/to/log/file.txt", "w") # open the logfile
stdout, stderr = sys.stdout, sys.stderr # make copies of these to be able to restore them after the mnamer commands
sys.stdout = logfile # assign stdout to logfile, infos will be written there
sys.stderr = logfile # assign stderr to logfile, errors will be written there too
# put your mnamer commands here
mnamer.some_method_of_mnamer_module()
sys.stdout = stdout # restore stdout so that further infos will be printed to terminal again
sys.stderr = stderr # restore stderr so that further errors will be printed in terminal again
logfile.close() # close the logfile
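A tidier variant of the same idea uses the context managers from contextlib, so stdout and stderr are restored automatically even if an exception occurs (the mnamer call is still a placeholder):
from contextlib import redirect_stdout, redirect_stderr

import mnamer

with open("/path/to/log/file.txt", "w") as logfile:
    with redirect_stdout(logfile), redirect_stderr(logfile):
        mnamer.some_method_of_mnamer_module()  # placeholder for your mnamer commands
# at this point stdout and stderr point at the terminal again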
I'm running a shell command in a Jupyter Notebook using subprocess or os.system(). The actual output is a dump of thousands of lines which takes at least a minute to print to stdout in the terminal. In my notebook, I just want to know whether the output is more than a couple of lines, because if it were an error the output would only be 1 or 2 lines. What's the best way to check whether I'm receiving 20+ lines, and then stop the process and move on to the next?
You could read line by line using subprocess.Popen and count the lines (redirecting and merging the output and error streams; merging may not be needed, depending on the process).
If the number of lines exceeds 20, kill the process and break the loop.
If the loop ends before the number of lines reaches 20, print/handle an error
code:
import subprocess

# cmd is the command to run, as a list, e.g. ["some_command", "arg1"]
p = subprocess.Popen(cmd,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)
for lineno, line in enumerate(iter(p.stdout.readline, b'')):
    if lineno == 20:
        print("process okay")
        p.kill()
        break
else:
    # too short, break wasn't reached
    print("process failed, return code: {}".format(p.wait()))
Note that checking p.poll() is not None can also help to figure out whether the process has ended prematurely.
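For instance (a sketch, reusing the same p as above):
# p.poll() returns None while the process is still running and its return
# code once it has exited, so it can be checked without blocking.
if p.poll() is not None:
    print("process already exited with return code {}".format(p.poll()))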
I'm new to Linux, and have been trying to solve an assignment but to no avail.
I have a shell script which prints out lines of a text file in a certain manner (a line every few seconds):
python << END
import time, random
a = open('/home/ch/pshety/course/fielding_history.txt', 'r')
flag = False
for i in range(1000):
    b = a.readline()
    if i == 402 or flag:
        print(a.readline())
        flag = True
        time.sleep(2)
END
sh th.sh
If I run it without trying to redirect it anywhere, I get the output on the terminal. However, when I try to redirect it into a new text file, nothing happens and the file remains empty:
sh th.sh > debug.txt
I've tried looking for answers and stumbled upon a lot of suggestions, including tee, but nothing helps: the file remains empty.
What am I doing wrong?
Try this:
import time, random
a = open('/home/ch/pshety/course/fielding_history.txt', 'r')
for i in range(1000):
    b = a.readline()
    if i >= 402:
        print(b, flush=True)
        time.sleep(2)
Your Python script likely needs to flush the contents of the output buffer before you can see it.
Note: aside from the sleep() call, Unix provides other ways of accomplishing this. I would take a look at man tail and read about the -f and -n switches.
Edit: didn't realize that tail has a switch (-s) to sleep as well!
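For instance, something along these lines prints everything from line 403 onward and then keeps following the file (a sketch; unlike the sleep() loop it does not throttle the output):
tail -n +403 -f /home/ch/pshety/course/fielding_history.txt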