Hey, I'm trying to print the output of an interactive command to a file from inside a Python script and then move on to the next line.
I'm not sure how to achieve this. I have tried:
os.system("mnamer foo.mkv > mnamer.txt")
FYI, mnamer can be imported and called from inside the script with "mnamer".
The above command logs the info I need to a file, but I need the script to move past the prompt and execute the next line of code.
Is there a Python-specific way of doing this?
If you can import mnamer as a Python module, do that, call it directly, and log its output to a file by temporarily reassigning sys.stdout and sys.stderr to the file:
import mnamer
import sys
logfile = open("/path/to/log/file.txt", "w") # open the logfile
stdout, stderr = sys.stdout, sys.stderr # make copies of these to be able to restore them after the mnamer commands
sys.stdout = logfile # assign stdout to logfile, infos will be written there
sys.stderr = logfile # assign stderr to logfile, errors will be written there too
# put your mnamer commands here
mnamer.some_method_of_mnamer_module()
sys.stdout = stdout # restore stdout so that further infos will be printed to terminal again
sys.stderr = stderr # restore stderr so that further errors will be printed in terminal again
logfile.close() # close the logfile
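A more robust variant of the same idea uses contextlib's context managers, which restore the streams even if the call raises. Here run_mnamer is a hypothetical stand-in for whatever mnamer function you actually call:

```python
import sys
from contextlib import redirect_stdout, redirect_stderr

# Hypothetical stand-in for a call into the mnamer module
def run_mnamer():
    print("renamed foo.mkv")             # goes to the log file
    print("a warning", file=sys.stderr)  # also goes to the log file

with open('mnamer.txt', 'w') as logfile, \
        redirect_stdout(logfile), redirect_stderr(logfile):
    run_mnamer()
```

On exit from the with block, sys.stdout and sys.stderr are restored automatically, so there is nothing to undo by hand.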
There may well be existing questions similar to mine, but I have not found any with a satisfactory answer.
When I open Jupyter Lab and execute the cell below (code01), it stays in the * status (see figure01 below), meaning it is still running, even though the output in the out1.txt file is printed correctly.
I would like to know whether it is normal for the cell to keep running under the circumstances described in code01.
code01:
import sys
file = open('out1.txt', 'a')
sys.stdout = file
print("house")
file.close()
figure01: (screenshot of the cell stuck with the [*] running indicator; image not included)
Because you redirect stdout to a file and then close it, you are breaking the IPython kernel underneath: no stdout can be processed by the kernel afterwards (it cannot write to a closed file). You can reproduce this by executing your code in the IPython console rather than in a notebook. To fix it, rebind the original stdout back:
import sys
file = open('out1.txt', 'a')
stdout = sys.stdout
sys.stdout = file
print("house")
# before close, not after!
sys.stdout = stdout
file.close()
But this is still not 100% safe; you should ideally use context managers instead:
from contextlib import redirect_stdout

with open('out1.txt', 'a') as f:
    with redirect_stdout(f):
        print('house')
But for this particular case, why not make use of the file argument of the print() function?
with open('out1.txt', 'a') as f:
    print('house', file=f)
I have a simple script that logs to a logfile. This is the core of it:
with open('/var/log/mail/mail.log', 'a', buffering=1) as f:
    for line in sys.stdin:
        f.write(line)
The file being written, /var/log/mail/mail.log, has to be rotated regularly by logrotate. At the moment, when logrotate rotates the file, my script does not notice and continues writing to the old (now renamed) file.
logrotate has the ability to execute a command after the file has been rotated. Normally, when rsyslog is doing the logging, the command would be:
postrotate
invoke-rc.d rsyslog rotate > /dev/null
endscript
But in my case, I need to send some signal to my script and handle that signal in the script.
Also, I don't know in advance which PID my script will be running as.
How can I best implement this?
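One common pattern (a sketch, not taken from the question) is to catch SIGHUP in the script and reopen the log file, while logrotate's postrotate script delivers the signal, e.g. via a pidfile the script writes at startup or with pkill -HUP -f yourscript.py, which sidesteps not knowing the PID. The path below is a temporary stand-in for /var/log/mail/mail.log, and the rename simulates what logrotate does:

```python
import os
import signal
import tempfile

log_path = os.path.join(tempfile.mkdtemp(), 'mail.log')  # stand-in path
logfile = open(log_path, 'a', buffering=1)

def reopen_log(signum, frame):
    # On SIGHUP, close the (renamed) file and reopen the path, which now
    # refers to the fresh file created after rotation.
    global logfile
    logfile.close()
    logfile = open(log_path, 'a', buffering=1)

signal.signal(signal.SIGHUP, reopen_log)

# Simulate a rotation: rename the file, then deliver SIGHUP the way a
# postrotate script would (e.g. kill -HUP "$(cat pidfile)").
os.rename(log_path, log_path + '.1')
os.kill(os.getpid(), signal.SIGHUP)
logfile.write('after rotation\n')
```

After the handler runs, the write lands in the new file at the original path rather than in the rotated one.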
As a solution, you can check whether the inode of the open log file still matches the inode at the path; if not, reopen the file. This only works on Unix.
import os, stat, sys

logfile = '/var/log/mail/mail.log'
while True:
    with open(logfile, 'a', buffering=1) as f:
        f_ino = os.stat(logfile)[stat.ST_INO]
        f_dev = os.stat(logfile)[stat.ST_DEV]
        for line in sys.stdin:
            f.write(line)
            try:
                if (os.stat(logfile)[stat.ST_INO] != f_ino or
                        os.stat(logfile)[stat.ST_DEV] != f_dev):
                    break
            except OSError:  # was IOError with Python < 3.4
                pass
Closing the file is not required, as that is handled by the with context manager.
The try..except OSError block catches errors raised by os.stat. During the swap of the file, os.stat can raise an OSError (for example a FileNotFoundError); in that case we ignore the exception and retry the inode check on the next iteration. Without this block, a transient error could terminate the program.
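The inode check can be demonstrated in isolation (paths below are examples): "rotate" the log by renaming it, as logrotate does, and watch the stat of the path diverge from the inode recorded at open time.

```python
import os
import stat
import tempfile

logdir = tempfile.mkdtemp()
path = os.path.join(logdir, 'mail.log')
with open(path, 'a') as f:
    f_ino = os.stat(path)[stat.ST_INO]
    os.rename(path, path + '.1')   # rotation renames the old file...
    open(path, 'w').close()        # ...and a fresh file appears at the path
    rotated = os.stat(path)[stat.ST_INO] != f_ino
```

The open file object f still points at the renamed file; only a fresh open() of the path picks up the new one, which is exactly why the loop above reopens.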
I'm trying to run a Python script in Ubuntu and see the output in the terminal while simultaneously saving it to a file. I already know how to save the output to a .txt file, but when I run this, I don't see anything in the terminal; I have to keep reloading the text file to see the output:
import subprocess
import sys
for mode in modes:
    log_path = 'Logs/log%s.txt'
    for scriptInstance in [1, 2, 3, 4, 5]:
        sys.stdout = open(log_path % scriptInstance, 'w')
        subprocess.call('python3 main.py',
                        stdout=sys.stdout, stderr=subprocess.STDOUT,
                        shell=True)
You should check out Python's logging module. You can attach a StreamHandler to log to the terminal and a FileHandler to log to a file at the same time.
Check this logging tutorial.
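A minimal sketch of that idea; the logger name and log path are illustrative, not taken from the question's script:

```python
import logging

# One logger, two handlers: every record goes to both destinations.
logger = logging.getLogger('runner')
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())         # echoes to the terminal
logger.addHandler(logging.FileHandler('run.log'))  # also writes to a file

logger.info('starting instance 1')
```

For the subprocess output itself you would read the child's stdout and feed each line through logger.info, instead of reassigning sys.stdout.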
I have this function in my Python script:
def start_pushdata_server(Logger):
    Logger.write_event("Starting pushdata Server..", "INFO")
    retcode, stdout, stderr = run_shell(create_shell_command("pushdata-server start"))
We want to redirect the standard error of the pushdata-server binary to /dev/null, so we edited it like this:
def start_pushdata_server(Logger):
    Logger.write_event("Starting pushdata Server..", "INFO")
    retcode, stdout, stderr = run_shell(create_shell_command("pushdata-server start 2>/dev/null"))
But adding the 2>/dev/null in the Python code isn't valid.
So how can we, in the Python code, send all errors from "pushdata-server start" to /dev/null?
This code, added to a Python script running on Unix or Linux, will redirect all stderr output to /dev/null:
import os  # if you have not already done this
fd = os.open('/dev/null', os.O_WRONLY)
os.dup2(fd, 2)
If you want to do this for only part of your code:
import os  # if you have not already done this
fd = os.open('/dev/null', os.O_WRONLY)
savefd = os.dup(2)
os.dup2(fd, 2)
The part of your code to have stderr redirected goes here. Then to restore stderr back to where it was:
os.dup2(savefd,2)
If you want to do this for stdout, use 1 instead of 2 in the os.dup and os.dup2 calls (dup2 stays dup2), and flush stdout before doing any such group of os.* calls. Use different names instead of fd and/or savefd if those conflict with your code.
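Putting the pieces above together as a runnable sketch: a temporary file stands in for /dev/null here purely so the effect is observable; with /dev/null the mechanics are identical.

```python
import os
import sys
import tempfile

# A temp file stands in for /dev/null so we can see what was diverted.
tmp_fd, tmp_path = tempfile.mkstemp()
os.close(tmp_fd)

fd = os.open(tmp_path, os.O_WRONLY)
sys.stderr.flush()            # flush Python-level buffers first
savefd = os.dup(2)            # keep a copy of the original stderr
os.dup2(fd, 2)                # fd 2 now points at the temp file
os.write(2, b'redirected\n')  # with /dev/null this would be discarded
os.dup2(savefd, 2)            # restore the original stderr
os.close(fd)
os.close(savefd)
```

Because the redirection happens at the file-descriptor level, it also silences output written by child processes and C libraries, not just Python's sys.stderr.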
Avoiding the complexities of the run_shell(create_shell_command(...)) part, which isn't well defined anyway, try:
import subprocess
subprocess.run(['pushdata-server', 'start'], stderr=subprocess.DEVNULL)
This doesn't involve a shell at all; your command doesn't seem to require one.
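The same call can be tried without the real binary; here a small Python one-liner stands in for pushdata-server and writes to both streams, showing that stderr is discarded while stdout survives:

```python
import subprocess
import sys

# Stand-in child process instead of the real pushdata-server binary:
# it writes one line to stdout and one to stderr.
result = subprocess.run(
    [sys.executable, '-c',
     'import sys; print("started"); print("noise", file=sys.stderr)'],
    stdout=subprocess.PIPE, stderr=subprocess.DEVNULL, text=True)
```

result.stdout contains only the stdout line; the stderr line never reaches the parent.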
This is a simple script that logs into a Linux box and executes a grep on a resource there. I just need to be able to view the output of the command I execute, but nothing is returned. The code doesn't report any error, but the desired output is never written.
Below is the code:
import socket
import re
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('linux_box', port=22, username='abc', password='xyz')
stdin, stdout, stderr = ssh.exec_command('grep xyz *.set')
output2 = stdout.readlines()
type(output2)
This is the output I get:
C:\Python34\python.exe C:/Python34/Paramiko.py
Process finished with exit code 0
You never actually print anything to standard output.
Changing the last line to print(output2) should print the value correctly.
Your code was likely based on interactive Python shell experiments, where the value of the last evaluated expression is printed to standard output implicitly. In non-interactive mode this does not happen, which is why you need to call the print function explicitly.
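The difference is easy to reproduce without paramiko; the list below stands in for stdout.readlines(), and the captured buffers show what each form actually writes:

```python
import io
from contextlib import redirect_stdout

output2 = ['match1\n', 'match2\n']       # stand-in for stdout.readlines()

buf = io.StringIO()
with redirect_stdout(buf):
    type(output2)                        # bare expression: writes nothing
shown_without_print = buf.getvalue()

buf2 = io.StringIO()
with redirect_stdout(buf2):
    print(output2)                       # explicit print writes the value
shown_with_print = buf2.getvalue()
```

In a script, the bare expression produces an empty string of output, while the print call emits the list's repr, which is the behavior the interactive shell gives you for free.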