python - How to redirect the errors to /dev/null? - linux

I have this function in my Python script:
def start_pushdata_server(Logger):
    Logger.write_event("Starting pushdata Server..", "INFO")
    retcode, stdout, stderr = run_shell(create_shell_command("pushdata-server start"))
We want to redirect the standard error from the pushdata-server binary to /dev/null, so we edited it like this:
def start_pushdata_server(Logger):
    Logger.write_event("Starting pushdata Server..", "INFO")
    retcode, stdout, stderr = run_shell(create_shell_command("pushdata-server start 2>/dev/null"))
But adding the 2>/dev/null in the Python code isn't valid. So how can we, from the Python code, send all errors from "pushdata-server start" to /dev/null?

This code, added to a Python script running on Unix or Linux, will redirect all stderr output to /dev/null:
import os  # if you have not already done this
fd = os.open('/dev/null', os.O_WRONLY)
os.dup2(fd, 2)  # fd 2 is stderr; it now points at /dev/null
If you want to do this for only part of your code:
import os  # if you have not already done this
fd = os.open('/dev/null', os.O_WRONLY)
savefd = os.dup(2)  # save a copy of the original stderr
os.dup2(fd, 2)
The part of your code to have stderr redirected goes here. Then to restore stderr back to where it was:
os.dup2(savefd, 2)
If you want to do this for stdout, use 1 instead of 2 in the os.dup and os.dup2 calls (dup2 stays dup2), and flush stdout before doing any such group of os calls. Use different names instead of fd and/or savefd if those conflict with your code.
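Putting the pieces together, here is a minimal sketch of the temporary-redirect pattern described above; the os.system call is a stand-in for the asker's run_shell helper, which is not shown in the question:
import os

fd = os.open('/dev/null', os.O_WRONLY)
savefd = os.dup(2)                  # remember where stderr used to go
os.dup2(fd, 2)                      # fd 2 (stderr) now points at /dev/null

os.system('pushdata-server start')  # anything the child writes to stderr is discarded

os.dup2(savefd, 2)                  # restore the original stderr
os.close(fd)
os.close(savefd)
Because the redirection happens at the file-descriptor level, it also applies to any child processes started while it is in effect.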

Avoiding the complexities of the run_shell(create_shell_command(...)) part, which isn't well-defined anyway, try:
import subprocess
subprocess.run(['pushdata-server', 'start'], stderr=subprocess.DEVNULL)
This doesn't involve a shell at all; your command doesn't seem to require one.
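If you also want the return code and the captured stdout, as in the original retcode, stdout, stderr tuple, something like this sketch should work (capture via stdout=subprocess.PIPE; the text= keyword requires Python 3.7+):
import subprocess

result = subprocess.run(['pushdata-server', 'start'],
                        stdout=subprocess.PIPE,     # capture stdout
                        stderr=subprocess.DEVNULL,  # discard stderr
                        text=True)                  # decode bytes to str (Python 3.7+)
retcode, out = result.returncode, result.stdout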

Related

write out values to pipe between python and lua script

I am writing a program which uses a Lua script and a Python script. I am calling the Python script from within Lua as:
-- lua
pipe = io.popen("python3 main.py", "w")
Now, when the Python code executes I want to do something like this:
# python
sys.stdout.write(str(timevar))
The problem is that timevar is being sent to the Linux terminal, and I cannot catch it in the pipe inside the Lua script with:
-- lua
result = pipe:read("*a")
Hence, how do I send data via the pipe? I am reading from the pipe with:
# python
import fileinput
info = [line[:-1] for line in fileinput.input()]
which works well, but writing to output does not, so I am not sure if I made a mistake somewhere or if Python requires something else to be done.

Python Subprocess.run standard out

I have the following function that executes as expected.
def get_files(paths):
    for path in paths:
        file_name = parse_path(path)
        csv_command = "curl -b ./cookie {} > ./tmp/{}".format(path, file_name)
        subprocess.run([csv_command], shell=True)
        print("success")
My issue here is that I am also capturing standard out from the subprocess. How do I modify the function to ignore the standard out of the subprocess? I will be logging using a logger and need to make sure that logging still occurs to stdout.
Pass a handle to the null device for standard output (which can be accessed as subprocess.DEVNULL).
This should suppress output from your function:
subprocess.run([csv_command], shell=True, stdout=subprocess.DEVNULL)
You can see some more options by reading this page: https://docs.python.org/3/library/subprocess.html#subprocess.DEVNULL
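Applied to the function from the question, that looks roughly like this (parse_path is the asker's helper and is assumed to exist elsewhere):
def get_files(paths):
    for path in paths:
        file_name = parse_path(path)
        csv_command = "curl -b ./cookie {} > ./tmp/{}".format(path, file_name)
        # The child's stdout is discarded; print/logging in this script
        # still write to the parent's stdout, which is unaffected.
        subprocess.run([csv_command], shell=True, stdout=subprocess.DEVNULL)
        print("success")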
Just close the file descriptor:
process.stdout.close()
where process is the Popen object you do not want to read from.
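A minimal sketch of that approach, reusing csv_command from the question, with one caveat worth knowing: a child that keeps writing to a closed pipe may receive SIGPIPE and terminate early.
import subprocess

process = subprocess.Popen(csv_command, shell=True, stdout=subprocess.PIPE)
process.stdout.close()  # parent will never read the child's output
process.wait()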

isatty() always returning False?

I want to pipe data via stdin to a Python script for onwards processing. The command is:
tail -f /home/pi/ALL.TXT | python3 ./logcheck.py
And the Python code is:
import sys
while 1:
    if sys.stdin.isatty():
        for line in sys.stdin:
            print(line)
I want the code to continuously watch stdin and then process each row as it is received. The tail command works when run on its own, but the Python script never outputs anything.
Checking isatty(), it appears to always return False?
Help!
A TTY is when you use your regular terminal, as in opening Python in your shell and typing:
BASH> python
>>> from sys import stdin
>>> stdin.isatty()  # True
In your case the standard input is coming from something which is not a TTY. Just add a not to the if statement.
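A minimal sketch of the corrected logcheck.py under that advice (when stdin is a pipe, as with tail -f | ..., isatty() returns False):
import sys

if not sys.stdin.isatty():
    for line in sys.stdin:
        print(line, end="")  # lines already carry a trailing newline
Note that the outer while 1 loop is unnecessary: iterating over sys.stdin already blocks waiting for new lines from tail -f.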

No output given by stdout.readlines()

This is a simple script that logs into a Linux box and executes a grep against a resource on the box. I just need to be able to view the output of the command I execute, but nothing is returned. The code doesn't report any error, but the desired output is not written.
Below is the code:
import socket
import re
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('linux_box', port=22, username='abc', password='xyz')
stdin, stdout, stderr = ssh.exec_command('grep xyz *.set')
output2 = stdout.readlines()
type(output2)
This is the output I get:
C:\Python34\python.exe C:/Python34/Paramiko.py
Process finished with exit code 0
You never actually print anything to standard output.
Changing the last line to print(output2) should print the value correctly.
Your code was likely based on interactive Python shell experiments, where the return value of the last executed expression is printed to standard output implicitly. In non-interactive mode such behavior does not occur; that's why you need to call the print function explicitly.
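In other words, the end of the script becomes a one-line change:
output2 = stdout.readlines()
print(output2)  # or print(''.join(output2)) for readable text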

python subprocess.readline() blocking when calling another python script

I've been playing with using the subprocess module to run Python scripts as sub-processes and have come across a problem with reading output line by line.
The documentation I have read indicates that you should be able to use subprocess and call readline() on stdout, and this does indeed work if the script I am calling is a bash script. However, when I run a Python script, readline() blocks until the whole script has completed.
I have written a couple of test scripts that reproduce the problem. In the test scripts I attempt to run a Python script (tst1.py) as a sub-process from within a Python script (tst.py) and then read the output of tst1.py line by line.
tst.py starts tst1.py and tries to read the output line by line:
#!/usr/bin/env python
import sys, subprocess, multiprocessing, time

cmdStr = 'python ./tst1.py'
print(cmdStr)
cmdList = cmdStr.split()
subProc = subprocess.Popen(cmdList, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
while(1):
    # this call blocks until tst1.py has completed, then reads all the output
    # it then reads empty lines (seemingly for ever)
    ln = subProc.stdout.readline()
    if ln:
        print(ln)
tst1.py simply loops printing out a message:
#!/usr/bin/env python
import sys, time  # sys is needed if the flush below is uncommented

if __name__ == "__main__":
    x = 0
    while(x < 20):
        print("%d: sleeping ..." % x)
        # flushing stdout here fixes the problem
        #sys.stdout.flush()
        time.sleep(1)
        x += 1
If tst1.py is written as a shell script tst1.sh:
#!/bin/bash
x=0
while [ $x -lt 20 ]
do
    echo $x: sleeping ...
    sleep 1
    let x++
done
readline() works as expected.
After some playing about I discovered that the situation can be resolved by flushing stdout in tst1.py, but I do not understand why this is required. I was wondering if anyone had an explanation for this behaviour?
I am running redhat 4 linux:
Linux lb-cbga-05 2.6.9-89.ELsmp #1 SMP Mon Apr 20 10:33:05 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
Because if the output is buffered somewhere, the parent process won't see it until the child process exits; at that point the output is flushed and all file descriptors are closed. As for why it works with bash without explicitly flushing the output: when you use echo in most shells, it actually forks a process that executes echo (which prints something) and exits, so the output is flushed at that point too.
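Two common ways to avoid the child-side buffering, sketched here: flush explicitly in tst1.py (print's flush= keyword exists in Python 3.3+), or launch the child with python -u to disable buffering entirely.
#!/usr/bin/env python
# tst1.py, with each line flushed as it is printed
import time

x = 0
while x < 20:
    print("%d: sleeping ..." % x, flush=True)
    time.sleep(1)
    x += 1
Alternatively, leave tst1.py unchanged and in tst.py use cmdStr = 'python -u ./tst1.py' so the child's stdout is unbuffered.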
