How to write data to an existing process's STDIN from an external process? - linux

I'm looking for ways to write data to an existing process's STDIN from external processes, and found a similar question, How do you stream data into the STDIN of a program from different local/remote processes in Python?, on Stack Overflow.
In that thread, @Michael says that we can get the file descriptors of an existing process under the path below, and that on Linux we are permitted to write data into them.
/proc/$PID/fd/
So, I've created a simple script listed below to test writing data to the script's STDIN (and TTY) from external process.
#!/usr/bin/env python
import os, sys

def get_ttyname():
    for f in sys.stdin, sys.stdout, sys.stderr:
        if f.isatty():
            return os.ttyname(f.fileno())
    return None

if __name__ == "__main__":
    print("Try commands below")
    print("$ echo 'foobar' > {0}".format(get_ttyname()))
    print("$ echo 'foobar' > /proc/{0}/fd/0".format(os.getpid()))
    print("read :: [" + sys.stdin.readline() + "]")
This test script prints the paths of its STDIN and TTY and then waits for someone to write to its STDIN.
I launched the script and got the messages below.
Try commands below
$ echo 'foobar' > /dev/pts/6
$ echo 'foobar' > /proc/3308/fd/0
So I executed the commands echo 'foobar' > /dev/pts/6 and echo 'foobar' > /proc/3308/fd/0 from another terminal. After both commands, the message foobar was displayed twice on the terminal the test script is running on, but that was all. The line print("read :: [" + sys.stdin.readline() + "]") was never executed.
Are there any ways to write data from an external process to an existing process's STDIN (or other file descriptors), i.e. to trigger execution of the line print("read :: [" + sys.stdin.readline() + "]") from another process?

Your code will not work.
/proc/pid/fd/0 is a link to the /dev/pts/6 file.
$ echo 'foobar' > /dev/pts/6
$ echo 'foobar' > /proc/pid/fd/0
So both commands write to the terminal. The input goes to the terminal's display, not to the process.
It will work if stdin is initially a pipe.
For example, test.py is :
#!/usr/bin/python
import os, sys

if __name__ == "__main__":
    print("Try commands below")
    print("$ echo 'foobar' > /proc/{0}/fd/0".format(os.getpid()))
    while True:
        print("read :: [" + sys.stdin.readline() + "]")
Run this as:
$ (while [ 1 ]; do sleep 1; done) | python test.py
Now, from another terminal, write something to /proc/$PID/fd/0 and it will reach test.py.
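A quick way to see which case you are in is to check what the process's fd 0 actually points to. A minimal sketch (the PID is the hypothetical one from the question):

import os

pid = 3308  # hypothetical: the PID printed by the test script
print(os.readlink("/proc/{0}/fd/0".format(pid)))
# "/dev/pts/6"   -> stdin is the terminal; writes there only appear on screen
# "pipe:[12345]" -> stdin is a pipe; writes reach the process's read()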

I want to leave an example here that I found useful. It's a slight modification of the while-true trick above, which failed intermittently on my machine.
# pipe cat to your long running process
( cat ) | ./your_server &
server_pid=$!
# send an echo to your cat process that will close cat and in my hypothetical case the server too
echo "quit\n" > "/proc/$server_pid/fd/0"
It was helpful to me because for particular reasons I couldn't use mkfifo, which is perfect for this scenario.
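For readers who can use mkfifo, here is a minimal sketch of that approach in Python (the FIFO path is illustrative, and cat stands in for the long-running server):

import os
import subprocess
import tempfile
import time

fifo_path = os.path.join(tempfile.mkdtemp(), "server_stdin")
os.mkfifo(fifo_path)

# Opening the FIFO read-write does not block and keeps a writer attached,
# so the server does not see EOF between messages.
fifo = open(fifo_path, "r+b", buffering=0)
server = subprocess.Popen(["cat"], stdin=fifo)  # cat plays the server

# From any other process: open the FIFO and write to the server's stdin.
with open(fifo_path, "wb", buffering=0) as producer:
    producer.write(b"hello server\n")

time.sleep(0.5)  # give cat a moment to echo the injected line
server.terminate()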

Related

Add "y" to stdin, 10 seconds after running a script using python:

From Python 3, I'm trying to run a script (that I can't edit) that looks to stdin for file names, runs some code on them, and after all that expects a "y/n" answer.
The whole thing takes about 3 seconds at most.
How can I add "y" to its stdin after running the script?
This is the original script (written in perl):
if (! -t STDIN) {
    while (my $file = <STDIN>) {
        chomp $file;
        print "Getting file from STDIN : $file\n";
        push(@table_files, $file);
    }
    close(STDIN);
}
...
<code that runs>
...
print "Does it look right? (y/n) : ";
open(STDIN, "/dev/tty"); # reopen stdin because the script also takes files from stdin
my $answer = <STDIN>;
chomp($answer);
if (lc($answer) ne "y") { die "Exiting!\n" }
These were my attempts (using the terminal):
( sleep 5 ; echo y ; ) | script.pl file1; # Takes "y" as a file instead of answer
yes | script.pl file1; # Fails because it tries to read 'y' all the time as files
This is my try in Python 3:
from subprocess import Popen, PIPE

p = Popen("script.pl file1", stdout=PIPE, stderr=PIPE, stdin=PIPE, universal_newlines=True, shell=True)
(stdoutdata, stderrdata) = p.communicate(input='y\n') # Fails because of no delay
Any suggestions?
If there is a solution for both the terminal (tcsh, I believe) and Python, that would be great. Thanks!!
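One hedged suggestion (not from the original thread): because the Perl script reopens /dev/tty for the prompt, nothing written to its stdin pipe can ever answer it. Running it under a pseudo-terminal with the third-party pexpect library makes /dev/tty resolve to a tty your program controls; script.pl and file1 below are the names from the question:

import pexpect  # third-party: pip install pexpect

child = pexpect.spawn("script.pl file1")
child.expect(r"Does it look right\? \(y/n\) :")  # wait for the prompt
child.sendline("y")
child.expect(pexpect.EOF)
print(child.before.decode())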

Not getting grep result using popen with cat and multiple process pipes

I am trying to get grep to work using pipes and subprocess. I've double-checked the cat and I know it's working, but for some reason grep isn't returning anything, even though it works just fine when I run it in the terminal. I'm wondering if I have the command constructed correctly, as it doesn't give the desired output, and I can't figure out why.
I'm trying to retrieve a few specific lines of data from a file I've already retrieved from a server. I've been having a lot of issues getting grep to work, and perhaps I simply do not understand how it works.
import subprocess

p1 = subprocess.Popen(["cat", "result.txt"], stdout=subprocess.PIPE,
                      stderr=subprocess.PIPE)
p2 = subprocess.Popen(["grep", "tshaper"], stdin=p1.stdout,
stdout=subprocess.PIPE)
o = p1.communicate()
print(o)
p1.stdout.close()
out, err = p2.communicate()
print(out)
The output when I run the equivalent command (cat result.txt | grep "tshaper") in the terminal:
tshaper.1.devname=eth0
tshaper.1.input.burst=0
tshaper.1.input.rate=25000
tshaper.1.input.status=enabled
tshaper.1.output.burst=0
tshaper.1.output.rate=25000
tshaper.1.output.status=enabled
tshaper.1.status=enabled
tshaper.status=disabled
My results running the command in the script:
(b'', b'')
where the tuple is the stdout and stderr, respectively, of the p2 process.
EDIT:
Based on the Popen documentation, I changed the p1 subprocess statement to
p1 = subprocess.Popen(['result.txt', 'cat'], shell=True, stdout=subprocess.PIPE,
                      stderr=subprocess.PIPE, cwd=os.getcwd())
While I was able to get output in stderr, it didn't really change anything, saying
(b'', b'cat: 1: cat: result.txt: not found\n')
FYI: you got the error (b'', b'cat: 1: cat: result.txt: not found\n') because you changed the sequence of arguments in your Popen call to ['result.txt', 'cat']. With shell=True, only the first list element is treated as the command; the remaining elements become the shell's own positional parameters, so the shell tried to execute result.txt itself.
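A minimal sketch of the corrected call (keeping shell=True means passing a single command string):

import subprocess

p1 = subprocess.Popen("cat result.txt", shell=True,
                      stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(p1.communicate()[0].decode("utf-8"))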
I have written a working solution that produces the expected output.
Python 3.6.6 was used for it.
result.txt file:
I have changed some lines to test the grep command.
tshaper.1.devname=eth0
ashaper.1.input.burst=0
bshaper.1.input.rate=25000
tshaper.1.input.status=enabled
tshaper.1.output.burst=0
cshaper.1.output.rate=25000
tshaper.1.output.status=enabled
dshaper.1.status=enabled
tshaper.status=disabled
Code:
I added human-readable printing, but it is not necessary if you only need the output of grep (which is a bytes-like object in Python 3).
import subprocess
p1 = subprocess.Popen(['cat', 'result.txt'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p2 = subprocess.check_output(["grep", "tshaper"], stdin=p1.stdout)
print("\n".join(p2.decode("utf-8").split(" ")))
Output:
You can see that grep filters the lines from the cat command, as expected.
$ python3 test.py
tshaper.1.devname=eth0
tshaper.1.input.status=enabled
tshaper.1.output.burst=0
tshaper.1.output.status=enabled
tshaper.status=disabled
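For completeness, the original two-Popen pipeline also works once the call to p1.communicate() is removed; that call drains p1's stdout, so grep sees no data. A sketch of the usual pattern:

import subprocess

p1 = subprocess.Popen(["cat", "result.txt"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["grep", "tshaper"], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()  # drop our copy of the pipe; only grep reads from cat now
out, err = p2.communicate()
print(out.decode("utf-8"))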

Exit status of $? using python when segmentation fault occurred

I need to execute echo $? using Python 3 and capture the exit status. I need this especially for capturing the Segmentation fault (core dumped) status.
I tried:
>>> os.system('echo $?')
0
0
I got 0 0. Also, for a segfault:
>>> os.system('./a.out')
Segmentation fault (core dumped)
35584
After the above command, I again got:
>>> os.system('echo $?')
0
0
Also, why is 0 printed twice?
I went through the documentation of Python 3, which says:
os.system(command)
On Unix, the return value is the exit status of the process encoded in the format specified for wait(). Note that POSIX does not specify the meaning of the return value of the C system() function, so the return value of the Python function is system-dependent.
Does this say anything about such behavior?
Help me clarify this.
Note: I already ran ulimit -c unlimited before all the above steps. The expected result should be non-zero, or 139 to be specific.
Edit: I wonder if there is a limitation involved!
Thanks!
No, you don't need to execute echo $?; it wouldn't be useful. The exit status of the program is the return value of the function os.system. That's what the number 35584 is. The documentation of os.system tells you to read the documentation of os.wait, which explains:
a 16-bit number, whose low byte is the signal number that killed the process, and whose high byte is the exit status (if the signal number is zero); the high bit of the low byte is set if a core file was produced.
However, note that depending on the shell, with os.system('./a.out'), you may be getting the exit status of a.out or the exit status of the shell itself. Normally there's no difference, because the exit status of the shell is the exit status of the last command it executes. But if the command dies from a signal, there is a difference: the shell won't kill itself with the same signal, it will return a status that encodes the signal, which in most shells is 128 + signal_number. For example, if the program dies of signal 11 (segfault on Linux) and leaves a core dump, its own status as returned by wait encodes signal 11 plus the core-dump bit, but a shell running it exits normally with code 128 + 11. That's what you're seeing: 35584 is (128 + 11) << 8.
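If you stay with os.system, the raw status can be decoded with the os.W* helpers; a sketch, assuming the ./a.out from the question:

import os

status = os.system('./a.out')  # raw wait()-style status on Unix
if os.WIFSIGNALED(status):
    print('killed by signal', os.WTERMSIG(status),
          '(core dumped)' if os.WCOREDUMP(status) else '')
elif os.WIFEXITED(status):
    # through a shell, a segfault surfaces as a normal exit with
    # status 128 + 11 = 139, i.e. a raw status of 139 << 8 == 35584
    print('exited with status', os.WEXITSTATUS(status))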
To avoid this complication, use subprocess.call or one of its variants (you can use subprocess.run if you don't need your code to run on Python <= 3.4).
returncode = subprocess.call(['./a.out'], shell=False)
# subprocess.call returns the decoded status: >= 0 is a normal exit code,
# and a negative value -N means the process was killed by signal N.
if returncode >= 0:
    print('The program exited normally with status {}.'.format(returncode))
else:
    print('The program was killed by signal {}.'.format(-returncode))
If you run os.system('echo $?'), this starts a new shell. You're printing the initial value of $? in that shell, before it has run any command, and the initial value of $? in a shell is 0.
You see 0 twice in the interactive environment because the first one is the one printed by the echo command and the second one is the value of the Python expression. Compare os.system('echo hello').
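For example:
>>> os.system('echo hello')
hello
0
Here hello is printed by echo, and the trailing 0 is the return value of os.system, displayed by the interactive interpreter.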
Note that with os.system, you can't access the output of the command, so if you print something with echo, you can't use it in the program. You'd have to use functions in the subprocess module for that, but you need this only if you need the output of ./a.out, not to get its exit status.
When running:
>>> os.system('echo $?')
0
0
if your previous command was successful, a first 0 will be printed by echo $?, and another will be the return code of the call to echo $? that has just succeeded, so you see a second 0 printed.
The return code of the script/command that you execute is returned directly to your Python program by the os.system function, so you do not need to use echo $?.
Examples:
$ more return_code*
::::::::::::::
return_code1.py
::::::::::::::
import os
print(os.system('sleep 1'))
# will print 0 after 1 sec
::::::::::::::
return_code2.py
::::::::::::::
import os
print(os.system('ls abcdef'))
# will print an rc != 0 if the file abcdef is not present in your working directory
Executions:
$ python return_code1.py
0
and
$ python return_code2.py
ls: cannot access 'abcdef': No such file or directory
512
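Here 512 is 2 << 8: ls exits with status 2 when an argument doesn't exist, and os.system reports the raw wait status with the exit code in the high byte.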
I wrote the following code for the question above and it worked as expected. I used the subprocess.Popen() method to achieve my requirement, and sample.returncode to get the exit status of the shell.
import subprocess
import sys

def run_cmd():
    ret = 0
    sample_cmd = "./a.out"
    sample = subprocess.Popen(sample_cmd, shell=True, stdin=subprocess.PIPE,
                              stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out_stdout, out_stderr = sample.communicate()
    if sample.returncode != 0:
        print("OUTPUT: %s\nERROR: %s\n" % (out_stdout, out_stderr))
        print("Command: %s \nStatus: FAIL " % (sample_cmd))
        sys.stdout.flush()
        if sample.returncode == 139:
            print('Segmentation fault (core dumped) occurred... with status:', sample.returncode)
            ret = sample.returncode
        else:
            ret = 1
    else:
        print("OUTPUT: %s\n" % (out_stdout))
        print("Command: %s \nStatus: PASS " % (sample_cmd))
        ret = 0
    return ret

Forking Turtle inshell command not streaming stdout

I'm using the following function to fork commands in my Turtle script:
forkCommand shellCommand = do
  pid <- inshell (shellCommand <> "& echo $!") empty
  return $ PID (lineToText pid)
The reason for doing this is because I want to get the PID of the forked process that I'm running.
The issue is that the command I'm running isn't streaming any stdout. For example, you could set shellCommand to:
"python -c \"print('Hello, World')\""
and you won't see the print occur.

Bash output limited to echo only

I am writing a bash script to handle my backups. I have created a message function controller that uses functions to handle email, log, and output.
So the structure is:
message_call(i, "This is the output")
Message Function
-> Pass to email function
--> Build email file
-> Pass to log function
--> Build log file
-> Pass to echo function (custom)
--> Format and echo input dependent on $1 as a switch and $2 as the output message
When I echo I want nice clean output that consists only of messages passed to the echo function. I can point all output to /dev/null, but I am struggling to suppress everything except the echo command's output.
Current output sample:
craig#ubuntu:~/backup/functions$ sudo ./echo_function.sh i test
+ SWITCH=i
+ INPUT=test
+ echo_function
+ echo_main
+ echo_controller i test
+ '[' i == i ']'
+ echo_info test
+ echo -e '\e[32m\e[1m[INFO]\e[0m test'
[INFO] test
+ echo test
test
+ '[' i == w ']'
+ '[' i == e ']'
Above I ran the echo function alone; the output I want is on line 10 of the sample, and I don't want any of the other output.
If you have the line set -x in your script, comment it out. If not, try adding set +x at the top of your script. The lines prefixed with + in your sample are exactly bash's xtrace (set -x) output.
If you want to hide all the output from everything except what you're explicitly doing in your echo function you could do something like this:
exec 7>&1 # save a copy of current stdout
exec >/dev/null # redirect everyone else's stdout to /dev/null
ls # output goes to /dev/null
echo My Message >&7 # output goes to "old" stdout
exec 1>&7 7>&- # when done, restore stdout and close the saved descriptor
