Indicate no more input without closing pty - linux

When controlling a process using a PTY master/slave pair, I would like to indicate to the process in question that stdin has closed and I have no more content to send, but I would still like to receive output from the process.
The catch is that I only have one file descriptor (the PTY "master") which handles both input from the child process and output to the child process. So closing the descriptor would close both.
Example in python:
import subprocess, pty, os
master, slave = pty.openpty()
proc = subprocess.Popen(["/bin/cat"], stdin=slave, stdout=slave)
os.close(slave) # now belongs to child process
os.write(master, "foo")
magic_close_fn(master) # <--- THIS is what I want
while True:
    out = os.read(master, 4096)
    if out:
        print out
    else:
        break
proc.wait()

You need to get separate read and write file descriptors. The simple way to do that is with a pipe and a PTY. So now your code would look like this:
import subprocess, pty, os
master, slave = pty.openpty()
child_stdin, parent_stdin = os.pipe()
proc = subprocess.Popen(["/bin/cat"], stdin=child_stdin, stdout=slave)
os.close(child_stdin) # now belongs to the child process
os.close(slave)
os.write(parent_stdin, "foo") # write to the write end (our end) of the child's stdin
# Here's the "magic" close function
os.close(parent_stdin)
while True:
    out = os.read(master, 4096)
    if out:
        print out
    else:
        break
proc.wait()

I had to do this today, ended up here, and was sad to see no answer. I achieved this using a pair of PTYs rather than a single PTY.
import io, os, subprocess

cmd = ["/bin/cat"]  # for example, as in the question

stdin_master, stdin_slave = os.openpty()
stdout_master, stdout_slave = os.openpty()

def child_setup():
    os.close(stdin_master)   # only the parent needs this
    os.close(stdout_master)  # only the parent needs this

with subprocess.Popen(cmd,
                      start_new_session=True,
                      stderr=subprocess.PIPE,
                      stdin=stdin_slave,
                      stdout=stdout_slave,
                      preexec_fn=child_setup) as proc:
    os.close(stdin_slave)   # only the child needs this
    os.close(stdout_slave)  # only the child needs this
    stdin_pty = io.FileIO(stdin_master, "w")
    stdout_pty = io.FileIO(stdout_master, "r")
    stdin_pty.write(b"here is your input\r")
    stdin_pty.close()  # no more input (EOF)
    output = b""
    while True:
        try:
            output += stdout_pty.read(1)
        except OSError:
            # EOF: on Linux, reading the master raises EIO once the slave side is closed
            break
    stdout_pty.close()

I think that what you want is to send the CTRL-D (EOT, End Of Transmission) character, isn't it? This will close the input in some applications, but others will quit.
perl -e 'print qq,\cD,'
or purely shell:
echo -e '\x04' | nc localhost 8080
Both are just examples. BTW, the CTRL-D character is \x04 in hex.
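Applied to the question's example, a minimal sketch of this idea (assuming the PTY is left in its default canonical mode, in which the line discipline turns the EOF character into an end-of-file on the slave side; the child still must exit before the master itself sees EOF):

import os, pty, subprocess, termios

master, slave = pty.openpty()
proc = subprocess.Popen(["/bin/cat"], stdin=slave, stdout=slave)
os.close(slave)  # now belongs to the child process

os.write(master, b"foo\n")  # canonical mode delivers input line by line
eof_char = termios.tcgetattr(master)[6][termios.VEOF]  # the tty's EOF char, usually b'\x04'
os.write(master, eof_char)  # EOF char on an empty line: read() on the slave returns 0

while True:
    try:
        out = os.read(master, 4096)
    except OSError:  # on Linux, reading the master raises EIO once the child is gone
        break
    if not out:
        break
    print(out)
proc.wait()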

Related

Python 3 piping Ghostscript output to internal variable

I'm losing my head trying to make this piece of code work. I want to pipe Ghostscript's '-sDEVICE=ink_cov' data output to an internal variable instead of using an external file that I must read afterwards (like I'm doing right now), but I can't get it to work. Here are some of my attempts:
__args = ['gswin64', f'-sOutputFile={salida_temp}', '-dBATCH', '-dNOPAUSE',
          '-dSIMPLE', '-sDEVICE=ink_cov', '-dShowAnnots=false', '-dTextFormat=3', fichero_pdf]
# __args = ['gswin64', '-dBATCH', '-dNOPAUSE', '-dSIMPLE', '-sDEVICE=ink_cov',
#           '-dShowAnnots=false', '-dTextFormat=3', fichero_pdf]
# __args = ['gswin64', '-sOutputFile=%%pipe%%', '-q', '-dQUIET', '-dBATCH', '-dNOPAUSE',
#           '-dSIMPLE', '-sDEVICE=ink_cov', '-dTextFormat=3', fichero_pdf]

ghost_output = subprocess.run(__args, capture_output=True, text=True)
# ghost_output = subprocess.check_output(__args)
# ghost_output = subprocess.run(__args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# ghost_output = subprocess.Popen(__args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
# with subprocess.Popen(__args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) as process:
#     for line in process.stdout:
#         print(line.decode('utf8'))

print('GS stdout:', ghost_output.stdout)
I have tried a bunch of parameters: subprocess.run, check_output, Popen, a context manager... but I can't get anything into ghost_output.stdout, only an empty string (or sometimes b'' if I use decode()).
(BTW, if I use the '-dQUIET' option, Ghostscript doesn't show any data but still opens an output window. I haven't found a way to avoid opening any window, either.)
Does anybody know how to do it properly?
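One avenue that may be worth trying (a sketch, assuming the ink_cov device writes its per-page coverage report to the file named by -sOutputFile, and that '-' directs that file to stdout, as for other Ghostscript devices; gswin64c is the console build of Ghostscript on Windows, which should also avoid the extra window):

import subprocess

fichero_pdf = 'input.pdf'  # placeholder for the real path

args = ['gswin64c', '-sOutputFile=-', '-q', '-dBATCH', '-dNOPAUSE',
        '-sDEVICE=ink_cov', fichero_pdf]
ghost_output = subprocess.run(args, capture_output=True, text=True)
print('GS stdout:', ghost_output.stdout)  # the coverage report, if the assumptions hold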

Streaming read from subprocess

I need to read output from a child process as it's produced -- perhaps not on every write, but well before the process completes. I've tried solutions from the Python3 docs and SO questions here and here, but I still get nothing until the child terminates.
The application is for monitoring training of a deep learning model. I need to grab the test output (about 250 bytes for each iteration, at roughly 1-minute intervals) and watch for statistical failures.
I cannot change the training engine; for instance, I cannot insert stdout.flush() in the child process code.
I can reasonably wait for a dozen lines of output to accumulate; I was hopeful of a buffer-fill solving my problem.
Code: variations are commented out.
Parent
cmd = ["/usr/bin/python3", "zzz.py"]
# test_proc = subprocess.Popen(
test_proc = subprocess.run(
cmd,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT
)
out_data = ""
print(time.time(), "START")
while not "QUIT" in str(out_data):
out_data = test_proc.stdout
# out_data, err_data = test_proc.communicate()
print(time.time(), "MAIN received", out_data)
Child (zzz.py)
from time import sleep
import sys

for _ in range(5):
    print(_, "sleeping", "."*1000)
    # sys.stdout.flush()
    sleep(1)
print("QUIT this exercise")
Despite sending lines of 1000+ bytes, filling the buffer (tested elsewhere as 2 KB; here, I've gone as high as 50 KB) doesn't cause the parent to "see" the new text.
What am I missing to get this to work?
Update with regard to links, comments, and iBug's posted answer:
Popen instead of run fixed the blocking issue. Somehow I missed this in the documentation and my experiments with both.
universal_newlines=True neatly changed the bytes return to string: easier to handle on the receiving end, although with interleaved empty lines (easy to detect and discard).
Setting bufsize to something tiny (e.g. 1) didn't affect anything; the parent still has to wait for the child to fill the stdout buffer, 8k in my case.
export PYTHONUNBUFFERED=1 before execution did fix the buffering problem. Thanks to wim for the link.
Unless someone comes up with a canonical, nifty solution that makes these obsolete, I'll accept iBug's answer tomorrow.
subprocess.run always spawns the child process, and blocks the thread until it exits.
The only option for you is to use p = subprocess.Popen(...) and read lines with s = p.stdout.readline() or p.stdout.__iter__() (see below).
This code works for me, if the child process flushes stdout after printing a line (see below for extended note).
cmd = ["/usr/bin/python3", "zzz.py"]
test_proc = subprocess.Popen(
cmd,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT
)
out_data = ""
print(time.time(), "START")
while not "QUIT" in str(out_data):
out_data = test_proc.stdout.readline()
print(time.time(), "MAIN received", out_data)
test_proc.communicate() # shut it down
See my terminal log (dots removed from zzz.py):
ibug@ubuntu:~/t $ python3 p.py
1546450821.9174328 START
1546450821.9793346 MAIN received b'0 sleeping \n'
1546450822.987753 MAIN received b'1 sleeping \n'
1546450823.993136 MAIN received b'2 sleeping \n'
1546450824.997726 MAIN received b'3 sleeping \n'
1546450825.9975247 MAIN received b'4 sleeping \n'
1546450827.0094354 MAIN received b'QUIT this exercise\n'
You can also do it with a for loop:
for out_data in test_proc.stdout:
    if "QUIT" in str(out_data):
        break
    print(time.time(), "MAIN received", out_data)
If you cannot modify the child process, unbuffer (from package expect - install with APT or YUM) may help. This is my working parent code without changing the child code.
test_proc = subprocess.Popen(
    ["unbuffer"] + cmd,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT
)
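Since the child here is itself a Python script, the PYTHONUNBUFFERED fix mentioned in the update can also be applied per-child instead of exporting it globally; a sketch:

import os, subprocess

cmd = ["/usr/bin/python3", "zzz.py"]
env = dict(os.environ, PYTHONUNBUFFERED="1")  # disables buffering in Python children only

test_proc = subprocess.Popen(
    cmd,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    env=env,
)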

How to run a tshark command in the background and exit it using subprocess in python

I would like to do a packet capture using tshark, the command-line flavor of Wireshark, while connected to a remote host device over telnet. I would like to invoke the function I wrote for the capture:
def wire_cap(ip1,ip2,op_fold,file_name,duration): # invoke tshark to capture traffic during session
    if duration == 0:
        cmd='"tshark" -i 1 -P -w '+ op_fold+file_name+'.pcap src ' + str(ip1) + ' or src '+ str(ip2)
    else:
        cmd='"tshark" -i 1 -a duration:'+str(duration)+' -P -w '+ op_fold+file_name+'.pcap src ' + str(ip1) + ' or src '+ str(ip2)
    p = subprocess.Popen(cmd, shell=True, stderr=subprocess.PIPE)
    while True:
        out = p.stderr.read(1)
        if out == '' and p.poll() != None:
            break
        if out != '':
            sys.stdout.write(out)
            sys.stdout.flush()
For debugging purposes, I would like to run this function in the background, calling it as and when required and stopping it once I've got the capture. Something like:
Start a thread or a background process called wire_capture
//Do something here
Stop the thread or the background process wire_capture
By reading a bit, I realized that thread.start_new_thread() and threading.Thread() seem to be suitable only when I know the duration of the capture (an exit condition). I tried using thread.exit(), but it acted like sys.exit() and stopped the execution of the program completely. I also tried threading.Event() as follows:
if cap_flg:
    print "Starting a packet capture thread...."
    th_capture = threading.Thread(target=wire_cap, name='Thread_Packet_Capture', args=(IP1, IP2, output, 'wire_capture', 0, ))
    th_capture.setDaemon(True)
    th_capture.start()
.
.
.
.
.
if cap_flg:
    thread_kill = threading.Event()
    print "Exiting the packet capture thread...."
    thread_kill.set()
    th_capture.join()
I would like to know how I can make the process stop when I want it to (like an exit condition that can be added so that I can exit the thread). The above code I tried doesn't seem to work.
The threading.Event() approach is on the right track, but you need the event to be visible in both threads, so you need to create it before you start the second thread and pass it in:
if cap_flg:
    print "Starting a packet capture thread...."
    thread_kill = threading.Event()
    th_capture = threading.Thread(target=wire_cap, name='Thread_Packet_Capture', args=(IP1, IP2, output, 'wire_capture', 0, thread_kill))
    th_capture.setDaemon(True)
    th_capture.start()
In that while loop, have the watching thread check the event on every iteration, and stop the loop (and probably also kill the tshark it started) if it is set. You also need to make sure the thread doesn't sit waiting forever for output from the process, ignoring the termination event; only read from the pipe if data is available:
import select, subprocess, sys

def wire_cap(ip1, ip2, op_fold, file_name, duration, event): # invoke tshark to capture traffic during session
    if duration == 0:
        cmd='"tshark" -i 1 -P -w '+ op_fold+file_name+'.pcap src ' + str(ip1) + ' or src '+ str(ip2)
    else:
        cmd='"tshark" -i 1 -a duration:'+str(duration)+' -P -w '+ op_fold+file_name+'.pcap src ' + str(ip1) + ' or src '+ str(ip2)
    p = subprocess.Popen(cmd, shell=True, stderr=subprocess.PIPE)
    while not event.is_set():
        # Make sure not to block forever waiting for the process to say
        # things, so we can see if the event gets set. Only read if data
        # is available: select() returns (rlist, wlist, xlist), so check
        # whether rlist is non-empty.
        if select.select([p.stderr], [], [], 0.1)[0]:
            out = p.stderr.read(1)
            if out == '' and p.poll() != None:
                break
            if out != '':
                sys.stdout.write(out)
                sys.stdout.flush()
    p.kill()
And then to actually tell the thread to stop you just set the event:
if cap_flg:
    print "Exiting the packet capture thread...."
    thread_kill.set()
    th_capture.join()
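One caveat with the code above: because the command is run with shell=True, p.pid (and therefore p.kill()) may refer to the intermediate shell rather than to tshark itself, which can leave the capture running. A sketch that avoids the shell by passing an argument list (hypothetical start_capture wrapper, same options as above):

import subprocess

def start_capture(ip1, ip2, op_fold, file_name):
    # Without a shell, p.pid is tshark's own pid, so p.kill() signals tshark directly
    cmd = ['tshark', '-i', '1', '-P', '-w', op_fold + file_name + '.pcap',
           'src', str(ip1), 'or', 'src', str(ip2)]
    return subprocess.Popen(cmd, stderr=subprocess.PIPE)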

Find execution time for subprocess.Popen python

Here's the Python code I use to run an arbitrary command, returning its stdout data or raising an exception on non-zero exit codes:
proc = subprocess.Popen(
    cmd,
    stderr=subprocess.STDOUT,  # Merge stdout and stderr
    stdout=subprocess.PIPE,
    shell=True)
The subprocess module does not support measuring execution time, nor a timeout when it exceeds a specific threshold (the ability to kill a process that has been running for more than X seconds).
What is the simplest way to implement get_execution_time and a timeout in a Python 2.6 program meant to run on Linux?
Good question. Here is the complete code for this:
import time, subprocess # Importing modules.
timeoutInSeconds = 1 # Our timeout value.
cmd = "sleep 5" # Your desired command.
proc = subprocess.Popen(cmd,shell=True) # Starting main process.
timeStarted = time.time() # Save start time.
cmdTimer = "sleep "+str(timeoutInSeconds) # Waiting for timeout...
cmdKill = "kill "+str(proc.pid)+" 2>/dev/null" # And killing process.
cmdTimeout = cmdTimer+" && "+cmdKill # Combine commands above.
procTimeout = subprocess.Popen(cmdTimeout,shell=True) # Start timeout process.
proc.communicate() # Process is finished.
timeDelta = time.time() - timeStarted # Get execution time.
print("Finished process in "+str(timeDelta)+" seconds.") # Output result.

Detect when reader closes named pipe (FIFO)

Is there any way for a writer to know that a reader has closed its end of a named pipe (or exited), without writing to it?
I need to know this because the initial data I write to the pipe is different; the reader is expecting an initial header before the rest of the data comes.
Currently, I detect this when my write() fails with EPIPE. I then set a flag that says "next time, send the header". However, it is possible for the reader to close and re-open the pipe before I've written anything. In this case, I never realize what he's done, and don't send the header he is expecting.
Is there any sort of async event type thing that might help here? I'm not seeing any signals being sent.
Note that I haven't included any language tags, because this question should be considered language-agnostic. My code is Python, but the answers should apply to C, or any other language with system call-level bindings.
If you are using an event loop that is based on the poll system call, you can register the pipe with an event mask that contains POLLERR. In Python, with select.poll,
import select
fd = open("pipe", "w")
poller = select.poll()
poller.register(fd, select.POLLERR)
poller.poll()
will wait until the pipe is closed.
To test this, run mkfifo pipe, start the script, and in another terminal run, for example, cat pipe. As soon as you quit the cat process, the script will terminate.
Oddly enough, it appears that when the last reader closes the pipe, select indicates that the pipe is readable:
writer.py
#!/usr/bin/env python
import os
import select
import time

NAME = 'fifo2'
os.mkfifo(NAME)

def select_test(fd, r=True, w=True, x=True):
    rset = [fd] if r else []
    wset = [fd] if w else []
    xset = [fd] if x else []
    t0 = time.time()
    r, w, x = select.select(rset, wset, xset)
    print 'After {0} sec:'.format(time.time() - t0)
    if fd in r: print ' {0} is readable'.format(fd)
    if fd in w: print ' {0} is writable'.format(fd)
    if fd in x: print ' {0} is exceptional'.format(fd)

try:
    fd = os.open(NAME, os.O_WRONLY)
    print '{0} opened for writing'.format(NAME)
    print 'select 1'
    select_test(fd)
    os.write(fd, 'test')
    print 'wrote data'
    print 'select 2'
    select_test(fd)
    print 'select 3 (no write)'
    select_test(fd, w=False)
finally:
    os.unlink(NAME)
Demo:
Terminal 1:
$ ./pipe_example_simple.py
fifo2 opened for writing
select 1
After 1.59740447998e-05 sec:
3 is writable
wrote data
select 2
After 2.86102294922e-06 sec:
3 is writable
select 3 (no write)
After 2.15910816193 sec:
3 is readable
Terminal 2:
$ cat fifo2
test
# (wait a sec, then Ctrl+C)
There is no such mechanism. Generally, in keeping with the UNIX way, there are no signals for a stream being opened or closed on either end; this can only be detected by reading from or writing to it.
I would say this is a design problem. Currently you are trying to have the receiver signal its availability to receive by opening the pipe. So either implement this signaling in a more appropriate way, or incorporate the "closing logic" into the sending side of the pipe, as sketched below.
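A sketch of that second option: put the "closing logic" on the sending side by opening the FIFO once per reader session, relying on the fact that open() for writing on a FIFO blocks until a reader has it open (this assumes reader sessions don't overlap; send_stream, header, and chunks are hypothetical names):

import os, errno

def send_stream(path, header, chunks):
    while True:
        fd = os.open(path, os.O_WRONLY)  # blocks until a reader opens the FIFO
        try:
            os.write(fd, header)         # every fresh reader gets the header first
            for chunk in chunks:
                os.write(fd, chunk)
            return                       # all data delivered
        except OSError as e:
            if e.errno != errno.EPIPE:
                raise
            # the reader went away mid-stream; loop and wait for the next one
        finally:
            os.close(fd)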
