When using Paramiko to execute commands remotely, I can't see an updating progress bar using tqdm. I'm guessing this is because it isn't printing a new line when tqdm updates the bar.
Here's a simple code example I've been using; you'll need to supply your own SSH credentials:
import paramiko

ssh = paramiko.SSHClient()
ssh.load_system_host_keys()
ssh.connect('8.tcp.ngrok.io', username=username, password=password)
command = 'python3 -c "import tqdm; import time; [time.sleep(1) for i in tqdm.tqdm(range(5))]"'
stdin, stdout, stderr = ssh.exec_command('sudo -S ' + command)
stdin.write(password + '\n')
stdin.flush()
### new method
for l, approach in line_buffered(stdout):
    if approach == 'print':
        print(l)
    elif approach == 'overlay':
        print(l, end='\r')
ssh.close()
Is there a way I can print the tqdm bar as it updates?
Based on Martin Prikryl's suggestion, I tried to incorporate the solution from:
Paramiko with continuous stdout
and adapted the code to print regardless of a newline:
def line_buffered(f):
    line_buf = ""
    while not f.channel.exit_status_ready():
        # f.read(1).decode("utf-8")
        line_buf += f.read(1).decode("utf-8", 'ignore')
        if line_buf.endswith('\n'):
            yield line_buf, 'print'
            line_buf = ''
        # elif len(line_buf)>40:
        elif line_buf.endswith('\r'):
            yield line_buf, 'overlay'
This does successfully print the output as it is generated, and reprints on the tqdm line, but when I run this code I get the following output:
100%|| 5/5 [00:05<00:00, 1.00s/it]1.00s/it]1.00s/it]1.00s/it]1.00s/it]?, ?it/s]
Not very pretty, and getting swamped by the iteration time. It doesn't seem to be printing the actual progress bar. Any ideas?
It's probably because you are (correctly) using a non-interactive session to automate your command execution.
Most decently designed commands do not print output intended for interactive human use, like progress bars, when executed non-interactively.
If you really want to see the progress display, try setting the get_pty argument of SSHClient.exec_command to True:
stdin, stdout, stderr = ssh.exec_command('sudo -S ' + command, get_pty=True)
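For example, combined with the line_buffered generator from the question (an untested sketch; username, password, and command as defined in the question):

stdin, stdout, stderr = ssh.exec_command('sudo -S ' + command, get_pty=True)
stdin.write(password + '\n')
stdin.flush()
for l, approach in line_buffered(stdout):
    if approach == 'print':
        print(l, end='', flush=True)   # l already ends with '\n'
    elif approach == 'overlay':
        print(l, end='', flush=True)   # l ends with '\r'; the next write overwrites the line
ssh.close()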
I have a problem where, very irregularly, a command fails to run.
Currently I am using a timeout, but the problem is that when the command does work, it takes a long time to finish (several minutes).
Ideally I want to set the timeout to infinity if the command shows some signs of life, and keep it at 15 seconds otherwise.
Any suggestions?
In the end I solved it by wrapping the command in a Python script.
The usage is
python script.py [your command]
The script will decide whether the command needs a re-run:
import sys
import subprocess
import signal
import time

def handler(signum, frame):
    global atm
    if atm < 3:  # attempt up to 3 re-runs
        print('re-running')
        atm = atm + 1
        p.kill()
        signal.alarm(0)
        time.sleep(15)  # not sure this is needed, but just in case
        run()
    else:
        raise OSError("Number of re-run attempts exceeded")

def run():  # run the command which is passed as a parameter to this script
    global p
    p = subprocess.Popen(args, stdout=subprocess.PIPE)
    signal.signal(signal.SIGALRM, handler)  # will not work on Windows
    signal.alarm(5)  # give 5 seconds to produce some output, re-run otherwise
    for line in iter(p.stdout.readline, b''):
        signal.alarm(0)  # output produced, so remove the alarm
        topr = line.decode('UTF-8')
        print(topr, end='')

args = sys.argv[1:]  # command passed here
atm = 0
run()
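For example, to guard a command that should start printing within 5 seconds (a hypothetical invocation):

python script.py ping -c 5 example.com

If the command produces no output within 5 seconds, the script kills it, waits 15 seconds, and re-runs it, up to 3 times.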
Because of the changes in bytes vs. string handling in Python 3, the older solutions to similar questions are hard to use.
So I'm posting this just to provide a working example.
The generic answer is still: use the subprocess.Popen functionality with subprocess.PIPE objects, and use the sudo -S (--stdin) command line, to sanely feed the password into the running sudo command without doing pseudo-terminal handling.
The wrinkle is that the password has to be encoded as Python bytes and any captured output and error messages must be encoded back into Python strings.
So, without further ado, here's one working solution:
#!/usr/bin/env python
import sys
from getpass import getpass
from subprocess import Popen, PIPE
from shlex import split as shplit
if __name__ == '__main__':
    cmd = 'sudo -S id'
    response = getpass()
    pw = bytes(response + '\r\n', 'ascii')
    proc = Popen(shplit(cmd), stdin=PIPE, stdout=PIPE, stderr=PIPE)
    out = err = None  # In case of timeout
    out, err = proc.communicate(input=pw, timeout=None)
    out = str(out, 'ascii')
    err = str(err, 'ascii')
    if err.startswith('Password:'):
        err = err[len('Password:'):]
    if err:
        print('Errors: ', err)
        print()
    if out:
        print('Output: \n', out)
    results = proc.poll()
    errorcode = proc.returncode
    print('Return code: ', errorcode)
    sys.exit(errorcode)
Obviously most of this is in the wrapping. Most real code would be getting the password and the command for sudo to execute via some other means, and the post-processing of the output and error messages would normally be different. (Actually this example does a poor job of filtering sudo's prompt for your password out of the final output; fixing that is left as an exercise to the reader.)
Of course this uses shlex.split as a safer approach than shell=True.
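For instance, shlex.split turns the command string into an argument list without any shell getting involved:

>>> from shlex import split as shplit
>>> shplit('sudo -S id')
['sudo', '-S', 'id']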
Ok so I'm trying to run a C program from a python script. Currently I'm using a test C program:
#include <stdio.h>
#include <unistd.h>  /* for sleep() */

int main() {
    while (1) {
        printf("2000\n");
        sleep(1);
    }
    return 0;
}
To simulate the program that I will be using, which takes readings from a sensor constantly.
Then I'm trying to read the output (in this case "2000") from the C program with subprocess in python:
#!/usr/bin/python
import subprocess

process = subprocess.Popen("./main", stdout=subprocess.PIPE)
while True:
    for line in iter(process.stdout.readline, ''):
        print line,
but this is not working. From using print statements, I can see that it runs the .Popen line and then waits at for line in iter(process.stdout.readline, ''): until I press Ctrl-C.
Why is this? This is exactly what most examples that I've seen have as their code, and yet it does not read the file.
Is there a way of making it run only when there is something to be read?
It is a block buffering issue.
What follows is a version of my answer to the question Python: read streaming input from subprocess.communicate(), extended for your case.
Fix stdout buffer in C program directly
stdio-based programs as a rule are line buffered when they are running interactively in a terminal, and block buffered when their stdout is redirected to a pipe. In the latter case, you won't see new lines until the buffer overflows or is flushed.
To avoid calling fflush() after each printf() call, you could force line buffered output by calling in a C program at the very beginning:
setvbuf(stdout, (char *) NULL, _IOLBF, 0); /* make line buffered stdout */
As soon as a newline is printed the buffer is flushed in this case.
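(The same distinction exists for Python programs, incidentally; a quick way to check which case a script is in is sys.stdout.isatty():)

import sys
# True when stdout is a terminal (line buffered),
# False when stdout is redirected to a pipe (block buffered)
print(sys.stdout.isatty())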
Or fix it without modifying the source of C program
There is a stdbuf utility that allows you to change the buffering type without modifying the source code, e.g.:
from subprocess import Popen, PIPE
process = Popen(["stdbuf", "-oL", "./main"], stdout=PIPE, bufsize=1)
for line in iter(process.stdout.readline, b''):
    print line,
process.communicate()  # close process' stream, wait for it to exit
There are also other utilities available, see Turn off buffering in pipe.
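For instance, the unbuffer utility from the expect package (assuming it is installed) achieves a similar effect by running the child under a pseudo-terminal; a sketch in Python 3 syntax:

from subprocess import Popen, PIPE

process = Popen(["unbuffer", "./main"], stdout=PIPE)
for line in iter(process.stdout.readline, b''):
    print(line.decode(), end='')
process.communicate()  # wait for the child to exit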
Or use pseudo-TTY
To trick the subprocess into thinking that it is running interactively, you could use pexpect module or its analogs, for code examples that use pexpect and pty modules, see Python subprocess readlines() hangs. Here's a variation on the pty example provided there (it should work on Linux):
#!/usr/bin/env python
import os
import pty
import sys
from select import select
from subprocess import Popen, STDOUT
master_fd, slave_fd = pty.openpty()  # provide tty to enable line buffering
process = Popen("./main", stdin=slave_fd, stdout=slave_fd, stderr=STDOUT,
                bufsize=0, close_fds=True)
timeout = .1  # ugly but otherwise `select` blocks on process' exit
# code is similar to _copy() from pty.py
with os.fdopen(master_fd, 'r+b', 0) as master:
    input_fds = [master, sys.stdin]
    while True:
        fds = select(input_fds, [], [], timeout)[0]
        if master in fds:  # subprocess' output is ready
            data = os.read(master_fd, 512)  # <-- doesn't block, may return less
            if not data:  # EOF
                input_fds.remove(master)
            else:
                os.write(sys.stdout.fileno(), data)  # copy to our stdout
        if sys.stdin in fds:  # got user input
            data = os.read(sys.stdin.fileno(), 512)
            if not data:
                input_fds.remove(sys.stdin)
            else:
                master.write(data)  # copy it to subprocess' stdin
        if not fds:  # timeout in select()
            if process.poll() is not None:  # subprocess ended
                # and no output is buffered <-- timeout + dead subprocess
                assert not select([master], [], [], 0)[0]  # race is possible
                os.close(slave_fd)  # subprocess doesn't need it anymore
                break

rc = process.wait()
print("subprocess exited with status %d" % rc)
Or use pty via pexpect
pexpect wraps pty handling into higher level interface:
#!/usr/bin/env python
import pexpect
child = pexpect.spawn("/.main")
for line in child:
print line,
child.close()
The question Q: Why not just use a pipe (popen())? explains why a pseudo-TTY is useful.
Your program isn't hung; it just runs very slowly. Your program is using buffered output; the "2000\n" data is not being written to stdout immediately, but will eventually make it. In your case, it might take BUFSIZ/strlen("2000\n") seconds (probably 1638 seconds) to complete.
After this line:
printf("2000\n");
add
fflush(stdout);
See readline docs.
Your code:
process.stdout.readline
is waiting for an EOF or a newline.
I cannot tell what you are ultimately trying to do, but adding a newline to your printf, e.g., printf("2000\n");, should at least get you started.
I'm attempting to have pexpect begin running a command which basically continually outputs some information every few milliseconds until cancelled with Ctrl + C.
I've attempted getting pexpect to log to a file, though these outputs are simply ignored and are never logged.
child = pexpect.spawn(command)
child.logfile = open('mylogfile.txt', 'w')
This results in the command being logged with an empty output.
I have also attempted letting the process run for a few seconds, then sending an interrupt to see if that logs the data, but this, again, results in an almost empty log.
child = pexpect.spawn(command)
child.logfile = open('mylogfile.txt', 'w')
time.sleep(5)
child.send('\003')
child.expect('$')
This is the data in question:
[Image: data constantly printing to the terminal]
I've attempted the solution described here: Parsing pexpect output, though it hasn't worked for me and results in a timeout.
I managed to get it working by using Python's subprocess module for this; I'm not sure of a way to do it with pexpect, but this achieves what I described.
def echo(self, n_lines):
    output = []
    if self.running is False:
        # start the shell on first call
        self.current_shell = Popen(cmd, stdout=PIPE, shell=True)
        self.running = True
    i = 0
    # Read lines from stdout; break after reading the desired number of lines.
    for line in iter(self.current_shell.stdout.readline, b''):
        output.append(line.decode('utf-8').strip())
        if i == n_lines:
            break
        i += 1
    return output
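For completeness, here is a minimal self-contained wrapper around that method; the ShellReader name, the constructor, and the ping command are my assumptions, not part of the original:

from subprocess import Popen, PIPE

class ShellReader:
    def __init__(self, cmd):
        self.cmd = cmd            # command string, run via the shell
        self.running = False
        self.current_shell = None

    def echo(self, n_lines):
        output = []
        if self.running is False:
            self.current_shell = Popen(self.cmd, stdout=PIPE, shell=True)
            self.running = True
        i = 0
        for line in iter(self.current_shell.stdout.readline, b''):
            output.append(line.decode('utf-8').strip())
            if i == n_lines:
                break
            i += 1
        return output

reader = ShellReader('ping example.com')  # hypothetical continually-printing command
for line in reader.echo(5):               # grab the first few lines of output
    print(line)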
I'm using the subprocess module to run a bash command. I want to display the result in real time, including when there's no new line added but the output is still modified.
I'm using Python 3. My code is running with subprocess, but I'm open to any other module. I have some code that returns a generator for every new line added:
import subprocess
import shlex
def run(command):
    process = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE)
    while True:
        line = process.stdout.readline().rstrip()
        if not line:
            break
        yield line.decode('utf-8')
cmd = 'ls -al'
for l in run(cmd):
    print(l)
The problem comes with commands of the form rsync -P file.txt file2.txt for example, which shows a progress bar.
For example, we can start by creating a big file in bash:
base64 /dev/urandom | head -c 1000000000 > file.txt
Then try to use Python to display the output of the rsync command:
cmd = 'rsync -P file.txt file2.txt'
for l in run(cmd):
    print(l)
With this code, the progress bar is only printed at the end of the process, but I want to print the progress in real time.
From this answer, you can disable buffering when printing in Python:
You can skip buffering for a whole python process using "python -u"
(or #!/usr/bin/env python -u etc) or by setting the environment
variable PYTHONUNBUFFERED.
You could also replace sys.stdout with some other stream-like wrapper
which does a flush after every call.
Something like this (not really tested) might work...but there are
probably problems that could pop up. For instance, I don't think it
will work in IDLE, since sys.stdout is already replaced with some
funny object there which doesn't like to be flushed. (This could be
considered a bug in IDLE though.)
>>> class Unbuffered:
...     def __init__(self, stream):
...         self.stream = stream
...     def write(self, data):
...         self.stream.write(data)
...         self.stream.flush()
...     def __getattr__(self, attr):
...         return getattr(self.stream, attr)
...
>>> import sys
>>> sys.stdout = Unbuffered(sys.stdout)
>>> print 'Hello'
Hello
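Alternatively, staying closer to the generator from the question: read the pipe byte by byte and treat '\r' as a line terminator too, so that rsync's in-place progress updates are yielded as they happen. A minimal untested sketch:

import subprocess
import shlex

def run_realtime(command):
    process = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE)
    buf = b''
    while True:
        chunk = process.stdout.read(1)
        if not chunk:  # EOF: the process closed its stdout
            if buf:
                yield buf.decode('utf-8'), '\n'
            break
        buf += chunk
        if chunk in (b'\n', b'\r'):  # complete line, or an in-place progress update
            yield buf.decode('utf-8').rstrip('\r\n'), chunk.decode()
            buf = b''

for line, end in run_realtime('rsync -P file.txt file2.txt'):
    print(line, end=end, flush=True)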