I am using the subprocess module like this:
import subprocess

ping = subprocess.Popen('fping.exe 192.168.2.3 196.65.58.69', stdout=subprocess.PIPE)
output = ping.stdout.readlines()
I need the output list in order to process it later in the program, but it seems that since stdout is directed to PIPE, the results are no longer printed to the console. I would like to get both the console output (as the command executes) and the output list.
How can I do that?
I have done a search and found an answer here, but I am unable to implement it.
I am using Python 3.x in a Windows environment.
Thanks.
There's no such thing as a pipe that goes to two places; everything written to a pipe will only be read once. (While it's theoretically possible for your program and the console to read from the same pipe, if you manage that, only some of the data will go to your program, and only the data your program doesn't read will end up on the console.) To get all the output both to your program and to the console, someone has to read the data and duplicate it. On a Unix-like system you might use the tee command for this, but you probably don't have that on your Windows machine.
So you will have to write the output to the console as you get it.
In this case, you can probably get away with using readline() in a loop instead of readlines().
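For example, a minimal sketch of that loop, reusing the command from the question (fping.exe and the addresses are the question's placeholders):

import subprocess

ping = subprocess.Popen('fping.exe 192.168.2.3 196.65.58.69', stdout=subprocess.PIPE)

output = []
while True:
    line = ping.stdout.readline()
    if not line:                 # empty bytes means fping closed its stdout
        break
    line = line.decode()         # pipe data arrives as bytes in Python 3
    print(line, end='')          # echo to the console as it arrives
    output.append(line)          # and keep it for processing later
ping.wait()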
I have found a way to do it; here it is:
import os

ipList = []
for line in os.popen("Fping x.x.x.x x.x.x.x -l"):
    ipList.append(line)
    print(line)
That way, I am able to get the results from the Fping program into the list and print them to the screen while it is executing, since the for loop over os.popen doesn't wait for the program to finish, but iterates over each line as the program produces it.
We have a tool that runs tests but does not return an error code when they fail.
The tool runs the tests after logging in through SSH to a custom console (not bash) and issuing a command. All tests run at once within that invocation.
The logging of the tests goes to a file.
The output of the tool is roughly:
test1 [ok]
test2 Some message based on the failure
...
To stop the build, we need to look for certain strings in the output.
The output appears as the tests run.
I could capture the whole output into a file and fail at the end, but it would save quite some time to fail as soon as the first test does.
Therefore, I would like something like tee, but one that also kills the execution if it finds the failure string. Or, at least, something that prints the output as it comes and returns non-zero if the string is found.
Is this doable with the standard Linux toolkit?
The only solution I can think of is:
Start your build process and redirect its output to a file.
Start another script that monitors this file: a loop that iterates every X seconds searching for your, let's say, forbidden words in the file. As soon as they appear, kill the build process (you may need a way to identify it, such as a PID file) and clear the output file.
You can even put these two processes in a single shell script and make them both start and stop together.
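If a single script is acceptable, here is a rough Python sketch of the same idea (run_tests.sh, build.log and the failure string are placeholder names, not your actual tool): it echoes the output like tee, keeps a copy in a log file, and kills the build at the first forbidden word:

import subprocess
import sys

cmd = ['./run_tests.sh']         # placeholder for your real build command
failure_marker = 'FAILED'        # placeholder for your forbidden string

proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT,
                        universal_newlines=True)
with open('build.log', 'w') as log:
    for line in proc.stdout:
        sys.stdout.write(line)   # behave like tee: echo output as it comes
        log.write(line)          # ...and keep a copy in the log file
        if failure_marker in line:
            proc.kill()          # stop the build at the first failure
            sys.exit(1)          # non-zero exit code stops the caller too
sys.exit(proc.wait())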
I am writing a pipeline in Python 3.6 which internally executes several command-line-based programs. Each internal program is executed using subprocess.run(). I want to:
redirect the stdout of each program to a separate file, and
if an internal program produces an error, redirect that error to stderr (so the user can see it), also write it to a separate file, and terminate the execution of the pipeline.
The first step I do like this:
with open("outfile", "wb", 0) as outfile:
subprocess.run(cmd, stdout=outfile, check=True)
My first question is whether there is a better way of achieving the same thing; there are many commands in the pipeline and I don't want to open a separate file for each of them.
Regarding the second point, I am not sure how to solve it: one way would be to write stderr to a file and then check whether the file is empty, but that feels like the wrong way of doing it.
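For completeness, a sketch of the variant I am considering for the error part (cmd stands for any one command of the pipeline): capture stderr in memory, and only write it out when check=True raises:

import subprocess
import sys

with open("outfile", "wb", 0) as outfile:
    try:
        subprocess.run(cmd, stdout=outfile,
                       stderr=subprocess.PIPE, check=True)
    except subprocess.CalledProcessError as err:
        sys.stderr.buffer.write(err.stderr)     # show the error to the user
        with open("errfile", "wb") as errfile:  # keep a copy on disk
            errfile.write(err.stderr)
        raise                                   # terminate the pipeline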
Any suggestions are appreciated.
I am writing a program that handles some data on a server. Throughout the program, many files are created and passed as input to other programs. To do this, I usually build the command string and then run it like so:
cmd = "prog input_file1 input_file2 > outputfile"
os.system(cmd)
When I run the command, however, the programs being called report that they cannot open the files. If I run the Python code on my local computer it is fine, but when I loaded it onto the server it started to fail. I think this is related to permission issues, but I am not sure how to fix it. Many of the files, particularly the output files, are created at run time. The input files have full permissions for all users. Any help or advice would be appreciated!
Cheers!
The Python code you list is simple and correct, so the problem is likely not in the two lines of your example. Here are some related areas for you to check.
Permissions
The user running the python script must have the appropriate permission (read, write, execute). I see from comments that you've already checked this.
What command are you running
If the command is literally typed into your source code as in the example, then you know what command is being run; but if you are generating any part of it (e.g. the list of operands, the name of the output file, other parameters), make sure there are no bugs in the portions of your code that generate the command. For example, before the call to os.system(cmd), include a line like print("About to execute: " + cmd) so you can see exactly what will be run.
Directly invoke the command
If all the above looks good, try to execute the command directly in a terminal on your server. What output do you get then? It's possible that the problem is with the underlying command itself rather than with your Python code.
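For example, a small sketch combining the suggestions above (prog and the file names are the question's placeholders): printing the command first, and using subprocess.run with check=True so a failing command raises an exception instead of failing silently:

import subprocess

cmd = "prog input_file1 input_file2 > outputfile"
print("About to execute: " + cmd)
# shell=True keeps the > redirection working, just like os.system;
# check=True raises CalledProcessError if the command exits non-zero.
subprocess.run(cmd, shell=True, check=True)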
It's common to use the command more. more is usually used with a pipe, so I think more has the ability to read from stdin. Each command in a pipeline is a separate process, and the shell creates the pipe and dup2()s its read end onto more's stdin. But I found that if I just type "more" in the console, only a usage message appears. What is the matter?
Why do you think anything is wrong? more pages output for the terminal, so what would be the point of waiting for enough typed stdin input to page?
If you type more and one or more filenames, it will page those files. The behavior is something like:
am I attached to a terminal? ("isatty")
    are there filenames in argv?
        page files
    else
        display help
else
    page pipe input
It's a feature. more detects that its standard input is connected to a terminal and displays a help message instead of proceeding. There is hardly a situation where it makes sense to run a pager on input you are typing in by hand. If you really want to, try cat | more, for example.
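The same check is easy to reproduce in a few lines of Python, as a sketch of what more is doing (the usage text here is made up):

import sys

if sys.stdin.isatty():
    # stdin is the terminal itself: print usage, as more does
    print("usage: pipe or redirect some input into this script")
else:
    # stdin is a pipe or a redirected file: consume it line by line
    for line in sys.stdin:
        print(line, end='')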
For what it's worth, I looked at the source package provided by the repositories in my Linux distribution and found this:
if (!no_intty && nfiles == 0) {
    usage(argv[0]);
    exit(1);
}
So indeed the behaviour is to display the usage message when stdin is a terminal and no file arguments are given.
Besides using top, is there a more precise way of identifying whether the last executed command has finished, if I have to check from a separate session over PuTTY?
pgrep
How about getting it to run another command immediately afterwards that sets a flag?
$ do_command ; touch I_FINISHED
Then, when the command finishes, it will create a file called I_FINISHED that you can look for.
Or do something more sophisticated that writes to a log file, if you're doing it multiple times.
I agree that it may be the faster option in the long run to have your program write to a log file or create a notification. Just put it at the end of the executed code, past the part that you suspect may cause it to hang.
ps -eo cmd
Lists all processes and displays each command line as 'typed' when the command started, so you will be able to tell your script apart from anything else written in Perl that is running.