How can I have subprocess or Popen show the command line that it has just run? (Linux)

Currently, I have a command that looks something like the following:
my_command = Popen([activate_this_python_virtualenv_file,
                    "-m", "my_command", "-l",
                    directory_where_ini_file_for_my_command_is + "/" + my_ini_file_name],
                   stderr=subprocess.STDOUT, stdout=subprocess.PIPE, shell=False,
                   universal_newlines=False, cwd=directory_where_my_module_is)
I have figured out how to access and process the output, deal with subprocess.PIPE, and make subprocess do a few other neat tricks.
However, it seems odd to me that the standard Python documentation for subprocess doesn't mention a way to just get the actual command line as subprocess.Popen puts it together from arguments to the Popen constructor.
For example, perhaps my_command.get_args() or something like that?
Is it just that assembling the command line yourself is considered easy enough?
I can put the arguments together on my own without asking subprocess for the command it ran, but if there's a better way, I'd like to know it.

It was added in Python 3.3. According to the docs:

The following attributes are also available:
Popen.args
The args argument as it was passed to Popen – a sequence of program arguments or else a single string.
New in version 3.3.
So sample code would be:
import subprocess

my_args_list = ['echo', 'hello']  # your list of arguments
p = subprocess.Popen(my_args_list)
assert p.args == my_args_list
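If what you are after is the single command-line string as a shell would show it (for logging, say), you can rebuild it from p.args yourself; a minimal sketch, assuming Python 3.8+ for shlex.join:
import shlex
import subprocess

p = subprocess.Popen(['ls', '-l', '/tmp'])
# p.args is exactly the list passed to the constructor; shlex.join
# renders it as one properly quoted shell string.
print(shlex.join(p.args))  # ls -l /tmp
p.wait()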

Related

Running a string command using exec with popen

I have a simple cmd_str containing a set of lines. Using exec, I can run those lines just fine. However, running those lines in a separate process with shell=True is failing. Is this due to missing quotes? What is happening under the hood?
import subprocess

cmd_str = """
import sys
for r in range(10):
    print('rob')
"""
exec(cmd_str)  # works ok

full_cmd = f'python3 -c "exec( "{cmd_str}" )"'
process = subprocess.Popen([full_cmd],
                           shell=True,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
(output, error) = process.communicate()
exit_code = process.wait()
output_msg = output.decode("utf-8", 'ignore')
error_msg = error.decode("utf-8", 'ignore').strip()
Your approach is slightly inaccurate. I believe the problem you're having has to do with the subprocess usage. The first thing you must realise is that exec is a way to send Python code to the interpreter and execute it directly. This is why it works inside Python programs (and it is generally not a good approach). Subprocesses, on the other hand, handle commands as if they were being called directly from the terminal or shell. This means you no longer need to include exec, because you are already interacting with the Python interpreter when you call python -c.
To get this to run in a subprocess environment, all you need to do is:
full_cmd = f'python3 -c "{cmd_str}"'
process = subprocess.Popen(full_cmd,
                           shell=True,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
Also, notice the absence of square brackets when calling subprocess.Popen. That is because the list form works slightly differently; if you want to use the square brackets, your command will have to be:
full_cmd = ['python3', '-c', cmd_str]
process = subprocess.Popen(full_cmd,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
And with these few changes, everything should work OK; a consolidated sketch follows below.
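Putting the fix together, a minimal end-to-end sketch of the list form, using subprocess.run (Python 3.5+; capture_output needs 3.7+) in place of Popen plus communicate:
import subprocess

cmd_str = """
import sys
for r in range(10):
    print('rob')
"""

# List form: no shell and no quoting headaches; the code string reaches
# python3 -c verbatim as a single argument.
result = subprocess.run(['python3', '-c', cmd_str],
                        capture_output=True, text=True)
print(result.stdout)      # ten lines of 'rob'
print(result.returncode)  # 0 on success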

How would I run cmd commands with Python and take input from them?

I would like to write some Python to run a cmd ping test and let me know if/when I get a general failure. (I know it's really specific, don't ask.)
What I really want to do is not just run a cmd command from Python, but also use its results as input, so I can display a message with a timestamp when the failure happens.
The idea I had was to take the output from the cmd and check each line for the string 'fail', but I'm not sure how to achieve this, given that reading cmd output from Python seems hard with my current knowledge.
You can use the subprocess module in Python and do something like below. Replace the cmd variable with the command you want to execute.
import subprocess

cmd = ['ls', '-lrt']
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, err = p.communicate()  # both values are bytes
print('-' * 50, 'OUTPUT', '-' * 50)
print(output.decode())
print('fail' in err.decode())  # True if stderr contains the string 'fail'
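For the ping use case specifically, here is a hedged sketch that reads the output line by line and timestamps any line containing 'fail'. example.com is a placeholder host, and the -c count flag is Unix-style; Windows ping, where the "General failure" message appears, uses -n instead:
import subprocess
from datetime import datetime

# example.com is a placeholder target; -c 4 sends four probes (Unix ping).
proc = subprocess.Popen(['ping', '-c', '4', 'example.com'],
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                        universal_newlines=True)
for line in proc.stdout:
    if 'fail' in line.lower():
        # Timestamp the failure as soon as the line appears.
        print('{} FAILURE: {}'.format(datetime.now().isoformat(), line.strip()))
proc.wait()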

Is there a way to store the output of a linux command into a variable in python network programming?

I am trying to build a system where a list of the available wifi networks is stored for a specific purpose. The problem is that executing a system command with os.system() and assigning it to a variable 'res' only stores the command's return value, which is useless to me at this point.
I know of no approach that provides the desired result.
import os
res = os.system('nmcli dev wifi')
I need the variable res to store the command's full output rather than the return value.
You can do this using the Popen class from the subprocess module:
from subprocess import Popen, PIPE

# First argument is the program name.
arguments = ['ls', '-l', '-a']
# Run the program ls as a subprocess.
process = Popen(arguments, stdout=PIPE, stderr=PIPE)
# Get the output and any errors. Be aware, they are going to be in bytes!
stdout, stderr = process.communicate()
# Print the output of the ls command.
print(stdout.decode())
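If you are on Python 3.5+, subprocess.run is an even shorter route to the same thing; a sketch for the nmcli command from the question (capture_output needs Python 3.7+):
import subprocess

# capture_output stores stdout/stderr instead of printing them;
# text=True decodes the bytes, so res.stdout is a plain string.
res = subprocess.run(['nmcli', 'dev', 'wifi'],
                     capture_output=True, text=True)
print(res.stdout)      # the wifi network listing you wanted in 'res'
print(res.returncode)  # the value os.system() was giving you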

How to execute a script from Python with options

I need to run the following command via Python:
/work/data/get_info name=Mike home
The error I am getting is No such file or directory: '/work/data/get_info name=Mike home', which isn't correct; the get_info program does exist.
It works in a Perl script; I am trying to get the same functionality in Python.
Perl script:
$ENV{work} = '/work/data';
my $myinfo = "$ENV{work}/bin/get_info";
$info = `$myinfo name=Mike home`;
The backticks run the command, and $info captures the information that get_info dumps out.
My Python script:
import os, subprocess
os.environ['work'] = '/work/data'
run_info = "{}/bin/get_info name={} {}".format(os.environ['work'],'Mike','home')
p = subprocess.call([run_product_info], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
I get the error No such file or directory: '/work/data/get_info name=Mike home'.
Python's subprocess.call is treating the entire string as the name of the program, as if you had quoted it like "/work/data/get_info name=Mike home", because you passed it as a single element of a list.
Either pass it as a plain string with shell=True (if you are sure all escaping/quoting is correct, and see the warnings in the docs) or pass each argument as a separate list element.
subprocess.call(['/work/data/bin/get_info', 'name=Mike', 'home'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
subprocess.call('/work/data/bin/get_info name=Mike home', stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
https://docs.python.org/3.7/library/subprocess.html#frequently-used-arguments
args is required for all calls and should be a string, or a sequence of program arguments. Providing a sequence of arguments is generally preferred, as it allows the module to take care of any required escaping and quoting of arguments (e.g. to permit spaces in file names). If passing a single string, either shell must be True (see below) or else the string must simply name the program to be executed without specifying any arguments.
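One caveat: the subprocess.call documentation warns against combining it with stdout=PIPE or stderr=PIPE, because call never reads the pipes and the child can block once a pipe buffer fills. If you actually need the output, subprocess.run (Python 3.5+) is the safer route; a sketch using the question's own command:
import subprocess

result = subprocess.run(['/work/data/bin/get_info', 'name=Mike', 'home'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = result.stdout, result.stderr  # bytes, as with communicate()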

Passing a file as an input to shell script through python

I have a Python program which calls a shell script through the subprocess module. I am looking for a way to pass a simple file as an input to the shell script. Does this happen through subprocess and Popen?
I have tried this code for an AWS Lambda function.
It would be nice/helpful if you could share some excerpt of your code in your question, but assuming bits of it, here is a way to achieve this.
import logging
import shlex
from subprocess import PIPE, Popen

# Standard logging stands in for the answer's bare "import logger".
logger = logging.getLogger(__name__)

def run_script(script_path, script_args):
    """
    Run a shell script.
    :param script_path: String: the path of the script that needs to be called
    :param script_args: String: the arguments needed by the shell script
    :return:
    """
    logger.info("Running bash script {script} with parameters: {params}".format(script=script_path, params=script_args))
    # Adding a whitespace in shlex.split because the path gets distorted if args are added without it
    session = Popen(shlex.split(script_path + " " + script_args), stderr=PIPE, stdout=PIPE, shell=False)
    stdout, stderr = session.communicate()
    # stdout and stderr are bytes, so decode the values to get proper Python strings.
    logger.debug(stdout.decode('utf-8'))
    if stderr:
        logger.error(stderr)
        raise Exception("Error " + stderr.decode('utf-8'))
    return True
Now a couple of things to note here:
Your bash script should be able to handle the arguments properly, whether positional ($1) or named params like --file or -f.
Just put all the params you want into the string handed to shlex.split.
Also note the comments in the code above; a usage sketch follows below.
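A hypothetical call of run_script, passing a file path through to the script (both paths are placeholders for illustration):
# process.sh and input.txt are hypothetical; the script receives
# "--file /tmp/input.txt" as its arguments.
run_script("/opt/scripts/process.sh", "--file /tmp/input.txt")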
