Why is my stdin redirection ('<') not working with subprocess.Popen()? - python-3.x

I'm building a script in Python, and one part of it needs to send an email with a file as the message body. Instead of sending the contents of the file, the script sends me the next character entered into the terminal, e.g. if I type c as part of "cat", the c doesn't appear in the terminal; instead I get an email with "c" as the body.
This is on CentOS 7.6.1810, with Python 3.5.6.
#!/usr/src/Python-3.5.6/python
import subprocess
import sys
import os
subprocess.Popen(["mail", "-s", "Test Subject", "myemail#myemail.com", "<", "/path/to/file.txt"], stdout=open('stdout', 'w'), stderr=open('errout', 'w'))
The contents of file.txt should be sent as the body, but I just get the first letter of whatever I type next. "stdout" reads "EOT" after this, and nothing is printed to "errout". To be clear, I'm trying to invoke the command
mail -s "Test Subject" myemail@myemail.com < /path/to/file.txt
from inside of the script. This command works as expected outside of the Python script, but inside of it I run into the problem.

subprocess.Popen() executes the new process directly by default, so your code passes < and /path/to/file.txt to the mail executable as two additional literal arguments, which will not yield the expected result.
Redirections like < are handled on Unix systems by the shell, not by each individual executable. That's why you want subprocess.Popen() to run the mail command, its arguments, and the redirection < /path/to/file.txt through a shell instead.
You can do this with the shell=True parameter; note that with shell=True the command must be a single string rather than a list (on POSIX, only the first list element would be treated as the command):
subprocess.Popen('mail -s "Test Subject" myemail@myemail.com < /path/to/file.txt', stdout=open('stdout', 'w'), stderr=open('errout', 'w'), shell=True)
Note that the shell is probably not necessary, though: you can have Popen open the file itself and connect mail's stdin to that file descriptor (as anishsane pointed out in the comments), as sketched below.
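A minimal shell-free sketch, reusing the question's paths and mail arguments:
import subprocess
# Open the message body and pass it as mail's stdin; the child process
# inherits the file descriptor, so no shell redirection is needed.
with open('/path/to/file.txt', 'rb') as body:
    subprocess.Popen(["mail", "-s", "Test Subject", "myemail@myemail.com"],
                     stdin=body,
                     stdout=open('stdout', 'w'),
                     stderr=open('errout', 'w'))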
Using a shell should be avoided especially if user data is being passed to the child process, as it would need to be sanitized properly to prevent command injection attacks.
See the Python 3 docs on subprocess.Popen.

Related

Can I pass a variable from python to bash file?

I have a bash file with a bunch of sed commands like this:
sed -i 's/hello my name is Thibault/hello my name is Louis/g' "$1"
So for now I'm doing all of this by hand. However, I have a Python script with a tkinter GUI and several input fields for the user. I would like to find a trick so that if the user inputs "hello my name is Olivia" in the text field, the sed command would look like this:
sed -i 's/hello my name is Thibault/hello my name is Olivia/g' "$1"
So I was thinking that I could store the Python text-input result in a variable so the command looks like this:
sed -i 's/hello my name is Thibault/$my_variable/g' "$1"
but I don't know how, or if this is even possible. Lastly, I want to mention that I know I could just ask for the user input in the bash script, but this is for my first internship and I have to go through the Python GUI.
Edit: I'm on Windows 10, if that's important.
Try it like this:
import os
original_text = 'hello my name is Thibault'
new_text = 'hello my name is Louis'
filename = 'test.txt'
os.system(f'sed -i "s/{original_text}/{new_text}/g" {filename}')
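If the replacement text may itself contain quotes or other shell-special characters, a list-based subprocess call avoids shell quoting entirely. A sketch, assuming a sed binary is available on PATH (on Windows 10 that could be via WSL or Git Bash):
import subprocess
original_text = 'hello my name is Thibault'
new_text = 'hello my name is Olivia'  # e.g. read from the tkinter entry field
filename = 'test.txt'
# Each list element is passed to sed as-is; no shell parses the command line.
subprocess.run(['sed', '-i', f's/{original_text}/{new_text}/g', filename], check=True)
Note that a / inside new_text would still break the s/// expression; that is a sed limitation, not a quoting one.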
For passing data (in your case: some string) from your Python program to a subprocess running a bash script, you have the same options as when calling one bash script from another. Either design the called script to expect positional parameters (used as $1, for example) and pass the string as a parameter. For instance, if the string is stored in the Python variable parameter, it would look like:
import subprocess
subprocess.call(['bash', './script_to_be_called', parameter])
The other possibility is to design the bash script so that it expects the string to be stored in a variable of a certain name (use it as $PARSTRING for instance) and pass the data via the environment:
import os
os.environ['PARSTRING'] = parameter
subprocess.call(['bash', './script_to_be_called'])
If the "script" executes only a single command, you could alternatively synthesize the command line in your Python program. Assume that you have a string bashcommand, which already holds the complete command which is supposed to be executed by bash, you could do a
import subprocess
subprocess.call(['bash', '-c', bashcommand])
While this should answer your question, I can't help pointing out that for executing a single external command, I would not create a shell process, but invoke the program directly as a child process. Also, don't forget that spawning a child process takes time; if you have many such invocations, it might make sense to redesign your approach, for instance by doing everything inside Python (see the sketch below) or by having only one child process which receives as input the data for all the substitutions to be performed (typically via a file).
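For this particular task, a pure-Python sketch of the substitution removes the need for sed and bash altogether (the strings and filename are the ones from the example above):
original_text = 'hello my name is Thibault'
new_text = 'hello my name is Olivia'
filename = 'test.txt'
# Read the file, replace every occurrence, and write it back in place.
with open(filename) as f:
    content = f.read()
with open(filename, 'w') as f:
    f.write(content.replace(original_text, new_text))
This is also the portable choice on Windows, where sed is usually not installed.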

how to execute script from python with options

I need to run the following command via python.
/work/data/get_info name=Mike home
The error I am getting is No such file or directory: '/work/data/get_info name=Mike home', which isn't correct; the get_info program does exist.
It works in a Perl script; I am trying to get the same functionality in Python.
Perl script:
$ENV{work} = '/work/data';
my $myinfo = "$ENV{work}/bin/get_info";
$info = `$myinfo name=Mike home`;
get_info dumps out information, which the backticks capture in $info.
My Python script:
import os, subprocess
os.environ['work'] = '/work/data'
run_info = "{}/bin/get_info name={} {}".format(os.environ['work'],'Mike','home')
p = subprocess.call([run_product_info], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
I get an error: No such file or directory: '/work/data/get_info name=Mike home'
Python's subprocess.call is treating the entire string as the name of the program, as if you had quoted it like "/work/data/get_info name=Mike home", because you passed it as a single element of the list.
Either pass the command as a plain string for the shell (if you are sure all escaping/quoting is correct; see the warnings in the docs), or pass each argument as a separate list element:
subprocess.call(['/work/data/bin/get_info', 'name=Mike', 'home'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
subprocess.call('/work/data/bin/get_info name=Mike home', stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
https://docs.python.org/3.7/library/subprocess.html#frequently-used-arguments
args is required for all calls and should be a string, or a sequence of program arguments. Providing a sequence of arguments is generally preferred, as it allows the module to take care of any required escaping and quoting of arguments (e.g. to permit spaces in file names). If passing a single string, either shell must be True (see below) or else the string must simply name the program to be executed without specifying any arguments.
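To also capture get_info's output, the way the Perl backticks do, a sketch with subprocess.run (available since Python 3.5) could look like:
import subprocess
result = subprocess.run(['/work/data/bin/get_info', 'name=Mike', 'home'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out = result.stdout.decode()  # the equivalent of Perl's $info
err = result.stderr.decode()
Unlike subprocess.call, which returns only the exit code, run returns an object carrying the captured streams, so there is no need for communicate() here.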

Pass a indicator from Bash back to Perl over SSH via STDIN

We have a Linux server which can run a diagnostic script, diag.pl, which coordinates reporting over other servers.
diag.pl iterates over the child servers, and for each of them, SSHs in and runs a bash script, which passes information back:
my $cmd = sprintf("(ssh %s sudo /usr/lib/support/report.sh -e %s | uudecode -o \"%s-outfile.tgz\") 2>&1 |", $server, $specialparam, $servername);
The line of code in report.sh that sends the data back is:
uuencode --base64 ${REPORT}.tar.gz /dev/stdout
I would like to update report.sh to send back an additional line of information, something like:
echo "special-file-found=${SFF}" > /tmp/sff.cfg
uuencode --base64 /tmp/sff.cfg > /dev/stdout
Once the special file has been found, the Perl script will update so that it no longer sends the specialparam back to subsequent report.sh calls.
Is there a good way to send that input so that it will be easy for Perl to catch it?
What I have tried:
Setting a user.comment extended attribute on the tar.gz using setfattr, but the comment does not survive the uuencoding
Currently thinking that my best bet is to use the pseudocode above, creating a new file to encode and send along, and update the Perl script to check it with each new transmission until it finds the special file.
I take it that the objective is to modify a shell script which returns to the caller an encoded file, so that it sends yet more information, specifically a string to be used as a flag in the caller.
It is not clear how the shell script is run from the Perl script, but there are ways to do this so that the caller gets back separate "lines" that are printed, either as they are emitted or altogether after the run completes.
Then you can just add the needed extra print to STDOUT in the shell script, and in the caller check each line of shell output to see whether it conforms to some "protocol": for example, whether it is, or starts with, the special-file-found string. Then you can set flags for further calls, write a control file for following runs, etc. Otherwise, the line belongs to the encoded file.
A made-up basic example using a pipe-open:
use warnings;
use strict;
use feature 'say';
my @cmd = qw(ls -l ./);
my $file_found = quotemeta 'special-file-found';
my ($flag, $binfile);
my $pid = open(my $out, '-|', @cmd) // die "Can't open @cmd: $!";
while (<$out>) {
    chomp;
    if (/^$file_found/) {
        $flag = 1;
    }
    else {
        $binfile = $_;
        # whatever else need be done, or perhaps last;
    }
}
close $out;
This example runs the command ls -l ./, but instead of it you can run any executable, like @cmd = ('report.sh', 'arg1', 'arg2', ...).
Another way is to use backticks (qx) and assign its return to an array, in which case each element receives a line of output.
Yet another, better, way is to use a module which manages external commands. For example, from simple to more capable: IPC::System::Simple, Capture::Tiny, IPC::Run3, IPC::Run.

Accessing the value returned by a shell script in the parent script

I am trying to access a string returned by a shell script which was called from a parent shell script. Something like this:
ex.sh:
echo "Hemanth"
ex2.sh:
sh ex.sh
if [ $? == "Hemanth" ]; then
echo "Hurray!!"
else
echo "Sorry Bro!"
fi
Is there a way to do this? Any help would be appreciated.
Thank you.
Use command substitution syntax in ex2.sh:
valueFromOtherScript="$(sh ex.sh)"
printf "%s\n" "$valueFromOtherScript"
echo by default outputs a newline character after the string passed; if you don't need it in the variable above, use printf instead, as in:
printf "Hemanth"
in the first script. Also worth adding: $? holds only the exit code of the last executed command, where 0 means a successful run and non-zero means failure. It will NEVER hold a string value as you tried to use.
A Bash script does not really "return" a string. What you want to do is capture the output of a script (or external program, or function, they all act the same in this respect).
Command substitution is a common way to capture output.
captured_output="$(sh ex.sh)"
This initializes variable captured_output with the string containing all that is output by ex.sh. Well, not exactly all. Any script (or command, or function) actually has two output channels, usually called "standard out" (file descriptor number 1) and "standard error" (file descriptor number 2). When executing from a terminal, both typically end up on the screen. But they can be handled separately if needed.
For instance, if you want to capture really all output (including error messages), you would add a "redirection" after your command that tells the shell you want standard error to go to the same place as standard out.
captured_output="$(sh ex.sh 2>&1)"
If you omit that redirection, and the script outputs something on standard error, then this will still show on screen, and will not be captured.
Another way to capture output is to send it to a file, and then read that file back into a variable, like this:
sh ex.sh > output_file.log
captured_output="$(<output_file.log)"
A script (or external program, or function) does have something called a return code, which is an integer. By convention, a value of 0 means "success", and any other value indicates abnormal execution (but not necessarily failure) : the meaning of that return code is not standardized, it is ultimately specific to each script, program or function.
This return code is available in the $? special shell variable immediately after the execution terminates.
sh ex.sh
return_code=$?
echo "Return code is $return_code"

How to write a bash script to give another program response

I have a bash script that does several tasks, including python manage.py syncdb on a fresh database. This command asks for input, like the login info for the admin. Currently, I just type this into the command line every time. Is there a way I can automatically provide these replies as part of the bash script?
Thanks, I don't really know anything about bash.
I'm using Ubuntu 10.10.
I answered a similar question on SF, but this one is more general, and it's good to have on SO.
"You want to use expect for this. It's probably already on your machine [try which expect]. It's the standard tool for any kind of interactive command-line automation. It's a Tcl library, so you'll get some Tcl skills along the way for free. Beware; it's addictive."
I should mention in this case that there is also pexpect, which is a Python expect-alike.
#!/path/to/expect
spawn python manage.py syncdb
expect "login:*"
send -- "myuser\r"
expect "*ssword:*"
send -- "mypass\r"
interact
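For comparison, a rough pexpect version of the same idea; the prompt patterns are copied from the expect script above and may need adjusting to whatever syncdb actually prints:
import pexpect
child = pexpect.spawn('python manage.py syncdb')
child.expect('login:')    # wait for the assumed username prompt
child.sendline('myuser')
child.expect('ssword:')   # matches "Password:" or "password:"
child.sendline('mypass')
child.interact()          # hand the session back to the user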
If the program in question cannot be fed its input from stdin, i.e. if something like
echo "some input" | your_program
does not work, then you'll need to look at something like expect and/or autoexpect.
You can give default values to the variables. In lines 4 and 5 below, if the variables RSRC and LOCAL aren't set, they are assigned those default values. This way you can pass the options to your script or use the default ones.
#!/bin/bash
RSRC=$1
LOCAL=$2
: ${RSRC:="/var/www"}
: ${LOCAL:="/disk2/backup/remote/hot"}
rsync -avz -e 'ssh ' user@myserver:$RSRC $LOCAL
You can do it like this, given an example login.py script:
if __name__ == '__main__':
    import sys
    user = sys.stdin.readline().strip()
    passwd = sys.stdin.readline().strip()
    if user == 'root' and passwd == 'password':
        print('Login successful')
        sys.exit(0)
    sys.stderr.write('error: invalid username or password\n')
    sys.exit(1)
good-credentials.txt
root
password
bad-credentials.txt
user
foo
Then you can do the login automatically using:
$ cat good-credentials.txt | python login.py
Login successful
$ cat bad-credentials.txt | python login.py
error: invalid username or password
The downside of this approach is that you're storing your password in plain text, which isn't great practice.
