How to write a bash script that supplies responses to another program (Linux)

I have a bash script that does several tasks, including python manage.py syncdb on a fresh database. This command asks for input, like the login info for the admin. Currently, I just type this into the command line every time. Is there a way I can automatically provide these replies as part of the bash script?
Thanks, I don't really know anything about bash.
I'm using Ubuntu 10.10.

I answered a similar question on SF, but this one is more general, and it's good to have on SO.
"You want to use expect for this. It's probably already on your machine [try which expect]. It's the standard tool for any kind of interactive command-line automation. It's a Tcl library, so you'll get some Tcl skills along the way for free. Beware; it's addictive."
I should mention in this case that there is also pexpect, which is a Python expect-alike.
#!/path/to/expect
spawn python manage.py syncdb
expect "login:*"          ;# wait for the username prompt
send -- "myuser\r"
expect "*ssword:*"        ;# glob-matches "Password:" or "password:"
send -- "mypass\r"
interact                  ;# hand the session back to the user
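Since pexpect was mentioned: a rough Python equivalent of the expect script above might look like this (same placeholder prompts and credentials; assumes pexpect is installed):
import pexpect

child = pexpect.spawn('python manage.py syncdb')
child.expect('login:')    # wait for the username prompt
child.sendline('myuser')
child.expect('ssword:')   # matches "Password:" or "password:"
child.sendline('mypass')
child.interact()          # hand the session back to the user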

If the program in question cannot read its input from stdin, as in:
echo "some input" | your_program
then you'll need to look at something like expect and/or autoexpect.

You can give default values to the variables. On lines 4 and 5 of the script below, if the variables RSRC and LOCAL aren't set, they are assigned those default values. This way you can either pass the options to your script or use the defaults:
#!/bin/bash
RSRC=$1
LOCAL=$2
: ${RSRC:="/var/www"}
: ${LOCAL:="/disk2/backup/remote/hot"}
rsync -avz -e 'ssh' "user@myserver:$RSRC" "$LOCAL"

You can do it like this, given an example login.py script:
if __name__ == '__main__':
    import sys
    user = sys.stdin.readline().strip()
    passwd = sys.stdin.readline().strip()
    if user == 'root' and passwd == 'password':
        print 'Login successful'
        sys.exit(0)
    sys.stderr.write('error: invalid username or password\n')
    sys.exit(1)
good-credentials.txt
root
password
bad-credentials.txt
user
foo
Then you can do the login automatically using:
$ cat good-credentials.txt | python login.py
Login successful
$ cat bad-credentials.txt | python login.py
error: invalid username or password
The down-side of this approach is you're storing your password in plain text, which isn't great practice.

Related

Python3 - Sanitizing user input for shell use

I am busy writing a Python3 script which requires user input, the input is used as parameters in commands passed to the shell.
The script is only intended to be used by trusted internal users - however I'd rather have some contingencies in place to ensure the valid execution of commands.
Example 1:
import subprocess
user_input = '/tmp/file.txt'
subprocess.Popen(['cat', user_input])
This will output the contents of '/tmp/file.txt'
Example 2:
import subprocess
user_input = '/tmp/file.txt && rm -rf /'
subprocess.Popen(['cat', user_input])
Results in (as expected):
cat: /tmp/file.txt && rm -rf /: No such file or directory
Is this an acceptable method of sanitizing input? Is there anything else, per best practice, I should be doing in addition to this?
The approach you have chosen,
import subprocess
user_input = 'string'
subprocess.Popen(['command', user_input])
is quite good as command is static and user_input is passed as one single argument to command. As long as you don't do something really stupid like
subprocess.Popen(['bash', '-c', user_input])
you should be on the safe side.
For commands that require multiple arguments, I'd recommend that you request multiple inputs from the user, e.g. do this
user_input1='file1.txt'
user_input2='file2.txt'
subprocess.Popen(['cp', user_input1, user_input2])
instead of this
user_input="file1.txt file2.txt"
subprocess.Popen(['cp'] + user_input.split())
If you want to increase security further, you could (a combined sketch follows this list):
explicitly set shell=False (to ensure you never run shell commands; this is already the current default, but defaults may change over time):
subprocess.Popen(['command', user_input], shell=False)
use absolute paths for command (to prevent injection of malicious executables via PATH):
subprocess.Popen(['/usr/bin/command', user_input])
explicitly instruct commands that support it to stop parsing options, e.g.
subprocess.Popen(['rm', '--', user_input1, user_input2])
do as much as you can natively, e.g. cat /tmp/file.txt could be accomplished with a few lines of Python code instead (which would also increase portability if that should be a factor)
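Putting several of these together, here is a minimal hedged sketch (the /bin/rm path and the file names are assumptions, not part of the original answer):
import subprocess

def safe_remove(paths):
    # Absolute path to the binary, shell=False, and '--' so a file name
    # like '-rf' cannot be parsed as an option.
    subprocess.check_call(['/bin/rm', '--', *paths], shell=False)

safe_remove(['/tmp/file1.txt', '/tmp/file2.txt'])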

Can I pass a variable from Python to a bash file?

I have a bash file with a bunch of sed commands like this:
sed -i 's/hello my name is Thibault/hello my name is Louis/g' "$1"
So for now I'm doing all of this by hand. However, I have a python script with a tkinter GUI and several input fields for the user. I would like to find a trick so that if the user inputs "hello my name is Olivia" in the text field, the regex would look like this:
sed -i 's/hello my name is Thibault/hello my name is Olivia/g' "$1"
So I was thinking that I could store the python text-input result in a variable to have the regex look like this:
sed -i 's/hello my name is Thibault/$my_variable/g' "$1"
but I don't know how, or if this is even possible. Lastly, I want to mention that I know I could just ask for the user input in the bash script, but this is for my first internship and I have to go through the python GUI.
Edit: I'm on Windows 10, if that's important.
Try it like this:
import os

original_text = 'hello my name is Thibault'
new_text = 'hello my name is Louis'
filename = 'test.txt'

os.system(f'sed -i "s/{original_text}/{new_text}/g" {filename}')
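If you'd rather not hand the whole line to a shell, here is a hedged sketch using an argument list instead, so spaces or quotes in the strings cannot break the command line (sed must be on PATH, which on Windows 10 typically means Git Bash, Cygwin, or WSL):
import subprocess

original_text = 'hello my name is Thibault'
new_text = 'hello my name is Olivia'
filename = 'test.txt'

# Note: the strings are still interpolated into the sed expression itself,
# so a literal '/' in them would need escaping; this sketch assumes plain text.
subprocess.run(['sed', '-i', f's/{original_text}/{new_text}/g', filename],
               check=True)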
For passing data (in your case, a string) from your Python program to a subprocess running a bash script, you have first of all the same options as when calling one bash script from another. The first is to design the called script to expect positional parameters (used as $1, for example) and pass the string as a parameter. For instance, if the string is stored in the Python variable parameter, it would look like:
import subprocess
subprocess.call(['bash', './script_to_be_called', parameter])
The other possibility is to design the bash script so that it expects the string to be stored in a variable of a certain name (use it as $PARSTRING for instance) and pass the data via the environment:
import os
os.environ['PARSTRING']=parameter
subprocess.call(['bash', './script_to_be_called'])
If the "script" executes only a single command, you could alternatively synthesize the command line in your Python program. Assume that you have a string bashcommand, which already holds the complete command which is supposed to be executed by bash, you could do a
import subprocess
subprocess.call ['bash', '-c', bashcommand]
While this should answer your question, I can't help but pointing out, that for executing a single external command, I would not create a shell process, but invoke this program directly as a child process. Also don't forget that spawning a child process takes time, and if you have many such invocations, it might make sense to redesign your approach, for instance by doing everything inside Python, or having only one child prcocess which gets as input the data for all the substitutions to be performed (typically via a file).
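Following that "do everything inside Python" suggestion, a minimal pure-Python replacement for the sed substitution might look like this (file name and strings taken from the examples above); since it needs no external tools, it also works unchanged on Windows:
from pathlib import Path

def replace_in_file(filename, old, new):
    # Read the whole file, substitute, and write it back (fine for small files).
    path = Path(filename)
    path.write_text(path.read_text().replace(old, new))

replace_in_file('test.txt', 'hello my name is Thibault', 'hello my name is Olivia')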

Is there a way to know how the user invoked a program from bash?

Here's the problem: I have this script foo.py, and if the user invokes it without the --bar option, I'd like to display the following error message:
Please add the --bar option to your command, like so:
python foo.py --bar
Now, the tricky part is that there are several ways the user might have invoked the command:
They may have used python foo.py like in the example
They may have used /usr/bin/foo.py
They may have a shell alias frob='python foo.py', and actually ran frob
Maybe it's even a git alias flab=!/usr/bin/foo.py, and they used git flab
In every case, I'd like the message to reflect how the user invoked the command, so that the example I'm providing would make sense.
sys.argv always contains foo.py, and /proc/$$/cmdline doesn't know about aliases. It seems to me that the only possible source for this information would be bash itself, but I don't know how to ask it.
Any ideas?
UPDATE: How about if we limit the possible scenarios to only those listed above?
UPDATE 2: Plenty of people wrote very good explanations about why this is not possible in the general case, so I would like to limit my question to this:
Under the following assumptions:
The script was started interactively, from bash
The script was started in one of these 3 ways:
foo <args> where foo is a symbolic link /usr/bin/foo -> foo.py
git foo where alias.foo=!/usr/bin/foo in ~/.gitconfig
git baz where alias.baz=!/usr/bin/foo in ~/.gitconfig
Is there a way to distinguish between 1 and (2,3) from within the script? Is there a way to distinguish between 2 and 3 from within the script?
I know this is a long shot, so I'm accepting Charles Duffy's answer for now.
UPDATE 3: So far, the most promising angle was suggested by Charles Duffy in the comments below. If I can get my users to have
trap 'export LAST_BASH_COMMAND=$(history 1)' DEBUG
in their .bashrc, then I can use something like this in my code:
import os

like_so = None
cmd = os.environ.get('LAST_BASH_COMMAND')  # .get() so a missing variable yields None
if cmd is not None:
    cmd = cmd[8:]  # Remove the history counter
    if cmd.startswith("foo "):
        like_so = "foo --bar " + cmd[4:]
    elif cmd.startswith("git foo "):
        like_so = "git foo --bar " + cmd[8:]
    elif cmd.startswith("git baz "):
        like_so = "git baz --bar " + cmd[8:]
if like_so is not None:
    print("Please add the --bar option to your command, like so:")
    print("    " + like_so)
else:
    print("Please add the --bar option to your command.")
This way, I show the general message if I don't manage to get their invocation method. Of course, if I'm going to rely on changing my users' environment I might as well ensure that the various aliases export their own environment variables that I can look at, but at least this way allows me to use the same technique for any other script I might add later.
No, there is no way to see the original text (before aliases/functions/etc).
Starting a program in UNIX is done as follows at the underlying syscall level:
int execve(const char *path, char *const argv[], char *const envp[]);
Notably, there are three arguments:
The path to the executable
An argv array (the first item of which -- argv[0] or $0 -- is passed to that executable to reflect the name under which it was started)
A list of environment variables
Nowhere in here is there a string that provides the original user-entered shell command from which the new process's invocation was requested. This is particularly true since not all programs are started from a shell at all; consider the case where your program is started from another Python script with shell=False.
It's completely conventional on UNIX to assume that your program was started through whatever name is given in argv[0]; this works for symlinks.
You can even see standard UNIX tools doing this:
$ ls '*.txt' # sample command to generate an error message; note "ls:" at the front
ls: *.txt: No such file or directory
$ (exec -a foobar ls '*.txt') # again, but tell it that its name is "foobar"
foobar: *.txt: No such file or directory
$ alias somesuch=ls          # this does NOT happen with an alias
$ somesuch '*.txt'           # ...the program still sees its real name, not the alias!
ls: *.txt: No such file or directory
If you do want to generate a UNIX command line, use pipes.quote() (Python 2) or shlex.quote() (Python 3) to do it safely.
try:
    from pipes import quote  # Python 2
except ImportError:
    from shlex import quote  # Python 3

cmd = ' '.join(quote(s) for s in open('/proc/self/cmdline', 'r').read().split('\0')[:-1])
print("We were called as: {}".format(cmd))
Again, this won't "un-expand" aliases, revert to the code that was invoked to call a function that invoked your command, etc; there is no un-ringing that bell.
That can be used to look for a git instance in your parent process tree, and discover its argument list:
import re
try:
    from pipes import quote  # Python 2
except ImportError:
    from shlex import quote  # Python 3

def find_cmdline(pid):
    # NUL-separated argument list of a process
    return open('/proc/%d/cmdline' % (pid,), 'r').read().split('\0')[:-1]

def find_ppid(pid):
    # Parent PID is field 4 of /proc/<pid>/stat, after blanking out the
    # comm field, which may itself contain spaces or parentheses.
    stat_data = open('/proc/%d/stat' % (pid,), 'r').read()
    stat_data_sanitized = re.sub('[(]([^)]+)[)]', '_', stat_data)
    return int(stat_data_sanitized.split(' ')[3])

def all_parent_cmdlines(pid):
    while pid > 0:
        yield find_cmdline(pid)
        pid = find_ppid(pid)

def find_git_parent(pid):
    for cmdline in all_parent_cmdlines(pid):
        if cmdline and cmdline[0] == 'git':
            return ' '.join(quote(s) for s in cmdline)
    return None
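Hypothetical usage, from inside the script that wants to know about a git ancestor:
import os

git_cmdline = find_git_parent(os.getpid())
if git_cmdline is not None:
    print("Looks like we were invoked via: {}".format(git_cmdline))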
See the Note at the bottom regarding the originally proposed wrapper script.
A newer, more flexible approach is for the python script to provide a command line option, permitting users to specify a custom string they would prefer to see in error messages.
For example, if a user prefers to call the python script 'myPyScript.py' via an alias, they can change the alias definition from this:
alias myAlias='myPyScript.py $@'
to this:
alias myAlias='myPyScript.py --caller=myAlias $@'
If they prefer to call the python script from a shell script, it can use the additional command line option like so:
#!/bin/bash
exec myPyScript.py "$@" --caller=${0##*/}
Other possible applications of this approach:
bash -c myPyScript.py --caller="bash -c myPyScript.py"
myPyScript.py --caller=myPyScript.py
For listing expanded command lines, here's a script 'pytest.py', based on feedback by @CharlesDuffy, that lists the cmdline for the running python script, as well as for the parent process that spawned it.
If the new --caller argument is used, it will appear in the command line, although aliases will have been expanded, etc.
#!/usr/bin/env python
import os, re

with open("/proc/self/stat", "r") as myfile:
    data = [x.strip() for x in str.split(myfile.readlines()[0], ' ')]
pid = data[0]
ppid = data[3]

def commandLine(pid):
    with open("/proc/" + pid + "/cmdline", "r") as myfile:
        return [x.strip() for x in str.split(myfile.readlines()[0], '\x00')][0:-1]

pid_cmdline = commandLine(pid)
ppid_cmdline = commandLine(ppid)

print "%r" % pid_cmdline
print "%r" % ppid_cmdline
After saving this to a file named 'pytest.py', and then calling it from a bash script called 'pytest.sh' with various arguments, here's the output:
$ ./pytest.sh a b "c d" e
['python', './pytest.py']
['/bin/bash', './pytest.sh', 'a', 'b', 'c d', 'e']
NOTE: criticisms of the original wrapper script aliasTest.sh were valid. Although the existence of a pre-defined alias is part of the specification of the question, and may be presumed to exist in the user environment, the proposal defined the alias itself (creating the misleading impression that it was part of the recommendation rather than a specified part of the user's environment), and it didn't show how the wrapper would communicate with the called python script. In practice, the user would either have to source the wrapper or define the alias within it, the python script would have to delegate the printing of error messages to multiple custom calling scripts (where the calling information resided), and clients would have to call the wrapper scripts. Solving those problems led to the simpler approach above, which is expandable to any number of additional use cases.
Here's a less confusing version of the original script, for reference:
#!/bin/bash
shopt -s expand_aliases
alias myAlias='myPyScript.py'

# called like this:
set -o history
myAlias $@
_EXITCODE=$?
CALL_HISTORY=( `history` )
_CALLING_MODE=${CALL_HISTORY[1]}

case "$_EXITCODE" in
    0)  # no error message required
        ;;
    1)
        echo "customized error message #1 [$_CALLING_MODE]" 1>&2
        ;;
    2)
        echo "customized error message #2 [$_CALLING_MODE]" 1>&2
        ;;
esac
Here's the output:
$ aliasTest.sh 1 2 3
['./myPyScript.py', '1', '2', '3']
customized error message #2 [myAlias]
There is no way to distinguish between when an interpreter for a script is explicitly specified on the command line and when it is deduced by the OS from the hashbang line.
Proof:
$ cat test.sh
#!/usr/bin/env bash
ps -o command $$
$ bash ./test.sh
COMMAND
bash ./test.sh
$ ./test.sh
COMMAND
bash ./test.sh
This prevents you from detecting the difference between the first two cases in your list.
I am also confident that there is no reasonable way of identifying the other (mediated) ways of calling a command.
I can see two ways to do this:
The simplest, as suggested by 3sky, would be to parse the command line from inside the python script; argparse can do this reliably (a minimal sketch follows this list). This only works if you can change that script.
A more complex way, slightly more generic and involved, would be to change the python executable on your system.
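For the first option, a minimal argparse sketch (the message text comes from the question; the rest is an assumption, not the asker's actual code):
import argparse
import sys

parser = argparse.ArgumentParser(prog='foo.py')
parser.add_argument('--bar', action='store_true')
args = parser.parse_args()

if not args.bar:
    sys.exit('Please add the --bar option to your command, like so:\n'
             '    python foo.py --bar')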
Since the first option is well documented, here are a bit more details on the second one:
Regardless of the way your script is called, python is run. The goal here is to replace the python executable with a script that checks if foo.py is among the arguments, and if it is, checks whether --bar is as well. If not, it prints the message and returns.
In every other case, it executes the real python executable.
Now, hopefully, python is run through the following shebang: #!/usr/bin/env python3, or through python foo.py, rather than a variant of #!/usr/bin/python or /usr/bin/python foo.py. That way, you can change the $PATH variable and prepend a directory where your fake python resides.
Otherwise, you can replace the /usr/bin/python executable, at the risk of not playing nicely with updates.
A more complex way of doing this would probably be with namespaces and mounts, but the above is probably enough, especially if you have admin rights.
Example of what could work as a script:
#!/usr/bin/env bash
function checkbar
{
    for i in "$@"
    do
        if [ "$i" = "--bar" ]
        then
            echo "Well done, you added --bar!"
            return 0
        fi
    done
    return 1
}

command=$(basename "${1:-none}")
if [ "$command" = "foo.py" ]
then
    if ! checkbar "$@"
    then
        echo "Please add --bar to the command line, like so:"
        printf "%q " "$0"
        printf "%q " "$@"
        printf -- "--bar\n"
        exit 1
    fi
fi
/path/to/real/python "$@"
However, after re-reading your question, it is obvious that I misunderstood it. In my opinion, it is all right to just print either "foo.py must be called like foo.py --bar", "please add bar to your arguments", or "please try (instead of )", regardless of what the user entered:
If that's a (git) alias, this is a one-time error; the user will try their alias after creating it, so they know where to put the --bar part.
With either /usr/bin/foo.py or python foo.py:
If the user is not really command-line savvy, they can just paste the working command that is displayed, even if they don't know the difference.
If they are, they should be able to understand the message without trouble and adjust their command line.
I know it's a bash task, but I think the easiest way is to modify 'foo.py'. Of course it depends on how complicated the script is, but maybe it will fit. Here is sample code:
#!/usr/bin/python
import sys

if len(sys.argv) > 1 and sys.argv[1] == '--bar':
    print 'make magic'
else:
    print 'Please add the --bar option to your command, like so:'
    print '  python foo.py --bar'
In this case, it does not matter how the user runs this code.
$ ./a.py
Please add the --bar option to your command, like so:
python foo.py --bar
$ ./a.py -dua
Please add the --bar option to your command, like so:
python foo.py --bar
$ ./a.py --bar
make magic
$ python a.py --t
Please add the --bar option to your command, like so:
python foo.py --bar
$ /home/3sky/test/a.py
Please add the --bar option to your command, like so:
python foo.py --bar
$ alias a='python a.py'
$ a
Please add the --bar option to your command, like so:
python foo.py --bar
$ a --bar
make magic

Send automatic input to a script called by another script in bash

I'm working on a bash script (my_script) in which I call many scripts; all together they automate a work flow.
But when I call one particular (ksh/bash) script (master_script), many inputs and checks are requested in it (not arguments).
This slows down the whole automation, as every time I have to supervise it and enter the values manually.
I have no option to modify or make a new script (work constraints).
Every time the questions are the same. I am trying to take all the answers before executing master_script, except one answer (whose value depends on the execution), and then feed them to master_script at the correct time.
Is there a way to pass the values to master_script during its execution, from within my_script? ./master_script << EOF .. EOF will not help, as I have to enter one answer myself.
The below is just an example of my own creation, but it depicts exactly what my requirement is.
Example code
my_script
#! /bin/bash
echo "Proceeding...."
#calling master_script
/master_script $arg1 $arg2
echo "Completed.."
echo "Executing other scripts"
/other_scripts"
Execution
$ sh ./my_script
Proceeding....
Started master_script..
Press Enter to Proceed MY_INPUT
Enter username to add (eg.user123) MY_UNAME
Enter preferred uid (eg.1234) MY_UID
Do you want to bla bla..(Y/n) MY_INPUT
Please select among the following
1.option1
2.Option2
Selection: MY_SELECTION
Please choose which extension to use
1.ext1
2.ext2
3.ext3
4.ext4
Do you want to bla bla 2..(Y/n) MY_INPUT
Ended master script
Completed..
Executing other scripts
Requirement
#! /bin/bash
echo "Proceeding...."
# get values for master script
read -p "Proceed(Y/n):" proceed1
read -p "Uname:" uname
read -p "Uid:" uid
read -p "bla bla (Y/n):" bla1
read -p "Selection(1/2):" selection1
read -p "bla bla 2(Y/n):" bla2
#calling master_script
./master_script $arg1 $arg2 {all_inputs}
#Silent execution of master_script until the extension choice...
Please choose which extension to use
1. ext1
2. ext2
3. ext3
4. ext4
#Silent Execution of master_script after choosing ext and continue with other scripts
./other_scripts
echo "Completed.."
I've read about the expect/send combination, but I'm unable to comprehend how to use it. Any inputs will be greatly helpful.
EDIT
I am also not sure about ./master_script << EOF ... EOF, as I have to enter one answer in the middle of execution myself.
There is a solution using here documents and redirecting the input:
./master_script "$arg1" "$arg2" << ENDINPUT
$proceed1
$uname
$uid
$bla1
$selection1
ENDINPUT
Remark 1: the final ENDINPUT must start the line; don't indent! See man bash.
Remark 2: some scripts or programs check if the input comes from an actual terminal (calling isatty()), for instance when typing a password. It is still possible to automate the entries, but it is much more tricky.
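For that trickier case, and for the requirement here of answering all but one question automatically, a pexpect sketch might look like this (pexpect allocates a pseudo-terminal, so isatty() checks pass; the prompts and values are taken from the example transcript and are otherwise assumptions):
#!/usr/bin/env python3
import pexpect

# Spawn master_script and answer the scripted prompts automatically.
child = pexpect.spawn('./master_script arg1 arg2')
child.expect('Press Enter to Proceed')
child.sendline('')
child.expect('Enter username to add')
child.sendline('user123')
child.expect('Enter preferred uid')
child.sendline('1234')
child.expect('Do you want to bla bla')
child.sendline('Y')
child.expect('Selection:')
child.sendline('1')

# The extension choice depends on the run, so ask the human for this one.
child.expect('Please choose which extension to use')
choice = input('Extension (1-4)? ')
child.sendline(choice)

child.expect('Do you want to bla bla 2')
child.sendline('Y')
child.expect(pexpect.EOF)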

Python subprocess (shell=True), not working for postgres command

Using the command line, I can confirm that the following command executes correctly:
echo '\c mydatabase;\i db-reset.sql' | psql -U postgres -h localhost
However, in Python, I can confirm that the following lines do absolutely nothing and return a status code of 0.
import subprocess
code = subprocess.call(r"echo '\c mydatabase;\i db-reset.sql' | psql -U postgres -h localhost", shell=True)
assert code == 0  # This holds true
Essentially, why is the command invoked using subprocess not actually doing anything?
It works, but you need more backslashes.
Also, I would recommend you don't use shell=True here.
Here is what you are doing, but without the shell:
p = subprocess.Popen(['psql', '-U', 'postgres', '-h', 'localhost'], shell=False, stdin=subprocess.PIPE)
p.communicate(r"\c mydatabase;\i db-reset.sql")
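A side note: on Python 3, communicate() expects bytes unless the pipe is opened in text mode, so the same sketch would need universal_newlines=True (or text=True on 3.7+):
import subprocess

p = subprocess.Popen(['psql', '-U', 'postgres', '-h', 'localhost'],
                     stdin=subprocess.PIPE, universal_newlines=True)
p.communicate(r'\c mydatabase;\i db-reset.sql')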
Igor has the right approach without a doubt - though it'd be a good idea to close the session afterwards. However, there's a bigger picture issue here, which is that you should not generally be invoking psql to communicate with PostgreSQL from Python.
Use the psycopg2 module, which is widespread and available almost everywhere, to talk to PostgreSQL directly. This will immensely simplify your database communications.
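For instance, a minimal psycopg2 sketch (the connection parameters are assumed from the question):
import psycopg2

# Talk to PostgreSQL directly instead of shelling out to psql.
conn = psycopg2.connect(dbname='mydatabase', user='postgres', host='localhost')
with conn:
    with conn.cursor() as cur:
        cur.execute('SELECT version()')
        print(cur.fetchone()[0])
conn.close()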
For cases where you actually need psql, like running scripts, please use psql -f and a database argument. Your command in this case should be:
import subprocess

try:
    subprocess.check_call([
        'psql', '-q',
        '-U', 'postgres',
        '-h', 'localhost',
        '-f', 'db-reset.sql',
        'mydatabase',
    ])
except subprocess.CalledProcessError, ex:
    print("Failed to invoke psql: {0}".format(ex))
... or even better, use check_output if you're on a new enough Python version, so you capture error output too. Note the -q (quiet mode) flag, too.
(Note that subprocess will do its own escaping when you're running on a platform like Windows where there's no sensible execv variant system calls or equivalents. So you don't need to care about painful shell escaping quirks.)
