I am running an Ansible playbook from a Python program:
import subprocess

cmd = ['ansible-playbook', '-i', 'inventory', 'primary.yml']
process = subprocess.Popen(cmd, stdout=subprocess.PIPE, encoding='utf-8')
while True:
    output = process.stdout.readline()
    if output == '' and process.poll() is not None:
        break
    if output:
        print(output.strip())
rc = process.poll()
return rc
This playbook internally calls a shell script, like this:
- name: call the install script
shell: sudo ./myscript.sh
Currently I get the streaming stdout of the ansible-playbook command itself, but I see nothing while myscript.sh is executing.
I just have to wait blindly until the script finishes, with no stdout at all. How can I see the output of all the child processes triggered by the main shell command from Python?
I am currently creating a function that runs at least two bash commands on a remote system via the subprocess.run module, because I also need to capture the return code.
The example bash command is ssh username@ipaddress 'ls && exit'
This will basically print out the contents of the remote home directory and exit the terminal created.
This works when using os.system("ssh username@ipaddress 'ls && exit'"), but that does not capture the return code.
With subprocess.run:
### bashCommand is "ssh username@ipaddress 'ls && exit'"
bashCommand = ['ssh', "username@ipaddress", "'ls && exit'"]
output = subprocess.run(bashCommand)
However, when the subprocess.run version is run:
the terminal says that bash: ls && exit: command not found
the return code is 127 (from output.returncode)
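The likely culprit is the extra layer of quotes: with a list argument there is no local shell to strip them, so the remote shell receives the literal string 'ls && exit' as a single command name. The effect can be reproduced locally, using bash -c as a stand-in for the remote shell:

```python
import subprocess

# The embedded quotes make bash look for a command literally named
# "ls && exit" -> "command not found", exit status 127.
bad = subprocess.run(['bash', '-c', "'ls && exit'"],
                     capture_output=True, text=True)

# Without them, bash parses the string normally, so ls runs and the
# line exits with status 0.
good = subprocess.run(['bash', '-c', 'ls && exit'],
                      capture_output=True, text=True)

print(bad.returncode, good.returncode)   # 127 0
```

Applied to the original command, that suggests dropping the inner quotes: bashCommand = ['ssh', 'username@ipaddress', 'ls && exit'].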
I found that ps and pgrep cannot find a running script that has no "#!/bin/bash" line.
Here is a sample.sh:
while true
do
    echo $(date)
done
Start the script (Ubuntu 18.04, Linux version 4.15.0-101-generic):
$ echo $BASH
/bin/bash
$ ./sample.sh
Open another terminal; ps finds only the grep command itself:
$ ps -aux | grep sample.sh
16887 0.0 0.0 16184 1008 pts/4 S+ 07:12 0:00 grep --color=auto sample
pgrep finds nothing:
$ pgrep sample
$
But if I add "#!/bin/bash" to sample.sh, everything works now:
#!/bin/bash <-----add this line
while true
do
    echo $(date)
done
I am wondering why.
Let's start with the second of your cases, the one where you do have #!/bin/bash, because it is actually the easier one to deal with.
With the #!/bin/bash
When you execute a script which starts with #!/path/to/interpreter, the Linux kernel will understand this syntax and will invoke the specified interpreter for you in the same way as if you had explicitly added /path/to/interpreter to the start of the command line. So in the case of your script starting with #!/bin/bash, if you look using ps ux, you will see the command line /bin/bash ./sample.sh.
Without the #!/bin/bash
Now turning to the other one where the #!/bin/bash is missing. This case is more subtle.
A file which is neither a compiled executable nor a file starting with a #! line cannot be executed by the Linux kernel at all. Here is an example of trying to run sample.sh without the #!/bin/bash line from a Python script:
>>> import subprocess
>>> p = subprocess.Popen("./sample.sh")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/subprocess.py", line 394, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1047, in _execute_child
raise child_exception
OSError: [Errno 8] Exec format error
And to show that this is not just a python issue, here is a demonstration of exactly the same thing, but from a C program. Here is the C code:
#include <stdio.h>
#include <unistd.h>

int main(void) {
    execl("./sample.sh", "sample.sh", (char *)NULL);
    /* exec failed if we reach this point */
    perror("exec failed");
    return 1;
}
and here is the output:
exec failed: Exec format error
So what is happening when you run the script is that, because you are invoking it from a bash shell, bash gives you some fault tolerance: after the attempt to "exec" the script has failed, it runs the commands itself.
What is happening in more detail is that:
bash forks a subshell;
inside the subshell, it immediately asks the Linux kernel to "exec" your executable; if that succeeded, it would end that (subshell) process and replace it with a process running the executable;
however, the exec is not successful, which means the subshell is still running;
at that point the subshell simply reads the commands in your script and starts executing them directly.
The overall effect is very similar to the #!/bin/bash case, but because the subshell was just started by forking your original bash process, it has the same command line, i.e. just bash, without any command line argument. If you look for this subshell in the output of ps uxf (a tree-like view of your processes), you will see it just as
bash
\_ bash
whereas in the #!/bin/bash case you get:
bash
\_ /bin/bash ./sample.sh
I am using the following code to run a Linux command with Python 3.5:
process = subprocess.run(['sudo shutdown -h'], check=True,stdout=subprocess.PIPE, shell=True)
output = process.stdout.decode('utf-8')
print("Response")
print(output)
It returns an empty string as the response:
Shutdown scheduled for Tue 2019-09-10 22:32:34 CEST, use 'shutdown -c' to cancel.
Response
But when I replace the command with something like
ls -l
or
sudo su
it works: it returns a string containing, e.g., the list of files in the directory, as it should.
Edit
Apparently:
Commands may send whatever they want to stdout or stderr, and this is completely unrelated to the status the command returns. While stderr is meant for diagnostic messages, it is up to the program to decide where it sends its messages.
So one solution is to redirect stderr to stdout by adding stderr=subprocess.STDOUT:
subprocess.run(['sudo shutdown -h'], check=True,stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
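The effect is easy to demonstrate with any command that writes to stderr (using echo here as a stand-in for shutdown):

```python
import subprocess

# Without the redirect, stdout is empty -- the message went to stderr:
p1 = subprocess.run('echo scheduled 1>&2', shell=True, check=True,
                    stdout=subprocess.PIPE)
print(repr(p1.stdout.decode()))          # ''

# With stderr merged into stdout, the message is captured:
p2 = subprocess.run('echo scheduled 1>&2', shell=True, check=True,
                    stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
print(repr(p2.stdout.decode()))          # 'scheduled\n'
```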
I ran this on my local machine:
ssh -t anon#192.168.50.81 -p 10086 'echo $SHELL && pstree'
I got /bin/zsh and a normal pstree output with no shell process in it.
Why? And is the first output fake?
Some shells, like zsh, do not fork a child process to execute the last command in a command line or script. Since the exit status of the line or script is the same as the exit status of the last command, they call exec() in the shell process without forking. So if you execute
sleep 5 && pstree
it will fork a child for sleep, wait for it to finish, then call exec() to run pstree.
Since the pstree process replaces the shell, you don't see the shell in the process tree. pstree will be the child of sshd.
If you change it to
pstree && sleep 5
then you should see the shell in the pstree output, because pstree is no longer the last command.
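If zsh isn't handy, bash shows a similar optimisation with -c: a single trailing command is exec'd in place of the shell. You can observe it by asking ps for the name of the process that holds the shell's pid (assuming a Linux system with ps available):

```python
import subprocess

def comm(script):
    """Return the command name of the process whose pid was the shell's ($$)."""
    return subprocess.run(['bash', '-c', script],
                          capture_output=True, text=True).stdout.strip()

# ps is the last (only) command, so bash execs it in place:
# the process with pid $$ is now ps itself.
first = comm('ps -p $$ -o comm=')

# With another command after it, bash must fork ps as a child,
# so pid $$ still belongs to bash.
second = comm('ps -p $$ -o comm=; :')

print(first, second)
```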
When I run the script command, it loses all the aliases from the existing shell, which is undesirable for people using lots of aliases. So I am trying to see whether I can source the .profile again automatically, without the user having to do it.
Here is the code:
#!/bin/bash -l
rm aliaspipe
mkfifo aliaspipe
bash -c "sleep 1;echo 'source ~/.bash_profile' > aliaspipe ;echo 'alias' > aliaspipe ;echo 'exec 0<&-' > aliaspipe"&
echo "starting script for recording"
script < aliaspipe
Basically I am creating a named pipe and making it the stdin of the script program: I run the source command through it and then close the pipe end of stdin, hoping to hand stdin back to the terminal so that I can continue with the script session interactively.
But when I run this, script exits right after "exec 0<&-" is executed:
bash-3.2$ exec 0<&-
bash-3.2$ exit
Script done, output file is typescript
I am not sure why exit is called and script terminates. If I could make script switch its stdin from the pipe back to the terminal, it would be fine.
You can get script to execute a bash login shell by telling it to do so explicitly.
# Gnu script (most Linux distros)
script -c "bash -l"
# BSD script (also Mac OS X)
script typescript bash -l
That will cause your .bash_profile to be sourced.
By the way, redirections are not stacks. When you write exec 0<&-, you're closing standard input, and when bash's standard input is closed, it exits.
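You can see that last behaviour in isolation: a non-interactive bash whose stdin is immediately at EOF simply exits cleanly, which is exactly what script observes when the pipe closes.

```python
import subprocess

# Give bash an empty stdin; reading it hits EOF at once and the shell
# exits with status 0.
p = subprocess.run(['bash'], input='', capture_output=True, text=True)
print(p.returncode)   # 0
```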