How to run a shell/python script from within a shell script? - linux

I'm testing my shell script and python script together, so I wrote a shell script that calls these two scripts in a loop, like this:
while ((1)) ; do
/usr/local/bin/ovs-appctl dpdk/vhost-list
sleep 10
/usr/local/bin/ovs-appctl dpdk/virtio-net-show n-f879ac2f
sleep 10
/usr/local/bin/ovs-appctl dpdk/virtio-net-show n-434ab558
sleep 10
/usr/local/bin/ovs-appctl dpdk/virtio-net-show i-brpri-p
sleep 10
`sh ./env_check`
`cat output.txt`
sleep 10
`python vm_qga_tool /opt/cloud/workspace/servers/6608da87-e374-4796-adb1-8faa29f49e9a/qga.sock connect:172.16.0.1`
sleep 10
done
but running it reports errors:
test-dpdk-virtio-net-show.sh: line 13: Checking: command not found
test-dpdk-virtio-net-show.sh: line 15: {"session":: command not found
Line 13 is cat output.txt, line 15 is python vm_qga_tool ....
output.txt is the result of line 12's sh ./env_check, and the output of vm_qga_tool looks like this:
{"session"...
So how do I fix this? How do I cat the result of one shell script inside another shell script?

The `sh ./env_check` line will be replaced by the output of the script ./env_check, and the current shell will then try to execute that output as commands. In your case it fails because, luckily, bash couldn't interpret the output as commands. This could have been a disaster: if your script had produced output like rm -rf *, you can imagine what would happen.
If you want to display the result of ./env_check, prepend the line with echo, so that the output of ./env_check is fed to echo.
echo `sh ./env_check`
Or if you want to capture the output into a variable you do the following.
out=`sh ./env_check`
The same goes for the cat ... and python ... lines.
However, the use of backticks is discouraged; use $(...) instead.
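One practical advantage: $(...) nests without awkward escaping. A minimal illustration:
outer=$(echo "outer: $(echo inner)")
echo "$outer"    # prints: outer: inner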
...
sleep 10
echo $(sh ./env_check)
echo $(cat output.txt)
sleep 10
echo $(python vm_qga_tool /opt/cloud/workspace/servers/6608da87-e374-4796-adb1-8faa29f49e9a/qga.sock connect:172.16.0.1)
sleep 10
...
However, you don't need any of that here. If you don't need to capture the command output, invoke the scripts directly instead of through command substitution.
...
sleep 10
sh ./env_check
cat output.txt
sleep 10
python vm_qga_tool /opt/cloud/workspace/servers/6608da87-e374-4796-adb1-8faa29f49e9a/qga.sock connect:172.16.0.1
sleep 10
...

The issue occurred because your shell took the output of `sh ./env_check` and `python vm_qga_tool ...` as commands and tried to run them.
To fix it, remove the backtick (`) characters from around sh ./env_check and python vm_qga_tool ....

Related

Bash command with pipe('|') always returns exit code of 0, even in error case [duplicate]

I want to execute a long running command in Bash, and both capture its exit status, and tee its output.
So I do this:
command | tee out.txt
ST=$?
The problem is that the variable ST captures the exit status of tee and not of command. How can I solve this?
Note that command is long running and redirecting the output to a file to view it later is not a good solution for me.
There is an internal Bash variable called $PIPESTATUS; it’s an array that holds the exit status of each command in your last foreground pipeline of commands.
<command> | tee out.txt ; test ${PIPESTATUS[0]} -eq 0
Or another alternative which also works with other shells (like zsh) would be to enable pipefail:
set -o pipefail
...
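A quick demonstration of the effect:
set -o pipefail
false | tee out.txt
echo $?    # prints 1 (false's failure) instead of tee's 0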
The first option does not work with zsh, which uses slightly different syntax.
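For reference, a rough zsh sketch of the equivalent, assuming zsh's lowercase, 1-indexed $pipestatus array:
# zsh only: the array is $pipestatus (lowercase) and indexing starts at 1
false | tee out.txt
echo ${pipestatus[1]}    # exit status of false, i.e. 1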
Dumb solution: Connecting them through a named pipe (mkfifo). Then the command can be run second.
mkfifo pipe
tee out.txt < pipe &
command > pipe
echo $?
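A slightly fuller sketch of the same idea with cleanup; command stands in for the real producer:
mkfifo pipe
tee out.txt < pipe &
command > pipe    # $? below is command's status, since tee runs in the background
status=$?
wait              # let the backgrounded tee finish draining the fifo
rm pipe
echo "$status"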
using bash's set -o pipefail is helpful
pipefail: the return value of a pipeline is the status of
the last command to exit with a non-zero status,
or zero if no command exited with a non-zero status
There's an array that gives you the exit status of each command in a pipe.
$ cat x| sed 's///'
cat: x: No such file or directory
$ echo $?
0
$ cat x| sed 's///'
cat: x: No such file or directory
$ echo ${PIPESTATUS[*]}
1 0
$ touch x
$ cat x| sed 's'
sed: 1: "s": substitute pattern can not be delimited by newline or backslash
$ echo ${PIPESTATUS[*]}
0 1
This solution works without using bash specific features or temporary files. Bonus: in the end the exit status is actually an exit status and not some string in a file.
Situation:
someprog | filter
you want the exit status from someprog and the output from filter.
Here is my solution:
((((someprog; echo $? >&3) | filter >&4) 3>&1) | (read xs; exit $xs)) 4>&1
echo $?
See my answer for the same question on unix.stackexchange.com for a detailed explanation and an alternative without subshells and some caveats.
By combining PIPESTATUS[0] and the result of executing the exit command in a subshell, you can directly access the return value of your initial command:
command | tee ; ( exit ${PIPESTATUS[0]} )
Here's an example:
# the "false" shell built-in command returns 1
false | tee ; ( exit ${PIPESTATUS[0]} )
echo "return value: $?"
will give you:
return value: 1
So I wanted to contribute an answer like lesmana's, but I think mine is perhaps a little simpler and a slightly more advantageous pure-Bourne-shell solution:
# You want to pipe command1 through command2:
exec 4>&1
exitstatus=`{ { command1; printf $? 1>&3; } | command2 1>&4; } 3>&1`
# $exitstatus now has command1's exit status.
I think this is best explained from the inside out - command1 will execute and print its regular output on stdout (file descriptor 1), then once it's done, printf will execute and print command1's exit code on its stdout, but that stdout is redirected to file descriptor 3.
While command1 is running, its stdout is being piped to command2 (printf's output never makes it to command2 because we send it to file descriptor 3 instead of 1, which is what the pipe reads). Then we redirect command2's output to file descriptor 4, so that it also stays out of file descriptor 1 - because we want file descriptor 1 free for a little bit later, when we bring the printf output on file descriptor 3 back down into file descriptor 1 - because that's what the command substitution (the backticks) will capture, and that's what will get placed into the variable.
The final bit of magic is that first exec 4>&1 we did as a separate command - it opens file descriptor 4 as a copy of the external shell's stdout. Command substitution will capture whatever is written on standard out from the perspective of the commands inside it - but since command2's output is going to file descriptor 4 as far as the command substitution is concerned, the command substitution doesn't capture it - however once it gets "out" of the command substitution it is effectively still going to the script's overall file descriptor 1.
(The exec 4>&1 has to be a separate command because many common shells don't like it when you try to write to a file descriptor inside a command substitution, that is opened in the "external" command that is using the substitution. So this is the simplest portable way to do it.)
You can look at it in a less technical and more playful way, as if the outputs of the commands are leapfrogging each other: command1 pipes to command2, then the printf's output jumps over command 2 so that command2 doesn't catch it, and then command 2's output jumps over and out of the command substitution just as printf lands just in time to get captured by the substitution so that it ends up in the variable, and command2's output goes on its merry way being written to the standard output, just as in a normal pipe.
Also, as I understand it, $? will still contain the return code of the second command in the pipe, because variable assignments, command substitutions, and compound commands are all effectively transparent to the return code of the command inside them, so the return status of command2 should get propagated out - this, and not having to define an additional function, is why I think this might be a somewhat better solution than the one proposed by lesmana.
Per the caveats lesmana mentions, it's possible that command1 will at some point end up using file descriptors 3 or 4, so to be more robust, you would do:
exec 4>&1
exitstatus=`{ { command1 3>&-; printf $? 1>&3; } 4>&- | command2 1>&4; } 3>&1`
exec 4>&-
Note that I use compound commands in my example, but subshells (using ( ) instead of { }) will also work, though they may be less efficient.
Commands inherit file descriptors from the process that launches them, so the entire second line will inherit file descriptor four, and the compound command followed by 3>&1 will inherit file descriptor three. So the 4>&- makes sure that the inner compound command will not inherit file descriptor four, and the 3>&- makes sure it will not inherit file descriptor three, so command1 gets a 'cleaner', more standard environment. You could also move the inner 4>&- next to the 3>&-, but I figure why not just limit its scope as much as possible.
I'm not sure how often things use file descriptor three and four directly - I think most of the time programs use syscalls that return not-used-at-the-moment file descriptors, but sometimes code writes to file descriptor 3 directly, I guess (I could imagine a program checking a file descriptor to see if it's open, and using it if it is, or behaving differently accordingly if it's not). So the latter is probably best to keep in mind and use for general-purpose cases.
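As a concrete, hedged run of the same pattern, substituting false for command1 and tee out.txt for command2:
exec 4>&1
exitstatus=`{ { false 3>&-; printf $? 1>&3; } 4>&- | tee out.txt 1>&4; } 3>&1`
exec 4>&-
echo "exitstatus: $exitstatus"    # exitstatus: 1, even though tee succeeded
Here out.txt ends up empty because false produces no output, but $exitstatus correctly holds 1.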
(command | tee out.txt; exit ${PIPESTATUS[0]})
Unlike @cODAR's answer, this returns the original exit code of the first command, not just 0 for success and 127 for failure. But as @Chaoran pointed out, you can just call ${PIPESTATUS[0]}. It is important, however, that the whole thing is wrapped in parentheses.
In Ubuntu and Debian, you can apt-get install moreutils. This contains a utility called mispipe that returns the exit status of the first command in the pipe.
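Usage looks roughly like this; each stage of the pipeline is passed as a single string argument:
mispipe "false" "tee out.txt"
echo $?    # 1, the exit status of the first stage rather than tee's 0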
Outside of bash, you can do:
bash -o pipefail -c "command1 | tee output"
This is useful for example in ninja scripts where the shell is expected to be /bin/sh.
The simplest way to do this in plain bash is to use process substitution instead of a pipeline. There are several differences, but they probably don't matter very much for your use case:
When running a pipeline, bash waits until all processes complete.
Sending Ctrl-C to bash makes it kill all the processes of a pipeline, not just the main one.
The pipefail option and the PIPESTATUS variable are irrelevant to process substitution.
Possibly more
With process substitution, bash just starts the process and forgets about it; it's not even visible in jobs.
Mentioned differences aside, consumer < <(producer) and producer | consumer are essentially equivalent.
If you want to flip which one is the "main" process, you just flip the commands and the direction of the substitution to producer > >(consumer). In your case:
command > >(tee out.txt)
Example:
$ { echo "hello world"; false; } > >(tee out.txt)
hello world
$ echo $?
1
$ cat out.txt
hello world
$ echo "hello world" > >(tee out.txt)
hello world
$ echo $?
0
$ cat out.txt
hello world
As I said, there are differences from the pipe expression. The process may never stop running, unless it is sensitive to the pipe closing. In particular, it may keep writing things to your stdout, which may be confusing.
PIPESTATUS[@] must be copied to an array immediately after the pipe command returns.
Any reads of PIPESTATUS[@] will erase the contents.
Copy it to another array if you plan on checking the status of all pipe commands.
"$?" is the same value as the last element of "${PIPESTATUS[@]}",
and reading it seems to destroy "${PIPESTATUS[@]}", but I haven't absolutely verified this.
declare -a PSA
cmd1 | cmd2 | cmd3
PSA=( "${PIPESTATUS[@]}" )
This will not work if the pipe is in a sub-shell. For a solution to that problem,
see bash pipestatus in backticked command?
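A short sketch of checking every stage from the saved copy (cmd1, cmd2, cmd3 are the placeholders from the example above):
declare -a PSA
cmd1 | cmd2 | cmd3
PSA=( "${PIPESTATUS[@]}" )
for i in "${!PSA[@]}"; do
    [ "${PSA[$i]}" -eq 0 ] || echo "pipeline stage $i failed with status ${PSA[$i]}"
done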
Based on @brian-s-wilson's answer, this bash helper function:
pipestatus() {
local S=("${PIPESTATUS[@]}")
if test -n "$*"
then test "$*" = "${S[*]}"
else ! [[ "${S[*]}" =~ [^0\ ] ]]
fi
}
used thus:
1: get_bad_things must succeed, but it should produce no output; we still want to see any output it does produce
get_bad_things | grep '^'
pipestatus 0 1 || return
2: the whole pipeline must succeed
thing | something -q | thingy
pipestatus || return
Pure shell solution:
% rm -f error.flag; echo hello world \
| (cat || echo "First command failed: $?" >> error.flag) \
| (cat || echo "Second command failed: $?" >> error.flag) \
| (cat || echo "Third command failed: $?" >> error.flag) \
; test -s error.flag && (echo Some command failed: ; cat error.flag)
hello world
And now with the second cat replaced by false:
% rm -f error.flag; echo hello world \
| (cat || echo "First command failed: $?" >> error.flag) \
| (false || echo "Second command failed: $?" >> error.flag) \
| (cat || echo "Third command failed: $?" >> error.flag) \
; test -s error.flag && (echo Some command failed: ; cat error.flag)
Some command failed:
Second command failed: 1
First command failed: 141
Please note the first cat fails as well, because its stdout gets closed on it. The order of the failed commands in the log is correct in this example, but don't rely on it.
This method allows for capturing stdout and stderr for the individual commands so you can then dump that as well into a log file if an error occurs, or just delete it if no error (like the output of dd).
It may sometimes be simpler and clearer to use an external command, rather than digging into the details of bash. pipeline, from the minimal process scripting language execline, exits with the return code of the second command*, just like a sh pipeline does, but unlike sh, it allows reversing the direction of the pipe, so that we can capture the return code of the producer process (the below is all on the sh command line, but with execline installed):
$ # using the full execline grammar with the execlineb parser:
$ execlineb -c 'pipeline { echo "hello world" } tee out.txt'
hello world
$ cat out.txt
hello world
$ # for these simple examples, one can forego the parser and just use "" as a separator
$ # traditional order
$ pipeline echo "hello world" "" tee out.txt
hello world
$ # "write" order (second command writes rather than reads)
$ pipeline -w tee out.txt "" echo "hello world"
hello world
$ # pipeline execs into the second command, so that's the RC we get
$ pipeline -w tee out.txt "" false; echo $?
1
$ pipeline -w tee out.txt "" true; echo $?
0
$ # output and exit status
$ pipeline -w tee out.txt "" sh -c "echo 'hello world'; exit 42"; echo "RC: $?"
hello world
RC: 42
$ cat out.txt
hello world
Using pipeline has the same differences to native bash pipelines as the bash process substitution used in answer #43972501.
* Actually pipeline doesn't exit at all unless there is an error. It executes into the second command, so it's the second command that does the returning.
Why not use stderr? Like so:
(
# Our long-running process that exits abnormally
( for i in {1..100} ; do echo ploop ; sleep 0.5 ; done ; exit 5 )
echo $? 1>&2 # We pass the exit status of our long-running process to stderr (fd 2).
) | tee ploop.out
So ploop.out receives the stdout. stderr receives the exit status of the long running process. This has the benefit of being completely POSIX-compatible.
(Well, with the exception of the range expression in the example long-running process, but that's not really relevant.)
Here's what this looks like:
...
ploop
ploop
ploop
ploop
ploop
ploop
ploop
ploop
ploop
ploop
5
Note that the return code 5 does not get output to the file ploop.out.
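If you need strict POSIX there too, the {1..100} range expression in the example can be replaced with a plain counter loop:
i=1
while [ "$i" -le 100 ]; do
    echo ploop
    sleep 0.5
    i=$((i + 1))
done
exit 5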

How to execute commands read from the txt file using shell? [duplicate]

This question already has answers here:
Run bash commands from txt file
(4 answers)
Closed 4 years ago.
I tried to execute commands read from a txt file, but only the first command executes and then the script terminates. My script, shellEx.sh, is as follows:
echo "pwd" > temp.txt
echo "ls" >> temp.txt
exec < temp.txt
while read line
do
exec $line
done
echo "printed"
If I put echo in place of exec, it just prints both pwd and ls. But I want to execute pwd and ls one by one.
The output I am getting is:
$ bash shellEx.sh
/c/Users/Aditya Gudipati/Desktop
But after pwd, ls also needs to execute.
Can anyone give a better solution for this?
exec in bash is meant in the Unix sense where it means "stop running this program and start running another instead". This is why your script exits.
If you want to execute line as a shell command, you can use:
line="find . | wc -l"
eval "$line"
($line by itself will not allow using pipes, quotes, expansions or other shell syntax)
To execute the entire file including multiline commands, use one of:
source ./myfile # keep variables, allow exiting script
bash myfile # discard variables, limit exit to myfile
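Applied to the original shellEx.sh, a minimal corrected sketch (same read loop, with eval in place of exec, and the redirection moved onto the loop):
#!/bin/bash
echo "pwd" > temp.txt
echo "ls" >> temp.txt
while read -r line
do
    eval "$line"
done < temp.txt
echo "printed"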
A file with one valid command per line is itself a shell script. Just use the . command to execute it in the current shell.
$ echo "pwd" > temp.txt
$ echo "ls" >> temp.txt
$ . temp.txt

command to record terminal does not work with bash

I would like to use the script command. I have the following code:
#!/bin/bash
script &
wait
echo "hello"
echo "hello2"
pid=$(pidof script | awk '{print $1}')
kill -9 $pid
I need the script command to capture the output, but after the command script & the output is:
Script started, file is typescript
Script done, file is typescript
and script does not record anything. Any idea why?
This is how you should do it:
script <output-file> <commands>
Example:
script typescript bash -c 'echo "hello"; echo "hello2"'
Script started, output file is typescript
hello
hello2
Script done, output file is typescript
Then check output file created:
cat typescript
Script started on Sat Dec 19 01:54:04 2015
hello
hello2
Script done on Sat Dec 19 01:54:04 2015
There are two ways you can use the script command :
Save only the outputs of your code (i.e. batch mode)
$ script filename bash -c 'echo foo; echo bar'
which will output
Script started, file is filename
foo
bar
Script done, file is filename
Save everything that is displayed on your terminal (i.e. interactive mode). To end the recording, just type exit or hit Ctrl-D.
$ script filename
Script started, file is filename
$ echo foo
foo
$ echo bar
bar
$ exit
exit
Script done, file is filename
Note that the batch mode is something of a hack on top of the classic interactive way of using script.
In your case, just drop the & and the kill logic, and hit Ctrl-D when you want the recording to end.

How to output the start and stop datetime of shell script (but no other log)?

I am still very new to shell scripting (bash)...but I have written my first one and it is running as expected.
What I am currently doing is writing to the log with sh name-of-script.sh >> /cron.log 2>&1. However, this writes everything out. It was great for debugging, but now I don't need that.
I now only want to see the start date and time along with the end date and time.
I would still like to write to cron.log, but just the dates as mentioned above. I can't seem to figure out how to do that. Can someone point me in the right direction, either from within the script or similar to what I've done above?
A simple approach would be to add something like:
echo `date`: Myscript starts
to the top of your script and
echo `date`: Myscript ends
to the bottom and
echo `date`: Myscript exited because ...
wherever it exits with an error.
The backticks around date (not normal quotes) cause the output of the date command to be interpolated into the echo statement.
You could wrap this in functions and so forth to make it neater, or use date -u to print in UTC, but this should get you going.
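For instance, a small sketch of the function idea; log_msg is just a name chosen here, not from the original:
log_msg () {
    echo "$(date -u): $*"
}

log_msg "Myscript starts"
# ... your script here ...
log_msg "Myscript ends"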
You ask in the comments how you would avoid the rest of the output appearing.
One option would be to redirect the output and error of everything else in the script to /dev/null, by adding >/dev/null 2>&1 to every line that outputs something, or otherwise silence them. E.g.
if fgrep myuser /etc/password ; then
dosomething
fi
could be written:
if fgrep myuser /etc/password >/dev/null 2>&1 ; then
dosomething
fi
though
if fgrep -q myuser /etc/password ; then
dosomething
fi
is more efficient in this case.
Another option would be to put the date wrapper in the crontab entry. Something like:
0 * * * * sh -c 'echo `date`: myscript starting ; /path/to/myscript >/dev/null 2>&1; echo `date`: myscript finished'
Lastly, you could use a subshell. Put the body of your script into a function, and then call that in a subshell with output redirected.
#!/bin/bash
do_it ()
{
... your script here ...
}
echo `date`: myscript starting
( do_it ) >/dev/null 2>&1
echo `date`: myscript finished
Try the following:
TMP=$(date); name-of-script.sh; echo "$TMP-$(date)"
or with formatted date
TMP=$(date +%Y%m%d.%H%M%S); name-of-script.sh; echo "$TMP-$(date +%Y%m%d.%H%M%S)"

Concatenate strings inside bash script (different behaviour from shell)

I'm trying some stuff that works perfectly when I type it in the regular shell, but when I include it in a bash script file, it doesn't.
First example:
m=`date +%m`
m_1=$((m-1))
echo $m_1
This gives me the value of the last month (current minus one), but doesn't work if it's executed from a script.
Second example:
m=6
m=$m"t"
echo m
This returns "6t" in the shell (concatenating $m with "t"), but just gives me "t" when executed from a script.
I assume all these may be answered easily by an experienced Linux user, but I'm just learning as I go.
Thanks in advance.
Re-check your syntax.
Your first code snippet works from the command line, from bash, and from sh, since your syntax is valid sh. In my opinion you probably have typos in your script file:
~$ m=`date +%m`; m_1=$((m-1)); echo $m_1
4
~$ cat > foo.sh
m=`date +%m`; m_1=$((m-1)); echo $m_1
^C
~$ bash foo.sh
4
~$ sh foo.sh
4
The same can apply to the other snippet with corrections:
~$ m=6; m=$m"t"; echo $m
6t
~$ cat > foo.sh
m=6; m=$m"t"; echo $m
^C
~$ bash foo.sh
6t
~$ sh foo.sh
6t
Make sure the first line of your script is
#!/bin/bash
rather than
#!/bin/sh
Bash will only enable its extended features if explicitly run as bash. If run as sh, it will operate in POSIX compatibility mode.
First of all, it works fine for me both in a script and on the terminal.
Second of all, your last line, echo m, will just output "m". I think you meant "$m".
