I am trying to run multiple jobs at once and need to write which one failed to a log file. How do I do it? I tried to redirect the result to a log file, but it is not getting populated.
sh test1.sh & test_1_Pid=$!
sh test2.sh & test_2_Pid=$!
wait $test_1_Pid
test_1_Pid=$?
wait $test_2_Pid
test_2_Pid=$?
Start your script in a subshell and let each subprocess handle the errors.
(sh test1.sh || echo "test1" >> failure.txt)&
(sh test2.sh || echo "test2" >> failure.txt)&
Or perhaps without an explicit subshell:
sh test1.sh || echo "test1" >> failure.txt&
sh test2.sh || echo "test2" >> failure.txt&
Demo:
touch job1 job2 job4 job5 job7
rm -f result # when you repeat the demo
for ((j=1; j<8;j++)); do
(sleep 2; rm job$j 2>/dev/null || echo "job$j failed" >> result)&
done
wait
cat result
The result will show the jobs whose files you did not create:
job3 failed
job6 failed
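If you also want the per-job exit status that the wait-based approach in the question was after, here is a minimal sketch; true and false stand in for sh test1.sh and sh test2.sh:

```shell
#!/bin/bash
# Sketch: collect each background job's exit status via `wait <pid>`
# and log only the failures. `true`/`false` stand in for the real scripts.
rm -f failure.txt
true  & pid1=$!
false & pid2=$!
wait "$pid1" || echo "job1 failed (status $?)" >> failure.txt
wait "$pid2" || echo "job2 failed (status $?)" >> failure.txt
cat failure.txt
```

This keeps the logging in the parent script instead of in each subshell, which is handy if you also need the numeric status.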
I am trying to build a bash script to run a web server. I need the script to show output of web server in console until specific word appears on the console indicating either the server initialization completed successfully or some error occurs.
I was able to show the console output until timeout occurs:
#!/bin/bash
(exec /opt/aspnetcore-runtime-3.0.0-linux-x64/dotnet /opt/app/Launcher.dll &) | (timeout --foreground 6 cat; cat > /dev/null &)
If an error happens earlier than 6 seconds, then the web server stopped and the control goes back to the terminal, which is the desired behavior.
However, if the web server initialization completes successfully in 2 seconds, the user must wait another 4 seconds until the script finishes. I want to return control back to the terminal once some phrase (e.g. SUCCESS INIT!) appears on the console.
On the surface, replacing the current wait ('cat') with a custom loop that exits when 'SUCCESS INIT!' is found will address the problem.
Such a loop can be implemented with
while read x && echo "$x" && [ "$x" != "SUCCESS INIT!" ] ; do : ; done
And the combined command
(Launch-command &) | (timeout --foreground 6 sh -c 'while read x && echo "$x" && [ "$x" != "SUCCESS INIT!" ] ; do : ; done' ; cat > /dev/null &)
Not very elegant. If you can, put the 'timeout ... while ...' in a separate script. I did not test it, but it should work:
#! /bin/bash
# wait-line.sh
timeout --foreground "$1" sh -c 'while read x && echo "$x" && [ "$x" != "$1" ] ; do : ; done' sh "$2"
cat > /dev/null
And then the original command line will look like
( Launch-command ... & ) | ( wait-line.sh 6 "SUCCESS INIT!" &)
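As a quick sanity check of the read loop itself (no timeout involved), you can feed it lines on stdin and verify that it echoes up to and including the marker, then stops; this is a sketch with made-up input lines:

```shell
#!/bin/bash
# wait_for_line echoes stdin until (and including) the marker line, then stops.
wait_for_line() {
    while read -r x && echo "$x" && [ "$x" != "$1" ]; do :; done
}

# Three lines in; only the first two should come out.
out=$(printf '%s\n' "starting" "SUCCESS INIT!" "late line" | wait_for_line "SUCCESS INIT!")
echo "$out"
```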
I am running a shell script from another shell script which is a git-hook pre-push.
This is the content of .git/hooks/pre-push:
#!/usr/bin/env bash
protected_branch='master'
current_branch=$(git symbolic-ref HEAD | sed -e 's,.*/\(.*\),\1,')
if [ "$protected_branch" = "$current_branch" ]; then
sh test.sh
if [ $? != 0 ]; then
echo "Error"
exit 1
fi
else
exit 0
fi
This is the content of test.sh:
#!/bin/bash
run_base=1
run_test () {
read -p "enter varname: " varname
echo $varname
}
time {
if [ "$run_base" = "0" ] ; then
echo "skipped"
else
time { run_test ; }
echo "run_test done";
fi
}
If I run pre-push script directly, then it works fine, but it doesn't work when I execute git push origin master and pre-push gets triggered automatically.
read in test.sh is not being executed when I trigger the pre-push hook script. Do I need to do anything specific in order to execute read in test.sh which is called from pre-push?
I just tested it on my computer and it works perfectly:
isoto#hal9014-2 ~/w/test> ./pre-push
enter varname: asd
asd
real 0m1.361s
user 0m0.000s
sys 0m0.000s
run_test done
real 0m1.361s
user 0m0.000s
sys 0m0.000s
So the only thing that I did was to add executable permissions, chmod +x *, and put both scripts in the same directory; apart from that, everything should work.
Found the answer, I had to add < /dev/tty at the end of read:
read -p "enter varname: " varname < /dev/tty
echo $varname
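The underlying reason is that git runs hooks with stdin redirected rather than attached to the terminal, so read sees EOF immediately. A small sketch shows the distinction (check_stdin is a hypothetical helper for illustration):

```shell
#!/bin/bash
# Sketch: hooks run with stdin redirected, so `read` gets EOF.
# check_stdin reports whether its stdin is a terminal.
check_stdin() {
    if [ -t 0 ]; then
        echo "tty"
    else
        echo "not a tty"
    fi
}

# A hook's stdin behaves like this redirected call:
result=$(check_stdin < /dev/null)
echo "$result"
```

Redirecting read from /dev/tty, as above, reattaches it to the controlling terminal regardless of what stdin is.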
I want to execute a script, but as soon as it starts I want it to echo the PID, so I'm doing the following:
./dummy.SCRIPT & echo $!
but I also want it to echo something when it is done executing the script so I'm guessing it should look something like:
( ./dummy.SCRIPT & echo $! ) && echo "Task Done!"
But it doesn't seem to work; the "Task Done!" echo executes at the same time dummy.SCRIPT starts. What am I doing wrong?
Rather than a command line execution, write a small script that does that for you.
$ cat monitordummy.sh
#!/bin/bash
./dummyscript &
pidofdummy=$!
while ps -p $pidofdummy > /dev/null
do
    sleep 5        # Modify this according to your run time
    echo -n "...." # Progress indicator printed on stdout
done
echo "Task Done for dummyscript"
Run the script:
$ chmod u+x monitordummy.sh
$ ./monitordummy.sh
It isn't pretty, but should do what you're describing:
{ ./dummy.SCRIPT & PID=$!; echo $PID; } && { wait $PID ; echo "Task Done!"; }
In the first block you run the script, then save and print its PID. When that succeeds, you run the second block, which waits for the child process to finish and prints your message. Note that wait also propagates the return code of the finished process, so at this point you could report done or failed accordingly.
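That propagation is easy to verify in isolation; in this sketch a subshell exiting with status 3 stands in for dummy.SCRIPT:

```shell
#!/bin/bash
# Sketch: `wait <pid>` returns the exit status of the awaited child.
( sleep 0.1; exit 3 ) & PID=$!
echo "started $PID"
wait $PID
status=$?
echo "child exited with $status"
```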
I want to associate the FTP session with a file descriptor that refers to it throughout the script, including inside loops. For example, something like this (but I could not get it to work):
#!/bin/bash
#start of script
exec {ftpdescriptor}<> >(lftp -u $ftpuser,$ftppass $ftpip/$ftptd)
# code
(echo "ls" 1>&"$ftpdescriptor")> myanswer
# code
echo "bye" 1>&"$ftpdescriptor"
exec {ftpdescriptor}>&-
exit 0
# end of script
It works, but the answer always goes to stdout.
I solved the problem like this:
# start of script
ftpout=$(mktemp)
exec {ftpin}> >(lftp -u $ftpuser,$ftppass $ftpip/$ftptd > $ftpout)
printf >&$ftpin "set net:timeout 10\n"
function ftpio {
    :> $ftpout                  # truncate the previous answer
    printf >&$ftpin "$1\n"      # send the command to lftp
    i=0
    # wait for output unless "nowait" was passed as the second argument
    while [ -z "$2" ] && [ ! -s $ftpout ] && [ $i -lt 10 ]; do
        # echo "waiting answer from ftp 1 sec.."
        sleep 1
        let i=i+1
    done
}
# code
ftpio "cd /modx" "nowait" # no output of cd command.
ftpio "ls"
cat $ftpout
sleep 15
ftpio "pwd"
cat $ftpout
#ftpio "put /var/www/vhosts/modx/backups/20160121.113318.tar.gz" "nowait" # 12Gb
#ftpio "put /var/www/vhosts/modx/backups/20150930.092338.tar.gz" "nowait" # 800mb
# /code
# end of script
printf 1>&"$ftpin" "bye\n"
exec {ftpin}>&-
rm $ftpout
ftpin - the named descriptor
ftpout - a temp file holding the last answer from ftp
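Bash's coproc builtin is another way to get named descriptors to a long-running background process; this is a minimal sketch where plain cat stands in for lftp (assumption: bash >= 4):

```shell
#!/bin/bash
# Sketch: coproc gives you read and write descriptors to a background
# process in ${NAME[0]} / ${NAME[1]}. `cat` stands in for lftp here.
coproc CAT { cat; }
echo "ls" >&"${CAT[1]}"        # send a "command" to the session
read -r reply <&"${CAT[0]}"    # read its reply
echo "got: $reply"
kill "$CAT_PID"                # end the session
```

With lftp in place of cat, this removes the need for the mktemp output file, though you still have to decide how long to wait for each answer.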
I have the following variables set in a script.
SU="/bin/su -s /bin/sh"
WSO2_SCRIPT="JAVA_HOME=$JAVA_HOME /opt/autopilot/wso2is/bin/wso2server.sh"
WSO2_USER=autoplt
This part of the script is of concern:
if [ "$RETVAL" -eq "0" ]; then
if [ "$(whoami)" != "${WSO2_USER}" ]; then
$SU - $WSO2_USER -c "${WSO2_SCRIPT} start" >> ${WSO2_LOG} 2>&1 || RETVAL="4"
else
${WSO2_SCRIPT} start >> ${WSO2_LOG} 2>&1 || RETVAL="4"
fi
fi
If I am root, then the following command gets executed:
/bin/su -s /bin/sh - autoplt -c 'JAVA_HOME=/usr/java/latest /opt/autopilot/wso2is/bin/wso2server.sh start'
and RETVAL will get evaluated to 0.
When I am user autoplt, the following command gets executed:
JAVA_HOME=/usr/java/latest /opt/autopilot/wso2is/bin/wso2server.sh start
However, RETVAL gets evaluated to 4.
Are they not the same commands? Shouldn't RETVAL be 0 in each case?
The command gets executed successfully when I run it in the shell as autoplt user.
Therefore is there something wrong with the way I have written it?
The double pipe || used in these lines:
$SU - $WSO2_USER -c "${WSO2_SCRIPT} start" >> ${WSO2_LOG} 2>&1 || RETVAL="4"
${WSO2_SCRIPT} start >> ${WSO2_LOG} 2>&1 || RETVAL="4"
means that if the first command succeeds, the command after || will not be executed.
So when running as root, the command succeeds and RETVAL is left unchanged; when running as the autoplt user, it fails, so RETVAL is set to 4.
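The short-circuit behavior is easy to see in isolation; in this sketch true and false stand in for the wso2server.sh invocation:

```shell
#!/bin/bash
# Sketch of the || pattern: RETVAL changes only when the command fails.
RETVAL=0
true  || RETVAL=4
echo "after true:  RETVAL=$RETVAL"
false || RETVAL=4
echo "after false: RETVAL=$RETVAL"
```

So a RETVAL of 4 means the start command itself returned non-zero in that branch; checking ${WSO2_LOG} for the actual error is the next step.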