Start lots of background jobs but keep their logs separated - linux

I have little experience with shell commands in Unix.
So far, I have checked Stack Overflow and know how to run simple shell scripts in order by:
using echo
echo $(sh dosomthing1.sh)
echo $(sh dosomthing2.sh)
directly using sh xxx and wait
sh dosomthing1.sh
wait
sh dosomthing2.sh
using &&
sh dosomthing1.sh && sh dosomthing2.sh
But none of these ways seems to solve my problem...
Here is my problem:
I have a basic shell script that does a Maven compile and then uses "nohup xxx &" to start a Java application in the background. The script is shown below:
#get the input env parameter
env=$1
#go to the application root directory
cd /applicationDir
#to compile
mvn install -Dmaven.test.skip=true
#to start with parameter env
nohup java -jar -Dspring.profiles.active=$env myApplication.jar &
#to tail the log
tail -20f myApplication.log
I have too many different applications with the same kind of startup script, and it is tedious to start them one by one. I need to start them with one command.
All the shell scripts are expected to be processed one by one, in order. If any of them fails, skip it and run the next one.
And when I tried to write a script like this:
sh start1.sh
wait
echo "application 1 was start up"
sh start2.sh
wait
echo "application 2 was start up"
...
sh startxxx.sh
wait
echo "application xxx was start up"
Although all the child scripts ran in order as expected, and the output made it look like everything was working, in fact only the last application was left running; every previous process started with "nohup xxxx &" was shut down.
Also I have tried to write like this:
sh start1.sh &
sh start2.sh &
...
sh startxxx.sh &
The result was what I wanted, all the applications started up fine, but because the scripts run in parallel, the console output is unreadable. It gives a good result, but not in a graceful way.
I have no idea how to solve this problem...
Please help me with this, thank you very much!

When you have a script with commands, you can do chmod +x start.sh. Now the script can be started with ./start.sh. You will avoid an additional sh process, and with ls -l you can see which scripts are executable.
In your scripts you have tail -f. This will be very confusing for a background process. Start the scripts in the background and view the logging from the console. I do hope that each script is using a different myApplication.jar and a different myApplication.log.
When the logging in the logfile is duplicated on stdout (your command-line window), you can throw that logging away:
./start1.sh > /dev/null 2>&1 &
./start2.sh > /dev/null 2>&1 &
./startxxx.sh > /dev/null 2>&1 &
The processes will be killed when you log out before the scripts have terminated. This can be avoided with nohup:
nohup ./start1.sh > /dev/null 2>&1 &
nohup ./start2.sh > /dev/null 2>&1 &
nohup ./startxxx.sh > /dev/null 2>&1 &
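If you have many such scripts, a small driver script can start them all and give each one its own log file, so the output stays separated. This is only a minimal sketch; it assumes the start scripts are executable, live in the current directory, and that writing into a logs/ directory is acceptable:
#!/bin/bash
# Sketch: start every start*.sh in the background with its own log file.
mkdir -p logs
for script in ./start*.sh; do
    name=$(basename "$script" .sh)
    nohup "$script" > "logs/$name.log" 2>&1 &
    echo "launched $script (pid $!), logging to logs/$name.log"
done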
Edit:
The OP wants to start the programs in a fixed order.
Starting the scripts exactly one after another, in order, should be possible by calling them in the right order (perhaps with an additional sleep 1).
When you need to wait until program 1 has finished some init work, you need to check for that. Use one script that calls all the scripts and add some control statements, like
nohup java something &
while ! grep -q "Started" myApplication.log; do
sleep 1
done
When the java program has an error, the while loop will wait forever, so replace it with a version that has a maximum retry count:
for ((retry=0; retry<100; retry++)); do
grep -q "Started" myApplication.log && break
sleep 1
done
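Putting the pieces together, one control script could start the applications strictly in order. This is only a sketch: it assumes the tail -f has been removed from the individual start scripts as suggested above, and the log file paths are placeholders you would adjust to your own layout:
#!/bin/bash
# Wait until a log file contains "Started", but give up after 100 tries.
wait_for_started() {
    local logfile=$1
    for ((retry=0; retry<100; retry++)); do
        grep -q "Started" "$logfile" && return 0
        sleep 1
    done
    echo "warning: $logfile never reported Started, continuing with the next one"
    return 1
}

./start1.sh > /dev/null 2>&1
wait_for_started /applicationDir1/myApplication.log
echo "application 1 was started up"

./start2.sh > /dev/null 2>&1
wait_for_started /applicationDir2/myApplication.log
echo "application 2 was started up"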

https://man7.org/linux/man-pages/man8/cron.8.html
This might help you. Cron is a task scheduler, which you can use to run programs in sequence. If the man page is difficult to understand, look for tutorials on it; I'm sure some exist.
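For illustration only, a couple of crontab entries (edited with crontab -e) might look like this; the script paths are placeholders:
# run the whole startup sequence once when the machine boots
@reboot /home/user/start_all.sh
# or run two scripts on their own schedules
*/1 * * * * /home/user/job_every_minute.sh
0 3 * * * /home/user/job_daily_at_3am.sh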

Related

How do I setup two curl commands to execute at different times forever?

For example, I want to run one command every 10 seconds and the other command every 5 minutes. I can only get the first one to log properly to a text file. Below is the shell script I am working on:
echo "script Running. Press CTRL-C to stop the process..."
while sleep 10;
do
curl -s -I --http2 https://www.ubuntu.com/ >> new.txt
echo "------------1st command--------------------" >> logs.txt;
done
||
while sleep 300;
do
curl -s -I --http2 https://www.google.com/
echo "-----------------------2nd command---------------------------" >> logs.txt;
done
I would advise you to go with @Marvin Crone's answer, but researching cron jobs and background processes doesn't seem like the kind of hassle I would go through for this little script. Instead, try putting both loops into separate scripts, like so:
script1.sh
echo "job 1 Running. Type fg 1 and press CTRL-C to stop the process..."
while sleep 10;
do
echo $(curl -s -I --http2 https://www.ubuntu.com/) >> logs.txt;
done
script2.sh
echo "job 2 Running. Type fg 2 and press CTRL-C to stop the process..."
while sleep 300;
do
echo $(curl -s -I --http2 https://www.google.com/) >> logs.txt;
done
Add executable permissions:
chmod +x script1.sh
chmod +x script2.sh
and last but not least, run them:
./script1.sh & ./script2.sh &
This creates two separate jobs in the background that you can bring to the foreground by typing:
fg %1 (or fg %2)
and stop with CTRL-C, or suspend with CTRL-Z and send back to the background again with bg.
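A rough idea of what that looks like in a terminal session (job numbers and PIDs will differ):
$ ./script1.sh & ./script2.sh &
[1] 4711
[2] 4712
$ jobs
[1]-  Running    ./script1.sh &
[2]+  Running    ./script2.sh &
$ fg %2      # brings job 2 to the foreground; CTRL-C stops it
$ fg %1      # brings job 1 forward; CTRL-Z suspends it, bg resumes it in the background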
I think what is happening is that you start the first loop. Your first loop needs to complete before the second loop will start. But, the first loop is designed to be infinite.
I suggest you put each curl loop in a separate batch file.
Then, you can run each batch file separately, in the background.
I offer two suggestions for you to investigate for your solution.
One, research the use of crontab and set up a cron job to run the batch files.
Two, research the use of nohup as a means of running the batch files.
I strongly suggest you also research the means of monitoring the jobs and knowing how to terminate the jobs if anything goes wrong. You are setting up infinite loops. A simple Control C will not terminate jobs running in the background. You are treading in areas that can get out of control. You need to know what you are doing.
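As a rough illustration of that monitoring, finding and stopping the background loops could look like this (pgrep/pkill and jobs/kill are standard tools; the script names are simply the ones used above):
# list the matching processes with their full command lines
pgrep -af script1.sh
# stop them by name
pkill -f script1.sh
pkill -f script2.sh
# or, from the shell that started them, by job number
jobs -l
kill %1 %2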

Kill background process started from the same bash script [duplicate]

I have a script that looks like this:
pushd .
nohup java -jar test/selenium-server.jar > /dev/null 2>&1 &
cd web/code/protected/tests/
phpunit functional/
popd
The Selenium server needs to be running for the tests; however, after the phpunit command finishes I'd like to kill the selenium-server that was running.
How can I do this?
You can probably save the PID of the process in a variable, then use the kill command to kill it.
pushd .
nohup java -jar test/selenium-server.jar > /dev/null 2>&1 &
serverPID=$!
cd web/code/protected/tests/
phpunit functional/
kill $serverPID
popd
I haven't tested it myself; I would have written this as a comment, but I don't have enough reputation yet :)
When the script is executed, a new shell instance is created, which means that jobs inside the script will not list any jobs running in the parent shell.
Since the selenium-server is the only background process created in the new script, it can be killed using
#The first job
kill %1
Or
#The previous job (the same as the first one here, since there is only one)
kill %-
As long as you don't launch any other process in the background - which you don't - you can use $! directly:
pushd .
nohup java -jar test/selenium-server.jar > /dev/null 2>&1 &
cd web/code/protected/tests/
phpunit functional/
kill $!
popd

Bash redirection in a script in parallel

I have a bash script with a loop of processes that I want to run in parallel:
for i in {1..5}
do
echo Running for simulation $i
python script.py $i > ./outlogs/$i.log 2>&1 &
done
But when I do this the file redirection doesn't work, so $i.log just stays empty. The redirection only works when I do not use the & at the end, but then the script waits for each process to finish before starting the next one, which I don't want.
I tried a solution using script -c, but this does not update in realtime, only once the process ends. Does anyone have better suggestions, where the file redirection works in this script but it still updates in realtime?
You simply need to add the -u option, so it will look like this:
python -u script.py $i > ./outlogs/$i.log 2>&1 &
The -u option forces the stdout and stderr streams to be unbuffered.
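If you would rather not touch each command line, setting the PYTHONUNBUFFERED environment variable has the same effect for CPython; this is just an alternative to the -u flag above:
PYTHONUNBUFFERED=1 python script.py $i > ./outlogs/$i.log 2>&1 &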

How to get the process id of command executed in bash script?

I have a script where I want to run 2 programs at the same time: one is a C program and the other is cpulimit. I want to start the C program in the background first with "&", then get the PID of the C program and hand it to cpulimit, which will also run in the background with "&".
I tried doing this below and it just starts the first program and never starts cpulimit.
I am also running this as a startup script as root, using systemd on Arch Linux.
#!/bin/bash
/myprogram &
PID=$!
cpulimit -z -p $PID -l 75 &
exit 0
I think I have this solved now. According to this here: link, I need to wrap the commands like this (command) to create a subshell.
#!/bin/bash
(mygprgram &)
mypid=$!
(cpulimit -z -p $mypid -l 75 &)
exit 0
I just found this while googling and wanted to add something.
While your solution seems to work (see the comments about subshells), in this case you don't need to get the PID at all. Just run the command like this:
cpulimit -z -l 75 myprogram &

Running shell script command after executing an application

I have written a shell script to execute a series of commands. One of the commands in the shell script is to launch an application. However, I do not know how to continue running the shell script after I have launched the application.
For example:
...
cp somedir/somefile .
./application
rm -rf somefile
Once I launch the application with "./application", I am no longer able to continue running the "rm -rf somefile" command, but I really need to remove the file from the directory.
Does anyone have any ideas how to complete the "rm -rf" command after launching the application?
Thanks
As pointed out by others, you can background the application (man bash 'job control', e.g.).
Also, you can use the wait builtin to explicitly await the background jobs later:
./application &
echo doing some more work
wait # wait for background jobs to complete
echo application has finished
You should really read the man pages and bash help for more details, as always:
http://unixhelp.ed.ac.uk/CGI/man-cgi?sh
http://www.gnu.org/s/bash/manual/bash.html#Job-Control-Builtins
Start the application in the background; this way the shell will not wait for it to terminate and will execute the subsequent commands right after starting the application:
./application &
In the meantime, you can check the background jobs by using the jobs command and wait on them via wait and their ID. For example:
$ sleep 100 &
[1] 2098
$ jobs
[1]+ Running sleep 100 &
$ wait %1
Put the started process into the background:
./application &
You need to start the command in the background using '&' and maybe even nohup.
nohup ./application > log.out 2>&1 &
