I would like to make a .sh script that runs multiple other .sh scripts in new tabs/windows.
Something like this inside main.sh:
sh1.sh
# wait 5 seconds to load
sh2.sh
# wait 5 seconds
sh3.sh
You could try xterm -e ~/sh1.sh as your command. It'll close as soon as the script has finished though.
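If you want the window to stay open after the script finishes, xterm's -hold option is one way to do it (a minimal sketch; -hold keeps the window around until you close it yourself):
xterm -hold -e ~/sh1.sh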
If you need to run them in separate windows simultaneously, you need to background each process.
xterm -e sh1.sh &
sleep 5 # why do you want to pause between invocations?
xterm -e sh2.sh &
sleep 5
xterm -e sh3.sh &
This should probably be refactored to use a loop and/or a wrapper function.
for prog in sh1.sh sh2.sh sh3.sh; do
    xterm -e "$prog" &
    sleep 5
done
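And a minimal sketch of the wrapper-function variant, keeping the 5-second delay from the question:
# launch: run a script in its own xterm window, then pause before the next one
launch() {
    xterm -e "$1" &
    sleep 5
}

launch sh1.sh
launch sh2.sh
launch sh3.sh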
For example, I want to run one command every 10 seconds and another command every 5 minutes. I can only get the first one to log properly to a text file. Below is the shell script I am working on:
echo "script Running. Press CTRL-C to stop the process..."
while sleep 10;
do
curl -s -I --http2 https://www.ubuntu.com/ >> new.txt
echo "------------1st command--------------------" >> logs.txt;
done
||
while sleep 300;
do
curl -s -I --http2 https://www.google.com/
echo "-----------------------2nd command---------------------------" >> logs.txt;
done
I would advise you to go with @Marvin Crone's answer, but researching cron jobs and background processes doesn't seem like the kind of hassle I would go through for this little script. Instead, try putting both loops into separate scripts, like so:
script1.sh
echo "job 1 Running. Type fg %1 and press CTRL-C to stop the process..."
while sleep 10;
do
    # append ubuntu.com response headers to the log every 10 seconds
    echo "$(curl -s -I --http2 https://www.ubuntu.com/)" >> logs.txt;
done
script2.sh
echo "job 2 Running. Type fg %2 and press CTRL-C to stop the process..."
while sleep 300;
do
    # append google.com response headers to the log every 5 minutes
    echo "$(curl -s -I --http2 https://www.google.com/)" >> logs.txt;
done
Adding executable permissions:
chmod +x script1.sh
chmod +x script2.sh
And last but not least, running them:
./script1.sh & ./script2.sh &
this creates two separate jobs in the background that you can bring to the foreground by typing:
fg %1 (or fg %2)
and stop with CTRL-C, or suspend with CTRL-Z and resume in the background with bg.
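For illustration, a session might look roughly like this (job numbers and output are examples):
$ ./script1.sh & ./script2.sh &
$ jobs
[1]-  Running    ./script1.sh &
[2]+  Running    ./script2.sh &
$ fg %1    # bring job 1 to the foreground; CTRL-C now stops it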
I think what is happening is that you start the first loop, and that loop has to complete before the second loop will start. But the first loop is designed to be infinite.
I suggest you put each curl loop in a separate script file.
Then, you can run each script separately, in the background.
I offer two suggestions for you to investigate for your solution.
One, research the use of crontab and set up a cron job to run the scripts.
Two, research the use of nohup as a means of running the scripts.
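For illustration only (the paths are assumptions), a cron entry and a nohup invocation might look like this; note that cron's finest granularity is one minute, so the 10-second interval would still have to live inside the script itself:
# crontab -e: run script2.sh every 5 minutes
*/5 * * * * /home/user/script2.sh

# or detach a script from the terminal so it keeps running after logout:
nohup /home/user/script1.sh > /dev/null 2>&1 &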
I strongly suggest you also research how to monitor the jobs and how to terminate them if anything goes wrong. You are setting up infinite loops, and a simple CTRL-C will not terminate jobs running in the background. You are treading in areas that can get out of control; you need to know what you are doing.
I'm trying to schedule a series of MPI jobs on an Ubuntu 14.04 LTS machine using a bash script. Basically, I want a simulation to run on every core for a certain amount of time, then terminate and move on to the next case once that time has elapsed.
My issue arises when mpirun exits at the end of the first job: it breaks the loop and returns the terminal to my control instead of heading on to the next iteration of the loop.
My script is included below. The file "case_names" is just a text file of directory names. I've tested the script with other commands and it works fine until I uncomment the mpirun call.
#!/bin/bash
while read line;
do
    # Access case directory
    cd "$line"
    echo "Case $line accessed"
    # Start simulation
    echo "Case $line starting: $(date)"
    mpirun -q -np 8 dsmcFoamPlus -parallel > log.dsmcFoamPlus &
    # Wait for 10 hour runtime
    sleep 36000
    # Kill job
    pkill mpirun > /dev/null
    echo "Case $line terminated: $(date)"
    # Return to parent directory
    cd ..
done < case_names
Does anyone know of a way to stop mpirun from breaking the loop like this?
So far I've tried GNOME task scheduler and task-spooler, but neither has worked (likely due to aliases that have to be invoked before the commands I use become available). I'd really rather not have to resort to setting up Slurm. I've also tried using the disown command to separate the mpi process from the shell I'm running the scheduling script in, and have even written a separate script just to kill processes, which the scheduling script runs remotely.
Many thanks in advance!
I've managed to find a workaround that allows me to schedule tasks with a bash script like I wanted. Since this solves my issue, I'm posting it as an answer (although I would still welcome an explanation as to why mpi behaves in this way in loops).
The solution lay in writing a separate script that first calls and then kills mpi, and which is itself called by the scheduling script. Since this child bash process contains no loops, there is nothing for mpi to break when it is killed, and once the child script exits, the scheduling loop can continue unimpeded.
My (now working) code is included below.
Scheduling script:
while read line;
do
    cd "$line"
    echo "CWD: $(pwd)"
    echo "Case $line accessed"
    export line    # make $line visible to run_job for its log messages
    bash ../run_job
    echo "Case $line terminated: $(date)"
    cd ..
done < case_names
Execution script (run_job):
mpirun -q -np 8 dsmcFoamPlus -parallel > log.dsmcFoamPlus &
echo "Case $line starting: $(date)"    # $line is exported by the scheduling script
sleep 600    # runtime before the job is killed (36000 for the full 10-hour case)
pkill mpirun
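As an aside, a likely explanation for the loop breaking (offered as a guess, not a confirmed diagnosis): commands that read stdin inside a while read loop can consume the rest of case_names and end the loop early, and mpirun forwards stdin to the job. Redirecting its stdin is a common guard, and coreutils timeout can replace the sleep/pkill pair; a minimal sketch under those assumptions:
# stdin redirected so mpirun cannot swallow case_names;
# timeout ends the run after 600 seconds by sending SIGTERM
timeout 600 mpirun -q -np 8 dsmcFoamPlus -parallel < /dev/null > log.dsmcFoamPlus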
I hope someone will find this useful.
I have two scripts, in which one is calling the other, and needs to kill it after some time. A very basic, working example is given below.
main_script.sh:
#!/bin/bash
cd "${0%/*}" #make current working directory the folder of this script
./record.sh &
PID=$!
# perform some other commands
sleep 5
kill -s SIGINT $PID
#wait $PID
echo "Finished"
record.sh:
#!/bin/bash
cd "${0%/*}" #make current working directory the folder of this script
RECORD_PIDS=1
printf "WallTimeStart: %f\n\n" $(date +%s.%N) >> test.txt
top -b -p $RECORD_PIDS -d 1.00 >> test.txt
printf "WallTimeEnd: %f\n\n" $(date +%s.%N) >> test.txt
Now, if I run main_script.sh, it will not cleanly shut down record.sh when it finishes: the top command keeps running in the background (test.txt grows until you kill the top process manually), even though main_script.sh has finished and record.sh was killed with SIGINT.
If I ctrl+c the main_script.sh, everything shuts down properly. If I run record.sh on its own and ctrl+c it, everything shuts down properly as well.
If I uncomment wait, the script will hang and I will need to ctrl+z it.
I have already tried all kinds of things, including using trap to launch a cleanup script on SIGINT, EXIT, and/or SIGTERM, but nothing worked. I also tried bringing record.sh back to the foreground with fg, but that did not help either. I have been searching for nearly a day now, with no luck unfortunately. I have made an ugly workaround which uses pidof to find the top process and kill it manually (from main_script.sh), and then I have to write the "WallTimeEnd" statement to the file manually as well from main_script.sh. Not very satisfactory to me...
Looking forward to any tips!
Cheers,
Koen
Your issue is that the SIGINT is delivered to bash rather than to top. One option would be to use a new session and send the signal to the process group instead, like:
#!/bin/bash
cd "${0%/*}" #make current working directory the folder of this script
setsid ./record.sh &
PID=$!
# perform some other commands
sleep 5
kill -s SIGINT -- -$PID    # -- so the negative PID is not parsed as an option
wait $PID
echo "Finished"
This starts the sub-script in a new process group, and the negative PID (-$PID, placed after the -- option terminator) tells kill to signal every process in that group, which includes top.
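If you want to verify the grouping, the pid/pgid columns of ps make an easy check (illustrative; exact output varies by system):
$ ./main_script.sh &
$ ps -e -o pid,pgid,comm | grep -E 'record|top'   # record.sh and top share their own PGID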
I am trying to run the following command as part of a bash script which is supposed to open an ssh channel, run a program on the remote machine, save its output to a file for 10 seconds, kill the process that was writing to the file, and then give control back to the bash script.
#!/bin/bash
ssh hostname '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null; sshpid=!$; sleep 10; kill -9 $sshpid 2>/dev/null &'
Unfortunately, what it seems to be doing is starting the program nodes-listener remotely, but it never gets any further and doesn't give control back to the bash script, so the only way to stop the execution is Ctrl+C.
Killing ssh doesn't help (or rather can't be done), since control is not with the bash script: it waits for the command within the ssh session to complete, which of course never happens, as the remote program has to be killed before it will stop.
Here's the command line that you're running on the remote system:
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null
sshpid=!$
sleep 10
kill -9 $sshpid 2>/dev/null &
You should change it to this:
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null & <-- Ampersand goes here
sshpid=$!
sleep 10
kill -9 $sshpid 2>/dev/null
You want to start nodes-listener and then kill it after ten seconds. To do this, you need to start nodes-listener as a background process, so that the shell executing this command line can move on to the next command after starting it. The & in your command line is in the wrong place: it applies only to the kill command, but it needs to apply to the nodes-listener command.
I'll also note that your sshpid=!$ line is incorrect. You want sshpid=$!; $! is the process ID of the last command started in the background.
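A quick illustration of $! (the PID shown is just an example):
$ sleep 30 &          # start any command in the background
[1] 4711
$ echo $!             # $! expands to the PID of that background command
4711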
You need to place the ampersand after the first command, then put the remaining commands onto the next line:
ssh hostname -- '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sshpid=$!; sleep 10; kill $sshpid 2>/dev/null'
Btw, ssh returns after all commands have been executed, which means it will close the allocated pty as well. If there are still background jobs running in that shell session, they will be killed by SIGHUP. This means you can probably omit the explicit kill command (depending on whether nodes-listener handles SIGHUP and SIGTERM differently). With that, you could simplify the code to the following:
ssh hostname -- sh -c '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sleep 10'
I have resolved this by pushing the shell script to the remote machine and executing it there. It is admittedly less tidy and relies on space being available on the remote computer.
Since my remote machine is a small physical device, space usage matters (even for the tiny amount required in this case).
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sshpid=$!
sleep 20
sync
# killing nodes-listener process and giving control back to the base bash
killall -9 nodes-listener 2>/dev/null && echo "nodes-listener is killed"
I have written a shell script to execute a series of commands. One of the commands in the shell script is to launch an application. However, I do not know how to continue running the shell script after I have launched the application.
For example:
...
cp somedir/somefile .
./application
rm -rf somefile
Once I launch the application with ./application, the script stops and never reaches the rm -rf somefile command, but I really need to remove the file from the directory.
Does anyone have any ideas how to continue running the rm -rf command after launching the application?
Thanks
As pointed out by others, you can background the application (see the JOB CONTROL section of man bash, e.g.).
Also, you can use the wait builtin to explicitly await the background jobs later:
./application &
echo doing some more work
wait # wait for background jobs to complete
echo application has finished
You should really read the man pages and bash help for more details, as always:
http://unixhelp.ed.ac.uk/CGI/man-cgi?sh
http://www.gnu.org/s/bash/manual/bash.html#Job-Control-Builtins
Start the application in the background; this way the shell is not going to wait for it to terminate and will execute the subsequent commands right after starting the application:
./application &
In the meantime, you can check the background jobs by using the jobs command and wait on them via wait and their ID. For example:
$ sleep 100 &
[1] 2098
$ jobs
[1]+ Running sleep 100 &
$ wait %1
Put the started process in the background:
./application &
You need to start the command in the background using & and maybe even nohup, so it survives the terminal closing:
nohup ./application > log.out 2>&1 &