BASH: send SIGTSTP signal (Ctrl+Z) - Linux

I'm racing against the clock for a programming assignment in which I have to run a number of instances of the same program on the same machine at the same time.
Currently, I'm starting the instances one at a time, pressing Ctrl+Z to pause them, and then doing 'bg %#' to resume execution in the background.
This is extremely tedious and time-consuming to do every time I need to test a small change in my application, so I want to write a bash script that will start the multiple instances for me, but I don't know how to do the background switching in a script.
Can anybody please tell me how I can write a simple script that will start a long-running command, pause it, and resume it in the background?
Thanks

Do you want to just start it in the background? For example:
mycommand &
If you want finer-grained job control, you can emulate Ctrl+Z and bg. Ctrl+Z sends SIGTSTP ("tty stop") to the program, which suspends it:
kill -TSTP [processid]
And the bg command just sends it a SIGCONT:
kill -CONT [processid]
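For example, a minimal sketch of doing both from a script (long_running_command here is just a placeholder for whatever you need to run):
#!/bin/bash
long_running_command &    # start straight in the background; no Ctrl+Z needed
pid=$!                    # PID of the job we just started
kill -TSTP "$pid"         # suspend it (the same signal Ctrl+Z would send)
# ... do something else here ...
kill -CONT "$pid"         # resume it, still in the background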

You don't. You put an ampersand after the command.
command1 &
command2 &
command3 &
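Applied to the original problem, a rough sketch that starts several instances and waits for them all to finish (./myprogram is a placeholder for the assignment binary):
#!/bin/bash
for i in 1 2 3 4; do
    ./myprogram &         # each instance starts directly in the background
done
wait                      # block until every instance has exited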

Related

Linux shell scripting: how can I stop the first program when the second one has finished?

I have two programs in Linux (shell scripts, for example):
NeverEnding.sh
AllwaysEnds.sh
The first one never stops, so I want to run it in the background.
The second one stops with no problem.
I would like to make a Linux shell script that calls them both, but automatically stops (kills, for example) the first one when the second one has finished.
Specific command-line tools are allowed, if needed.
You can send the first one into the background with & and get its PID from $!. Then, after the second one finishes in the foreground, you can kill the first:
#!/bin/bash
NeverEnding.sh &
pid=$!
AllwaysEnds.sh
kill $pid
You don't actually need to save the PID in a variable, since $! is only updated when you start a background process; the variable just makes the script easier to read.
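For completeness, a sketch of the same script without the variable; this works only because no other background job is started in between (each & overwrites $!):
#!/bin/bash
NeverEnding.sh &
AllwaysEnds.sh            # runs in the foreground
kill $!                   # $! still holds the PID of NeverEnding.sh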

Manually triggering a sleeping bash script which is waiting for a signal

I have a bash script which sleeps indefinitely, waits for a signal handler to do some processing, and then goes back to sleep. My problem is that I want to trigger this script manually as well, asynchronously to whatever is happening in the whole sleep/signal-handle cycle.
I'm wondering how to do this. Would the right way be to manually signal the script from the command line? Can this be done while providing an argument? Or should I run another instance of the script? The problem with the second approach is that I fear synchronization issues when running two instances of the same script acting on the same data.
command line:
echo 'argument1' > tmp.tmp
kill -USR1 [pid of process]
script:
export arg=""
trap 'arg=$(cat tmp.tmp)' SIGUSR1   # single quotes, so the file is read when the signal arrives rather than when the trap is set
The above is a total shot in the dark; next time, please post code with your question. Thanks.
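Putting the pieces together, a rough sketch of what the sleeping script might look like (tmp.tmp is the file name from the snippet above; the echo stands in for the real processing):
#!/bin/bash
arg=""
handle_usr1() {
    arg=$(cat tmp.tmp)    # pick up whatever the sender wrote
    echo "received: $arg" # placeholder for the real processing
}
trap handle_usr1 SIGUSR1
while true; do
    sleep 60 &            # sleep in the background so that...
    wait $!               # ...wait returns as soon as the signal arrives, instead of after the full sleep
done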

How to run a background shell script which itself lauches two background processes?

To start with, I am a beginner at programming, so apologies for the lack of professionally accurate terminology in my question, but hopefully I will manage to get my points across!
Would you have any suggestions for how, in bash or tcsh, I can run a long background process which itself launches a few programs, runs three long processes in parallel on different cores, and waits for all three to complete before proceeding?
I have written a shell script (for bash) to apply an image filter to each frame of a short but heavy video clip (it's a scientific tomogram actually but this does not really matter). It is supposed to:
Create a file with a script to convert the whole file to a different format using an em2em piece of software.
Split the converted file into three equal parts and filter each set of frames in a separate process on separate cores on my Linux server (to speed things up) using a program called spider. First, three batch-mode files (filter_1/2/3.spi) with the required filtration parameters are created, and then three subprocesses are launched:
spider spi/spd #filter_1 & # The first process to be launched by the main script and run in the background on one core
spider spi/spd #filter_2 & # The second background process run on the next core
spider spi/spd #filter_3 # The third process to be run in parallel with the two above and be finished before proceeding further.
These filtered fragments are then put together at the end.
Because I wanted the three filtration steps to run simultaneously, I sent the first two to the background with a simple & and kept the last one in the foreground, so that the main script waits for all three to finish (which should happen at roughly the same time) before proceeding to reassemble the three chunks. This all works fine when I run my script in the foreground, but it throws a lot of output from the many subprocesses onto the terminal. I can reduce it with:
$ ./My_script 2>&1 > /dev/null
But each spider process still returns
*****Spider normal stop*****
to the terminal. When I try to send the main process to the background, it keeps stopping all the time.
Would you have any suggestions for how I can run the main script in the background and still get it to run the three spider subprocesses in parallel somehow?
Thanks!
You can launch each spider in the background, storing the process IDs, which you can later use in a wait command, such as:
spider spi/spd #filter_1 &
sp1=$!
spider spi/spd #filter_2 &
sp2=$!
spider spi/spd #filter_3 &
sp3=$!
wait $sp1 $sp2 $sp3
If you want to get rid of the output, apply redirections to each command.
Update: actually, you don't even need to store the PIDs; a wait without parameters will automatically wait for all spawned children.
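In other words, something along these lines would do (filter_part1/2/3 and reassemble_parts are placeholders for the actual spider invocations and the merge step):
#!/bin/bash
filter_part1 &
filter_part2 &
filter_part3 &
wait                      # returns once all three background children have exited
reassemble_parts          # merge the three filtered chunks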
First, if you are using bash, you can use wait to wait for each process to exit. In the following example, the messages are printed only after all the processes have finished:
sleep 10 &
P1=$!
sleep 5 &
P2=$!
sleep 6 &
P3=$!
wait $P1
echo "P1 finished"
wait $P2
echo "P2 finished"
wait $P3
echo "P3 finished"
You can use the same idea to wait for the spider processes to finish and only then merge the results.
Regarding the output, you can redirect each command to /dev/null instead of redirecting all the output of the script:
sleep 10 &> /dev/null &
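To address the last part of the question, the main script itself can then be sent to the background the same way; a sketch, assuming it is still called My_script:
nohup ./My_script < /dev/null > /dev/null 2>&1 &
Note the order of the redirections: > /dev/null 2>&1 discards both stdout and stderr, whereas 2>&1 > /dev/null (as written in the question) still lets stderr reach the terminal, which may be why the "Spider normal stop" lines kept appearing. The < /dev/null guards against the script being stopped if anything in it tries to read from the terminal, which is a likely cause of the "it keeps stopping" behaviour.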

How to resume stopped job on a remote machine given pid?

I have a process on a machine which I stopped (with a Ctrl-Z). After ssh'ing onto the machine, how do I resume the process?
You will need to find the PID and then issue kill -CONT <pid>.
You can find the PID by using ps with some options to produce extended output. Stopped jobs have a T in the STAT (or S) column.
If you succeed in continuing the process but it no longer has a controlling terminal (and it needs one) then it could possibly hang or go into a loop: just keep your eye on its CPU usage.
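For example, a sketch of the whole sequence (the PID shown is just a placeholder):
ps -eo pid,stat,cmd | awk '$2 ~ /^T/'   # list processes whose state starts with T (stopped)
kill -CONT 12345                        # resume the one you identified, using its PID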
You can type fg to resume the process. If you have multiple stopped jobs, you can type fg %jobname (e.g. fg %vim) or fg %job_id.
To find out the job id's, use the jobs command.
Relevant quote from wikipedia on what it does:
fg is a job control command in Unix and Unix-like operating systems that resumes execution of a suspended process by bringing it to the foreground and thus redirecting its standard input and output streams to the user's terminal.
To find out the job ID and PID, use jobs -l, like this:
$ jobs -l
[1]+ 3729 Stopped vim clustertst.cpp
The first column is the job ID, and the second is the PID.

Linux process in background - "Stopped" in jobs?

I'm currently running a process with the & sign.
$ example &
However (please note I'm a newbie to Linux), I realised that pretty much a second after running such a command, I get a note that my process received a stop signal. If I do
$ jobs
I get the list with my example process and a little note, "Stopped". Is it really stopped and not working at all in the background? How exactly does it work? I'm getting mixed information from the Internet.
In Linux and other Unix systems, a job that is running in the background but still has its stdin (or std::cin) associated with its controlling terminal (i.e. the window it was run in) will be sent a SIGTTIN signal when it tries to read from that terminal. By default this stops the program completely, pending the user bringing it to the foreground (fg %job or similar) so that input can actually be given to it. To avoid the program being paused in this way, you can either:
Make sure the program's stdin is no longer associated with the terminal, by redirecting it either to a file with appropriate contents for the program to read, or to /dev/null if it really doesn't need input - e.g. myprogram < /dev/null &.
Exit the terminal after starting the program, which will cause the association with the program's stdin to go away. But this will cause a SIGHUP to be delivered to the program (meaning the input/output channel experienced a "hangup"), which normally terminates it; that can be avoided by using nohup - e.g. nohup myprogram &.
If you are at all interested in capturing the output of the program, this is probably the best option, as it prevents both of the above signals (as well as a couple of others) and saves the output for you to look at, so you can determine whether there are any issues with the program's execution:
nohup myprogram < /dev/null > ${HOME}/myprogram.log 2>&1 &
Yes, it really is stopped and no longer working in the background. To bring it back to life, type fg job_number.
From what I can gather.
Background jobs are blocked from reading the user's terminal. When one tries to do so, it is suspended until the user brings it to the foreground and provides some input. "Reading from the user's terminal" can mean either directly trying to read from the terminal or changing terminal settings.
Normally that is what you want, but sometimes programs read from the terminal and/or change terminal settings not because they need user input to continue but because they want to check if the user is trying to provide input.
http://curiousthing.org/sigttin-sigttou-deep-dive-linux has the gory technical details.
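You can see the behaviour with nothing more than cat, which reads from stdin (job numbers and exact wording will vary):
$ cat &
$ jobs
[1]+  Stopped                 cat
$ cat < /dev/null &
The first cat is stopped as soon as it tries to read from the terminal; the second never touches the terminal and simply exits.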
Just enter fg, which brings the stopped job back to the foreground; that also resolves the warning about stopped jobs when you then try to exit.

Resources