I want to use a shell script to launch a Redis server and then monitor a log file:
#!/bin/bash
/path/to/redis/src/redis-server &
tail -f /path/to/log/logfile.log
If I run this script and press Ctrl+C in the terminal, tail -f terminates, which is what I want; however, Redis also detects the SIGINT and exits.
I tried to write the script like this:
#!/bin/bash
trap '' INT TSTP
~/redis/src/redis-server &
tail -f ./script1
This time things went even worse: tail -f refused to terminate, while Redis still detected the SIGINT and exited.
It seems there is some problem specific to Redis regarding ignoring signals.
My goal is to make tail -f respond to Ctrl+C while making Redis ignore the signal.
Can anyone tell me whether this can be achieved and, if so, give me some advice?
redis-server catches SIGINT (Ctrl+C), even if SIGINT was being ignored. This is an unusual choice; most software will check and won't catch SIGINT if it's already being ignored.
When it receives SIGINT, it saves the database and shuts down.
If you start it as a service, it won't be associated with any terminal at all, and won't see any Ctrl+C you type.
If you start it as a background job in an interactive shell:
$ /path/to/redis/src/redis-server &
your shell will put it into a process group that is different from the terminal's process group, and typing Ctrl+C won't affect it. (If you bring it to the foreground with fg, Ctrl+C will send SIGINT to the program).
But, when you run a script like this:
#!/bin/bash
/path/to/redis/src/redis-server &
tail -f /path/to/log/logfile.log
the shell that runs the script will be non-interactive, and any program that it starts in the background (with &) will be in the same process group as the shell. So if you run that shell script in the foreground, typing Ctrl+C will send SIGINT to the shell, to redis-server, and to tail.
To prevent Ctrl+C from sending SIGINT to redis-server in a case like this, you need to either put redis-server in its own process group or disassociate it from your terminal. You can do this with setsid, which does both:
#!/bin/bash
setsid /path/to/redis/src/redis-server &
tail -f /path/to/log/logfile.log
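If you want to confirm what setsid did, you can inspect the process group and session IDs (a quick check assuming a procps-style ps; the format specifiers are standard):
$ ps -o pid,pgid,sid,comm -C redis-server
redis-server should now show a PGID and SID different from your script's, so the SIGINT that the terminal delivers to the foreground process group on Ctrl+C never reaches it.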
I have a bash script:
node web/dist/web/src/app.js & node api/dist/api/src/app.js &
$SHELL
It successfully starts both of my node servers. However:
I do not receive any output (from console.log etc.) in my terminal window.
If I cancel with Ctrl+C, the processes do not exit, so I annoyingly have to do a manual taskkill /F /PID etc. afterwards.
Is there any way around this?
The reason you can't stop your background jobs with Ctrl+C is that signals (SIGINT in this case) are delivered only to the foreground process.
When your foreground process (the non-interactive main script) exits, its child processes become orphans, which are immediately adopted by the init process. To kill them, you need their PIDs. (When you run a background process in an interactive shell, it will receive SIGHUP, and probably exit, when the shell exits.)
The solution in your case is to make your script wait for its children, using the shell built-in wait command. wait will ensure your script receives the SIGINT, which you can then handle (with trap) and kill the background jobs (with kill 0):
#!/bin/bash
trap 'kill 0' EXIT   # on any exit, kill every process in this process group
node app1.js &
node app2.js &
wait                 # block until the background jobs exit (or a signal arrives)
By setting a trap on EXIT (a special pseudo-signal in bash), you ensure the background processes are terminated whenever your main script exits, whether normally or via a trappable signal such as SIGINT (Ctrl+C), SIGTERM, or SIGHUP. Note that SIGKILL cannot be trapped, so the EXIT trap will not run in that case. The kill 0 command kills all processes in the current process group.
Regarding the output: on Linux, background processes inherit standard output/error from the shell (if not redirected) and continue to write to your TTY/terminal. If that's not working on Windows, I'm not sure why not.
However, even if your background processes have somehow lost their way to your TTY, you can, as a workaround, append to a log file:
node app1.js >>/path/to/file.log 2>&1 &
node app2.js >>/path/to/file.log 2>&1 &
and then tail -f that log file, either in this or some other terminal:
tail -f /path/to/file.log
I have an embedded system on which I telnet in and then run an application in the background:
./app_name &
Now if I close my terminal, telnet in from another terminal, and check, I can see that this process is still running.
To check this, I wrote a small program:
#include <stdio.h>

int main(void)
{
    while (1)
        ;   /* busy-loop forever */
    return 0;
}
I ran this program on my local Linux PC in the background and closed the terminal.
When I checked for this process from another terminal, I found that it had been killed.
My questions are:
Why the different behavior for the same kind of process?
What does it depend on?
Does it depend on the version of Linux?
Who should kill jobs?
Normally, foreground and background jobs are killed by a SIGHUP sent by the kernel or the shell, under different circumstances.
When does kernel send SIGHUP?
The kernel sends SIGHUP to the controlling process:
for a real (hardware) terminal: when a disconnect is detected in the terminal driver, e.g. on hang-up of a modem line;
for a pseudoterminal (pty): when the last descriptor referencing the master side of the pty is closed, e.g. when you close the terminal window.
The kernel sends SIGHUP to other process groups:
to the foreground process group, when the controlling process terminates;
to an orphaned process group, when it becomes orphaned and has stopped members.
The controlling process is the session leader that established the connection to the controlling terminal.
Typically, the controlling process is your shell. So, to sum up:
the kernel sends SIGHUP to the shell when the real terminal or pseudoterminal is disconnected/closed;
the kernel sends SIGHUP to the foreground process group when the shell terminates;
the kernel sends SIGHUP to an orphaned process group if it contains stopped processes.
Note that the kernel does not send SIGHUP to a background process group if it contains no stopped processes.
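You can see that last point in action with a quick experiment (sleep stands in for any fully-running background job):
$ bash -c 'sleep 1000 &'                      # the non-interactive shell exits immediately
$ ps -o pid,ppid,pgid,stat,comm -C sleep      # sleep is still running, reparented to init
No SIGHUP was sent, because the background process group contained no stopped members.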
When does bash send SIGHUP?
Bash sends SIGHUP to all jobs (foreground and background):
when it receives SIGHUP and is an interactive shell (and job control support is enabled at compile time);
when it exits, is an interactive login shell, and the huponexit option is set (and job control support is enabled at compile time).
Notes:
bash does not send SIGHUP to jobs that have been removed from the job list with disown;
processes started with nohup ignore SIGHUP.
What about other shells?
Usually, shells propagate SIGHUP. Generating SIGHUP at normal exit is less common.
Telnet or SSH
Under telnet or SSH, the following should happen when the connection is closed (e.g. when you close the telnet window on your PC):
the client is killed;
the server detects that the client connection is closed;
the server closes the master side of the pty;
the kernel detects that the master pty is closed and sends SIGHUP to bash;
bash receives SIGHUP, sends SIGHUP to all its jobs, and terminates;
each job receives SIGHUP and terminates.
Problem
I can reproduce your issue using bash with telnetd from busybox or with the dropbear SSH server: sometimes a background job doesn't receive SIGHUP (and doesn't terminate) when the client connection is closed.
It seems that a race condition occurs when the server (telnetd or dropbear) closes the master side of the pty:
normally, bash receives SIGHUP, immediately kills its background jobs (as expected), and terminates;
but sometimes bash detects EOF on the slave side of the pty before handling the SIGHUP.
When bash detects EOF, by default it terminates immediately without sending SIGHUP, and the background job remains running!
Solution
It is possible to configure bash to send SIGHUP on normal exit (including EOF) too:
Ensure that bash is started as a login shell; the huponexit option works only for login shells, AFAIK.
A login shell is enabled by the -l option or a leading hyphen in argv[0]. You can configure telnetd to run /bin/bash -l, or better, /bin/login, which invokes /bin/sh in login-shell mode.
E.g.:
telnetd -l /bin/login
Enable the huponexit option.
E.g.:
shopt -s huponexit
Type this in each bash session, or add it to .bashrc or /etc/profile.
Why does the race occur?
bash unblocks signals only when it's safe to do so, and blocks them when some code section can't safely be interrupted by a signal handler.
Such critical sections invoke interruption points from time to time; if a signal is received while a critical section is executing, its handler is delayed until the next interruption point is reached or the critical section is exited.
You can start digging from quit.h in the source code.
Thus, it seems that in our case bash sometimes receives SIGHUP while it's in a critical section. Execution of the SIGHUP handler is delayed, and bash reads the EOF and terminates before exiting the critical section or reaching the next interruption point.
Reference
"Job Control" section in official Glibc manual.
Chapter 34 "Process Groups, Sessions, and Job Control" of "The Linux Programming Interface" book.
When you close the terminal, the shell sends SIGHUP to all background processes, and that kills them. This can be suppressed in several ways, most notably:
nohup
When you run a program with nohup, the program is started with SIGHUP ignored, and its output is redirected (to nohup.out by default).
$ nohup app &
disown
disown removes the job from the shell's job table, so the shell won't send it SIGHUP:
$ app &
$ disown
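If you want to keep the job in the job table (so jobs and fg still work) but still shield it from SIGHUP, bash's disown also accepts the -h flag:
$ app &
$ disown -h %1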
Is it dependent on the version of Linux?
It depends on your shell. The above applies at least to bash.
AFAIK, in both cases the process should be killed. To avoid this, you have to use nohup, like the following:
> nohup ./my_app &
This way your process will continue executing. The telnet part is probably due to a bug similar to this one:
https://bugzilla.redhat.com/show_bug.cgi?id=89653
To completely understand what's happening, you need to get into Unix internals a little bit.
When you are running a command like this
./app_name &
app_name is placed in a background process group.
When bash exits normally, it sends the SIGHUP hangup signal to all of its jobs.
To keep your app running when you exit bash, you need to make it immune to the hangup signal with the nohup utility.
nohup - run a command immune to hangups, with output to a non-tty
And finally, this is how you need to do it:
nohup app_name > /dev/null 2>&1 &
In modern Linux (that is, Linux with systemd) there is an additional reason this might happen which you should be aware of: "linger".
systemd can kill processes left over from a login session, even if they are properly daemonized and protected from SIGHUP. This is the default behavior in some modern systemd configurations (the KillUserProcesses setting in logind.conf).
If you run
loginctl enable-linger $USER
you can disable this behavior, allowing background processes to keep running. The mechanisms covered by the other answers still apply, however, and you should also protect your process against them.
enable-linger remains in effect until it is explicitly disabled again. You can check it with
ls /var/lib/systemd/linger
This directory may contain one file per username for users who have linger enabled. Any user listed there can leave background processes running at logout.
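You can also query a single user's linger status directly; this is a standard loginctl invocation (replace alice with the user in question):
$ loginctl show-user alice --property=Linger
Linger=yes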
I use this command in a Linux terminal to connect to a server and use it as a proxy:
ssh -N -D 7070 root@ip_address
It gets the password and connects, and everything is OK, but how can I put this process in the background?
I tried Ctrl+Z, but that only stops the process; it does not put it in the background...
Ctrl+Z is doing exactly what it should, which is stopping the process. If you then want to put it in the background, the shell command for doing that is bg:
$ ssh -N -D 7070 -l user 192.168.1.51
user@192.168.1.51's password:
^Z
[1]+ Stopped ssh -N -D 7070 -l user 192.168.1.51
$ bg
[1]+ ssh -N -D 7070 -l user 192.168.1.51 &
That way you can enter the password interactively, and only once that is complete, stop it and put it into the background.
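Note that a job backgrounded this way is still tied to your interactive shell, so closing the terminal may still SIGHUP it (see the discussion above). If you want the ssh tunnel to outlive the terminal, you can follow up with bash's disown builtin:
$ disown %1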
Try adding an ampersand to the end of your command:
ssh -N -D 7070 root@ip_address &
Explanation:
The trailing ampersand directs the shell to run the command in the background: it is forked and run in a separate subshell, as a job, asynchronously. The shell immediately returns a status of 0 (true) and continues as normal, either processing further commands in a script or returning the cursor focus to the user in a Linux terminal.
The shell prints the forked process's job number and process ID (PID), like so:
$ ./myscript.py &
[1] 1337
The stdout of the forked process will still be attached to the parent, so any output will still appear in your terminal.
After a process is forked using a single trailing ampersand &, its process ID (PID) is stored in a special variable $!. This can be used later to refer to the process:
$ echo $!
1337
Once a process is forked, it can be seen in the jobs list:
$ jobs
[1]+ Running ./myscript.py &
And it can be brought back to the command line before it finishes with the foreground command:
fg
The foreground command takes an optional argument of the job number, if you have forked multiple processes.
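For example, to bring the second job back to the foreground (job numbers come from the jobs listing):
fg %2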
A single ampersand & can also delimit a list of commands to be run asynchronously.
./script.py & ./script2.py & ./script3.py &
In this example, all 3 Python scripts run at the same time, in separate subshells. Their stdout is still attached to the parent shell, so if you run this from a Linux terminal, you will still see the output.
This can also be used as a quick hack to take advantage of multiple cores with shell scripts, but be warned, it is a hack!
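If you rely on this trick inside a script, you usually also want the script to block until all three have finished; the shell's built-in wait does exactly that:
./script.py & ./script2.py & ./script3.py &
wait   # returns once all background jobs have exited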
To detach a process completely from the shell, you may want to redirect its stdout and stderr to a file or to /dev/null. A nice way of doing this is with the nohup command.
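For instance, a minimal fully-detached launch might look like this (reusing the illustrative myscript.py from above):
nohup ./myscript.py >/dev/null 2>&1 &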
source for above explanation: http://bashitout.com/2013/05/18/Ampersands-on-the-command-line.html
You can add the -f option to make the ssh command go to the background (after authentication).
So the answer is: ssh -f -D port username@hostname -N.
I am trying to run the following command as part of a bash script which is supposed to open an ssh channel, run the program on the remote machine, save the output to a file for 10 seconds, kill the process that was writing to the file, and then give control back to the bash script.
#!/bin/bash
ssh hostname '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null; sshpid=!$; sleep 10; kill -9 $sshpid 2>/dev/null &'
Unfortunately, what it seems to be doing is starting the program nodes-listener remotely, but it never gets any further and never gives control back to the bash script. So the only way to stop the execution is Ctrl+C.
Killing ssh doesn't help (or rather can't be done), since control is not with the bash script: it waits for the command within the ssh session to complete, which of course never happens, as the remote program has to be killed to stop.
Here's the command line that you're running on the remote system:
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null
sshpid=!$
sleep 10
kill -9 $sshpid 2>/dev/null &
You should change it to this:
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null & <-- Ampersand goes here
sshpid=$!
sleep 10
kill -9 $sshpid 2>/dev/null
You want to start nodes-listener and then kill it after ten seconds. To do this, you need to start nodes-listener as a background process, so that the shell executing this command line can move on to the next command after starting nodes-listener. The & in your command line is in the wrong place and would apply only to the kill command. You need to apply it to the nodes-listener command.
I'll also note that your sshpid=!$ line was incorrect. You want sshpid=$!. $! is the process ID of the last command started in the background.
You need to place the ampersand after the first command, then put the remaining commands onto the next line:
ssh hostname -- '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sshpid=$!; sleep 10; kill $sshpid 2>/dev/null'
By the way, ssh returns after all commands have been executed, and that closes the allocated pty as well. If there are still background jobs running in that shell session, they will be killed by SIGHUP. This means you can probably omit the explicit kill command (depending on whether nodes-listener handles SIGHUP and SIGTERM differently). Given that, you could simplify the code to the following:
ssh hostname -- sh -c '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sleep 10'
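If the remote system has the GNU coreutils timeout utility, a sketch under that assumption achieves the same ten-second cap even more compactly:
ssh hostname -- 'timeout 10 /root/bin/nodes-listener > /tmp/nodesListener.out </dev/null'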
I have resolved this by pushing the shell script to the remote machine and executing it there. It is actually less tidy and relies on space being available on the remote computer.
Since my remote machine is a small physical device, space usage matters (even for the tiny amount required in this case).
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sshpid=$!
sleep 20
sync
# kill the nodes-listener process and give control back to the calling bash
killall -9 nodes-listener 2>/dev/null && echo "nodes-listener is killed"
I would like to spawn a process suspended, possibly in the context of another user (e.g. via sudo -u ...), set up some iptables rules for the spawned process, resume the process, and remove the iptables rules when the process exits.
Is there any standard means (bash, coreutils, etc.) that allows me to achieve this? In particular, how can I spawn a process in a suspended state and get its PID?
Write a wrapper script start-stopped.sh like this:
#!/bin/sh
kill -STOP $$ # suspend myself
# ... until I receive SIGCONT
exec "$@" # exec the argument list ("$@", not $#, which is the argument count)
And then call it like:
sudo -u $SOME_USER start-stopped.sh mycommand & # start mycommand in stopped state
MYCOMMAND_PID=$!
setup_iptables $MYCOMMAND_PID # use its PID to setup iptables
sudo -u $SOME_USER kill -CONT $MYCOMMAND_PID # make mycommand continue
wait $MYCOMMAND_PID # wait for its termination
MYCOMMAND_EXIT_STATUS=$?
teardown_iptables # remove iptables rules
report $MYCOMMAND_EXIT_STATUS # report errors, if necessary
All this is overkill, however. You don't need to spawn your process in a suspended state to get the job done. Just make a wrapper script setup_iptables_and_start:
#!/bin/sh
setup_iptables $$ # use my own PID to setup iptables
exec sudo -u $SOME_USER "$@" # exec'ed command will have the same PID
And then call it like
setup_iptables_and_start mycommand || report errors
teardown_iptables
You can write a C wrapper for your program that does something like this:
Fork and print the child PID.
In the child, wait for the user to press Enter. This keeps the child sleeping, and you can add the rules using its PID.
Once the rules are added, the user presses Enter, and the child runs your original program, either via exec or system.
Will this work?
Edit:
Actually, you can do the above procedure with a shell script. Try the following bash script:
#!/bin/bash
echo "Pid is $$"
echo -n "Press Enter.."
read
exec "$@"
You can run this as /bin/bash ./run.sh <your command>
One way to do it is to enlist gdb to pause the program at the start of its main function (using the command "break main"). This will guarantee that the process is suspended fast enough (although some initialisation routines can run before main, they probably won't do anything relevant). However, for this you will need debugging information for the program you want to start suspended.
I suggest you try this manually first, see how it works, and then work out how to script what you've done.
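A rough sketch of the manual session (myprogram is a placeholder, and pgrep is just one way to find the PID):
$ gdb -ex 'break main' -ex run ./myprogram
# gdb stops at main; from another terminal, find the PID with pgrep myprogram,
# set up your iptables rules, then resume the program in gdb:
(gdb) continue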
Alternatively, it may be possible to constrain the process (if indeed that is what you're trying to do!) without using iptables, using SELinux or a ptrace-based tool like sydbox instead.
I suppose you could write a small utility yourself that forks, where the child of the fork suspends itself just before doing an exec. Otherwise, consider using an LD_PRELOAD library to do your 'custom' business.
If you care about making this secure, you should probably look at bigger guns (chroot, perhaps paravirtualization, User-Mode Linux, etc.).
Last tip: if you don't mind doing some more coding, the ptrace interface should allow you to do what you describe (it is what debuggers are implemented with).
You probably need the PID of a program you're starting before that program actually starts running. You could do it like this (a minimal sketch follows the list):
Start a plain script.
Force the script to wait. You can probably use suspend, which is a bash builtin, but in the worst case you can make the script stop itself with a signal.
Use the PID of the bash process in every way you want.
Restart the stopped bash process (SIGCONT) and do an exec (another builtin) starting your real process (it will inherit the PID).
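A minimal sketch of that idea (wrapper.sh and the caller steps are illustrative, not a tested recipe):
#!/bin/bash
# wrapper.sh: stop myself so the caller can grab my PID and set things up
kill -STOP $$   # or the suspend builtin, where job control allows it
exec "$@"       # on SIGCONT, replace this shell with the real program, keeping the PID
The caller side would then look something like:
./wrapper.sh mycommand &
pid=$!            # the PID that mycommand will inherit
# ...set things up using $pid...
kill -CONT $pid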