Disabling Hanging Script - Linux

When launching a bash script in Linux, the script runs successfully, yet the terminal hangs. I must always press CTRL+C to end the program. I am able to type in the terminal and press Enter, but the script does not respond.
I cannot change the script files, but can I launch the script in a way that disables waiting for the user? Any troubleshooting tips to disable this behaviour?

You can execute the script with & at the end; this gives control back to the shell by running the script as a background process.
./script.sh &
If you want to stop the script, you need to get its process ID and then kill it. To get the process ID, either run ps aux | grep script (where script is your script's name), or run echo $! right after launching the script. Once you have the process ID, you can kill the process with kill 1234, where 1234 is the process ID.
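Put together, a minimal sketch of that workflow (script.sh stands in for whatever script you are launching):
./script.sh &             # run the script as a background process
pid=$!                    # remember its process ID right away
# ... the shell is usable again; later, if the script is still hanging around:
kill "$pid" 2>/dev/null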
If the execution time of the script can be estimated, you can kill it automatically after a certain amount of time:
bash -c '(sleep 5m; kill $$ 2> /dev/null) & exec script' &
In this command, sleep 5m sets the time after which the process will be killed, and script is the name of your script (or command).
For example, if the script's execution time is 30 seconds on average, you can set the timeout to a minute or two to give it some extra time in case the execution is slower than usual. Note that this command doesn't guarantee that the script finishes its execution, so use it with care.
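As a concrete (hypothetical) instance of the same pattern, assuming the script usually finishes in about 30 seconds and is called ./script.sh:
bash -c '(sleep 2m; kill $$ 2>/dev/null) & exec ./script.sh' &
# sleep 2m starts a watchdog subshell; exec then replaces the wrapper shell with
# ./script.sh, so $$ (the wrapper's PID) is exactly the PID the watchdog kills
# if the script overruns.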

Related

How do I stop a script running in the background in Linux?

Let's say I have a silly script:
#!/bin/bash
while true; do
    touch ~/test_file
    sleep 3
done
And I start the script into the background and leave the terminal:
chmod u+x silly_script.sh
./silly_script.sh &
exit
Is there a way for me to identify and stop that script now? The way I see it, every command is started in its own process, and I might be able to catch and kill one command like the 'sleep 3' but not the execution of the entire script; am I mistaken? I expected a process to appear with the script's name, but it does not. If I start the script with 'source silly_script.sh' I can't find a process by the name of 'source'. Do I need to identify the instance of bash that is executing the script? How would I do that?
EDIT: There have been a few creative solutions, but so far they require the PID of the script execution to be stored right away, or the bash session not to be left with ^D or exit. I understand that this way of running scripts should maybe be avoided, but I find it hard to believe that any low-privilege user could, even by accident, start an annoying script in the background, for instance one that fills the drive with garbage files or repeatedly starts new instances of some software, and that even the admin has no other option than to restart the server, because a simple script can hide its identifier without even trying.
With the help of the fine people here I was able to derive the answer I needed:
It is true that the script runs every command in its own process, so for instance killing the sleep 3 command won't do anything to the script being run. But through a command like the sleep 3 you can find the bash instance running the script, by looking for the parent process:
So after doing the above, you can run ps axf to show all processes in a tree form. You will then find this section:
18660 ? S 0:00 /bin/bash
18696 ? S 0:00 \_ sleep 3
Now you have found the bash instance that is running the script and can stop it: kill 18660
(Of course your PID will be different from mine)
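A sketch of the same idea without reading the tree by eye; it assumes only one sleep 3 is currently running on the machine:
child=$(pgrep -f 'sleep 3' | head -n 1)   # PID of the sleep started by the script
parent=$(ps -o ppid= -p "$child")         # its PPID is the bash instance running the script
kill $parent                              # unquoted so the leading spaces from ps are dropped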
The jobs command will show you all running background jobs.
You can kill background jobs by id using kill, e.g.:
$ sleep 9999 &
[1] 58730
$ jobs
[1]+ Running sleep 9999 &
$ kill %1
[1]+ Terminated sleep 9999
$ jobs
$
58730 is the PID of the backgrounded task, and 1 is its job ID. In this case, kill 58730 and kill %1 would have the same effect.
See the JOB CONTROL section of man bash for more info.
When you exit, the backgrounded job will get a kill signal and die (assuming that's how it handles the signal - in your simple example it is), unless you disown it first.
That kill will propagate to the sleep process, which may well ignore it and continue sleeping. If this is the case you'll still see it in ps -e output, but with a parent PID of 1, indicating its original parent no longer exists.
You can use ps -o ppid= <pid> to find the parent of a process, or pstree -ap to visualise the job hierarchy and find the parent visually.

Difference between Ctrl-Z and "&" in Linux

My understanding is that when you run any command (say sleep 10) within a given shell (say bash), what happens under the hood is that a fork system call is made, and sleep 10 now runs as a child process whose parent is the bash shell from which I executed the sleep.
Now, if I want to send sleep to the background, I would either do sleep 10 & or run sleep 10 and press Ctrl+Z so the process is sent to the background. pstree shows that with either of these options, sleep remains a child process of the bash shell.
Now my question is, when doing this through SSH, I noted the following:
If I run sleep 999 & and then sleep 888 followed by Ctrl+Z, and then close the SSH session, only sleep 999 & survives.
Why is this? I was actually expecting one of these:
both processes get terminated because the parent process is gone
both processes get associated with init as the parent process.
or run sleep 10 and press ctrl+z so the process is sent to the background
No, not really. Didn't you see that big message that says "[1]+ Stopped sleep 10"? Ctrl+Z stops the process and returns you to the shell. You can now type fg to continue the process in the foreground, or type bg to continue it in the background. Research "bash job control" and see the bash manual's Job Control Basics.
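For illustration, a typical interactive session looks roughly like this (job numbers are just an example):
$ sleep 10
^Z
[1]+  Stopped                 sleep 10
$ bg                 # resume the stopped job in the background
[1]+ sleep 10 &
$ jobs
[1]+  Running                 sleep 10 &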
Why is this?
Stopped processes are first continued with SIGCONT, and then SIGTERM is sent to them so they can terminate.
The Bash manual is available online: https://www.gnu.org/savannah-checkouts/gnu/bash/manual/bash.html

Kill ssh and/or remote process from bash script

I am trying to run the following command as part of a bash script which is supposed to open an ssh channel, run a program on the remote machine, save its output to a file for 10 seconds, kill the process that was writing to the file, and then give control back to the bash script.
#!/bin/bash
ssh hostname '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null; sshpid=!$; sleep 10; kill -9 $sshpid 2>/dev/null &'
Unfortunately, what it seems to be doing is starting the program nodes-listener remotely, but it never gets any further and it doesn't give control back to the bash script. So the only way to stop the execution is to press Ctrl+C.
Killing ssh doesn't help (or rather can't be done), since control is not with the bash script: it waits for the command within the ssh session to complete, which of course never happens, as the program has to be killed to stop.
Here's the command line that you're running on the remote system:
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null
sshpid=!$
sleep 10
kill -9 $sshpid 2>/dev/null &
You should change it to this:
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null & <-- Ampersand goes here
sshpid=$!
sleep 10
kill -9 $sshpid 2>/dev/null
You want to start nodes-listener and then kill it after ten seconds. To do this, you need to start nodes-listener as a background process, so that the shell executing this command line can move on to the next command after starting nodes-listener. The & in your command line is in the wrong place and would apply only to the kill command. You need to apply it to the nodes-listener command.
I'll also note that your sshpid=!$ line was incorrect. You want sshpid=$!. $! is the process ID of the last command started in the background.
You need to place the ampersand after the first command, then put the remaining commands onto the next line:
ssh hostname -- '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sshpid=$!; sleep 10; kill $sshpid 2>/dev/null'
Btw, ssh returns after all commands have been executed. This means it will close the allocated pty as well. If there are still background jobs running in that shell session, they will be killed by SIGHUP. This means you can probably omit the explicit kill command (it depends on whether nodes-listener handles SIGHUP and SIGTERM differently). With this in mind, you could simplify the code to the following:
ssh hostname -- sh -c '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sleep 10'
I have resolved this by pushing the shell script to the remote machine and executing it there. It is actually less tidy and relies on space being available on the remote computer.
Since my remote machine is a small physical device, the issue of the space usage is important (even for the tiny amount of space required in this case).
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sshpid=$!
sleep 20
sync
# killing nodes-listener process and giving control back to the base bash
killall -9 nodes-listener 2>/dev/null && echo "nodes-listener is killed"
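For completeness, a rough sketch of the "push and run" approach; the file name and host here are made up for illustration:
scp listener-capture.sh root@remotehost:/tmp/listener-capture.sh
ssh root@remotehost 'chmod +x /tmp/listener-capture.sh && /tmp/listener-capture.sh'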

How can I launch a new process that is NOT a child of the original process?

(OS X 10.7) An application we use lets us assign scripts to be called when certain activities occur within the application. I have assigned a bash script and it's being called; the problem is that I need to execute a few commands, wait 30 seconds, and then execute some more commands. If I have my bash script do a "sleep 30", the entire application freezes for those 30 seconds while waiting for my script to finish.
I tried putting the 30-second wait (and the second set of commands) into a separate script and calling "./secondScript &", but the application still sits there for 30 seconds doing nothing. I assume the application is waiting for the script and all child processes to terminate.
I've tried these variations for calling the second script from within the main script, they all have the same problem:
nohup ./secondScript &
( ( ./secondScript & ) & )
( ./secondScript & )
nohup script -q /dev/null secondScript &
I do not have the ability to change the application and tell it to launch my script and not wait for it to complete.
How can I launch a process (I would prefer the process to be in a scripting language) such that the new process is not a child of the current process?
Thanks,
Chris
p.s. I tried the "disown" command and it didn't help either. My main script looks like this:
[initial commands]
echo Launching second script
./secondScript &
echo Looking for jobs
jobs
echo Sleeping for 1 second
sleep 1
echo Calling disown
disown
echo Looking again for jobs
jobs
echo Main script complete
and what I get for output is this:
Launching second script
Looking for jobs
[1]+ Running ./secondScript &
Sleeping for 1 second
Calling disown
Looking again for jobs
Main script complete
and at this point the calling application sits there for 45 seconds, waiting for secondScript to finish.
p.p.s. If, at the top of the main script, I execute "ps", the only thing it returns is the process ID of the interactive bash session I have open in a separate terminal window.
The value of $SHELL is /bin/bash
If I execute "ps -p $$" it correctly tells me
PID TTY TIME CMD
26884 ?? 0:00.00 mainScript
If I execute "lsof -p $$" it gives me all kinds of results (I didn't paste all the columns here assuming they aren't relevant):
FD TYPE NAME
cwd DIR /private/tmp/blahblahblah
txt REG /bin/bash
txt REG /usr/lib/dyld
txt REG /private/var/db/dyld/dyld_shared_cache_x86_64
0 PIPE
1 PIPE -> 0xffff8041ea2d10
2 PIPE -> 0xffff8017d21cb
3r DIR /private/tmp/blahblah
4r REG /Volumes/DATA/blahblah
255r REG /Volumes/DATA/blahblah
The typical way of doing this in Unix is to double fork. In bash, you can do this with
( sleep 30 & )
(..) creates a child process, and & creates a grandchild process. When the child process dies, the grandchild process is inherited by init.
If this doesn't work, then your application is not waiting for child processes.
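To convince yourself the reparenting happened, a quick check (assuming no other sleep processes are running on the box):
( sleep 30 & )
sleep 1                                        # give the intermediate subshell time to exit
ps -o pid,ppid,comm -p "$(pgrep -n -x sleep)"  # PPID should now be 1 (init/launchd)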
Other things it may be waiting for include the session and open lock files:
To create a new session, Linux has the setsid command. On OS X, you might be able to do it through script, which incidentally also creates a new session:
# Linux:
setsid sleep 30
# OS X:
nohup script -q -c 'sleep 30' /dev/null &
To find a list of inherited file descriptors, you can use lsof -p yourpid, which will output something like:
sleep 22479 user 0u CHR 136,32 0t0 35 /dev/pts/32
sleep 22479 user 1u CHR 136,32 0t0 35 /dev/pts/32
sleep 22479 user 2u CHR 136,32 0t0 35 /dev/pts/32
sleep 22479 user 5w REG 252,0 0 1048806 /tmp/lockfile
In this case, in addition to the standard FDs 0, 1 and 2, you also have fd 5 open on a lock file that the parent may be waiting for.
To close fd 5, you can use exec 5>&-. If you think the lock file might be stdin/stdout/stderr themselves, you can use nohup to redirect them to something else.
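A sketch of how that might look in the wrapper script; fd 5 is just the number taken from the lsof example above, and secondScript is the script from the question:
exec 5>&-                               # close the inherited lock-file descriptor
nohup ./secondScript >/dev/null 2>&1 &  # background it with the std streams redirected away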
Another way is to abandon the child:
#!/bin/bash
yourprocess &
disown
As far as I understand, the application replaces the normal bash shell, because it is still waiting for a process to finish even though init should have taken care of this child process.
It could be that the "application" intercepts the orphan handling which is normally done by init.
In that case, only a parallel process with some IPC can offer a solution (see my other answer)
I think it depends on how your parent process tries to detect whether your child process has finished.
In my case (my parent process was GNU make), I succeeded by closing stdout and stderr (slightly based on the answer by that other guy) like this:
sleep 30 >&- 2>&- &
You might also close stdin
sleep 30 <&- >&- 2>&- &
or additionally disown your child process (not for Mac)
sleep 30 <&- >&- 2>&- & disown
Currently tested only in bash on Kubuntu 14.04 and Mac OS X.
If all else fails:
Create a named pipe
start the "slow" script independent from the "application", make sure executes it's task in an endless loop, starting with reading from the pipe. It will become read-blocked when it tries to read..
from the application, start your other script. When it needs to invoke the "slow" script, just write some data to the pipe. The slow script will start independently so your script won't wait for the "slow" script to finish.
So, to answer the question:
bash - how can I launch a new process that is NOT a child of the original process?
Simple: don't launch it yourself, but let an independent entity launch it, either during boot (like init) or on the fly with the at or batch commands.
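A minimal sketch of that pipe-based setup; all names here (the pipe path, slow-worker.sh) are made up for illustration:
#!/bin/bash
# slow-worker.sh - started independently of the application (e.g. at boot)
pipe=/tmp/slow-worker.pipe
[ -p "$pipe" ] || mkfifo "$pipe"
while true; do
    read -r _ < "$pipe"   # blocks here until the trigger below writes to the pipe
    sleep 30              # the slow part, followed by the second set of commands
done
And from the script that the application calls, just write to the pipe and move on:
echo go > /tmp/slow-worker.pipe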
Here I have a shell
└─bash(13882)
Where I start a process like this:
$ (urxvt -e ssh somehost&)
I get a process tree (this output snipped from pstree -p):
├─urxvt(14181)───ssh(14182)
where the process is parented beneath pid 1 (systemd in my case).
However, had I instead done this (note where the & is):
$ (urxvt -e ssh somehost)&
then the process would be a child of the shell:
└─bash(13882)───urxvt(14181)───ssh(14182)
In both cases the shell prompt is immediately returned and I can exit without terminating the process tree that I started above.
For the latter case the process tree is reparented beneath pid 1 when the shell exits, so it ends up the same as the first example.
├─urxvt(14181)───ssh(14182)
Either way, the result is a process tree that outlives the shell. The only difference is the initial parenting of that process tree.
For reference, you can also use
nohup urxvt -e ssh somehost &
urxvt -e ssh somehost & disown $!
Both give the same process tree as the second example above.
└─bash(13882)───urxvt(14181)───ssh(14182)
When the shell is terminated the process tree is, like before, reparented to pid 1.
nohup additionally redirects the process's standard output to a file nohup.out, so if that is a useful trait it may be a more useful choice. Otherwise, with the first form above, you immediately have a completely detached process tree.

Get the process ID in a shell script when a process is launched in the foreground

In a shell program I want to launch a program, get its PID, and save it in a temp file. But here I will launch the program in the foreground and will not exit the shell while the process is running.
ex:
#!/bin/bash
myprogram &
echo "$!" > /tmp/pid
And this works fine; I am able to get the PID of the launched process. But if I launch the program in the foreground, I want to know how to get the PID.
ex:
#!/bin/bash
myprogram # here, somehow, I want to know the PID before going to the next line
As I commented above, since your command is still running in the foreground, you cannot enter a new command in the same shell and go to the next line.
However, while this command is running, if you want to get the process ID of this program from a different shell tab/window, use pgrep like this:
pgrep -f "myprogram"
17113 # this # will be different for you :P
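If you want it saved to the temp file like in the first example (assuming myprogram matches exactly one running process):
pgrep -f "myprogram" > /tmp/pid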
EDIT: Based on your comment: "or is it possible to launch the program in the background, get the process ID, and then have the script wait until that process exits?"
Yes, that can be done using the wait command as follows:
myprogram &
mypid=$!
# do some other stuff and then
wait $mypid
You can't do this since your shell script isn't running -- the command you just launched in the foreground is.
