In Bash, killing a backgrounded function does not kill its internal process. Why do they have different PIDs? - linux

For convenience, I put my server command into a function, but when I background the function, the PID I get is not my server's PID.
myserver(){
# May contain complicated parameters
sleep 10
}
myserver > my.log &
pid=$!
ps aux|grep sleep
echo "Found PID " $pid is different from ps
So kill $pid will not kill the real server process (here, sleep). What should I do?
UPDATE
sleep 10 &
pid=$!
ps aux|grep sleep
echo Found PID $pid is same
UPDATE
In this case
myserver(){
# May contain complicated parameters
sleep 10
}
myserver > my.log &
kill $!
This will kill the sleep process. However, my actual server is java -jar, and when I run kill $!, the java process does not get killed.

To kill via the kill command, you should provide the PID and not the job ID.
Check this post about JID and PID
Update on Comment:
Are you sure you are providing it right?
In my system:
$ sleep 20 &
[2] 10080
$ kill -9 $!
[2]- Killed sleep 20
$
Follow up
OK, now I get it. Sorry, I misinterpreted your question. What you describe is the expected behavior:
$! Expands to the decimal process ID of the most recent background command (see Lists) executed from the current shell. (For example, background commands executed from subshells do not affect the value of "$!" in the current shell environment.) For a pipeline, the process ID is that of the last command in the pipeline.
So in that case maybe try this proposed solution
Update on Question:
OK, in the case of a java process I would try a regexp:
pkill -f 'java.*<your process name or some -classpath jar or something unique to the process you want to kill>'
In fact, any string or classpath jar that came along with this command and results in a match would do the job.
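An alternative sketch (an addition, not part of the answer above): since the backgrounded function runs in a subshell whose PID is $!, you can also target its children directly with pkill's -P option (match by parent PID):
myserver > my.log &
pid=$!
pkill -TERM -P "$pid"    # kill the subshell's children (e.g. the java or sleep process)
kill "$pid" 2>/dev/null  # then the subshell itself, if it is still running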

Related

Windows 10, kill child process, can't get correct PID

I use Win10, if it matters, in Git Bash.
I have an SH script:
start node ./e2e-tests/apimocker_runner.js & pid1=$!
start npm run protractor -- --binaryPath=./build/appInstaller/win-unpacked/ & pid2=$!
sleep 10
kill $pid1
kill $pid2
When I try to kill the processes by PID, I get an error:
bash: kill: (6616) - No such process
As far as I know, $! holds the last child process PID, but this doesn't work.
Because when I type in bash:
start bash & pid=$!
echo $pid
$pid === 1000 (for example)
but if I type
echo $$
in the bash created by the previous command, I get another PID
$$ ==== 1200 (for example)
Also, I found that if I type start bash, this creates a non-child process, but I want to create a child process and wait for it with wait $pid.
How can I do this?
A variable set in one bash instance is not visible in another bash instance unless the other instance is its child process (and even then only if the parent exported the variable to expose it to children).
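A minimal sketch of that rule (an illustration, not from the original answer):
pid=1234                          # set in the parent shell; invisible to children by default
bash -c 'echo "${pid:-unset}"'    # prints "unset"
export pid                        # now children inherit it
bash -c 'echo "$pid"'             # prints 1234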

suspend a shell command without pid

I need something like $command & stop. This should execute a command and suspend it. The application later resumes the command to get complete results.
I understand that a job can be suspended by sending a stop signal to the corresponding PID.
$kill -SIGSTOP 12753
When we execute a command, we rarely know its PID; an extra command is needed to look up the PID and act on it. I want to avoid that extra command and the time it takes.
Basically, the application measures network performance: it triggers all the commands and puts them in a halted state; the halted commands are resumed according to the kind of traffic needed.
The process ID of the most recently started background command is available in the shell parameter $!:
$ command & kill -SIGSTOP $!
(Check the documentation for your shell's implementation of kill for the correct format.)
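A brief follow-up sketch (an addition to this answer), resuming the stopped job later with SIGCONT, here with sleep standing in for the real command:
sleep 60 & kill -SIGSTOP $!    # start the command and suspend it immediately
# ... later, when the measurement traffic is ready ...
kill -SIGCONT $!               # resume the same background job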
Try killall with the --signal option; killall lets you target the process by name.
linux:~ # killall
Usage: killall [OPTION]... [--] NAME...
       killall -l, --list
       killall -V, --version
  -e,--exact          require exact match for very long names
  -I,--ignore-case    case insensitive process name match
  -g,--process-group  kill process group instead of process
  -i,--interactive    ask for confirmation before killing
  -l,--list           list all known signal names
  -q,--quiet          don't print complaints
  -r,--regexp         interpret NAME as an extended regular expression
  -s,--signal SIGNAL  send this signal instead of SIGTERM
  -u,--user USER      kill only process(es) running as USER
  -v,--verbose        report if the signal was successfully sent
  -V,--version        display version information
  -w,--wait           wait for processes to die
Verified by starting md5sum in a shell session:
linux$ md5sum
and in another session, ran:
killall -s SIGSTOP md5sum
yielding the following in the md5sum session:
[1]+ Stopped md5sum
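To resume the stopped md5sum afterwards (a small addition, same killall syntax):
killall -s SIGCONT md5sum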
Kindly confirm: do you want to halt your command or run it in the background (append '&' to your command)?
If your application is expected to resume the halted command later, then why don't you start the command (to be halted) from that application itself?
This helps:
sleep 5 & kill -SIGSTOP $!
In the above, sleep (a demo command) is executed for 5 seconds in the background.
Then kill stops it, using the PID obtained from $!.
Demo and kludge using timeout (for some reason timeout interprets a '0s' duration as "run forever"), to stop yes before it outputs anything:
# run 'yes' command, let it print 5 numbered lines, but stop it immediately
timeout -s SIGSTOP .000000001s yes | head -n 5 | cat -n
Output (to STDERR):
[1]+ Stopped timeout -s SIGSTOP .000000001s yes | head -n 5 | cat -n
Now restart it:
fg > /dev/null
Output:
1 y
2 y
3 y
4 y
5 y
Technique for users stuck with v8.12 or earlier coreutils (pre-2011), where timeout lacks sub-second intervals. It requires waiting a second.
Wrap the command string in a shell invocation, preceded by a 1s wait -- so timeout waits 1 second, and simultaneously, so does the command string. Total wait time 1 second:
timeout -s SIGSTOP 1s sh -c "sleep 1s; yes | head -n 5 | cat -n"
Output is the same as before, fg is the same too.
Finesse: if waiting even 1 second before sleeping is too much, it can be run in the background like so:
timeout -s SIGSTOP 1s sh -c "sleep 1s; yes | head -n 5 | cat -n" &
Output (process number will vary):
[1] 14601
Then after a second, the output will be the same as the previous two timeout examples.
Assuming you are using the same command, find the command name in the ps output. Launch the command in one terminal, then open a new terminal and run:
ps -ely
after retrieving the command name:
command & kill -SIGSTOP $(pidof command_name)
pidof needs the exact command name to be able to find the pid.
then to resume it:
kill -SIGCONT $(pidof command_name)
If the command name is not constant but there is a pattern, you can create a script like this (you can call it pof.sh):
ps -ely | grep $1 | tr -s ' ' | cut -d" " -f3
command & kill -SIGSTOP $(bash pof.sh pattern)
One drawback of this script is that if many lines match the pattern, it will return all of their PIDs. If this is a problem, you can put the output in an array and go on from there, as in the sketch below.
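A sketch of the array approach (assuming the helper above is saved as pof.sh):
mapfile -t pids < <(bash pof.sh pattern)    # collect every matching PID into an array
echo "matched ${#pids[@]} process(es)"
kill -SIGSTOP "${pids[0]}"                  # stop just the first match, or use "${pids[@]}" for all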

Don't show the output of kill command in a Linux bash script [duplicate]

How can you suppress the Terminated message that comes up after you kill a
process in a bash script?
I tried set +bm, but that doesn't work.
I know another solution involves calling exec 2> /dev/null, but is that
reliable? How do I reset it back so that I can continue to see stderr?
In order to silence the message, you must be redirecting stderr at the time the message is generated. Because the kill command sends a signal and doesn't wait for the target process to respond, redirecting stderr of the kill command does you no good. The bash builtin wait was made specifically for this purpose.
Here is a very simple example that kills the most recent background command. (Learn more about $! here.)
kill $!
wait $! 2>/dev/null
Because both kill and wait accept multiple pids, you can also do batch kills. Here is an example that kills all background processes (of the current process/script of course).
kill $(jobs -rp)
wait $(jobs -rp) 2>/dev/null
I was led here from bash: silently kill background function process.
The short answer is that you can't. Bash always prints the status of foreground jobs. The monitoring flag only applies for background jobs, and only for interactive shells, not scripts.
see notify_of_job_status() in jobs.c.
As you say, you can redirect standard error to /dev/null, but then you miss any other error messages. You can make it temporary by doing the redirection in a subshell which runs the script. This leaves the original environment alone.
(script 2> /dev/null)
which will lose all error messages, but just from that script, not from anything else run in that shell.
You can save and restore standard error, by redirecting a new filedescriptor to point there:
exec 3>&2 # 3 is now a copy of 2
exec 2> /dev/null # 2 now points to /dev/null
script # run script with redirected stderr
exec 2>&3 # restore stderr to saved
exec 3>&- # close saved version
But I wouldn't recommend this -- the only upside over the first approach is that it saves a subshell invocation, while being more complicated and possibly even altering the behavior of the script, if the script itself manipulates file descriptors.
EDIT:
For a more appropriate answer, check the answer given by Mark Edgar.
Solution: use SIGINT (works only in non-interactive shells)
Demo:
cat > silent.sh <<"EOF"
sleep 100 &
kill -INT $!
sleep 1
EOF
sh silent.sh
http://thread.gmane.org/gmane.comp.shells.bash.bugs/15798
Maybe detach the process from the current shell process by calling disown?
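A minimal sketch of the disown approach (an illustration, not from the original answer):
sleep 100 &
pid=$!
disown "$pid"    # remove the job from the shell's job table
kill "$pid"      # no "Terminated" message, since the shell no longer tracks the job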
The Terminated message is logged by the default signal handler of bash 3.x and 4.x. Just trap the TERM signal at the very start of the child process:
#!/bin/sh
## assume script name is test.sh
foo() {
trap 'exit 0' TERM ## here is the key
while true; do sleep 1; done
}
echo before child
ps aux | grep 'test\.s[h]\|slee[p]'
foo &
pid=$!
sleep 1 # wait trap is done
echo before kill
ps aux | grep 'test\.s[h]\|slee[p]'
kill $pid ## no need to redirect stdin/stderr
sleep 1 # wait kill is done
echo after kill
ps aux | grep 'test\.s[h]\|slee[p]'
Is this what we are all looking for?
Not wanted:
$ sleep 3 &
[1] 234
<pressing enter a few times....>
$
$
[1]+ Done sleep 3
$
Wanted:
$ (set +m; sleep 3 &)
<again, pressing enter several times....>
$
$
$
$
$
As you can see, no job end message. Works for me in bash scripts as well, also for killed background processes.
'set +m' disables job control (see 'help set') for the current shell. So if you enter your command in a subshell (as done here in parentheses), you will not influence the job control settings of the current shell. The only disadvantage is that you need to get the PID of your background process back to the current shell if you want to check whether it has terminated or evaluate its return code; a sketch of that follows.
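One way to get that PID back (a sketch; the background command's stdout is redirected so the command substitution does not hang):
pid=$( set +m; sleep 30 >/dev/null 2>&1 & echo $! )
echo "background PID is $pid"
kill "$pid" 2>/dev/null    # still no job message, since the job belonged to the subshell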
This also works for killall (for those who prefer it):
killall -s SIGINT (yourprogram)
suppresses the message... I was running mpg123 in background mode.
It could only silently be killed by sending a ctrl-c (SIGINT) instead of a SIGTERM (default).
disown did exactly the right thing for me -- the exec 3>&2 is risky for a lot of reasons -- set +bm didn't seem to work inside a script, only at the command prompt
I had success with adding 'jobs 2>&1 >/dev/null' to the script; I'm not certain it will help anyone else's script, but here is a sample.
while true; do echo $RANDOM; done | while read line
do
echo Random is $line the last jobid is $(jobs -lp)
jobs 2>&1 >/dev/null
sleep 3
done
Another way to disable job notifications is to place your command to be backgrounded in a sh -c 'cmd &' construct.
#!/bin/bash
# ...
pid="`sh -c 'sleep 30 & echo ${!}' | head -1`"
kill "$pid"
# ...
# or put several cmds in sh -c '...' construct
sh -c '
sleep 30 &
pid="${!}"
sleep 5
kill "${pid}"
'
I found that putting the kill command in a function and then backgrounding the function suppresses the termination output:
function killCmd() {
kill $1
}
killCmd $somePID &
Simple:
{ kill $!; } 2>/dev/null
Advantage? You can use any signal.
For example:
{ kill -9 $PID; } 2>/dev/null

Bash script: Kill one process when other completes

I am trying to write a bash script to run one process after another approximately 1024 times with different command-line options. However, one of the processes includes an infinite loop, and I am trying to kill that process before another iteration of the loop begins.
So here is what I have tried so far (prog1 includes the infinite loop and I want to kill it when papi finishes running):
#!/bin/bash
for (( i=0; i<32780; i+=32))
do
./prog1 $i &
pid=$!;
sleep 5s
./papi
kill -s 2 $pid
done
However, it does not kill any prog1 instances, and of course it continues to create them at the beginning of each iteration. What am I doing wrong?
This kills the process before it finishes:
pkill $!
for example:
telnet 192.168.1.1 1>/dev/null 2>&1 &
pkill $!

Sleep in a while loop gets its own pid

I have a bash script that does some parallel processing in a loop. I don't want the parallel process to spike the CPU, so I use a sleep command. Here's a simplified version.
(while true;do sleep 99999;done)&
So I execute the above line from a bash prompt and get something like:
[1] 12345
Where [1] is the job number and 12345 is the process ID (pid) of the while loop. I do a kill 12345 and get:
[1]+ Terminated ( while true; do
sleep 99999;
done )
It looks like the entire script was terminated. However, I do a ps aux|grep sleep and find the sleep command is still going strong but with its own PID! I can kill the sleep and everything seems fine. However, if I were to kill the sleep first, the while loop starts a new sleep with a new PID. This is such a surprise to me since the sleep is not parallel to the while loop. The loop itself is a single path of execution.
So I have two questions:
Why did the sleep command get its own process ID?
How do I easily kill the while loop and the sleep?
Sleep gets its own PID because it is a process running and just waiting. Try which sleep to see where it is.
You can use ps -uf to see the process tree on your system. From there you can determine the PPID (parent PID) of the sleep, which is the PID of the shell running the loop.
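A sketch of that lookup (the pgrep/ps options are standard, but matching the newest sleep is an assumption for the demo):
sleep_pid=$(pgrep -n sleep)               # most recently started sleep
loop_pid=$(ps -o ppid= -p "$sleep_pid")   # its parent: the subshell running the while loop
kill "$loop_pid" "$sleep_pid"             # kill the loop first, then its current sleep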
Because "sleep" is a process, not a build-in function or similar
You could do the following:
(while true;do sleep 99999;done)&
whilepid=$!
kill -- -$whilepid
The above code kills the process group, because the PID is specified as a negative number (e.g. -123 instead of 123). In addition, it uses the variable $!, which stores the PID of the most recently executed background process.
Note:
When you execute a process in the background in interactive mode (i.e. from the command-line prompt), it creates a new process group, which is what is happening to you. That way it's relatively easy to "kill 'em all", because you just have to kill the whole process group. However, when the same is done within a script, no new group is created, because all new processes belong to the script's process group, even if they are executed in the background (job control is disabled by default). To enable job control in a script, just put the following at the beginning of the script:
#!/bin/bash
set -m
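A sketch of the script case with job control enabled (timings are illustrative):
#!/bin/bash
set -m                                   # job control on: backgrounded jobs get their own process group
(while true; do sleep 99999; done) &
loop_pid=$!
sleep 2
kill -- -"$loop_pid"                     # negative PID kills the whole group: the loop and its current sleep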
Have you tried doing kill %1, where 1 is the number you get after launching the command in background?
I did it right now after launching (while true;do sleep 99999;done)& and it correctly terminated it.
"ps --ppid" selects all processes with the specified parent pid, eg:
$ (while true;do sleep 99999;done)&
[1] 12345
$ ppid=12345 ; kill -9 $ppid $(ps --ppid $ppid -o pid --no-heading)
You can kill the process group.
To find the process group of your process run:
ps --no-headers -o "%r" -p 15864
Then kill the process group using:
kill -- -[PGID]
You can do it all in one command. Let's try it out:
$ (while true;do sleep 99999;done)&
[1] 16151
$ kill -- -$(ps --no-headers -o "%r" -p 16151)
[1]+ Terminated ( while true; do
sleep 99999;
done )
To kill the while loop and the sleep using $!, you can also use a trap signal handler inside the subshell.
(trap 'kill ${!}; exit' TERM; while true; do sleep 99999 & wait ${!}; done)&
kill -TERM ${!}
