Linux bash script: kill process and start it again after kill

I'm running several Chrome browsers on my computer with different profiles. The profiles are named like "prof1", "prof2" and "prof3". Now I need to make a script which kills a specific Chrome process and restarts it.
I cannot use the killall command because I need to be specific about which Chrome browser I want to kill, and if I use the kill command the script exits after the kill command.
I have tried something like this:
#!/bin/bash
kill -9 `ps ax | grep -i prof1 | awk '{print $1}'` &
sleep 2
export DISPLAY=:0.0
/usr/bin/chromium-browser --restore-last-session --user-data-dir=/path/to/prof1/ %U &
This script works nicely, but after the kill command it exits (saying "Killed") and the browser never gets started again. The kill command does not have any "quiet" option. There is no point in trying 2>&1 because the "Killed" output comes from the terminal, not from stderr/stdout. I have tried "set -e" and many other things, but no luck.
Any help/tips, anyone?

What you can use is pkill's --full/-f flag, which will match the whole command line:
$ sleep 1d &
[1] 23335
$ pkill -f 'sleep 1d'
[1]+ Terminated sleep 1d
And you shouldn't use kill -9.
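Applied to the script in the question, a minimal sketch could look like this (assuming the --user-data-dir path is unique enough to identify the prof1 instance; the %U desktop-entry placeholder is dropped):
#!/bin/bash
# Kill only the chromium instance whose full command line mentions the prof1
# profile directory, then restart it. pkill -f matches the whole command line.
pkill -f 'user-data-dir=/path/to/prof1'
sleep 2
export DISPLAY=:0.0
/usr/bin/chromium-browser --restore-last-session --user-data-dir=/path/to/prof1/ &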

Related

suspend a shell command without pid

I need something like $command & stop. This should execute a command and suspend it. The application later resumes the command for complete results.
I understand that a job can be suspended by sending a stop signal to the corresponding pid.
$ kill -SIGSTOP 12753
When we execute a command, we rarely know its pid in advance. An extra command is needed to look up the pid and act on it. I want to avoid that extra command and the time interval it takes.
Basically, the application is for measuring network performance. It triggers all the commands and puts them in a halted mode. The halted commands are resumed as per the kind of traffic needed.
The process ID of the most recently started background command is available in the shell parameter $!:
$ command & kill -SIGSTOP $!
(Check the documentation for your shell's implementation of kill for the correct format.)
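For example, a minimal sketch of the full suspend/resume cycle, using sleep as a stand-in for the real command:
#!/bin/bash
# Start the command in the background and suspend it immediately.
sleep 1d &
pid=$!
kill -SIGSTOP "$pid"
# ... later, when the measurement actually needs it, resume it:
kill -SIGCONT "$pid"
# ... and clean up when done:
kill "$pid"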
Try killall, which lets you target the process by name, together with the --signal option to choose which signal to send.
linux:~ # killall
Usage: killall [OPTION]... [--] NAME...
killall -l, --list
killall -V, --version
-e,--exact require exact match for very long names
-I,--ignore-case case insensitive process name match
-g,--process-group kill process group instead of process
-i,--interactive ask for confirmation before killing
-l,--list list all known signal names
-q,--quiet don't print complaints
-r,--regexp interpret NAME as an extended regular expression
-s,--signal SIGNAL send this signal instead of SIGTERM
-u,--user USER kill only process(es) running as USER
-v,--verbose report if the signal was successfully sent
-V,--version display version information
-w,--wait wait for processes to die
Verified by starting md5sum in a shell session:
linux$ md5sum
and in another session, ran:
killall -s SIGSTOP md5sum
yielding the following in the md5sum session:
[1]+ Stopped md5sum
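To resume it, the counterpart signal can be sent the same way, by name (a sketch; in the md5sum session itself you could simply run fg instead):
killall -s SIGCONT md5sum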
Kindly confirm: do you want to halt your command, or run it in the background (append '&' to your command)?
If your application is expected to start the halted command later, then why don't you start the command (to be halted) from that application itself?
This helps:
sleep 5 & kill -SIGSTOP $!
In the above, sleep (a demo command) is executed for 5 seconds in the background.
Next, kill is used to stop it, using its PID obtained from $!.
Demo & kludge using timeout (for some reason timeout interprets a '0s' duration as "run forever") to stop yes before it outputs anything:
# run 'yes' command, let it print 5 numbered lines, but stop it immediately
timeout -s SIGSTOP .000000001s yes | head -n 5 | cat -n
Output (to STDERR):
[1]+ Stopped timeout -s SIGSTOP .000000001s yes | head -n 5 | cat -n
Now restart it:
fg > /dev/null
Output:
1 y
2 y
3 y
4 y
5 y
A technique for users stuck with coreutils v8.12 or earlier (pre-2011), in which timeout lacks sub-second intervals. It requires waiting a second.
Wrap the command string in a shell invocation preceded by a 1s wait -- so timeout waits 1 second, and simultaneously, so does the command string. Total wait time: 1 second:
timeout -s SIGSTOP 1s sh -c "sleep 1s; yes | head -n 5 | cat -n"
Output is the same as before, fg is the same too.
As a finesse, if waiting even 1 second is too much, it can be run in the background like so:
timeout -s SIGSTOP 1s sh -c "sleep 1s; yes | head -n 5 | cat -n" &
Output (process number will vary):
[1] 14601
Then after a second, the output will be the same as the previous two timeout examples.
Assuming you are always using the same command, you can launch it in one terminal, then open a new terminal and find the command name in the ps output:
ps -ely
after retrieving the command name:
command & kill -SIGSTOP $(pidof command_name)
pidof needs the exact command name to be able to find the pid.
then to resume it:
kill -SIGCONT $(pidof command_name)
If the command name is not constant but there is a pattern, you can create a script like this (you can call it pof.sh):
ps -ely | grep $1 | tr -s ' ' | cut -d" " -f3
command & kill -SIGSTOP $(bash pof.sh pattern)
One drawback with this script is that if many lines match the pattern, it will return all of their pids. If this is a problem, you can put the output in an array and go on from there.
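For example, a sketch of the array variant (assuming bash 4+ for mapfile, and reusing the pof.sh helper from above):
#!/bin/bash
# Start the command, then collect every pid whose ps entry matches the pattern.
command &
mapfile -t pids < <(bash pof.sh pattern)
# Stop each matched process individually.
for pid in "${pids[@]}"; do
    kill -SIGSTOP "$pid"
done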

Don't show the output of kill command in a Linux bash script [duplicate]

How can you suppress the Terminated message that comes up after you kill a
process in a bash script?
I tried set +bm, but that doesn't work.
I know another solution involves calling exec 2> /dev/null, but is that
reliable? How do I reset it back so that I can continue to see stderr?
In order to silence the message, you must be redirecting stderr at the time the message is generated. Because the kill command sends a signal and doesn't wait for the target process to respond, redirecting stderr of the kill command does you no good. The bash builtin wait was made specifically for this purpose.
Here is a very simple example that kills the most recent background command. (Learn more about $! here.)
kill $!
wait $! 2>/dev/null
Because both kill and wait accept multiple pids, you can also do batch kills. Here is an example that kills all background processes (of the current process/script of course).
kill $(jobs -rp)
wait $(jobs -rp) 2>/dev/null
I was led here from bash: silently kill background function process.
The short answer is that you can't. Bash always prints the status of foreground jobs. The monitoring flag only applies for background jobs, and only for interactive shells, not scripts.
see notify_of_job_status() in jobs.c.
As you say, you can redirect so standard error is pointing to /dev/null but then you miss any other error messages. You can make it temporary by doing the redirection in a subshell which runs the script. This leaves the original environment alone.
(script 2> /dev/null)
which will lose all error messages, but just from that script, not from anything else run in that shell.
You can save and restore standard error, by redirecting a new filedescriptor to point there:
exec 3>&2 # 3 is now a copy of 2
exec 2> /dev/null # 2 now points to /dev/null
script # run script with redirected stderr
exec 2>&3 # restore stderr to saved
exec 3>&- # close saved version
But I wouldn't recommend this -- the only upside over the first approach is that it saves a sub-shell invocation, while being more complicated and possibly even altering the behavior of the script, if the script itself alters file descriptors.
EDIT:
For a more appropriate answer, check the answer given by Mark Edgar.
Solution: use SIGINT (works only in non-interactive shells)
Demo:
cat > silent.sh <<"EOF"
sleep 100 &
kill -INT $!
sleep 1
EOF
sh silent.sh
http://thread.gmane.org/gmane.comp.shells.bash.bugs/15798
Maybe detach the process from the current shell process by calling disown?
The Terminated message is logged by the default signal handler of bash 3.x and 4.x. Just trap the TERM signal at the very start of the child process:
#!/bin/sh
## assume script name is test.sh
foo() {
    trap 'exit 0' TERM ## here is the key
    while true; do sleep 1; done
}
echo before child
ps aux | grep 'test\.s[h]\|slee[p]'
foo &
pid=$!
sleep 1 # wait trap is done
echo before kill
ps aux | grep 'test\.s[h]\|slee[p]'
kill $pid ## no need to redirect stdin/stderr
sleep 1 # wait kill is done
echo after kill
ps aux | grep 'test\.s[h]\|slee[p]'
Is this what we are all looking for?
Not wanted:
$ sleep 3 &
[1] 234
<pressing enter a few times....>
$
$
[1]+ Done sleep 3
$
Wanted:
$ (set +m; sleep 3 &)
<again, pressing enter several times....>
$
$
$
$
$
As you can see, no job end message. Works for me in bash scripts as well, also for killed background processes.
'set +m' disables job control (see 'help set') for the current shell. So if you enter your command in a subshell (as done here in brackets) you will not influence the job control settings of the current shell. The only disadvantage is that you need to get the pid of your background process back to the current shell if you want to check whether it has terminated, or evaluate the return code.
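One way to get that pid back is sketched below; the redirection on the background command keeps the command substitution from waiting for it to finish:
# Run the command in a subshell with job control disabled and echo its pid
# back to the current shell.
pid=$( (set +m; sleep 30 >/dev/null 2>&1 & echo $!) )
# Later: check it is still alive and kill it -- still without a job-control
# message, because it was never a job of the current shell.
kill -0 "$pid" 2>/dev/null && kill "$pid"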
This also works for killall (for those who prefer it):
killall -s SIGINT (yourprogram)
suppresses the message... I was running mpg123 in background mode.
It could only silently be killed by sending a ctrl-c (SIGINT) instead of a SIGTERM (default).
disown did exactly the right thing for me -- the exec 3>&2 is risky for a lot of reasons -- set +bm didn't seem to work inside a script, only at the command prompt
I had success with adding 'jobs 2>&1 >/dev/null' to the script. I'm not certain whether it will help anyone else's script, but here is a sample:
while true; do echo $RANDOM; done | while read line
do
    echo Random is $line the last jobid is $(jobs -lp)
    jobs 2>&1 >/dev/null
    sleep 3
done
Another way to disable job notifications is to place your command to be backgrounded in a sh -c 'cmd &' construct.
#!/bin/bash
# ...
pid="`sh -c 'sleep 30 & echo ${!}' | head -1`"
kill "$pid"
# ...
# or put several cmds in sh -c '...' construct
sh -c '
sleep 30 &
pid="${!}"
sleep 5
kill "${pid}"
'
I found that putting the kill command in a function and then backgrounding the function suppresses the termination output:
function killCmd() {
    kill $1
}
killCmd $somePID &
Simple:
{ kill $!; } 2>/dev/null
Advantage? You can use any signal.
ex:
{ kill -9 $PID; } 2>/dev/null

Kill ssh and/or remote process from bash script

I am trying to run the following command as part of a bash script which is supposed to open an ssh channel, run the program on the remote machine, save the output to a file for 10 seconds, kill the process which was writing to the file, and then give control back to the bash script.
#!/bin/bash
ssh hostname '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null; sshpid=!$; sleep 10; kill -9 $sshpid 2>/dev/null &'
Unfortunately, what it seems to do is start the nodes-listener program remotely, but it never gets any further and never gives control back to the bash script. So, the only way to stop the execution is to press Ctrl+C.
Killing ssh doesn't help (or rather can't be executed), since control is not with the bash script: it waits for the command within the ssh session to complete, which of course never happens, as it has to be killed to stop.
Here's the command line that you're running on the remote system:
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null
sshpid=!$
sleep 10
kill -9 $sshpid 2>/dev/null &
You should change it to this:
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null & <-- Ampersand goes here
sshpid=$!
sleep 10
kill -9 $sshpid 2>/dev/null
You want to start nodes-listener and then kill it after ten seconds. To do this, you need to start nodes-listener as a background process, so that the shell executing this command line moves on to the next command after starting nodes-listener. The & in your command line is in the wrong place, and would apply only to the kill command. You need to apply it to the nodes-listener command.
I'll also note that your sshpid=!$ line was incorrect. You want sshpid=$!. $! is the process ID of the last command started in the background.
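Put back into the one-liner form used in the question, the corrected command might look like this (same hostname and paths as in the question):
ssh hostname '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null & sshpid=$!; sleep 10; kill -9 $sshpid 2>/dev/null'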
You need to place the ampersand after the first command, then put the remaining commands onto the next line:
ssh hostname -- '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sshpid=$!; sleep 10; kill $sshpid 2>/dev/null'
Btw, ssh returns after all commands have been executed. This means it will close the allocated pty as well. If there are still background jobs running in that shell session, they will be killed by SIGHUP. This means you can probably omit the explicit kill command (depending on whether nodes-listener handles SIGHUP and SIGTERM differently). With this in mind, you could simplify the code to the following:
ssh hostname -- sh -c '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sleep 10'
I have resolved this by pushing the shell script to the remote machine and executing it there. It is actually less tidy and relies on space being available on the remote computer.
Since my remote machine is a small physical device, the issue of the space usage is important (even for the tiny amount of space required in this case).
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sshpid=$!
sleep 20
sync
# killing nodes-listener process and giving control back to the base bash
killall -9 nodes-listener 2>/dev/null && echo "nodes-listener is killed"

How to know if the process is set NOHUP?

Using jobs I know the process is running.
bash-4.2$ jobs
[1]+ Running test.sh &
I wanted it to be set NOHUP so that it won't be killed when I exit. I used
disown
and
bash-4.2$ jobs
shows nothing. I'm not sure if the process is set NOHUP or not. I'm curious about this because the manual says
disown -h
should be used to set NOHUP.
Edit
I don't think the link Find the Process run by nohup command helps. The question is different than that one.
I'm going to restate my problem. I ran a program without nohup, and later I wanted it to be set NOHUP so that it won't be killed when I exit the system. So I used disown, but later I found that the manual says I should have used disown -h to set NOHUP. I want to check whether my process was successfully set NOHUP or not. If not, what can I do to set it to be NOHUP?
UPDATE
I know two ways that may be helpful:
1) Whenever a process is run with nohup, it writes its output to nohup.out (in the directory it was started from, or ~/nohup.out). So you can check this file by running find -cmin -2, which shows whether nohup.out has changed within the last 2 minutes.
If it is changing, you know that something started with nohup is running; after that you can check it with lsof and continue your investigation...
2) If you log out from the specific user and go to a tty, then run ps aux | grep <user>; the process running under nohup shows ? in the TTY column instead of a pts entry, because it no longer has a terminal...
useful command:
ps aux | grep <program> | awk -F" " '{print $7}'
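A slightly more direct check for a single pid is also possible (a sketch; it prints the controlling terminal of the process, or ? if it no longer has one):
ps -o tty= -p <pid>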
Hope this is helpful.

How to get the process ID to kill a nohup process?

I'm running a nohup process on the server. When I try to kill it my putty console closes instead.
this is how I try to find the process ID:
ps -ef |grep nohup
and this is the command to kill it:
kill -9 1787 787
When using nohup and you put the task in the background, the background operator (&) will give you the PID at the command prompt. If your plan is to manually manage the process, you can save that PID and use it later to kill the process if needed, via kill PID or kill -9 PID (if you need to force kill). Alternatively, you can find the PID later on by ps -ef | grep "command name" and locate the PID from there. Note that nohup keyword/command itself does not appear in the ps output for the command in question.
If you use a script, you could do something like this in the script:
nohup my_command > my.log 2>&1 &
echo $! > save_pid.txt
This will run my_command saving all output into my.log (in a script, $! represents the PID of the last process executed). The 2 is the file descriptor for standard error (stderr) and 2>&1 tells the shell to route standard error output to the standard output (file descriptor 1). It requires &1 so that the shell knows it's a file descriptor in that context instead of just a file named 1. The 2>&1 is needed to capture any error messages that normally are written to standard error into our my.log file (which is coming from standard output). See I/O Redirection for more details on handling I/O redirection with the shell.
If the command sends output on a regular basis, you can check the output occasionally with tail my.log, or if you want to follow it "live" you can use tail -f my.log. Finally, if you need to kill the process, you can do it via:
kill -9 `cat save_pid.txt`
rm save_pid.txt
I am using Red Hat Linux on a VPS server (via SSH - PuTTY); for me the following worked:
First, you list all the running processes:
ps -ef
Then in the first column you find your user name; I found it the following three times:
One was the SSH connection
The second was an FTP connection
The last one was the nohup process
Then in the second column you can find the PID of the nohup process and you only type:
kill PID
(replacing the PID with the nohup process's PID of course)
And that is it!
I hope this answer will be useful for someone. I'm also very new to bash and SSH, but I found 95% of the knowledge I need here :)
Suppose I am running a Ruby script in the background with the command below:
nohup ruby script.rb &
Then I can get the pid of the above background process by specifying the command name. In my case the command is ruby.
ps -ef | grep ruby
output
ubuntu 25938 25742 0 05:16 pts/0 00:00:00 ruby test.rb
Now you can easily kill the process using the kill command:
kill 25938
jobs -l should give you the pid for the list of nohup processes.
kill (-9) them gently.
;)
You could try
kill -9 `pgrep [command name]`
Suppose you are executing a Java program with nohup; you can get the Java process id with:
ps aux | grep java
output
xxxxx 9643 0.0 0.0 14232 968 pts/2
then you can kill the process by typing
sudo kill 9643
Or let's say that you need to kill all the Java processes; then just use:
sudo killall java
This command kills all the Java processes. You can use this with any process; just give the process name at the end of the command:
sudo killall {processName}
If your application always uses the same port, you can kill all the processes in that port like this.
kill -9 $(lsof -t -i:8080)
This works in Ubuntu
Type this to find out the PID
ps aux | grep java
All the running processes related to java will be shown.
In my case it is:
johnjoe 3315 9.1 4.0 1465240 335728 ? Sl 09:42 3:19 java -jar batch.jar
Now kill it: kill -9 3315
The zombie process finally stopped.
When you create a job with nohup, it will tell you the process ID!
nohup sh test.sh &
the output will show you the process ID like
25013
you can kill it then :
kill 25013
I started the Django server with the following command:
nohup manage.py runserver <localhost:port>
This works on CentOS:
:~ ns$netstat -ntlp
:~ ns$kill -9 PID
This works fine for me on Mac:
kill -9 `ps -ef | awk '/nohup/{ print \$2 }'`
I often do it this way. Try this:
ps aux | grep script_Name
Here, script_Name could be any script/file run by nohup.
This command gets you the process ID. Then use the command below to kill the script running under nohup.
kill -9 1787 787
Here, 1787 and 787 are the process IDs used in the question as an example.
This should do what was intended in the question.
If you are unaware of the PID, then first find it using the top command:
top -U userid
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
You will get the PID using top, then perform the kill operation.
$ kill -9 <PID>
Today I met the same problem, and since the process was started a long time ago, I had totally forgotten which command I used and when. I tried three methods:
Using the STIME column shown by ps -ef. It shows the time your process was started, and it's very likely that you ran nohup just before closing ssh (depends on you). Unfortunately I don't think the latest-started command is the one I ran with nohup, so this didn't work for me.
Second is the PPID, also shown by ps -ef. It means Parent Process ID, the ID of the process that created the process. On Ubuntu the ppid is 1 for a process run with nohup. Then you can use ps --ppid "1" to get the list, and check TIME (the total CPU time your process has used) or CMD to find the process's PID (see the sketch after this list).
Use lsof -i:port if the process occupies a port, and you will get the command. Then, just like the answer above, use ps -ef | grep command and you will get the PID.
Once you find the PID of the process, you can use kill pid to terminate it.
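For the second method, the lookup can be sketched like this (my_script is a hypothetical stand-in for whatever you started with nohup):
# List processes whose parent is PID 1, showing pid, total CPU time and command,
# then narrow the list down to the one started with nohup.
ps --ppid 1 -o pid,time,cmd | grep my_script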
About losing your PuTTY: often the ps ... | awk/grep/perl/... process gets matched, too! So the old-school trick is like this:
ps -ef | grep -i [n]ohup
That way the regex search doesn't match the regex search process!
If you are on a remote server, check memory usage with top, find your process and its ID, and then just execute kill [your process ID].
