Killing multiple Mac OS processes dynamically in one command? [duplicate] - linux

I have this issue where sometimes .NET Core crashes when I run the debugger in VS Code, and I'm unable to restart the debugger. Even if I quit VS Code and reopen it, the processes are still running. Rather than restart the machine, I can kill each process manually, but sometimes there are a whole bunch of them, and it gets pretty tedious. For example:
$ ps -eaf | grep dotnet | grep -v grep
16528 ?? 0:02.65 /usr/local/share/dotnet/dotnet /Users/ceti-alpha-v/Documents/NGRM/application/NGRM.Web/bin/Debug/netcoreapp2.0/NGRM.Web.dll
16530 ?? 0:02.75 /usr/local/share/dotnet/dotnet /Users/ceti-alpha-v/Documents/NGRM/application/NGRM.Web/bin/Debug/netcoreapp2.0/NGRM.Web.dll
16532 ?? 0:02.85 /usr/local/share/dotnet/dotnet /usr/local/share/dotnet/sdk/2.1.403/Roslyn/bincore/VBCSCompiler.dll
$ kill 16528 16530 16532
I'd like to kill the processes automatically with a single command, but I'm not sure how to pipe each PID to kill.

You can use xargs like this:
ps -eaf | grep dotnet | grep -v "grep" | awk '{print $2}' | xargs kill
Or, if you just want to kill all dotnet processes:
killall dotnet
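Not from either answer above, just a sanity check I'd suggest: swap kill for echo first, so you can see exactly which PIDs the pipeline would signal before running it for real.
# Dry run: print the PIDs the pipeline selects without killing anything
ps -eaf | grep dotnet | grep -v "grep" | awk '{print $2}' | xargs echo
# If the list looks right, run the same pipeline with kill
ps -eaf | grep dotnet | grep -v "grep" | awk '{print $2}' | xargs kill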

You can use command substitution:
kill $(ps -eaf | grep dotnet | grep -v grep | awk '{ print $2 }')
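If you do this often, the same command substitution can be wrapped in a small helper script. The script name killbyname.sh and its argument handling are my own illustration, not part of the answer:
#!/bin/bash
# killbyname.sh (hypothetical helper): kill every process whose command line
# matches the first argument, using the command substitution shown above.
name="$1"
# Exclude grep and this helper itself, both of which would otherwise match
pids=$(ps -eaf | grep "$name" | grep -v grep | grep -v "$0" | awk '{ print $2 }')
# Only call kill if something actually matched
[ -n "$pids" ] && kill $pids
Usage would then be something like ./killbyname.sh dotnet.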

Related

Kill command does the job but gives an error

I'm using a shell script to stop a Python service, and to do that I'm executing the command below.
ps -ef | grep service.py | grep -v grep | awk '{print $2}' | xargs kill -9
After executing it, I get the error below:
kill: sending signal to 487225 failed: Operation not permitted
But when I check the terminal where the process is running, it terminates:
/home/saakhan/Documents/scripts/cluster-helpers.sh: line 75: 526839 Killed python3 src/service.py
Now, in my shell script I check for any error while stopping the service, so it calls the error function, which displays "Failed to stop the Service" even though the service is stopped successfully.
Do I need to change the command for this?
Note: I have to use 'kill' only and not 'pkill'.
As @Bodo pointed out, there was more than one PID, and only one of them got killed; the other showed the error because it wasn't started by the same user.
I changed the grep command to filter for processes started by the same user and then killed them, which worked.
Update:
I used this command to successfully run my code and terminate the app as needed:
ps -u | grep service.py | grep -v grep | awk '{print $2}' | xargs kill -9
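For reference, here is a more defensive sketch of the same approach (the pgrep usage and the message are mine, not from the original script). pgrep -u restricts the match to processes owned by the given user, so kill is never sent to a PID you have no permission for:
# Collect only the current user's service.py PIDs, then kill them
pids=$(pgrep -u "$USER" -f service.py)
if [ -n "$pids" ]; then
    kill -9 $pids
else
    echo "service.py is not running for $USER" >&2
fi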

Why does my kill .sh script sometimes not kill the intended processes?

I have a .sh script that calls a number of other .sh scripts, tees their output into log files, and runs them in the background:
startMyProg.sh:
#!/bin/bash
./MyProg1.sh | tee /path/to/log/prog1_`date +\%Y\%m\%d_\%H\%M\%S`.log &
./MyProg2.sh | tee /path/to/log/prog2_`date +\%Y\%m\%d_\%H\%M\%S`.log &
./MyProgN.sh | tee /path/to/log/progN_`date +\%Y\%m\%d_\%H\%M\%S`.log &
I also have a simple helper script that will kill all the processes with MyProg in the name:
killMyProgs.sh:
#!/bin/bash
kill $(ps aux | grep MyProg | awk '{print $2}')
This system generally works, but occasionally the killMyProgs.sh script doesn't kill the processes that it finds using the ps|grep|awk pattern. The part that really throws me for a loop is that, when I hit an instance where the .sh script doesn't kill the processes, I can call kill $(ps aux | grep MyProg | awk '{print $2}') directly from the command line and it does exactly what I expect! Is there something I'm missing in my approach? Are there any useful debugging techniques that can help me figure out why my .sh script doesn't kill the processes, but calling the exact same command from the command line does?
A couple of details that may be relevant:
the "./MyProgN" scripts are calls to to start the same MyProg.jar file with different inputs. So the ps|grep of "MyProg" shows both the .sh scripts AND the java applications that they started and kills all of them.
Using RHEL7
Try running this a few times:
ps aux | grep MyProg | awk '{print $2}'
You will notice that sometimes the grep command comes before MyProg and sometimes after it (depending on the PID), because the grep command itself is listed in ps aux. So sometimes your script ends up targeting the grep process instead of your command.
The easiest solution is to use the pkill command.
pkill -9 -f MyProg
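If you would rather stick with plain kill, a common trick (not part of this answer) is to write the pattern with a character class so the grep process can never match itself: the string grep [M]yProg does not contain the substring MyProg, but the regex [M]yProg still matches the real processes.
# grep itself shows up as "grep [M]yProg", which the pattern does not match
kill $(ps aux | grep '[M]yProg' | awk '{print $2}')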

Bash script works locally but not on a server

This Bash script is called from a SlackBot by a Python script that uses the paramiko library. It works perfectly when I run the script locally from the bot: it runs the application and then kills the process after the allotted time. But when I run this same script on a server from the SlackBot, it doesn't kill the process. Any suggestions? The script name is "runslack.sh", which is what is matched by the grep command below.
#!/bin/bash
slack-export-viewer -z "file.zip"
sleep 10
ps aux | grep runslack.sh | awk '{print $2}' | xargs kill
Please try this and let me know if it helps:
ps -ef | grep runslack.sh | grep -v grep | awk '{print $2}' |xargs -i kill -9 {}
Also make sure your user has sufficient permission to kill these processes in your server environment.
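As far as I know, -i is an older GNU-only xargs spelling; the same pipeline written with the more portable -I placeholder looks like this:
ps -ef | grep runslack.sh | grep -v grep | awk '{print $2}' | xargs -I{} kill -9 {}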

How can I kill a process in a shell script?

My bash script has:
ps aux | grep foo.jar | grep -v grep | awk '{print $2}' | xargs kill
However, I get the following when running:
usage: kill [ -s signal | -p ] [ -a ] pid ...
kill -l [ signal ]
Any ideas, how to fix this line?
In general, your command is correct. If a foo.jar process is running, its PID will be passed to kill and the process should terminate.
Since you're getting kill's usage text as output, it means you're actually calling kill with no arguments (try running kill on its own; you'll see the same message). That means no output from the pipeline is actually reaching xargs, which in turn means foo.jar is not running.
Try running ps aux | grep foo.jar | grep -v grep and see if you're actually seeing results.
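If your xargs supports it (GNU xargs does, via -r / --no-run-if-empty), you can also tell xargs not to run kill at all when nothing reaches it, which is exactly the case that produces that usage message:
# -r: do not invoke kill when the pipeline produces no PIDs (GNU xargs)
ps aux | grep foo.jar | grep -v grep | awk '{print $2}' | xargs -r kill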
As much as you may enjoy a half dozen pipes in your commands, you may want to look at the pkill command!
DESCRIPTION
The pkill command searches the process table on the running system and signals all processes that match the criteria given on the command line.
For example:
pkill foo.jar
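One caveat worth adding: pkill matches against the process name by default, and a jar launched as java -jar foo.jar runs under the process name java, so you may need -f to match the full command line:
pkill -f foo.jar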
Untested and a guess at best (be careful):
kill -9 $(ps aux | grep foo.jar | grep -v grep | awk '{print $2}')
I reiterate UNTESTED, as I'm not at work and have no access to PuTTY or Unix.
My theory is to send the kill -9 command and get the process IDs from a subshell command call.

Find out if a process / file is running - linux

I want to check whether a process is running or not. I've been trying:
ps -C /path/file
and get this response:
PID TTY TIME CMD
If I do
pgrep php
I get a list of php processes running, but only the PID.
Is there a way to:
determine the PID of a file I specify (I want to type the file and get the PID)
get the filename if I type the PID
get all the running processes' PIDs into a file, to work with in a later script
OS: Ubuntu 14.04 LTS
I've been looking into this for quite some time and have tried all the possibilities I found on SO and elsewhere, but I just can't figure out the best way to do this.
"Determine the PID of a file I specify."
lsof | grep <file> | awk '{print $2}'
"Get the filename if I type the PID."
lsof | grep <PID>
lsof | grep <PID> | awk '{print $NF}'
"Get all the running processes PIDs in a file to work with that in a later script."
ps x | awk '{print $1}' > pid-list.txt # PIDs of all processes run by current user
ps ax | awk '{print $1}' > pid-list.txt # PIDs of all processes run by all users
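A pgrep/ps based sketch of the same three tasks (standard procps options on Ubuntu 14.04; <PID> is a placeholder as above):
pgrep -f /path/file            # PIDs whose full command line contains the path
ps -p <PID> -o args=           # full command line of a given PID
ps -e -o pid= > pid-list.txt   # every PID on the system, one per line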
What about ps aux | grep <the-name-of-the-process>?
