I have a program that, in distributed mode, creates a folder and spawns a bunch of subprocesses. Is there any way to find all PIDs of processes that were started from this folder? Sort of the opposite of
$ pwdx pid
where you give a path name and you get a bunch of pids.
thanks
Reporting all processes whose executable's absolute path is inside '/usr/bin/' may be done like this:
ls -l /proc/*/exe 2>/dev/null | grep /usr/bin/ | sed 's#.*/proc/##;s#/exe.*##;' | grep -v "self"
Reporting all processes whose working directory (the working directory can be changed by a simple cd) is inside /tmp/a could be done like this:
ps axo pid | xargs -n1 pwdx 2>/dev/null | grep ': /tmp/a' | sed 's/:.*//'
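If you'd rather not parse the output of ls or pwdx, the same information can be read directly from /proc with readlink. A sketch (replace /tmp/a with your folder; use $d/exe instead of $d/cwd for the binary-path variant):
for d in /proc/[0-9]*; do
    cwd=$(readlink "$d/cwd" 2>/dev/null) || continue     # skip processes we cannot inspect (other users, kernel threads)
    case "$cwd" in /tmp/a|/tmp/a/*) echo "${d##*/}" ;; esac    # print just the PID
done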
My bash script has:
ps aux | grep foo.jar | grep -v grep | awk '{print $2}' | xargs kill
However, I get the following when running:
usage: kill [ -s signal | -p ] [ -a ] pid ...
kill -l [ signal ]
Any ideas how to fix this line?
In general, your command is correct. If a foo.jar process is running, its PID will be passed to kill and the process should terminate.
Since you're getting kill's usage as output, it means you're actually calling kill with no arguments (try just running kill on its own, you'll see the same message). That means that there's no output in the pipeline actually reaching xargs, which in turn means foo.jar is not running.
Try running ps aux | grep foo.jar | grep -v grep and see if you're actually seeing results.
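As a side note, if you want the pipeline to silently do nothing when there is no match, GNU xargs supports -r (--no-run-if-empty), so kill is never invoked without arguments:
ps aux | grep foo.jar | grep -v grep | awk '{print $2}' | xargs -r kill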
As much as you may enjoy a half dozen pipes in your commands, you may want to look at the pkill command!
DESCRIPTION
The pkill command searches the process table on the running system and signals all processes that match the criteria
given on the command line.
i.e.
pkill foo.jar
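One caveat (not covered in the quoted description): pkill matches against the process name by default, and a jar launched as java -jar foo.jar shows up with the name java, so you may need -f to match against the full command line instead:
pkill -f foo.jar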
Untested and a guess at best (be careful)
kill -9 $(ps -aux | grep foo.jar | grep -v grep | awk '{print $2}')
I reiterate UNTESTED, as I'm not at work and have no access to PuTTY or a Unix box.
My theory is to send the kill -9 command, getting the process ID from the command substitution (the $(...) subshell).
I want to show only the directories where the binaries are installed. Like
/bin
for
/bin/ls
This is what I've done so far:
ps aux | awk '{print $11}' | grep -x -e "/.*"
But it's displaying the filename too, and I don't want that. An example of the output:
/usr/lib/firefox/firefox
But I'd like it like this:
/usr/lib/firefox
Thank you!
The command to extract the directory part of a path is dirname "path/to/file". As you have probably noticed, it requires an argument (it does not read from stdin). You can however use xargs to fix this:
xargs dirname
Now you simply need to add this at the end of your pipeline:
ps aux | awk '{print $11}' | grep -x -e "/.*" | xargs dirname
Demo
Ran this on my Linux machine:
$ ps aux | awk '{print $11}' | grep -x -e "/.*" | xargs dirname | head
/sbin
/lib/systemd
/lib/systemd
/sbin
/usr/sbin
/usr/sbin
/usr/sbin
/usr/sbin
/usr/sbin
/usr/bin
In order to make your command space-safe (a remark by @hek2mgl), you can use:
ps aux | awk '{print $11}' | grep -x -e "/.*" | xargs -I file dirname "file"
Mind that this has an impact on performance: xargs dirname without any flags passes many paths to a single dirname invocation, which handles them in a tight internal loop, whereas the -I form spawns a separate dirname process for each line.
More elegant way
Your pipeline relies on a lot of text processing, which can be tricky, error-prone, and sensitive to changes in the output format (of ps, ...). A less error-prone way can be:
ps -A -o pid | xargs -I pid readlink "/proc/pid/exe" | xargs -I file dirname "file"
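If you want each directory listed only once, you can append sort -u to either pipeline, for example:
ps aux | awk '{print $11}' | grep -x -e "/.*" | xargs dirname | sort -u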
I want to monitor the number of file descriptors opened by a process running on my CentOS box. The command below works for me:
watch -n 1 "ls /proc/pid/fd | wc -l"
The problem comes when I need to monitor the same thing after the process is restarted: the PID changes and I can't get the stats.
The good thing is that the process name is constant, so I can extract the PID using pgrep pname.
So how can I use the command in the following way:
watch -n 1 "ls /proc/"pgrep <pname>"/fd | wc -l"
I want the pgrep pname value to be picked up dynamically.
Is there any way to define a variable that continuously gets the latest value of pgrep pname, so I can insert the variable here?
watch evaluates its command as shell command each time, so we first have to find a shell command that produces the output. Since there may be multiple matching processes, we can use a loop:
for pid in $(pgrep myprocess); do ls "/proc/$pid/fd"; done | wc -l
Now we can quote that to pass it literally to watch:
watch -n 1 'for pid in $(pgrep myprocess); do ls "/proc/$pid/fd"; done | wc -l'
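If myprocess might also match other process names as a substring, pgrep's -x option restricts it to the exact name (a small variation on the command above):
watch -n 1 'for pid in $(pgrep -x myprocess); do ls "/proc/$pid/fd"; done | wc -l'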
watch -n 1 "pgrep memcached | xargs -I{} ls /proc/{}/fd | wc -l"
Another way to do it.
Running Apache and JBoss on Linux, sometimes my server halts unexpectedly, saying that the problem was Too Many Open Files.
I know that we might set a higher limit for nproc and nofile in /etc/security/limits.conf to fix the open-files problem, but I am trying to get better visibility, such as using watch to monitor the counts in real time.
With this command line I can see how many open files per PID:
lsof -u apache | awk '{print $2}' | sort | uniq -c | sort -n
Output (Column 1 is # of open files for the user apache):
1 PID
1335 13880
1389 13897
1392 13882
If I could just add the watch command it would be enough, but the code below isn't working:
watch lsof -u apache | awk '{print $2}' | sort | uniq -c | sort -n
You should put the command inside quotes, like this:
watch 'lsof -u apache | awk '\''{print $2}'\'' | sort | uniq -c | sort -n'
or you can put the command into a shell script like test.sh and then use watch.
chmod +x test.sh
watch ./test.sh
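For example, test.sh could simply contain the pipeline from the question (a sketch):
#!/bin/sh
# test.sh: count open files per PID for the apache user
lsof -u apache | awk '{print $2}' | sort | uniq -c | sort -n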
This command will tell you how many files Apache has opened:
ps -A x | grep apache | awk '{print $1}' | xargs -I '{}' ls /proc/{}/fd | wc -l
You may have to run it as root in order to access the process fd directory. This sounds like you've got a web application which isn't closing its file descriptors. I would focus my efforts on that area.
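If you want a per-process breakdown rather than one total (a sketch, assuming pgrep is available; -u selects processes owned by the apache user):
for pid in $(pgrep -u apache); do
    printf '%s %s\n' "$pid" "$(ls /proc/$pid/fd 2>/dev/null | wc -l)"    # PID followed by its fd count
done | sort -k2 -n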
I was looking for the best way to find the number of running processes with the same name via the command line in Linux. For example, if I wanted to find the number of bash processes running, I'd want to get back "5". Currently I have a script that does a pidof and then counts the tokens in the resulting string. This works fine, but I was wondering if there is a better way that can be done entirely on the command line. Thanks in advance for your help.
On systems that have pgrep available, the -c option returns a count of the number of processes that match the given name
pgrep -c command_name
Note that this is a grep-style match, not an exact match, so e.g. pgrep sh will also match bash processes. If you want an exact match, also use the -x option.
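For example, to count only processes whose name is exactly bash:
pgrep -x -c bash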
If pgrep is not available, you can use ps and wc.
ps -C command_name --no-headers | wc -l
The -C option to ps takes command_name as an argument, and the program prints a table of information about processes whose executable name matches the given command name. This is an exact match, not grep-style. The --no-headers option suppresses the headers of the table, which are normally printed as the first line. With --no-headers, you get one line per process matched. Then wc -l counts and prints the number of lines in its input.
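For example, counting bash processes:
ps -C bash --no-headers | wc -l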
result=`ps -Al | grep command-name | wc -l`
echo $result
ps -Al | grep -c bash
You can try:
ps -ef | grep -cw [p]rocess_name
OR
ps aux | grep -cw [p]rocess_name
For example:
ps -ef | grep -cw [i]nit
Some of the above didn't work for me, but they helped me on my way to this.
ps aux | grep [j]ava -c
For newbies to Linux:
ps aux prints all the currently running processes, grep searches for processes whose command line contains the word java, the [j] brackets keep grep from matching its own process (its command line contains [j]ava, which the pattern does not match), so the grep you just ran isn't counted, and finally the -c option stands for count.
List all process names, sort and count
ps --no-headers -A -o comm | sort | uniq -c
You can also list processes attached to a tty:
ps --no-headers a -o comm | sort | uniq -c
You may filter with:
ps --no-headers -A -o comm | awk '{ list[$1] ++ } END { for (i in list) { if (list[i] > 10) printf ("%20s: %s\n", i, list[i]) } }'
The following bash script can be run as a cron job, and you can get an email if any process forks itself too many times.
for i in `ps -A -o comm= --sort=+comm | uniq`; do
    if (( `ps -C $i --no-headers | wc -l` > 10 )); then
        echo `hostname` $i `ps -C $i --no-headers | wc -l`
    fi
done
Replace 10 with whatever number concerns you.
TODO: the "10" could be passed as a command line parameter as well. Also, a few system processes could be put into an exception list.
You can use ps (which shows a snapshot of the current processes) together with wc (which counts words; the -l option counts lines, i.e. newline characters).
This is very easy and simple to remember.
ps -e | grep processName | wc -l
This simple command will print the number of matching processes running on the current server.
If you want to count the processes running on the current server for a particular user, use the -U option of ps:
ps -U root | grep processName | wc -l
Replace root with the username you want.
But as mentioned in a lot of other answers, you can also use ps -e | grep -c process_name, which is a more elegant way.
ps aux | wc -l
This command shows the number of processes running on the system for all users (plus one line for the ps header).
For a specific user you can use the following command:
ps -u <username> | wc -l
Replace <username> with the actual username before running :)
ps -awef | grep CAP | wc -l
Here "CAP" is the word which is in the my Process_Names.
This command output = Number of Processes + 1
This is why When we are running this command , our system read thats "ps -awef | grep CAP | wc -l " is also a process.
So yes our real answer is (Number of Processes) = Command Output - 1
Note : These processes are only those processes who include the name of "CAP"
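Alternatively, writing the pattern with brackets (as in an earlier answer) keeps the grep process itself from matching, so no subtraction is needed:
ps -awef | grep -c '[C]AP'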