I want to access the status.log file of some processes from a bash terminal, in a while loop, and compare them. Since the PIDs are not static, how can I access their /proc/PID files by their command names rather than by PID?
Try grepping the output of ps -A by command name and getting the PID from there.
Assuming you have pgrep (which you should, it's part of procps), call pgrep -x somecmdname to get a list of PIDs matching that string. From there you can access the proc files as usual.
e.g.
for pid in $(pgrep -x somecmd); do
    echo "$pid"   # or do something more interesting
done
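If the goal is to read each matching process's /proc/PID/status in a loop and compare values over time, a minimal sketch (assuming somecmd stands in for your command name and that a field such as VmRSS is what you want to compare) could look like:

while true; do
    for pid in $(pgrep -x somecmd); do
        # print one field from each matching process's status file, e.g. VmRSS
        grep '^VmRSS' "/proc/$pid/status"
    done
    sleep 5
done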
Try the command pidof:
$ pidof bash
14317 10465 7204 3514 3466
Then you can loop over the pids:
$ for pid in $(pidof bash); do echo "$pid" ; done
14317
10465
7204
3514
3466
You can do it this way.
Example:
sleep 1000 &
cd /proc/`pidof sleep`
Refer to man pidof.
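Note that cd /proc/`pidof sleep` only works when pidof returns a single PID; with several matching processes you would need to loop over them instead. A sketch:

for pid in $(pidof sleep); do
    echo "== $pid =="
    head -n 3 "/proc/$pid/status"   # peek at the first few status fields
done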
In a shell script, I see that using setsid we can create a new process group. I am not able to find a reliable way to get the group ID after the creation. My requirement is simple: launch a process, and after it is done, clean up any descendants (if any). I don't want to kill the main process, so I have to wait for the main process to end, after which I can kill the leftover child processes if I have somehow obtained the group ID, which can be done with kill -- -pgid. The missing piece is: how do I get the group ID?
This script is what I finally came up with. Hope this helps someone.
$! gives the PID, and ps has to be used to find its PGID.
There is an extra leading space in the ps output; the next line of variable expansion removes it.
Finally, after waiting for the main process, it kills the group.
#!/bin/sh -x
setsid "$#" &
pid=$!
gidspace=$(ps -o pgid= $pid)
gid="${gidspace## }"
echo "gid $gid"
echo "waiting"
wait $pid
ps -s $gid -o pid,ppid,pgid,command
kill -- -$gid
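For reference, a hypothetical invocation (assuming the script above is saved as run-in-group.sh and made executable) might look like:

# run-in-group.sh is a hypothetical name; the command and its arguments follow it
./run-in-group.sh sleep 300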
I managed to do it with a coproc, plus a sleep to ensure we have enough time to read back the PID. This is bash-specific, of course, and the only way to avoid a hackish sleep inside the coproc is to write to a temp file and wait for the command to terminate (no need for a coproc then).
Using a coproc
Note that I open file descriptor 3 to write the PGID back to the parent shell and close it before executing the command.
#!/bin/bash -x
coproc setsid bash -c 'ps -o pgid= $BASHPID >&3; exec 3>&-; exec "$@" & sleep 1' -- "$@" 3>&1
read -u ${COPROC[0]} gid
echo "gid $gid"
ps -s $gid -o pid,ppid,pgid,command
kill -- -$gid
Using a temp file
To avoid having to pass the temp file name to the subshell (and the risk that the parent dies and removes it before the child writes to it), I again open fd 3 so the child can write its PGID to it.
#!/bin/bash -x
t=$(mktemp)
trap 'rm -f "$t"' EXIT
exec {fh}>"$t"
setsid bash -c 'ps -o pgid= $BASHPID >&3; exec 3>&-; exec "$@" &' -- "$@" 3>&${fh}
read gid <$t
echo "gid $gid"
ps -s $gid -o pid,ppid,pgid,command
kill -- -$gid
I have to ssh to a server to run some code that takes a long time to finish, so I need to use the nohup command.
I started multiple processes with nohup like this:
nohup julia test.jl > Output1.txt &
nohup julia test.jl > Output2.txt &
nohup julia test.jl > Output3.txt &
nohup julia test.jl > Output4.txt &
The problem is that I closed the terminal, and when I opened another terminal
I couldn't get the process names and IDs using jobs -l.
I tried using ps -p, but it gives the same answer, julia, for all of the above processes.
My question is: how can I tell which process is which? Note that only the output file name differs between these processes.
And: how can I prevent such a problem in the future?
Thanks for your time and answer.
One way to distinguish between these processes is through their stdout redirections, and there is no good way of doing that with the ps command.
If you have pgrep installed, you can use it with a simple for loop to find out which PID corresponds to which output file. Something like the following:
for pid in $(pgrep julia); do
    echo -n "$pid: "
    readlink -f /proc/${pid}/fd/1
done
/proc/${pid}/fd/1 represents the stdout of the process with that PID. It's a symlink, so you need readlink to see its target.
Output:
12345: /path/to/output1.txt
12349: /path/to/output2.txt
12350: /path/to/output3.txt
An alternative would be to use lsof -p $pid, but I find it a bit heavier than what you want to achieve; the output would be the same.
for pid in $(pgrep julia); do
    lsof -p $pid | awk -v var=$pid '/1w/ {print var": "$9}'
done
To find the PIDs of such processes, you can use fuser
$ fuser /path/to/outputfile
or lsof
$ lsof | grep "outputfile"
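For instance, a quick loop over the output files (a sketch, assuming they all follow the Output*.txt pattern and live in the current directory) shows which PID owns which file:

for f in Output*.txt; do
    printf '%s: ' "$f"
    fuser "$f" 2>/dev/null   # fuser prints only the matching PIDs on stdout
    echo
done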
In order to avoid such a situation in future, use GNU Screen on the server (https://linode.com/docs/networking/ssh/using-gnu-screen-to-manage-persistent-terminal-sessions/).
I have an arbitrary bash command being run that I want to attach some identifying comment to, so that I may pkill it if necessary.
For example:
sleep 1000 #uniqueHash93581
pkill -f '#uniqueHash93581'
... but the #uniqueHash93581 does not get interpreted, so pkill won't find the process.
Any way to pass this unique hash so that I may pkill the process?
Bash removes comments before running commands.
A workaround with Linux and GNU grep:
Prefix your command with a variable with a unique value
ID=uniqueHash93581 sleep 1000
Later, search for this variable to get the PID and kill the process:
grep -sa ID=uniqueHash93581 /proc/*/environ | cut -d '/' -f 3 | xargs kill
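If you do this often, the pattern can be wrapped in a pair of helpers. A sketch; the tagged_run and tagged_kill function names and the RUN_TAG variable are hypothetical, and the kill side assumes GNU grep and xargs as above:

# hypothetical helpers wrapping the tag-via-environment idea
tagged_run() {               # tagged_run <tag> <command> [args...]
    local tag=$1; shift
    RUN_TAG="$tag" "$@" &    # the tag lives only in the child's environment
}

tagged_kill() {              # tagged_kill <tag>
    grep -sal "RUN_TAG=$1" /proc/*/environ | cut -d/ -f3 | xargs -r kill
}

tagged_run uniqueHash93581 sleep 1000
tagged_kill uniqueHash93581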
exec the command in a subshell, and use the -a option to give it a recognizable name. For example:
$ (exec -a foobar sleep 1000) &
$ ps | grep foobar
893 ttys000 0:00.00 foobar 10
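Since exec -a rewrites argv[0], and pkill -f matches against the full command line, killing it by the new name should then work; a sketch:

# match the rewritten command line rather than the executable name
pkill -f '^foobar '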
Or, just run the job in the background and save its PID.
$ sleep 1000 & pid=$!
$ kill "$pid"
First of all, apologies for my English.
I have a running process on a server, and when I execute:
ps -aux | grep script.sh
I get a result like this:
root 28104 0.0 0.0 106096 1220 pts/7 S+ 08:27 0:00 /bin/bash ./script.sh
But the script is actually running from e.g. /home/user/my/program/script.sh.
So, how can I get the full path from which the script is running? I have many scripts whose names are exactly the same, but they run from different locations, and I need to know where a given script is running from.
Thanks for the reply!
Try the following script:
for each in $(pidof script.sh); do
    readlink /proc/$each/cwd
done
This finds the PIDs of all running script.sh instances and reads the corresponding cwd (current working directory) links from /proc.
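If you also want the script's own path as invoked, not just the working directory, /proc/PID/cmdline can be combined with the cwd link. A sketch, assuming the scripts are started with a relative path such as ./script.sh (pgrep -f is used because plain pidof may not match script names):

for pid in $(pgrep -f script.sh); do
    cwd=$(readlink /proc/$pid/cwd)
    # cmdline is NUL-separated; the second field is usually the script path as invoked
    script=$(tr '\0' '\n' < /proc/$pid/cmdline | sed -n 2p)
    echo "$pid: $cwd/${script#./}"
done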
Use pwdx:
usage: pwdx pid ...
(show process working directory)
For example,
pwdx 20102
where 20102 is the PID.
This will show the working directory of that process.
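pwdx can also be combined with a PID lookup in one line; a sketch, assuming pgrep is available and the script name is script.sh:

# prints "<pid>: <working directory>" for every matching process
pwdx $(pgrep -f script.sh)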
#!/bin/bash
# declare the associative array with PID as key and process directory as value
declare -A dirr
# this gets the PIDs of the script
pid_proc=($(ps -eaf | grep "$1.sh" | grep -v "grep" | awk '{print $2}'))
for PID in "${pid_proc[@]}"
do
    # using Debasish's method
    dirr[$PID]=$(pwdx $PID)
    # below are other ways to get the CWD of a running process
    # using user1984289's method
    #dirr[$PID]=$(readlink /proc/"$PID"/cwd)
    #dirr[$PID]=$(cd /proc/$PID/cwd; /bin/pwd)
done
# iterate over the keys of the associative array and print each working directory
for PID in "${!dirr[@]}"
do
    echo "The script '$1.sh' with PID '$PID' is in the directory '${dirr[$PID]}'"
done
Use pgrep to get the PIDs of your instances, and then read the link of the associated CWD directory. Basically, the same approach as @user1984289 but using pgrep instead of pidof, which does not match bash script names on my system (even with the -x option):
for pid in $(pgrep -f foo.sh); do readlink /proc/$pid/cwd; done
Just change foo.sh to the name of your script.
I have a very simple problem: when I run a shell script I start a program which
runs in an infinite loop. After a while I want to stop this program before I can start
it again with different parameters. The question now is: how do I find out the PID of
the program when I execute it? Basically, I want to do something like this:
echo "Executing app1 with param1"
./app1 param1 &
echo "Executing app1"
# ... do some other stuff
# somehow kill app1
echo "Execution of app1 finished!"
Thanks!
In most shells (including Bourne and C), the PID of the last subprocess you launched in the background will be stored in the special variable $!.
#!/bin/bash
./app1 &
PID=$!
# ...
kill $PID
There is some information here under the Special Variables section.
In bash $! expands to the PID of the last process started in the background. So you can do:
./app1 param1 &
APP1PID=$!
# ...
kill $APP1PID
If you want to find out the PID of a process, you can use ps:
[user@desktop ~]$ ps h -o pid -C app1
The parameter -o pid says that you only want the PID of the process, -C app1 specifies the name of the process you want to query, and the parameter h suppresses the header of the result table (without it, you'd see a "PID" header above the PID itself). Note that if there's more than one process with the same name, all the PIDs will be shown.
If you want to kill that process, you might want to use:
[user@desktop ~]$ kill `ps h -o pid -C app1`
Although killall is cleaner if you just want to do that (and if you don't mind killing all "app1" processes). You can also use head or tail if you want only the first or last PID, respectively.
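For instance, keeping only the first listed PID might look like this (a sketch; app1 stands in for your process name):

# kill only the first matching process listed by ps
kill $(ps h -o pid -C app1 | head -n 1)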
And a tip for fish users: %process is replaced with the PID of process. So, in fish, you could use:
user@desktop ~> kill %app1
You obtain the PID of app1 with
ps ux | awk '/app1/ && !/awk/ {print $2}'
and then you should be able to kill it (however, if you have several instances of app1, you may kill them all).
pidof app1
pkill -f app1
killall app1
I had a problem where the process I was killing was a Python script, and I had another script that was also running Python. I did not want to kill python because of the other script.
I used awk to deal with this (let myscript be your python script):
kill $(ps -ef | grep 'python myscript.py' | awk '!/awk/ && !/grep/ {print $2}')
Might not be as efficient but I'd rather trade efficiency for versatility in a task like this.
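For comparison, the same match can usually be expressed with pkill -f (a sketch, assuming the process was started exactly as python myscript.py and no other command line contains that string):

# -f matches against the full command line, not just the process name
pkill -f 'python myscript.py'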