Multiple PIDs being stored in PID file - linux

I have a System V init script I've developed that starts a Java program. For some reason whenever the PID file gets created, it contains multiple PIDs instead of one.
Here's the relevant code that starts the service and writes to the PID file:
daemon --pidfile=$pidfile "$JAVA_CMD &" >> $logfile 2>&1
RETVAL=$?
usleep 500000
if [ $RETVAL -eq 0 ]; then
    touch "$lock"
    PID=$(ps aux | grep -vE 'grep|runuser|bash' | grep <myservice> | awk '{print $2}')
    echo $PID > $pidfile
fi
When I run the ps aux... command manually, it returns a single line. When it runs inside the script, the call apparently returns multiple PIDs.
Example contents in the PID file: 16601 16602 16609 16619 16690. 16619 is the actual process ID found when manually running the ps aux... command mentioned above.

Try reversing your greps. The first one (-vE) may run before the <myservice> grep starts up, so it cannot filter out the later processes in the pipeline. Grep for your service FIRST, then filter out the unwanted lines:
PID=$(ps aux | grep <myservice> | grep -vE 'grep|runuser|bash' | awk '{print $2}')
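To see why the order matters, here is the filter run over a snippet of simulated ps aux output (the PIDs and command lines are made up for illustration):

```shell
# Simulated `ps aux` output; PIDs and command lines are hypothetical
ps_output='root     16619  0.0  1.2 java -jar myservice.jar
root     16690  0.0  0.1 grep myservice
root     16601  0.0  0.1 bash /etc/init.d/myservice start'

# Match the service first, then strip the pipeline's own noise lines
printf '%s\n' "$ps_output" | grep myservice | grep -vE 'grep|bash' | awk '{print $2}'
# → 16619
```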

I encountered the same issue, though with a different statement; it looked like this:
PID="$(ps -ef|grep command|grep options|grep -v grep|awk '{print $2}')"
in which I used the same grep order as @Marc suggested in the first answer, but did not filter out all the unwanted lines.
So I tried the below one and it worked:
PID="$(ps -ef|grep command|grep options|grep -vE 'grep|runuser|bash'|awk '{print $2}')"

Related

Display the name of all running processes in Linux in a file using a bash script

I need to display the name of all running processes in Linux in a file using a bash script. I wrote the code, but didn't succeed:
#!/bin/bash
for i in `ps aux| awk '{print $5}'`;
echo $i > /tmp/test;
done
Need your assistance, Thanks.
When using for, the syntax is slightly different:
#!/bin/sh
cat /dev/null > /tmp/test
for i in $(ps aux | awk '{print $5}'); do
echo $i >> /tmp/test;
done
You missed the do keyword.
Also, the output redirection > inside the loop should be the appending >>, otherwise only the last value of the loop is saved.
But as @stark said, the for is not required:
#!/bin/sh
ps aux | awk '{print $5}' > /tmp/test;
I'm not sure what your output should look like. With your template, and the fixes from Glauco Leme, I only got the VSZ of all the processes.
I assume you need the cmd of each process, then you just can use ps -e --no-headers --format cmd.
In case you need it in a file:
ps -e --no-headers --format cmd > /tmp/test
I hope this will do what you need.
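For reference, the name-only and full-command-line formats can be compared side by side (the output will of course vary by system):

```shell
# `comm` prints just the executable name; `args` (alias `cmd`) prints the full command line
ps -e --no-headers -o comm | head -n 3
ps -e --no-headers -o args | head -n 3
```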

Bash script kill command in for loop

I want to kill all processes containing some string. I wrote a script to do this. However, when I execute it, it receives a "Killed" signal after the first iteration of the for loop. This is my code:
#!/bin/bash
executeCommand () {
local pname="$1";
echo $HOSTNAME;
local search_terms=($(ps aux | grep $pname | awk '{print $2}'))
for pros in "${search_terms[@]}"; do
kill -9 "$pros"
echo $pros
done
exit
}
executeCommand "$1" # get the string that process to be killed contains
I execute it like ./my_script.sh zookeeper.
When I delete the line containing the kill command, the for loop executes until the end; otherwise, after the first kill command, I get "Killed" as output and the program exits.
What is the possible reason for this, and is there another solution to reach my goal?
The silly (faulty, buggy) way to do this is to add grep -v grep to your pipeline:
# ${0##*/} expands to the name of the running script
# ...thus, we avoid killing either grep, or the script itself
ps aux | grep -e "$pname" | egrep -v "grep|${0##*/}" | awk '{print $2}'
The better way is to use a tool built for the job:
# pkill already, automatically, avoids killing any of its parent processes
pkill "$pname"
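If you want to check what pkill would match before actually killing anything, pgrep uses the same matching rules ("myservice" here is a hypothetical process name):

```shell
# Dry-run the match with pgrep before reaching for pkill
pgrep -x myservice || echo "no exact-name match"        # -x: exact name match only
pgrep -f 'java .*myservice' || echo "no cmdline match"  # -f: match the full command line
```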
That said, matching processes by name is a bad practice to start with -- you'll also kill less yourproc.log or vim yourproc.conf, not just yourproc. Don't do it; instead, use a proper process supervision system (upstart, DJB daemontools, Apple launchd, systemd, etc) to monitor your long-running daemons and kill or restart them when needed.
By the way -- there's no need for a for loop at all: kill can be passed multiple PIDs on a single invocation, like so:
# a bit longer and bash-specific, but avoids globbing
IFS=$'\n' read -r -d '' -a pids \
< <(ps auxw | awk -v proc="$pname" -v preserve="${0##*/}" \
'$0 ~ proc && $0 !~ preserve && ! /awk/ { print $2 }' \
&& printf '\0')
kill -- "${pids[#]}"
...which could also be formulated as something like:
# setting IFS and running `set -f` necessary to make unquoted expansion safe
( IFS=$'\n'; set -f; exec kill -- \
$(ps auxw | awk -v proc="$pname" -v preserve="${0##*/}" \
'$0 ~ proc && $0 !~ preserve && ! /awk/ { print $2 }') )
grep will show its own process in the list; it should be removed using the grep -v option.
Try it like this:
for i in ` ps -ef | grep "$pname" | grep -v grep | awk '{print $2}'`
do
kill -9 $i
done

Awk not working inside bash script

I'm trying to write a bash script that takes input from the user and executes a kill command to stop a specific Tomcat.
...
read user_input
if [ "$user_input" = "2" ]
then
ps -ef | grep "search-tomcat" |awk {'"'"'print $2'"'"'}| xargs kill -9
echo "Search Tomcat Shut Down"
fi
...
I have confirmed that the line
ps -ef | grep "search-tomcat"
works fine in the script, but:
ps -ef | grep "search-tomcat" |awk {'"'"'print $2'"'"'}
doesn't yield any results in the script, yet gives the desired output in the terminal, so there has to be some problem with the awk command.
xargs can be tricky - Try:
kill -9 $(ps -ef | awk '/search-tomcat/ {print $2}')
If you prefer using xargs then check man page for options for your target OS (i.e. xargs -n.)
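As a concrete illustration of one such option, GNU xargs supports -r (--no-run-if-empty), which prevents the command from being invoked at all when the PID list comes back empty (shown here with echo standing in for kill, for safety):

```shell
# With -r, an empty input runs nothing; a non-empty input runs the command once
printf '' | xargs -r echo kill            # prints nothing
printf '12345\n' | xargs -r echo kill     # → kill 12345
```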
Also note that kill -9 is a non-graceful process exit mechanism (i.e. possible file corruption, other strangeness), so I suggest using it only as a last resort...
:)

Linux Script to selectively kill processes

I'm looking at a way to automate the following:
Run ps -ef to list all processes.
Filter out those rows containing java in the CMD column.
Filter out those rows containing root in the UID column.
For each of the filtered rows, get the PID column and run pargs <PID>.
If the output of pargs <PID> contains a particular string XYZ, then issue a kill -9 <PID> command.
To filter out rows based on specific column values, is there a better way than grep? I can use
ps -ef | awk '{print $1}' | grep <UID>
but then I lose info from all other columns. The closest thing I have right now is:
ps -ef | grep java | grep root | grep -v grep | xargs pargs | ?????
EDIT
I was able to solve the problem by using the following script:
ps -ef | awk '/[j]ava/ && /root/ {print $2}' | while read PID; do
pargs "$PID" | grep "Args" > /dev/null && kill -9 $PID && echo "$PID : Java process killed!"
done
Both anubhava's and kojiro's answers helped me get there. But since I can only accept one answer, I tagged kojiro's as the correct one since it helped me a bit more.
Consider pgrep:
pgrep -U 0 java | while read pid; do
pargs "$pid" | grep -qF XYZ && kill "$pid"
done
pgrep and pkill are available on many Linux systems and as part of the "proctools" packages for *BSDs and OS X.
You can reduce all grep by using awk:
ps -ef | awk '/[j]ava/ && /root/ {print $1}' | xargs pargs
Searching for pattern /[j]ava/ will skip this awk process from output of ps.
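A minimal demonstration of the bracket trick on canned input (the fake ps lines are made up for illustration): /[j]ava/ matches the literal text "java", but the awk process's own command line, as ps would show it, contains "[j]ava", which the pattern does not match, so awk never lists itself and no `grep -v grep` is needed.

```shell
# A line containing "java" matches and its PID is printed
echo 'root 123 java -jar app.jar' | awk '/[j]ava/ {print $2}'   # → 123
# The awk command's own line contains "[j]ava", not "java", so it is skipped
echo 'root 456 awk /[j]ava/'      | awk '/[j]ava/ {print $2}'   # no output
```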
You can also use pkill if it is available on your system.

how to enable some commands during run time in a shell script

I have written a shell script that runs the jstack command for a particular process ID (PID).
But sometimes there may be multiple PIDs for Java processes on a server.
In that case I want to run that many jstack commands, giving the respective PIDs as input.
E.g. if one application has 2 servers (1 Tomcat and 1 JBoss), then I need to run 2 jstack commands to capture 2 different logs for the 2 processes.
So how can the script automatically detect how many Java PIDs there are and run the commands written inside it for each of them?
My script gets all the active PIDs with
PID1=$(ps -ef|grep java|grep jboss| awk '{print $2}' )
and
PID2=$(ps -ef|grep java|grep tomcat| awk '{print $2}' )
after that I run the jstack commands as
jstack $PID1 > jStack1.txt & and jstack $PID2 > jStack2.txt &
To get the pid you can just use pgrep instead of ps/grep/grep/awk:
for pid in $(pgrep -f "tomcat|jboss")
do
jstack $pid >> jStack1.txt
done
You need to combine the PIDs into one list and loop over them.
So something like this to get a separate file for each PID:
for pid in $( ps -ef | egrep "tomcat|jboss" | awk '{print $2}')
do
jstack $pid > jstack.$pid.txt
done
Following on from your last comment:
I'm not sure what you are trying to do with the array and the multiple jstack calls in the loop: it will iterate once for each PID rather than give you two PIDs per iteration, the $0 and $1 indices don't make sense (did you mean just 0 and 1?), and you use $N each time while its increment is commented out, so it stays at 0.
If you are sure there can only be two PIDs, one for Tomcat and one for JBoss, then your initial code with sleeps added would do it:
#!/bin/bash
Sleep1=$1
# sleep for the first requested time
sleep $Sleep1
# do the tomcat jstack
PID1=$(ps -ef | grep java| grep tomcat | awk '{print $2}')
jstack $PID1 > jstack.tomcat.$PID1.txt
# sleep for another 60secs
sleep 60
# do the jboss jstack
PID2=$(ps -ef | grep java| egrep "jboss|JBoss" | awk '{print $2}')
jstack $PID2 > jstack.jboss.$PID2.txt
If there can be multiple tomcat processes and multiple jboss processes, then you need two loops:
#!/bin/bash
Sleep1=$1
# sleep for the first requested time
sleep $Sleep1
# Do all the tomcat jstacks
for pid in $(ps -ef | grep java| grep "tomcat" | awk '{print $2}')
do
jstack $pid > jstack.tomcat.${pid}.txt
done
# sleep for another 60secs
sleep 60
# Do all the jboss jstacks
for pid in $(ps -ef | grep java| egrep "jboss|JBoss" | awk '{print $2}')
do
jstack $pid > jstack.jboss.${pid}.txt
done
Or some combinations of these methods could be used depending on exactly what you are after.
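For instance, the two loops can be collapsed into one by looping over the service names. A sketch with mocked PID lists (a real script would fill pids with pgrep -f "$name" or the ps pipeline above, and call the real jstack instead of echo):

```shell
# One loop over both service names; the PID lists here are hypothetical
for name in tomcat jboss; do
    case $name in
        tomcat) pids="101 102" ;;   # mocked; really: pids=$(pgrep -f tomcat)
        jboss)  pids="201" ;;       # mocked; really: pids=$(pgrep -f jboss)
    esac
    for pid in $pids; do
        echo "jstack $pid > jstack.$name.$pid.txt"   # echo stands in for the real jstack call
    done
done
```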
