Bash script works locally but not on a server - linux

This Bash script is called from a SlackBot via a Python script that uses the paramiko library. It works perfectly when I run the script locally from the bot: it runs the application and then kills the process after the allotted time. But when I run the same script on a server from the SlackBot, it doesn't kill the process. Any suggestions? The script name is "runslack.sh", which is what the grep command below matches.
#!/bin/bash
slack-export-viewer -z "file.zip"
sleep 10
ps aux | grep runslack.sh | awk '{print $2}' | xargs kill

Please try this and let me know if it helps:
ps -ef | grep runslack.sh | grep -v grep | awk '{print $2}' | xargs -i kill -9 {}
Also, make sure your user has sufficient permission to kill this process in your server environment.
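If grepping for the script's own name keeps proving fragile on the server, a minimal sketch of an alternative (assuming the same slack-export-viewer call and 10-second window from your script) is to background the viewer and kill its PID directly, so no ps/grep matching is involved at all:
#!/bin/bash
# Start the viewer in the background and remember its PID.
slack-export-viewer -z "file.zip" &
viewer_pid=$!
# Give it the allotted time, then kill that specific process.
sleep 10
kill "$viewer_pid" 2>/dev/null || echo "viewer already exited"
This also sidesteps any difference between how ps reports the script locally versus on the server.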

Related

Why does my kill .sh script sometimes not kill the intended processes?

I have a .sh script that calls a number of other .sh scripts, tees their output into log files, and runs them in the background:
startMyProg.sh:
#!/bin/bash
./MyProg1.sh | tee /path/to/log/prog1_`date +\%Y\%m\%d_\%H\%M\%S`.log &
./MyProg2.sh | tee /path/to/log/prog2_`date +\%Y\%m\%d_\%H\%M\%S`.log &
./MyProgN.sh | tee /path/to/log/progN_`date +\%Y\%m\%d_\%H\%M\%S`.log &
I also have a simple helper script that will kill all the processes with MyProg in the name:
killMyProgs.sh:
#!/bin/bash
kill $(ps aux | grep MyProg | awk '{print $2}')
This system generally works, but occasionally the killMyProg.sh script doesn't kill the processes that it finds using the ps|grep|awk pattern. The part that really throws me for a loop is, when I face an instance where the .sh script doesn't kill the processes, I can call kill $(ps aux | grep MyProg | awk '{print $2}') directly from the command line and it will do what I expect it to! Is there something that I'm missing in my approach? Are there any useful debugging techniques that can help me figure out why my .sh script doesn't kill the processes but calling the exact command from the command line does?
A couple of details that may be relevant:
the "./MyProgN" scripts are calls to to start the same MyProg.jar file with different inputs. So the ps|grep of "MyProg" shows both the .sh scripts AND the java applications that they started and kills all of them.
Using RHEL7
Try running this a few times:
ps aux | grep MyProg | awk '{print $2}'
You will notice that the grep command is listed in the ps aux output as well, sometimes before MyProg and sometimes after it (depending on the PID). So sometimes your script ends up targeting that grep command's PID instead of your command's.
The easiest solution is to use the pkill command:
pkill -9 -f MyProg
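If pkill -9 feels too blunt, a hedged variation (assuming Linux procps pgrep/pkill, as on RHEL7) is to preview what the pattern matches first and only escalate to -9 if a plain TERM is not enough:
# Show PID plus full command line for everything -f would match.
pgrep -af MyProg
# Ask the processes to exit cleanly first...
pkill -f MyProg
sleep 5
# ...then force-kill only whatever is still running.
pgrep -f MyProg > /dev/null && pkill -9 -f MyProg
The pgrep preview is also a handy way to debug the original ps|grep|awk pipeline: it shows exactly which PIDs would have been passed to kill.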

.bashrc saves previous process id and does not update in alias commands

I have made an alias in .bashrc to kill my python service.py & process
alias servicestop="kill $(ps -ef | grep -w service.py | grep -v grep | awk '{print $2}')"
The first time I run the servicestop command it kills the process, but when I start python service.py & again and then run servicestop, it gives an error.
After some research I found the following: the first time I ran python service.py &, its process ID was 512, and servicestop killed that process (512). The second time I ran python service.py &, its process ID was 546 (it will of course be different each time). This time servicestop gave the following error:
-bash: kill: (512) - No such process
That means $(ps -ef | grep -w service.py | grep -v grep | awk '{print $2}') returned the previous PID, which had already been killed.
As a workaround, whenever I want to run servicestop I first have to run source .bashrc, and then servicestop works. Please suggest a solution if one is possible.
Remove the servicestop alias from your .bashrc and add this function instead:
servicestop(){
kill $(ps -ef | grep -w service.py | grep -v grep | awk '{print $2}');
}
The reason the alias fails is that its body is in double quotes, so the command substitution $(...) runs once, when .bashrc is sourced, and the old PID gets baked into the alias; a function re-evaluates it on every call. In a way, functions in .bashrc are "aliases 2.0": simply better.
Better: the same function, but with the name of the script to kill as a parameter:
servicestop(){
kill $(ps -ef | grep -w $1 | grep -v grep | awk '{print $2}');
}
Use it like this:
servicestop service.py
servicestop otherSuperService.py
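For completeness, here is the same timing difference shown with aliases only, using the service.py example from the question (the function above is still the cleaner fix):
# Expanded when .bashrc is sourced: the old PID gets baked into the alias.
alias servicestop="kill $(ps -ef | grep -w service.py | grep -v grep | awk '{print $2}')"
# Expanded only when the alias is used (note the single quotes around the body).
alias servicestop='kill $(ps -ef | grep -w service.py | grep -v grep | awk '\''{print $2}'\'')'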

How can I kill a process in a shell script

My bash script has:
ps aux | grep foo.jar | grep -v grep | awk '{print $2}' | xargs kill
However, I get the following when running:
usage: kill [ -s signal | -p ] [ -a ] pid ...
kill -l [ signal ]
Any ideas on how to fix this line?
In general, your command is correct. If a foo.jar process is running, its PID will be passed to kill and (should) terminate.
Since you're getting kill's usage as output, it means you're actually calling kill with no arguments (try just running kill on its own, you'll see the same message). That means that there's no output in the pipeline actually reaching xargs, which in turn means foo.jar is not running.
Try running ps aux | grep foo.jar | grep -v grep and see if you're actually seeing results.
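If you want the script to cope quietly with the case where foo.jar is not running, a small variation (assuming GNU xargs, as found on most Linux systems) is the -r / --no-run-if-empty flag, which skips the command entirely when the pipeline produces nothing; BSD xargs already behaves that way by default:
# No usage message when nothing matches: xargs simply never runs kill.
ps aux | grep foo.jar | grep -v grep | awk '{print $2}' | xargs -r kill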
As much as you may enjoy a half dozen pipes in your commands, you may want to look at the pkill command!
DESCRIPTION
The pkill command searches the process table on the running system and signals all processes that match the criteria
given on the command line.
For example:
pkill -f foo.jar
(The -f flag matches against the full command line; without it, pkill only matches the process name, which for a jar will usually be java.)
Untested and a guess at best (be careful):
kill -9 $(ps aux | grep foo.jar | grep -v grep | awk '{print $2}')
I reiterate UNTESTED, as I'm not at work and have no access to PuTTY or Unix.
My theory is to send kill -9 and get the process ID from a command substitution.

Awk not working inside bash script

I'm trying to write a bash script that takes input from the user and executes a kill command to stop a specific Tomcat.
...
read user_input
if [ "$user_input" = "2" ]
then
ps -ef | grep "search-tomcat" |awk {'"'"'print $2'"'"'}| xargs kill -9
echo "Search Tomcat Shut Down"
fi
...
I have confirmed that the line
ps -ef | grep "search-tomcat"
works fine in script but:
ps -ef | grep "search-tomcat" |awk {'"'"'print $2'"'"'}
doesn't yield any results in the script, but gives the desired output in the terminal, so there has to be some problem with the awk command
xargs can be tricky - Try:
kill -9 $(ps -ef | awk '/search-tomcat/ {print $2}')
If you prefer using xargs, check the man page for the options available on your target OS (e.g. xargs -n).
Also note that kill -9 is a non-graceful process exit mechanism (possible file corruption, other strangeness), so I suggest using it only as a last resort...
:)
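Along those lines, a rough sketch of a graceful-then-forceful shutdown (assuming a single matching "search-tomcat" process, as in the question) might look like:
# Grab the Tomcat PID, excluding the awk process itself from the match.
pid=$(ps -ef | awk '/search-tomcat/ && !/awk/ {print $2}' | head -n 1)
if [ -n "$pid" ]; then
    kill "$pid"                                   # polite SIGTERM first
    sleep 10                                      # give Tomcat time to stop
    kill -0 "$pid" 2>/dev/null && kill -9 "$pid"  # force only if still alive
fi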

How do I get "awk" to work correctly within a "su -c" command?

I'm running a script at the end of a Jenkins build to restart Tomcat. Tomcat's shutdown.sh script is widely known not to work in many instances, so my script is supposed to capture the PID of the Tomcat process and then attempt to shut it down manually. Here is the command I'm using to capture the PID:
ps -ef | grep Bootstrap | grep -v grep | awk '{print $2}' > tomcat.pid
When run manually, this retrieves the PID perfectly. During the Jenkins build I have to switch users to run the command, using "su user -c 'commands'" like this:
su user -c "ps -ef | grep Bootstrap | grep -v grep | awk '{print $2}' > tomcat.pid"
Whenever I do this, however, the "awk" portion doesn't seem to be working. Instead of just retrieving the PID, it captures the entire process information. Why is this? How can I fix the command?
The issue is that $2 is being processed by the original shell before being sent to the new user. Since the value of $2 in the shell is blank, the awk command at the target shell essentially becomes awk {print }. To fix it, you just escape the $2:
su user -c "pushd $TOMCAT_HOME;ps -ef | grep Bootstrap | grep -v grep | awk '{print \$2}' > $TOMCAT_HOME/bin/tomcat.pid"
Note that you do want $TOMCAT_HOME to be expanded by the original shell, so that its value is set properly.
You don't need the pushd command, and you can avoid the escaping issue entirely by replacing the awk command with:
cut -d\  -f2
Note: there are two spaces between -d\ and -f2; the first, escaped space is the field delimiter.
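One way to double-check quoting problems like this (a small debugging aid, not part of the original answer) is to build the command string in a variable and echo it before handing it to su, so you can see exactly what the inner shell will receive:
# The \$ stops the outer shell from expanding $2 during the assignment,
# so echo should show awk '{print $2}' rather than awk '{print }'.
cmd="ps -ef | grep Bootstrap | grep -v grep | awk '{print \$2}' > tomcat.pid"
echo "$cmd"
su user -c "$cmd"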
