How to run gdb on httpd processes within a shell script - linux

I would like to get all my httpd processes, put them in an array, then run gdb on each process, run it as a cron job, and save the output to a file. For instance:
#!/bin/bash
# Make a list of current httpd PIDs and then run "gdb" on each one
pids=( $(pgrep 'httpd') )
for each in "${pids[@]}"
do
echo "$each"
gdb httpd "$each" >> gdbscript.out
echo "Done with: $each"
done
When I run it, it only runs on the first PID.
# ./gdbscript
2046
Then it just stops after each PID is processed, because it seems gdb stops at something like a breakpoint after processing each PID.
I want to run it overnight a few times via cron.
Is there a better approach to running gdb on a list of active httpd processes via cron and outputting to a file(s)?
Thanks
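One way to keep gdb from stopping at its interactive prompt is batch mode. A minimal sketch, assuming a backtrace is the output you are after (the gdb commands and output file name are illustrative, not from the question):
#!/bin/bash
# Attach gdb non-interactively to every httpd process and dump a backtrace
for pid in $(pgrep 'httpd'); do
echo "== $pid ==" >> gdbscript.out
gdb -p "$pid" -batch -ex "thread apply all bt" >> gdbscript.out 2>&1
done
Because -batch makes gdb exit after running the -ex commands, the loop moves on to the next PID instead of waiting at a prompt.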

Related

How do I stop a script running in the background in Linux?

Let's say I have a silly script:
while true; do
touch ~/test_file
sleep 3
done
And I start the script into the background and leave the terminal:
chmod u+x silly_script.sh
./silly_script.sh &
exit
Is there a way for me to identify and stop that script now? The way I see it, every command is started in its own process, and I might be able to catch and kill one command like the 'sleep 3', but not the execution of the entire script. Am I mistaken? I expected a process to appear with the script's name, but it does not. If I start the script with 'source silly_script.sh', I can't find a process by the name of 'source'. Do I need to identify the instance of bash that is executing the script? How would I do that?
EDIT: There have been a few creative solutions, but so far they require the PID of the script execution to be stored right away, or the bash session to not be left with ^D or exit. I understand that this way of running scripts should maybe be avoided, but I find it hard to believe that any low-privilege user could, even by accident, start an annoying script in the background (one that is, for instance, filling the drive with garbage files or repeatedly starting new instances of some software), and even the admin would have no option other than restarting the server, because a simple script can hide its identifier without even trying.
With the help of the fine people here I was able to derive the answer I needed:
It is true that the script runs every command in its own process, so for instance killing the sleep 3 command won't do anything to the script being run. But through a command like the sleep 3 you can find the bash instance running the script, by looking for its parent process:
You can run ps axf to show all processes in tree form. You will then find a section like this:
18660 ? S 0:00 /bin/bash
18696 ? S 0:00 \_ sleep 3
Now you have found the bash instance that is running the script and can stop it: kill 18660
(Of course your PID will be different from mine)
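The same lookup can be scripted. A minimal sketch, assuming the script's loop still contains the sleep 3 from the question (the pgrep pattern is illustrative):
# Find the sleep started by the script, then kill its parent (the bash running the script)
sleep_pid=$(pgrep -f 'sleep 3' | head -n 1)
script_pid=$(ps -o ppid= -p "$sleep_pid" | tr -d ' ')
kill "$script_pid"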
The jobs command will show you all running background jobs.
You can kill background jobs by id using kill, e.g.:
$ sleep 9999 &
[1] 58730
$ jobs
[1]+ Running sleep 9999 &
$ kill %1
[1]+ Terminated sleep 9999
$ jobs
$
58730 is the PID of the backgrounded task, and 1 is its job id. In this case kill 58730 and kill %1 would have the same effect.
See the JOB CONTROL section of man bash for more info.
When you exit, the backgrounded job will get a kill signal and die (assuming that's how it handles the signal - in your simple example it is), unless you disown it first.
That kill will propagate to the sleep process, which may well ignore it and continue sleeping. If this is the case you'll still see it in ps -e output, but with a parent PID of 1, indicating its original parent no longer exists.
You can use ps -o ppid= <pid> to find the parent of a process, or pstree -ap to visualise the job hierarchy and find the parent visually.
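As noted above, disown is what keeps a background job alive past the shell's exit. A minimal sketch:
$ sleep 9999 &
$ disown %1
After disown, job 1 is dropped from the shell's job table, so the shell no longer signals it when you exit.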

How to notify another shell script if a running shell script is stopped?

I run a shell script, say script1. I pause the execution of script1 and give the bg command so it starts running in the background. I start another script, script2, which is again paused and made to run in the background. Now I give the pkill -f script1.sh command to kill script1.
What I want is that when I kill script1, script2 should come to know about this and then start running another script, script3. How can this be done?
There are different ways of doing this. The simplest is:
script1
^z
bg
script2 $!
And in script2 at some strategic points:
s1pid=$1
...
if ! ps "$s1pid" > /dev/null ; then
script3    # script1 is no longer running
fi
...
You can also use a PID file. This works well if there is only a single instance of script1 running system-wide, or if you do some tricks in naming or placing the PID file (e.g. put the PID file in the current directory). You can use the PID in the PID file to test whether script1 still runs.
If you are the only one who runs this script, and you can name it anything you want, you can perhaps give it some unique name. In that case, you can do:
if ! ps -ef | grep '[s]ome_unique_name' > /dev/null ; then   # the [s] keeps grep from matching itself
script3    # the uniquely named script1 has exited
fi
You can also lock the PID file with flock, as @jhnc suggested.
If you do not have strategic points in script2 where you can poll for script1, you might get some sub-process to poll it in the background.
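A minimal sketch of the PID-file idea mentioned above (the path /tmp/script1.pid is illustrative):
# In script1.sh: record our own PID at startup
echo $$ > /tmp/script1.pid
# In script2.sh, at a strategic point: start script3 once script1 is gone
s1pid=$(cat /tmp/script1.pid)
if ! ps -p "$s1pid" > /dev/null ; then
script3
fi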

nohup "does not work" MPIrun

I am trying to use the "nohup" command to avoid killing a background process when exiting the terminal on Linux MATE.
The process I want to run is a MPIrun process and I use the following command:
nohup mpirun -np 8 solverName -parallel >log 2>&1
When I leave the terminal, the processes running on the different cores are killed.
Another thing I noticed in the log file is that if I just run the following command
mpirun -np 8 solverName -parallel >log 2>&1
and then press CTRL+Z (stopping the process), the log file indicates:
Forwarding signal 20 to job
and I am unable to actually stop the mpirun command. So I guess there is something I don't understand about what I am doing.
The job run in the background is still owned by your login shell (the nohup command doesn't exit until the mpirun command terminates), so it gets signalled when you disconnect. This script (I call it bk) is what I use:
#!/bin/sh
#
# "@(#)$Id: bk.sh,v 1.9 2008/06/25 16:43:25 jleffler Exp $"
#
# Run process in background
# Immune from logoffs -- output to file log
(
echo "Date: `date`"
echo "Command: $*"
nice nohup "$@"
echo "Completed: `date`"
echo
) >>${LOGFILE:=log} 2>&1 &
(If you're into curiosities, note the careful use of $* and "$@". The nice runs the job at a lower priority when I'm not there. And version 1.1 was checked into version control (SCCS at the time) on 1987-08-10.)
For your process, you'd run:
$ bk mpirun -np 8 solverName -parallel
$
The prompt returns almost immediately. The key differences between what is in that code and what you do directly from the command line are:
There's a sub-process for the shell script, which terminates promptly.
The script itself runs the command in a sub-shell in background.
Between them, these mean that the process is not interfered with by your login shell; it doesn't know about the grandchild process.
Running it directly on the command line, you'd write:
(nohup mpirun -np 8 solverName -parallel >log 2>&1 &)
The parentheses start a subshell; the sub-shell runs nohup in the background with I/O redirection and terminates. The continuing command is a grandchild of your login shell and is not interfered with by your login shell.
I'm not an expert in mpirun, never having used it, so there's a chance it does something I'm not expecting. My impression from the manual page is that it acts more or less like a regular process even though it can run multiple other processes, possibly on multiple nodes. That is, it runs the other processes but monitors and coordinates them and only exits when its children are complete. If that's correct, then what I've outlined is accurate enough.
To kill the process you need the following commands.
First:
$ jobs -l
this gives you the PID of the process like this
[1]+ 47274 Running nohup mpirun -np 8 solverName -parallel >log 2>&1
Then execute the following command to kill the process:
kill -9 47274    # use the PID shown by jobs -l
This will kill the process.
Note that Ctrl+Z does not kill the process; it only suspends it.
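For completeness, a stopped job can be resumed or terminated by its job id. A minimal sketch (the job id %1 is illustrative):
fg %1      # resume the stopped job in the foreground, or
bg %1      # let it continue in the background, or
kill %1    # terminate it (try a plain kill before resorting to -9)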
For the first part of the question, I recommend trying this command and seeing whether it works:
nohup nohup mpirun -n 8 --your_flags ./compiled_solver_name > Output.txt &
It worked for me.
Tell us if it doesn't work for you.

Get the process ID in a shell script when a process is launched in the foreground

In a shell script I want to launch a program, get its PID, and save it in a temp file. But here I will launch the program in the foreground, and the shell will not move on while the process is running.
ex:
#!/bin/bash
myprogram &
echo "$!" > /tmp/pid
This works fine; I am able to get the PID of the launched process. But if I launch the program in the foreground, how do I get its PID?
ex:
#!/bin/bash
myprogram    # here, somehow, I want to know the PID before going to the next line
As I commented above, since your command is still running in the foreground, you cannot enter a new command in the same shell and go on to the next line.
However, while this command is running, if you want to get the process ID of this program from a different shell tab/window, then use pgrep like this:
pgrep -f "myprogram"
17113 # this # will be different for you :P
EDIT: Based on your comment "or is it possible to launch the program in the background, get the process ID, and then have the script wait until that process exits?"
Yes, that can be done using the wait command as follows:
myprogram &
mypid=$!
# do some other stuff and then
wait $mypid
You can't do this since your shell script isn't running -- the command you just launched in the foreground is.
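Putting the question's temp file together with the wait approach above, a minimal sketch (myprogram and /tmp/pid are the names from the question):
#!/bin/bash
myprogram &              # start in the background so the PID is available in $!
pid=$!
echo "$pid" > /tmp/pid   # save the PID, as in the original script
wait "$pid"              # block until myprogram exits, emulating a foreground run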

Run a script in the same shell (bash)

My problem is specific to the running of SPECCPU2006 (a benchmark suite).
After I installed the benchmark suite, I can invoke a command called "specinvoke" in the terminal to run a specific benchmark. I have another script, where part of the code is like the following:
cd (specific benchmark directory)
specinvoke &
pid=$!
My goal is to get the PID of the running task. However, by doing what is shown above, what I get is the PID of the "specinvoke" command, and the real running task has another PID.
However, by running specinvoke -n, the real command run by the specinvoke shell is written to stdout. For example, for one benchmark it looks like this:
# specinvoke r6392
# Invoked as: specinvoke -n
# timer ticks over every 1000 ns
# Use another -n on the command line to see chdir commands and env dump
# Starting run for copy #0
../run_base_ref_gcc43-64bit.0000/milc_base.gcc43-64bit < su3imp.in > su3imp.out 2>> su3imp.err
Inside, it is running a binary. The command differs from benchmark to benchmark (depending on which benchmark directory you invoke it from). And because "specinvoke" is installed and not just a script, I cannot use "source specinvoke".
So is there any clue? Is there any way to invoke the command directly in the same shell (with the same PID), or should I dump the specinvoke -n output and run that instead?
You can still do something like:
cd (specific benchmark directory)
specinvoke &
pid=$(pgrep milc_base.gcc43-64bit)
If there are several invocations of the milc_base.gcc43-64bit binary, you can still use
pid=$(pgrep -n milc_base.gcc43-64bit)
Which, according to the man page:
-n     Select only the newest (most recently started) of the matching processes
When the process is a direct child of the subshell:
ps -o pid= -C milc_base.gcc43-64bit --ppid $!
When it is not a direct child, you could get the info from pstree:
pstree -p $! | grep -o 'milc_base.gcc43-64bit(.*)'
Output from above (the PID is in parentheses): milc_base.gcc43-64bit(9837)
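If you want just the number from that pstree output, a minimal sketch using sed to extract it (still assuming the binary name from the question):
specinvoke &
pid=$(pstree -p $! | sed -n 's/.*milc_base\.gcc43-64bit(\([0-9]*\)).*/\1/p')
echo "$pid"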
