Find process ID of a background process - linux

I am executing an infinite while loop from the command line that makes a REST call using curl. I'm redirecting the REST output to a file. I appended an "&" at the end of the command so that it runs in the background.
Unfortunately, the terminal was closed.
But I can see that the process is still continuously writing output to the file.
However, I don't see any background jobs when I run the jobs command.
I even used ps/lsof/fuser to find the process ID so that I could kill it manually, but none of those commands returned the process ID.
I also tried changing the file to read-only mode, but I still see the file continuously growing.
At last I found the chattr command, which can prevent the file from being written to.
But in this case, where would I find the process ID that is responsible?
touch /var/log/test_mon_vm.log
while true; do
    echo "start....." >> /var/log/test_mon_vm.log
    echo "$(date)" >> /var/log/test_mon_vm.log
    /usr/bin/curl -I --user rhqadmin:rhqadmin 'http://localhost:7080/rest/plugins?name=Samba' 2>/dev/null | head -n 1 >> /var/log/test_mon_vm.log
    echo "end....." >> /var/log/test_mon_vm.log
    echo >> /var/log/test_mon_vm.log
    echo >> /var/log/test_mon_vm.log
    sleep 1
done &

You can find the ID with
ps -fea | grep [your USER]
then search the resulting list for your command. If you ran it from a script, grep for the script name instead of your user.

You can use fg to bring the job back to the foreground:
Background Processes in Linux
Once the background process is in the foreground, you can kill it using Ctrl+C.
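For example, from the shell that launched the loop (note that jobs and fg only know about jobs started by the current shell, which is why this no longer works once the original terminal has exited):
jobs     # list this shell's background jobs
fg %1    # bring job number 1 back to the foreground
Then press Ctrl+C to kill it.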
I hope this helps! Have a nice day!

You can try
ps aux | grep "command-you-ran"
e.g.
ps aux | grep "curl"

Related

Get PID in bash file with open screen

I am a beginner in bash programming. I want to obtain the PIDs of processes, in order to use trap and kill to receive and send signals to a program from the same file.
In particular, I start the program by opening a screen in this way:
screen -d -m "start program"
process_id=`/bin/ps -fu $USER | grep "program" | grep -v "grep" | awk '{print $2}'`
The variable process_id contains two PIDs, not one. If I run the program without a screen, I don't have this issue (but in any case, I have to open the screen).
Does anyone have solutions to this problem?
Another question: If I write
screen -d -m "start program">log
the log file isn't printed. Any suggestions?
For your first question, pgrep (process grep) is what you are looking for.
For instance, the following will return a list of PIDs of all running bash processes:
pgrep bash
And if you read the docs:
-signal
Defines the signal to send to each matched process. Either the numeric or the symbolic signal name can be used.
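So, as a sketch (the "program" placeholder is taken from the question):
kill $(pgrep -f "program")    # look the PIDs up, then signal them yourself
pkill -TERM -f "program"      # or let pkill find and signal them in one step
Note that -f matches against the full command line, so it can also match the screen process that wraps your program - which may be why the grep pipeline above returns two PIDs.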
For your second question, you could either use the -Logfile flag, if your version of screen supports it, or specify the log file in your .screenrc configuration file.
This has already been answered.
Edit:
If you can't access the user's home directory (where the .screenrc configuration file usually lives), you can set the $SCREENRC environment variable to point to an alternative path.
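For example (the path is hypothetical):
SCREENRC=/path/to/alternate.screenrc screen -d -m "start program"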

Is a script called somewhere

On one of my Linux servers I have a script that performs some checks.
Is there a way of finding out from where this script is called? It could be from another script, a COBOL program, crontab, ...
Opening every one of them would take a very long time.
If you can modify the script, put in a ps call to get the parent PID, then run ps again and grep for that parent PID to get the calling command, and log it to a file.
Come back in a week or so and you should have the command that is triggering your script. In case it's something nested, you may want to recurse or similar.
To do this without modifying the script, you'll need a watcher script/program that checks for access to the script file or calls ps every so often. However, if you have that kind of access, just modifying the script is probably easier.
Edit: Apparently the commands to get the parent pid and command for it, without repeatedly calling ps, look something like:
ps -p $$ -o ppid=
cat /proc/<pid>/cmdline
(from jweyrich's answer here)
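Putting those together, a sketch of what you could add near the top of the script (the log path is an arbitrary choice):
# log the parent PID and its command line, to find out who calls us
ppid=$(ps -p $$ -o ppid= | tr -d ' ')
{ date; tr '\0' ' ' < /proc/$ppid/cmdline; echo; } >> /tmp/who_calls_me.log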
Grep for it:
grep -lr yourscript /etc /opt/anotherlikleydir
failing that, search the whole system : grep -lr yourscript /
Edit:
failing that, search in binaries too: grep -lar yourscript /
failing that, the script is either executed by a logged-in user or by a scripted remote login... if that's the case, try peachykeen's approach and edit the script... and why not dump a ps axf to a log too.

How to restart background php process? (how to get pid)

I'm a PHP developer, and know very little about shell scripting... So I appreciate any help here.
I have four php scripts that I need running in the background on my server. I can launch them just fine - they work just fine - and I can kill them by looking up their PID.
The problem is that I need my script to kill the processes and restart them from time to time, as they maintain long-standing HTTP requests that are sometimes ended by the other side.
But I don't know how to write a command that'll find these processes and kill them without looking up the PID manually.
We'll start with one launch command:
/usr/local/php5/bin/php -f /home/path/to/php_script.php > /dev/null &
Is there a way to "assign" a PID so it's always the same? or give the process a name? and how would I go about writing that new command?
Thank you!
Nope, you can't "assign" the process PID; instead, you should do as "real" daemons do: make your script save its own PID in some file, and then read it from that file when you need to kill.
An alternative would be to use something like supervisor, which handles all of that for you in quite a nice way.
Update - supervisor configuration
Since I mentioned supervisor, I'm also posting here a short supervisor configuration file that should do the job.
[program:yourscriptname]
command=/usr/local/php5/bin/php -f /home/path/to/php_script.php
Have a look here for more configuration options.
Then you can use it like this:
# supervisorctl status
to show the process(es) status.
# supervisorctl start yourscriptname
to start your script
# supervisorctl stop yourscriptname
to stop your script
Update - real world supervisor configuration example
First of all, make sure you have this in your /etc/supervisor/supervisord.conf.
[include]
files = /etc/supervisor/conf.d/*.conf
if not, just add those two lines and
mkdir /etc/supervisor/conf.d/
Then, create a configuration file for each process you want to launch:
/etc/supervisor/conf.d/script1.conf
[program:script1]
command=/usr/local/php5/bin/php -f /home/path/to/php_script.php
stdout_logfile=/var/log/script1.log
stderr_logfile=/var/log/script1-error.log
/etc/supervisor/conf.d/script2.conf
[program:script2]
command=/usr/local/php5/bin/php -f /home/path/to/php_script2.php
stdout_logfile=/var/log/script2.log
stderr_logfile=/var/log/script2-error.log
...and so on, for all your scripts.
(note that you don't need the trailing & as supervisor will handle all the daemonization for you; in fact, you shouldn't run self-daemonizing programs under supervisor).
Then you can start 'em all with:
supervisorctl start all
or just one with something like:
supervisorctl start script1
Starting supervisor from php
Of course, you can start/stop the supervisor-controlled processes using the two commands above, even from inside a script.
Remember, however, that you'll need root privileges, and it's quite risky to allow e.g. a web page to execute commands as root on the server.
If that's the case, I recommend you have a look at the instructions on how to run supervisor as a normal user (I never did that, but you should be able to run it as the www-data user too).
The canonical way to solve this is to have the process write its PID into a file in a known location, and then any utility scripts can look up the file, read the PID, and manipulate that process. Add a command line argument to the script that gives the name of the PID file to write to.
A workaround for this would be to use ps aux; this will show all of the processes along with the command that invoked them. This presumes, of course, that the 4 scripts are different files, or can be uniquely identified by the command that invoked them. Pipe that through grep and you're all set: ps aux | grep runningscript.php
OK! So this has been a headache and a half for me, who knows NOTHING about shell/bash/whatever scripting...
#redShadow's response would have been perfect, except my hosting provider will not give me access to the /etc/supervisor/ directory. As he said, you must be root - and even using sudo as an admin wouldn't let me make any changes there...
Here's what I came up with:
kill -9 `ps -ef | grep php | grep -v grep | awk '{print $2}'`
Because the only type of command I was executing showed up in the top command as php, this command loops through the running processes, finds the php commands and their corresponding PIDs, and KILLS them! Woot!!
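If pkill is available, the same one-liner can be written without the grep -v trick (same caveat: it kills every process whose command line matches php):
pkill -9 -f php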
What I do is have my scripts check for a file that I name "run.txt". If it does not exist, they exit. Then, just by renaming that (empty) file, I can stop all my scripts.
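In shell terms, the sentinel-file pattern looks something like this (a sketch; the file name comes from the answer, the rest is assumed):
while [ -f /home/path/to/run.txt ]; do
    # ...do one unit of work...
    sleep 1
done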

How to use a Linux bash function to "trigger two processes in parallel"

Please kindly consider the following sample code snippet:
function cogstart
{
nohup /home/michael/..../cogconfig.sh
nohup /home/michael/..../freshness_watch.sh
watch -n 15 -d 'tail -n 1 /home/michael/nohup.out'
}
Basically, freshness_watch.sh and the final watch command are supposed to execute in parallel, i.e., the watch command shouldn't have to wait for its predecessor to finish. I tried to work out a way of doing this, e.g. using xterm, but since freshness_watch.sh is a script that lasts 15 minutes at most (due to my bad way of writing a file-monitoring script in Linux), I definitely want to trigger the final watch command while this script is still executing...
Any thoughts? Maybe in two separate/independent terminals?
Many thanks in advance for any hint/help.
As schemathings indirectly indicates, you probably want to append the '&' character (without the single quotes) to the end of the line with freshness_watch.sh. I don't see any reason to use '&' on your final watch command, unless you add more commands after it.
'&' at the end of a unix command line means 'run in the background'.
You might want to insert a sleep ${someNumOfSecs} after your call to freshness_watch.sh, to give it some time to have the CPU to itself.
Seeing as you mention xterm, do you know about the crontab facility, which allows you to schedule a job to run whenever you want, without the user having to log in? (Maybe this will help with your issue.) I like setting up jobs in crontab, because then you can capture any trace information you care to capture, AND any output to stderr, in a log/trace file.
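So, with that suggestion applied, the function would look something like this (paths kept exactly as in the question):
function cogstart
{
    nohup /home/michael/..../cogconfig.sh
    nohup /home/michael/..../freshness_watch.sh &    # '&' sends this one to the background
    watch -n 15 -d 'tail -n 1 /home/michael/nohup.out'
}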
( nohup wc -l * || nohup ls -l * ) &
( nohup wc -l * ; nohup ls -l * ) &
I'm not clear on what you're attempting to do - the question seems self-contradictory.

Can I abort the current running bash command?

Is it possible to manually abort the currently running bash command? So, for example, I'm using 'find' but it's taking ages... how do I manually stop it?
Some things won't respond to Ctrl+C; in that case, you can also press Ctrl+Z, which stops the process, and then run kill %1 - or even fg to go back to it. Read the section in man bash entitled "JOB CONTROL" for more information; it's very helpful. (If you're not familiar with man or the man pager, you can search using /: run man bash, then inside it type /JOB CONTROL and press Enter to start searching; n will jump to the next match, which is the right section.)
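An illustrative session (find stands in for whatever command is hanging):
find / -name something    # taking ages... press Ctrl+Z
[1]+  Stopped    find / -name something
kill %1    # kill the stopped job (or fg to resume it)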
Ok, so this is the order:
1st try: Ctrl+C
2nd try: Ctrl+Z
3rd: log in to another console, find the process started from your first console (the one not responding to either of the abort/suspend keystrokes above) with: ps aux
Then kill the process with: kill -9 <PROCESSID>
Of course there may be smarter parameters for the ps command, or the possibility to grep, but that would complicate the explanation.
Press Ctrl+C to send SIGINT to the command to attempt to interrupt it.
