How to restart a background PHP process? (how to get the PID) - linux

I'm a PHP developer, and know very little about shell scripting... So I appreciate any help here.
I have four PHP scripts that I need running in the background on my server. I can launch them just fine, they work fine, and I can kill them by looking up their PID.
The problem is that my script needs to kill the processes and restart them from time to time, as they maintain long-standing HTTP requests that are sometimes ended by the other side.
But I don't know how to write a command that'll find these processes and kill them without looking up the PID manually.
We'll start with one launch command:
/usr/local/php5/bin/php -f /home/path/to/php_script.php > /dev/null &
Is there a way to "assign" a PID so it's always the same, or give the process a name? And how would I go about writing that new command?
Thank you!

Nope, you can't "assign" the process PID; instead, you should do as "real" daemons do: make your script save its own PID in some file, and then read it from that file when you need to kill it.
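If you'd rather not touch the PHP code at all, a rough shell-side equivalent is to capture the PID from the shell that launches the script instead (the pid-file location below is just an example):
/usr/local/php5/bin/php -f /home/path/to/php_script.php > /dev/null &
echo $! > /tmp/php_script.pid      # $! is the PID of the job just sent to the background
# later, to stop (or restart) it:
kill "$(cat /tmp/php_script.pid)"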
An alternative would be to use something like supervisor, which handles all of that for you quite nicely.
Update - supervisor configuration
Since I mentioned supervisor, I'm also posting here a short supervisor configuration file that should do the job.
[program:yourscriptname]
command=/usr/local/php5/bin/php -f /home/path/to/php_script.php
Have a look here for more configuration options.
Then you can use it like this:
# supervisorctl status
to show the process(es) status.
# supervisorctl start yourscriptname
to start your script
# supervisorctl stop yourscriptname
to stop your script
Update - real world supervisor configuration example
First of all, make sure you have this in your /etc/supervisor/supervisord.conf.
[include]
files = /etc/supervisor/conf.d/*.conf
If not, just add those two lines and create the directory:
mkdir /etc/supervisor/conf.d/
Then, create a configuration file for each process you want to launch:
/etc/supervisor/conf.d/script1.conf
[program:script1]
command=/usr/local/php5/bin/php -f /home/path/to/php_script.php
stdout_logfile=/var/log/script1.log
stderr_logfile=/var/log/script1-error.log
/etc/supervisor/conf.d/script2.conf
[program:script2]
command=/usr/local/php5/bin/php -f /home/path/to/php_script2.php
stdout_logfile=/var/log/script2.log
stderr_logfile=/var/log/script2-error.log
...and so on for all your scripts.
(Note that you don't need the trailing &, as supervisor handles daemonization for you; in fact, you shouldn't run self-daemonizing programs under supervisor.)
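If supervisord is already running when you add new .conf files, you'll typically need to tell it to pick them up first:
supervisorctl reread     # scan for new or changed program configs
supervisorctl update     # apply them (newly added programs are started)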
Then you can start 'em all with:
supervisorctl start all
or just one with something like:
supervisorctl start script1
Starting supervisor from php
Of course, you can start/stop the supervisor-controlled processes using the two commands above, even from inside a script.
Remember, however, that you'll need root privileges, and it's quite risky to allow e.g. a web page to execute commands as root on the server.
If that's the case, I recommend you have a look at the instructions on how to run supervisor as a normal user (I've never done that myself, but you should be able to run it as the www-data user too).

The canonical way to solve this is to have the process write its PID into a file in a known location, and then any utility scripts can look up the file, read the PID, and manipulate that process. Add a command line argument to the script that gives the name of the PID file to write to.
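A sketch of what the utility side could look like, assuming the PHP script writes its own PID to the file passed as its first argument (the pid-file path below is made up for illustration):
#!/bin/sh
PIDFILE=/tmp/php_script.pid                       # example location
[ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")"     # stop the old instance, if any
/usr/local/php5/bin/php -f /home/path/to/php_script.php "$PIDFILE" > /dev/null &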

A workaround would be to use ps aux, which shows all processes along with the command that launched them. This presumes, of course, that the four scripts are different files, or can be uniquely identified by the command that launched them. Pipe that through grep and you're all set: ps aux | grep runningscript.php
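If pkill is available, it can do the matching and the killing in one step; -f matches against the full command line rather than just the process name:
pkill -f runningscript.php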

OK! So this has been a headache and a half for me, who knows NOTHING about shell/bash/whatever scripting...
#redShadow's response would have been perfect, except my hosting provider will not give me access to the /etc/supervisor/ directory. As he said, you must be root - and even using sudo as an admin wouldn't let me make any changes there...
Here's what I came up with:
kill -9 `ps -ef | grep php | grep -v grep | awk '{print $2}'`
Because the only commands I was executing showed up in top as php, this command loops through the running processes, finds the php commands and their corresponding PIDs, and KILLS them! woot!!

What I do is have my scripts check for a file that I name "run.txt". If it does not exist, they exit. Then, just by renaming that (empty) file, I can stop all my scripts.
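On the shell side, that control-file trick boils down to this (the path is just an example; each script is assumed to poll for the file and exit when it disappears):
touch /home/path/to/run.txt                              # create the flag file before launching the scripts
mv /home/path/to/run.txt /home/path/to/run.txt.off       # later: stop them all by renaming it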

Related

Get PID in bash file with open screen

I am a beginner in bash programming. I want to obtain PIDs from processes, in order to use trap and kill to receive and send signals to a program in the same file.
In particular, I start the program opening a screen in this way:
screen -d -m "start program"
process_id=`/bin/ps -fu $USER| grep "program" | grep -v "grep" | awk '{print $2}'`
The variable process_id contains two PIDs, not one. If I run without screen, I don't have this issue (but I have to use screen anyway).
Does anyone have solutions to this problem?
Another question: If I write
screen -d -m "start program" > log
nothing gets written to the log file. Any suggestions?
For your first question, pgrep (or "process grep") is what you are looking for.
For instance, the following will return a list of PIDs of all bash processes running.
pgrep bash
And if you read the docs:
-signal
Defines the signal to send to each matched process. Either the numeric or the symbolic signal name can be used.
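For instance (using pkill, which shares pgrep's matching syntax, and treating "program" as a placeholder name), matching on the process name rather than the full command line avoids picking up the screen wrapper:
process_id=$(pgrep program)      # PID(s) of processes actually named "program"
pkill -TERM program              # send SIGTERM to those same processes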
For the second question, you could either use the -LogFile flag, if your version of screen supports it, or specify the log file in your .screenrc configuration file.
This has already been answered.
Edit:
If you can't access the user's home directory, where the .screenrc configuration file is usually put, you could set the $SCREENRC environment variable to point to an alternative path.

Where is a script called from?

On one of my Linux servers I have a script that performs some checks.
Is there a way of finding out where this script is called from? It could be called from another script, a COBOL program, crontab, ...
Opening every one of them would take a very long time.
If you can modify the script, put in a ps line to get the parent PID, then run ps again and grep for that parent PID to get the command, and log it to a file.
Come back in a week or so and you should have the command that is triggering your script. In case it's something nested, you may want to recurse or similar.
To do this without modifying the script, you'll need a watcher script/program that checks for access to the script file or calls ps every so often. However, if you have that kind of access, just modifying the script is probably easier.
Edit: Apparently the commands to get the parent pid and command for it, without repeatedly calling ps, look something like:
ps -p $$ -o ppid=
cat /proc/<pid>/cmdline
(from jweyrich's answer here)
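Putting those together, a minimal logging snippet you could drop near the top of the script might look like this (the log path is just an example; $PPID is set by the shell itself):
parent_cmd=$(tr '\0' ' ' < /proc/$PPID/cmdline)                       # command line of whoever invoked us
echo "$(date): invoked by PID $PPID ($parent_cmd)" >> /tmp/who-calls-me.log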
Grep for it:
grep -lr yourscript /etc /opt/anotherlikelydir
Failing that, search the whole system: grep -lr yourscript /
Edit:
Failing that, search in binaries too: grep -lar yourscript /
Failing that, the script is either executed by a logged-in user or by a scripted remote login... if that's the case, try peachykeen's approach and edit the script... and why not dump a ps axf to a log too.

Stopping all node processes in a directory aside from one I'm just starting

I'm using Sublime Text 2's build systems to aid development of my mongodb + node.js server, which is really handy as it enables me to test my code without having to keep going back and forth to the terminal. The downside is that it's very easy to absent-mindedly leave multiple node processes running in the background, which sometimes causes clashes when one of them is using a port I need in order to test another module.
Is there some way I can stop all node processes running within a given directory whenever I start a new process from that directory? A bash script or similar?
Try something like this:
ps auxwwwe | egrep " [n]ode .+ PWD=$PWD" | awk '{ print $2 }' | xargs kill
This does the following:
Uses ps to get the full command line and environment of your processes.
Searches for all processes started with the node command in the present working directory (you can modify this with an absolute path, another environment variable, etc.).
Finds the 2nd column: the pid (use awk or nawk depending on your system).
Runs kill on each pid.
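If you want that behaviour every time you launch, a small wrapper script along these lines (an untested sketch; it reuses the pipeline above) kills the old processes first and then starts the new one:
#!/bin/bash
# stop node processes started from this directory (-r is GNU xargs: skip kill if nothing matched)
ps auxwwwe | egrep " [n]ode .+ PWD=$PWD" | awk '{ print $2 }' | xargs -r kill
# then start the script passed as the first argument
node "$1" &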

How to capture pid of a linux daemon run from init.d

I have started a service daemon by running the binary (written in C++) through a script file stored in rc5.d.
But I am not sure how to capture the PID of the daemon process and store it in a pid file in /var/run/<name>.pid, so that I can use the PID for termination.
How can I do this?
Try using start-stop-daemon(8) with the --pidfile argument in your init script. Have your program write its PID to a specified location (usually determined in a configuration file).
What you have to look out for is stale PID files, for instance, if a lock file persisted across a reboot. That logic is best implemented in the init script itself, hence the --exec option to start-stop-daemon.
E.g, if /var/run/foo.pid is 1234, and /proc/1234/exe isn't your service, the lock file is stale and should be quietly removed, allowing the service to start normally.
As far as your application goes, just make sure the location of the lockfile is configurable, and some means exists to tell the init script where to put it.
For instance (sample /etc/default/foo):
PIDFILE=/var/run/foo.pid
OTHEROPTION=foo
Then in /etc/init.d/foo :
[ -f /etc/default/foo ] && . /etc/default/foo
Again, other than writing to the file consistently, all of this logic should be handled outside of your application.
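For illustration, the start/stop half of /etc/init.d/foo might then look something like this (a sketch only; the binary path /usr/local/bin/foo is made up, and it uses --background/--make-pidfile so the init script writes the PID itself rather than the daemon, as a slight variation on the above):
[ -f /etc/default/foo ] && . /etc/default/foo
case "$1" in
  start)
    # --make-pidfile has start-stop-daemon write the PID for us;
    # --exec guards against acting on a stale or mismatched PID file
    start-stop-daemon --start --background --make-pidfile \
        --pidfile "$PIDFILE" --exec /usr/local/bin/foo
    ;;
  stop)
    start-stop-daemon --stop --pidfile "$PIDFILE" --exec /usr/local/bin/foo
    ;;
esac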
If you know the port the program has open, use fuser command to determine the pid.
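For example, if the daemon listens on TCP port 8080 (a placeholder port):
fuser -n tcp 8080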
You could go about this in more than one way:
In your program, use getpid and write the result to a configurable file (perhaps taken from an environment variable)
Use $! after starting the program (this doesn't work for me on archlinux though :-?)
After starting the program, use pidof (a sketch of the last two options follows below)
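As a rough sketch (the daemon name "mydaemon" and paths are placeholders):
/usr/local/bin/mydaemon &
echo $! > /var/run/mydaemon.pid           # option 2: $! is the PID of the background job
                                          # caveat: if the binary daemonizes itself, $! is the parent, not the daemon
pidof mydaemon > /var/run/mydaemon.pid    # option 3: look the PID up afterwards by executable name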

Find file launching a process

I think my server has been compromised and it has many perl processes running. However, I don't know what file they are being launched from so I can delete it. How can I find this information?
If your system has been hacked, you cannot trust any of the software, not even the kernel. Format the disk and re-install everything. There is just no way to be sure you've cleaned out the infection, because you can't trust the very tools you would use to clean things. You can't copy new tools onto the box, because you can't trust the SSH daemon or the /bin/cp command. Anything -- ls, vi, ps, cat, dd, etc. -- could have been replaced with a trojan that works to hide the infected files.
You could check the symbolic link /proc/<pid>/cwd, and also check the PPID from ps(1).
The first thing I would do is look at the parent process id (PPID). That said, if the PPID is 1, that doesn't tell you anything.
Auditing the filesystem could also help; see here.
pstree could also help
If you run the command "ps -ef" you should get a list of all processes running on your machine. Each process will have a process id number (PID), and also a parent PID. Find the offending process(es) and check their parent PIDs. Then find the process with a matching PID, and it should be your culprit.
Try ls -l /proc/<pid>/exe, or ls -l /proc/<pid>/fd. I don't remember if perl keeps the script file open after the program starts, but if it does, it will be one of the process's file descriptors.
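To check every perl process in one go, something along these lines should work (assuming pgrep is available):
for pid in $(pgrep perl); do
    echo "== PID $pid"
    ls -l /proc/$pid/exe /proc/$pid/cwd /proc/$pid/fd
done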
But if your system is pwned, don't expect anything to make sense.
