I have started a service daemon by running the binary (written in C++) through a script file stored in rc5.d.
But I am not sure how to capture the PID of the daemon process and store it in a PID file in /var/run/.pid, so that I can use the PID for termination.
How can I do this?
Try using start-stop-daemon(8) with the --pidfile argument in your init script. Have your program write its PID to a specified location (usually determined in a configuration file).
What you have to look out for is stale PID files, for instance, if a lock file persisted across a reboot. That logic is best implemented in the init script itself, hence the --exec option to start-stop-daemon.
E.g., if /var/run/foo.pid contains 1234 and /proc/1234/exe isn't your service, the PID file is stale and should be quietly removed, allowing the service to start normally.
As far as your application goes, just make sure the location of the lockfile is configurable, and some means exists to tell the init script where to put it.
For instance, a sample /etc/default/foo:
PIDFILE=/var/run/foo.pid
OTHEROPTION=foo
Then in /etc/init.d/foo :
[ -f /etc/default/foo ] && . /etc/default/foo
Again, other than writing to the file consistently, all of this logic should be handled outside of your application.
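For illustration, here is a minimal sketch of the relevant part of /etc/init.d/foo; the daemon path /usr/sbin/foo is an assumption, and a real init script would also handle restart, status, and so on:

#!/bin/sh
# Minimal sketch; /usr/sbin/foo and the defaults file are assumptions.
[ -f /etc/default/foo ] && . /etc/default/foo
PIDFILE=${PIDFILE:-/var/run/foo.pid}
DAEMON=/usr/sbin/foo

case "$1" in
  start)
    # Drop a stale PID file: the recorded PID no longer points at our binary.
    if [ -f "$PIDFILE" ] && \
       [ "$(readlink -f "/proc/$(cat "$PIDFILE")/exe" 2>/dev/null)" != "$DAEMON" ]; then
        rm -f "$PIDFILE"
    fi
    start-stop-daemon --start --exec "$DAEMON" --pidfile "$PIDFILE"
    ;;
  stop)
    start-stop-daemon --stop --pidfile "$PIDFILE"
    ;;
esac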
If you know the port the program has open, you can use the fuser command to determine the PID.
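For example (the port number 8080 and the use of TCP are purely hypothetical here), this prints the PID(s) of whatever holds that port:

fuser -n tcp 8080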
You could go about this in more than one way (a short sketch follows the list):
In your program, use getpid() and write the result to a configurable file (perhaps taken from an environment variable)
Use $! after starting the program (this doesn't work for me on archlinux though :-?)
After starting the program, use pidof
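A minimal sketch of the last two options, assuming the binary is called myservice and the PID file path comes from the environment (both names are illustrative):

# Sketch only; myservice and the PID file path are assumptions.
PIDFILE=${PIDFILE:-/var/run/myservice.pid}

./myservice &             # start the program in the background
echo $! > "$PIDFILE"      # $! holds the PID of the last background job

# Or, if the program is already running:
pidof myservice > "$PIDFILE"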
Related
I have a path I need to access, which only exists as the result of a mount.
I would like the mounting to be automatic, via a script, and I want that script to run just before an error is thrown from not being able to access the path.
For example, assume the script is
echo scripting!
mkdir -p /non_existing_path
and I want it to run when trying to access (in any way) to the path /non_existing_path.
So when I do for example
cd /non_existing_path
or
touch /non_existing_path/my_file.txt
It would always succeed, with the output scripting!. In reality, the script would be more elaborate than that.
Is this possible at all?
Yes, and the important case is that third parties (such as a new C program, the command line, or other scripts) that call, for example, cd should also be affected: a call to cd made as they normally would should invoke the hooked script beforehand.
Out of kernel:
Write a FUSE filesystem that mounts on top of the other filesystem and that, upon the open() syscall, runs fork()+execve() on a custom script.
In kernel:
Write a kernel filesystem that exposes /proc/some/interface and creates a filesystem "on top" of the underlying existing one. Such a kernel module would execute a special command upon the open() system call and forward all others. In the open() syscall, the kernel would write some data to /proc/some/interface and wait for an answer. After receiving the answer, the open() syscall would continue.
Write a user-space daemon that would, for example, poll() on /proc/some/interface waiting for events, then read() the events, parse them, and execute the custom script. After the script completes, it would write to /proc/some/interface on the same file descriptor to notify the kernel module that the operation has completed.
Why don't you use autofs?
autofs is a program for automatically mounting directories on an as-needed basis.
https://help.ubuntu.com/community/Autofs
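For illustration, a rough autofs sketch; the mount point, map file name, and NFS export are all assumptions:

# /etc/auto.master (illustrative): directories under /mnt/auto are managed by autofs
/mnt/auto  /etc/auto.misc  --timeout=60

# /etc/auto.misc (illustrative): "data" is mounted on demand from an assumed NFS export
data  -fstype=nfs  fileserver:/export/data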
Not sure I understand.
Do you want the script to run even if the path is not accessible?
Do you want the script to run only if the path is not accessible?
Is the script supposed to mount the "not accessible" path?
In any case, I think you should just use an if/else statement.
The script would look like:
#!/bin/bash
if [ -d "/non_existing_path" ]
then
    bash "$1"
else
    bash "$1"
    mkdir -p /non_existing_path
fi
Let's assume this script's name is "myScript.sh" and the external script's name is "extScript.sh". You would call:
bash myScript.sh /home/user/extScript.sh
This script will check whether the path exists.
If yes, execute bash /home/user/extScript.sh
If no, execute bash /home/user/extScript.sh and mkdir...
Again, I'm not sure I get your goal, but you can adapt it to your needs.
I have a process that depends on the internet, which dies randomly due to a spotty connection.
I am writing a cron script, so that it checks every minute if the process is running, and restarts it...
When the process dies, it doesn't kill the terminal window it's in.
I don't want to kill the terminal and then spawn a new one.
I want the shell script I'm writing to execute in the window that's already open...
I'm using i3-sensible-terminal right now, but any terminal would do.
if ! ps -a | grep x123abc > /dev/null ; then
    $CMD
fi
I have not yet located the information I need to have that run in a specific terminal.
Changing $CMD to include a terminal seems to only open a new window...
Suggesting a different design to separate running your script from observing your script output.
Write the worker script ("that depends on the internet, which dies randomly due to a spotty connection") so that it appends ALL its output to the log file /home/$USER/worker.log.
Or just redirect ALL output from the worker script to the log file /home/$USER/worker.log:
worker > /home/$USER/worker.log 2>&1
Run the worker script as a restartable service with a systemd unit.
Here is a good article explaining this practice: https://dev.to/setevoy/linux-systemd-unit-files-edit-restart-on-failure-and-email-notifications-5h3k
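As a rough sketch (the unit name, script path, and restart delay are all assumptions), the unit file could look like this:

# Minimal sketch of /etc/systemd/system/worker.service; the ExecStart path is assumed.
[Unit]
Description=worker script that dies randomly and must be restarted

[Service]
ExecStart=/home/youruser/worker
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=multi-user.target

After a systemctl daemon-reload, start it with systemctl enable --now worker.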
Continue to observe the log file /home/$USER/worker.log using the tailf command:
tailf /home/$USER/worker.log
I'm writing a Bash script to monitor a process and detect when it has crashed. To do this, I am monitoring the /proc directory:
start_my_process;
my_process_pid=$!;
until [[ ! -d "/proc/$my_process_pid" ]]; do
    # alert the process is dead and restart it...
done
Can I be guaranteed that the process's entry in /proc/ will be created BEFORE Bash finishes executing the command to start the process? Or is it possible that by time my check above is executed, the entry for start_my_process might not yet be created?
EDIT:
In the end I actually went against a custom solution and chose monit which is an excellent watchdog tool.
/proc/<pid> is never created. It is not a real directory.
/proc is a virtual filesystem. When you open one of its "files" and read from its output stream, the data are being provided by the kernel. Since the kernel is also responsible for managing process <pid>, the kernel will tell you that /proc/<pid> directory exists as soon as and for as long as the kernel is keeping track of it.
Since bash won't be able to set $! until the process exists, you are definitely safe checking for the process's virtual directory under /proc after that time.
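As a sketch of that monitoring loop, assuming start_my_process is launched in the background so that $! refers to it (variable names are illustrative):

# Wait for the background process to disappear from /proc, then alert.
start_my_process &
my_process_pid=$!

while [ -d "/proc/$my_process_pid" ]; do
    sleep 1        # still alive
done

echo "process $my_process_pid died" # alert and restart it here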
I'm a PHP developer, and know very little about shell scripting... So I appreciate any help here.
I have four php scripts that I need running in the background on my server. I can launch them just fine - they work just fine - and I can kill them by looking up their PID.
The problem is I need my script to, from time to time, kill the processes and restart them, as they maintain long standing HTTP requests that sometimes are ended by the other side.
But I don't know how to write a command that'll find these processes and kill them without looking up the PID manually.
We'll start with one launch command :
/usr/local/php5/bin/php -f /home/path/to/php_script.php > /dev/null &
Is there a way to "assign" a PID so it's always the same? or give the process a name? and how would I go about writing that new command?
Thank you!
Nope, you can't "assign" the process PID; instead, you should do as "real" daemons do: make your script save its own PID in some file, and then read it from that file when you need to kill it.
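For example, a launch wrapper along these lines could record the PID at start time (the PID file path is an assumption):

# Start the script in the background and record its PID; the PID file path is assumed.
/usr/local/php5/bin/php -f /home/path/to/php_script.php > /dev/null &
echo $! > /var/run/php_script.pid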
An alternative would be to use something like supervisor, which handles all that for you quite nicely.
Update - supervisor configuration
Since I mentioned supervisor, I'm also posting here a short supervisor configuration file that should do the job.
[program:yourscriptname]
command=/usr/local/php5/bin/php -f /home/path/to/php_script.php
Have a look here for more configuration options.
Then you can use it like this:
# supervisorctl status
to show the process(es) status.
# supervisorctl start yourscriptname
to start your script
# supervisorctl stop yourscriptname
to stop your script
Update - real world supervisor configuration example
First of all, make sure you have this in your /etc/supervisor/supervisord.conf.
[include]
files = /etc/supervisor/conf.d/*.conf
If not, just add those two lines and run:
mkdir /etc/supervisor/conf.d/
Then, create a configuration file for each process you want to launch:
/etc/supervisor/conf.d/script1.conf
[program:script1]
command=/usr/local/php5/bin/php -f /home/path/to/php_script.php
stdout_logfile=/var/log/script1.log
stderr_logfile=/var/log/script1-error.log
/etc/supervisor/conf.d/script2.conf
[program:script2]
command=/usr/local/php5/bin/php -f /home/path/to/php_script2.php
stdout_logfile=/var/log/script2.log
stderr_logfile=/var/log/script2-error.log
...etc, etc.. for all your scripts.
(Note that you don't need the trailing &, as supervisor will handle all the daemonization for you; in fact, you shouldn't run self-daemonizing programs under supervisor.)
Then you can start 'em all with:
supervisorctl start all
or just one with something like:
supervisorctl start script1
Starting supervisor from php
Of course, you can start/stop the supervisor-controlled processes using the two commands above, even from inside a script.
Remember, however, that you'll need root privileges, and it's quite risky to allow e.g. a web page to execute commands as root on the server.
If that's the case, I recommend you have a look at the instructions on how to run supervisor as a normal user (I've never done that, but you should be able to run it as the www-data user too).
The canonical way to solve this is to have the process write its PID into a file in a known location, and then any utility scripts can look up the file, read the PID, and manipulate that process. Add a command line argument to the script that gives the name of the PID file to write to.
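For instance, a utility snippet that reads the PID back and stops the process might look like this (the PID file path is hypothetical):

# Read the recorded PID, terminate the process, and clean up the PID file.
PIDFILE=/var/run/php_script.pid
kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"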
A workaround would be to use ps aux, which shows all of the processes along with the command that started them. This presumes, of course, that the four scripts are different files or can be uniquely identified by the command that called them. Pipe that through grep and you're all set: ps aux | grep runningscript.php
OK! So this has been a headache and a half for me, who knows NOTHING about shell/bash scripting...
@redShadow's response would have been perfect, except my hosting provider will not give me access to the /etc/supervisor/ directory. As he said, you must be root - and even using sudo as an admin wouldn't let me make any changes there...
Here's what I came up with:
kill -9 `ps -ef | grep php | grep -v grep | awk '{print $2}'`
Because the only commands I was executing showed up in top as php, this command loops through the running processes, finds the php commands and their corresponding PIDs, and KILLS them! woot!!
What I do is have my scripts check for a file that I name "run.txt". If it does not exist, they exit. Then, just by renaming that (empty) file, I can stop all my scripts.
I'm trying to monitor a normal C program with Monit, but I don't know how to run the program or what configuration should be set in Monit's control file.
You need to get the PID of the program to be able to monitor it with Monit. Some programs accept a command-line argument giving the location of a file they should write their PID to. Otherwise, you can try starting the program from a wrapper script that writes the PID to a known location, e.g. /usr/bin/myprogram & jobs -p > /var/run/myprogram.pid in bash.
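As a rough sketch of the Monit side, assuming the wrapper writes /var/run/myprogram.pid and that start/stop commands exist (all names here are assumptions), the control-file entry could look like:

# Minimal monitrc sketch; the PID file and start/stop commands are assumptions.
check process myprogram with pidfile /var/run/myprogram.pid
    start program = "/etc/init.d/myprogram start"
    stop program  = "/etc/init.d/myprogram stop"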