How can a program detect if it is running as a systemd daemon? - linux

Is there any way to detect in a program that it is run by systemd as a daemon?
The systemd API call sd_booted() can be used to detect whether the whole system was booted by systemd, but it says nothing about the program itself.
Thanks

Get the parent process id and see whether that process is systemd.
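A minimal sketch of that check in C (the helper name is made up), reading the parent's name from /proc. Note that a double-forking daemon gets reparented to PID 1 and would then match as well, so this is only a heuristic:
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Returns 1 if the direct parent's comm name is "systemd". */
static int parent_is_systemd(void)
{
    char path[64], comm[64] = "";
    FILE *f;

    snprintf(path, sizeof path, "/proc/%d/comm", (int) getppid());
    f = fopen(path, "r");
    if (!f)
        return 0;
    if (fgets(comm, sizeof comm, f))
        comm[strcspn(comm, "\n")] = '\0';   /* strip trailing newline */
    fclose(f);
    return strcmp(comm, "systemd") == 0;
}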

Starting from systemd v232, an environment variable INVOCATION_ID is passed to every process started as (part of) a service unit. This is specific to systemd rather than other service managers, so it is a convenient, though not completely reliable, way to detect it.
Personally, I use this to disable timestamps in logging, since the systemd journal already adds them.
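For example, a minimal sketch of that check (the helper name is made up):
#include <stdlib.h>

/* Keep our own log timestamps only when not running under systemd,
 * where the journal timestamps every line anyway. */
static int want_log_timestamps(void)
{
    const char *id = getenv("INVOCATION_ID");
    return id == NULL || *id == '\0';
}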

You could set a magic environment variable in the daemon's service file and check for that variable in the program.
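For example, assuming a made-up variable name and a line like Environment=MYAPP_UNDER_SYSTEMD=1 in the unit's [Service] section, the program side could look like this sketch:
#include <stdlib.h>

/* Returns 1 when the service file exported our marker variable. */
static int started_by_our_unit(void)
{
    const char *v = getenv("MYAPP_UNDER_SYSTEMD");
    return v != NULL && *v != '\0';
}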

Related

Programmatically start systemd service or test if service running

I need to start a service and (later) detect whether it is running, from within a C++ program. Is there a simpler approach than invoking systemctl with suitable arguments and parsing the output?
The source of the service is entirely under my control. (Currently it is written in bash, but a C++ wrapper is entirely possible.)
(I've had a brief look at DBus - it is clearly very powerful, but fails the "simpler" test.)
The code is for an embedded device running a variant of Debian Jessie. Portability is not a major concern (but obviously the answer will be more useful to others if it is portable).
Most programs approach this the other way around (and did so even in pre-systemd days).
Typical services (those consisting of a single server process) write their PID (as an ASCII number on a single line) to a file like /var/run/foobar.pid at startup. If you adopt such a convention in your service, you can read that file with fscanf and then check that the process is running with kill(pid, 0); (of course, you cannot be certain that it is the same service, but it probably is).
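A minimal sketch of that check, assuming the /var/run/foobar.pid convention and using a made-up helper name:
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>

/* Returns 1 if the PID recorded in pidfile refers to some live process.
 * As noted above, it might not be the same service any more. */
static int pidfile_process_alive(const char *pidfile)
{
    FILE *f = fopen(pidfile, "r");
    int pid = 0;

    if (!f)
        return 0;
    if (fscanf(f, "%d", &pid) != 1)
        pid = 0;
    fclose(f);
    if (pid <= 0)
        return 0;
    /* kill(pid, 0) sends no signal, it only checks for existence;
     * EPERM still means the process exists (owned by someone else). */
    return kill((pid_t) pid, 0) == 0 || errno == EPERM;
}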
Right now I have more than 20 files matching /var/run/*.pid, notably /var/run/sshd.pid and /var/run/atd.pid.
So, assuming you can improve the code of your service FooBar (if that functionality is not already there), change it to write its PID into /var/run/foobar.pid; this is a documented convention.
If you can change the service, you might also have it provide some ping or nop functionality: add some RPC facility which just checks that the service is running (and could also give additional information, such as the program's version). Most existing Linux services have such a feature.
Why not flip the problem around? No parsing would be needed.
ttm.update.service would do the following:
systemctl stop ttm.service
systemctl disable ttm.service
# do your update here
# if the service configs changed, do
systemctl daemon-reload
systemctl enable ttm.service
systemctl start ttm.service
ttm.service would never have to worry about the updater; it just runs and does its job.
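One possible shape for that updater unit (the script path is an assumption; the script would contain the systemctl calls listed above):
[Unit]
Description=Update and restart ttm

[Service]
Type=oneshot
ExecStart=/usr/local/bin/ttm-update.sh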

Automating services with Linux OS starting up and shutting down

I have a script to start and stop my services. My server runs Linux. How do I automate this so that the stop script runs when the OS shuts down and the start script runs when it boots?
You should install an init script for your program. The standard way is to follow the Linux Standard Base, section 20, subsections 2-8.
The idea is to create a script that starts your application when called with the argument start, stops it when called with stop, restarts it when called with restart, and makes it reload its configuration when called with reload. This script should be installed in /etc/init.d and linked from the various /etc/rc*.d directories. The standard describes a comment block to put at the beginning of the script and a utility to handle the installation.
Please refer to the documentation; it is too complicated to explain everything in sufficient detail here.
That approach should be supported by all Linux distributions. But the Linux community has been searching for a better init system, and there are two newer, improved systems in use:
systemd is what most of the world seems to be going to
upstart is a solution Ubuntu created and has stuck with so far
They provide better options, such as the ability to restart your application when it fails, but your configuration will then be specific to the chosen system.

Killing a daemon using a PID file

A common Linux/UNIX idiom when it comes to running daemons is to spawn the daemon, and create a PID file which just contains the process ID of the daemon. This way, to stop/restart the daemon you can simply have scripts which kill $(cat mydaemon.pid)
Now, there's a lot of opportunity here for inconsistent state. Suppose the machine running the daemon is forcefully shut off, then restarted. Now you have a PID file which refers to a non-existent process.
Okay, so no problem... your daemon will just try to kill the non-existent process, find that it's not a real process, and continue as usual.
But... what if it is a real process, just not your daemon? What if it's someone else's process, or some other important process? You have no way of knowing, so killing it is potentially dangerous. One possibility would be to check the name of the process. Of course, this isn't foolproof either, because there's no reason another process might not have the same name. Especially if, for example, your daemon runs under an interpreter such as Python: the process name will never be unique, it will simply be "python", and you might inadvertently kill someone else's process.
So how can we handle a situation like this where we need to restart a daemon? How can we know that the PID in the PID file actually belongs to the daemon?
You just keep adding on layers of paranoia:
pid file
process name matching
some communication channel/canary
The most important thing that you can do to ensure the PID isn't stale following a reboot is to store it in /var/run, a location that is guaranteed to be cleared on every reboot.
For process name matching, you can actually redefine the name of the process at the fork/exec point, which will allow you to use some unique name.
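A sketch of that in C: prctl(PR_SET_NAME) sets the comm name the kernel reports in /proc/<pid>/comm (truncated to 15 characters), which the control side can then match against. The chosen name and helper names are made up:
#include <stdio.h>
#include <string.h>
#include <sys/prctl.h>

/* Called early in the daemon: give the process a distinctive comm name. */
static void set_unique_name(void)
{
    prctl(PR_SET_NAME, "my-uniq-daemon", 0, 0, 0);
}

/* Used by the control program: does the comm of pid match what we set? */
static int comm_matches(int pid, const char *expected)
{
    char path[64], comm[64] = "";
    FILE *f;

    snprintf(path, sizeof path, "/proc/%d/comm", pid);
    f = fopen(path, "r");
    if (!f)
        return 0;
    if (fgets(comm, sizeof comm, f))
        comm[strcspn(comm, "\n")] = '\0';
    fclose(f);
    return strcmp(comm, expected) == 0;
}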
The communication channel/canary is a little more complex and is prone to some gotchas. If a daemon creates a named socket, then the presence of the socket + the ability to connect and communicate with the daemon would be considered evidence that the process is running.
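A sketch of the canary check from the controlling side, assuming the daemon listens on a Unix socket at a path like /var/run/mydaemon.sock (the path is an assumption; a real check would also exchange a ping message rather than just connect):
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Returns 1 if something is accepting connections on the daemon's socket. */
static int daemon_answers(const char *sock_path)
{
    struct sockaddr_un addr;
    int fd, ok;

    fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return 0;
    memset(&addr, 0, sizeof addr);
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, sock_path, sizeof addr.sun_path - 1);
    ok = connect(fd, (struct sockaddr *) &addr, sizeof addr) == 0;
    close(fd);
    return ok;
}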
If you really want to provide a script for your users, you could let the daemon process manage its pidfile on its own and add an atexit and a SIGABRT handler to unlink the pidfile even on unclean shutdown.
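A sketch of that cleanup inside the daemon (the pidfile path is made up); note that no handler can run on SIGKILL or power loss:
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

static const char *pidfile_path = "/var/run/mydaemon.pid";  /* example path */

static void remove_pidfile(void)
{
    unlink(pidfile_path);
}

static void abort_handler(int sig)
{
    remove_pidfile();
    signal(sig, SIG_DFL);   /* re-raise with the default action */
    raise(sig);
}

static void install_pidfile_cleanup(void)
{
    atexit(remove_pidfile);
    signal(SIGABRT, abort_handler);
}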
Another way is to also store the process start time in the pidfile. Together with volatile storage (e.g. /var/run) this is a pretty reliable way to identify a process. It does make the kill command a bit more complicated, though.
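A sketch of reading that start time so it can be stored next to the PID and compared later; field 22 of /proc/<pid>/stat is the start time in clock ticks since boot, and skipping past the last ')' avoids being fooled by spaces in the comm field:
#include <stdio.h>
#include <string.h>

/* Returns the start time of pid in clock ticks since boot, or -1. */
static long long proc_start_time(int pid)
{
    char path[64], buf[1024];
    long long start = -1;
    size_t n;
    int field = 2;          /* we are positioned at the end of field 2 (comm) */
    char *p;
    FILE *f;

    snprintf(path, sizeof path, "/proc/%d/stat", pid);
    f = fopen(path, "r");
    if (!f)
        return -1;
    n = fread(buf, 1, sizeof buf - 1, f);
    fclose(f);
    buf[n] = '\0';
    p = strrchr(buf, ')');  /* end of the comm field */
    if (!p)
        return -1;
    for (p++; *p; p++) {
        if (*p == ' ' && ++field == 22) {
            sscanf(p + 1, "%lld", &start);
            break;
        }
    }
    return start;
}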
However, I personally think that a daemon developer should not care (too much) about this and should let it be handled by the target platform's way of managing daemons (systemd, upstart, good ol' SysV init scripts). These usually have more knowledge: systemd, for example, will happily accept a daemon which does not fork at all, allowing it to monitor the daemon's status directly without requiring a PID file. You could then provide configuration files for the most common solutions (currently probably systemd, given that Debian is migrating to it too and it will thus also reach Ubuntu soon), which are usually easier to write than full-fledged daemon process management.

How to keep Supervisord running unconditionally?

In the Supervisord conf files you can specify to autorestart a certain program with:
autorestart=true
But is there an equivalent for [Supervisord] itself?
What is the recommended method of making sure Supervisord continues running unconditionally, especially if the Supervisord process itself gets killed?
Thanks!
Actually your question is a particular application of the famous "Quis custodiet ipsos custodes?", that is, "Who will guard the guards?"
In a modern Linux system the central guarding point is the init process (process number 1). If init dies, the Linux kernel immediately panics, and then you have to go to your data center (on foot, I mean) and press the reset button. There are a lot of alternative init implementations, and comparison tables of them are easy to find. :)
The precise answer for how to configure a particular init implementation depends on which init system that machine uses. For example, systemd has its own machinery for configuring service restarts when they die (the directives Restart=, RestartSec=, WatchdogSec=, etc. in the corresponding unit file). Other init implementations such as Ubuntu's Upstart have their analogues (the respawn directive in a service configuration file). Even good old SysV init has a respawn option for a service line in /etc/inittab, though usually user-level services aren't started directly from inittab, only virtual console managers (getty, mgetty, etc.).
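For the systemd case, a sketch of the relevant [Service] section for running supervisord under systemd; the ExecStart path and the --nodaemon flag are assumptions about your installation, while Restart= and RestartSec= are the directives mentioned above:
[Service]
ExecStart=/usr/bin/supervisord --nodaemon
Restart=always
RestartSec=5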

Detailed procedures of linux rebooting

I'm interested in how rebooting is implemented in Linux. When I press ctrl-alt-del or click "restart" in the menu bar, what happens next?
Thanks!
shutdown(8) brings the system down in a secure way. All logged-in users are notified that the system is going down, and login(1) is blocked. It is possible to shut the system down immediately or after a specified delay. All processes are first notified that the system is going down by the signal SIGTERM.
It does its job by signalling the init process, asking it to change the runlevel. Runlevel 0 is used to halt the system, runlevel 6 is used to reboot the system, and runlevel 1 is used to put the system into a state where administrative tasks can be performed.
So basically, reboot calls shutdown.
The quick answer is that all the scripts in /etc/rc6.d are executed:
Scripts whose names start with "K" are executed with the "stop" parameter.
Scripts whose names start with "S" are executed with the "start" parameter.
For more you can start reading about runlevels here: http://en.wikipedia.org/wiki/Runlevel
There are different init systems on Linux, and they also control what happens on restart/shutdown. See https://unix.stackexchange.com/questions/18209/detect-init-system-using-the-shell to tell which you're using.
If you're using SysVinit, then there is a runlevel associated with the overall system status. The init system will first run all the kill scripts associated with your current runlevel and then the start scripts associated with runlevel 6. If your current runlevel was 5, it would run /etc/rc5.d/K* and then /etc/rc6.d/S*. They might be in another directory, such as /etc/init.d/rc5.d/K*, depending on your Linux distribution.
If you're using systemd, then instead of an overall "runlevel" there is a set of defined targets and services; a target is roughly the equivalent of a runlevel. These are defined in .service and .target files under /etc/systemd/system (and /lib/systemd/system). There will likely be a reboot.target defined there, and units with a dependency on it will be run on reboot. See the systemd homepage for an example.
Some Ubuntu versions also use Upstart, but I think it's been replaced by systemd in more recent versions.
One thing to be careful of: regardless of which init system you're using, you may be using init scripts generally associated with another one. So you may be using SysVinit, but some of the rc*.d scripts may be links to things that invoke systemd units, or vice versa.
