How to have Linux wait till my program finishes its SIGTERM action?

How can I make Linux wait until my C++ program completes its cleanup routine? The program calls sigaction(2) at startup to register a custom SIGTERM handler. If I test the handler by running kill -s TERM $(ps -C a.out -o pid=), it works without problems. However, a real shutdown is another matter: sometimes the handler gets its job done, but sometimes it does not. Apparently there is a race condition when the machine is shutting down. Does anyone know how to make the system wait a little longer so as to avoid the race condition? Thanks.

In the comments you said you use MX Linux version 18.2. It seems to be based on Debian 9, which uses systemd by default, but still has the option to revert to classic SysVinit if desired. The web pages of MX Linux seem to emphasize the UI and do not mention anything special about the init system, so I'll assume it uses systemd too.
With systemd, a mechanism called control groups (cgroups for short) is in play: when systemd starts a service, it also places its process in a dedicated cgroup. This cgroup is automatically inherited by any process the service starts. When systemd stops a service, it first executes the custom ExecStop actions, if any are defined for the service, and waits up to TimeoutStopSec; if any processes are then left in the service's cgroup, systemd sends them a SIGTERM, waits another TimeoutStopSec, and finally sends a SIGKILL to any left-over processes in the cgroup.
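For illustration, these are the knobs involved, in a minimal sketch of a unit file's stop behaviour (the daemon path and timeout here are hypothetical):
[Service]
ExecStart=/usr/local/bin/mydaemon
# how long systemd waits for the SIGTERM handler to finish before SIGKILL
# (the distribution default is typically 90 seconds)
TimeoutStopSec=90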
The thing tripping you up is probably the fact that user sessions are also encapsulated in a cgroup: anything you start manually, with e.g. sh /etc/init.d/yourservice start will still count as part of your user session, even if it performs all the actions needed to classically daemonize. And so, when you initiate a shutdown, the first action is to log out any user sessions... which causes your manually-started service to first receive a SIGHUP, then SIGTERM after a short delay, and possibly a SIGKILL after another short delay. Once the user sessions are cleaned up, the rest of the shutdown process will happen.
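You can observe this split yourself; assuming the binary is called a.out as in the question:
# show the full cgroup tree: session processes live under user.slice,
# services under system.slice
systemd-cgls
# or inspect a single process directly
cat /proc/$(pgrep -o a.out)/cgroup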
In order to use init.d scripts successfully with systemd, you'll need to know a few things.
systemd's compatibility mechanism for init.d scripts works by automatically generating a native systemd .service unit file for every init.d script, and then using those unit files just like native systemd services. This causes three requirements you might be unfamiliar with:
your init.d script should have a Linux Standard Base (LSB) comment block before any non-comment lines in the script, describing its dependencies on other services. It should look somewhat like this (example from the Dovecot IMAP server):
### BEGIN INIT INFO
# Provides: dovecot
# Required-Start: $local_fs $remote_fs $network $syslog $time
# Required-Stop: $local_fs $remote_fs $network $syslog
# Should-Start: postgresql mysql slapd winbind nslcd
# Should-Stop: postgresql mysql slapd winbind nslcd
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Dovecot init script
# Description: Init script for dovecot services
### END INIT INFO
After placing your init.d script, you should run systemctl daemon-reload to re-run the systemd-sysv-generator, which produces the unit file that will call your script.
After placing the script as /etc/init.d/yourservice and running systemctl daemon-reload, you should start the service using either systemctl start yourservice or service yourservice start. Only these methods cause systemd to place your service in its own control group, which will be important for the orderly shutdown your service needs. Running sh /etc/init.d/yourservice start will not do that.
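To verify where the process actually landed, check the status output (yourservice being the placeholder name from above):
systemctl status yourservice
# the CGroup: line should show /system.slice/yourservice.service;
# a /user.slice/... path means it is still tied to your login session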
You can use systemctl cat yourservice to view the resulting auto-generated yourservice.service unit file.
It might be a better idea to just write a native systemd service unit file for your service. You can find the distribution's standard unit files in the /lib/systemd/system/ directory; you can use them as examples, but you should place your custom unit file into /etc/systemd/system instead, so it won't be overridden by package updates.
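As a hedged starting point for a program that runs in the foreground, such a unit could look like this (description, path and timeout are placeholders, not your actual program):
[Unit]
Description=Your service
After=network.target
[Service]
Type=simple
ExecStart=/usr/local/bin/yourservice
# give the SIGTERM cleanup handler enough time to finish before SIGKILL
TimeoutStopSec=90
[Install]
WantedBy=multi-user.target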

Related

What's the difference between these Redis startup commands?

sudo /etc/init.d/redis-server start
sudo service redis-server start
sudo systemctl start redis-server
sudo redis-server --daemonize yes
The last one is "nearest to the metal", it directly starts the Redis server process with no special options, and is "stand-alone". I would use this type of command when just "messing around" in the Terminal with quick tests and when trying to get an initial configuration tested and running.
The first 3 are all basically wrappers around starting the Redis server process to make it compatible with systemd or other Linux startup systems. They potentially add more layers of management, like:
reporting to the systemd journal (viewable with journalctl)
saving the process id so the process can be killed or restarted
potentially specifying a different config file
potentially waiting for other services to become available before starting Redis
I would prefer one of the first three for routine, every-day, managed starting up of Redis on a production system.
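For example, once Redis is started through systemd, you can see that management layer at work (exact output varies by version):
# main PID, cgroup membership and recent log lines tracked for the service
systemctl status redis-server
# follow the service's log in the journal
journalctl -u redis-server -f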

Redis Startup issues on Debian Stretch (9)

Actually I'm on my way to switching to Debian 9 for the company's new production servers and want to provision them with Ansible. So far everything works fine, but now I'm stuck with redis-server.
By default, Debian 9 comes with Redis version 3.2. I'm installing the package via apt-get install redis-server. After that, Redis starts up as a daemon in the background. Now I want to apply some custom configuration, like binding to 2 different IPs (127.0.0.1 and the server IP).
After changing this as well as the daemonize option (to yes), Redis is no longer willing to start in the background. Whenever I run either service redis-server start or /etc/init.d/redis-server start, the command just hangs.
journalctl -xe tells me that the PID file is not readable (redis-server.service: PID file /var/run/redis/redis-server.pid not readable (yet?) after start-post: No such file or directory), even though it should be created according to the init.d script:
start)
echo -n "Starting $DESC: "
mkdir -p $RUNDIR
touch $PIDFILE
chown redis:redis $RUNDIR $PIDFILE
chmod 755 $RUNDIR
After all, I can see that both service redis-server start and /etc/init.d/redis-server start do start the server, and I'm also able to connect to it via redis-cli. But the damn process still hangs.
Can anyone help? If you need further information, just let me know. I'll provide whatever is needed to solve the problem!
best
Chris
I had a similar situation on a CentOS 7 server.
The resolution was to change supervised from no to auto in redis.conf:
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize yes
# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
# supervised no - no supervision interaction
# supervised upstart - signal upstart by putting Redis into SIGSTOP mode
# supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
# supervised auto - detect upstart or systemd method based on
# UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
# They do not enable continuous liveness pings back to your supervisor.
supervised auto
When you run the process as a daemon, it needs to interact with systemd for process management (if I read the documentation correctly).
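For context: supervised systemd means Redis reports READY=1 over the $NOTIFY_SOCKET that systemd provides, and systemd only provides that socket to units declared Type=notify. To see what the packaged unit actually expects:
# show Type= and PIDFile= in the unit file shipped by the package
systemctl cat redis-server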
Thanks

How do I use systemd to replace cron jobs meant to run every five minutes?

We have an embedded target environment (separate from our host build environment) in which systemd is running but not cron.
We also have a script which, under most systems, I would simply create a cron entry to run it every five minutes.
Now I know how to create a service under systemd but this script is a one-shot that exits after it's done its work. What I'd like to do is have it run immediately on boot (after the syslog.target, of course) then every five minutes thereafter.
After reading up on systemd timers, I created the following service file as /lib/systemd/system/xyzzy.service:
[Unit]
Description=XYZZY
After=syslog.target
[Service]
Type=simple
ExecStart=/usr/bin/xyzzy.dash
and equivalent /lib/systemd/system/xyzzy.timer:
[Unit]
Description=XYZZY scheduler
[Timer]
OnBootSec=0min
OnUnitActiveSec=5min
[Install]
WantedBy=multi-user.target
Unfortunately, when booting the target, the timer does not appear to start, since the output of systemctl list-timers --all does not include it. Starting the timer unit manually seems to work okay, but this is something that should run automatically without user intervention.
I would have thought the WantedBy would ensure the timer unit was installed and running and would therefore start the service periodically. However, I've noticed that the multi-user.target.wants directory does not actually have a symbolic link for the timer.
How is this done in systemd?
The timer is not active until you actually enable it:
systemctl enable xyzzy.timer
If you want to see how it works before rebooting, you can also start it:
systemctl start xyzzy.timer
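Once the timer is enabled and started, it should show up with its next elapse time:
# the NEXT/LEFT columns show when the timer will fire next
systemctl list-timers xyzzy.timer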
In terms of doing that for a separate target environment where you aren't necessarily able to run arbitrary commands at boot time (but presumably do control the file system content), you can simply create, in your development area, the same symbolic links that the enable command would create.
For example (assuming SYSROOT identifies the root directory of the target file system):
ln -s ${SYSROOT}/lib/systemd/system/xyzzy.timer ${SYSROOT}/lib/systemd/system/multi-user.target.wants/xyzzy.timer
This will effectively put the timer unit into an enabled state for the multi-user.target, so systemd will start it with that target.
Also, normally your custom files would be stored in /etc/systemd/system/. The equivalent lib directory is intended to host systemd files installed by packages or the OS.
If it's important that your job runs precisely every five minutes, you should check the accuracy, because systemd's monotonic timers can slip over time.
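If tighter scheduling matters, one hedged alternative (values illustrative) is a wall-clock schedule with a reduced accuracy window instead of the monotonic OnUnitActiveSec:
[Timer]
# fire at minutes 0, 5, 10, ... of every hour
OnCalendar=*:0/5
# the default coalescing window is 1 minute; shrink it for punctual runs
AccuracySec=1s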

Make ExecStartPost command to run in background

I have a systemd service for my Spring Boot application, which is registered with a Consul server and sits behind HAProxy. Consul provides consul-template to automatically update the service's location in the HAProxy configuration file.
consul-template takes a template file, writes the final HAProxy configuration file, and then reloads HAProxy.
Now, the consul-template process needs to always run in the background along with my application, so that as the application comes up, consul-template can detect the new application startup and update its location in the configuration file.
Here is my systemd service file for this.
[Unit]
Description=myservice
Requires=network-online.target
After=network-online.target
[Service]
Type=forking
PIDFile=/home/dragon/myservice/run/myservice.pid
ExecStart=/home/dragon/myservice/bin/myservice-script start
ExecReload=/home/dragon/myservice/bin/myservice-script reload
ExecStop=/home/dragon/myservice/bin/myservice-script stop
ExecStartPost=consul-template -template '/etc/haproxy/haproxy.cfg.template:/etc/haproxy/haproxy.cfg:sudo systemctl reload haproxy'
User=dragon
[Install]
WantedBy=multi-user.target
Now, when I run systemctl start myservice, my application starts and the call to consul-template also works, but the consul-template process doesn't go into the background. I have to press Ctrl+C, and then systemctl returns and I have both my application and the consul-template process running.
Is there a way to run the consul-template process specified in ExecStartPost in the background?
I tried adding & at the end of the ExecStartPost command, but then consul-template complains that it is an invalid extra argument and fails.
I also tried wrapping the command as /bin/sh -c "consul-template command here...", but that doesn't work either. Even nohup in this command wasn't working.
Any help is really appreciated.
A workaround would be to have a bash script as your entrypoint: add everything you need in there, and it will all work.
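A minimal sketch of such an entrypoint, assuming the unit is switched to Type=simple and the application can be run in the foreground (the run subcommand here is hypothetical):
#!/bin/bash
# start consul-template in the background; it stays in the service's cgroup,
# so systemd cleans both up together on stop
consul-template -template '/etc/haproxy/haproxy.cfg.template:/etc/haproxy/haproxy.cfg:sudo systemctl reload haproxy' &
# replace the shell with the real application so systemd tracks its PID
exec /home/dragon/myservice/bin/myservice-script run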
I was trying to accomplish the same task. I wanted to fire off some HTTP requests to Tomcat once the service had started, so that I could warm up our servers ahead of the first user request.
I went through a lot of trial and error trying to use ExecStartPost to fire off an async process, but nothing actually worked. By calling a shell script I could trigger background processes, but from my testing systemd appears to kill the spawned processes when ExecStartPost finishes, so any child processes end up getting killed too. I tried various combinations of &, setsid, nohup, etc., even some Perl, to try to launch an executable in its own thread, but as soon as the shell script exited from ExecStartPost, any processes still running were killed. It's possible there's some solution that would work using ExecStartPost, but I couldn't find it.
However, what did work is creating a new service (as #divinedragon mentions) which piggybacks on the service I wanted to monitor (in this case Tomcat).
Since it took me a little research to get something working the way I wanted, I wanted to share my solution in case it helps someone.
The first step is to create a new service (e.g. /usr/lib/systemd/system/tomcat-service-listener.service):
[Unit]
Description=Tomcat start/stop event listener
# make sure to stop the service when Tomcat stops
BindsTo=tomcat.service
# waits for both Nginx & Tomcat to be started before this service is started
After=nginx.service tomcat.service
[Service]
Type=oneshot
ExecStart=/path/to/your/script.sh start
ExecStop=/path/to/your/script.sh stop
RemainAfterExit=yes
TimeoutStartSec=300
[Install]
# When the service is enabled, forces this service to start when Tomcat is started
WantedBy=tomcat.service
Some notes on what is happening here:
The BindsTo makes sure the service gets stopped when Tomcat is stopped. This triggers the ExecStop command.
The After makes sure that on server reboot, this service does not start until both Nginx and Tomcat have started.
The WantedBy will create the wants symlink for Tomcat (when the service is enabled), which will force Tomcat to start this service any time it's restarted.
The RemainAfterExit=yes is necessary for the ExecStop to work. If you only care about triggering something when your service is started and don't care about when it is stopped, you can set this to no and remove the ExecStop line.
Make the TimeoutStartSec long enough for whatever task you plan on running.
To get this service working, you then need to do the following:
# set standard permissions on the unit file (unit files are not executable)
chmod 664 /usr/lib/systemd/system/tomcat-service-listener.service
# make Systemd aware of the new service
systemctl daemon-reload
# register the service so it's started/stopped with Tomcat
systemctl enable tomcat-service-listener.service
Now all you need is a script to trigger the logic you want. In my case, I wanted to warm up some servers once Tomcat started, so my /path/to/your/script.sh looks something like:
#!/bin/bash
# bash (not plain sh): the script below uses == tests and the local keyword
SCRIPT_MODE="$1"
LOGFILE=/var/log/myscript.log
log_message() {
local MESSAGE="$1"
echo "$(date '+%Y-%m-%d %H:%M:%S') $MESSAGE" >> "$LOGFILE"
return 0
}
warmup_server() {
local SERVER_ADDRESS="$1"
local SERVER_DESCRIPTION="$2"
log_message "Warming up $SERVER_DESCRIPTION..."
# we want to track the time it took to warm up the server
local START_TIME=$(date +%s)
# server restarts can take a while for all services to start, so we must retry long enough for all relevant services to start
HTTP_STATUS=$(curl --insecure --location --silent --show-error --fail --retry 60 --retry-delay 2 --retry-max-time 240 --output /dev/null --write-out "%{http_code}" "$SERVER_ADDRESS")
# we want to track the time it took to warm up the server
local TOTAL_STARTUP_TIME=$(($(date +%s)-$START_TIME))
log_message "$SERVER_DESCRIPTION started in $TOTAL_STARTUP_TIME seconds... (Status: $HTTP_STATUS)"
return 0
}
# monitor when Tomcat has stopped
if [ "$SCRIPT_MODE" == "stop" ]; then
log_message "Tomcat listener shutting down..."
exit 0
elif [ "$SCRIPT_MODE" == "start" ]; then
log_message "Tomcat listener started..."
fi
# servers to warm up
warmup_server 'https://127.0.0.1' 'Localhost #1'
warmup_server 'https://127.0.0.2' 'Localhost #2'
This seems to be working exactly as I want. The service starts up when the server is rebooted, and starting/stopping/restarting Tomcat fires off the expected events. Since it's independent of the Tomcat service, I can restart this warmup script if needed. It also doesn't delay Tomcat's startup time: since it is its own service, it runs asynchronously, as I wanted.

Execute a script having curl call before shutdown in SysVinit

By default I am in runlevel 3, and during shutdown the system switches into runlevel 0. But I am not having any success putting my script (which makes a curl call) in /etc/rc0.d/: by the time runlevel 0 is reached, the network is already stopped, so the curl call cannot succeed.
How do I get the desired result?
Generally, in the older SysVinit systems, the boot and shutdown sequences were controlled by the alphanumeric order of symbolic links to your init script located in each runlevel directory under /etc/init.d (or /etc/rc.d/), where links numbered S## (start) were run during boot and links numbered K## (kill/stop) were run during shutdown. The services available at any given point in time are determined by what is running during the boot or shutdown sequence. For example, an older SuSE scheme would be:
/etc/init.d/
boot.d/
rc0.d/ # runlevel 0
rc1.d/ # runlevel 1
rc2.d/ # runlevel 2
rc3.d/ # runlevel 3
...
S01random # S## - Start init script ## in order 00 -> XX
S01resmgr
S02consolekit
S03haldaemon
S05network # network start
...
K01stopblktrace # KXX - Kill (stop) init script ## in order
K02atieventsd
K09cron
...
K14sshd
K15smbfs
K16apcupsd
K16auditd
K16nmb
K16portmap
K16splash_early
K17syslog
K18network # network shutdown
...
rc4.d/
rc5.d/
rc6.d/
rcS.d/
If you look at the boot/shutdown sequence for runlevel 3 in /etc/init.d/rc3.d/, you see that network start and shutdown are controlled by S05network on boot and K18network on shutdown. So if you wanted a custom script to run curl on shutdown prior to the network going down, you would need to create an init script and a soft link to it in /etc/init.d/rc3.d, numbered to run before the network services (ssh, etc.) are taken down. In the listing above, if you created and numbered the soft link to your kill script K10curlonsd (curl on shutdown), it would run after cron is stopped, but before any of the network services are taken down.
The scheme should still be similar on CentOS, although your /etc/init.d may be /etc/rc.d, etc.; the general approach is the same. Let me know if you have any questions.
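A minimal sketch of such an init script, with a placeholder URL (the script name curlonsd and the K10 numbering are just the example from above):
#!/bin/sh
# /etc/init.d/curlonsd - fire a curl call during shutdown, before the network stops
case "$1" in
stop)
curl -s http://example.com/notify-shutdown || true
;;
esac
exit 0
Linked into the runlevel directory so it runs early in the kill sequence:
ln -s ../curlonsd /etc/init.d/rc3.d/K10curlonsd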
