Programming Rust to stop safely with systemctl - multithreading

I'm writing a very demanding program in Rust that uses a variable number of threads to process very important data, and I want to know whether there is a way to send it a stop signal with systemctl such that I can be sure it finishes its duties before stopping. Because it is demanding, makes HTTP requests, and the thread count varies, I cannot estimate how long to wait between sending the signal and the process actually being dead.
In essence, it is a daemon that loops until a variable is set to false, like this:
loop {
    // Process goes here
    if !is_alive {
        break;
    }
}
What I'm doing right now is having the program poll a "config.json" file to see whether it is "alive", but I don't think that's the best approach: I can't tell when the program has stopped, only that it is stopping, and not how long that will take. And done this way, systemctl keeps showing the service as alive even after I've shut it down manually.

If you want to experiment with systemd service behavior, I would take a look at the systemd documentation. In this case, I would direct you to the section about TimeoutStopSec.
According to the documentation, you can disable any timeout on systemd stop commands with TimeoutStopSec=infinity. This, combined with actually handling the SIGTERM signal that systemd uses by default, should do the trick.
Furthermore, there is the KillSignal option, with which you can specify the signal that is sent to your program to stop it, or ExecStop to specify a program to run in order to stop your service.
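For example, the relevant [Service] lines might look like this (the ExecStart path is a placeholder for your binary):
[Service]
ExecStart=/usr/local/bin/my-daemon
# SIGTERM is the default KillSignal; shown explicitly for clarity.
KillSignal=SIGTERM
# Wait indefinitely for the process to exit after "systemctl stop".
TimeoutStopSec=infinity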
With these you should be able to figure it out, I hope.
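To make the SIGTERM half concrete, here is a minimal sketch in Rust, assuming the signal-hook crate (the loop body and the cleanup step are placeholders, not your actual code). The handler only flips a flag; the loop drains its current work and exits, and with TimeoutStopSec=infinity systemd waits for that, however long it takes.
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Flag shared with the signal handler; SIGTERM sets it to true.
    let term = Arc::new(AtomicBool::new(false));
    signal_hook::flag::register(signal_hook::consts::SIGTERM, Arc::clone(&term))?;

    while !term.load(Ordering::Relaxed) {
        // Process goes here: spawn worker threads, make HTTP requests, etc.
    }

    // SIGTERM received: join worker threads and flush state before exiting.
    // systemd considers the service stopped only once this process exits.
    Ok(())
}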

Related

Flask-Restx(/Flask-Restplus) detect reload

My problem is the following: in my Flask-Restx application I created a runner thread which runs asynchronously alongside the main thread of the Flask application.
Now, when I make changes as usual, the debugger still shows * Detected change in 'XXXXX', reloading, which is a useful feature. The problem is that it now gets stuck and cannot reload because of the running thread, which must be stopped manually.
I would still like to use the automatic reload, if possible, in combination with the asynchronous runner thread. Is there a way to "detect" those reloads, for example by triggering an event or something similar? Then I could manually shut down the runner thread and restart it with the application. Or is there at least a way to keep it from blocking the reload, so that the flask-restx-related parts can proceed?
Thanks in advance for any help.
PS: I find it hard to add code here because I do not know which parts of the Flask app are important. If you need any code to answer the question, I will add it in an edit.
You need to make your thread a daemon thread if you want the reloader to work. The reloader tries to kill and restart the program (by killing the main thread), but because your other thread is not a daemon, it fails to kill the program and reload it. A daemon thread is one that only lives as long as the main thread lives, so making your other thread a daemon will fix your issue. Here is an example:
from threading import Thread
...
t = Thread(...)
t.daemon = True  # this makes your thread a daemon thread
t.start()

systemd: Stop dependent service when main service crashes

(systemd version 229)
I have a primary service A and a secondary service B. The primary A can run by itself, but service B cannot run correctly by itself: it needs A to be running (technically B can run, but this is what I want systemd to prevent). My goal: if A is not running, B should not run; and given that A and B are running, when A stops or dies/crashes, B should be stopped.
How do I achieve this?
I get close by adding [Unit] items to b.service, using
Requisite=A.service
After=A.service
The result of the above is that
B won't start unless A is running (good).
B is stopped when A is stopped (good).
However, if I kill A, service B continues to run (bad).
How can I fix this last behavior? Neither PartOf nor BindsTo seems to do the trick, but perhaps I don't have the right incantation of combined options? It's not clear to me from the man pages which options can be combined.
systemd.unit man page: https://www.freedesktop.org/software/systemd/man/systemd.unit.html
Related: Systemctl dependency failure, stop dependent services
You can use Requires=, PartOf=, or BindsTo=.
See this article for details of their usage.
To achieve your third objective, make use of the PartOf keyword.
In B.service you need to add a dependency on A under the [Unit] section, as below:
[Unit]
..
..
PartOf=A.service
With this, whenever A is killed, B will also stop.
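Putting the directives from the question and this answer together, the [Unit] section of b.service might look like this (the Description line is illustrative):
[Unit]
Description=Service B, needs service A
# From the question: B won't start unless A is already running.
Requisite=A.service
After=A.service
# Added per this answer: B is stopped when A stops or is killed.
PartOf=A.service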
If you start service A with Type=notify, you may be able to achieve something when A is terminated with SIGINT or SIGTERM: you can handle that signal and send a message to systemd over the socket in $NOTIFY_SOCKET. That is still not possible with SIGKILL, though. It's a bit involved, but it might achieve what you want.
You should also consider making A Restart=always. This will at least make sure that A remains available and B won't keep giving out errors. When you kill A outside of systemd, there is no way for systemd to know that A was killed, especially if you do so with kill -9 (SIGKILL cannot be handled). So one of the best ways to handle that is to make service A Restart=always.

How to detect restart before terminating program on Linux

I want to detect that the system is restarting, before it terminates my program on Linux.
I tried using the /var/run/utmp file to detect the runlevel, putting an inotify watch on its changes, but it seems the system closes my program before I get the notification. I do catch the shutdown this way if I set the runlevel with the telinit command, but not if I just restart with the button in the top-right corner in Ubuntu.
Any idea how it can be done?
Catch the SIGTERM signal, be quick with saving/doing whatever you need, and then exit. You've got approximately 10 seconds before you get SIGKILL, which you can't catch, and you'll be forcibly terminated.
If the system isn't sending you a SIGTERM to allow a proper shutdown, change your system to something proper; this is the standard way of doing it.
See man 7 signal and man 3 sigaction for signal handling.
(Note that I don't know of a standard way to check whether a system is rebooting or not; I don't think such a thing exists. But as mentioned above, a proper system will send you SIGTERM and let you do your cleanup and exit. Hard reboot excluded, because that's almost equivalent to pulling the power cord.)
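As a sketch of that advice in Rust, assuming the signal-hook crate (save_state is a hypothetical placeholder for your own cleanup), you can block until SIGTERM arrives and then save and exit quickly:
use signal_hook::consts::SIGTERM;
use signal_hook::iterator::Signals;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut signals = Signals::new([SIGTERM])?;
    // Do the real work on other threads; this thread just waits for SIGTERM.
    if signals.forever().next().is_some() {
        save_state(); // be quick: roughly 10 seconds before SIGKILL follows
    }
    Ok(())
}

// Hypothetical cleanup: flush buffers, write a checkpoint, close files.
fn save_state() {}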

What's the most efficient way to prevent a node.js script from terminating?

If I'm writing something simple and want it to run until explicitly terminated, is there a best practice for preventing script termination without blocking, burning CPU time, or preventing callbacks from working?
I'm assuming at that point I'd need some kind of event-loop implementation, or a way to keep handling events that come in from other async handlers (network IO, message queues)?
A specific example might be something along the lines of "I want my node script to sleep until a job is available via Beanstalkd".
I think the relevant counter-question is "How are you checking for the exit condition?".
If you're polling a web service, then the underlying setInterval() for the poll will keep the script alive until cancelled. If you're taking input from a stream, that should keep it alive until the stream closes, etc.
Basically, you must be monitoring something in order to know whether or not you should exit. That monitoring should be the thing keeping the script alive.
Node.js exits when it has nothing else to do.
If you listen on a port, it has something to do and a way to receive Beanstalkd commands, so it will wait.
Create a function that closes the port and you'll have your explicit exit, but it will wait for all current jobs to end before closing.

Linux, timing out on subprocess

OK, I need to write code that calls a script and, if the operation in the script hangs, terminates the process.
The preferred language is Python, but I'm also looking through the C and bash script documentation.
It seems like an easy problem, but I can't decide on the best solution.
From research so far:
Python: has some weird threading model where the virtual machine uses one thread at a time; won't work?
C: the preferred solution so far seems to be SIGALRM + fork + execl. But SIGALRM is not heap safe, so it can trash everything?
Bash: the timeout program? Not standard on all distros?
Since I'm a newbie to Linux, I'm probably unaware of 500 different gotchas with those functions, so can anyone tell me what's the safest and cleanest way?
Avoid SIGALRM, because there is not much you can safely do inside a signal handler.
As for the system calls you should use: in C, after doing the fork-exec to start the subprocess, you can periodically call waitpid(2) with the WNOHANG option to check whether the subprocess is still running. If waitpid returns 0 (the process is still running) and the desired timeout has passed, you can kill(2) the subprocess.
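As a sketch of that polling pattern (written in Rust rather than C; std::process::Command does the fork-exec, Child::try_wait is the non-blocking check analogous to waitpid with WNOHANG, and the script name is taken from the bash example below):
use std::process::Command;
use std::thread;
use std::time::{Duration, Instant};

fn main() -> std::io::Result<()> {
    // Spawn the script (fork + exec under the hood).
    let mut child = Command::new("sh").arg("long_time_script.sh").spawn()?;
    let deadline = Instant::now() + Duration::from_secs(30);

    loop {
        match child.try_wait()? {
            // The subprocess exited on its own within the timeout.
            Some(status) => {
                println!("exited with {status}");
                break;
            }
            // Still running past the deadline: kill it, then reap it.
            None if Instant::now() >= deadline => {
                child.kill()?;
                child.wait()?;
                println!("timed out and was killed");
                break;
            }
            // Still running, deadline not reached: poll again shortly.
            None => thread::sleep(Duration::from_millis(200)),
        }
    }
    Ok(())
}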
In bash you can do something similar to this:
start the script/program in the background with &
get the process id of the background process
sleep for some time
and then kill the process (if it has already finished, you cannot kill it), or first check whether the process is still alive and only then kill it.
Example:
sh long_time_script.sh &
pid=$!
sleep 30s
kill $pid
You can even try to use trap 'script_stopped $pid' SIGCHLD - see the bash man page for more info.
UPDATE: I found another command, timeout. It does exactly what you need - it runs a command with a time limit. Example:
timeout 10s sleep 15s
will kill the sleep after 10 seconds.
There is a collection of Python code that has features to do exactly this, and without too much difficulty if you know the APIs.
The Pycopia collection has the scheduler module for timing out functions, and the proctools module for spawning subprocesses and sending signals to them. The kill method can be used in this case.