Will systemd block system startup if a daemon does not fork?

If I configure a systemd service with Type=forking and TimeoutStartSec=infinity, will system startup block if the configured service never goes into the background?
If not, what are the side effects of having such a configuration?
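For concreteness, a minimal sketch of the kind of unit I mean (the unit name and ExecStart path are made up):
# stuck.service (hypothetical)
[Service]
Type=forking
TimeoutStartSec=infinity
# suppose this binary never forks into the background
ExecStart=/usr/local/bin/stuck-daemon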

systemd was written to replace SysV init, which by then already supported parallel startup by declaring dependencies rather than a simple priority (an order defined by a two-digit prefix, such as 05-service and 67-daemon). But SysV was not constrained in any way, so most services were not properly defined. (The priority was encoded in the filename, and the dependencies were declared in a comment block at the top of each init script.)
systemd drew heavily on that concept of starting things in parallel by implementing a make-like mechanism: you can say that B is to be built only once A has been built.
# Makefile
B: A
	generate-B
A: A.c
	gcc -o A A.c
So systemd in general won't be blocked by one rogue service. However, if you now create a second service (B in my make example) that depends on the service which never finishes starting as expected, that second service will never be started. i.e.
# Makefile
B: A
	generate-B   # never reached, since A never ends
A: A.c
	sleep forever
In other words, since your OS doesn't depend on your service, it will still boot as expected. Your environment, however, will be affected if you start creating dependencies on that first service. On the other hand, there are probably various kinds of failsafe that circumvent, at least partially, the kind of setup you are describing.
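To put the make analogy in unit terms, a sketch (the unit names a.service and b.service and the ExecStart path here are invented):
# b.service (sketch): B waits for A, just like in the Makefile above
[Unit]
Requires=a.service
After=a.service

[Service]
ExecStart=/usr/local/bin/b-daemon
If a.service is Type=forking with TimeoutStartSec=infinity and never forks, the start job for b.service simply waits forever, while units that do not depend on a.service start normally.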

Related

systemd: Stop dependent service when main service crashes

(systemd version 229)
I have a primary service A, and a secondary service B. The primary A can run by itself. But service B cannot run correctly by itself: it needs A to be running (technically B can run, but this is what I want systemd to prevent). My goal: If A is not running, B should not run. Given that A and B are running, when A stops or dies/crashes, then B should be stopped.
How do I achieve this?
I get close by adding [Unit] items to b.service, using
Requisite=A.service
After=A.service
The result of the above is that
B won't start unless A is running (good).
B is stopped when A is stopped (good).
However, if I kill A, service B continues to run (bad).
How can I fix this last behavior? Neither PartOf nor BindsTo seems to do the trick, but perhaps I don't have the right combination of options? It's not clear to me from the man pages which options can be combined.
systemd.unit man page: https://www.freedesktop.org/software/systemd/man/systemd.unit.html
Related: Systemctl dependency failure, stop dependent services
You can use Requires=, PartOf=, or BindsTo=.
See this article for details of their usage.
To achieve your third objective, make use of the PartOf= directive.
In B.service you need to add a dependency on A under the [Unit] section, as below:
[Unit]
Requisite=A.service
After=A.service
PartOf=A.service
With this, whenever A is killed, B will also be stopped.
If you start service A with Type=notify, you may be able to get part of the way there: when A is terminated with SIGINT or SIGTERM, it can handle the signal and send a message to systemd over $NOTIFY_SOCKET, but that is still not possible with SIGKILL. It's a bit involved, but it might achieve what you want.
You should also consider giving A Restart=always. This will at least make sure that A remains available and that B doesn't keep producing errors. When you kill A outside of systemd, there is no clean shutdown A can perform, especially if you do so with kill -9 (SIGKILL cannot be handled), so one of the better ways to handle that is to make service A Restart=always.
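A minimal sketch of that suggestion (the unit name and ExecStart path are hypothetical):
# A.service (sketch)
[Service]
Type=simple
ExecStart=/usr/local/bin/a-daemon
Restart=always
RestartSec=2
With this, even a kill -9 of A only results in a short gap before systemd starts A again.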

How can I run a service when the runlevel changes

I want to automatically start the proftpd service when the runlevel changes from 2 to 5. When it changes back to 2 it should be stopped again.
Any ideas?
If you use sysvinit, the procedure is easy: just have a K??yourServiceName script in /etc/rc2.d and an S??yourServiceName in /etc/rc5.d. They will be called with the runlevel in the $RUNLEVEL environment variable and with a stop or start parameter, respectively. The ?? represents two digits that define the order of execution (the priority).
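As a rough example (the two-digit priorities are placeholders, and an /etc/init.d/proftpd script is assumed to exist), the links could look like this:
ln -s ../init.d/proftpd /etc/rc5.d/S20proftpd   # started when entering runlevel 5
ln -s ../init.d/proftpd /etc/rc2.d/K80proftpd   # stopped when entering runlevel 2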
This has been replaced in newer scripts (mainly in Debian, but I think other distributions follow this approach as well) by header fields in the scripts themselves that declare the dependencies between scripts; execution is done in parallel for scripts that don't depend on each other, and serially for scripts that do. You can read about this approach in the scripts themselves. The scripts are normally installed in /etc/init.d, and symbolic links are made from there to the proper runlevel directory, with the proper two-digit prefix, by the utilities that manage this.
Finally, if you use systemd (which has completely replaced the SysV init process), there is another way to deal with it. You'll have to look at the documentation for systemd(8), as I'm not familiar with it; I only know it is a D-Bus service provider and that processes communicate with it via that mechanism.
The first two methods are somewhat interoperable: if you fix the priority of execution and don't fill in the dependencies, the SysV init process will still respect it.
Edit
This approach assumes you run proftpd as a standalone service (not as a dependent of xinetd(8) or inetd(8)) and that it has scripts to start and stop it on a runlevel change.
If you need to run it under xinetd(8), I don't know whether xinetd has parameters that let you serve based on the runlevel. If it does, you are lucky; if it doesn't, you will have to switch your approach.

Linux: Default priority of applications that run dynamically as tasks

I'm a new Linux user and am confused about the task priorities of applications that are run dynamically at runtime.
Here's the scenario:
1. I create an application called myApplication and install it in one of the bin folders (/usr/sbin)
2. This task is not run until
a. I start it explicitly from a shell, or
b. I call it from a script based on some event.
The application executes and terminates.
Q1. How do I know its priority?
Q2. Will it take the default priority and nice value? I see that my application's nice value is 0, which I assume is the default.
Q3. How can I find the priority of all such applications that are installed in one of the bin folders but are called run time and terminate after their desired work is done?
I thoroughly searched the forum before posting this query and I apologize if it has already been answered.
Many many thanks in advance.
Keshav
Programs don't generally have an associated nice value. The default is for spawned processes to inherit the parent's niceness, regardless of which program is being started. To change this you can use the nice or renice command-line utilities, or the setpriority() system call.
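For example (the PID and paths are placeholders):
nice -n 10 /usr/sbin/myApplication    # start with a niceness of 10
renice -n 5 -p 1234                   # change the niceness of running PID 1234
ps -o pid,ni,comm -p 1234             # inspect the nice value of a process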

Multiple Process InitScript Logic

I am developing initscripts for some of our software, and am having difficulty deciding how I should use it for a particular piece.
We have homegrown software responsible for passing data around our network; it's built on a standard pub/sub model. There is a publisher process (two, actually, for two different use cases), a broker process, and a subscriber process. Any combination of these processes, and even multiple instances of the same process, can run simultaneously on a given box. I'm having trouble deciding how best to allow this to be configured. Since it can vary from box to box, that will likely go into /etc/sysconfig/pubsub, which will be read in by the initscript.
The only things I have to allow to be configured are (1) the process name, which is one of log_publish, dir_publish, broker, and subscribe, and (2) the configuration file that corresponds to that particular process.
I wish to avoid telling people how to modify the initscript per box in order to change the list of running processes, so this unique configuration file per box is the best way I can come up with to accomplish that.
I assume this also means that I will need some kind of unique identifier per process on the box, as I intend to use the touch /var/lock/subsys/* method that most Red Hat initscripts already use to keep a process from running twice. Knowing this, the identifier can't simply be random, otherwise it will never effectively prevent duplicate processes with the same configuration file (because, again, I need to be able to run multiple processes with different configuration files).
I have no idea how best to represent this in configuration.
I've implemented this similarly to how VNC does it when run as an initscript.
If you look at your distro's configuration file for the VNC init script (e.g. RedHat/CentOS: /etc/sysconfig/vncservers), you see this:
# The VNCSERVERS variable is a list of display:user pairs.
#
# Uncomment the line below to start a VNC server on display :1
# as my 'myusername' (adjust this to your own). You will also
# need to set a VNC password; run 'man vncpasswd' to see how
# to do that.
#
# DO NOT RUN THIS SERVICE if your local area network is
# untrusted! For a secure way of using VNC, see
# <URL:http://www.uk.research.att.com/vnc/sshvnc.html>.
# VNCSERVERS="1:myusername"
# VNCSERVERARGS[1]="-geometry 800x600"
Pretty straightforward: you define a display number, and matching parameters if necessary.
So now, I have, for example:
PUBSUBPROCS="1:publish 2:broker 3:subscribe"
PUBSUBARGS[1]="/config/publish.cfg"
PUBSUBARGS[2]="/config/broker.cfg"
PUBSUBARGS[3]="/config/subscribe.cfg"
Almost all of the logic for parsing this was also lifted from the vncserver initscript, which I won't post here for length reasons.
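As a rough sketch of what that parsing could look like (the --config flag, binary locations, and lock-file names here are assumptions, not the actual script):
# inside the initscript's start() function (hypothetical)
. /etc/init.d/functions
. /etc/sysconfig/pubsub

for entry in $PUBSUBPROCS; do
    id=${entry%%:*}                    # e.g. "1"
    proc=${entry##*:}                  # e.g. "publish"
    cfg=${PUBSUBARGS[$id]}             # e.g. "/config/publish.cfg"
    daemon "/usr/bin/$proc" --config "$cfg"
    # per-instance lock, as discussed in the question
    touch "/var/lock/subsys/pubsub-$proc-$id"
done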
I'd say have multiple initscripts, one per process type, and then let the configuration for each determine how many of that process to spawn.

Automatically adjusting process priorities under Linux

I'm trying to write a program that automatically sets process priorities based on a configuration file (basically path - priority pairs).
I thought the best solution would be a kernel module that replaces the execve() system call. Unfortunately, the system call table isn't exported in kernel versions > 2.6.0, so it's not possible to replace system calls without really ugly hacks.
I do not want to do the following:
- Replace binaries with shell scripts that start and renice the binaries.
- Patch/recompile my stock Ubuntu kernel.
- Do ugly hacks like reading kernel executable memory and guessing the syscall table location.
- Poll running processes.
I really want to:
- Be able to control the priority of any process based on its executable path and a configuration file. Rules apply to any user.
Do any of you have ideas on how to complete this task?
If you've settled for a polling solution, most of the features you want to implement already exist in the Automatic Nice Daemon. You can configure nice levels for processes based on process name, user and group. It's even possible to adjust process priorities dynamically based on how much CPU time it has used so far.
Sometimes polling is a necessity, and, believe it or not, it can even turn out to be the more efficient option. It depends on a lot of variables.
If the polling overhead is low enough, it is far preferable to the added complexity, cost, and risk of developing your own kernel hooks to get notified of the changes you need. That said, when hooks or notification events are available, or can easily be injected, they should certainly be used if the situation calls for it.
This is classic programmer 'perfection' thinking. As engineers, we strive for perfection. This is the real world, though, and sometimes compromises must be made. Ironically, the more perfect solution may be the less efficient one in some cases.
I develop a similar process and process-priority optimization tool for Windows called Process Lasso (not an advertisement; it's free). I had a similar choice to make and have a hybrid solution in place. Kernel-mode hooks are available for certain process-related events in Windows (creation and destruction), but not only are they not exposed in user mode, they also aren't helpful for monitoring other process metrics. I don't think any OS will natively inform you of every change to every process metric. The overhead of that many different hooks might be much greater than simple polling.
Lastly, considering the HIGH frequency of process changes, it may be better to handle all changes at once (polling at an interval) than via notification events/hooks, which may have to be processed many more times per second.
You are RIGHT to stay away from scripts. Why? Because they are slower. Of course, the Linux scheduler does a fairly good job of handling CPU-bound threads by lowering their priority and rewarding (raising) the priority of I/O-bound threads, so even under high load a script should remain responsive, I guess.
There's another point of attack you might consider: replace the system's dynamic linker with a modified one which applies your logic. (See this paper for some nice examples of what's possible from the largely neglected art of linker hacking).
Where this approach will have problems is with purely statically linked binaries. I doubt there's much on a modern system which actually doesn't link something dynamically (things like busybox-static being the obvious exceptions, although you might regard the ability to get a minimal shell outside of your controls as a feature when it all goes horribly wrong), so this may not be a big deal. On the other hand, if the priority policies are intended to bring some order to an overloaded shared multi-user system then you might see smart users preparing static-linked versions of apps to avoid linker-imposed priorities.
Sure, just iterate through /proc/nnn/exe to get the pathname of the running image. Only use the ones with slashes, the others are kernel procs.
Check to see if you have already processed that one, otherwise look up the new priority in your configuration file and use renice(8) to tweak its priority.
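A crude polling sketch along those lines (the config file path and its "path niceness" line format are assumptions):
#!/bin/sh
# one pass; a real version would remember which PIDs it already reniced
# and repeat this at an interval
while read -r path niceness; do
    for exe in /proc/[0-9]*/exe; do
        # kernel threads have no exe link, so readlink fails for them
        target=$(readlink "$exe" 2>/dev/null) || continue
        if [ "$target" = "$path" ]; then
            pid=${exe#/proc/}
            pid=${pid%/exe}
            renice -n "$niceness" -p "$pid"
        fi
    done
done < /etc/process-priorities.conf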
If you want to do it as a kernel module then you could look into making your own binary loader. See the following kernel source files for examples:
$KERNEL_SOURCE/fs/binfmt_elf.c
$KERNEL_SOURCE/fs/binfmt_misc.c
$KERNEL_SOURCE/fs/binfmt_script.c
They can give you a first idea of where to start.
You could modify the ELF loader to check for an additional section in ELF files and, when found, use its contents to change the scheduling priority. You then would not even need to manage separate configuration files; simply add a new section to every ELF executable you want to manage this way, and you are done. See objcopy/objdump in the binutils tools for how to add new sections to ELF files.
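A rough example of embedding such a section (the section name .sched_prio is made up; your modified loader would have to look for it):
echo -n "10" > prio.txt                # the niceness value to embed
objcopy --add-section .sched_prio=prio.txt myApplication myApplication.prio
objdump -s -j .sched_prio myApplication.prio   # inspect the embedded section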
Do any of you have ideas on how to complete this task?
As an idea, consider using AppArmor in complain mode. That would log certain messages to syslog, which you could listen to.
If the processes in question are started by executing an executable file with a known path, you can use the inotify mechanism to watch for events on that file. Executing it will trigger an IN_OPEN and an IN_ACCESS event.
Unfortunately, this won't tell you which process caused the event to trigger, but you can then check which /proc/*/exe links point to the executable file in question and renice the process IDs found.
E.g. here is a crude implementation in Perl using Linux::Inotify2 (which, on Ubuntu, is provided by the liblinux-inotify2-perl package):
perl -MLinux::Inotify2 -e '
    use warnings;
    use strict;
    my $x = shift(@ARGV);                  # the executable file to watch
    my $w = new Linux::Inotify2;
    $w->watch($x, IN_ACCESS, sub
    {
        # find processes whose /proc/<pid>/exe points at the watched file
        for (glob("/proc/*/exe"))
        {
            if (-r $_ && readlink($_) eq $x && m#^/proc/(\d+)/#)
            {
                system(@ARGV, $1)          # run the remaining arguments with the PID appended
            }
        }
    });
    1 while $w->poll
' /bin/ls renice
You can of course save the Perl code to a file, say onexecuting, prepend a first line #!/usr/bin/env perl, make the file executable, put it on your $PATH, and from then on use onexecuting /bin/ls renice.
Then you can use this utility as a basis for implementing various policies for renicing executables (or doing other things).
