How can I start docker.service after nslcd.service?

All my Unix hosts use the LDAP backend.
The docker group exists in LDAP, which is why docker.service must start after nslcd.service.
I have tried editing the systemd unit configuration for docker.service:
$ sudo systemctl edit --full docker.service
Then I added nslcd.service to After=, Wants=, and Requires=:
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target docker.socket firewalld.service nslcd.service
Wants=network-online.target nslcd.service
Requires=docker.socket nslcd.service
I still can't get docker to run after that service:
sudo service docker status
● docker.service - Docker Application Container Engine
Loaded: loaded (/etc/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: inactive (dead)
Docs: https://docs.docker.com
Oct 10 19:35:02 dev-08 systemd[1]: Dependency failed for Docker Application Container Engine.
There is no problem starting containers manually once the host is up, since I log in through LDAP.

The docker group exists in LDAP, which is why docker.service must start after nslcd.service.
It is generally a bad idea to have system services depend on users and groups in a remote directory service (because problems with the directory service can impact service availability on your host).
Then I added nslcd.service to After=, Wants=, and Requires=
Specifying both a Wants= and a Requires= relationship is redundant. The Requires= relationship is simply a stronger version of Wants=: with either one, starting the docker service will also start nslcd if it is not already running. The difference is that with Requires=, docker will not be started if nslcd fails to activate, and will be stopped if nslcd is explicitly stopped; with Wants=, docker starts (and keeps running) even if nslcd fails.
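With the redundancy removed, the dependency lines from the unit in the question would reduce to something like this (keeping nslcd.service only under Requires= and After=):
[Unit]
After=network-online.target docker.socket firewalld.service nslcd.service
Wants=network-online.target
Requires=docker.socket nslcd.service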
I still can't get docker to run after that service
It is entirely likely that nslcd takes a moment to connect to the directory service. In this case, it is possible that the nslcd process has started, which satisfies the After= ordering, so docker starts even though your groups are not yet available.
There are a few ways of addressing this situation:
1. In light of my initial comment, just create a local docker group. This is by far the simplest and most robust solution.
2. Create a new oneshot unit that spins until the docker group exists. Make this unit depend on nslcd, and make docker depend on the new unit.
3. Replace nslcd with something that implements local caching (e.g., sssd), which would probably also resolve this problem.
On a different note, it is a bad idea to directly edit unit files as you have done in this example: if you installed Docker via packaging tools (apt/yum/etc.), your modifications will be overwritten the next time you upgrade the package. A better solution is to create drop-in files that augment the unit configuration.
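For example (a minimal sketch), running sudo systemctl edit docker.service (without --full) creates a drop-in such as /etc/systemd/system/docker.service.d/override.conf, where only the extra dependency needs to be declared:
# /etc/systemd/system/docker.service.d/override.conf
# List-type settings such as After= and Requires= are merged with the
# packaged unit file, so a package upgrade will not wipe this out.
[Unit]
After=nslcd.service
Requires=nslcd.service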
Update
Option 2 might look like this:
[Unit]
Requires=nslcd.service docker.service
After=nslcd.service
Before=docker.service
[Service]
Type=oneshot
ExecStart=/bin/sh -c "while ! getent group docker; do sleep 1; done"
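Assuming that unit is saved under a hypothetical name such as /etc/systemd/system/wait-for-docker-group.service, docker can then be made to depend on it with a drop-in, for example:
# /etc/systemd/system/docker.service.d/wait-ldap.conf (hypothetical path)
[Unit]
After=wait-for-docker-group.service
Requires=wait-for-docker-group.service
Run sudo systemctl daemon-reload afterwards so systemd picks up the new unit and drop-in.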

Related

How can I change the docker unix service name to docker-ce?

I need to rename docker.service to docker-ce.service. I know this is a bit unusual.
The reason is that I currently use Kubespray to provision my Kubernetes cluster, and it installs containerd for me as the runtime. In addition to this, for some specific machines only, I also require Docker to be installed on those machines. Kubespray is quite thorough when installing containerd: it uninstalls Docker if it finds the service present. Renaming the Docker service would keep Kubespray from uninstalling it; however, this is proving quite troublesome.
Changing the service itself is simple; however, docker.socket no longer starts and fails with the following error:
[root@vm01234 system]# systemctl status docker.socket
● docker.socket - Docker Socket for the API
Loaded: loaded (/usr/lib/systemd/system/docker.socket; enabled; vendor preset: disabled)
Active: inactive (dead)
Listen: /var/run/docker.sock (Stream)
Dec 09 16:47:04 vm01234 systemd[1]: docker.socket: Socket service docker.service not loaded, refusing.
Dec 09 16:47:04 vm01234 systemd[1]: Failed to listen on Docker Socket for the API.
I had a look through all the Docker docs and other threads but couldn't find anything. Hoping someone is able to help.
I am running on RHEL 8.6.
For reference, here are my service and socket files:
docker.socket
[Unit]
Description=Docker Socket for the API
[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
docker-ce.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target docker.socket firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/local/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3
# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
I was able to solve the problem by adding a Service= value to the [Socket] section. By default, a socket unit activates the service of the same name (docker.socket looks for docker.service), so after the rename the socket has to be pointed at the new service explicitly:
[Unit]
Description=Docker Socket for the API
[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
Service=docker-ce.service
[Install]
WantedBy=sockets.target
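After renaming units, systemd also needs a daemon-reload before the socket can resolve the new service name; roughly:
sudo systemctl daemon-reload
sudo systemctl enable --now docker-ce.service docker.socket
systemctl status docker.socket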

systemd After=nginx.service is not working

I am trying to set up a custom systemd service on my Linux system and was experimenting with it.
The following is my custom service, which runs a bash script:
[Unit]
Description=Example systemd service.
After=nginx.service
[Service]
Type=simple
ExecStart=/bin/bash /usr/bin/test_service.sh
[Install]
WantedBy=multi-user.target
Since I have mentioned After=nginx.service, I was expecting the nginx service to start automatically.
So after starting the above service, I checked the status of nginx, which had not started.
However, if I replace After with Wants, it works.
Can someone explain the difference between After and Wants, and when to use which?
Specifying After=foo tells systemd how to order the units if they are both started at the same time. It will not cause the foo unit to autostart.
Use After=foo in combination with Wants=foo or Requires=foo to start foo if it's not already started and also to keep desired order of the units.
So your [Unit] should include:
[Unit]
Description=Example systemd service.
After=nginx.service
Wants=nginx.service
Difference between Wants and Requires:
Wants=: This directive is similar to Requires=, but less strict. Systemd will attempt to start any units listed here when this unit is activated. If these units are not found or fail to start, the current unit will continue to function. This is the recommended way to configure most dependency relationships. With Requires=, by contrast, if a listed unit fails to activate this unit will not be started, and if a listed unit is explicitly stopped this unit is stopped as well.
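To check how systemd resolved the relationships for a unit, you can inspect its dependency properties (the unit name here is illustrative):
systemctl show -p Wants -p Requires -p After example.service
systemctl list-dependencies example.service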

What's difference between these redis starting commands

sudo /etc/init.d/redis-server start
sudo service redis-server start
sudo systemctl start redis-server
sudo redis-server --daemonize yes
The last one is "nearest to the metal": it directly starts the Redis server process with no special options and is stand-alone. I would use this type of command when just "messing around" in the terminal with quick tests, or when trying to get an initial configuration tested and running.
The first 3 are all basically wrappers around starting the Redis server process to make it compatible with systemd or other Linux startup systems. They potentially add more layers of management, like:
logging to the systemd journal
saving the process id so the process can be killed or restarted
potentially specifying a different config file
potentially waiting for other services to become available before starting Redis
I would prefer one of the first three for routine, every-day, managed starting up of Redis on a production system.
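As a rough sketch (not the exact unit any particular distribution ships), a managed Redis unit typically encodes exactly those extras:
[Unit]
Description=Advanced key-value store
# wait for networking before starting Redis
After=network.target
[Service]
Type=forking
# a specific config file is passed explicitly
ExecStart=/usr/bin/redis-server /etc/redis/redis.conf
# the saved pid lets systemd kill or restart the process
PIDFile=/run/redis/redis-server.pid
ExecStop=/usr/bin/redis-cli shutdown
Restart=always
[Install]
WantedBy=multi-user.target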

Kubernetes Pod: Failed to get D-Bus Connection

I have a Docker container based on centos/systemd. I run the container with
docker run -d --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro <image>
Then I can access the container with:
docker exec -ti <containerID> /bin/bash
Then I can list all loaded units with the systemctl command. This works fine.
Now I want to deploy the image into a Kubernetes cluster. This also works fine, and I can access the running pod in the cluster via kubectl exec -ti <pod> /bin/bash
If I now type the command systemctl, I get the error message
Failed to get D-Bus connection: Operation not permitted
How is it possible to make systemd/systemctl available in the pod?
HINT: I need systemd because of the software running inside the container, so supervisord is not an option here.
It is a sad observation that the old proposal from Daniel Walsh (Red Hat) is still floating around, which includes a hint to run a "privileged container" to get some systemd behaviour, basically by talking to the daemon outside of the container.
Drop that. Just forget it. You can't get that in a real cluster without violating its basic design.
And in most cases, the requirement for systemd in a container is not very strict when you look closer. There are quite a number of service-manager and init-daemon implementations for containers. You could try the docker-systemctl-replacement script, for example.
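For instance, the docker-systemctl-replacement script can be dropped in as the container's init; a Dockerfile might look roughly like this (paths are illustrative):
FROM centos:7
# install your services and their unit files here...
# replace the real systemctl with the replacement script
COPY systemctl.py /usr/bin/systemctl
# run the script as PID 1 so it starts the enabled units
CMD ["/usr/bin/systemctl"]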
The command to start systemd would have to be in a script in the container. I use /usr/sbin/init or /usr/lib/systemd/systemd --system --unit=basic.target. Additionally, you need to start systemd with a tmpfs mounted on /run to store runtime information. Scripting this is not easy; Tableau is a good example of software for which it is being done.
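A minimal invocation along those lines (no --privileged, per the next point; the image name is a placeholder) might be:
docker run -d --tmpfs /run --tmpfs /run/lock \
    -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
    <image> /usr/sbin/init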
Also, I recommend avoiding --privileged at all costs: it is a security risk, plus you may accidentally alter or bring down the host with changes made inside the container.

Systemd service failing on startup

I'm trying to get a nodejs server to run on startup, so I created the following systemd unit file:
[Unit]
Description=TI SensorTag Communicator
After=network.target
[Service]
ExecStart=/usr/bin/node /home/pi/sensortag-comm/sensortag.js
User=root
[Install]
WantedBy=multi-user.target
I'm not sure what I'm doing wrong here. It seems to fail before the nodejs script even starts, as no logging occurs. My script depends on MySQL 5.5 (I think this is where I'm running into an issue). Any insight, or even a different solution, would be appreciated.
Also, it runs fine once I'm logged into the system.
Update
The service is enabled, and is logging through journalctl. I'll update with the results on 7/11/16.
Not sure why it didn't work the first time, but upon checking journalctl the issue was 100% that MySQL hadn't started. I once again changed it to After=mysql.service and it worked perfectly!
If there is no mention of the service at all in the output of journalctl, that could indicate that the service was not enabled to start at boot.
Make sure you run systemctl enable my-unit-name before your next boot test.
Also, since you depend on MySQL being up and running, you should declare that with something like: After=mysql.service. The exact service name may depend on your Linux distribution, which you didn't state.
Adding User=root adds nothing, as system units run as root by default anyway.
When you said "it fails", you didn't specify whether it was failing at boot time, or with a test run by systemctl start my-unit-name.
After attempting to start a service, there should be logging if you run journalctl -u my-unit-name.service.
You might also consider adding StandardOutput=journal to your unit file to make sure you capture output from the service you are running as well.
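Putting those suggestions together, the unit might end up looking like this (the exact MySQL unit name, mysql.service here, depends on your distribution):
[Unit]
Description=TI SensorTag Communicator
# order after the network and MySQL, and pull MySQL in if it isn't running
After=network.target mysql.service
Wants=mysql.service
[Service]
ExecStart=/usr/bin/node /home/pi/sensortag-comm/sensortag.js
# capture the script's output in the journal
StandardOutput=journal
[Install]
WantedBy=multi-user.target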
