Breaking ordering cycle by deleting job - linux

We have several applications that run in Docker on Azure VMs (CentOS 7). We use systemd to manage their lifecycle in case of failure. A typical example is this service unit:
[Unit]
Description=Service app-subs
After=docker.service
Requires=docker.service
[Service]
Environment="JAVA_OPTS=-XX:+UseContainerSupport -XX:MaxRAMPercentage=70 -XshowSettings:vm"
TimeoutStartSec=0
Restart=always
ExecStartPre=-/usr/bin/docker stop app-subs
ExecStartPre=-/usr/bin/docker rm app-subs
ExecStartPre=/usr/bin/docker login registry.azure.io -u USER -p PASSWORD
ExecStart=/usr/bin/docker run \
--init \
--rm \
-p 8236:8080 \
registry.azure.io/app-subs:0.2.22
[Install]
WantedBy=multi-user.target
We have worked like this for all our Docker apps for two years, we never had a problem with this type of service, and we have not made any changes. Today one of our VMs triggered this error:
Apr 11 04:46:20 vm22 systemd: Found dependency on docker.service/start
Apr 11 04:46:20 vm22 systemd: Breaking ordering cycle by deleting job app-subs.service/start
Apr 11 04:46:20 vm22 systemd: Job app-subs.service/start deleted to break ordering cycle starting with docker.service/start
This leaves the service that launches our application inactive. We have not made any changes in a long time and have not updated this service recently. So we tried to figure out why we have an "ordering cycle" in our systemd services with these commands:
sudo systemctl show -p Requires,Wants,Requisite,BindsTo,PartOf,Before,After app-subs.service
Result:
Requires=docker.service system.slice basic.target
Requisite=
Wants=
BindsTo=
PartOf=
Before=shutdown.target multi-user.target
After=basic.target docker.service systemd-journald.socket system.slice
Docker service:
sudo systemctl show -p Requires,Wants,Requisite,BindsTo,PartOf,Before,After docker.service
Result:
Requires=basic.target docker.socket containerd.service system.slice
Requisite=
Wants=network-online.target
BindsTo=
PartOf=
Before=shutdown.target app-subs.service
After=basic.target docker.socket network-online.target multi-user.target systemd-journald.socket system.slice firewalld.service containerd.service
If I understand correctly, Docker has to be launched before app-subs.service but after multi-user.target, and our app has to be launched before multi-user.target but after docker.service. The multi-user.target step seems to be introduced by this part of our services:
[Install]
WantedBy=multi-user.target
which allows our service to be launched at startup.
We have the same configuration on multiple machines and apps. Only one machine triggers this error and I cannot figure out what happened to it. The machine rebooted during the night without explanation.
Does anyone have any idea what might have happened or how to make sure it doesn't happen again?
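One way to dig further is to ask systemd itself where the unexpected After=multi-user.target edge on docker.service comes from and to let it report the cycle. This is a hedged diagnostic sketch using standard systemd tooling and the unit names from above, not something taken from the original post:
# Show docker.service together with any drop-in fragments (e.g. files under
# /etc/systemd/system/docker.service.d/) that might add extra ordering
systemctl cat docker.service
# List the units ordered before/after docker.service
systemctl list-dependencies --before docker.service
systemctl list-dependencies --after docker.service
# Ask systemd to load and sanity-check the unit; ordering problems are reported
systemd-analyze verify app-subs.service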

Related

How can I change the docker unix service name to docker-ce?

I need to rename the docker.service to docker-ce.service. I know this is a bit unusual.
The reason is that I currently use kubespray to provision my Kubernetes cluster and it installs containerd for me as the runtime. In addition to this, for some specific machines only, I also require Docker to be installed on these machines. Kubespray is quite thorough when installing containerd, as it goes ahead and uninstalls Docker if it finds the service present. To overcome this, renaming the Docker service would work and allow Kubespray not to uninstall it; however, this is proving quite troublesome.
Changing the service itself is simple; however, docker.socket no longer starts and fails with the following:
[root@vm01234 system]# systemctl status docker.socket
● docker.socket - Docker Socket for the API
Loaded: loaded (/usr/lib/systemd/system/docker.socket; enabled; vendor preset: disabled)
Active: inactive (dead)
Listen: /var/run/docker.sock (Stream)
Dec 09 16:47:04 vm01234 systemd[1]: docker.socket: Socket service docker.service not loaded, refusing.
Dec 09 16:47:04 vm01234 systemd[1]: Failed to listen on Docker Socket for the API.
Had a look through all the Docker docs and other threads but couldn't find anything. Hoping someone is able to help.
Running on RHEL 8.6.
For reference, here are my service and socket files:
docker.socket
[Unit]
Description=Docker Socket for the API
[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
docker-ce.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target docker.socket firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/local/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3
# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
I was able to solve the problem by adding a Service= value to the [Socket] block.
[Unit]
Description=Docker Socket for the API
[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
Service=docker-ce.service
[Install]
WantedBy=sockets.target
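For the change to take effect, the units then need to be re-read and the socket restarted; a short sketch using standard systemctl commands and the unit names above:
sudo systemctl daemon-reload
sudo systemctl restart docker.socket docker-ce.service
systemctl status docker.socket   # should now report "active (listening)"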

Need a killed process to restart after a delay in Linux

Greetings to you first.
I am a university student, and my final-year project is to master an information security protocol for autonomous systems.
I have a task in this project:
when I type "kill [pid]" on the command line, the process should start again automatically after a delay of a few seconds.
How can I achieve this? Thank you in advance.
Use a systemd service.
Create a file test.service under /etc/systemd/system, such as:
[Unit]
Description=test service
[Service]
User=root
Restart=always
# number of seconds to wait before restarting
RestartSec=5
# Change this to a meaningful process
ExecStart=/bin/sleep 30000
[Install]
WantedBy=multi-user.target
To start the service: sudo systemctl start test.service
Then, if you kill the process, it will automatically restart after 5 seconds.
Note: if you modify the service file, you will have to run sudo systemctl daemon-reload and sudo systemctl restart test to pick up the change.
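To see the behaviour in action, a quick sketch (the pgrep pattern assumes the /bin/sleep example above; replace <PID> with the number pgrep prints):
sudo systemctl start test.service
pgrep -f 'sleep 30000'        # note the PID
sudo kill <PID>               # then wait about 5 seconds
pgrep -f 'sleep 30000'        # a new PID appears, restarted by systemd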

Daemon service in systemd

I have managed to install a daemon service in /etc/systemd/system; however, I am not sure about two things:
Whether the daemon services should reside there
How can I elegantly check whether a daemon service is installed or not in systemd?
1. Should the daemon services reside there?
Yes, it is the standard .service location. The file that you should put there looks like:
mydaemon.service
[Unit]
Description=ROT13 demo service
After=network.target
StartLimitIntervalSec=0
[Service]
Type=simple
Restart=always
RestartSec=1
User=**YourUser**
ExecStart=**pathToYourScript**
[Install]
WantedBy=multi-user.target
You’ll need to:
set your actual username after User=
set the proper path to your script in ExecStart= (usually under /usr/bin/; you can put your script there)
See also: creating-a-linux-service-with-systemd
2. How can I elegantly check whether a daemon service is installed in systemd?
systemctl has an is-active subcommand for this:
systemctl is-active --quiet service
will exit with status zero if service is active, non-zero otherwise, making it ideal for scripts:
systemctl is-active --quiet service && echo Service is running
which prints "Service is running" when the service is active.
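If the question is whether the unit is installed at all, rather than currently running, a small sketch (the name myservice.service is just a placeholder):
# exit status 0 if a unit file with that name is known to systemd
systemctl list-unit-files --type=service | grep -q '^myservice\.service'
# systemctl cat also exits non-zero if no such unit file exists
systemctl cat myservice.service > /dev/null 2>&1 && echo "installed"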

how to reload a pythonic service on centos 7?

I have a Python app that I made into a service on CentOS 7.
I created a file in /usr/lib/systemd/system with my project name and wrote this in it:
[Unit]
Description=My Script Service
After=multi-user.target
[Service]
Type=idle
ExecStart=/usr/bin/python3.6 /usr/src/python-project/sampleService-services/serverprotocol.py
[Install]
WantedBy=multi-user.target
After that:
$ sudo systemctl daemon-reload
$ sudo systemctl enable sampleService.service
$ sudo reboot
I can start, restart and stop this service with commands:
$ systemctl start sampleService.service
$ systemctl restart sampleService
$ systemctl stop sampleService
But when I try to reload it with either of these commands:
$ systemctl reload sampleService
or
$ service sampleService reload
I get this error:
Failed to reload sampleService.service: Job type reload is not applicable for unit sampleService.service.
See system logs and 'systemctl status sampleService.service' for details.
Is there any command to reload this Python service?
How can I reload my service without restarting it?
Under the ExecStart= line, try adding:
Restart=on-failure
RestartSec=10s
For systemctl reload ... to work, you need to provide an ExecReload= line in your unit (service) file. A common example is:
ExecReload=/bin/kill -HUP $MAINPID
That requires your program to catch and act on a SIGHUP signal. If your application has a different mechanism to trigger a reload of its configuration while running, then provide some other suitable command which generates that trigger.
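Applied to the unit from the question, it could look like the sketch below, which keeps the original paths; it only helps if serverprotocol.py actually catches SIGHUP and re-reads its configuration:
[Unit]
Description=My Script Service
After=multi-user.target
[Service]
Type=idle
ExecStart=/usr/bin/python3.6 /usr/src/python-project/sampleService-services/serverprotocol.py
# Send SIGHUP to the main process on "systemctl reload"; the script must handle it
ExecReload=/bin/kill -HUP $MAINPID
[Install]
WantedBy=multi-user.target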

gunicorn does not start after boot

I'm running a Debian web server with nginx and gunicorn running a django app. I've got everything up and running just fine but after rebooting the server I get a 502 bad gateway error. I've traced the issue back to gunicorn being inactive after the reboot. If I start the service the problem is fixed until I reboot the server again.
Starting the service:
systemctl start gunicorn.service
After the reboot here is my gunicorn service status:
{username}@instance-3:~$ sudo systemctl status gunicorn
● gunicorn.service - gunicorn daemon
Loaded: loaded (/etc/systemd/system/gunicorn.service; enabled)
Active: inactive (dead)
Contents of my /etc/systemd/system/gunicorn.service file:
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User={username}
Group={username}
WorkingDirectory=/home/{username}/web/{projname}
ExecStart=/usr/local/bin/gunicorn {projname}.wsgi:application
Restart=on-failure
[Install]
WantedBy=multi.user.target
Any ideas to figure out why the gunicorn service isn't starting after reboot?
Edit:
Could the issue be that gunicorn.conf uses a different directory in chdir and exec than the working directory?
{username}@instance-3:~$ cat /etc/init/gunicorn.conf
description "Gunicorn application server handling {projname}"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
setuid {username}
setgid {username}
chdir /home/data-reporting/draco_reporting
exec {projname}/bin/gunicorn --workers 3 --bind unix:/home/{username}/data-reporting/{projname}/{projname}.sock {projname}.wsgi:application
You have a small typo in your gunicorn.service file. Change to:
WantedBy=multi-user.target
Also, you may want to change to:
Restart=always
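After fixing the typo, the unit also has to be re-read and re-enabled so the symlink is created under the correct multi-user.target.wants directory; a sketch with standard systemctl commands for the gunicorn.service shown above:
sudo systemctl daemon-reload
sudo systemctl reenable gunicorn.service   # recreates the [Install] symlink
sudo systemctl start gunicorn.service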
I set up an old-school crontab and the problem was solved.
crontab -e
and then add:
@reboot sudo systemctl restart nginx && sudo systemctl restart gunicorn.service
and just save the crontab.
