Enabled systemd services not starting at boot anymore - Linux

I made some custom systemd services a long time ago; they all have the same configuration (except for ExecStart, of course).
This configuration worked for years. I have had Ubuntu up and running since version 18.04 LTS, but now it looks like some of these systemd services aren't starting at boot at all. The configuration is as follows (myapp.service):
[Unit]
Description="myapp"
After=syslog.target network-online.target
Wants=network-online.target
[Service]
Restart=always
RestartSec=10
User=root
Group=root
WorkingDirectory=/opt/myapp
ExecStart=/usr/local/bin/myapp
KillMode=control-group
[Install]
WantedBy=multi-user.target
The service is enabled:
$ sudo systemctl enable myapp
Created symlink /etc/systemd/system/multi-user.target.wants/myapp.service → /lib/systemd/system/myapp.service.
If i do "systemctl status myapp" after a reboot:
● myapp.service - "myapp"
Loaded: loaded (/lib/systemd/system/myapp.service; enabled; vendor preset: enabled)
Active: inactive (dead)
If i do "journalctl -u myapp -f" after a reboot:
Jan 13 12:10:06 myhost systemd[1]: Started myapp.
Jan 17 07:15:03 myhost systemd[1]: Stopping myapp...
Jan 17 07:15:09 myhost systemd[1]: Stopped myapp.
What's wrong with my configuration?
If I manually start /usr/local/bin/myapp, the script runs without errors. I've also tried running it with tmux; it has now been running in the background for 3 days with no errors. But systemd just won't start it after a reboot.
Today I also tried installing a new package that ships a systemd service: zram-config, which is enabled at boot by default.
But after apt install zram-config && sudo reboot:
$ sudo systemctl status zram-config
● zram-config.service - Initializes zram swaping
Loaded: loaded (/lib/systemd/system/zram-config.service; enabled; vendor preset: enabled)
Active: inactive (dead)
But if I now run:
$ sudo systemctl start zram-config
$ sudo systemctl status zram-config
● zram-config.service - Initializes zram swaping
Loaded: loaded (/lib/systemd/system/zram-config.service; enabled; vendor preset: enabled)
Active: active (exited) since Mon 2020-01-27 12:25:55 CET; 1s ago
Process: 5541 ExecStart=/usr/bin/init-zram-swapping (code=exited, status=0/SUCCESS)
Main PID: 5541 (code=exited, status=0/SUCCESS)
Jan 27 12:25:55 myhost systemd[1]: Starting Initializes zram swaping...
Jan 27 12:25:55 myhost init-zram-swapping[5541]: Setting up swapspace version 1, size = 985,7 MiB (1033568256 bytes)
Jan 27 12:25:55 myhost init-zram-swapping[5541]: no label, UUID=4ac5c2cd-0c68-4f6d-a5c0-d8f91a509c71
Jan 27 12:25:55 myhost init-zram-swapping[5541]: Setting up swapspace version 1, size = 985,7 MiB (1033568256 bytes)
Jan 27 12:25:55 myhost init-zram-swapping[5541]: no label, UUID=83a4f201-d591-4222-89a6-5bc5aebedef4
Jan 27 12:25:55 myhost init-zram-swapping[5541]: Setting up swapspace version 1, size = 985,7 MiB (1033568256 bytes)
Jan 27 12:25:55 myhost init-zram-swapping[5541]: no label, UUID=1f6f742e-6fb8-4332-b226-bf6918f7ee28
Jan 27 12:25:55 myhost init-zram-swapping[5541]: Setting up swapspace version 1, size = 985,7 MiB (1033568256 bytes)
Jan 27 12:25:55 myhost init-zram-swapping[5541]: no label, UUID=a5509c55-46f5-4112-8fe1-68171f31409e
Jan 27 12:25:55 myhost systemd[1]: Started Initializes zram swaping.
I really don't understand what's wrong with systemd on my Ubuntu install. Would it be better to do a fresh reinstall of the whole OS?
Thanks

Check the full output of journalctl for a message about ordering cycles, like:
Job <your.service> deleted to break ordering cycle starting with <something else>
I had a similar issue. It was caused by an ordering cycle, which was very hard to debug and fix.
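For example, one way to hunt for that message in the journal of the current boot, and to let systemd flag dependency or syntax problems in the unit itself, is sketched below (the grep pattern is only an illustration):
$ journalctl -b | grep -i "ordering cycle"
$ systemd-analyze verify /lib/systemd/system/myapp.service
If the first command shows your unit being deleted to break a cycle, the units named in that message are the ones to untangle.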

Related

Unable to run MongoDB after installation

I'm on Fedora 34 and trying to install MongoDB on this machine.
I followed the installation instructions from the official docs here. Everything installed correctly, but now I'm unable to start the service.
I executed sudo systemctl start mongod and it showed
root@fedora /v/l/mongodb [100]# systemctl start mongod
Job for mongod.service failed because the control process exited with error code.
See "systemctl status mongod.service" and "journalctl -xeu mongod.service" for details.
This is the source for mongod.service in /usr/lib/systemd/system/mongod.service
[Unit]
Description=MongoDB Database Server
Documentation=https://docs.mongodb.org/manual
After=network-online.target
Wants=network-online.target
[Service]
User=mongod
Group=mongod
Environment="OPTIONS=-f /etc/mongod.conf"
EnvironmentFile=-/etc/sysconfig/mongod
ExecStart=/usr/bin/mongod $OPTIONS
ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb
ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb
ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb
PermissionsStartOnly=true
PIDFile=/var/run/mongodb/mongod.pid
Type=forking
# file size
LimitFSIZE=infinity
# cpu time
LimitCPU=infinity
# virtual memory size
LimitAS=infinity
# open files
LimitNOFILE=64000
# processes/threads
LimitNPROC=64000
# locked memory
LimitMEMLOCK=infinity
# total threads (user+kernel)
TasksMax=infinity
TasksAccounting=false
# Recommended limits for mongod as specified in
# https://docs.mongodb.com/manual/reference/ulimit/#recommended-ulimit-settings
[Install]
WantedBy=multi-user.target
I think the problem begins at this line:
ExecStart=/usr/bin/mongod $OPTIONS
I'm a noob with systemd, daemons, and services.
This is the log from systemctl status mongod.service
root@fedora /v/l/mongodb [1]# systemctl status mongod.service
× mongod.service - MongoDB Database Server
Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Sun 2021-08-01 11:18:36 IST; 16min ago
Docs: https://docs.mongodb.org/manual
Process: 8116 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)
Process: 8117 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)
Process: 8118 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)
Process: 8119 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=14)
CPU: 149ms
Aug 01 11:18:36 fedora systemd[1]: Starting MongoDB Database Server...
Aug 01 11:18:36 fedora mongod[8119]: about to fork child process, waiting until server is ready for connections.
Aug 01 11:18:36 fedora mongod[8122]: forked process: 8122
Aug 01 11:18:36 fedora mongod[8119]: ERROR: child process failed, exited with 14
Aug 01 11:18:36 fedora mongod[8119]: To see additional information in this output, start without the "--fork" option.
Aug 01 11:18:36 fedora systemd[1]: mongod.service: Control process exited, code=exited, status=14/n/a
Aug 01 11:18:36 fedora systemd[1]: mongod.service: Failed with result 'exit-code'.
Aug 01 11:18:36 fedora systemd[1]: Failed to start MongoDB Database Server.
Your mongod exited with status code 14, as systemctl status says.
You can find the answer here.
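For what it's worth, exit code 14 from mongod is very often a permissions problem on the data directory, log directory, or socket file (for example after mongod was once run manually as root). A hedged sketch of checks, assuming the default paths used by the Fedora/RHEL package (/var/lib/mongo, /var/log/mongodb, and the /tmp/mongodb-27017.sock socket, if it exists):
# Read mongod's own log for the concrete error
$ sudo tail -n 50 /var/log/mongodb/mongod.log
# If anything got owned by root, hand it back to the mongod user and retry
$ sudo chown -R mongod:mongod /var/lib/mongo /var/log/mongodb
$ sudo chown mongod:mongod /tmp/mongodb-27017.sock
$ sudo systemctl start mongod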

I can't start SonarQube on CentOS 7 because the PID keeps changing rapidly

I am trying to integrate Jenkins 2 and SonarQube 6.7.3 on CentOS 7, following this tutorial:
https://www.youtube.com/watch?v=osc0j_Z1x0w
I installed and configured all the software (Java, PostgreSQL, SonarQube, ...). I am really confused about how to start Sonar correctly so that I can open the Sonar dashboard in Firefox at http://localhost:9000/. Right now I can't open it, because the sonar service does not start correctly and the PID keeps changing rapidly:
[root@localhost ~]# service sonar status
SonarQube is running (36211).
[root@localhost ~]# service sonar status
SonarQube is not running.
[root@localhost ~]# service sonar status
SonarQube is running (36602).
[root@localhost ~]# service sonar status
SonarQube is running (36602).
[root@localhost ~]# service sonar status
SonarQube is not running.
[root@localhost ~]# service sonar status
SonarQube is running (36993).
[root@localhost ~]# service sonar status
SonarQube is running (36993).
[root@localhost ~]#
Sonar can't be opened in Firefox.
I started and enabled PostgreSQL with these commands:
[root@localhost ~]# systemctl start postgresql-9.6
[root@localhost ~]# systemctl enable postgresql-9.6
[root@localhost ~]# systemctl status postgresql-9.6
● postgresql-9.6.service - PostgreSQL 9.6 database server
Loaded: loaded (/usr/lib/systemd/system/postgresql-9.6.service; enabled;
vendor preset: disabled)
Active: active (running) since Mon 2018-05-07 11:44:12 +0430; 1h 26min ago
Docs: https://www.postgresql.org/docs/9.6/static/
***Main PID: 1288 (postmaster)***
CGroup: /system.slice/postgresql-9.6.service
├─1288 /usr/pgsql-9.6/bin/postmaster -D /var/lib/pgsql/9.6/data/
├─1626 postgres: logger process
├─1630 postgres: checkpointer process
├─1631 postgres: writer process
├─1632 postgres: wal writer process
├─1633 postgres: autovacuum launcher process
└─1634 postgres: stats collector process
May 07 11:43:57 localhost.localdomain systemd[1]: Starting PostgreSQL 9.6
database server...
May 07 11:44:11 localhost.localdomain postmaster[1288]: < 2018-05-07
11:44:11.217 +0430 > LOG: redir...ess
May 07 11:44:11 localhost.localdomain postmaster[1288]: < 2018-05-07
11:44:11.217 +0430 > HINT: Futu...g".
May 07 11:44:12 localhost.localdomain systemd[1]: Started PostgreSQL 9.6
database server.
Hint: Some lines were ellipsized, use -l to show in full.
When I run systemctl status postgresql-9.6 again, the output is the same as the previous run, with Main PID: 1288 (postmaster):
[root@localhost ~]# systemctl status postgresql-9.6
● postgresql-9.6.service - PostgreSQL 9.6 database server
Loaded: loaded (/usr/lib/systemd/system/postgresql-9.6.service; enabled;
vendor preset: disabled)
Active: active (running) since Mon 2018-05-07 11:44:12 +0430; 1h 29min ago
Docs: https://www.postgresql.org/docs/9.6/static/
***Main PID: 1288 (postmaster)***
CGroup: /system.slice/postgresql-9.6.service
├─1288 /usr/pgsql-9.6/bin/postmaster -D /var/lib/pgsql/9.6/data/
├─1626 postgres: logger process
├─1630 postgres: checkpointer process
├─1631 postgres: writer process
├─1632 postgres: wal writer process
├─1633 postgres: autovacuum launcher process
└─1634 postgres: stats collector process
May 07 11:43:57 localhost.localdomain systemd[1]: Starting PostgreSQL 9.6
database server...
May 07 11:44:11 localhost.localdomain postmaster[1288]: < 2018-05-07
11:44:11.217 +0430 > LOG: redir...ess
May 07 11:44:11 localhost.localdomain postmaster[1288]: < 2018-05-07
11:44:11.217 +0430 > HINT: Futu...g".
May 07 11:44:12 localhost.localdomain systemd[1]: Started PostgreSQL 9.6
database server.
Hint: Some lines were ellipsized, use -l to show in full.
[root@localhost ~]#
But when I start and check sonar, the output is different and the Main PID keeps changing rapidly:
[root@localhost ~]# systemctl start sonar
[root@localhost ~]# systemctl enable sonar
[root@localhost ~]# systemctl status sonar
● sonar.service - SonarQube service
Loaded: loaded (/etc/systemd/system/sonar.service; enabled; vendor preset:
disabled)
Active: active (running) since Mon 2018-05-07 13:19:57 +0430; 6s ago
Process: 121807 ExecStop=/opt/sonarqube/bin/linux-x86-64/sonar.sh stop
(code=exited, status=0/SUCCESS)
Process: 121850 ExecStart=/opt/sonarqube/bin/linux-x86-64/sonar.sh start
(code=exited, status=0/SUCCESS)
***Main PID: 121893 (wrapper)***
CGroup: /system.slice/sonar.service
├─121893 /opt/sonarqube/bin/linux-x86-64/./wrapper /opt/sonarqube/bin/linux-x86-64/../../conf...
└─121895 java -Dsonar.wrapped=true -Djava.awt.headless=true -Xms8m -
Xmx32m -Djava.library.pat...
May 07 13:19:56 localhost.localdomain systemd[1]: sonar.service
holdoff time over, scheduling restart.
May 07 13:19:56 localhost.localdomain systemd[1]: Starting SonarQube
service...
May 07 13:19:56 localhost.localdomain sonar.sh[121850]: Starting
SonarQube...
May 07 13:19:56 localhost.localdomain sonar.sh[121850]: PID:
May 07 13:19:56 localhost.localdomain sonar.sh[121850]:
"/opt/sonarqube/bin/linux-x86-64/./wrapper" "...be"
May 07 13:19:57 localhost.localdomain sonar.sh[121850]: Started
SonarQube.
May 07 13:19:57 localhost.localdomain systemd[1]: Started SonarQube
service.
Hint: Some lines were ellipsized, use -l to show in full.
[root@localhost ~]#
The previous command showed Main PID: 121893 (wrapper), while the next one shows Main PID: 123382 (wrapper):
[root@localhost ~]# systemctl status sonar
● sonar.service - SonarQube service
Loaded: loaded (/etc/systemd/system/sonar.service; enabled; vendor preset:
disabled)
Active: active (running) since Mon 2018-05-07 13:21:11 +0430; 3s ago
Process: 123296 ExecStop=/opt/sonarqube/bin/linux-x86-64/sonar.sh stop
(code=exited, status=0/SUCCESS)
Process: 123339 ExecStart=/opt/sonarqube/bin/linux-x86-64/sonar.sh start
(code=exited, status=0/SUCCESS)
***Main PID: 123382 (wrapper)***
CGroup: /system.slice/sonar.service
├─123382 /opt/sonarqube/bin/linux-x86-64/./wrapper
/opt/sonarqube/bin/linux-x86-64/../../conf...
├─123384 java -Dsonar.wrapped=true -Djava.awt.headless=true -Xms8m -
Xmx32m -Djava.library.pat...
└─123411 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-
0.b14.el7_4.x86_64/jre/bin/java -XX:+UseCo...
May 07 13:21:10 localhost.localdomain systemd[1]: sonar.service
holdoff time over, scheduling restart.
May 07 13:21:10 localhost.localdomain systemd[1]: Starting SonarQube
service...
May 07 13:21:10 localhost.localdomain sonar.sh[123339]: Starting
SonarQube...
May 07 13:21:10 localhost.localdomain sonar.sh[123339]: PID:
May 07 13:21:10 localhost.localdomain sonar.sh[123339]:
"/opt/sonarqube/bin/linux-x86-64/./wrapper" "...be"
May 07 13:21:11 localhost.localdomain sonar.sh[123339]: Started
SonarQube.
May 07 13:21:11 localhost.localdomain systemd[1]: Started SonarQube
service.
Hint: Some lines were ellipsized, use -l to show in full.
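The "holdoff time over, scheduling restart" lines suggest the wrapper process keeps dying and systemd keeps restarting the unit, which is why the Main PID changes every few seconds. A sketch of how one might find the real error, assuming the /opt/sonarqube layout shown above:
# Watch the unit's journal while it restarts
[root@localhost ~]# journalctl -u sonar -f
# SonarQube writes the actual failure reason to its own log files
[root@localhost ~]# tail -f /opt/sonarqube/logs/sonar.log /opt/sonarqube/logs/es.log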

Puppet Server not starting up on CentOS 7

I have recently installed Puppet 5 on CentOS 7 (running in VirtualBox). After installation I tried starting it, which threw the message below.
Is there anything I should change in the configuration?
[root@puppet ~]# systemctl status puppetserver -l
● puppetserver.service - puppetserver Service
Loaded: loaded (/usr/lib/systemd/system/puppetserver.service; enabled; vendor preset: disabled)
Active: activating (start) since Thu 2018-01-25 13:59:44 IST; 32s ago
Control: 10284 (bash)
CGroup: /system.slice/puppetserver.service
├─10284 bash /opt/puppetlabs/server/apps/puppetserver/cli/apps/start
├─10291 java -Xms2g -Xmx2g -XX:MaxPermSize=256m -Djava.security.egd=/dev/urandom -XX:OnOutOfMemoryError=kill -9 %p -cp /opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar:/opt/puppetlabs/server/apps/puppetserver/jruby-1_7.jar:/opt/puppetlabs/server/data/puppetserver/jars/* clojure.main -m puppetlabs.trapperkeeper.main --config /etc/puppetlabs/puppetserver/conf.d --bootstrap-config /etc/puppetlabs/puppetserver/services.d/,/opt/puppetlabs/server/apps/puppetserver/config/services.d/ --restart-file /opt/puppetlabs/server/data/puppetserver/restartcounter
└─10366 sleep 1
Jan 25 13:59:44 puppet systemd[1]: Starting puppetserver Service...
Journal Logs:
Jan 25 14:01:29 puppet puppetserver[10419]: Background process 10426 exited before start had completed
Jan 25 14:01:29 puppet systemd[1]: puppetserver.service: control process exited, code=exited status=1
Jan 25 14:01:29 puppet systemd[1]: Failed to start puppetserver Service.
-- Subject: Unit puppetserver.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit puppetserver.service has failed.
--
-- The result is failed.
It looks like the VM has insufficient memory to run the server.
Edit the file /etc/sysconfig/puppetserver (on CentOS; it is /etc/default/puppetserver on Debian-based systems) and lower the values in
JAVA_ARGS="-Xms2g -Xmx2g ...
to:
JAVA_ARGS="-Xms1g -Xmx1g ...
The VM must have at least 1 GB of RAM configured for the edited settings to work.
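As a sketch, assuming JAVA_ARGS still contains the -Xms2g -Xmx2g values visible in the process listing above, the change and restart could look like this:
# Lower the JVM heap settings, then restart the service
[root@puppet ~]# sed -i 's/-Xms2g -Xmx2g/-Xms1g -Xmx1g/' /etc/sysconfig/puppetserver
[root@puppet ~]# systemctl restart puppetserver
[root@puppet ~]# systemctl status puppetserver -l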

systemd stops the process even with an incorrect ExecStop

I'm trying to manage multiple redis instances with systemd. Below is my systemd unit file
[Unit]
Description=Redis instances
After=network.target
[Service]
Type=simple
ExecStart=/home/redis/bin/redis-server /home/redis/conf/redis-%i.conf
ExecStop=/home/redis/bin/redis-cli -p %i INFO # <= INFO should not stop redis process
User=redis
Group=redis
[Install]
WantedBy=multi-user.target
I could successfully start redis-server by
$ sudo systemctl start redis@6379
$ systemctl status redis@6379
● redis@6379.service - Redis instances
Loaded: loaded (/usr/lib/systemd/system/redis@.service; disabled; vendor preset: disabled)
Active: active (running) since Tue 2017-09-05 17:48:42 JST; 2h 52min ago
Main PID: 86962 (redis-server)
CGroup: /system.slice/system-redis.slice/redis@6379.service
└─86962 /home/redis/bin/redis-server 0.0.0.0:6379 [cluster]
As a test of stopping redis-server, I intentionally used INFO instead of SHUTDOWN at the ExecStop line.
But when I executed the command below, systemd still killed my redis process:
sudo systemctl stop redis@6379
And there is no output from:
sudo journalctl -f -u redis@6379
I wonder how this could happen?
PS:
I replaced redis-cli with a non-existent binary:
ExecStop=/tmp/nonexist
journalctl showed an error log like:
Sep 05 20:51:57 myhost systemd[96306]: Failed at step EXEC spawning /tmp/nonexist: No such file or directory
Sep 05 20:51:57 myhost systemd[1]: redis@6379.service: control process exited, code=exited status=203
Sep 05 20:51:57 myhost systemd[1]: Unit redis@6379.service entered failed state.
Sep 05 20:51:57 myhost systemd[1]: redis@6379.service failed.
But the running redis process was still killed.
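For context, this is systemd working as documented rather than a bug: the unit does not set KillMode, and with the default KillMode=control-group systemd sends SIGTERM to every process still left in the unit's cgroup once the ExecStop commands have finished (or failed). So redis-server is killed by that follow-up SIGTERM, not by the INFO command. A sketch of an ExecStop that actually asks Redis to shut down (assuming the instance has no auth configured) would be:
ExecStop=/home/redis/bin/redis-cli -p %i shutdown
Alternatively, ExecStop can be omitted entirely; redis-server handles the SIGTERM that systemd sends on stop and shuts down on its own.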

Systemd service for a jar file gets "operation timed out" error after a few minutes or stays in "activating" mode

The service unit is:
[Unit]
Description=test
After=syslog.target
After=network.target
[Service]
Type=forking
ExecStart=/bin/java -jar /home/ec2-user/test.jar
TimeoutSec=300
[Install]
WantedBy=multi-user.target
It starts fine for 1-4 minutes, but later it fails:
tail /var/log/messages:
Feb 27 18:43:44 ip-172-31-40-48 systemd: Reloading.
Feb 27 18:44:06 ip-172-31-40-48 systemd: Starting test...
Feb 27 18:44:06 ip-172-31-40-48 java: 5.1.73
Feb 27 18:44:06 ip-172-31-40-48 java: Starting the internal [HTTP/1.1] server on port 8182
Feb 27 18:49:06 ip-172-31-40-48 systemd: test.service operation timed out.Terminating.
Feb 27 18:49:06 ip-172-31-40-48 systemd: test.service: control process exited, code=exited status=143
Feb 27 18:49:06 ip-172-31-40-48 systemd: Failed to start test.
Feb 27 18:49:06 ip-172-31-40-48 systemd: Unit test.service entered failed state.
systemctl status test.service (while restarting; it stays in activating mode):
test.service - Setsnew
Loaded: loaded (/etc/systemd/system/test.service; enabled)
Active: activating (start) since Sun 2015-03-01 14:29:36 EST; 2min 30s ago
Control: 32462 (java)
CGroup: /system.slice/test.service
systemctl status test.service (after it fails):
test.service - test
Loaded: loaded (/etc/systemd/system/test.service; enabled)
Active: failed (Result: exit-code) since Fri 2015-02-27 18:49:06 EST; 18min ago
Process: 27954 ExecStart=/bin/java -jar /home/ec2-user/test.jar (code=exited, status=143)
When running the jar from the command line, it works just fine.
I tried changing the jar location because I thought it was a permissions problem.
SELinux is off.
How can I fix this issue so I can start the jar on boot? Are there any alternatives? (RHEL 7 does not include the service command.)
You made the service type forking, but this service does not fork. It just runs directly. Thus systemd waited five minutes for the program to daemonize itself, and it never did. The correct type for such a service is simple.
You also disabled SELinux, which is another problem you should resolve.
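A sketch of the corrected unit, changing only the service type (Type=simple is also what systemd assumes when ExecStart is set and Type is omitted) and keeping the rest of the unit from the question as-is:
[Unit]
Description=test
After=syslog.target
After=network.target
[Service]
Type=simple
ExecStart=/bin/java -jar /home/ec2-user/test.jar
TimeoutSec=300
[Install]
WantedBy=multi-user.target
After editing, run systemctl daemon-reload and then systemctl restart test.service to pick up the change.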
