systemd stops process even with incorrect ExecStop - linux

I'm trying to manage multiple redis instances with systemd. Below is my systemd unit file:
[Unit]
Description=Redis instances
After=network.target
[Service]
Type=simple
ExecStart=/home/redis/bin/redis-server /home/redis/conf/redis-%i.conf
ExecStop=/home/redis/bin/redis-cli -p %i INFO # <= INFO should not stop redis process
User=redis
Group=redis
[Install]
WantedBy=multi-user.target
I could successfully start redis-server with:
$ sudo systemctl start redis@6379
$ systemctl status redis@6379
● redis@6379.service - Redis instances
Loaded: loaded (/usr/lib/systemd/system/redis@.service; disabled; vendor preset: disabled)
Active: active (running) since Tue 2017-09-05 17:48:42 JST; 2h 52min ago
Main PID: 86962 (redis-server)
CGroup: /system.slice/system-redis.slice/redis@6379.service
└─86962 /home/redis/bin/redis-server 0.0.0.0:6379 [cluster]
As a test of stopping redis-server, I intentionally used INFO instead of SHUTDOWN in the ExecStop line.
But when I executed the command below, systemd still killed my redis process:
sudo systemctl stop redis@6379
And there was no output from:
sudo journalctl -f -u redis@6379
I wonder how this could happen?
PS:
I replaced redis-cli with a nonexistent one:
ExecStop=/tmp/nonexist
journalctl showed error logs like:
Sep 05 20:51:57 myhost systemd[96306]: Failed at step EXEC spawning /tmp/nonexist: No such file or directory
Sep 05 20:51:57 myhost systemd[1]: redis@6379.service: control process exited, code=exited status=203
Sep 05 20:51:57 myhost systemd[1]: Unit redis@6379.service entered failed state.
Sep 05 20:51:57 myhost systemd[1]: redis@6379.service failed.
But the running redis process was still killed.
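For context, this matches systemd's documented stop behaviour: ExecStop= is only an auxiliary stop command, and after it exits (successfully or not, including a spawn failure like the one above), systemd still sends KillSignal= (SIGTERM by default) to every process remaining in the unit's control group, per KillMode=control-group. A minimal sketch of the [Service] section with those defaults written out explicitly; the TimeoutStopSec value shown is the common distro default:
[Service]
Type=simple
ExecStart=/home/redis/bin/redis-server /home/redis/conf/redis-%i.conf
# Defaults made explicit: after any ExecStop= command finishes (or fails
# to spawn), every process left in the cgroup receives KillSignal,
# escalating to SIGKILL after TimeoutStopSec.
KillMode=control-group
KillSignal=SIGTERM
TimeoutStopSec=90s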

Related

Unable to run mongoDB after installation

I'm on Fedora 34 and trying to install MongoDB on this machine.
I followed the installation instructions from the official docs here. Everything installed correctly, but now I'm unable to start the service.
I executed sudo systemctl start mongod and it showed:
root@fedora /v/l/mongodb [100]# systemctl start mongod
Job for mongod.service failed because the control process exited with error code.
See "systemctl status mongod.service" and "journalctl -xeu mongod.service" for details.
This is the source for mongod.service in /usr/lib/systemd/system/mongod.service
[Unit]
Description=MongoDB Database Server
Documentation=https://docs.mongodb.org/manual
After=network-online.target
Wants=network-online.target
[Service]
User=mongod
Group=mongod
Environment="OPTIONS=-f /etc/mongod.conf"
EnvironmentFile=-/etc/sysconfig/mongod
ExecStart=/usr/bin/mongod $OPTIONS
ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb
ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb
ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb
PermissionsStartOnly=true
PIDFile=/var/run/mongodb/mongod.pid
Type=forking
# file size
LimitFSIZE=infinity
# cpu time
LimitCPU=infinity
# virtual memory size
LimitAS=infinity
# open files
LimitNOFILE=64000
# processes/threads
LimitNPROC=64000
# locked memory
LimitMEMLOCK=infinity
# total threads (user+kernel)
TasksMax=infinity
TasksAccounting=false
# Recommended limits for mongod as specified in
# https://docs.mongodb.com/manual/reference/ulimit/#recommended-ulimit-settings
[Install]
WantedBy=multi-user.target
I think the problem begins here:
ExecStart=/usr/bin/mongod $OPTIONS
I'm a noob with systemd, daemons, and services.
This is the log from systemctl status mongod.service:
root@fedora /v/l/mongodb [1]# systemctl status mongod.service
× mongod.service - MongoDB Database Server
Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Sun 2021-08-01 11:18:36 IST; 16min ago
Docs: https://docs.mongodb.org/manual
Process: 8116 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)
Process: 8117 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)
Process: 8118 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)
Process: 8119 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=14)
CPU: 149ms
Aug 01 11:18:36 fedora systemd[1]: Starting MongoDB Database Server...
Aug 01 11:18:36 fedora mongod[8119]: about to fork child process, waiting until server is ready for connections.
Aug 01 11:18:36 fedora mongod[8122]: forked process: 8122
Aug 01 11:18:36 fedora mongod[8119]: ERROR: child process failed, exited with 14
Aug 01 11:18:36 fedora mongod[8119]: To see additional information in this output, start without the "--fork" option.
Aug 01 11:18:36 fedora systemd[1]: mongod.service: Control process exited, code=exited, status=14/n/a
Aug 01 11:18:36 fedora systemd[1]: mongod.service: Failed with result 'exit-code'.
Aug 01 11:18:36 fedora systemd[1]: Failed to start MongoDB Database Server.
Your mongod exited with status code 14, as systemctl status says.
Here you can find the answer.
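A sketch of the fix most commonly suggested for mongod exit code 14 on Fedora/RHEL-style installs, assuming the default package paths (/var/lib/mongo for data, /var/log/mongodb for logs) and that the cause is file ownership left over from running mongod as the wrong user; verify the paths against your /etc/mongod.conf first:
# restore ownership of the data and log directories to the mongod user
sudo chown -R mongod:mongod /var/lib/mongo /var/log/mongodb
# remove a stale socket file if one exists, then retry
sudo rm -f /tmp/mongodb-27017.sock
sudo systemctl start mongod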

unable to configure the Docker daemon with file /etc/docker/daemon.json: EOF

I am new to Docker and cannot understand these errors, so please let me know if any more information is needed.
`$ docker --version`
Docker version 1.12.6, build 88a4867/1.12.6
`$ docker info`
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
`$ sudo dockerd`
FATA[0000] unable to configure the Docker daemon with file /etc/docker/daemon.json: EOF
`$ sudo systemctl start docker`
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
`$ sudo systemctl status docker.service -l`
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2017-07-26 14:30:21 EDT; 8min ago
Docs: http://docs.docker.com
Process: 5835 ExecStart=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY (code=exited, status=1/FAILURE)
Main PID: 5835 (code=exited, status=1/FAILURE)
Jul 26 14:30:21: Starting Docker Application Container Engine...
Jul 26 14:30:21 dockerd-current[5835]: time="2017-07-26T14:30:21-04:00" level=fatal msg="unable to configure the Docker daemon with file /etc/docker/daemon.json: EOF\n"
Jul 26 14:30:21 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Jul 26 14:30:21 systemd[1]: Failed to start Docker Application Container Engine.
Jul 26 14:30:21 systemd[1]: Unit docker.service entered failed state.
Jul 26 14:30:21 systemd[1]: docker.service failed.
Please let me know if I need to check anything else.
The file /etc/docker/daemon.json should either not be present or, if it is present, contain a valid JSON object. A blank file causes an error. Either delete the file or, if you want to keep a file there, give it the content below:
{
}
This is an empty JSON object, which is valid.
I had the same problem, but I had edited the file /etc/docker/daemon.json and added some options to it. Every option line except the last has to end with a comma (,).
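A hypothetical daemon.json illustrating the comma rule; the two keys shown (log-driver and log-opts) are real daemon options but arbitrary choices for this example:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m"
  }
}
Note the comma after the "log-driver" line, which is not the last entry of its object, and the absence of one after the "max-size" line, which is. You can check that the file parses before restarting Docker, e.g. with python3 -m json.tool /etc/docker/daemon.json.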
As the root user, type:
$ nano /etc/docker/daemon.json
If the file is blank or contains no text, just add:
{
}
then save and exit.
Then try to restart Docker using:
$ service docker restart
In my case I just removed the file using this command:
$ sudo rm /etc/docker/daemon.json
and then restarted the service:
$ sudo systemctl restart docker.service
$ sudo systemctl status docker.service

Systemd script fails

I want to run a script at system startup in a Debian 9 box. My script works when run standalone, but fails under systemd.
My script just copies a backup file from a remote server to the local machine:
#!/bin/sh
set -e
/usr/bin/sshpass -p "PASSWORD" /usr/bin/scp -p USER@10.0.0.2:ORIGINPATH/backupserver.zip DESTINATIONPATH/backupserver/
Just for privacy I replaced password, user, and paths above.
I wrote the following systemd service unit:
[Unit]
Description=backup script
[Service]
Type=oneshot
ExecStart=PATH/backup.sh
[Install]
WantedBy=default.target
Then I set permissions for the script:
chmod 744 PATH/backup.sh
And installed the service:
chmod 664 /etc/systemd/system/backup.service
systemctl daemon-reload
systemctl enable backup.service
When I reboot the script fails:
● backup.service - backup script
Loaded: loaded (/etc/systemd/system/backup.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2017-05-13 13:39:54 -03; 47min ago
Main PID: 591 (code=exited, status=1/FAILURE)
Result of journalctl -xe:
mai 16 23:34:27 rodrigo-acer systemd[1]: backup.service: Main process exited, code=exited, status=6/NOTCONFIGURED
mai 16 23:34:27 rodrigo-acer systemd[1]: Failed to start backup script.
mai 16 23:34:27 rodrigo-acer systemd[1]: backup.service: Unit entered failed state.
mai 16 23:34:27 rodrigo-acer systemd[1]: backup.service: Failed with result 'exit-code'.
What could be wrong?
Solved, guys. There were two problems:
1 - I had to change the service unit file so the service runs only after the network is up. The [Unit] section was changed to:
[Unit]
Description = World server backup
Wants = network-online.target
After = network.target network-online.target
2 - The root user did not have the remote host in its known_hosts list, unlike the ordinary user I had used to test the script.
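A sketch of one way to fix the second problem, assuming root's SSH files live in /root/.ssh and using the server address from the script above; ssh-keyscan records the host key so the non-interactive scp no longer fails on an unknown host:
# append the remote host's key to root's known_hosts (run as root)
ssh-keyscan -H 10.0.0.2 >> /root/.ssh/known_hosts
Note that ssh-keyscan trusts whatever key the host presents at that moment, so verify the fingerprint out of band if the network is not trusted.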
For Failed with result 'exit-code', you could try this as the last line of your script:
# REQUIRED FOR SYSTEMD: 0 means clean, no error
exit 0
You may also need to add:
Type=forking
to the systemd entry, similar to: https://serverfault.com/questions/751030/systemd-ignores-return-code-while-starting-service
If your service or script does not fork, add an & at the end of the command so it runs in the background and the script exits 0 quickly. Otherwise it will look like a startup that times out and takes forever, i.e. a seemingly frozen service.

systemd cannot run service after running commands

I tried to run the service using the commands systemctl enable photogrid.service and systemctl start photogrid.service on Ubuntu 16.
The Node.js app itself runs as expected. The service is there to ensure that the application auto-starts after a crash or a server reboot.
The service apparently did not start, so I keyed in systemctl status photogrid.service to see what happened. Below is what I got from the terminal:
● photogrid.service - Photogrid
Loaded: loaded (/lib/systemd/system/photogrid.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Wed 2016-11-09 04:35:36 UTC; 7s ago
Process: 27523 ExecStart=/usr/local/bin/node /home/ubuntu/photogrid/app.js (code=exited, status=203/EXEC)
Main PID: 27523 (code=exited, status=203/EXEC)
Nov 09 04:35:36 ip-172-31-34-151 systemd[1]: photogrid.service: Main process exited, code=exited, status=203/EXEC
Nov 09 04:35:36 ip-172-31-34-151 systemd[1]: photogrid.service: Unit entered failed state.
Nov 09 04:35:36 ip-172-31-34-151 systemd[1]: photogrid.service: Failed with result 'exit-code'.
This is the unit file that I wrote for the service, under the path /lib/systemd/system/photogrid.service:
[Unit]
Description=Photogrid
[Service]
Type=simple
Restart=always
RestartSec=10
Environment=NODE_ENV=production
ExecStart=/usr/local/bin/node /home/ubuntu/photogrid/app.js
[Install]
WantedBy=multi-user.target
Basically, under ExecStart make sure you point to the correct Node.js executable. In my case it was in a different folder, not /usr/local/bin/node. To check where your node executable is (assuming you have downloaded and installed it correctly on Linux), use the command which node to get its path.
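A sketch of the resulting fix, assuming which node reports /usr/bin/node on your machine (the status=203/EXEC in the log above means systemd could not execute the configured binary at all, which fits a wrong path):
$ which node
/usr/bin/node
Then point the unit at that path and reload:
ExecStart=/usr/bin/node /home/ubuntu/photogrid/app.js
$ sudo systemctl daemon-reload && sudo systemctl restart photogrid.service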

Systemd service for jar file gets "operation timed out" error after a few minutes or stays in "activating" mode

The service unit is:
[Unit]
Description=test
After=syslog.target
After=network.target
[Service]
Type=forking
ExecStart=/bin/java -jar /home/ec2-user/test.jar
TimeoutSec=300
[Install]
WantedBy=multi-user.target
It starts fine for 1-4 minutes, but later it fails:
tail /var/log/messages:
Feb 27 18:43:44 ip-172-31-40-48 systemd: Reloading.
Feb 27 18:44:06 ip-172-31-40-48 systemd: Starting test...
Feb 27 18:44:06 ip-172-31-40-48 java: 5.1.73
Feb 27 18:44:06 ip-172-31-40-48 java: Starting the internal [HTTP/1.1] server on port 8182
Feb 27 18:49:06 ip-172-31-40-48 systemd: test.service operation timed out.Terminating.
Feb 27 18:49:06 ip-172-31-40-48 systemd: test.service: control process exited, code=exited status=143
Feb 27 18:49:06 ip-172-31-40-48 systemd: Failed to start test.
Feb 27 18:49:06 ip-172-31-40-48 systemd: Unit test.service entered failed state.
systemctl status test.service (while restarting; it stays in activating mode):
test.service - Setsnew
Loaded: loaded (/etc/systemd/system/test.service; enabled)
Active: activating (start) since Sun 2015-03-01 14:29:36 EST; 2min 30s ago
Control: 32462 (java)
CGroup: /system.slice/test.service
systemctl status test.service (after fail):
test.service - test
Loaded: loaded (/etc/systemd/system/test.service; enabled)
Active: failed (Result: exit-code) since Fri 2015-02-27 18:49:06 EST; 18min ago
Process: 27954 ExecStart=/bin/java -jar /home/ec2-user/test.jar (code=exited, status=143)
When running the jar from the command line it works just fine.
I tried changing the jar's location because I thought it was a permissions problem.
SELinux is off.
How can I fix this issue so I can start the jar on boot? Are there any alternatives? (RHEL 7 does not include the service command.)
You made the service type forking, but this service does not fork. It just runs directly. Thus systemd waited five minutes for the program to daemonize itself, and it never did. The correct type for such a service is simple.
You also disabled SELinux, which is another problem you should resolve.
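A minimal corrected [Service] section based on this answer; every line comes from the unit in the question except the changed Type=:
[Service]
Type=simple
ExecStart=/bin/java -jar /home/ec2-user/test.jar
TimeoutSec=300
With Type=simple, systemd considers the unit started as soon as the java process has been spawned, so the start job no longer waits for a daemonization that never happens.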
