I need to keep a service running under systemd, but it fails to activate. Why does this happen, given that I followed the documentation's recommendations? The relevant files are below:
Service unit file:
# Contents of /etc/systemd/system/quark.service
[Unit]
Description=Quark
After=network.target
[Service]
Type=simple
User=cto
ExecStart=/usr/local/bin/python3.9 /var/net/
Restart=always
[Install]
WantedBy=multi-user.target
Status output:
● quark.service - Quark
Loaded: loaded (/etc/systemd/system/quark.service; enabled; vendor preset: en
Active: failed (Result: exit-code) since Mon 2021-06-21 15:20:34 UTC; 8s ago
Process: 1467 ExecStart=/usr/local/bin/python3.9 /var/net/ (code=exited, statu
Main PID: 1467 (code=exited, status=1/FAILURE)
Jun 21 15:20:34 webstrucs systemd[1]: quark.service: Main process exited, code=e
Jun 21 15:20:34 webstrucs systemd[1]: quark.service: Failed with result 'exit-co
Jun 21 15:20:34 webstrucs systemd[1]: quark.service: Service RestartSec=100ms ex
Jun 21 15:20:34 webstrucs systemd[1]: quark.service: Scheduled restart job, rest
Jun 21 15:20:34 webstrucs systemd[1]: Stopped Quark.
Jun 21 15:20:34 webstrucs systemd[1]: quark.service: Start request repeated too
Jun 21 15:20:34 webstrucs systemd[1]: quark.service: Failed with result 'exit-co
Jun 21 15:20:34 webstrucs systemd[1]: Failed to start Quark.
ExecStart= should be the complete command to be executed. From the systemd man pages:
ExecStart=
Commands with their arguments that are executed when this service is started.
This stanza:
ExecStart=/usr/local/bin/python3.9 /var/net/
points at a directory rather than a script, so it should be something like:
ExecStart=/usr/local/bin/python3.9 path_to_python_script.py
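After pointing ExecStart at the actual script (the quark.py filename below is only a hypothetical placeholder), reload the unit files and restart the service, for example:
# Hypothetical corrected line in /etc/systemd/system/quark.service
ExecStart=/usr/local/bin/python3.9 /var/net/quark.py
# Pick up the edited unit file and restart
sudo systemctl daemon-reload
sudo systemctl restart quark.service
sudo systemctl status quark.service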
I am trying to set up the x11vnc service on a Raspberry Pi 4B, but it's failing. Below are the status logs:
root@raspberrypi4-64:~# systemctl status x11vnc.service
× x11vnc.service - x11vnc service
Loaded: loaded (/lib/systemd/system/x11vnc.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2022-11-23 09:44:49 UTC; 10s ago
Process: 1947 ExecStart=/usr/bin/x11vnc -forever -display :0 -auth guess -passwdfile /home/root/.vnc/passwd (code=exited, status=1/FAILURE)
Main PID: 1947 (code=exited, status=1/FAILURE)
Nov 23 09:44:49 raspberrypi4-64 systemd[1]: x11vnc.service: Failed with re...e'.
Nov 23 09:44:49 raspberrypi4-64 systemd[1]: x11vnc.service: Scheduled rest... 5.
Nov 23 09:44:49 raspberrypi4-64 systemd[1]: Stopped x11vnc service.
Nov 23 09:44:49 raspberrypi4-64 systemd[1]: x11vnc.service: Start request ...ly.
Nov 23 09:44:49 raspberrypi4-64 systemd[1]: x11vnc.service: Failed with re...e'.
Nov 23 09:44:49 raspberrypi4-64 systemd[1]: Failed to start x11vnc service.
Hint: Some lines were ellipsized, use -l to show in full.
Any suggestion on this would be really helpful. I am expecting to enable the x11vnc service on the Raspberry Pi 4B.
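Following the hint at the end of the status output, the full, non-ellipsized log lines can be shown with, for example:
systemctl status x11vnc.service -l --no-pager
journalctl -u x11vnc.service -b --no-pager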
I created /etc/systemd/system/veracrypt-automount-devices.service:
[Unit]
Description=VeraCrypt auto-mount device-hosted volumes
[Service]
Type=forking
ExecStartPre=/bin/sleep 300
ExecStart=/usr/bin/veracrypt --auto-mount=devices /media/veracrypt1
[Install]
WantedBy=multi-user.target
Then I did:
sudo systemctl daemon-reload
sudo systemctl enable veracrypt-automount-devices
sudo systemctl status veracrypt-automount-devices
● veracrypt-automount-devices.service - VeraCrypt auto-mount device-hosted volumes
Loaded: loaded (/etc/systemd/system/veracrypt-automount-devices.service; enabled; vendor preset: enabled)
Active: failed (Result: timeout) since Sat 2020-06-06 17:28:59 CEST; 8min ago
Process: 967 ExecStartPre=/bin/sleep 300 (code=killed, signal=TERM)
Jun 06 17:27:29 username-computername systemd[1]: Starting VeraCrypt auto-mount device-hosted volumes...
Jun 06 17:28:59 username-computername systemd[1]: veracrypt-automount-devices.service: Start-pre operation timed out. Terminating.
Jun 06 17:28:59 username-computername systemd[1]: veracrypt-automount-devices.service: Failed with result 'timeout'.
Jun 06 17:28:59 username-computername systemd[1]: Failed to start VeraCrypt auto-mount device-hosted volumes.
As you can see, it doesn't work.
If I grep syslog, here is what I find:
Jun 6 16:56:08 username-computername systemd[1]: veracrypt-automount-devices.service: Control process exited, code=exited status=1
Jun 6 16:56:08 username-computername veracrypt[969]: Enter password:
Jun 6 16:56:08 username-computername systemd[1]: veracrypt-automount-devices.service: Failed with result 'exit-code'.
Jun 6 17:28:59 username-computername systemd[1]: veracrypt-automount-devices.service: Start-pre operation timed out. Terminating.
Jun 6 17:28:59 username-computername systemd[1]: veracrypt-automount-devices.service: Failed with result 'timeout'.
Basically, what I want is to be prompted for the password to decrypt the device-hosted volume after I have logged in with my username and password in Linux Mint.
I found out how to do it: I put the veracrypt command in ~/.profile so the program runs on login. See https://askubuntu.com/a/270050/787567.
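For example, a minimal sketch of the line added to ~/.profile, reusing the mount command from the unit file above:
# Prompt for the VeraCrypt password at login and mount the device-hosted volume
veracrypt --auto-mount=devices /media/veracrypt1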
I am running this systemd service, but when I run screen -ls I don't see the screen session.
The status is active and looks fine.
But the screen session isn't actually running when I check.
This is the .service file
[Unit]
Description=webhookdaemon
[Service]
ExecStart=/bin/bash path/to/script
RemainAfterExit=yes
Type=forking
Restart=on-failure
RestartSec=30
[Install]
WantedBy=multi-user.target
Here is the script (path/to/script)
screen -S docker-hub-daemon -d -m npm run start --prefix /root/nodeserver/
Here is the status output
webookdaemon.service - webhookdaemon
Loaded: loaded (/etc/systemd/system/webookdaemon.service; enabled; vendor preset: enabled)
Active: active (exited) since Tue 2018-03-13 19:55:15 UTC; 57min ago
Main PID: 2144 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/webookdaemon.service
Mar 13 19:58:29 aggregate-terminal-logs-tor1-01 systemd[1]: Started webhookdaemon.
Mar 13 19:59:03 aggregate-terminal-logs-tor1-01 systemd[1]: Started webhookdaemon.
Mar 13 20:00:22 aggregate-terminal-logs-tor1-01 systemd[1]: Started webhookdaemon.
Mar 13 20:01:21 aggregate-terminal-logs-tor1-01 systemd[1]: Started webhookdaemon.
Mar 13 20:02:26 aggregate-terminal-logs-tor1-01 systemd[1]: Started webhookdaemon.
Mar 13 20:04:41 aggregate-terminal-logs-tor1-01 systemd[1]: Started webhookdaemon.
Mar 13 20:47:41 aggregate-terminal-logs-tor1-01 systemd[1]: Started webhookdaemon.
Mar 13 20:49:53 aggregate-terminal-logs-tor1-01 systemd[1]: Started webhookdaemon.
Mar 13 20:52:53 aggregate-terminal-logs-tor1-01 systemd[1]: Started webhookdaemon.
root@aggregate-terminal-logs-tor1-01:~#
You should not be using screen to manage services. Just use systemd directly.
Make sure that Type= is set to match the behavior of the service you are launching. I could not find any references to docker-hub-daemon, so I'm not sure what the appropriate value for it is. See man systemd.service for the documentation on Type=.
Instead of using screen -ls to check the status of the service, use systemctl status webookdaemon.
You may also wish to rename the service to webhookdaemon so it matches the spelling in the description.
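As an illustration, a minimal sketch of a unit that lets systemd supervise the Node process directly instead of screen (the npm path and working directory are assumptions based on the script above):
[Unit]
Description=webhookdaemon
[Service]
# Run npm in the foreground so systemd tracks the process itself
Type=simple
WorkingDirectory=/root/nodeserver
ExecStart=/usr/bin/npm run start
Restart=on-failure
RestartSec=30
[Install]
WantedBy=multi-user.target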
Here is my problem: I have CentOS with a Java process running on it. The Java process is operated by a start/stop script, which also creates a .pid file for the Java instance.
My unit file looks like this:
[Unit]
After=syslog.target network.target
Description=Someservice
[Service]
User=xxxuser
Type=forking
WorkingDirectory=/srv/apps/someservice
ExecStart=/srv/apps/someservice/server.sh start
ExecStop=/srv/apps/someservice/server.sh stop
PIDFile=/srv/apps/someservice/application.pid
TimeoutStartSec=0
[Install]
WantedBy=multi-user.target
When I call the stop function, the script terminates the Java process with SIGTERM and returns code 0:
kill $OPT_FORCEKILL `cat $PID_FILE`
<...>
return 0
After that, if I check the status of my unit, I get something like this (status=143):
● someservice.service - Someservice
Loaded: loaded (/usr/lib/systemd/system/someservice.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2017-08-30 09:17:40 EEST; 4s ago
Process: 48365 ExecStop=/srv/apps/someservice/server.sh stop (code=exited, status=0/SUCCESS)
Main PID: 46115 (code=exited, status=143)
Aug 29 17:10:02 whatever.domain.com systemd[1]: Starting Someservice...
Aug 29 17:10:02 whatever.domain.com systemd[1]: PID file /srv/apps/someservice/application.pid not readable (yet?) after start.
Aug 29 17:10:04 whatever.domain.com systemd[1]: Started Someservice.
Aug 30 09:17:39 whatever.domain.com systemd[1]: Stopping Someservice...
Aug 30 09:17:39 whatever.domain.com server.sh[48365]: Stopping someservice - PID [46115]
Aug 30 09:17:40 whatever.domain.com systemd[1]: someservice.service: main process exited, code=exited, status=143/n/a
Aug 30 09:17:40 whatever.domain.com systemd[1]: Stopped Someservice.
Aug 30 09:17:40 whatever.domain.com systemd[1]: Unit someservice.service entered failed state.
Aug 30 09:17:40 whatever.domain.com systemd[1]: someservice.service failed.
When I don't return a value from my start/stop script at all, it behaves exactly the same.
Adding something like this to the unit file:
[Service]
SuccessExitStatus=143
is not a good option for me. Why does systemd behave this way instead of showing the service in a normal state?
When I modify my start/stop script to return 10 instead of return 0, it behaves the same, but I can see that the exit code 10 is passed through.
Here is an example:
● someservice.service - Someservice
Loaded: loaded (/usr/lib/systemd/system/someservice.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2017-08-30 09:36:22 EEST; 5s ago
Process: 48460 ExecStop=/srv/apps/someservice/server.sh stop (code=exited, status=10)
Process: 48424 ExecStart=/srv/apps/someservice/server.sh start (code=exited, status=0/SUCCESS)
Main PID: 48430 (code=exited, status=143)
Aug 30 09:36:11 whatever.domain.com systemd[1]: Starting Someservice...
Aug 30 09:36:11 whatever.domain.com systemd[1]: PID file /srv/apps/someservice/application.pid not readable (yet?) after start.
Aug 30 09:36:13 whatever.domain.com systemd[1]: Started Someservice.
Aug 30 09:36:17 whatever.domain.com systemd[1]: Stopping Someservice...
Aug 30 09:36:17 whatever.domain.com server.sh[48460]: Stopping someservice - PID [48430]
Aug 30 09:36:21 whatever.domain.com systemd[1]: someservice.service: main process exited, code=exited, status=143/n/a
Aug 30 09:36:22 whatever.domain.com systemd[1]: someservice.service: control process exited, code=exited status=10
Aug 30 09:36:22 whatever.domain.com systemd[1]: Stopped Someservice.
Aug 30 09:36:22 whatever.domain.com systemd[1]: Unit someservice.service entered failed state.
Aug 30 09:36:22 whatever.domain.com systemd[1]: someservice.service failed.
From the journalctl log I can see that systemd first reports status=143 and only then my return value of 10. So I guess my mistake is somewhere in the start/stop script (because exit code 143 is reported before the function returns 0)?
Exit status 143 means the main process was terminated by SIGTERM (128 + 15), which systemd counts as a failure by default. You should be able to suppress this by listing that exit code in the unit file as a "success" exit status:
[Service]
SuccessExitStatus=143
source
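As background, 143 is just how a shell reports a process killed by SIGTERM (128 + signal number 15). A quick check in bash illustrates this:
# Start a long-running process, terminate it with SIGTERM, and inspect its status
sleep 60 &
kill -TERM $!
wait $!
echo $?   # prints 143 (128 + 15)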
I have a CoreOS beta (1185.2.0) installed.
I have the following systemd service file to start calico-node:
[Unit]
Description=Calico per-host agent
Requires=network-online.target
After=network-online.target
[Service]
Slice=machine.slice
PermissionsStartOnly=true
Environment=ETCD_CA_CERT_FILE=/etc/ssl/etcd/ca.pem
Environment=ETCD_CERT_FILE=/etc/ssl/etcd/etcd1.pem
Environment=ETCD_KEY_FILE=/etc/ssl/etcd/etcd1-key.pem
Environment=CALICO_DISABLE_FILE_LOGGING=true
Environment=HOSTNAME=10.79.218.2
Environment=IP=10.79.218.2
Environment=FELIX_FELIXHOSTNAME=10.79.218.2
Environment=CALICO_NETWORKING=true
Environment=NO_DEFAULT_POOLS=true
Environment=ETCD_ENDPOINTS=https://coreos-2.tux-in.com:2379,https://coreos-3.tux-in.com:2379
ExecStartPre=/bin/mkdir /var/run/calico
ExecStart=/usr/bin/rkt run --inherit-env --stage1-from-dir=stage1-fly.aci --volume=var-run-calico,kind=host,source=/var/run/calico --volume=modules,kind=host,source=/lib/modules,readOnly=false --mount=volume=modules,target=/lib/modules --volume=dns,kind=host,source=/etc/resolv.conf,readOnly=true --volume=etcd-tls-certs,kind=host,source=/etc/ssl/etcd,readOnly=true --mount=volume=dns,target=/etc/resolv.conf --mount=volume=etcd-tls-certs,target=/etc/ssl/etcd --mount=volume=var-run-calico,target=/var/run/calico --trust-keys-from-https quay.io/calico/node:v0.22.0
KillMode=mixed
Restart=always
TimeoutStartSec=0
[Install]
WantedBy=multi-user.target
Welp... the service fails with:
● calico-node.service - Calico per-host agent
Loaded: loaded (/etc/systemd/system/calico-node.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit-hit) since Tue 2016-10-25 04:51:15 UTC; 9min ago
Process: 1970 ExecStart=/usr/bin/rkt run --inherit-env --stage1-from-dir=stage1-fly.aci --volume=var-run-calico,kind=host,source=/var/
Process: 4307 ExecStartPre=/bin/mkdir /var/run/calico (code=exited, status=1/FAILURE)
Main PID: 1970 (code=exited, status=1/FAILURE)
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: Failed to start Calico per-host agent.
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: calico-node.service: Unit entered failed state.
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: calico-node.service: Failed with result 'exit-code'.
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: calico-node.service: Service hold-off time over, scheduling restart.
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: Stopped Calico per-host agent.
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: calico-node.service: Start request repeated too quickly.
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: Failed to start Calico per-host agent.
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: calico-node.service: Unit entered failed state.
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: calico-node.service: Failed with result 'start-limit-hit'.
I tried setting the environment variables in a terminal and running the rkt command manually, and I got this error message:
image: using image from file /usr/lib/rkt/stage1-images/stage1-fly.aci
run: open /usr/lib/rkt/stage1-images/stage1-fly.aci.asc: no such file or directory
I think that error may relate to the following configuration file at /etc/rkt/paths.d/paths.json:
{
"rktKind": "paths",
"rktVersion": "v1",
"stage1-images": "/usr/lib/rkt/stage1-images"
}
I need the paths configuration file later on for Kubernetes.
Any ideas? The .asc file really doesn't exist there.
/usr/lib is a symbolic link to /usr/lib64. With the paths.json file in place, rkt ended up looking for the container image signatures under a path where they don't exist.
It seems that by default this configuration is already set properly, so just removing the file /etc/rkt/paths.d/paths.json resolves the issue.
Full answer at https://github.com/coreos/rkt/issues/3320.
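A minimal sketch of the fix, assuming the default stage1 search path is correct on this CoreOS version:
# Remove the redundant path override so rkt falls back to its built-in defaults
sudo rm /etc/rkt/paths.d/paths.json
sudo systemctl restart calico-node.service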