I've installed Virtuoso and I want to create a service so it starts with the OS. To start Virtuoso normally I go to the directory /usr/local/var/lib/virtuoso/db and run virtuoso-t -f. This command has to be executed inside that directory, otherwise it does not read the config files. So I created the following script at /etc/init.d/virtuoso.
#!/bin/bash
# Virtuoso Startup script for the Openlink Virtuoso
# Source function library.
. /etc/rc.d/init.d/functions
prog="virtuoso"
lockfile=/var/lock/subsys/virtuoso.lock
RETVAL=0
start() {
echo -n $"Starting $prog: "
cd /usr/local/var/lib/virtuoso/db/ && virtuoso-t -f
RETVAL=$?
echo
[ $RETVAL = 0 ] && touch ${lockfile}
return $RETVAL
}
stop() {
echo -n $"Stopping $prog: "
killproc $prog -TERM
RETVAL=$?
echo
[ $RETVAL = 0 ] && rm -f ${lockfile} ${pidfile}
}
case "$1" in
start)
start
;;
stop)
stop
;;
restart)
stop
start
;;
status)
# status -p $pidfile && exit 0 || exit $?
status $prog
RETVAL=$?
;;
*)
echo $"Usage: $prog {start|stop|restart|status}"
RETVAL=2
esac
exit $RETVAL
But the service complains that it cannot find the virtuoso-t command. If I create a script and start it manually, without using service virtuoso start, it works. But when the same call is made from inside /etc/init.d/virtuoso it does not work.
Any clue? Thanks
# service virtuoso status
● virtuoso.service - SYSV: The Openlink Virtuoso is a high-performance object-relational SQL database.
Loaded: loaded (/etc/rc.d/init.d/virtuoso; bad; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2017-03-09 11:17:25 EST; 28s ago
Docs: man:systemd-sysv-generator(8)
Process: 10905 ExecStart=/etc/rc.d/init.d/virtuoso start (code=exited, status=127)
Mar 09 11:17:25 irodsprodvm.ebioscience.amc.nl systemd[1]: Starting SYSV: The Openlink Virtuoso is a high-performance object-relational SQL database....
Mar 09 11:17:25 irodsprodvm.ebioscience.amc.nl virtuoso[10905]: Starting virtuoso: /etc/rc.d/init.d/virtuoso: line 19: virtuoso-t: command not found
Mar 09 11:17:25 irodsprodvm.ebioscience.amc.nl systemd[1]: virtuoso.service: control process exited, code=exited status=127
Mar 09 11:17:25 irodsprodvm.ebioscience.amc.nl systemd[1]: Failed to start SYSV: The Openlink Virtuoso is a high-performance object-relational SQL database..
Mar 09 11:17:25 irodsprodvm.ebioscience.amc.nl systemd[1]: Unit virtuoso.service entered failed state.
Mar 09 11:17:25 irodsprodvm.ebioscience.amc.nl systemd[1]: virtuoso.service failed.
Warning: virtuoso.service changed on disk. Run 'systemctl daemon-reload' to reload units.
I fixed it. I added this line at the beginning of the script:
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin
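An equivalent fix, sketched below on the assumption that the binary landed in /usr/local/bin (the usual location for a source build under the /usr/local prefix), is to call the daemon by its absolute path in start(), so the script no longer depends on PATH at all:

start() {
    echo -n $"Starting $prog: "
    # absolute path avoids relying on the minimal PATH an init script gets
    cd /usr/local/var/lib/virtuoso/db/ && /usr/local/bin/virtuoso-t -f
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && touch ${lockfile}
    return $RETVAL
}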
I've been trying to set up Celery as a daemon for three days now and I am still struggling with it. Every time I start the service and check the status, it shows a failure.
The documentation had Type=forking, and when using it the service fails to start. The status shows:
● celery.service - Celery Service
Loaded: loaded (/etc/systemd/system/celery.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2020-07-23 05:36:18 UTC; 7s ago
Process: 4256 ExecStart=/bin/sh -c ${CELERY_BIN} multi start ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELER>
Main PID: 4272 (code=exited, status=1/FAILURE)
... systemd[1]: Starting Celery Service...
... sh[4257]: celery multi v4.4.6 (cliffs)
... sh[4257]: > Starting nodes...
... sh[4257]: > w1#dexter-server: OK
... systemd[1]: Started Celery Service.
... systemd[1]: celery.service: Main process exited, code=exited, status=1/FAILURE
... systemd[1]: celery.service: Failed with result 'exit-code'.
Nothing in the logs. And when I remove Type, I get this,
● celery.service - Celery Service
Loaded: loaded (/etc/systemd/system/celery.service; disabled; vendor preset: enabled)
Active: inactive (dead)
... systemd[1]: Started Celery Service.
... sh[2683]: celery multi v4.4.6 (cliffs)
... sh[2683]: > Starting nodes...
... sh[2683]: > w1#dexter-server: OK
... sh[2690]: celery multi v4.4.6 (cliffs)
... sh[2690]: > w1#dexter-server: DOWN
... systemd[1]: celery.service: Succeeded.
Nothing in the logs here either. Here are my current files.
/etc/systemd/system/celery.service
[Unit]
Description=Celery Service
After=network.target
[Service]
User=dexter
Group=dexter
RuntimeDirectory=celery
RuntimeDirectoryMode=0755
EnvironmentFile=/etc/conf.d/celery
WorkingDirectory=/var/www/example.com/myapp
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} \
--pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
[Install]
WantedBy=multi-user.target
/etc/tmpfiles.d/celery.conf
d /var/run/celery 0755 dexter dexter -
d /var/log/celery 0755 dexter dexter -
/etc/conf.d/celery
# Name of nodes to start
# here we have a single node
CELERYD_NODES="w1"
# or we could have three nodes:
#CELERYD_NODES="w1 w2 w3"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/var/www/example.com/venv/bin/celery"
#CELERY_BIN="/virtualenvs/def/bin/celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="myapp.celery:app"
# or fully qualified:
#CELERY_APP="proj.tasks:app"
# How to call manage.py
CELERYD_MULTI="multi"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# - %n will be replaced with the first part of the nodename.
# - %I will be replaced with the current child process index
# and is important when using the prefork pool to avoid race conditions.
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="INFO"
# you may wish to add these options for Celery Beat
CELERYBEAT_PID_FILE="/var/run/celery/beat.pid"
CELERYBEAT_LOG_FILE="/var/log/celery/beat.log"
CELERY_CREATE_DIRS=1
I have no idea how to configure beyond this or where to look for errors at this point. Could someone help me identify the issue and solve this? I have a very basic understanding of concepts like forking, so please go easy on me.
I solved this by replacing the block in celery.service with the following.
ExecStart='${CELERY_BIN} ${CELERYD_NODES} \
-A ${CELERY_APP} worker --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop='${CELERY_BIN} stopwait ${CELERYD_NODES} \
--pidfile=${CELERYD_PID_FILE}'
ExecReload='${CELERY_BIN} restart ${CELERYD_NODES} \
-A ${CELERY_APP} worker --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
I got help from #celery IRC.
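For context: celery multi forks the workers into the background, and the command that systemd launched exits straight away. With the default Type=simple that launcher is treated as the main process, so the unit goes inactive (or failed) as soon as it exits, which matches the statuses above. A common workaround, sketched here under the same EnvironmentFile assumptions as the unit above, is to drop multi and run a single worker in the foreground so systemd supervises it directly:

# Type=simple (the default): the worker stays in the foreground and systemd
# supervises it directly, so no pidfile handling is needed
ExecStart=/bin/sh -c '${CELERY_BIN} -A ${CELERY_APP} worker \
    --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'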
I have a Docker container. When the Linux server restarts, the Docker container is stopped, so a script was added to systemd to restart the container. This script also stops the chef-client. But the script executes only half of its commands; I don't know why it stops after the chef-client stop and does not proceed after that.
Restart script:
[root@server01 user1]# more /apps/service-scripts/docker-container-restart.sh
#!/bin/bash
echo "Starting to stop the chef-client automatic running.."
service chef-client stop
echo "Completed the stopping the chef-client automatic running"
echo "Restart check... the Applicaiton Container $(date)"
IsAppRunning=$(docker inspect -f '{{.State.Running}}' app-prod)
echo "IsAppRunning state $IsAppRunning"
if [ "$IsAppRunning" != "true" ]; then
IsAppRunning=$(docker inspect -f '{{.State.Running}}' app-prod)
echo "Restarting.... the Applicaiton Container $(date)"
docker restart app-prod
IsAppRunning=$(docker inspect -f '{{.State.Running}}' app-prod)
echo "Restart completed($IsAppRunning) the Applicaiton Container $(date)"
else
echo "Restart is not required, app is already up and running"
fi
Systemctl log:
[root@server01 user1] systemctl status app-docker-container.service
● app-docker-container.service - Application start
Loaded: loaded (/etc/systemd/system/app-docker-container.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2017-09-24 16:00:40 CDT; 18h ago
Main PID: 1187 (docker-container-restart)
Memory: 16.4M
CGroup: /system.slice/app-docker-container.service
├─1187 /bin/bash /apps/service-scripts/docker-container-restart.sh
└─1220 docker inspect -f {{.State.Running}} app-prod
Sep 24 16:00:40 server01 systemd[1]: Started Application start.
Sep 24 16:00:40 server01 systemd[1]: Starting Application start...
Sep 24 16:00:40 server01 docker-container-restart.sh[1187]: Starting to stop the chef-client automatic running..
Sep 24 16:00:41 server01 docker-container-restart.sh[1187]: Redirecting to /bin/systemctl stop chef-client.service
Sep 24 16:00:41 server01 docker-container-restart.sh[1187]: Completed the stopping the chef-client automatic running
Sep 24 16:00:41 server01 docker-container-restart.sh[1187]: Restart check... the Application Container Sun Sep 24 16:00:41 CDT 2017
Systemd unit:
[root@server01 user1]# more /etc/systemd/system/app-docker-container.service
[Unit]
Description=Application start
After=docker.service chef-client.service
[Service]
Type=simple
ExecStart=/apps/service-scripts/docker-container-restart.sh
[Install]
WantedBy=multi-user.target
I'm running tinkerOS, which is a distribution of Debian. But for some reason the cwhservice that works on Raspbian (also Debian based) doesn't run on tinkerOS.
The script is placed in /etc/init.d/ and is called cwhservice; systemctl daemon-reload has been done, and the code is as follows:
#!/bin/sh
### BEGIN INIT INFO
# Provides: CWH
# Required-Start: $all
# Required-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Starts the CWH
# Description: Starts the CWH
### END INIT INFO
case "$1" in
start)
/opt/cwh/start.sh > /opt/cwh/log.scrout 2> /opt/cwh/log.screrr
;;
stop)
/opt/cwh/stop.sh
;;
restart)
/opt/cwh/stop.sh
/opt/cwh/start.sh
;;
*)
echo "Usage: $0 {start|stop|restart}"
esac
exit 0
When I run sudo service cwhservice start I get the following error:
Job for cwhservice.service failed because the control process exited with error code.
See "systemctl status cwhservice.service" and "journalctl -xe" for details.
systemctl status cwhservice.service gives:
● cwhservice.service - LSB: Starts the CWH
Loaded: loaded (/etc/init.d/cwhservice; generated; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2017-08-24 13:36:22 UTC; 1min 21s ago
Docs: man:systemd-sysv-generator(8)
Process: 15431 ExecStart=/etc/init.d/cwhservice start (code=exited, status=203/EXEC)
Aug 24 13:36:22 linaro-alip systemd[1]: Failed to start LSB: Starts the CWH.
Aug 24 13:36:22 linaro-alip systemd[1]: cwhservice.service: Failed with result 'exit-code'.
So after fiddling with the code and values I still didn't get it to work, so I tried to remodel the reboot script, which currently ends up as:
#! /bin/sh
### BEGIN INIT INFO
# Provides: kaas2
# Required-Start:
# Required-Stop:
# Default-Start:
# Default-Stop: 6
# Short-Description: Execute the reboot command.
# Description:
### END INIT INFO
case "$1" in
start)
# No-op
/opt/cwh/start.sh
echo "foo" >&2
;;
restart|reload|force-reload)
echo "Error: argument '$1' not supported" >&2
exit 3
;;
stop)
;;
status)
exit 0
;;
*)
echo "Usage: $0 start|stop" >&2
exit 3
;;
esac
sudo service cwhservice start doesn't return an error but just does nothing. But for some strange reason sudo service cwhservice restart actually starts the start.sh script, yet doesn't print the echo... So I'm totally lost at this point and have wasted two days...
Any ideas on how to create a daemon which starts on boot and runs the start.sh script on Debian?
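One route worth trying is a native unit instead of the init.d wrapper; a minimal sketch, assuming /opt/cwh/start.sh stays in the foreground while the CWH runs, and using cwh.service as a hypothetical name:

# /etc/systemd/system/cwh.service (hypothetical unit name)
[Unit]
Description=CWH
After=network.target

[Service]
# assumes start.sh keeps running in the foreground; if it forks and exits,
# Type=forking plus a PIDFile= would be needed instead
Type=simple
ExecStart=/opt/cwh/start.sh
ExecStop=/opt/cwh/stop.sh

[Install]
WantedBy=multi-user.target

After systemctl daemon-reload and systemctl enable cwh, the script starts at boot and its output ends up in journalctl -u cwh rather than the log.scrout/log.screrr files.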
I've got a shell script as follows
ss.sh
#!/bin/bash
opFile="custom.data"
sourceFile="TestOutput"
./fc app test > $sourceFile
grep -oP '[0-9.]+(?=%)|[0-9.]+(?=[A-Z]+ of)' "$sourceFile" | tr '\n' ',' > $opFile
sed -i 's/,$//' $opFile
The requirement is that I need to run this script with the watch command, and I'd like to make this into a systemd service. I did it like so.
sc.sh
#!/bin/bash
watch -n 60 /root/ss.sh
And in my /etc/systemd/system,
log_info.service
[Unit]
Description="Test Desc"
After=network.target
[Service]
ExecStart=/root/sc.sh
Type=simple
[Install]
WantedBy=default.target
When I run systemctl start log_info.service, it runs, but not continuously the way I'd like it to.
On running systemctl status log_info.service,
info_log.service - "Test Desc"
Loaded: loaded (/etc/systemd/system/info_log.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2016-09-12 08:17:02 UTC; 2min 18s ago
Process: 35555 ExecStart=/root/sc.sh (code=exited, status=1/FAILURE)
Main PID: 35555 (code=exited, status=1/FAILURE)
Sep 12 08:17:02 mo-b428aa6b4 systemd[1]: Started "Test Desc".
Sep 12 08:17:02 mo-b428aa6b4 sc.sh[35654]: Error opening terminal: unknown.
Sep 12 08:17:02 mo-b428aa6b4 systemd[1]: info_log.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 08:17:02 mo-b428aa6b4 systemd[1]: info_log.service: Unit entered failed state.
Sep 12 08:17:02 mo-b428aa6b4 systemd[1]: info_log.service: Failed with result 'exit-code'.
Any ideas as to why it's not running right? Any help would be appreciated!
So the reason I learnt (from superuser) for this failure was exactly what was in my error console, i.e.,
Error opening terminal: unknown
watch can only be executed from a terminal because it needs access to one, while services don't have a controlling terminal.
A possible alternative to watch would be a command that doesn't require a terminal, like screen or tmux. Another alternative, which worked for me and was suggested by grawity on superuser, was this:
# foo.timer
[Unit]
Description=Do whatever
[Timer]
OnActiveSec=60
OnUnitActiveSec=60
[Install]
WantedBy=timers.target
The logic behind this was that the actual need was to run the script every 60 seconds, not to use watch as such. Hence, grawity suggested that I use a timer unit that triggers the service unit every 60 seconds instead. If the service unit has a different name from the timer unit, Unit= in the [Timer] section can be used to point at it.
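For completeness, a minimal sketch of the companion service such a timer would trigger (assuming the script from the question, and a unit that shares the timer's name so no Unit= is needed):

# foo.service -- triggered by foo.timer above
[Unit]
Description=Collect custom.data once

[Service]
Type=oneshot
ExecStart=/root/ss.sh

Enabling and starting foo.timer (not foo.service) then runs ss.sh once a minute without needing a terminal.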
Hope this helped you and +1 to grawity and Eric Renouf from superuser for the answers!
I have an application that, after it has finished and exited normally, should not be restarted. After this app has done its business I'd like to shut down the instance (EC2). I was thinking of doing this using systemd unit files with the options
Restart=on-failure
ExecStopPost=/path/to/script.sh
The script that should run on ExecStopPost:
#!/usr/bin/env bash
# sleep 1; adding sleep didn't help
# this always comes out deactivating
service_status=$(systemctl is-failed app-importer)
# could also do the other way round and check for failed
if [ "$service_status" = "inactive" ]
then
echo "Service exited normally: $service_status . Shutting down..."
#shutdown -t 5
else
echo "Service did not exit normally - $service_status"
fi
exit 0
The problem is that when the stop-post script runs I can't seem to detect whether the service ended normally or not; the status at that point is deactivating, and only afterwards do I know whether it enters a failed state.
Your problem is that systemd considers the service to be deactivating until the ExecStopPost process finishes. Putting sleeps in doesn't help since it's just going to wait longer. The idea behind ExecStopPost is to clean up anything the service might leave behind, like temp files, UNIX sockets, etc. The service is not done, and ready to start again, until the cleanup is finished. So what systemd is doing does make sense if you look at it that way.
What you should do is check $SERVICE_RESULT, $EXIT_CODE and/or $EXIT_STATUS in your script, which will tell you how the service stopped. Example:
#!/bin/sh
echo running exec post script | logger
systemctl is-failed foobar.service | logger
echo $SERVICE_RESULT, $EXIT_CODE and $EXIT_STATUS | logger
When the service is allowed to run to completion:
Sep 17 05:58:14 systemd[1]: Started foobar.
Sep 17 05:58:17 root[1663]: foobar service will now exit
Sep 17 05:58:17 root[1669]: running exec post script
Sep 17 05:58:17 root[1671]: deactivating
Sep 17 05:58:17 root[1673]: success, exited and 0
And when the service is stopped before it finishes:
Sep 17 05:57:22 systemd[1]: Started foobar.
Sep 17 05:57:24 systemd[1]: Stopping foobar...
Sep 17 05:57:24 root[1643]: running exec post script
Sep 17 05:57:24 root[1645]: deactivating
Sep 17 05:57:24 root[1647]: success, killed and TERM
Sep 17 05:57:24 systemd[1]: Stopped foobar.
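Applied to the original goal, the ExecStopPost= script can branch on those variables instead of calling systemctl is-failed. Checking EXIT_CODE as well distinguishes a clean exit (success, exited, 0) from a manual systemctl stop, which the second log above shows also reports success, but with killed/TERM. A minimal sketch, with the shutdown left commented out as in the question:

#!/usr/bin/env bash
# systemd exports SERVICE_RESULT, EXIT_CODE and EXIT_STATUS to ExecStopPost= processes
if [ "$SERVICE_RESULT" = "success" ] && [ "$EXIT_CODE" = "exited" ] && [ "$EXIT_STATUS" = "0" ]; then
    echo "Service exited normally. Shutting down..."
    # shutdown -h now
else
    echo "Service did not exit normally: $SERVICE_RESULT ($EXIT_CODE/$EXIT_STATUS)"
fi
exit 0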