I've been trying to set up Celery as a daemon for three days now and I'm still struggling with it. Every time I start the service and check its status, it reports a failure.
The documentation uses Type=forking, and with it the service fails to start. The status shows:
● celery.service - Celery Service
Loaded: loaded (/etc/systemd/system/celery.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2020-07-23 05:36:18 UTC; 7s ago
Process: 4256 ExecStart=/bin/sh -c ${CELERY_BIN} multi start ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELER>
Main PID: 4272 (code=exited, status=1/FAILURE)
... systemd[1]: Starting Celery Service...
... sh[4257]: celery multi v4.4.6 (cliffs)
... sh[4257]: > Starting nodes...
... sh[4257]: > w1@dexter-server: OK
... systemd[1]: Started Celery Service.
... systemd[1]: celery.service: Main process exited, code=exited, status=1/FAILURE
... systemd[1]: celery.service: Failed with result 'exit-code'.
Nothing shows up in the logs. And when I remove Type entirely, I get this:
● celery.service - Celery Service
Loaded: loaded (/etc/systemd/system/celery.service; disabled; vendor preset: enabled)
Active: inactive (dead)
... systemd[1]: Started Celery Service.
... sh[2683]: celery multi v4.4.6 (cliffs)
... sh[2683]: > Starting nodes...
... sh[2683]: > w1@dexter-server: OK
... sh[2690]: celery multi v4.4.6 (cliffs)
... sh[2690]: > w1@dexter-server: DOWN
... systemd[1]: celery.service: Succeeded.
Nothing in the logs either. Here are my current files.
/etc/systemd/system/celery.service
[Unit]
Description=Celery Service
After=network.target
[Service]
User=dexter
Group=dexter
RuntimeDirectory=celery
RuntimeDirectoryMode=0755
EnvironmentFile=/etc/conf.d/celery
WorkingDirectory=/var/www/example.com/myapp
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} \
--pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
[Install]
WantedBy=multi-user.target
/etc/tmpfiles.d/celery.conf
d /var/run/celery 0755 dexter dexter -
d /var/log/celery 0755 dexter dexter -
/etc/conf.d/celery
# Name of nodes to start
# here we have a single node
CELERYD_NODES="w1"
# or we could have three nodes:
#CELERYD_NODES="w1 w2 w3"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/var/www/example.com/venv/bin/celery"
#CELERY_BIN="/virtualenvs/def/bin/celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="myapp.celery:app"
# or fully qualified:
#CELERY_APP="proj.tasks:app"
# The celery subcommand used to start the workers
CELERYD_MULTI="multi"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# - %n will be replaced with the first part of the nodename.
# - %I will be replaced with the current child process index
# and is important when using the prefork pool to avoid race conditions.
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="INFO"
# you may wish to add these options for Celery Beat
CELERYBEAT_PID_FILE="/var/run/celery/beat.pid"
CELERYBEAT_LOG_FILE="/var/log/celery/beat.log"
CELERY_CREATE_DIRS=1
I have no idea how to configure this further or where to look for errors at this point. Could someone help me identify the issue and solve it? I have only a basic understanding of concepts like forking, so please go easy on me.
I solved this by running the worker in the foreground instead of daemonizing it with celery multi, replacing the Exec block in celery.service with the following.
ExecStart=/bin/sh -c '${CELERY_BIN} -A ${CELERY_APP} worker \
    --pidfile=${CELERYD_PID_FILE} \
    --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
Because the worker now stays in the foreground, systemd's default Type=simple tracks it as the main process and stops it with a signal on systemctl stop, so the multi-based ExecStop and ExecReload lines can simply be removed.
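For reference, a minimal sketch of the complete unit after this change (same paths and environment file as in the question; Type defaults to simple, so it can be omitted):
[Unit]
Description=Celery Service
After=network.target

[Service]
User=dexter
Group=dexter
RuntimeDirectory=celery
RuntimeDirectoryMode=0755
EnvironmentFile=/etc/conf.d/celery
WorkingDirectory=/var/www/example.com/myapp
ExecStart=/bin/sh -c '${CELERY_BIN} -A ${CELERY_APP} worker \
    --pidfile=${CELERYD_PID_FILE} \
    --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'

[Install]
WantedBy=multi-user.target
After editing, run systemctl daemon-reload and systemctl restart celery; systemctl status celery should then show the worker as the main PID.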
I got help from #celery IRC.
Related
I have a render application written in Python that uses the Blender library (bpy) and renders with Eevee. It executes its tasks using Celery. The application runs on Ubuntu 20.04; the nvidia-grid 510.47.03-grid-aws drivers are installed on the server, and a virtual display is launched for the rendering process.
While running Celery manually from the application's virtual environment with the command celery -A tasks worker -Q feed -l info --concurrency=3, rendering takes 15-17 seconds.
Next, I set up systemd services to start Celery automatically. But started that way, rendering exactly the same scene takes 34-36 seconds.
I tried setting maximum priority for CPU and memory, and I still do not understand why rendering started automatically takes twice as long as the same render started manually.
The systemd configuration files look like this:
/etc/conf.d/celery
CELERYD_NODES="render_server"
CELERY_BIN="/home/ubuntu/workflow/renddly-backend-instance/venv/bin/celery"
CELERY_APP="tasks"
CELERYD_MULTI="multi"
CELERYD_OPTS="-Q feed --time-limit=300 --concurrency=3"
# - %n will be replaced with the first part of the nodename.
# - %I will be replaced with the current child process index
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="INFO"
/etc/systemd/system/celery.service
[Unit]
Description=Celery service for rendering
After=network.target
After=display.service
[Service]
Type=forking
EnvironmentFile=/etc/conf.d/celery
Environment=DISPLAY=:1
CPUSchedulingPriority=99
RestrictRealtime=false
OOMScoreAdjust=-1000
LimitMEMLOCK=infinity
WorkingDirectory=/home/ubuntu/workflow/renddly-backend-instance
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} \
--pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
[Install]
WantedBy=multi-user.target
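One way to narrow down where the difference comes from is to have the unit dump the environment and limits the worker will actually inherit, and compare that with the manual shell. A diagnostic sketch (an addition for debugging, not part of the original setup; the output lands in the journal):
# Add temporarily to the [Service] section above:
ExecStartPre=/usr/bin/env
ExecStartPre=/bin/sh -c 'ulimit -a; nproc'
Differences in DISPLAY, PATH, CUDA-related variables, or resource limits between the two runs would be a natural first place to look.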
I'm new to Raspberry Pi programming, and I want to launch a Minecraft server when the Pi starts.
For that, I've already looked at systemd unit files and the screen command.
I can make them work separately, but not together, which is why I'm looking for help here.
Firstly, I'm using a Raspberry Pi 4 (4 GB) with Raspbian 10, and Forge 1.12.2 with Java 8.
I made a .sh file to launch the server more easily:
#!/bin/bash
screen -S mcserver -dm java -Xms1024M -Xmx2048M -jar /home/pi/MinecraftServer/server/forge-1.12.2-14.23.5.2854.jar nogui
When I run the file, the server starts perfectly in a detached screen session, as I want.
Secondly, I have a systemd unit file (auto-run-server.service):
[Unit]
Description=Auto run mc server
[Service]
ExecStart=/home/pi/MinecraftServer/server/minecraft.sh
[Install]
WantedBy=multi-user.target
But when I start the service, nothing happens: the status shows success, but there is nothing in the screen list (screen -list).
And when I replace the ExecStart value with
ExecStart=java -Xms1024M -Xmx2048M -jar /home/pi/MinecraftServer/server/forge-1.12.2-14.23.5.2854.jar nogui
The server starts, but the problem is that I want access to a terminal to run commands on the Minecraft server, and I haven't found a way to reach one from there.
(That's why I want to create a "screen".)
I'm fully open to your answers, even if they don't use screen, as long as I can access a server terminal.
Thanks in advance.
I'm using the following systemd unit for testing:
[Service]
ExecStart=/tmp/screentest.sh
And this screentest.sh shell script:
#!/bin/sh
screen -S mcserver -dm sh -c 'while :; do date; sleep 5; done'
If I start the service (systemctl start screentest) and then run systemctl status screentest, I see:
● screentest.service
Loaded: loaded (/etc/systemd/system/screentest.service; static; vendor preset: enabled)
Active: inactive (dead)
The problem here is that screen exits immediately when run with -dm: the session is forked into the background, so systemd believes the command has completed and cleans everything up, removing any additional processes spawned by the service.
We can tell systemd that the service spawns a child and exits by setting the service type to forking:
[Service]
Type=forking
ExecStart=/tmp/screentest.sh
With this change in place, after starting the service we see:
● screentest.service
Loaded: loaded (/etc/systemd/system/screentest.service; static; vendor preset: enabled)
Active: active (running) since Sun 2021-01-10 09:58:11 EST; 4s ago
Process: 14461 ExecStart=/tmp/screentest.sh (code=exited, status=0/SUCCESS)
Main PID: 14463 (screen)
Tasks: 3 (limit: 4915)
CGroup: /system.slice/screentest.service
├─14463 SCREEN -S mcserver -dm sh -c while :; do date; sleep 5; done
├─14464 sh -c while :; do date; sleep 5; done
└─14466 sleep 5
And screen -list shows:
root@raspberrypi:/etc/systemd/system# screen -list
There is a screen on:
14612.mcserver (01/10/2021 10:01:55 AM) (Detached)
1 Socket in /run/screen/S-root.
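Applied to the unit from the question, that would look something like this (paths taken from the question; a sketch, not tested on a Pi):
[Unit]
Description=Auto run mc server

[Service]
Type=forking
ExecStart=/home/pi/MinecraftServer/server/minecraft.sh

[Install]
WantedBy=multi-user.target
After systemctl daemon-reload and restarting the service, the session should show up in screen -list, and the server console can be attached with screen -r mcserver (run as the same user the service runs as).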
I have a Docker container. During a restart of the Linux server the container is stopped, so a script was added as a systemd service to restart the container. This script also stops the chef-client. But the script only executes half of its commands; I don't know why it stops after the chef-client stop and never proceeds.
Restart script:
[root@server01 user1]# more /apps/service-scripts/docker-container-restart.sh
#!/bin/bash
echo "Starting to stop the chef-client automatic running.."
service chef-client stop
echo "Completed the stopping the chef-client automatic running"
echo "Restart check... the Applicaiton Container $(date)"
IsAppRunning=$(docker inspect -f '{{.State.Running}}' app-prod)
echo "IsAppRunning state $IsAppRunning"
if [ "$IsAppRunning" != "true" ]; then
IsAppRunning=$(docker inspect -f '{{.State.Running}}' app-prod)
echo "Restarting.... the Applicaiton Container $(date)"
docker restart app-prod
IsAppRunning=$(docker inspect -f '{{.State.Running}}' app-prod)
echo "Restart completed($IsAppRunning) the Applicaiton Container $(date)"
else
echo "Restart is not required, app is already up and running"
fi
Systemctl log:
[root@server01 user1]# systemctl status app-docker-container.service
● app-docker-container.service - Application start
Loaded: loaded (/etc/systemd/system/app-docker-container.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2017-09-24 16:00:40 CDT; 18h ago
Main PID: 1187 (docker-container-restart)
Memory: 16.4M
CGroup: /system.slice/app-docker-container.service
├─1187 /bin/bash /apps/service-scripts/docker-container-restart.sh
└─1220 docker inspect -f {{.State.Running}} app-prod
Sep 24 16:00:40 server01 systemd[1]: Started Application start.
Sep 24 16:00:40 server01 systemd[1]: Starting Application start...
Sep 24 16:00:40 server01 docker-container-restart.sh[1187]: Starting to stop the chef-client automatic running..
Sep 24 16:00:41 server01 docker-container-restart.sh[1187]: Redirecting to /bin/systemctl stop chef-client.service
Sep 24 16:00:41 server01 docker-container-restart.sh[1187]: Completed the stopping the chef-client automatic running
Sep 24 16:00:41 server01 docker-container-restart.sh[1187]: Restart check... the Application Container Sun Sep 24 16:00:41 CDT 2017
Systemd unit:
[root@server01 user1]# more /etc/systemd/system/app-docker-container.service
[Unit]
Description=Application start
After=docker.service,chef-client.service
[Service]
Type=simple
ExecStart=/apps/service-scripts/docker-container-restart.sh
[Install]
WantedBy=multi-user.target
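One detail visible in the status output above: 18 hours after start, the docker inspect child (PID 1220) is still in the service's cgroup, which suggests the script is blocked on that call rather than having exited. A debugging sketch (a suggestion, not from the original post) that traces every command of the script into the journal:
# Temporary change in app-docker-container.service for debugging:
ExecStart=/bin/bash -x /apps/service-scripts/docker-container-restart.sh
Note also that After= takes a space-separated list; After=docker.service,chef-client.service with a comma is treated as a single (nonexistent) unit name, so the intended ordering against docker.service may not be applied at all.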
I've got a shell script as follows:
ss.sh
#!/bin/bash
opFile="custom.data"
sourceFile="TestOutput"
./fc app test > $sourceFile
grep -oP '[0-9.]+(?=%)|[0-9.]+(?=[A-Z]+ of)' "$sourceFile" | tr '\n' ',' > $opFile
sed -i 's/,$//' $opFile
The requirement is that I need to run this script with the watch command, and I'd like to make it a systemd service. I did it like so.
sc.sh
#!/bin/bash
watch -n 60 /root/ss.sh
And in my /etc/systemd/system,
log_info.service
[Unit]
Description="Test Desc"
After=network.target
[Service]
ExecStart=/root/sc.sh
Type=simple
[Install]
WantedBy=default.target
When I run systemctl start log_info.service, it runs, but not continuously the way I'd like it to.
On running systemctl status log_info.service,
● log_info.service - "Test Desc"
Loaded: loaded (/etc/systemd/system/log_info.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2016-09-12 08:17:02 UTC; 2min 18s ago
Process: 35555 ExecStart=/root/sc.sh (code=exited, status=1/FAILURE)
Main PID: 35555 (code=exited, status=1/FAILURE)
Sep 12 08:17:02 mo-b428aa6b4 systemd[1]: Started "Test Desc".
Sep 12 08:17:02 mo-b428aa6b4 sc.sh[35654]: Error opening terminal: unknown.
Sep 12 08:17:02 mo-b428aa6b4 systemd[1]: log_info.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 08:17:02 mo-b428aa6b4 systemd[1]: log_info.service: Unit entered failed state.
Sep 12 08:17:02 mo-b428aa6b4 systemd[1]: log_info.service: Failed with result 'exit-code'.
Any ideas as to why it's not running right? Any help would be appreciated!
The reason for this failure, as I learnt on Super User, was exactly what was in my error console, i.e.,
Error opening terminal: unknown
watch can only be executed from a terminal because it requires access to one, and services don't have that access.
A possible alternative to watch is a command that doesn't require a terminal, like screen or tmux. Another alternative, which worked for me and was suggested by grawity on Super User, was a timer:
# foo.timer
[Unit]
Description=Do whatever
[Timer]
OnActiveSec=60
OnUnitActiveSec=60
[Install]
WantedBy=timers.target
The logic behind this is that the real need was to run the script every 60 seconds, not to use watch. Hence, grawity suggested a timer unit that triggers the matching service unit every 60 seconds instead. If the service unit has a different name from the timer unit, Unit= can be set in the [Timer] section.
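For completeness, a minimal matching service unit might look like this (a sketch; the name foo.service is assumed so the timer finds it by default, and it runs the script from the question):
# foo.service
[Unit]
Description=Run the data-collection script once

[Service]
Type=oneshot
ExecStart=/root/ss.sh
The timer is then enabled with systemctl enable --now foo.timer; the service itself is never started directly.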
Hope this helped you, and +1 to grawity and Eric Renouf from Super User for the answers!
How can I redirect the output of a script run by a systemd ExecStart to the boot console?
I need to debug what is wrong with the script during boot, but I can't use journalctl because it's an embedded Linux with a ROM rootfs.
Now my .service file looks like:
[Unit]
Description=Init script
After=network.target
Before=getty@tty1.service
[Service]
Type=oneshot
ExecStart=/usr/lib/systemd/test_init_script
ExecStartPre=/usr/bin/echo -e \033%G
ExecReload=/bin/kill -HUP $MAINPID
WorkingDirectory=/
Environment=TERM=xterm
[Install]
WantedBy=multi-user.target
test_init_script:
#!/bin/sh -
echo "Test!"
It didn't work; after boot I get this message:
#systemctl status test_init_script.service
test_init_script.service - Init script
Loaded: loaded (/usr/lib/systemd/test_init_script.service)
Active: failed (Result: exit-code) since Thu 1970-01-01 08:26:03 CST; 19s ago
Process: 170 ExecStartPre=/usr/bin/echo -e -G (code=exited, status=203/EXEC)
Does anyone know how to redirect script output to the terminal?
OK, so I resolved this problem by building the image with a changed file:
/etc/systemd/journald.conf
where I changed the storage option to volatile:
Storage=volatile
This option means that all journald data is stored in RAM, so it is a workaround for a read-only file system.
Please refer to the journald.conf manual page for more options.
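With volatile storage the journal lives in RAM under /run/log/journal, so after boot the script's output can be read as usual, for example:
journalctl -b -u test_init_script.service
Here -b limits the output to the current boot and -u filters it to the unit in question.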