ubuntu crontab celery beat - linux

I have several celery tasks that execute via beat. In development, I used a single command to set this up, like:
celery worker -A my_tasks -n XXXXX#%h -Q for_mytasks -c 1 -E -l INFO -B -s ./config/celerybeat-schedule --pidfile ./config/celerybeat.pid
On moving to production, I inserted this into a script that activated my venv, set the PYTHONPATH, removed old beat files, cd'd to the correct directory and then ran celery. This works absolutely fine. However, in production I want to separate the worker from the beat scheduler, like:
celery worker -A my_tasks -n XXXXX#%h -Q for_mytasks -c 1 -E -l INFO -f ./logs/celeryworker.log
celery beat -A my_tasks -s ./config/celerybeat-schedule --pidfile ./config/celerybeat.pid -l INFO -f ./logs/celerybeat.log
Now this all works fine when put into the relevant bash scripts. However, I need these to be run on server start-up. I encountered several problems:
1) in crontab -e, @reboot my_script will not work. I have to insert a delay to allow rabbitmq to fully start, i.e. @reboot sleep 60 && my_script. Now this seems a little 'messy' to me but I can live with it.
2) celery worker takes several seconds to finish starting before celery beat can be run properly. I tried all manner of cron directives to get beat to run after the worker had started successfully, but couldn't get beat to run. My current solution in crontab is something like this:
@reboot sleep 60 && my_script_worker
@reboot sleep 120 && my_script_beat
So basically: ubuntu boots, waits 60 seconds and runs the celery worker, then waits another 60 seconds before running celery beat. This works fine, but it seems even more 'messy' to me. In an ideal world I would like to flag when rabbitmq is ready so I can run the worker, then flag when the worker has started successfully so that I can run beat.
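For illustration, a wait-loop along these lines (a sketch assuming RabbitMQ's default port 5672 and the nc utility being available) could replace the first fixed sleep:
#!/bin/bash
# block until RabbitMQ accepts TCP connections, then start the worker
until nc -z localhost 5672; do
sleep 2
done
my_script_worker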
My question is : has anybody encountered this problem and if so do they have a more elegant way of kicking off celery worker & beat on server reboot?
EDIT: 24/09/2019
Thanks to DejanLekic & Greenev
I have spent some hours converting from cron to systemd. Yes, I agree totally that this is a far more robust solution. My celery worker & beat are now started as services by systemd on reboot.
There is one tip I have for people trying this that is not mentioned in the celery documentation. The template beat command will create a 'celery beat database' file called celerybeat-schedule in your working directory. If you restart your beat service, this file will cause spurious celery tasks to be spawned that don't seem to fit your actual celery schedule. The solution is to delete this file each time the beat service starts. I also delete the pid file, if it's there. I did this by adding two ExecStartPre lines and a -s option to the beat service:
ExecStartPre=/bin/sh -c 'rm -f ${CELERYBEAT_DB_FILE}'
ExecStartPre=/bin/sh -c 'rm -f ${CELERYBEAT_PID_FILE}'
ExecStart=/bin/sh -c '${CELERY_BIN} beat \
-A ${CELERY_APP} --pidfile=${CELERYBEAT_PID_FILE} \
-s ${CELERYBEAT_DB_FILE} \
--logfile=${CELERYBEAT_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL}'
Thanks guys.

To daemonize the Celery worker we use systemd, so the worker and the beat run as separate services and are configured to start on server reboot simply by enabling those services.
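For example, once unit files for the two services are in place (the names below are placeholders), enabling them is all that's needed:
sudo systemctl enable celery-worker.service
sudo systemctl enable celery-beat.service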

What you really want is to run the Celery beat process as a systemd or SysV service. It is described in depth in the Daemonization section of the Celery documentation. In fact, the same goes for the worker process.
Why? Unlike your solution, which involves a crontab @reboot line, systemd, for example, can check the health of the service and restart it if needed. All the services on your Linux boxes are started this way, because init systems were made for this particular purpose.
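A minimal sketch of such a unit, reusing the flags from the question (the paths and the rabbitmq-server.service dependency are assumptions to adapt):
[Unit]
Description=Celery worker (sketch)
After=network.target rabbitmq-server.service

[Service]
Type=simple
WorkingDirectory=/path/to/project
ExecStart=/path/to/venv/bin/celery worker -A my_tasks -Q for_mytasks -c 1 -E -l INFO
Restart=on-failure

[Install]
WantedBy=multi-user.target
Note the After= line: it also addresses the "sleep until RabbitMQ is up" problem from the question more cleanly than a fixed delay.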

Related

Is there a way to make crontab run a gnu screen session?

I have a discord bot running on a Raspberry Pi that I need to restart every day. I'm trying to do this through crontab, but with no luck.
It manages to kill the active screen processes but never starts a new instance, at least none that I can see with "screen -ls", and I can tell it doesn't create one I can't see, because the bot itself does not come online.
Here is the script:
#!/bin/bash
# kill every running screen session
sudo pkill screen
sleep 2
# start a new detached session named discordBot
screen -dmS discordBot
sleep 2
# type the start command (plus Enter) into the session
screen -S "discordBot" -X stuff "node discordBot/NEWNEWNEWN\n"
Here is also the crontab:
0 0 * * * /bin/bash /home/admin/discordBot/script.sh
Is it possible to have crontab run a screen session? And if so how?
Previously I tried putting the screen command straight into cron, but now I have it in a bash script instead.
If I run the script in the terminal it works perfectly; it's just cron where it fails. Also, replacing "screen" with the full path "/usr/bin/screen" does not change anything.
So the best way of doing it, without investigating the underlying problem, would be to create a systemd service and put a restart command into cron.
 
/etc/systemd/system/mydiscordbot.service:
[Unit]
Description=A very simple discord bot
[Service]
Type=simple
ExecStart=node /path/to/my/discordBot.js
[Install]
WantedBy=multi-user.target
Now you can run your bot with systemctl start mydiscordbot and view its logs with journalctl --follow -u mydiscordbot
Then you only need to put
45 2 * * * systemctl restart mydiscordbot
into root's crontab and you should be good to go.
You probably should also write the logs into a logfile, maybe /var/log/mydiscordbot.log, but how and if you do that is up to you.
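If you do want a logfile, one option (assuming systemd v240 or newer, which added the append: target) is two extra lines in the [Service] section:
StandardOutput=append:/var/log/mydiscordbot.log
StandardError=append:/var/log/mydiscordbot.log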
OLD/WRONG ANSWER:
cron runs with a minimal environment, and depending on your OS, /usr/bin is probably not in the $PATH var. screen is most likely located at /usr/bin/screen, so cron can't run it because it can't find the screen binary. Try replacing screen in your script with /usr/bin/screen.
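Alternatively, cron lets you set PATH at the top of the crontab itself, for example:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
0 0 * * * /bin/bash /home/admin/discordBot/script.sh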
But the other question here is: why does your discord bot need to be restarted every day? I run multiple bots with 100k+ users, and they don't need to be restarted at all. Maybe you should open a new question with the error and some code snippets.
I don't know what OS your RPi is running, but I had a similar issue earlier today trying to get VMs on a server to launch a terminal and run a Python script with crontab. I found a very easy way to restart it automatically on crashes and to run it in the background: don't rely on crontab or an init service. If your RPi has an X server running, or anything else that can launch things on session start, there is a simple shell solution. Make a bash script like
#!/bin/bash
# restart the program every time it exits, logging all output
while :; do
/my/script/to/keep/running/here -foo -bar -baz >> /var/log/myscriptlog 2>&1
done
and then start it from .xprofile or some similar startup hook, like
/path/to/launcher.sh &
to have it run in the background. It will restart automatically every time it closes if started from something like .xprofile, .xinitrc, or anything run at startup.
Then maybe make a cron job to reboot the Raspberry Pi (or whatever system is running the script), but this loop will restart the program whenever it closes anyway. Maybe you can put that cron job in a script, but I don't think it would launch the GUI.
One other thing you can try in order to launch a GUI program from a cron job (it didn't work for my situation on these custom Linux VMs, and I've read that doing this from a cron job is a security risk) is to specify the display:
* * * * * DISPLAY=:0 /your/gui/stuff/here
but you would need to make sure the user has the right permissions, and it's considered an unsafe hack even then.
For my issue, I had to launch a terminal that stayed open, change to the directory of a Python script, and run the script, so that the local files in the directory would be picked up by the Python script. Here is a rough example of the "launcher.sh" I called from the startup method this strange Linux distro used, lol:
#!/bin/sh
# relaunch the terminal (and the script inside it) whenever it closes
while :; do
/usr/bin/urxvt -e /bin/bash -c "cd /path/to/project && python loader.py"
done
Also check this source for process management
https://mywiki.wooledge.org/ProcessManagement

Automatically starting Celery from within Django app

I am getting a Django 1.6 setup started on a Linux (Debian Wheezy) server on Google Compute Engine. I've got Celery 3.1 running in the background to help with some processes. When I start a new instance (using a snapshot I've created), I always need to start Celery. I am looking for a way to start Celery automatically on server load. This is particularly helpful if the server decides to restart, as they seem to do now and then. To achieve this, I've edited the rc.local file:
$ sudo nano /etc/rc.local
It used to contain the following:
exit 0
[ -x /sbin/initctl ] && initctl emit --no-wait google-rc-local-has-run || true
I've edited the file such that it now reads:
cd /home/user/gce_app celery -A myapp.tasks --concurrency=1 --loglevel=info worker > output.log 2> errors.log &
exit 0
[ -x /sbin/initctl ] && initctl emit --no-wait google-rc-local-has-run || true
The directory:
/home/user/gce_app
is where my Django project resides and the directory I need to be in to start Celery. However, after restarting the instance, when I type in:
$ celery status
Error: No nodes replied within time constraint.
Opening the errors.log file, I see:
/etc/rc.local: 14: /etc/rc.local: celery: not found
Surely the cd at the start of that line should address this? Is there a way (within the Django project itself) to start the Celery instance when the project starts, to make the code more platform-independent and immune to inevitable OS updates?
I think you're missing a semicolon between your 'cd' and celery invocations. Also, I suspect rc.local may not be searching your path, so you may need to give an absolute path to celery. e.g.
cd /home/user/gce_app; /usr/bin/celery ...
Alternatively, you might look at using a startup script from the GCE metadata to avoid needing to modify rc.local.
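Putting the first suggestion together, the rc.local entry might look like this (the absolute celery path is an assumption; check yours with which celery):
cd /home/user/gce_app
/usr/local/bin/celery -A myapp.tasks --concurrency=1 --loglevel=info worker > output.log 2> errors.log &
exit 0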
Since you seem to be using upstart this might help you:
description "runs celery"
start on runlevel [2345]
stop on runlevel [!2345]
console log
env VENV='/srv/myvirtualenv'
env PROJECT='/srv/run/mydjangoproject'
exec su -s /bin/sh -c 'exec "$0" "$@"' www-data -- /usr/bin/env PATH=$VENV:$PATH $VENV/python $PROJECT/manage.py celeryd
respawn
respawn limit 10 5

Script not starting on boot with start-stop-daemon

My script (located in /etc/init.d) is creating a pid file ($PIDFILE), but there is no process running. My daemon script includes:
start-stop-daemon --start --quiet --pidfile $PIDFILE -m -b --startas $DAEMON --test > /dev/null || return 1
The script works fine when executing it manually.
You need to create startup links.
sudo update-rc.d SCRIPT_NAME defaults
then reboot. SCRIPT_NAME is the name of the script in /etc/init.d (Without the path)
I was able to get it working, but I tried so many things that I don't know exactly what fixed it (probably an error in a script or config). However, I learned a lot and wanted to share, since I can't find much of the same in the internet abyss.
It seems Ubuntu (and many other distros based on Ubuntu, including Mint) has migrated to Upstart for job and service management. Upstart includes SysVinit compatibility (using /etc/init.d daemons) and can still use update-rc.d to manage daemons (so if you are familiar with that usage, you can keep on using it). The Upstart method is to use a single .conf file in the /etc/init folder. My SCRIPT.conf file is very simple (I'm using a Python script):
start on filesystem or runlevel [2345]
stop on runlevel [016]
exec python /usr/share/python-support/SCRIPT/SCRIPT.py
This simple file completely replaces the standard script in /etc/init.d, whose case statement provides the [start|stop|restart|reload] functions and the pointer to /usr/bin/SCRIPT. You can see that it includes the runlevel control that would normally be found in the /etc/rc*.d files (thus eliminating several files).
I tried update-rc.d to create the necessary /etc/rc*.d/ files for my daemon. My daemon bash script is located in /etc/init.d and includes the start-stop-daemon command as in my original question. (That command also works fine from a terminal.)
I had the /etc/rc*.d/ files, the bash script in /etc/init.d, and the /etc/init/SCRIPT.conf file all in place during boot, and it seems Upstart looks at the .conf file first for its direction: the SysVinit command service SCRIPT [start|stop|restart|reload] returns Unknown Instance, yet you can see the process running with ps -elf | grep SCRIPT_FILE.
One interesting thing to note is the forking of your daemon when using a .conf file. The script as written above only spawns one fork of the daemon. However, total independence from the original script is possible by using expect fork or expect daemon together with respawn (see the Upstart Cookbook for reference), as sketched below. Using these will ensure that your daemon gets restarted if it is ever killed (at least via the kill command).
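For example, a daemon that forks twice could hypothetically be tracked and respawned with stanzas like these added to SCRIPT.conf:
# track a process that forks twice, and restart it if it dies
expect daemon
respawn
respawn limit 10 5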
I continued to test both my daemon and the boot process by using the sudo initctl reload-configuration command. This reloads the conf files, after which you can exercise your daemon with the sudo [start|stop|restart] SCRIPT commands. The results look like this:
$ sudo start SCRIPT
SCRIPT start/running, process xxxx
$ sudo restart SCRIPT
SCRIPT start/running, process xxxx
$ sudo stop SCRIPT
SCRIPT stop/waiting
Also, there is a nice log in /var/log/upstart/SCRIPT.log that gives you useful information about your daemon during boot. Mine still has a very annoying bug that prevents root from displaying OSD messages with notify-send from my daemon. My log file includes a GTK warning (I will open another question to solicit help).
Hope this helps others in developing their daemons.

How can I trigger a delayed system shutdown from within a shell script?

On my private network I have a backup server, which runs a bacula backup every night. To save energy I use a cron job to wake the server, but I haven't found out how to properly shut it down after the backup is done.
By the means of the bacula-director configuration I can call a script during the processing of the last backup job (i.e. the backup of the file catalog). I tried to use this script:
#!/bin/bash
# shutdown server in 10 minutes
#
# ps, 17.11.2013
bash -c "nohup /sbin/shutdown -h 10" &
exit 0
The script shuts down the server, but apparently it only returns once the shutdown is underway, and as a consequence the last backup job hangs until the shutdown. How can I make the script schedule the shutdown and return immediately?
Update: After an extensive search I came up with a (albeit pretty ugly) solution:
The script run by bacula looks like this:
#!/bin/bash
at -f /root/scripts/shutdown_now.sh now + 10 minutes
And the second script (shutdown_now.sh) looks like this:
#!/bin/bash
shutdown -h now
Actually I found no obvious way to pass the required parameters to shutdown within the syntax of the at command. Maybe someone can give me some advice here.
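For reference, at reads the command, arguments included, from stdin, so the parameters can be passed like this (the same trick one of the answers below uses):
echo "/sbin/shutdown -h now" | at now + 10 minutes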
Depending on your backup server’s OS, the implementation of shutdown might behave differently. I have tested the following two solutions on Ubuntu 12.04 and they both worked for me:
As the root user I have created a shell script with the following content and called it in a bash shell:
shutdown -h 10 &
exit 0
The exit code of the script in the shell was correct (tested with echo $?). The shutdown was still in progress (tested with shutdown -c).
This bash function call in a second shell script worked equally well:
my_shutdown() {
shutdown -h 10
}
my_shutdown &
exit 0
No need to create a second BASH script to run the shutdown command. Just replace the following line in your backup script:
bash -c "nohup /sbin/shutdown -h 10" &
with this:
echo "/sbin/poweroff" | /usr/bin/at now + 10 min >/dev/null 2>&1
Feel free to adjust the time interval to suit your preference.
If you can become root (either log in as root, or sudo -i), this works (tested on Ubuntu 14.04):
# shutdown -h 20:00 &    # halts the machine at 8pm
No shell script needed. I can then log out, log back in, and the process is still there. Interestingly, if I tried this with sudo on the command line, then when I logged out, the process did go away!
BTW, just to note that I also use this command to do occasional reboots after everyone has gone home:
# shutdown -r 20:00 &    # reboots the machine at 8pm

How to make sure an application keeps running on Linux

I'm trying to ensure a script remains running on a development server. It collates stats and provides a web service so it's supposed to persist, yet a few times a day, it dies off for unknown reasons. When we notice we just launch it again, but it's a pain in the rear and some users don't have permission (or the knowhow) to launch it up.
The programmer in me wants to spend a few hours getting to the bottom of the problem but the busy person in me thinks there must be an easy way to detect if an app is not running, and launch it again.
I know I could cron-script ps through grep:
ps -A | grep appname
But again, that's another hour of my life wasted on doing something that must already exist... Is there not a pre-made app that I can pass an executable (optionally with arguments) and that will keep a process running indefinitely?
In case it makes any difference, it's Ubuntu.
I have used a simple script with cron to make sure that the program is running. If it is not, then it will start it up. This may not be the perfect solution you are looking for, but it is simple and works rather well.
#!/bin/bash
#make-run.sh
#make sure a process is always running.
export DISPLAY=:0 #needed if you are running a simple gui app.
process=YourProcessName
makerun="/usr/bin/program"
if ps ax | grep -v grep | grep $process > /dev/null
then
exit
else
$makerun &
fi
exit
Then add a cron job every minute, or every 5 minutes.
Monit is perfect for this :)
You can write simple config files which tell monit to watch e.g. a TCP port, a PID file etc
monit will run a command you specify when the process it is monitoring is unavailable/using too much memory/is pegging the CPU for too long/etc. It will also pop out an email alert telling you what happened and whether it could do anything about it.
We use it to keep a load of our websites running while giving us early warning when something's going wrong.
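A minimal config sketch, to give a flavour (the process name, pidfile, port, and thresholds below are all made-up examples to adapt):
check process myapp with pidfile /var/run/myapp.pid
start program = "/etc/init.d/myapp start"
stop program = "/etc/init.d/myapp stop"
if failed port 8080 protocol http then restart
if totalmem > 200 MB for 5 cycles then restart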
-- Your faithful employee, Monit
Notice: Upstart is in maintenance mode and was abandoned by Ubuntu, which now uses systemd. One should check the systemd manual for details on how to write a service definition.
Since you're using Ubuntu, you may be interested in Upstart, which has replaced the traditional sysV init. One key feature is that it can restart a service if it dies unexpectedly. Fedora has moved to Upstart, and Debian has it in experimental, so it may be worth looking into.
This may be overkill for this situation though, as a cron script will take 2 minutes to implement.
#!/bin/bash
if [[ ! `pidof -s yourapp` ]]; then
invoke-rc.d yourapp start
fi
If you are using a systemd-based distro such as Fedora and recent Ubuntu releases, you can use systemd's "Restart" capability for services. It can be setup as a system service or as a user service if it needs to be managed by, and run as, a particular user, which is more likely the case in OP's particular situation.
The Restart option takes one of no, on-success, on-failure, on-abnormal, on-watchdog, on-abort, or always.
To run it as a user, simply place a file like the following into ~/.config/systemd/user/something.service:
[Unit]
Description=Something
[Service]
ExecStart=/path/to/something
Restart=on-failure
[Install]
WantedBy=graphical.target
then:
systemctl --user daemon-reload
systemctl --user [status|start|stop|restart] something
No root privilege / modification of system files needed, no cron jobs needed, nothing to install, flexible as hell (see all the related service options in the documentation).
See also https://wiki.archlinux.org/index.php/Systemd/User for more information about using the per-user systemd instance.
I have used from cron: killall -0 programname || /etc/init.d/programname start. kill -0 will error if the process doesn't exist; if it does exist, it delivers a null signal to the process (which the kernel will ignore and not bother passing on).
This idiom is simple to remember (IMHO). Generally I use this while I'm still trying to discover why the service itself is failing. IMHO a program shouldn't just disappear unexpectedly :)
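As a crontab entry, the idiom is a single line:
* * * * * killall -0 programname || /etc/init.d/programname start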
Put your app in a loop, so that when it exits, it runs again: while(true){ run my app.. }
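In shell, a minimal version of that loop (the path is a placeholder) looks like:
#!/bin/bash
while true; do
/path/to/myapp     # runs again as soon as it exits
sleep 1            # avoid a busy respawn loop if it crashes instantly
done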
I couldn't get Chris Wendt's solution to work for some reason, and it was hard to debug. This one is pretty much the same but easier to debug, and it excludes bash from the pattern matching. To debug, just run: bash /root/makerun-mysql.sh. In the following example with mysql-server, just replace the values of the process and makerun variables for your process.
Create a BASH-script like this (nano /root/makerun-mysql.sh):
#!/bin/bash
process="mysql"
makerun="/etc/init.d/mysql restart"
if ps ax | grep -v grep | grep -v bash | grep --quiet $process
then
printf "Process '%s' is running.\n" "$process"
exit
else
printf "Starting process '%s' with command '%s'.\n" "$process" "$makerun"
$makerun
fi
exit
Make sure it's executable by setting proper file permissions (e.g. chmod 700 /root/makerun-mysql.sh).
Then add this to your crontab (crontab -e):
# Keep processes running every 5 minutes
*/5 * * * * bash /root/makerun-mysql.sh
The supervise tool from daemontools would be my preference - but then everything Dan J Bernstein writes is my preference :)
http://cr.yp.to/daemontools/supervise.html
You have to create a particular directory structure for your application startup script, but it's very simple to use.
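For a rough idea of the shape of it (the paths and service name are placeholders; /service is the directory svscan conventionally watches):
mkdir -p /service/myapp
cat > /service/myapp/run <<'EOF'
#!/bin/sh
exec /usr/local/bin/myapp 2>&1
EOF
chmod +x /service/myapp/run
# svscan picks the directory up automatically, or run supervise by hand:
supervise /service/myapp &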
First of all, how do you start this app? Does it fork itself to the background? Is it started with nohup ... & etc.? If it's the latter, check why it died in nohup.out; if it's the former, build in logging.
As for your main question: you could cron it, or run another process in the background (not the best choice) and use pidof in a bash script; easy enough:
if ! pidof -s app > /dev/null; then
nohup app &
fi
You could make it a service launched from inittab (although some Linuxes have moved on to something newer in /etc/event.d). These built-in systems make sure your service keeps running without you writing your own scripts or installing something new.
It's a job for a DMD (daemon monitoring daemon). There are a few around, but I usually just write a script that checks whether the daemon is running, runs it if not, and put it in cron to run every minute.
Check out 'nanny' referenced in Chapter 9 (p197 or thereabouts) of "Unix Hater's Handbook" (one of several sources for the book in PDF).
A nice, simple way to do this is as follows:
Write your server to die if it can't listen on the port it expects
Set a cronjob to try to launch your server every minute
If it isn't running it'll start, and if it is running it won't. In any case, your server will always be up.
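The cron side of that is a single entry (the server path is a placeholder):
* * * * * /usr/local/bin/myserver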
I think a better solution is to test the functionality, too. For example, if you have to monitor Apache, it is not enough to check that "apache" processes exist on the system.
If you want to test whether Apache is actually OK, try to download a simple web page and check that your unique code is in the output.
If not, kill Apache with -9 and then restart it. And send a mail to root (a forwarded mail address for the admins of the company/server/project), as in the sketch below.
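A sketch of that idea as a cron-able script (the health URL, marker token, and restart command are all assumptions to adapt):
#!/bin/bash
# functional check: fetch a page and look for a known marker string
if curl -fsS http://localhost/health 2>/dev/null | grep -q "MY_UNIQUE_TOKEN"; then
exit 0    # apache is serving correctly, nothing to do
fi
# the check failed: kill hard, restart, and notify root as described above
pkill -9 apache2
service apache2 restart
echo "apache was restarted on $(hostname)" | mail -s "apache watchdog" root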
It's even simpler:
#!/bin/bash
export DISPLAY=:0
process=processname
makerun="/usr/bin/processname"
if ! pgrep $process > /dev/null
then
$makerun &
fi
You have to remember, though, to make sure that processname is unique.
One can install a monitoring cron job that runs every minute, like this:
crontab -l > crontab; echo -e '* * * * * export DISPLAY=":0.0" && for app in "eiskaltdcpp-qt" "transmission-gtk" "nicotine"; do ps aux | grep -v grep | grep "$app"; done || "$app" &' >> crontab; crontab crontab
The disadvantage is that the app names you enter have to appear in the ps aux | grep "appname" output and, at the same time, be launchable under that same name: "appname" &
Also, you can use the pm2 process manager. pm2 is distributed via npm rather than apt, so if you have Node.js installed you can get it with:
sudo npm install pm2 -g
Then you can run the service.
For a generic Linux service or script:
sudo pm2 start [service_name]
For a Node app:
pm2 start index.js
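To have pm2 itself (and your process list) come back after a reboot, pm2 ships two commands for exactly this:
pm2 startup   # prints a command that registers pm2 with your init system
pm2 save      # saves the current process list for resurrection at boot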
