Handling cron jobs in Docker?

How are people generally handling cron jobs with Docker? The most common pattern I've seen is a sidekick image running just crond and the code base; however, with cronie I'm not able to read any environment variables that are passed in on the docker command line.
Specifically I'll do this:
docker run -d --name cron -e VAR1=val1 -e VAR2=val2 cron_image start
Inside the image we'll have this:
[root@dae7207bf10e /]# yum info cronie
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirror.ash.fastserv.com
* epel: mirror.cs.pitt.edu
* extras: mirror.vcu.edu
* updates: mirror.us.leaseweb.net
Installed Packages
Name : cronie
Arch : x86_64
Version : 1.4.11
Release : 13.el7
Size : 211 k
Repo : installed
Summary : Cron daemon for executing programs at set times
URL : https://fedorahosted.org/cronie
License : MIT and BSD and ISC and GPLv2+
Description : Cronie contains the standard UNIX daemon crond that runs specified programs at
: scheduled times and related tools. It is a fork of the original vixie-cron and
: has security and configuration enhancements like the ability to use pam and
: SELinux.
[root@dae7207bf10e /]# cat /usr/local/bin/start
#!/bin/bash
/usr/bin/env > /var/tmp/docker_env
/usr/sbin/crond -n
And my crontabs will look like this:
SHELL=/bin/bash
5 16 * * * source /var/tmp/docker_env; /usr/local/bin/randomchallenge &> /var/log/randomchallenge.log
Originally I didn't have the source bits at all and tried to use the variables directly; however, it doesn't look like cronie presents them to called jobs (which does make sense in the vast majority of use cases). I've tried pulling in this env file a variety of ways without luck; my program can never read the variables. Even wrapping the whole thing in a shell script that pulls in the env file doesn't do the job.
How are people handling this kind of thing? Hard-coding values is not an option. I suppose I could make the start script generate the crontab on the fly, but that seems really ugly (a sketch of that approach is below, for reference).
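A minimal sketch of the on-the-fly generation, assuming cronie's support for NAME=value lines at the top of a crontab; VAR1/VAR2 and the job line are the placeholders from above:
#!/bin/bash
# Hypothetical start script: bake the docker-provided variables into the
# crontab at container start instead of sourcing a file at job time.
{
    # cronie accepts NAME=value lines at the top of a crontab
    # (simple values only; newlines and % would need escaping)
    env | grep -E '^(VAR1|VAR2)='
    echo 'SHELL=/bin/bash'
    echo '5 16 * * * /usr/local/bin/randomchallenge &> /var/log/randomchallenge.log'
} | crontab -
exec /usr/sbin/crond -n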

Sourcing the env file does not work, and I'm not sure why (originally I was chmod +x-ing the env file; I removed that for this answer, so it isn't that). I ended up finding this wonky env kludge to do it. env can take NAME=value pairs as arguments, so we cat our env file, let the shell expand its contents into arguments to env, and then run the actual job in that environment.
[root@b7886c463928 /]# cat /usr/local/bin/start
#!/bin/bash
env > /var/tmp/docker_env
/usr/sbin/crond -n
[root@b7886c463928 /]# crontab -l
*/1 * * * * env - `cat /var/tmp/docker_env` env > /tmp/cron.check
You'll need to add this bit before every job:
env - `cat /var/tmp/docker_env`
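Applied to the original job from the question, the full crontab entry would look something like this (paths as above; the docker_env file supplies PATH and the other variables):
5 16 * * * env - `cat /var/tmp/docker_env` /usr/local/bin/randomchallenge &> /var/log/randomchallenge.log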
I'm going to write a lightweight crond clone that can handle standard job formats but passes the environment along and outputs to stdout.

Cron in the Docker world seems (for whatever reason) to have received less love than other facilities in a standard Linux environment. I found it not very obvious how to do it correctly.
Here is my take on the problem and a solution for it. Have a look at docker-vixie-cron and its Docker image redmatter/cron to see if it helps your scenario. It took a bit of trial and error to arrive at the current solution, but please feel free to air your thoughts.
It is quite different from what you have done with cronie; here is how it works. In your project you add a Dockerfile with the lines below and a crontab.txt containing your cron definition.
Dockerfile
FROM redmatter/cron
ADD randomchallenge /usr/local/bin/
crontab.txt
*/1 * * * * /usr/local/bin/randomchallenge >>/var/log/randomchallenge.log 2>&1
If you want to use a user other than root (say because you have another container sharing the cron container), you can additionally define RUN_USER=another.user and then add the user using a built-in script called cron-user add, as in the version of the Dockerfile below.
Dockerfile with another.user
FROM redmatter/cron
ENV RUN_USER=another.user
RUN cron-user add -u another.user
ADD randomchallenge /usr/local/bin/
Run
In both cases you can run the container using the command below.
docker run -d --name cron \
-e PRESERVE_ENV_VARS="VAR1 VAR2" \
-e VAR1=val1 -e VAR2=val2 \
cron_image start
Here it is important to specify PRESERVE_ENV_VARS="VAR1 VAR2" so that VAR1 and VAR2 are preserved for randomchallenge to see.

Related

How to run a cron job as a non-root user and log the job's output?

Docker best practices state:
If a service can run without privileges, use USER to change to a non-root user.
In the case of cron, that doesn't seem practical as cron needs root privileges to function properly. The executable that cron runs, however, does NOT need root privileges. Therefore, I run cron itself as the root user, but call my crontab script to run the executable (in this case, a simple Python FTP download script I wrote) as a non-root user via the crontab -u <user> command.
The cron/Docker interaction and community experience still seem to be in their infancy, but there are some pretty good solutions out there. Utilizing lessons gleaned from a couple of great posts on the subject, I arrived at a Dockerfile that looks something like this:
FROM python:3.7.4-alpine
RUN adduser -S riptusk331
WORKDIR /home/riptusk331
... boilerplate not necessary to post here ...
COPY mycron /etc/cron.d/mycron
RUN chmod 644 /etc/cron.d/mycron
RUN crontab -u riptusk331 /etc/cron.d/mycron
CMD ["crond", "-f", "-l", "0"]
and the mycron file is just a simple python execution running every minute
* * * * * /home/riptusk331/venv/bin/python3 /home/riptusk331/ftp.py
This works perfectly fine, but I am unsure of how exactly logging is being handled here. I do not see anything saved in /var/log/cron. I can see the output of cron and ftp.py on my terminal, as well as in the container logs if I pull it up in Kitematic. But I have no idea what is actually going on here.
So my first question(s) are: how is logging & output being handled here (without any redirects after the cron job), and is this implementation method ok & secure?
VonC's answer to this post suggests appending > /proc/1/fd/1 2>/proc/1/fd/2 to your cron job to redirect output to Docker's stdout and stderr. This is where I both get a little confused and run into trouble.
My crontab file now looks like this
* * * * * /home/riptusk331/venv/bin/python3 /home/riptusk331/ftp.py > /proc/1/fd/1 2>/proc/1/fd/2
The output without any redirection appeared to be going to stdout/stderr already, but I am not entirely sure. I just know it was showing up on my terminal. So why would this redirect be needed?
When I add this redirect, I run into permission issues. Recall that this crontab is being run as the non-root user riptusk331. Because of this, I don't have root access and get the following error:
/bin/ash: can't create /proc/1/fd/1: Permission denied
The Alpine base images are built on a compact tool set called BusyBox, and when you run crond there you're getting the BusyBox cron, not any other implementation. Its documentation is a little sparse, but if you look at the crond source (in C), what you'll find is that there is no redirection at all when it goes to run a job (see the non-sendmail version of start_one_job); the job's stdout and stderr are crond's stdout and stderr. In Docker, since crond is the container's primary process, that in turn becomes the container's output stream.
Anything that shows up in docker logs definitionally went to the stdout or stderr of the container's main process. If this cron implementation writes your job's output directly there, there's nothing wrong or insecure about taking advantage of that.
In heavier-weight container orchestration systems, there is some way to run a container on a schedule (Kubernetes CronJobs, Nomad periodic jobs). You might find it easier and more consistent with these systems to set up a container that runs your job once and then exits, and then to set up the host's cron to run your container (necessarily, as root).
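A minimal sketch of that last pattern, with the host's crontab providing the schedule (job_image is a placeholder for an image whose default command runs ftp.py once and exits; output then lands in the host cron's mail/log rather than in docker logs):
# host crontab entry (root, or a user permitted to run docker)
* * * * * docker run --rm job_image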
You need to grant the CAP_SETGID capability to run crond as a non-root user. This can be a security risk if it is set on the whole BusyBox binary, but you can install the dcron package instead of using BusyBox's built-in crond and set CAP_SETGID on just that program. Here is what you need to add for Alpine, using riptusk331 as the running user:
USER root
# crond needs root, so install dcron and cap package and set the capabilities
# on dcron binary https://github.com/inter169/systs/blob/master/alpine/crond/README.md
RUN apk add --no-cache dcron libcap && \
chown riptusk331:riptusk331 /usr/sbin/crond && \
setcap cap_setgid=ep /usr/sbin/crond
USER riptusk331
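With crond now owned by and running as riptusk331 (and therefore as PID 1 in the container), the /proc/1/fd/1 redirect from the question should no longer hit the permission error, since PID 1's file descriptors belong to the same user:
* * * * * /home/riptusk331/venv/bin/python3 /home/riptusk331/ftp.py > /proc/1/fd/1 2>/proc/1/fd/2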

How to automate rsync in linux?

I want to run rsync (to a remote Linux system) automatically every minute, so that whatever changes are made in one system (to test.txt, as mentioned below) are reflected in the other system within the same minute interval.
For this purpose I edited the root crontab (sudo crontab -e) and added:
*/1 * * * * /home/john/rsync.sh
rsync.sh contains two commands:
sudo rsync -av /home/john1/test.txt remote@remote_ip:
sudo rsync -av --update /home/john1/test.txt remote@remote_ip:
When I run rsync.sh manually, all the changes are applied successfully.
If you added this in the root crontab, you don't need to start the rsync commands with sudo.
Things that run from crontab will probably not have the same environment variables. You can add the absolute path to rsync if you're unsure, for example /usr/bin/rsync. Also check other environment variables, for example by running set.
When you run it manually, you're already in a specific shell which is probably able to run it. But when cron runs it, it doesn't know which interpreter to use. Always start your scripts with #!/usr/bin/bash (or whatever your favorite shell is). And/or call your cron job specifying which shell to use, for example:
*/1 * * * * /bin/bash /home/john/rsync.sh
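Putting that advice together, a cron-friendly rsync.sh might look like this (a sketch; /usr/bin/rsync is an assumed location, check yours with which rsync, and sudo is dropped because the root crontab already runs the script as root):
#!/bin/bash
# Use absolute paths: cron provides only a minimal PATH.
/usr/bin/rsync -av /home/john1/test.txt remote@remote_ip:
/usr/bin/rsync -av --update /home/john1/test.txt remote@remote_ip: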
I hope this helps. ;)

Cronfile did not execute sudo -u line?

I have made the following cron job shell script:
vi RestartServices.sh
/etc/init.d/b1s stop
sleep 10
/etc/init.d/sapb1servertools stop
sleep 10
sudo -u ndbadm /usr/sap/NDB/HDB00/HDB stop
sleep 20
sudo -u ndbadm /usr/sap/NDB/HDB00/HDB start
sleep 10
/etc/init.d/sapb1servertools start
sleep 10
/etc/init.d/b1s start
When I run this file manually the job runs correctly.
When scheduled in crontab (root user)
Crontab content:
# srvmagtCron: restarts daemons that died
0,5,10,15,20,25,30,35,40,45,50,55 * * * * /bin/sh -c "[ -x /etc/srvmagt/srvmagtCron ] && /etc/srvmagt/srvmagtCron"
0 2 * * * /hanamnt/shared/NDB/HDB00/backup/scripts/VGRbackup.sh
#RESTARTS SERVICE LAYER , SAPB1ServerTools service , HDB
0 3 * * * /hanamnt/shared/NDB/HDB00/backup/scripts/RestartServices.sh
It does get started at the requested time, but I think it failed to execute the sudo lines, as the HDB service has not been restarted.
I'm trying to find out why.
Is it because sudo cannot be executed in a cron job?
(service needs to start using user ndbadm)
path:
/opt/sap/sapjvm_6//bin:/opt/fujitsu/bwai/bin:/sbin:/usr/sbin:/usr/local/sbin:/root/bin:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/X11R6/bin:/usr/games:/usr/lib64/jvm/jre/bin:/usr/lib/mit/bin:/usr/lib/mit/sbin
You have a non-standard $PATH, and crond(8) is running your crontab(5) entries with a shorter $PATH. See also environ(7), credentials(7), and execvp(3) with execve(2).
My recommendation would be to write a complete shell script and put only that in the crontab. So don't use sh -c in crontab entries, and set PATH explicitly (preferably in the shell script your crontab entry fires, or maybe in your crontab file).
You could for example have
0,5,10,15,20,25,30,35,40,45,50,55 * * * * /hanamnt/shared/srvmagt.sh
in your crontab, and have an executable /hanamnt/shared/srvmagt.sh file starting with
#!/bin/bash
export PATH=/opt/sap/sapjvm_6//bin:/opt/fujitsu/bwai/bin:/sbin:\
/usr/sbin:/usr/local/sbin:/root/bin:/usr/local/bin:\
/usr/bin:/bin:/usr/bin/X11:/usr/X11R6/bin:/usr/games:\
/usr/lib64/jvm/jre/bin:/usr/lib/mit/bin:/usr/lib/mit/sbin
# log a starting message
logger start of $0
Notice the use of logger(1); use it wisely to get appropriate log messages under /var/log.
BTW, your PATH is ridiculously long. Such a long PATH is messy (and might slow down your shells) and could be a security risk; my recommendation would be to use a much shorter one (perhaps as short as $HOME/bin:/usr/local/bin:/bin:/usr/bin) and to add appropriate symlinks or scripts in e.g. $HOME/bin/ or /usr/local/bin/ using explicit program paths.
Notice that sudo can be used in a crontab job (but that is often unwise) and then should probably be configured in /etc/sudoers; perhaps you should prefer /bin/su (see su(1)) in some shell script, as sketched below.
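For example, the two sudo lines in RestartServices.sh could become su invocations instead (a sketch; su - ndbadm -c runs the command with ndbadm's own login environment):
su - ndbadm -c "/usr/sap/NDB/HDB00/HDB stop"
sleep 20
su - ndbadm -c "/usr/sap/NDB/HDB00/HDB start"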
Read also more about setuid. Sometimes it is wiser to write a setuid wrapper program in C (using setreuid(2)), but be careful: you could open huge security holes by mistake.
Read also Advanced Linux Programming (freely downloadable, a bit old), then syscalls(2), to understand better how Linux works internally. You need a better and clearer picture of your system in your head.

Getting Crontab to work on Nitrous

I am quite new to Nitrous and programming in general. However, I wanted to see why my crontab job isn't doing anything on Nitrous.io.
I am using a virtualenv, and my understanding is that you can run these from crontab. This is my cron line:
10 6,19 * * * /home/action/susteq/bin/activate /home/action/susteq/start.py 2>&1 >> /home/action/susteq/log/start.log
Crontab should work on Nitrous.IO as long as you are actively logged into the box (or using tmux) and the box doesn't shut down from inactivity. Paid boxes will stay running indefinitely.
Looking at this command, you may want to ensure it runs as expected outside of the crontab. Try running the process first:
$ /home/action/susteq/bin/activate /home/action/susteq/start.py 2>&1 >> /home/action/susteq/log/start.log
If not, then you may actually want to try placing 2>&1 at the end of the line (further explained in this redirection tutorial). The following command may be what you are looking for:
$ /home/action/susteq/bin/activate /home/action/susteq/start.py >> /home/action/susteq/log/start.log 2>&1
If that works, try adding it to your crontab:
10 6,19 * * * /home/action/susteq/bin/activate /home/action/susteq/start.py >> /home/action/susteq/log/start.log 2>&1
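As an aside, bin/activate is normally sourced rather than executed, so if the job still fails it may be worth bypassing activation and invoking the virtualenv's interpreter directly (a sketch; the python path is assumed from the question's layout):
10 6,19 * * * /home/action/susteq/bin/python /home/action/susteq/start.py >> /home/action/susteq/log/start.log 2>&1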
Update: To run cron on Nitrous Pro you need to enable privileged mode on your container, which requires that you enable advanced container management. More details can be found here:
https://community.nitrous.io/docs/running-cron-on-nitrous

Shell script to log server checks runs manually, but not from cron

I'm using a basic shell script to log the results of top, netstat, ps and free every minute.
This is the script:
/scripts/logtop:
TERM=vt100
export TERM
time=$(date)
min=${time:14:2}
top -b -n 1 > /var/log/systemCheckLogs/$min
netstat -an >> /var/log/systemCheckLogs/$min
ps aux >> /var/log/systemCheckLogs/$min
free >> /var/log/systemCheckLogs/$min
echo "Message Content: $min" | mail -s "Ran System Check script" email#domain.com
exit 0
When I run this script directly it works fine. It creates the files and puts them in /var/log/systemCheckLogs/ and then sends me an email.
I can't, however, get it to work when trying to get cron to do it every minute.
I tried putting it in /var/spool/cron/root like so:
* * * * * /scripts/logtop > /dev/null 2>&1
and it never executes
I also tried putting it in /var/spool/cron/myservername, like so:
* * * * * /scripts/logtop > /dev/null 2>&1
it'll run every minute, but nothing gets created in systemCheckLogs.
Is there a reason it works when I run it but not when cron runs it?
Also, here's what the permissions look like:
-rwxrwxrwx 1 root root 326 Jul 21 01:53 logtop
drwxr-xr-x 2 root root 4096 Jul 21 01:51 systemCheckLogs
Normally crontabs are kept in "/var/spool/cron/crontabs/". Also, you normally update them with the crontab command, as this HUPs crond when you're done and makes sure the file ends up in the correct place.
Are you using the crontab command to create the cron entry? Use crontab <file> to import a file directly, or crontab -e to edit the current crontab with $EDITOR.
All jobs run by cron need the interpreter listed at the top, so cron knows how to run them.
I can't tell if you just omitted that line or if it is not in your script.
For example,
#!/bin/bash
echo "Test cron jon"
When running from /var/spool/cron/root, it may be failing because cron is not configured to run for root. On Linux, root cron jobs are typically run from /etc/crontab rather than from /var/spool/cron.
When running from /var/spool/cron/myservername, you probably have a permissions problem. Don't redirect the errors to /dev/null; capture them and examine them.
Something else to be aware of: cron doesn't initialize the full run environment, which can sometimes mean a script runs just fine from a fully logged-in shell but doesn't behave the same from cron.
In the case above, you don't have a "#!/bin/shell" line up top in your script. If root is configured to use something like a regular Bourne shell or csh, the syntax you use to populate your variables will not work. This would explain why the script would run but not populate your files. So if you need it to be ksh, use "#!/bin/ksh". It's generally best not to trust the environment to keep these things sane. If you need your profile, do a ". ~/.profile" up front as well. A quick and dirty way to get a relatively full environment is to run the job through su, as in:
* * * * * su - root -c "/path/to/script" > /dev/null 2>&1
Just some things I've picked up over the years. You're definitely expecting ksh based on your syntax, so you might want to make sure cron is using it.
Thanks for the tips... I used a little bit of each answer to get to the bottom of this.
I did have the interpreter at the top (it wasn't shown here), but it may have been wrong.
I'm using #!/bin/bash now and that works.
I also had to tinker with the permissions of the directory the log files are dumped into to get things working.
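For completeness, the top of the fixed script would look something like this (a sketch; bash is needed because ${time:14:2} is a bash/ksh substring expansion that a plain Bourne shell won't handle):
#!/bin/bash
# bash (or ksh) is required for the ${time:14:2} substring expansion below
TERM=vt100
export TERM
time=$(date)
min=${time:14:2}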
