Elastic Beanstalk cron runs twice - cron

I have an application on Elastic Beanstalk with cron jobs for it.
The configuration that sets up the cron job is:
container_commands:
  01_some_cron_job:
    command: "echo '*/5 * * * * wget -O - -q -t 1 http://site.com/cronscript/' | crontab"
    leader_only: true
This script calls the mail sender, and I receive two messages each time.
The code of http://site.com/cronscript/ looks like this (PHP):
require_once('ses.php');
$ses = new SimpleEmailService(EMAIL_SHORTKEY, EMAIL_LONGKEY);
$m = new SimpleEmailServiceMessage();
$m->addTo('user@domain.com');
$m->setFrom('response_service@domain.com');
$m->setSubject('test message');
$m->setMessageFromString('', 'message content');
$send_emails=($ses->sendEmail($m));
When I call http://site.com/cronscript/ from the browser's address bar, I receive one message, as expected.

I believe what's happening is that the first time you deploy your app, AutoScaling picks one instance to be the leader, and the new cron job is created on that instance. The next time you deploy your app, AutoScaling may pick another instance to be the leader, so you end up with the same cron job on two instances.
So the basic test would be to SSH to all instances and check their crontab contents with crontab -l.
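For example, a rough sketch (hostnames are placeholders; any way of reaching the instances works):
# list the root crontab on every instance in the environment
for host in ec2-instance-1 ec2-instance-2; do
    ssh "ec2-user@$host" 'sudo crontab -l'
done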
You can avoid duplicate cron jobs by removing old cron jobs on the instance regardless whether it is a leader or not.
container_commands:
  00_remove_old_cron_jobs:
    command: "crontab -r || exit 0"
  01_some_cron_job:
    command: "echo '*/5 * * * * wget -O - -q -t 1 http://example.com/cronscript/' | crontab"
    leader_only: true
As mentioned in Running Cron In Elastic Beanstalk Auto-Scaling Environment: || exit 0 is mandatory because if there is no crontab on the machine, the crontab -r command will return a status code > 0 (an error), and Elastic Beanstalk stops the deployment process if one of the container_commands fails.
Although I personally never experienced a situation where crontab was not found on an Elastic Beanstalk instance.
You can run /opt/elasticbeanstalk/bin/leader-test.sh to test whether it is a leader instance or not.
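For example, something along these lines (a sketch; the script's exact location and behavior may vary by platform version):
if /opt/elasticbeanstalk/bin/leader-test.sh; then
    echo "this instance is the leader"
fi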
Hope it helps.

I had the same problem; the only change was the user, from root to the one I log in with via eb ssh, and it works.
My code looks like this:
files:
  "/etc/cron.d/mycron":
    mode: "000644"
    owner: ec2-user
    group: ec2-user
    content: |
      30 1 * * * echo $(date) >> /tmp/cron_start.log; /usr/local/bin/daily_script.sh >> /tmp/crons.log 2>&1;
  "/usr/local/bin/daily_script.sh":
    mode: "000755"
    owner: ec2-user
    group: ec2-user
    content: |
      #!/bin/bash
      date > /tmp/date
      # Your actual script content
      /opt/python/run/venv/bin/python3 /opt/python/current/app/cronjob_files/email_data.py >> /opt/python/current/app/cron.logs 2>&1
      exit 0
...

Related

How to configure supervisor not to kill jobs started by cron in docker container

I wanted to run cron and have it start a few scripts at the times set in crontab. I've installed cron in my Docker container and wanted to add some crontab lines and start cron from a separate script. Here are fragments of my configuration:
supervisord.conf
[program:cron]
command=/stats/run-crontabs.sh
/stats/run-crontabs.sh
#!/bin/bash
crontab -l | { cat; echo "11 1 * * * /stats/generate-year-rank.sh"; } | crontab -
crontab -l | { cat; echo "12 1 * * * /stats/generate-week-rank.sh"; } | crontab -
cron -f -L 15
When it is time for cron to run a script, I can see only this error in the container logs:
2022-01-29 01:12:01,920 INFO reaped unknown pid 691343
I wonder how I can run a script via cron in a Docker container. Do I need supervisor?
EDIT: As @david-maze suggested, I ran cron as the container entrypoint and the problem is the same.
Thank you for your help
OK, I have to post an answer. I realized the scripts were working well, but they saved reports in the system root directory, not in the directories I wanted.
It was because of missing environment variables.
You can read more in this topic:
Where can I set environment variables that crontab will use?
I resolved my problem by adding this line at the start of the run-crontabs.sh script:
crontab -l | { cat; echo "$(env)"; } | crontab -

Best way to run cron in docker container [duplicate]

I am trying to run a cron job inside a Docker container that invokes a shell script.
I have been searching all over the web and Stack Overflow, but I could not really find a solution that works.
How can I do this?
You can copy your crontab into an image, in order for the container launched from said image to run the job.
Important: as noted in docker-cron issue 3: use LF, not CRLF for your cron file.
See "Run a cron job with Docker" from Julien Boulay in his Ekito/docker-cron:
Let’s create a new file called "hello-cron" to describe our job.
# must be ended with a new line "LF" (Unix) and not "CRLF" (Windows)
* * * * * echo "Hello world" >> /var/log/cron.log 2>&1
# An empty line is required at the end of this file for a valid cron file.
If you are wondering what 2>&1 is, Ayman Hourieh explains.
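In short, 2>&1 redirects stderr (file descriptor 2) to wherever stdout (file descriptor 1) currently points, so both streams land in the same log. For example (some-command is a placeholder):
some-command >> /var/log/cron.log 2>&1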
The following Dockerfile describes all the steps to build your image
FROM ubuntu:latest
MAINTAINER docker@ekito.fr
RUN apt-get update && apt-get -y install cron
# Copy hello-cron file to the cron.d directory
COPY hello-cron /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/hello-cron
# Apply cron job
RUN crontab /etc/cron.d/hello-cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Run the command on container startup
CMD cron && tail -f /var/log/cron.log
But: if cron dies, the container keeps running.
(see Gaafar's comment and How do I make apt-get install less noisy?:
apt-get -y install -qq --force-yes cron can work too)
As noted by Nathan Lloyd in the comments:
Quick note about a gotcha:
If you're adding a script file and telling cron to run it, remember to
RUN chmod 0744 /the_script
Cron fails silently if you forget.
OR, make sure your job itself redirect directly to stdout/stderr instead of a log file, as described in hugoShaka's answer:
* * * * * root echo hello > /proc/1/fd/1 2>/proc/1/fd/2
Replace the last Dockerfile line with
CMD ["cron", "-f"]
But: it doesn't work if you want to run tasks as a non-root.
See also (about cron -f, which is to say cron "foreground") "docker ubuntu cron -f is not working"
Build and run it:
sudo docker build --rm -t ekito/cron-example .
sudo docker run -t -i ekito/cron-example
Be patient, wait for 2 minutes and your command-line should display:
Hello world
Hello world
Eric adds in the comments:
Do note that tail may not display the correct file if it is created during image build.
If that is the case, you need to create or touch the file during container runtime in order for tail to pick up the correct file.
See "Output of tail -f at the end of a docker CMD is not showing".
See more in "Running Cron in Docker" (Apr. 2021) from Jason Kulatunga, as he commented below
See Jason's image AnalogJ/docker-cron based on:
Dockerfile installing cronie/crond, depending on distribution.
an entrypoint initializing /etc/environment and then calling
cron -f -l 2
The accepted answer may be dangerous in a production environment.
In docker you should only execute one process per container because if you don't, the process that forked and went background is not monitored and may stop without you knowing it.
When you use CMD cron && tail -f /var/log/cron.log, the cron process basically forks in order to run in the background, the main process exits and lets you run tail -f in the foreground. The background cron process could stop or fail, and you won't notice; your container will still run silently and your orchestration tool will not restart it.
You can avoid such a thing by redirecting directly the cron's commands output into your docker stdout and stderr which are located respectively in /proc/1/fd/1 and /proc/1/fd/2.
Using basic shell redirects, you may want to do something like this:
* * * * * root echo hello > /proc/1/fd/1 2>/proc/1/fd/2
And your CMD will be : CMD ["cron", "-f"]
But: this doesn't work if you want to run tasks as a non-root.
For those who want to use a simple and lightweight image:
FROM alpine:3.6
# copy crontabs for root user
COPY config/cronjobs /etc/crontabs/root
# start crond with log level 8 in foreground, output to stderr
CMD ["crond", "-f", "-d", "8"]
Where cronjobs is the file that contains your cronjobs, in this form:
* * * * * echo "hello stackoverflow" >> /test_file 2>&1
# remember to end this file with an empty new line
But apparently you won't see hello stackoverflow in docker logs.
What @VonC has suggested is nice, but I prefer doing all cron job configuration in one line. This avoids cross-platform issues like the cron job location, and you don't need a separate cron file.
FROM ubuntu:latest
# Install cron
RUN apt-get -y install cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Setup cron job
RUN (crontab -l ; echo "* * * * * echo 'Hello world' >> /var/log/cron.log") | crontab
# Run the command on container startup
CMD cron && tail -f /var/log/cron.log
After running your Docker container, you can verify that the cron service is working:
# To check if the job is scheduled
docker exec -ti <your-container-id> bash -c "crontab -l"
# To check if the cron service is running
docker exec -ti <your-container-id> bash -c "pgrep cron"
If you prefer to have ENTRYPOINT instead of CMD, then you can substitute the CMD above with
ENTRYPOINT cron start && tail -f /var/log/cron.log
But: if cron dies, the container keeps running.
There is another way to do it: use Tasker, a task runner that has cron (scheduler) support.
Why? Sometimes, to run a cron job, you have to mix your base image (Python, Java, Node.js, Ruby) with crond; that means another image to maintain. Tasker avoids that by decoupling crond from your container. You can just focus on the image you want to execute your commands in, and configure Tasker to use it.
Here is a docker-compose.yml file that will run some tasks for you:
version: "2"
services:
tasker:
image: strm/tasker
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
environment:
configuration: |
logging:
level:
ROOT: WARN
org.springframework.web: WARN
sh.strm: DEBUG
schedule:
- every: minute
task: hello
- every: minute
task: helloFromPython
- every: minute
task: helloFromNode
tasks:
docker:
- name: hello
image: debian:jessie
script:
- echo Hello world from Tasker
- name: helloFromPython
image: python:3-slim
script:
- python -c 'print("Hello world from python")'
- name: helloFromNode
image: node:8
script:
- node -e 'console.log("Hello from node")'
There are 3 tasks there; all of them will run every minute (every: minute), and each will execute the script code inside the image defined in its image section.
Just run docker-compose up and see it working. Here is the Tasker repo with the full documentation:
http://github.com/opsxcq/tasker
Though this aims to run jobs alongside a running process in a container via Docker's exec interface, it may be of interest to you.
I've written a daemon that observes containers and schedules jobs, defined in their metadata, on them. Example:
version: '2'
services:
  wordpress:
    image: wordpress
  mysql:
    image: mariadb
    volumes:
      - ./database_dumps:/dumps
    labels:
      deck-chores.dump.command: sh -c "mysqldump --all-databases > /dumps/dump-$$(date -Idate)"
      deck-chores.dump.interval: daily
'Classic', cron-like configuration is also possible.
Here are the docs, here's the image repository.
VonC's answer is pretty thorough. In addition I'd like to add one thing that helped me. If you just want to run a cron job without tailing a file, you'd be tempted to just remove the && tail -f /var/log/cron.log from the cron command.
However this will cause the Docker container to exit shortly after running because when the cron command completes, Docker thinks the last command has exited and hence kills the container. This can be avoided by running cron in the foreground via cron -f.
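In other words, assuming cron is installed and your crontab has already been applied in earlier layers, the Dockerfile can end with just:
CMD ["cron", "-f"]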
If you're using Docker for Windows, remember that you have to change your line-ending format from CRLF to LF (i.e. from DOS to Unix) if you intend to import your crontab file from Windows to your Ubuntu container. If not, your cron job won't work. Here's a working example:
FROM ubuntu:latest
RUN apt-get update && apt-get -y install cron
RUN apt-get update && apt-get install -y dos2unix
# Add crontab file (from your windows host) to the cron directory
ADD cron/hello-cron /etc/cron.d/hello-cron
# Change line ending format to LF
RUN dos2unix /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/hello-cron
# Apply cron job
RUN crontab /etc/cron.d/hello-cron
# Create the log file to be able to run tail
RUN touch /var/log/hello-cron.log
# Run the command on container startup
CMD cron && tail -f /var/log/hello-cron.log
This actually took me hours to figure out, as debugging cron jobs in docker containers is a tedious task. Hope it helps anyone else out there that can't get their code to work!
But: if cron dies, the container keeps running.
I created a Docker image based on the other answers, which can be used like
docker run -v "/path/to/cron:/etc/cron.d/crontab" gaafar/cron
where /path/to/cron: absolute path to crontab file, or you can use it as a base in a Dockerfile:
FROM gaafar/cron
# COPY crontab file in the cron directory
COPY crontab /etc/cron.d/crontab
# Add your commands here
For reference, the image is here.
Unfortunately, none of the above answers worked for me, although all of them led me toward my eventual solution. Here is the snippet in case it helps someone. Thanks.
This can be solved with a bash file: because of the layered architecture of Docker, the cron service doesn't get initiated by RUN/CMD/ENTRYPOINT commands alone.
Simply add a bash file which will initiate cron and the other services (if required).
Dockerfile
FROM gradle:6.5.1-jdk11 AS build
# apt
RUN apt-get update
RUN apt-get -y install cron
# Setup cron to run every minute to print (you can add/update your cron here)
RUN touch /var/log/cron-1.log
RUN (crontab -l ; echo "* * * * * echo testing cron.... >> /var/log/cron-1.log 2>&1") | crontab
# copy entrypoint.sh into the image and make it executable
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh
CMD ["bash","entrypoint.sh"]
entrypoint.sh
#!/bin/sh
service cron start & tail -f /var/log/cron-2.log
If any other service is also required to run along with cron then add that service with & in the same command, for example: /opt/wildfly/bin/standalone.sh & service cron start & tail -f /var/log/cron-2.log
Once you get into the Docker container, you can see that testing cron.... is printed every minute to the file /var/log/cron-1.log.
But, if cron dies, the container keeps running.
Define the cronjob in a dedicated container which runs the command via docker exec to your service.
This is higher cohesion and the running script will have access to the environment variables you have defined for your service.
# docker-compose.yml
version: "3.3"
services:
  myservice:
    environment:
      MSG: i'm being cronjobbed, every minute!
    image: alpine
    container_name: myservice
    command: tail -f /dev/null
  cronjobber:
    image: docker:edge
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    container_name: cronjobber
    command: >
      sh -c "
      echo '* * * * * docker exec myservice printenv | grep MSG' > /etc/crontabs/root
      && crond -f"
I decided to use BusyBox, as it is one of the smallest images.
crond is executed in the foreground (-f), and logging is sent to stderr (-d); I didn't choose to change the log level.
The crontab file is copied to the default path: /var/spool/cron/crontabs.
FROM busybox:1.33.1
# Usage: crond [-fbS] [-l N] [-d N] [-L LOGFILE] [-c DIR]
#
# -f Foreground
# -b Background (default)
# -S Log to syslog (default)
# -l N Set log level. Most verbose 0, default 8
# -d N Set log level, log to stderr
# -L FILE Log to FILE
# -c DIR Cron dir. Default:/var/spool/cron/crontabs
COPY crontab /var/spool/cron/crontabs/root
CMD [ "crond", "-f", "-d" ]
But output of the tasks apparently can't be seen in docker logs.
When you deploy your container on another host, note that it won't start any processes automatically. You need to make sure that the cron service is running inside your container.
In our case, I am using Supervisord with other services to start the cron service.
[program:misc]
command=/etc/init.d/cron restart
user=root
autostart=true
autorestart=true
stderr_logfile=/var/log/misc-cron.err.log
stdout_logfile=/var/log/misc-cron.out.log
priority=998
From the above examples I created this combination:
Alpine image, editing the crontab with nano (I hate vi):
FROM alpine
RUN apk update
RUN apk add curl nano
ENV EDITOR=/usr/bin/nano
# start crond with log level 8 in foreground, output to stderr
CMD ["crond", "-f", "-d", "8"]
# Shell Access
# docker exec -it <CONTAINERID> /bin/sh
# Example Cron Entry
# crontab -e
# * * * * * echo hello > /proc/1/fd/1 2>/proc/1/fd/2
# DATE/TIME WILL BE IN UTC
Set up a cron in parallel to a one-time job
Create a script file, say run.sh, with the job that is supposed to run periodically.
#!/bin/bash
timestamp=`date +%Y/%m/%d-%H:%M:%S`
echo "System path is $PATH at $timestamp"
Save and exit.
Use Entrypoint instead of CMD
If you have multiple jobs to kick in during docker containerization, use the entrypoint file to run them all.
The entrypoint file is a script that comes into action when a docker run command is issued, so all the steps that we want to run can be put in this script file.
For instance, we have 2 jobs to run:
Run once job: echo “Docker container has been started”
Run periodic job: run.sh
Create entrypoint.sh
#!/bin/bash
# Start the run once job.
echo "Docker container has been started"
# Setup a cron schedule
echo "* * * * * /run.sh >> /var/log/cron.log 2>&1
# This extra line makes it a valid cron" > scheduler.txt
crontab scheduler.txt
cron -f
Let’s understand the crontab that has been set up in the file
* * * * *: the cron schedule; this job runs every minute. You can update the schedule based on your requirements.
/run.sh: the path to the script file which is to be run periodically.
/var/log/cron.log: the file that stores the output of the scheduled cron job.
2>&1: error logs (if any) are also redirected to the same output file used above.
Note: do not forget to add an extra new line, as it makes it a valid cron file.
scheduler.txt: the complete cron setup is redirected to this file.
Using System/User specific environment variables in cron
My actual cron job expected most of its arguments as environment variables passed to the docker run command. But with bash, I was not able to use any of the environment variables that belong to the system or the Docker container.
Then this came up as a workaround to the problem:
Add the following line in the entrypoint.sh
declare -p | grep -Ev 'BASHOPTS|BASH_VERSINFO|EUID|PPID|SHELLOPTS|UID' > /container.env
Update the cron setup to specify:
SHELL=/bin/bash
BASH_ENV=/container.env
At last, your entrypoint.sh should look like
#!/bin/bash
# Start the run once job.
echo "Docker container has been started"
declare -p | grep -Ev 'BASHOPTS|BASH_VERSINFO|EUID|PPID|SHELLOPTS|UID' > /container.env
# Setup a cron schedule
echo "SHELL=/bin/bash
BASH_ENV=/container.env
* * * * * /run.sh >> /var/log/cron.log 2>&1
# This extra line makes it a valid cron" > scheduler.txt
crontab scheduler.txt
cron -f
Last but not least, create a Dockerfile:
FROM ubuntu:16.04
MAINTAINER Himanshu Gupta
# Install cron
RUN apt-get update && apt-get install -y cron
# Add files
ADD run.sh /run.sh
ADD entrypoint.sh /entrypoint.sh
RUN chmod +x /run.sh /entrypoint.sh
ENTRYPOINT /entrypoint.sh
That’s it. Build and run the Docker image!
Here's my docker-compose based solution:
cron:
  image: alpine:3.10
  command: crond -f -d 8
  depends_on:
    - servicename
  volumes:
    - './conf/cron:/etc/crontabs/root:z'
  restart: unless-stopped
The lines with the cron entries are in the ./conf/cron file.
Note: this won't run commands that aren't in the Alpine image.
Also, output of the tasks apparently won't appear in docker logs.
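For reference, ./conf/cron is a plain BusyBox root crontab (no user field); its assumed contents would look something like:
* * * * * echo "hello from alpine cron" >> /tmp/cron.log 2>&1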
This question has a lot of answers, but some are complicated and others have drawbacks. I'll try to explain the problems and deliver a solution.
cron-entrypoint.sh:
#!/bin/bash
# copy machine environment variables to cron environment
printenv | cat - /etc/crontab > temp && mv temp /etc/crontab
## validate cron file
crontab /etc/crontab
# cron service with SIGTERM support
service cron start
trap "service cron stop; exit" SIGINT SIGTERM
# just dump your logs to std output
tail -f \
/app/storage/logs/laravel.log \
/var/log/cron.log \
& wait $!
Problems solved:
environment variables are not available in the cron environment (such as env vars or Kubernetes secrets)
it stops when the crontab file is not valid
it stops cron jobs gracefully when the machine receives a SIGTERM signal
For context, I use previous script on Kubernetes with Laravel app.
These lines were the ones that helped me run my pre-scheduled task:
ADD mycron/root /etc/cron.d/root
RUN chmod 0644 /etc/cron.d/root
RUN crontab /etc/cron.d/root
RUN touch /var/log/cron.log
CMD ( cron -f -l 8 & ) && apache2-foreground # <-- run cron
My project runs inside FROM php:7.2-apache.
But: if cron dies, the container keeps running.
So, my problem was the same. The fix was to change the command section in the docker-compose.yml.
From
command: crontab /etc/crontab && tail -f /etc/crontab
To
command: crontab /etc/crontab
command: tail -f /etc/crontab
The problem was the && between the commands. After deleting it, all was fine.
Focusing on gracefully stopping the cronjobs when receiving SIGTERM or SIGQUIT signals (e.g. when running docker stop).
That's not too easy. By default, the cron process just gets killed without paying attention to running cronjobs. I'm elaborating on pablorsk's answer:
Dockerfile:
FROM ubuntu:latest
RUN apt-get update \
&& apt-get -y install cron procps \
&& rm -rf /var/lib/apt/lists/*
# Copy cronjobs file to the cron.d directory
COPY cronjobs /etc/cron.d/cronjobs
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/cronjobs
# similarly prepare the default cronjob scripts
COPY run_cronjob.sh /root/run_cronjob.sh
RUN chmod +x /root/run_cronjob.sh
COPY run_cronjob_without_log.sh /root/run_cronjob_without_log.sh
RUN chmod +x /root/run_cronjob_without_log.sh
# Apply cron job
RUN crontab /etc/cron.d/cronjobs
# to gain access to environment variables, we need this additional entrypoint script
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# optionally, change received signal from SIGTERM TO SIGQUIT
#STOPSIGNAL SIGQUIT
# Run the command on container startup
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh:
#!/bin/bash
# make global environment variables available within crond, too
printenv | grep -v "no_proxy" >> /etc/environment
# SIGQUIT/SIGTERM-handler
term_handler() {
echo 'stopping cron'
service cron stop
echo 'stopped'
echo 'waiting'
x=$(($(ps u -C run_cronjob.sh | wc -l)-1))
xold=0
while [ "$x" -gt 0 ]
do
if [ "$x" != "$xold" ]; then
echo "Waiting for $x running cronjob(s):"
ps u -C run_cronjob.sh
xold=$x
sleep 1
fi
x=$(($(ps u -C run_cronjob.sh | wc -l)-1))
done
echo 'done waiting'
exit 143; # 128 + 15 -- SIGTERM
}
# cron service with SIGTERM and SIGQUIT support
service cron start
trap "term_handler" QUIT TERM
# endless loop
while true
do
tail -f /dev/null & wait ${!}
done
cronjobs
* * * * * ./run_cronjob.sh cron1
*/2 * * * * ./run_cronjob.sh cron2
*/3 * * * * ./run_cronjob.sh cron3
Assuming you wrap all your cronjobs in a run_cronjob.sh script. That way, you can execute arbitrary code for which shutdown will wait gracefully.
run_cronjob.sh (optional helper script to keep cronjob definitions clean)
#!/bin/bash
DIR_INCL="${BASH_SOURCE%/*}"
if [[ ! -d "$DIR_INCL" ]]; then DIR_INCL="$PWD"; fi
cd "$DIR_INCL"
# redirect all cronjob output to docker
./run_cronjob_without_log.sh "$@" > /proc/1/fd/1 2>/proc/1/fd/2
run_cronjob_without_log.sh
your_actual_cronjob_src()
By the way, when receiving a SIGKILL the container still shuts down immediately. That way you can use a command like docker-compose stop -t 60 cron-container to wait 60 seconds for cronjobs to finish gracefully, but still terminate them for sure after the timeout.
All the answers require root access inside the container, because cron itself requires UID 0.
Requesting root access (e.g. via sudo) is against Docker best practices.
I used https://github.com/gjcarneiro/yacron to manage scheduled tasks.
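For illustration, yacron is configured via a YAML file rather than a crontab; a minimal sketch (field names taken from its README, so verify against the current docs):
jobs:
  - name: hello
    command: echo "hello"
    schedule: "* * * * *"
    captureStderr: true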
When running on some trimmed-down images that restrict root access, I had to add my user to the sudoers file and start cron with sudo:
FROM node:8.6.0
RUN apt-get update && apt-get install -y cron sudo
COPY crontab /etc/cron.d/my-cron
RUN chmod 0644 /etc/cron.d/my-cron
RUN touch /var/log/cron.log
# Allow node user to start cron daemon with sudo
RUN echo 'node ALL=NOPASSWD: /usr/sbin/cron' >>/etc/sudoers
ENTRYPOINT sudo cron && tail -f /var/log/cron.log
Maybe that helps someone
But: if cron dies, the container keeps running.
From time to time I've tried to find a docker-friendly cron implementation, and this last time I found a couple.
By docker-friendly I mean, "output of the tasks can be seen in docker logs w/o resorting to tricks."
The most promising one I see at the moment is supercronic. It can be fed a crontab file, all while being docker-friendly. To make use of it:
docker-compose.yml:
services:
  supercronic:
    build: .
    command: supercronic crontab
Dockerfile:
FROM alpine:3.17
RUN set -x \
&& apk add --no-cache supercronic shadow \
&& useradd -m app
USER app
COPY crontab .
crontab:
* * * * * date
A gist with a bit more info.
Another good one is yacron, but it uses YAML.
ofelia can be used, but it seems to focus on running tasks in separate containers. That is probably not a downside, but I'm not sure why I'd want to do that.
And there's also a number of traditional cron implementations: dcron, fcron, cronie. But they come with "no easy way to see output of the tasks."
Just adding to the list of answers that you can also use this image:
https://hub.docker.com/repository/docker/cronit/simple-cron
And use it as a basis to start cron jobs, using it like this:
# Inherit from the base image
FROM cronit/simple-cron
# Set up all your dependencies
# Copy your local config
COPY jobs.cron ./
Evidently, it is possible to run cron as a process inside the container (under the root user) alongside other processes, using an ENTRYPOINT statement in the Dockerfile with a start.sh script that includes the line service cron start. More info here:
#!/bin/bash
# copy environment variables for local use
env >> /etc/environment
# start cron service
service cron start
# start other service
service other start
#...
If your image doesn't contain any daemon (i.e. it's only a short-running script or process), you may also consider starting your cron from outside, by simply defining a LABEL with the cron information, plus the scheduler itself. This way, your default container state is "exited". If you have multiple scripts, this may result in a lower footprint on your system than multiple parallel-running cron instances.
See: https://github.com/JaciBrunning/docker-cron-label
Example docker-compose.yaml:
version: '3.8'
# Example application of the cron image
services:
cron:
image: jaci/cron-label:latest
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
- "/etc/localtime:/etc/localtime:ro"
hello:
image: hello-world
restart: "no"
labels:
- "cron.schedule=* * * * * "
I wanted to share a modification of some of these other suggestions that I found more flexible. I wanted to enable changing the cron schedule with an environment variable, and ended up adding an additional script that runs within my entrypoint.sh, before the call to cron -f.
*updatecron.sh*
#!/bin/sh
#remove old cron files
rm -rf /etc/cron.*/*
#create a new formatted cron definition
echo "$crondef [appname] >/proc/1/fd/1 2>/proc/1/fd/2" >> /etc/cron.d/restart-cron
echo \ >> /etc/cron.d/restart-cron
chmod 0644 /etc/cron.d/restart-cron
crontab /etc/cron.d/restart-cron
This removes any existing cron files, creates a new cron file using the crondef environment variable, and then loads it.
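Usage then looks something like this (image name and schedule are placeholders):
docker run -e crondef="*/10 * * * *" my-app-image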
Ours was a Node.js application to be run as a cron job, and it was also dependent on environment variables.
The below solution worked for us.
Dockerfile:
# syntax=docker/dockerfile:1
FROM node:12.18.1
ENV NODE_ENV=production
COPY ["startup.sh", "./"]
# Removed steps to build the node js application
#--------------- Setup cron ------------------
# Install Cron
RUN apt-get update
RUN apt-get -y install cron
# Run every day at 1AM
#/proc/1/fd/1 2>/proc/1/fd/2 is used to redirect cron logs to standard output and standard error
RUN (crontab -l ; echo "0 1 * * * /usr/local/bin/node /app/dist/index.js > /proc/1/fd/1 2>/proc/1/fd/2") | crontab
#--------------- Start Cron ------------------
# Grant execution rights
RUN chmod 755 startup.sh
CMD ["./startup.sh"]
startup.sh:
#!/bin/bash
echo "Copying env variables to /etc/environment so that it is available for cron jobs"
printenv >> /etc/environment
echo "Starting cron"
cron -f
With multiple jobs and various dependencies like zsh and curl, this is a good approach that also combines the best practices from other answers. Bonus: it does NOT require you to set +x execution permissions on myScript.sh, which can be easy to miss in a new environment.
cron.dockerfile
FROM ubuntu:latest
# Install dependencies
RUN apt-get update && apt-get -y install \
cron \
zsh \
curl;
# Setup multiple jobs with zsh and redirect outputs to docker logs
RUN (echo "\
* * * * * zsh -c 'echo "Hello World"' 1> /proc/1/fd/1 2>/proc/1/fd/2 \n\
* * * * * zsh /myScript.sh 1> /proc/1/fd/1 2>/proc/1/fd/2 \n\
") | crontab
# Run cron in forground, so docker knows the task is running
CMD ["cron", "-f"]
Integrate this with docker compose like so:
docker-compose.yml
services:
  cron:
    build:
      context: .
      dockerfile: ./cron.dockerfile
    volumes:
      - ./myScript.sh:/myScript.sh
Keep in mind that you need to run docker compose build cron when you change the contents of cron.dockerfile, but changes to myScript.sh will be reflected right away, as it's mounted in compose.

service restart doesn't happen when using cron

I have a script that I call from a cron job. The script is
#!/bin/bash
python /home/ubuntu/gateway-haproxy-config.py | tee /etc/haproxy/haproxy.cfg.new
DIFF=$(diff /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.new)
if [ "$DIFF" != "" ]
then
mv /etc/haproxy/haproxy.cfg.new /etc/haproxy/haproxy.cfg
service haproxy restart
else
echo "unmodified"
fi
The script works exactly as expected when I run it from a shell prompt.
I installed it as a cron job as follows (for root, using sudo crontab -e):
* * * * * cd /home/ubuntu && ./gateway-config-cron
When the cron runs, the script successfully writes a new configuration file, does the diff and even replaces the old one with the new one when the diff is not empty.
The service haproxy restart never happens when running as a cron job; I am forced to restart the service manually.
This might have been a PATH-related problem. I was able to make it work as expected by providing the full path to service:
#!/bin/bash
python /home/ubuntu/gateway-haproxy-config.py | tee /etc/haproxy/haproxy.cfg.new
DIFF=$(diff /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.new)
if [ "$DIFF" != "" ]
then
mv /etc/haproxy/haproxy.cfg.new /etc/haproxy/haproxy.cfg
/usr/sbin/service haproxy restart
else
echo "unmodified"
fi
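Alternatively (an untested sketch), you can declare PATH at the top of the root crontab so that service and other commands resolve without absolute paths:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
* * * * * cd /home/ubuntu && ./gateway-config-cron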

Crontab cannot find AWS Credentials - linuxbox EC2

I've created a Linux box that has a very simple make-bucket command: aws s3 mb s3://bucket. Running this from the prompt works fine.
I've run aws configure as both the user I'm logged in as and with sudo. The details are definitely correct, as otherwise the above wouldn't have created the bucket.
The error message I'm getting from cron is: make_bucket failed: s3://cronbucket/ Unable to locate credentials
I've tried various things so far with the crontab to tell it where the credentials are; some of this is an amalgamation of other solutions, which may be part of the issue.
My crontab looks like:
AWS_CONFIG_FILE="/home/ec2-user/.aws/config"
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binx
0 0 * * * /usr/bin/env bash /opt/foo.sh &>> /tmp/foo.log
* * * * * /usr/bin/uptime > /tmp/uptime
* * * * * /bin/scripts/script.sh >> /bin/scripts/cronlogs/cronscript.log 2>&1
Initially I just had the two jobs that make the bucket and record the uptime (as a sanity check); the rest of the crontab consists of solutions from other posts that do not seem to be working.
Any advice is much appreciated, thank you.
The issue is that cron doesn't get your env. There are several ways of approaching this: either run a bash script that sources your profile, or, as a nice simple solution, include it in the crontab entry (change .profile to whatever you are using):
0 5 * * * . $HOME/.profile; /path/to/command/to/run
check out this thread
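Equivalently, you can source the profile inside a small wrapper script and call that wrapper from cron; a sketch (paths and bucket name are placeholders):
#!/bin/bash
# load the login environment so the AWS CLI can find its credentials
. "$HOME/.profile"
aws s3 mb s3://cronbucket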
If you have an IAM role attached as the ECS Fargate task role, then this solution will work.
Add the following line in the entrypoint.sh:
declare -p | grep -Ev 'BASHOPTS|BASH_VERSINFO|EUID|PPID|SHELLOPTS|UID' > /container.env
Add the lines below to your crontab or cron file:
SHELL=/bin/bash
BASH_ENV=/container.env
It worked for me.
In my case it was much trickier, because I was running a cron job in a Fargate instance; I could access S3 from the shell, but it did not work from cron.
In the Dockerfile, configure the cron job:
RUN echo -e \
"SHELL=/bin/bash\n\
BASH_ENV=/app/cron/container.env\n\n\
30 0 * * * /app/cron/log_backup.sh >> /app/cron/cron.log 2>&1" | crontab -
In the entrypoint script, configure the AWS credentials:
creds=`curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI`
AWS_ACCESS_KEY_ID=`echo $creds | jq .'AccessKeyId' | tr -d '"'`
AWS_SECRET_ACCESS_KEY=`echo $creds | jq '.SecretAccessKey' | tr -d '"'`
AWS_SESSION_TOKEN=`echo $creds | jq '.Token' | tr -d '"'`
After that, in the same entrypoint script, create the container.env file as @Tailor Devendra suggested in the previous solution:
declare -p | grep -Ev 'BASHOPTS|BASH_VERSINFO|EUID|PPID|SHELLOPTS|UID' > /app/cron/container.env
I can't say that I am happy with this solution, but it works.

How to delete a cron job with Ansible?

I have about 50 Debian Linux servers with a bad cron job:
0 * * * * ntpdate 10.20.0.1
I want to configure NTP sync with ntpd, so I need to delete this cron job. I use Ansible for configuration. I have tried to delete the cron entry with this play:
tasks:
  - cron: name="ntpdate" minute="0" job="ntpdate 10.20.0.1" state=absent user="root"
Nothing happened.
Then I ran this play:
tasks:
  - cron: name="ntpdate" minute="0" job="ntpdate pool.ntp.org" state=present
I see the new cron job in the output of crontab -l:
...
# m h dom mon dow command
0 * * * * ntpdate 10.20.0.1
#Ansible: ntpdate
0 * * * * ntpdate pool.ntp.org
but /etc/cron.d is empty! I don't understand how the Ansible cron module works.
How can I delete my manually configured cron job with Ansible's cron module?
A user's crontab entries are held under /var/spool/cron/crontabs/$USER, as mentioned in the crontab man page:
Crontab is the program used to install, remove or list the tables used to drive the cron(8) daemon. Each user can have their own crontab, and though these are files in /var/spool/ , they are not intended to be edited directly. For SELinux in mls mode can be even more crontabs - for each range. For more see selinux(8).
As mentioned in the man page, and the above quote, you should not be editing/using these files directly and instead should use the available crontab commands such as crontab -l to list the user's crontab entries, crontab -r to remove the user's crontab or crontab -e to edit the user's crontab entries.
To remove a crontab entry by hand you can either use crontab -r to remove all the user's crontab entries or crontab -e to edit the crontab directly.
With Ansible this can be done by using the cron module's state: absent like so:
hosts: all
tasks:
  - name: remove ntpdate cron entry
    cron:
      name: ntpdate
      state: absent
However, this relies on the comment that Ansible puts above the crontab entry, which can be seen from this simple task:
hosts: all
tasks:
  - name: add crontab test entry
    cron:
      name: crontab test
      job: echo 'Testing!' > /var/log/crontest.log
      state: present
Which then sets up a crontab entry that looks like:
#Ansible: crontab test
* * * * * echo Testing > /var/log/crontest.log
Unfortunately, if you have crontab entries that were set up outside of Ansible's cron module, then you will have to take a less clean approach to tidying them up.
For this we simply throw away the user's crontab with crontab -r, invoked via the shell module with a play that looks something like the following:
hosts: all
tasks:
  - name: remove user's crontab
    shell: crontab -r
We can then use further tasks to set the tasks that you wanted to keep or add that properly use Ansible's cron module.
If you have very complicated crontab entries, you can also delete them with Ansible's shell module, as shown in the example below.
---
- name: Deleting crontab entry
  hosts: ecx
  become: true
  tasks:
    - name: "decroning entry"
      shell: "crontab -l -u root | grep -v mybot | crontab -u root -"
      register: cronout
    - debug: msg="{{cronout.stdout_lines}}"
Explanation: just replace the "mybot" string in the shell command with a unique identifier for your crontab entry; that's it. To delete multiple crontab entries with Ansible, you can use multiple strings in grep, as shown below:
"crontab -l -u root |grep -v 'strin1\|string2\|string3\|string4' |crontab -u root -"
