I am using Docker, and the OS is Ubuntu.
If I use crontab -e and add an entry there, then cron runs fine:
* * * * * /var/www/daily.sh
But if I remove the container, that crontab is gone too. I want to place the crontab in a file like crontabs.sh and mount it inside the container, so that when I recreate the container my cron job is still there.
I don't know at what location I need to mount it so that cron runs normally. Something like:
/myhost/code/crontabs.sh: /etc/crons.daily
As mentioned in this answer, you can copy your crontab file into the image by adding this to your Dockerfile:
FROM ubuntu:latest
MAINTAINER docker@ekito.fr
# Add crontab file in the cron directory
COPY crontab /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/hello-cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Run the command on container startup
CMD cron && tail -f /var/log/cron.log
(Source: the "Run a cron job with Docker" example by Julien Boulay.)
That way, your image will always include the right cron definition.
You can initialize the content of 'crontab', the local file you are copying to your image, with cronsandbox.com.
In your case, a daily job could be: 0 23 * * *
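For example, the crontab file you COPY could hold a single entry like this (a sketch; files under /etc/cron.d need a user field, and the log redirection is just an assumption):
# run the daily.sh from the question as root every day at 23:00
0 23 * * * root /var/www/daily.sh >> /var/log/cron.log 2>&1
# cron.d files must end with a newline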
If you don't want to build a new image for each change, remove the COPY line and mount the file at runtime:
docker run -v "$(pwd)/crontab":/etc/cron.d/hello-cron --name mycontainer myimage
That way, the local file crontab is mounted in the container as /etc/cron.d/hello-cron (or any other name you want).
Whenever you change it, stop and restart your container.
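For example, with the container name used above:
# edit the local crontab file, then restart the container so cron re-reads it
docker restart mycontainer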
There is a shell script in /file/location/azcopy/, and the azcopy binary is located there as well.
The command below runs successfully when I run it manually:
./azcopy cp "/abc/def/Goa.csv" "https://.blob.core.windows.net/abc\home\xyz?"
However, when I scheduled it in crontab, the "./azcopy" command didn't execute.
Below is the script:
#!/bin/bash
./azcopy cp "/abc/def/Goa.csv" "https://<Blobaccount>.blob.core.windows.net/abc\home\xyz?<SAS-Token>"
Below is the crontab entry:
00 17 * * * root /file/location/azcopy/script.sh
Is there something I'm doing wrong?
Could someone please help me figure out what's wrong?
When you use root to execute /file/location/azcopy/script.sh, your working directory is /root, so you need to add cd /file/location/azcopy/ to your script.sh to change the working directory. You can add pwd to the script to see the current working directory.
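A sketch of the adjusted script.sh, keeping the placeholders from the question:
#!/bin/bash
# cron starts this script with /root as the working directory,
# so move to the directory that contains the azcopy binary first
cd /file/location/azcopy/ || exit 1
./azcopy cp "/abc/def/Goa.csv" "https://<Blobaccount>.blob.core.windows.net/abc\home\xyz?<SAS-Token>"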
I am new to Docker. I managed to create a script which gets info from an online source and populates a SQL DB. All works well within the Docker container.
However, I need to make this run, for example, every minute.
So I amended my working Dockerfile and added the following:
Dockerfile:
RUN apt-get install -y cron
COPY cronms /etc/cron.d/cronms
RUN chmod 0644 /etc/cron.d/cronms
RUN crontab /etc/cron.d/cronms
RUN touch /var/log/cron.log
CMD cron && tail -f /var/log/cron.log
#CMD [ "python3", "./my_script.py" ] -- command before cron
My cronms file:
*/1 * * * * /usr/bin/python3 /my_script.py
The image builds without errors; however, when I run it, no data is downloaded.
What am I missing, please?
Thanks
Your crontab file has incorrect syntax. From the cron man page:
The system crontab (/etc/crontab) and the packages crontabs (/etc/cron.d/*) use the same format, except that the username for the command is specified after the time and date fields and before the command.
So your cronms file should look like:
*/1 * * * * root /usr/bin/python3 /my_script.py
I assume the Dockerfile in your question is truncated, but it's worth noting that for this to work you may need to explicitly install python3 as well as cron, depending on which base image you're using.
If I build an image using this Dockerfile:
FROM ubuntu:20.04
RUN apt-get update
RUN apt-get install -y cron python3
COPY cronms /etc/cron.d/cronms
RUN chmod 0644 /etc/cron.d/cronms
RUN crontab /etc/cron.d/cronms
RUN touch /var/log/cron.log
COPY my_script.py /my_script.py
CMD cron && tail -f /var/log/cron.log
And this cronms:
*/1 * * * * root /usr/bin/python3 /my_script.py
And this my_script.py:
#!/usr/bin/python3
import time
with open("/tmp/datafile", "a") as fd:
fd.write(time.ctime())
fd.write("\n")
Then I can confirm that everything works as expected: the script is
executed once a minute.
Note that nothing is written to /var/log/cron.log, however,
because cron logs to syslog, not to a file. To see output from
cron, you would need to arrange to run a syslog daemon, or use a
different cron daemon (for example, the busybox crond command can
log to a file or stderr).
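If it is only the script's own output you want to capture (rather than cron's logging), one workaround (a sketch, not required by the setup above) is to redirect it in the cron entry itself:
*/1 * * * * root /usr/bin/python3 /my_script.py >> /var/log/cron.log 2>&1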
I want to extend the postgres:10.2 image in order to add a cron job that runs some SQL queries at specific times:
FROM postgres:10.2
COPY task-purge.sh /usr/local/share/
RUN chown postgres:postgres /usr/local/share/task-purge.sh
RUN chmod 700 /usr/local/share/task-purge.sh
COPY query-task-purge.sql /usr/local/share/
RUN chown postgres:postgres /usr/local/share/query-task-purge.sql
RUN chmod 700 /usr/local/share/query-task-purge.sql
The problem is that the cron service is not started:
Inside the docker container:
root@5c17ce88c333:/# service cron status
[FAIL] cron is not running ... failed!
root@5c17ce88c333:/# pgrep cron
root@5c17ce88c333:/#
I am having difficulty starting it...
In the Dockerfile, I tried:
To add RUN service cron start: nothing changed.
To add CMD service cron start: when the container starts, it ends with Starting periodic command scheduler: cron without starting the DB.
To add CMD postgres && service cron start: when the container starts, it ends with "root" execution of the PostgreSQL server is not permitted. without starting the DB.
To add a wrapper CMD script like https://docs.docker.com/config/containers/multi-service_container/: same behaviour.
To add ENTRYPOINT "docker-entrypoint.sh" && service cron start: same behaviour.
To add service cron start in a new docker-entrypoint.sh (modified from the official postgres:10.2 Dockerfile https://hub.docker.com/layers/postgres/library/postgres/10.2/images/sha256-4b6b7bd361a3b7b69531b2c16766a38b0f3a89e9243f5a49ff16180dd2d42273?context=explore): Starting periodic command scheduler: cron, then cron: can't open or create /var/run/crond.pid: Permission denied ... failed!
To add update-rc.d cron defaults && update-rc.d cron enable to the docker-entrypoint.sh: nothing changed.
To add set -- su-exec root:root /bin/bash -c "service cron start": nothing changed.
To add set -- su-exec root:root /bin/bash -c "update-rc.d cron defaults && update-rc.d cron enable": nothing changed.
To add gosu root:root /bin/bash -c "service cron start": the container ends with error: failed switching to "root:root": operation not permitted.
To add exec gosu root:root /bin/bash -c "service cron start": the container ends with Starting periodic command scheduler: cron.
Do you have any idea how I can run a system service before postgres starts? I still want to extend postgres:10.2.
Thanks!
OK, I have understood why... I am answering my own question to help everybody: the docker-entrypoint.sh script runs an exec gosu postgres "$BASH_SOURCE" "$@" command (even in the latest postgres version: https://github.com/docker-library/postgres/blob/master/docker-entrypoint.sh), which calls this script again as the postgres user.
So every system operation needs to be executed before that command.
For example: write a function called system_configure and call it before that line:
# ... (outside main)
system_configure() {
    echo "[x] Crontab service start ..."
    service cron start
    echo "[x] Crontab service started"
}
# ... (inside main function path)
system_configure
exec gosu postgres "$BASH_SOURCE" "$@"
# ... (end main)
You could also use any other Docker image, like Ubuntu with supervisord, but prefer a distroless image for security reasons.
Just run it once in your Dockerfile using:
RUN service cron start
I am currently running a cron job from a host machine (Red Hat Linux) that executes a script in a Docker container. The problem I have is that when I redirect the standard output to a file whose path is inside the Docker container, the cron job throws an error basically saying that the path of the log file cannot be found. But if I change the output log file path to a path on the host machine, it works fine.
This does not work:
0 9 * * 1-5 sudo docker exec -i container_name /path/in/docker/container/script.sh > /path/in/docker/container/script.shout
But this one works:
0 9 * * 1-5 sudo docker exec -i container_name /path/in/docker/container/script.sh > /path/in/host/script.shout
How do I get the first cronjob working so I can have the output file in the docker container using the path in the docker container?
I don't want to run the cronjob as root and that's why I need sudo before docker exec. Please note, only root has access to the docker volume path in the host machine, which is why I can't use the docker volume path either.
Cron runs your command with a shell, so the output redirect is handled by the shell running on your host, not inside your container. To get shell commands like this to run inside the container, you need to run a shell as your docker command and escape or quote any of those shell operators so they are not interpreted until you are inside the container. E.g.:
0 9 * * 1-5 sudo docker exec -i container_name /bin/sh -c "/path/in/docker/container/script.sh > /path/in/docker/container/script.shout"
I would rather pass the redirection path as a parameter to the script (so remove the '>') and make the script itself redirect its output to that parameter file; see the sketch below.
Since the script is executed in the Docker container, it would see that path (as opposed to the cron job, which by default sees host paths).
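A minimal sketch of that approach (the argument handling is an assumption, and the paths are the placeholders from the question):
#!/bin/bash
# script.sh (sketch): the first argument is the file, inside the container,
# that all of the script's output should be written to
OUT="$1"
exec > "$OUT" 2>&1
# ... original body of script.sh goes here ...
The crontab entry on the host would then pass the container path instead of redirecting:
0 9 * * 1-5 sudo docker exec -i container_name /path/in/docker/container/script.sh /path/in/docker/container/script.shout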
We can use bash -c and put the redirect command between double quotes, as in this command:
docker exec ${CONTAINER_ID} bash -c "./path/in/container/script.sh > /path/in/container/out"
And we have to be sure /path/in/container/script.sh is an executable file, either by using this command inside the container:
chmod +x /path/in/container/script.sh
or by using this command from the host machine:
docker exec ${CONTAINER_ID} chmod +x /path/in/container/script.sh
You can use tee: a program that reads stdin and writes the same to both stdout AND the file specified as an arg.
echo 'foo' | tee file.txt
will write the text 'foo' in file.txt
Your desired command becomes:
0 9 * * 1-5 sudo docker exec -i container_name /path/in/docker/container/script.sh | tee /path/in/docker/container/script.shout
The drawback is that you also dump to stdout.
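If the extra copy on stdout is unwanted, tee's own output can be discarded (a sketch of the same command):
0 9 * * 1-5 sudo docker exec -i container_name /path/in/docker/container/script.sh | tee /path/in/docker/container/script.shout > /dev/null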
You may check this SO question for further possibilities and workarounds.
I have made a Docker image from a Dockerfile, and I want a cron job to be executed periodically when a container based on this image is running. My Dockerfile is this (the relevant parts):
FROM l3iggs/archlinux:latest
COPY source /srv/visitor
WORKDIR /srv/visitor
RUN pacman -Syyu --needed --noconfirm \
&& pacman -S --needed --noconfirm make gcc cronie python2 nodejs phantomjs \
&& printf "*/2 * * * * node /srv/visitor/visitor.js \n" >> cronJobs \
&& crontab cronJobs \
&& rm cronJobs \
&& npm install -g node-gyp \
&& PYTHON=/usr/sbin/python2 && export PYTHON \
&& npm install
EXPOSE 80
CMD ["/bin/sh", "-c"]
After creation of the image I run a container and verify that indeed the cronjob has been added:
crontab -l
*/2 * * * * node /srv/visitor/visitor.js
Now, the problem is that the cronjob is never executed. I have, of course, tested that "node /srv/visitor/visitor.js" executes properly when run manually from the console.
Any ideas?
One option is to use the host's crontab in the following way:
0 5 * * * docker exec mysql mysqldump --databases myDatabase -u myUsername -pmyPassword > /backups/myDatabase.sql
The above will periodically take a daily backup of a MySQL database.
If you need to chain complicated commands you can also use this format:
0 5 * * * docker exec mysql sh -c 'mkdir -p /backups/`date +\%d` && for DB in myDB1 myDB2 myDB3; do mysqldump --databases $DB -u myUser -pmyPassword > /backups/`date +\%d`/$DB.sql; done'
The above takes a 30 day rolling backup of multiple databases and does a bash for loop in a single line rather than writing and calling a shell script to do the same. So it's pretty flexible.
Or you could also put complicated scripts inside the docker container and run them like so:
0 5 * * * docker exec mysql /dailyCron.sh
It's a little tricky to answer this definitively, as I don't have time to test, but you have various options open to you:
You could use the Phusion base image, which comes with an init system and cron installed. It is based on Ubuntu and is comparatively heavyweight (at least compared to archlinux) https://registry.hub.docker.com/u/phusion/baseimage/
If you're happy to have everything started from cron jobs, you could just start cron from your CMD and keep it in the foreground (cron -f); see the sketch after this list.
You can use a lightweight process manager to start cron and whatever other processes you need (Phusion uses runit; Docker seems to recommend supervisor).
You could write your own CMD or ENTRYPOINT script that starts cron and your process. The only issue with this is that you will need to be careful to handle signals properly or you may end up with zombie processes.
In your case, if you're just playing around, I'd go with the last option; if it's anything more serious, I'd go with a process manager.
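A minimal sketch of the cron-in-the-foreground option (the second one above), assuming a Debian/Ubuntu-style base where the daemon is called cron and the crontab was installed in an earlier layer:
# Dockerfile fragment: run cron in the foreground so it is the container's
# long-running process
CMD ["cron", "-f"]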
If you're running your Docker container with --net=host, see this thread:
https://github.com/docker/docker/issues/5899
I had the same issue, and my cron tasks started running when I included --pid=host in the docker run command line arguments.
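For example (the image name is a placeholder):
docker run -d --net=host --pid=host myimage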