cron job doesn't execute docker command - linux

I'm seeing weird behavior while setting up a cron job.
I'm trying to execute a simple docker run command at a specific time every day. Everything except docker works; the docker command is just ignored or skipped.
I tried putting the command directly in the crontab:
6 4 * * * docker run -it --rm -e EMAIL=email@gmail.com -e PASS=SOMEPASS docker_image
That didn't work. Then I tried echoing to a file to see whether the job actually triggered:
6 4 * * * echo "works" > file.txt && docker run -it --rm -e EMAIL=email@gmail.com -e PASS=SOMEPASS docker_image
The echo works, but docker is still not executed. I also tried saving the output from docker to a file, but the file is empty:
6 4 * * * docker run -it --rm -e EMAIL=email@gmail.com -e PASS=SOMEPASS docker_image > file.log
Then I moved the entire command into a bash script and ran it manually to confirm the command is correct; it works. So I put the script into cron, again with logging:
40 4 * * * /root/script.sh >> /root/file.log (/root is the home folder)
Still nothing. I updated the bash script to echo before and after the docker command, and I see the output of both (START, DONE), but the docker command is still skipped:
#!/bin/bash
echo "START"
docker run -it --rm -e EMAIL=email@gmail.com -e PASS=SOMEPASS docker_image
echo "DONE"
I've also tried with sudo in the crontab; still no result.
I checked /var/log/syslog and can see that cron triggered the job:
Oct 16 04:40:01 ubuntu-8gb-nbg1-1 CRON[433288]: (root) CMD (sudo /root/script.sh >> /root/file.log)
Here are the permissions on the script:
-rwxr-xr-x 1 root root 176 Oct 16 04:38 script.sh
Please help me understand what I missed, or how to troubleshoot the problem.

Try removing the -it flags from the docker command, since cron does not provide a terminal (you can optionally add the -d flag if you want the container to run in the background):
6 4 * * * docker run --rm -e EMAIL=email@gmail.com -e PASS=SOMEPASS docker_image
or
6 4 * * * docker run -d --rm -e EMAIL=email@gmail.com -e PASS=SOMEPASS docker_image
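This also explains why the question's file.log stayed empty: with -t and no terminal, docker fails with an error like "the input device is not a TTY", which goes to stderr, while > only captures stdout. Redirecting stderr as well should make the failure visible:
6 4 * * * docker run --rm -e EMAIL=email@gmail.com -e PASS=SOMEPASS docker_image >> /root/file.log 2>&1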

Is your user in the docker group? Can you run docker ps from a shell? Does the cron job belong to root?
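To check, something like the following should work (the user name is a placeholder; group membership only takes effect on a new login session):
groups                             # is "docker" listed?
sudo usermod -aG docker your_user  # add the user to the docker group
docker ps                          # should then succeed without sudo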

Related

Crontab not running command in docker

* * * * * mkdir /Documents/my_folder && docker exec -it $(docker ps -qf "name=my_container_name") bundle exec rake batch:run_job --trace >>/dev/null 2>&1
I set up this crontab on my local machine.
The "my_folder" directory is created, but the command inside docker is not executed.
When I run the command manually:
docker exec -it $(docker ps -qf "name=my_container_name") bundle exec rake batch:run_job --trace
it works fine.
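This looks like the same TTY problem as in the first question above: cron does not provide a terminal, so docker exec -it fails before the rake task ever runs. Also, a plain mkdir fails on the second run once the folder already exists, and && then skips the docker command entirely. Dropping -it and using mkdir -p should help, e.g.:
* * * * * mkdir -p /Documents/my_folder && docker exec $(docker ps -qf "name=my_container_name") bundle exec rake batch:run_job --trace >>/dev/null 2>&1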

docker-entrypoint.sh: only exec "$@" is not working

I have been struggling with this issue for a couple of days.
File 1: docker-entrypoint.sh
#!/bin/bash
set -e
source /home/${HADOOP_INSTALL_USERNAME}/.bashrc
kinit -kt /home/${HADOOP_INSTALL_USERNAME}/keytab/dept-test-hadoop.${HADOOP_INSTALL_ENV}.keytab dept-test-hadoop
mkdir /tmp/test222222
exec "$#"
Dockerfile:
ENTRYPOINT ["/home/dept-test-hadoop/docker-entrypoint.sh"]
docker run command:
docker run -it hadoop-hive:v1 /bin/mkdir /tmp/test1
What I am trying to do is execute whatever command is passed as a command-line argument to docker run. Please note these commands require Kerberos authentication.
1) I can see /tmp/test222222, but I cannot see a directory like /tmp/test1 after the above docker run command. I think the exec "$@" in docker-entrypoint.sh is not executing. I can confirm the script itself runs, because /tmp/test222222 exists.
2) Is there a way to assign the values from environment variables?
Your container will exit as soon as it has created the directory. The container's life is the life of the exec'd command (or of docker-entrypoint.sh), so your container dies right after exec "$@" returns.
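One way to confirm that exec "$@" does run is to list the directory in the same command, since the container exits right afterwards (a sketch using the question's image):
docker run --rm -it hadoop-hive:v1 /bin/sh -c "mkdir /tmp/test1 && ls -ld /tmp/test1"
And because the question's docker run did not use --rm, the exited container is still around, so docker diff $(docker ps -ql) should also show the created /tmp/test1.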
If you are looking for a way to create a directory from an environment variable, you can try this:
#!/bin/bash
set -x
source /home/${HADOOP_INSTALL_USERNAME}/.bashrc
kinit -kt /home/${HADOOP_INSTALL_USERNAME}/keytab/dept-test-hadoop.${HADOOP_INSTALL_ENV}.keytab dept-test-hadoop
mkdir $MY_DIR
ls
exec "$#"
Now pass MY_DIR as an environment variable, but keep in mind that the command you exec must be long-running if the container is to stay up:
docker run -it -e MY_DIR=abcd hadoop-hive:v1 "your_long_running_process_to_exec"
for example
docker run -it -e MY_DIR=abcd hadoop-hive:v1 "<hadoop command>"
If you want to take the process to exec from an environment variable as well, you can try:
#!/bin/sh
set -x
mkdir $MY_DIR
ls
exec ${START_PROCESS}
so you can pass it at run time:
docker run -it -e MY_DIR=abcd -e START_PROCESS=my_process hadoop-hive:v1
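A common companion pattern is to give the image a default long-running command via CMD, which docker hands to the entrypoint as "$@" when no arguments are supplied (a sketch; hiveserver2 as the default process is an assumption):
ENTRYPOINT ["/home/dept-test-hadoop/docker-entrypoint.sh"]
CMD ["hiveserver2"]
Then docker run -it hadoop-hive:v1 starts the default process, while docker run -it hadoop-hive:v1 /bin/bash still overrides it.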

docker exec with standard output logged in a file inside the docker container

I am currently running a cron job on a host machine (Linux, Red Hat) that executes a script inside a docker container. The problem: when I redirect standard output to a file using a path inside the docker container, the cron job fails, saying the path of the log file cannot be found. But if I change the output log path to a path on the host machine, it works fine.
This does not work:
0 9 * * 1-5 sudo docker exec -i container_name /path/in/docker/container/script.sh > /path/in/docker/container/script.shout
But this one does:
0 9 * * 1-5 sudo docker exec -i container_name /path/in/docker/container/script.sh > /path/in/host/script.shout
How do I get the first cron job working, so that the output file ends up inside the docker container under the container path?
I don't want to run the cron job as root, which is why I need sudo before docker exec. Note that only root has access to the docker volume path on the host machine, which is why I can't use that path either.
Cron runs your command with a shell, so the output redirect is handled by the shell running on your host, not inside your container. To get shell constructs like this to run inside the container, you need to run a shell as your docker command, and escape or quote those shell operators so they are not interpreted until you are inside the container. E.g.
0 9 * * 1-5 sudo docker exec -i container_name /bin/sh -c "/path/in/docker/container/script.sh > /path/in/docker/container/script.shout"
I would rather pass the redirection path as a parameter to the script (so remove the '>'), and have the script itself redirect its output to that file.
Since the script is executed inside the docker container, it will see that path (as opposed to the cron job, which by default sees host paths).
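A minimal sketch of that approach, assuming the script takes the log path as its first argument:
#!/bin/sh
# script.sh: redirect all subsequent stdout to the file named by $1;
# the path is resolved inside the container, where this script runs
exec > "$1"
echo "this line ends up in the log file"
and in the crontab:
0 9 * * 1-5 sudo docker exec -i container_name /path/in/docker/container/script.sh /path/in/docker/container/script.shout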
We can use bash -c and put the redirect command between double quotes, as in this command:
docker exec ${CONTAINER_ID} bash -c "./path/in/container/script.sh > /path/in/container/out"
And we have to make sure /path/in/container/script.sh is an executable file, either by using this command from inside the container:
chmod +x /path/in/container/script.sh
or by using this command from the host machine:
docker exec ${CONTAINER_ID} chmod +x /path/in/container/script.sh
You can use tee: a program that reads stdin and writes the same to both stdout AND the file given as an argument.
echo 'foo' | tee file.txt
writes the text 'foo' both to stdout and to file.txt.
Your desired command becomes the following (running tee through a shell inside the container, so that the file path is resolved there):
0 9 * * 1-5 sudo docker exec -i container_name sh -c "/path/in/docker/container/script.sh | tee /path/in/docker/container/script.shout"
The drawback is that the output is also dumped to stdout.
You may check this SO question for further possibilities and workarounds.

Environment variables in docker containers - how do they work?

I can't understand something: as we know, we can pass docker run the argument -e SOME_VAR=13.
Then every process launched in the container (for example via docker exec ping localhost -c $SOME_VAR) can see this variable.
How does it work? After all, I thought environment variables were a shell thing, and we never launched bash. Can you explain how -e works without a shell?
For example, let's look at following example:
[user@user ~]$ sudo docker run -d -e XYZ=123 ubuntu sleep 10000
2543e7235fa9
[user@user ~]$ sudo docker exec -it 2543e7235fa9 echo test
test
[user@user ~]$ sudo docker exec -it 2543e7235fa9 echo $XYZ
<empty row>
Why did I get <empty row> instead of 123?
The problem is that your $XYZ is getting interpolated in the host shell environment, not your container.
$ export XYZ=456
$ docker run -d -e XYZ=123 ubuntu sleep 10000
$ docker exec -it $(docker ps -ql) echo $XYZ
456
$ docker exec -it $(docker ps -ql) sh -c 'echo $XYZ'
123
You have to quote it so it's passed through as a string literal to the container. Then it works fine.
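An alternative that sidesteps the quoting issue entirely is printenv, which reads the variable from the process environment itself rather than having a shell expand it:
docker exec -it $(docker ps -ql) printenv XYZ
123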
The environment is not specific to shells. Even ordinary processes have environments. They work the same for both shells and ordinary processes. This is because shells are ordinary processes.
When you do SOMEVAR=13 someBinary you define an environment variable called SOMEVAR for the new process, someBinary. You do the same with -e in docker because you are asking another process, the docker daemon, to start your process for you.
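You can see this without any shell involved: env is an ordinary binary that simply prints the environment it inherits, so the variable shows up even though no shell ever ran inside the container:
docker run --rm -e SOME_VAR=13 ubuntu env | grep SOME_VAR
SOME_VAR=13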

Inside Docker container, cronjobs are not getting executed

I have built a Docker image from a Dockerfile, and I want a cron job executed periodically when a container based on this image is running. My Dockerfile (the relevant parts):
FROM l3iggs/archlinux:latest
COPY source /srv/visitor
WORKDIR /srv/visitor
RUN pacman -Syyu --needed --noconfirm \
&& pacman -S --needed --noconfirm make gcc cronie python2 nodejs phantomjs \
&& printf "*/2 * * * * node /srv/visitor/visitor.js \n" >> cronJobs \
&& crontab cronJobs \
&& rm cronJobs \
&& npm install -g node-gyp \
&& PYTHON=/usr/sbin/python2 && export PYTHON \
&& npm install
EXPOSE 80
CMD ["/bin/sh", "-c"]
After building the image I run a container and verify that the cron job has indeed been added:
crontab -l
*/2 * * * * node /srv/visitor/visitor.js
Now, the problem is that the cronjob is never executed. I have, of course, tested that "node /srv/visitor/visitor.js" executes properly when run manually from the console.
Any ideas?
One option is to use the host's crontab in the following way:
0 5 * * * docker exec mysql mysqldump --databases myDatabase -u myUsername -pmyPassword > /backups/myDatabase.sql
The above takes a daily backup of a MySQL database.
If you need to chain complicated commands you can also use this format:
0 5 * * * docker exec mysql sh -c 'mkdir -p /backups/`date +\%d` && for DB in myDB1 myDB2 myDB3; do mysqldump --databases $DB -u myUser -pmyPassword > /backups/`date +\%d`/$DB.sql; done'
The above takes a 30 day rolling backup of multiple databases and does a bash for loop in a single line rather than writing and calling a shell script to do the same. So it's pretty flexible.
Or you could also put complicated scripts inside the docker container and run them like so:
0 5 * * * docker exec mysql /dailyCron.sh
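For that to work the script has to exist inside the container and be executable; if it only lives on the host, docker cp can put it there first (container and path as in the example above):
docker cp dailyCron.sh mysql:/dailyCron.sh
docker exec mysql chmod +x /dailyCron.sh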
It's a little tricky to answer this definitively, as I don't have time to test, but you have various options open to you:
You could use the Phusion base image, which comes with an init system and cron installed. It is based on Ubuntu and is comparatively heavyweight (at least compared to archlinux) https://registry.hub.docker.com/u/phusion/baseimage/
If you're happy to have everything started from cron jobs, you could just start cron from your CMD and keep it in the foreground (cron -f; see the sketch after this list).
You can use a lightweight process manager to start cron and whatever other processes you need (Phusion uses runit; Docker seems to recommend supervisor).
You could write your own CMD or ENTRYPOINT script that starts cron and your process. The only issue with this is that you will need to be careful to handle signals properly or you may end up with zombie processes.
In your case, if you're just playing around, I'd go with the last option; if it's anything more serious, I'd go with a process manager.
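For the second option, with the cronie package installed by the question's Dockerfile, that would mean replacing the shell CMD with cron running in the foreground (a sketch; -n is cronie's foreground/don't-fork flag, and the Debian/Ubuntu equivalent is cron -f):
CMD ["crond", "-n"]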
If you're running your Docker container with --net=host, see this thread:
https://github.com/docker/docker/issues/5899
I had the same issue, and my cron tasks started running when I included --pid=host in the docker run command line arguments.
