Docker container crontab not running [duplicate] - linux

This question already has answers here:
How to run a cron job inside a docker container?
(29 answers)
Closed 3 years ago.
I have a Docker image based on Ubuntu. I am trying to make a bash script run each day, but the cron job never runs. When the container is running, I check whether cron is running and it is. The bash script works perfectly and the crontab file is correctly copied into the container. I can't figure out where the problem is coming from.
Here is the Dockerfile:
FROM snipe/snipe-it:latest
ENV TZ=America/Toronto
RUN apt-get update \
&& apt-get install awscli -y \
&& apt-get clean \
&& apt-get install cron -y \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir /var/www/html/backups_scripts /var/www/html/config/scripts
COPY config/crontab.txt /var/www/html/backups_scripts
RUN /usr/bin/crontab /var/www/html/backups_scripts/crontab.txt
COPY config/scripts/backups.sh /var/www/html/config/scripts
CMD ["cron","-f"]
The last instruction, CMD, doesn't work. And as soon as I remove the CMD instruction, I get this message when I check the cron status inside the container:
root@fcfb6052274a:/var/www/html# /etc/init.d/cron status
* cron is not running
Even if I start the cron service before installing the crontab, the crontab jobs are still not launched.
How can I tackle this problem? Thank you.

You are running crontab in a RUN statement, but a RUN instruction only executes while the image is being built, not when you actually run a container from the resulting image.
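One way to act on that (a sketch only, reusing the crontab.txt path from the question) is to register the crontab when the container starts and keep cron itself as the foreground process:
COPY config/crontab.txt /var/www/html/backups_scripts
# register the crontab at container start, then keep cron as the foreground (PID 1) process
CMD ["/bin/sh", "-c", "/usr/bin/crontab /var/www/html/backups_scripts/crontab.txt && exec cron -f"]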

Run sshd in Docker container

I found this Dockerfile sample here:
// version 1
FROM ubuntu:latest
RUN apt update && apt install ssh -y
RUN service ssh start
CMD ["/usr/sbin/sshd","-D"]
When I build and run this Dockerfile, it runs an SSH server in the foreground, which is great.
If I use the following Dockerfile though:
// version 2
FROM ubuntu:latest
RUN apt update && apt install ssh -y
RUN service ssh start
# CMD ["/usr/sbin/sshd","-D"] // without this line
And then run the container:
~$ docker run -p 2222:22 -it ssh_server
And try to connect to it from another terminal, it doesn't work. Seemingly this call to sshd is necessary. On the other hand, if I just install SSH in the Dockerfile:
// version 3
FROM ubuntu:latest
RUN apt-get update && apt-get install -y ssh
And run the container like this:
~$ docker run -p 2222:22 -it ssh:test
~$ service ssh start
* Starting OpenBSD Secure Shell server sshd
Now I'm able to connect to the container. So I wonder: if the line RUN service ssh start
in version 1 is necessary, why isn't it necessary for version 3?
To add more to the confusion, if I build and run version 4:
// version 4
FROM ubuntu:latest
RUN apt update && apt install ssh -y
#RUN service ssh start // without this line
CMD ["/usr/sbin/sshd","-D"]
It doesn't work either.
Can someone please explain those behaviours? What is the relation between service ssh start and /usr/sbin/sshd?
OK everything is clear now:
Basically, running /usr/sbin/sshd is what runs the SSH server. The reason it didn't work on its own (version 4) is that the script run by service ssh start - the script /etc/init.d/ssh - creates the directory /run/sshd, which sshd requires in order to run.
This script also calls the executable /usr/sbin/sshd, but since it runs as part of the build, the process doesn't survive beyond the temporary container the layer was built from. What does survive is the /run/sshd directory! That's why running /usr/sbin/sshd as the CMD works.
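In other words, a version 4 style Dockerfile should work once that directory is created explicitly (a sketch, untested):
FROM ubuntu:latest
RUN apt update && apt install ssh -y
# create the privilege separation directory that /etc/init.d/ssh would otherwise create
RUN mkdir -p /run/sshd
CMD ["/usr/sbin/sshd","-D"]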
Thanks all!
To build on @YoavKlein's answer, service ssh start can take arguments which are passed to sshd, so rather than
# Incidentally creates /run/sshd
RUN service ssh start
# Run the service in the foreground when starting the container
CMD ["/usr/sbin/sshd", "-D"]
you can just do
# Run the service in the foreground when starting the container
CMD ["service", "ssh", "start", "-D"]
which will start the SSH server through service, but run it in the foreground, avoiding the need for a separate RUN to do the first-time setup.
I have taken the idea from @mark-raymond :)
The following docker run command with the -D flag worked for me:
docker run -itd -p 2222:22 <dockerImageName:Tag> /usr/sbin/sshd -D

docker container exits immediately after run [mosquitto broker container]

Hello, I have a problem with Docker. I recently wrote a Dockerfile to create a "mosquitto-mqtt" image so I can run my own MQTT broker with SSL protection. The image builds fine, with no problems, but if I run a new container with "docker run -itd --name broken ce69ee4b2f4e", the container starts and then exits automatically, and when I check the logs everything looks fine: "[ ok .] Starting network daemon:: mosquitto.". I don't know why. Please check my Dockerfile below. I need help solving this, thank you.
#Download base image debian
FROM debian:latest
#Update system
RUN apt-get update -y
#Install Wget and gnup2
RUN apt-get install wget -y && apt-get install gnupg2 -y
#Download and add key
RUN wget http://repo.mosquitto.org/debian/mosquitto-repo.gpg.key
RUN apt-key add mosquitto-repo.gpg.key
RUN rm mosquitto-repo.gpg.key
## append apt mirror for debian
RUN echo "# mirror" >> /etc/apt/source.list
RUN echo "deb http://repo.mosquitto.org/debian stretch main" >> /etc/apt/source.list
#Update and upgrade system
RUN apt-get update -y && apt-get upgrade -y
#install mosquitto
RUN apt-get install mosquitto -y
#Copy file configuration
COPY mosquitto.conf /etc/mosquitto
#Copy certificate folder
COPY certs/mosquitto-ca.crt /etc/mosquitto/certs
COPY certs/mosquitto-server.crt /etc/mosquitto/certs
COPY certs/mosquitto-server.key /etc/mosquitto/certs
#Run command
ENTRYPOINT ["/etc/init.d/mosquitto", "start"]
Log output:
[ ok .] Starting network daemon:: mosquitto.
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d00bd23ae2d6 ce69ee4b2f4e "/etc/init.d/mosquit…" 9 minutes ago Exited (0) 9 minutes ago broken
Containers are a wrapper around a process, and when that process exits, the container exits. In this case:
ENTRYPOINT ["/etc/init.d/mosquitto", "start"]
That process is /etc/init.d/mosquitto, which almost certainly runs, spawns a daemon in the background, and exits (standard for anything in init.d). You should instead run mosquitto directly, with foreground options if available.
If that's not possible, something like supervisord would be a less-than-optimal fallback, with the ability to watch a background daemon.
And if neither of those works, you can run your command from a script that ends with a tail -f /dev/null, but that would be the worst option since it ignores any errors.
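For completeness, that last-resort wrapper would be something like this (a sketch only; it keeps the container alive but hides failures of the daemon):
#!/bin/sh
# start mosquitto in the background via the init script, then keep PID 1 alive forever
/etc/init.d/mosquitto start
tail -f /dev/null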
It works! I found the solution: mosquitto just needs to be run directly with the -c flag and the path to its configuration file.
This is a good method:
ENTRYPOINT ["mosquitto", "-c", "/etc/mosquitto/mosquitto.conf"]
Thanks all for helping me!

Python3 script ends randomly in docker container

I am new to Docker and have some beginner misunderstandings about its usage and concepts.
How I start my built image:
docker run -d -p 7070:80 --name mov_container my_image
My Dockerfile:
FROM php:7.1-apache
RUN apt-get update
RUN apt-get install -y python3
RUN apt-get install -y python3-pip
RUN pip3 install requests
RUN pip3 install pymysql
COPY src/ /var/www/html/
COPY Movement_Tracker.py /var/movtrack/
RUN docker-php-ext-install mysqli
RUN docker-php-ext-enable mysqli
RUN apachectl restart
EXPOSE 80
CMD python3 /var/movtrack/Movement_Tracker.py > flog.log
Is this a proper Dockerfile?
The "Movement_Tracker.py" seems to stop every second or third day. But the script is designed to run endlessly.
More precisely:
ps aux | grep python3 at host (where docker is installed) shows the python3 process.
Inside the container (exec -it...ps aux | grep python3) shows NO python3 process.
The task of the script is to write some sensor data to a database, and that no longer happens (as mentioned, after 2 or 3 days).
My questions:
Do I have an anti-pattern, because there is an Apache service and a never-ending python3 script running?
Why is the python3 script still visible on the host but not in the container? It is obviously not working anymore.
Is it allowed to have an apache/php and a python script running in a single container?
Why does the python3 script stop randomly (assuming there is no script error)?
Thank you in advance.

How do I connect to the localhost of a docker container (from inside the container)

I have a nodejs app that connects to a blockchain on the same server. Normally I use 127.0.0.1 + the port number (each chain gets a different port).
I decided to put the chain and the app in the same container, so that the frontend developers don't have to bother with setting up the chain.
However, when I build the image the chain should start, but when I run the image it isn't running. Furthermore, when I go into the container and try to run it manually, it says "besluitChain2@xxx.xx.x.2:PORT". So I thought that instead of 127.0.0.1 I needed to connect to the port on 127.0.0.2, but that doesn't seem to work.
I'm sure connecting like this isn't new, and it should work the same as with a database. Can anyone help? The first piece of advice I need is how to debug these images, because I have no idea where it goes wrong.
here is my dockerfile
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y apt-utils
RUN apt-get install -y build-essential
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash -
RUN apt-get install -y nodejs
ADD workfolder/app /root/applications/app
ADD .multichain /root/.multichain
RUN npm install \
&& apt-get upgrade -q -y \
&& apt-get dist-upgrade -q -y \
&& apt-get install -q -y wget curl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& cd /tmp \
&& wget http://www.multichain.com/download/multichain-1.0-beta-1.tar.gz \
&& tar -xvzf multichain-1.0-beta-1.tar.gz \
&& cd multichain-1.0-beta-1 \
&& mv multichaind multichain-cli multichain-util /usr/local/bin \
&& cd /tmp \
&& rm -Rf multichain*
RUN multichaind Chain -daemon
RUN cd /root/applications/app && npm install
CMD cd /root/applications/app && npm start
EXPOSE 8080
btw due to policies I can only connect to the server at port 80 to check if it works. When I run the docker image I can go to my /api-docs but not to any of the endpoints where I start interacting with the blockchain.
I decided to put the chain and the app in the same container
That was a mistake, I think.
Docker is not a virtual machine. It's a virtual application or process instance.
A Docker container runs a linux distro under the hood, but this is a detail that should be ignored when thinking about the purpose of Docker.
You should think of a Docker container as a single application process, not as a full virtual machine that generally runs multiple processes. This is evidenced by the way Docker shuts the container down once the main process exits (the process with PID 1).
I've got a longer post about this, here: https://derickbailey.com/2016/08/29/so-youre-saying-docker-isnt-a-virtual-machine/
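You can see this with any image and a command that exits right away (a quick illustration; the container name here is arbitrary):
~$ docker run --name pid1-demo ubuntu:16.04 /bin/true
~$ docker ps -a --filter name=pid1-demo
The status column will show something like "Exited (0)": the container stopped as soon as its main process did.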
Additionally, the RUN multichaind instruction in your Dockerfile doesn't run the chain in your image / container. It tells Docker to run that command during the build process.
A Dockerfile is a list of instructions for building an image. The wording here is important. An image is not executed, it is built. An image is a static, immutable template from which a Container is executed.
RUN multichaind Chain -daemon
By putting this RUN instruction in your image, you are temporarily starting the chain, but it is immediately halted (forcefully) when the image layer is done building. It will not remain running, because an image is not executed, it is built.
My advice is to put the chain in a separate image.
You'll have one image for the chain, and one for the node.js app.
You can use docker-compose to make it easier to run containers from both of these at the same time. Or you can run containers manually from them. Either way, you need two images.
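As a rough sketch of the docker-compose approach (the image names, the environment variable and the port mapping are made up for illustration; adjust them to your own images and app configuration):
version: "2"
services:
  chain:
    image: my-multichain-image    # an image whose CMD runs multichaind in the foreground
  app:
    image: my-node-app-image      # the node.js app
    environment:
      - CHAIN_HOST=chain          # hypothetical setting: the app reaches the chain by service name
    ports:
      - "80:8080"
    depends_on:
      - chain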

how to set supervisor to run a shell script

I'm setting up a Dockerfile to install Node prerequisites and then set up supervisor in order to run the final npm install command. I'm running Docker in CoreOS under VirtualBox.
I have a Dockerfile that sets everything up correctly:
FROM ubuntu
MAINTAINER <<Me>>
# Install docker basics
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update
RUN apt-get upgrade -y
# Install dependencies and nodejs
RUN apt-get update
RUN apt-get install -y python-software-properties python g++ make
RUN add-apt-repository ppa:chris-lea/node.js
RUN apt-get update
RUN apt-get install -y nodejs
# Install git
RUN apt-get install -y git
# Install supervisor
RUN apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
# Add supervisor config file
ADD ./etc/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# Bundle app source
ADD . /src
# create supervisord user
RUN /usr/sbin/useradd --create-home --home-dir /usr/local/nonroot --shell /bin/bash nonroot
RUN chown -R nonroot: /src
# set install script to executable
RUN /bin/chmod +x /src/etc/install.sh
#set up .env file
RUN echo "NODE_ENV=development\nPORT=5000\nRIAK_SERVERS={SERVER}" > /src/.env
#expose the correct port
EXPOSE 5000
# start supervisord when container launches
CMD ["/usr/bin/supervisord"]
And then I want to set up supervisord to launch one of a few possible processes, including an installation shell script that I've confirmed to work correctly, install.sh, which is located in the application's /etc directory:
#!/bin/bash
cd /src; npm install
export PATH=$PATH:node_modules/.bin
However, I'm very new to supervisor syntax, and I can't get it to launch the shell script correctly. This is what I have in my supervisord.conf file:
[supervisord]
nodaemon=true
[program:install]
command=install.sh
directory=/src/etc/
user=nonroot
When I build the image from the Dockerfile, everything runs correctly, but when I launch a container from the image, I get the following:
2014-03-15 07:39:56,854 CRIT Supervisor running as root (no user in config file)
2014-03-15 07:39:56,856 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2014-03-15 07:39:56,913 INFO RPC interface 'supervisor' initialized
2014-03-15 07:39:56,913 WARN cElementTree not installed, using slower XML parser for XML-RPC
2014-03-15 07:39:56,914 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2014-03-15 07:39:56,915 INFO supervisord started with pid 1
2014-03-15 07:39:57,918 INFO spawnerr: can't find command 'install.sh'
2014-03-15 07:39:58,920 INFO spawnerr: can't find command 'install.sh'
Clearly, I have not set up supervisor correctly to run this shell script -- is there part of the syntax that I'm screwing up?
The best way that I found was setting this:
[program:my-program-name]
command = /path/to/my/command.sh
startsecs = 0
autorestart = false
startretries = 1
I think I got this sorted: I needed the full path in command, and instead of having user=nonroot in the .conf file, I put su nonroot into the install.sh script.
I had a quick look at the source code for supervisor and noticed that if the command does not contain a forward slash /, it will look in the PATH environment variable for that file. This imitates the behaviour of execution via a shell.
The following methods should fix your initial problem:
Specify the full path of the script (like you have done in your own answer)
Prefix the command with ./, i.e. ./install.sh (in theory, but untested)
Prefix the command with the shell executable, i.e. /bin/bash install.sh
I do not understand why user= does not work for you (have you tried it after fixing execution?), but the problem you encountered in your own answer was probably due to the incorrect usage of su which does not work like sudo. su will create its own interactive shell and will therefore hang while waiting for standard input. To run commands with su, use the -c flag, i.e. su -c "some-program" nonroot. An explicit shell can also be specified with the -s flag if necessary.
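Pulling those suggestions together, a corrected config might look like this (a sketch, assuming the script path from the question and that user= works once the command itself can be found):
[supervisord]
nodaemon=true

[program:install]
command=/src/etc/install.sh   ; full path, so no PATH lookup is needed
directory=/src/etc/
user=nonroot
startsecs=0
autorestart=false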
I had this issue too. For me, the root cause was failing to set the shebang line. Even if the script can run in bash fine, for supervisord to be able to exec() it, it has to begin with e.g. #!/bin/bash.
