run cron using supervisord on docker alpine image

I have a docker image that is part of a service, and I am testing the possibility of adding a cron job.
I have set up the Dockerfile with a crontab that should run a script (which for now should just output the date).
supervisord starts and spawns cron, but I see no regular output of dates, neither on the terminal nor in the log file.
the Dockerfile is:
FROM docker.io/python:3.6-alpine
ENV PYTHONUNBUFFERED 1
WORKDIR /opt/app-root/src
RUN apk update && apk add --no-cache bash supervisor \
&& rm -rf /var/cache/apk/*
# Copy Scripts
COPY mirror/src/ $WORKDIR
COPY mirror/src/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
RUN chmod +x ./entrypoint.sh
RUN chmod +x ./run.sh
RUN touch logs.log
RUN /usr/bin/crontab ./crontab.txt
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
the supervisord.conf file is:
[supervisord]
nodaemon = true
[program:cron]
command=crond -f
user=root
autostart=true
autorestart=true
crontab.txt is:
*/10 * * * * * bash ./run.sh >logs.log 2>&1
and the run.sh script is:
#!/bin/bash
echo `date +%Y%m%d_%H%M%S`
The only output I see on the terminal is:
crond[7]: USER root pid 8 cmd * bash ./run.sh >logs.log 2>&1
what's wrong with my setup?

Found a way; posting my solution. Note the crontab entry: crond expects five time fields, while the original crontab.txt had six, so the extra * was being treated as part of the command (visible in the crond log line above).
Dockerfile edited:
RUN chmod +x ./run.sh
# Setup CRON to update databases
RUN touch crontab.tmp \
&& echo '*/10 * * * * /opt/app-root/src/run.sh update > /dev/null' > crontab.tmp \
&& crontab crontab.tmp \
&& rm -rf crontab.tmp
# Start Server
EXPOSE 8080
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]
and supervisord conf:
[supervisord]
nodaemon = true
logfile = /dev/null
logfile_maxbytes = 0
pidfile = /run/supervisord.pid
[program:mirror]
command = /bin/bash -c "/opt/app-root/src/run.sh"
stdout_logfile = /dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile = /dev/stderr
stderr_logfile_maxbytes=0
user = root
autostart = true
autorestart = true
startretries=10
priority = 20
[program:cron]
command = /bin/bash -c "/usr/sbin/crond -f -d 0"
stdout_logfile = /dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile = /dev/stderr
stderr_logfile_maxbytes=0
user = root
autostart = true
autorestart = true
priority = 20
In this example the container was exiting too quickly, so I added a startretries option.
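To verify the schedule is active, it should be enough to check the registered crontab and watch crond's messages (CONTAINERNAME is a placeholder):
# list the crontab registered for root inside the container
docker exec -it CONTAINERNAME crontab -l
# crond runs in the foreground with -d 0, so its messages end up in the container logs
docker logs -f CONTAINERNAME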

Related

Cron doesn't run cron task in Docker container

I'm trying to run a script in a Docker container with cron. At first it looks like the script isn't being run at all. crontab -l shows the task. service cron reload doesn't fix the issue. If I do crontab -e, add a space and save the file, it suddenly works. So I can rule out permission issues, etc.
FROM node:17
RUN apt-get update && apt-get install -y cron
COPY scripts /app/scripts
COPY package.json /app/package.json
RUN chmod -R +x /app/scripts
WORKDIR /app
RUN touch /var/log/cron.log
RUN npm install
RUN echo "* * * * * /usr/local/bin/node /app/scripts/test.js >> /var/log/cron.log 2>&1" >> /var/spool/cron/crontabs/root
CMD ["cron","-f"]
Adding RUN crontab -u root /var/spool/cron/crontabs/root after creating the file seems to have fixed it.
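For reference, a sketch of the relevant Dockerfile lines with that fix applied (same paths as in the question):
RUN echo "* * * * * /usr/local/bin/node /app/scripts/test.js >> /var/log/cron.log 2>&1" >> /var/spool/cron/crontabs/root
# re-register the file with crontab so cron actually picks it up
RUN crontab -u root /var/spool/cron/crontabs/root
CMD ["cron","-f"]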

Running a process as the nobody user with gosu

I am trying to run a process as the nobody user in Linux. Currently it runs as root, but since the process doesn't require root access I want to run it as nobody with gosu. The problem is that even after activating the nobody user and running the process with it, "ps aux" shows that all processes are being run by root. Do I need to do something more after activating the nobody user to make this work? The process I am trying to run as nobody is rails s -b 0.0.0.0.
Below is my Dockerfile:
FROM ruby:3.0.1
EXPOSE $PORT
WORKDIR /srv
COPY Gemfile Gemfile.lock /srv/
COPY . /srv
RUN apt-get update -qq && apt-get install -y build-essential iproute2 libpq-dev nodejs && apt-get clean && bundle install --no-cache
#activating the nobody user account
RUN chsh -s /bin/bash nobody
RUN set -eux; \
apt-get install -y gosu; \
rm -rf /var/lib/apt/lists/*; \
gosu nobody true
COPY docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["server"]
Here is the docker-entrypoint.sh
#!/bin/sh
export BASH_SHELL=$(cat /etc/shells | grep /bash)
export ASH_SHELL=$(cat /etc/shells | grep /ash)
#Setting available Shell to $SHELL_PROFILE
if [ -n "$BASH_SHELL" ];
then
SHELL_PROFILE=$BASH_SHELL
elif [ -n "$ASH_SHELL" ];
then
SHELL_PROFILE=$ASH_SHELL
else
SHELL_PROFILE=sh
fi
rm -f tmp/pids/puma.5070.pid tmp/pids/server.pid
XRAY_ADDRESS="$(ip route | grep default | cut -d ' ' -f 3):2000"
export AWS_XRAY_DAEMON_ADDRESS=$XRAY_ADDRESS
echo "export AWS_XRAY_DAEMON_ADDRESS=$XRAY_ADDRESS" >> /root/.bashrc
case "$*" in
shell)
exec $SHELL_PROFILE
;;
server)
# gosu command to run rails s -b 0.0.0.0 process as nobody user
gosu nobody:nogroup bundle exec rails s -b 0.0.0.0
;;
*)
exec "$@"
;;
esac
Don't bother installing gosu or another tool; just set your Docker image to run as the nobody user (or some other non-root user). Do this at the very end of your Dockerfile, where you otherwise declare the CMD.
# Don't install gosu or "activate a user"; but instead
USER nobody
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["rails", "server", "-b", "0.0.0.0"]
In turn, that means you can remove the gosu invocation from the entrypoint script. I might remove most of it and trim it down to
#!/bin/sh
# Clean up stale pid files
rm -f tmp/pids/*.pid
# (Should this environment variable be set via `docker run`?)
export AWS_XRAY_DAEMON_ADDRESS="$(ip route | grep default | cut -d ' ' -f 3):2000"
# Run whatever the provided command was, in a Bundler context
exec bundle exec "$@"
If you need an interactive shell to debug the image, you can docker run --rm -it the-image bash, which works on many images (provided they (a) honor CMD and (b) have bash installed); you don't need a special artificial shell command, and you don't need to detect what's installed in the (fixed) image.
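For instance (image name is a placeholder):
docker run --rm -it the-image bash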

Debug FastAPI with Docker Compose in PyCharm

I am trying to run debug mode from the PyCharm configurations, but still no success. From the terminal, with docker-compose up, it runs successfully in run mode but not in debug. Any ideas what is happening and what might be the issue?
Here's the Dockerfile:
FROM python:3.8-slim-buster
EXPOSE 80
WORKDIR api
CMD apt-get --assume-yes update \
&& apt-get --assume-yes upgrade \
&& apt-get --assume-yes install libpq-dev build-essential python3-dev
COPY api/ .
CMD ls
RUN pip install -r requirements.txt
ENTRYPOINT chmod +x ./scripts/start.sh \
&& ./scripts/start.sh
The start.sh script is as below:
#! /usr/bin/env sh
set -e
# Uvicorn is used for local development, gunicorn for production
# Pre-start script is used to execute commands that need to be run before
# opening the server such as migrations etc
U_EXEC_PATH=${U_EXEC_PATH:-./api/scripts/start-uvicorn.sh}
G_EXEC_PATH=${G_EXEC_PATH:-./api/scripts/start-gunicorn.sh}
PRE_START_PATH=${PRE_START_PATH:-./api/scripts/pre-start.sh}
# Common variables
MODULE_NAME=${MODULE_NAME:-api.app.main}
VARIABLE_NAME=${VARIABLE_NAME:-app}
export APP_MODULE=${APP_MODULE:-"$MODULE_NAME:$VARIABLE_NAME"}
export S_HOST=${S_HOST:-0.0.0.0}
export S_PORT=${S_PORT:-80}
export S_LOG_LEVEL=${S_LOG_LEVEL:-info}
export S_APP_ENV=${S_APP_ENV:-local}
echo "Checking for script in $PRE_START_PATH"
if [ -f $PRE_START_PATH ] ; then
echo "Running script $PRE_START_PATH"
. "$PRE_START_PATH"
else
echo "There is no script $PRE_START_PATH"
fi
cd ..
if [ $S_APP_ENV = "local" ]
then
echo "Local environment."
echo "Starting start-uvicorn.sh"
. "$U_EXEC_PATH"
else
echo "$S_APP_ENV environment."
echo "Starting start-gunicorn.sh"
. "$G_EXEC_PATH"
fi
and start-uvicorn.sh
#! /usr/bin/env sh
set -e
# Start Uvicorn with live reload
echo "host": $S_HOST
echo "port": $S_PORT
echo "module": $APP_MODULE
echo "log level": $S_LOG_LEVEL
exec uvicorn --reload --host $S_HOST --port $S_PORT --log-level $S_LOG_LEVEL "$APP_MODULE"

Docker Node.js Cron

Hello everyone, I just about have my entire app dockerized except my cron jobs. Here is my Dockerfile:
FROM nodesource/precise
# Update install os dep
RUN apt-get update && apt-get install -y apt-utils cron
RUN apt-get -y install pwgen python-setuptools curl git unzip vim
# Add code
RUN mkdir /var/sites
ADD /api /var/sites/api
ADD /services /var/sites/services
RUN cd /var/sites/services && npm install
RUN cd /var/sites/api && npm install
# Add crontab file in the cron directory
ADD crontab /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/hello-cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Run the command on container startup
CMD cron && tail -f /var/log/cron.log
My cron file:
* * * * * root echo "Hello world" >> /var/log/cron.log 2>&1
* * * * * cd /var/sites/services/ldapSync && node index.js >> 2>&1
# An empty line is required at the end of this file for a valid cron file.
If I remove the node cron job and just leave the hello world one, it works fine, but when I have the node cron in there it doesn't appear to do anything. If I go into the container, do crontab -e, and add it manually, it works fine.
Any ideas what I'm doing wrong?
Thanks
In the second line of your cron file you are missing the username field of the system crontab format.
So instead of
* * * * * cd /var/sites/services/ldapSync && node index.js >> 2>&1
you should have
* * * * * root cd /var/sites/services/ldapSync && node index.js >> 2>&1
For more info see this
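Putting both lines together, the corrected file would look like this (the original >> redirect is also missing its target; /var/log/cron.log is assumed here, matching the first line):
* * * * * root echo "Hello world" >> /var/log/cron.log 2>&1
* * * * * root cd /var/sites/services/ldapSync && node index.js >> /var/log/cron.log 2>&1
# An empty line is required at the end of this file for a valid cron file.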
Have a look at redmatter/cron image. It took me a while to get crond to behave.
There is an example in the test subfolder on GitHub.
You can also refer to my answer here.

Why doesn't the cron service in Dockerfile run?

While searching for this issue I found that cron -f should start the service.
So I have:
RUN apt-get install -qq -y git cron
Next I have:
CMD cron -f && crontab -l > pullCron && echo "* * * * * git -C ${HOMEDIR} pull" >> pullCron && crontab pullCron && rm pullCron
My Dockerfile deploys without errors, but the cron doesn't run. What can I do to start the cron service with an added line?
PS:
I know that the git function in my cron should actually be a hook, but for me (and probably for others) this is about learning how to set up crons with Docker :-)
PPS:
Complete Dockerfile (UPDATED):
RUN apt-get update && apt-get upgrade -y
RUN mkdir -p /var/log/supervisor
RUN apt-get install -qq -y nginx git supervisor cron wget
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN wget -O ./supervisord.conf https://raw.githubusercontent.com/..../supervisord.conf
RUN mv ./supervisord.conf /etc/supervisor/conf.d/supervisord.conf
RUN apt-get install software-properties-common -y && apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0x5a16e7281be7a449 && add-apt-repository 'deb http://dl.hhvm.com/ubuntu utopic main' && apt-get update && apt-get install hhvm -y
RUN cd ${HOMEDIR} && git clone ${GITDIR} && mv ./tybalt/* ./ && rm -r ./tybalt && git init
RUN echo "* * * * * 'cd ${HOMEDIR} && /usr/bin/git pull origin master'" >> pullCron && crontab pullCron && rm pullCron
EXPOSE 80
CMD ["/usr/bin/supervisord"]
PPPS:
Supervisord.conf:
[supervisord]
autostart=true
autorestart=true
nodaemon=true
[program:nginx]
command=/usr/sbin/nginx -c /etc/nginx/nginx.conf
[program:cron]
command = cron -f -L 15
autostart=true
autorestart=true
Having started crond with supervisor, your cron jobs should be executed. Here are troubleshooting steps you can take to make sure cron is running:
Is the cron daemon running in the container? Log in to the container with docker exec -ti CONTAINERID /bin/bash and run ps a | grep cron to find out.
Is supervisord running?
In my setup for instance, the following supervisor configuration works without a problem. The image is ubuntu:14.04. I have CMD ["/usr/bin/supervisord"] in the Dockerfile.
[supervisord]
nodaemon=true
[program:crond]
command = /usr/sbin/cron
user = root
autostart = true
Try another simple cron job to find out whether the problem is your cron entry or the cron daemon. Add this when logged in to the container with crontab -e:
* * * * * echo "hi there" >> /tmp/test
Check the container logs for any further information on cron:
docker logs CONTAINERID | grep -i cron
These are just a few troubleshooting tips you can follow.
Cron is not running because the last CMD overrides the first one (as @xuhdev said). It's documented here: https://docs.docker.com/reference/builder/#cmd.
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
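A minimal illustration of that rule (only the second echo would ever run):
FROM ubuntu:14.04
CMD echo "this CMD is silently ignored"
CMD echo "only this last CMD runs"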
If you want to have nginx and cron running in the same container, you will need to use some kind of supervisor (like supervisord or others) that will be the PID 1 process of your container and manage the child processes. I think this project should help: https://github.com/nbraquart/docker-nginx-php5-cron (it seems to do what you're trying to achieve).
Depending on what your cron is there for, there could be other solutions, like building a new image for each commit or each tag, etc.
I've used this with CentOS and it works:
CMD service crond start ; tail -f /var/log/cron
The rest of my Dockerfile just yum installs cronie and touches the /var/log/cron file so it will be there when the CMD runs.
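A sketch of what that Dockerfile might look like under those assumptions (base image tag assumed):
FROM centos:7
# cronie provides crond on CentOS
RUN yum -y install cronie && yum clean all
# create the log file so the tail in CMD has something to follow
RUN touch /var/log/cron
CMD service crond start ; tail -f /var/log/cron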
On CentOS 7 this works for me:
[program:cron]
command=/usr/sbin/crond -n -s
user = root
autostart = true
stderr_logfile=/var/log/cron.err.log
stdout_logfile=/var/log/cron.log
-n is for foreground
-s is to log to stdout and stderr
In my case, it turns out I needed to run cron start at run time. I can't put it in my Dockerfile or docker-compose.yml, so I ended up placing it in the Makefile I use for deploy.
Something like:
task-name:
# docker-compose down && docker-compose build && docker-compose up -d
docker exec CONTAINERNAME /bin/bash -c "cron start"
