Start cron and another process in single docker container - linux

I have a docker image that will start a foreground httpd process. I also have some tasks that
I'd like to run as cron jobs via crontab.
Ideally I'd like to start httpd as a non-root user; however, starting cron as well will require me to be root:
FROM ubuntu:latest
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive apt-get dist-upgrade -yq \
&& apt-get install -y cron httpd
# Do a bunch of other setup.....
USER 1000
#START PROCESSES....
The problem as I see it is that I cannot start cron as root, keep that process running and also start httpd as a foreground process.
Is there some way to have both these processes started in one docker container?

You can configure the user used to run commands in Docker with something like:
ENV USER_NAME dev
RUN echo "${USER_NAME} ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/${USER_NAME} && \
chmod 0440 /etc/sudoers.d/${USER_NAME}
ARG host_uid=1001
ARG host_gid=1001
RUN groupadd -g $host_gid $USER_NAME && useradd -g $host_gid -m -s /bin/bash -u $host_uid $USER_NAME
And build your image using
docker build --build-arg "host_uid=$(id -u)" --build-arg "host_gid=$(id -g)" -t <your-image> .
But why can't you run a shell script that starts one process in the background and keeps the other in the foreground?
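One way to do exactly that is to let a process supervisor run as PID 1 and manage both processes, so cron keeps running as root while the web server runs unprivileged. Below is a sketch of a supervisord config; it assumes supervisord is installed in the image and that the commands and the www-data user match your setup:

```ini
[supervisord]
nodaemon=true

[program:cron]
; cron needs to stay root
command=/usr/sbin/cron -f
user=root

[program:httpd]
; the web server drops privileges; it must listen on an
; unprivileged port (>1024) to bind without root
command=httpd -DFOREGROUND
user=www-data
```

With `CMD ["/usr/bin/supervisord"]` in the Dockerfile, both processes start together, and supervisord can restart either one if it dies.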

Related

Unable to run X11 session on container with namespace remapping enabled

I have an Alpine Linux container that runs an X server (using host devices with --privileged). I've recently enabled the user namespace remapping feature on the daemon (including adding --userns=host to my docker run command).
Dockerfile:
FROM alpine:3.15
ARG DOCKER_GROUP_ID
RUN apk add --no-cache xhost xinit xorg-server xf86-video-fbdev xf86-video-vesa xf86-input-libinput dwm && \
echo "allowed_users=anybody" > /etc/X11/Xwrapper.config && \
echo "needs_root_rights=yes" >> /etc/X11/Xwrapper.config && \
adduser -S dwm -s /bin/sh && \
addgroup -S -g $DOCKER_GROUP_ID dwm && \
addgroup dwm dwm
USER dwm
WORKDIR /home/dwm
ENTRYPOINT ["run"]
run script:
#!/bin/sh
startx /usr/bin/dwm :1
But right after that, my container doesn't work anymore. Whenever I try to launch it, it freezes and I have to kill it manually; no errors appear. From what I could find in the Xorg logs, it's a permission problem on /dev/tty0, but I still couldn't figure out how to make it work.
EDIT: After adding my container user to the tty group, the error changed to xf86OpenConsole: Cannot open virtual console 7 (permission denied).

Running a process with nobody user with gosu

I am trying to run a process as the nobody user on Linux. Currently it runs as root, but since the process doesn't require root access I want to use nobody with gosu. The problem is that even after activating the nobody user and running the process with it, ps aux shows all the processes being run by root. Do I need to do something more after activating the nobody user to make this possible? The process I am trying to run as nobody is rails s -b 0.0.0.0.
Below is my Dockerfile:
FROM ruby:3.0.1
EXPOSE $PORT
WORKDIR /srv
COPY Gemfile Gemfile.lock /srv/
COPY . /srv
RUN apt-get update -qq && apt-get install -y build-essential iproute2 libpq-dev nodejs && apt-get clean && bundle install --no-cache
#activating the nobody user account
RUN chsh -s /bin/bash nobody
RUN set -eux; \
apt-get install -y gosu; \
rm -rf /var/lib/apt/lists/*; \
gosu nobody true
COPY docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["server"]
Here is the docker-entrypoint.sh
#!/bin/sh
export BASH_SHELL=$(cat /etc/shells | grep /bash)
export ASH_SHELL=$(cat /etc/shells | grep /ash)
#Setting available Shell to $SHELL_PROFILE
if [ -n "$BASH_SHELL" ];
then
SHELL_PROFILE=$BASH_SHELL
elif [ -n "$ASH_SHELL" ];
then
SHELL_PROFILE=$ASH_SHELL
else
SHELL_PROFILE=sh
fi
rm -f tmp/pids/puma.5070.pid tmp/pids/server.pid
XRAY_ADDRESS="$(ip route | grep default | cut -d ' ' -f 3):2000"
export AWS_XRAY_DAEMON_ADDRESS=$XRAY_ADDRESS
echo "export AWS_XRAY_DAEMON_ADDRESS=$XRAY_ADDRESS" >> /root/.bashrc
case "$*" in
shell)
exec $SHELL_PROFILE
;;
server)
# gosu command to run rails s -b 0.0.0.0 process as nobody user
gosu nobody:nogroup bundle exec rails s -b 0.0.0.0
;;
*)
exec "$@"
;;
esac
Don't bother installing gosu or another tool; just set your Docker image to run as the nobody user (or some other non-root user). Do this at the very end of your Dockerfile, where you otherwise declare the CMD.
# Don't install gosu or "activate a user"; but instead
USER nobody
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["rails", "server", "-b", "0.0.0.0"]
In turn, that means you can remove the gosu invocation from the entrypoint script. I might remove most of it and trim it down to
#!/bin/sh
# Clean up stale pid files
rm -f tmp/pids/*.pid
# (Should this environment variable be set via `docker run`?)
export AWS_XRAY_DAEMON_ADDRESS="$(ip route | grep default | cut -d ' ' -f 3):2000"
# Run whatever the provided command was, in a Bundler context
exec bundle exec "$@"
If you need an interactive shell to debug the image, you can docker run --rm -it the-image bash, which works on many images (provided they (a) honor CMD and (b) have bash installed); you don't need an artificial shell command and you don't need to detect what's installed in the (fixed) image.
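A note on the quoting in the entrypoint's final line: the exec "$@" idiom forwards the container command word-for-word, whereas $# is only the number of arguments. A minimal illustration (the command below is a stand-in for whatever CMD the container receives):

```shell
#!/bin/sh
# Simulate the positional parameters an entrypoint would receive.
set -- rails server -b 0.0.0.0

echo "count: $#"          # $# is just the number of arguments: 4
printf 'arg: %s\n' "$@"   # "$@" re-expands each argument as its own word
```

Dropping the quotes (or using $#) would mangle arguments containing spaces, which is why exec "$@" is the standard last line of an entrypoint script.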

Is it possible to map a user inside the docker container to an outside user?

I know that one can use the --user option with Docker to run a container as a certain user, but in my case, my Docker image has a user inside it, let us call that user manager. Now is it possible to map that user to a user on host? For example, if there is a user john on the host, can we map john to manager?
Yes, you can set the user from the host, but you should modify your Dockerfile a bit to deal with the run-time user.
FROM alpine:latest
# Override user name at build. If build-arg is not passed, will create user named `default_user`
ARG DOCKER_USER=default_user
# Create a group and user
RUN addgroup -S $DOCKER_USER && adduser -S $DOCKER_USER -G $DOCKER_USER
# Tell docker that all future commands should run as this user
USER $DOCKER_USER
Now, build the Docker image:
docker build --build-arg DOCKER_USER=$(whoami) -t docker_user .
The new user in Docker will be the host user:
docker run --rm docker_user ash -c "whoami"
Another way is to pass the host user ID and group ID without creating the user in the Dockerfile:
export UID=$(id -u)
export GID=$(id -g)
docker run -it \
--user $UID:$GID \
--workdir="/home/$USER" \
--volume="/etc/group:/etc/group:ro" \
--volume="/etc/passwd:/etc/passwd:ro" \
--volume="/etc/shadow:/etc/shadow:ro" \
alpine ash -c "whoami"
You can further read more about the user in docker here and here.
Another way is through an entrypoint.
Example
This example relies on gosu, which is present in recent Debian derivatives but not yet in Alpine 3.13 (it is in edge).
You could run this image as follows:
docker run --rm -it \
--env UID=$(id -u) \
--env GID=$(id -g) \
-v "$(pwd):$(pwd)" -w "$(pwd)" \
imagename
tree
.
├── Dockerfile
└── files/
└── entrypoint
Dockerfile
FROM ...
# [...]
ARG DOCKER_USER=default_user
RUN addgroup "$DOCKER_USER" \
&& adduser "$DOCKER_USER" -G "$DOCKER_USER"
RUN wget -O- https://github.com/tianon/gosu/releases/download/1.12/gosu-amd64 |\
install /dev/stdin /usr/local/bin/gosu
COPY files /
RUN chmod 0755 /entrypoint \
&& sed "s/\$DOCKER_USER/$DOCKER_USER/g" -i /entrypoint
ENTRYPOINT ["/entrypoint"]
files/entrypoint
#!/bin/sh
set -e
set -u
: "${UID:=0}"
: "${GID:=${UID}}"
if [ "$#" = 0 ]
then set -- "$(command -v bash 2>/dev/null || command -v sh)" -l
fi
if [ "$UID" != 0 ]
then
usermod -u "$UID" "$DOCKER_USER" 2>/dev/null && {
groupmod -g "$GID" "$DOCKER_USER" 2>/dev/null ||
usermod -a -G "$GID" "$DOCKER_USER"
}
set -- gosu "${UID}:${GID}" "$@"
fi
exec "$@"
Notes
UID is normally a read-only variable in bash, but it will work as expected if set by the docker --env flag
I chose gosu for its simplicity, but you could make it work with su or sudo; it will need more configuration, however
if you don't want to specify two --env switches, you could do something like --env user="$(id -u):$(id -g)" and, in the entrypoint, uid=${user%:*} gid=${user#*:}; note that at this point the UID variable will be read-only in bash, which is why I switched to lower-case; the rest of the adaptation is left to the reader
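The splitting mentioned in that last note can be sketched in POSIX shell. The value below is a stand-in for what --env user="$(id -u):$(id -g)" would deliver:

```shell
#!/bin/sh
# Split a "uid:gid" pair into its two halves using parameter expansion.
user="1000:100"
uid=${user%:*}   # strip the ':' and everything after it -> 1000
gid=${user#*:}   # strip everything up to and including the ':' -> 100
echo "uid=$uid gid=$gid"
```

These expansions are plain POSIX, so the entrypoint stays /bin/sh-compatible and avoids the read-only UID problem entirely.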
There is no simple solution that handles all use cases. Solving these problems is continuous work, a part of life in the containerized world.
There is no magical parameter that you could add to a docker exec or docker run invocation and reliably cause the containerized software to no longer run into permissions issues during operations on host-mapped volumes. Unless your mapped directories are chmod-0777-and-come-what-may (DON'T), you will be running into permissions issues and you will be solving them as you go, and this is the task you should try becoming efficient at, instead of trying to find a miracle once-and-forever solution that will never exist.

Starting a Docker container with a user different from the one on the host

I am trying to deploy an image on a Ubuntu server. The problem is I would like the container to have a user other than root. In other words, I would like to start the container under that user.
What I have tried:
I have successfully created a user in my container image.
I tried to start the container with the docker start command, which was unsuccessful.
I tried to create a new container with a user defined inside the Dockerfile; that was also unsuccessful.
root#juju_dev_server:/home/dev# sudo docker run -it --user dev d08d53c4d78b
docker: Error response from daemon: linux spec user: unable to find user dev: no matching entries in passwd file
Here is my Dockerfile:
FROM debian
RUN groupadd -g 61000 dev
RUN useradd -g 61000 -l -m -s /bin/false -u 61000 dev
USER dev
CMD ["bash"]
FROM java:8
EXPOSE 8080
ADD /target/juju-0.0.1.jar juju-0.0.1.jar
ENTRYPOINT ["java","-jar","juju-0.0.1.jar"]
Here's how I've done it. I use Alpine, not Ubuntu, but it should work fine:
Creating and running as a user called "developer"
Dockerfile
RUN /bin/bash -c "adduser -D -u 1000 developer"
RUN passwd -d developer
RUN chown -R developer /home/developer/.bash*
RUN echo "developer ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/developer
ENTRYPOINT ["/entrypoint.sh"]
CMD ["bash"]
entrypoint.sh
#!/bin/sh
# stuff I need running as root here. Then below runs a bash shell as "developer"
sudo -u developer -H bash -c "$@;"
I suppose you'll want to change your ENTRYPOINT to CMD or similar, or write it into your entrypoint.sh however you like to launch your java stuff.
The Dockerfile you show creates two images. The first one is a plain debian image with a non-root user. The second one ignores the first one and is a somewhat routine Java image.
You need to do these two steps in the same image. If I was going to write your Dockerfile it might look like
FROM java:8
EXPOSE 8080
# (Prefer COPY to ADD unless you explicitly want its
# auto-unpacking semantics.)
COPY /target/juju-0.0.1.jar juju-0.0.1.jar
# Set up a non-root user context, after COPYing content
# in. (Prevents the application from overwriting
# itself as a useful security measure.)
RUN groupadd -g 61000 app
RUN useradd -g 61000 -l -m -s /bin/false -u 61000 app
USER app
# Set the main container command. (Generally prefer
# CMD to ENTRYPOINT if you’re only using one; it makes
# both getting debugging shells and later adopting the
# pattern of an initializing entrypoint script easier.)
CMD ["java","-jar","juju-0.0.1.jar"]
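If you build that image, a quick sanity check is to run `id` as the image's configured user. This assumes a local Docker daemon and a hypothetical tag, so it is illustrative only:

```shell
docker build -t juju-app .
docker run --rm juju-app id -un
```

Because of the USER directive, the second command should report app rather than root.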

Why doesn't the cron service in Dockerfile run?

While searching for this issue I found that cron -f should start the service.
So I have:
RUN apt-get install -qq -y git cron
Next I have:
CMD cron -f && crontab -l > pullCron && echo "* * * * * git -C ${HOMEDIR} pull" >> pullCron && crontab pullCron && rm pullCron
My dockerfile deploys without errors but the cron doesn't run. What can I do to start the cron service with an added line?
PS:
I know that the git function in my cron should actually be a hook, but for me (and probably for others) this is about learning how to set crons with Docker :-)
PPS:
Complete Dockerfile (UPDATED):
RUN apt-get update && apt-get upgrade -y
RUN mkdir -p /var/log/supervisor
RUN apt-get install -qq -y nginx git supervisor cron wget
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN wget -O ./supervisord.conf https://raw.githubusercontent.com/..../supervisord.conf
RUN mv ./supervisord.conf /etc/supervisor/conf.d/supervisord.conf
RUN apt-get install software-properties-common -y && apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0x5a16e7281be7a449 && add-apt-repository 'deb http://dl.hhvm.com/ubuntu utopic main' && apt-get update && apt-get install hhvm -y
RUN cd ${HOMEDIR} && git clone ${GITDIR} && mv ./tybalt/* ./ && rm -r ./tybalt && git init
RUN echo "* * * * * 'cd ${HOMEDIR} && /usr/bin/git pull origin master'" >> pullCron && crontab pullCron && rm pullCron
EXPOSE 80
CMD ["/usr/bin/supervisord"]
PPPS:
Supervisord.conf:
[supervisord]
autostart=true
autorestart=true
nodaemon=true
[program:nginx]
command=/usr/sbin/nginx -c /etc/nginx/nginx.conf
[program:cron]
command = cron -f -L 15
autostart=true
autorestart=true
Having started crond with supervisor, your cron jobs should be executed. Here are troubleshooting steps you can take to make sure cron is running:
Is the cron daemon running in the container? Log in to the container with docker exec -ti CONTAINERID /bin/bash and run ps aux | grep cron to find out.
Is supervisord running?
In my setup for instance, the following supervisor configuration works without a problem. The image is ubuntu:14.04. I have CMD ["/usr/bin/supervisord"] in the Dockerfile.
[supervisord]
nodaemon=true
[program:crond]
command = /usr/sbin/cron
user = root
autostart = true
Try another simple cron job to find out whether the problem is your cron entry or the cron daemon. Add this when logged in to the container with crontab -e:
* * * * * echo "hi there" >> /tmp/test
Check the container logs for any further information on cron:
docker logs CONTAINERID | grep -i cron
These are just a few troubleshooting tips you can follow.
Cron is not running because only the last CMD takes effect (as @xuhdev said). It's documented here: https://docs.docker.com/reference/builder/#cmd.
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
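A contrived Dockerfile fragment (not from the poster's file) showing the rule:

```dockerfile
FROM ubuntu:14.04
# This CMD is silently discarded: it is overridden below.
CMD ["cron", "-f"]
# Only the last CMD in the Dockerfile takes effect.
CMD ["/usr/bin/supervisord"]
```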
If you want to have nginx and cron running in the same container, you will need to use some kind of supervisor (like supervisord or others) that will be the PID 1 process of your container and manage the child processes. I think this project should help: https://github.com/nbraquart/docker-nginx-php5-cron (it seems to do what you're trying to achieve).
Depending on what your cron is here for, there may be other solutions, like building a new image for each commit or each tag, etc.
I've used this with CentOS and it works:
CMD service crond start ; tail -f /var/log/cron
The rest of my Dockerfile just yum installs cronie and touches the /var/log/cron file so it will be there when the CMD runs.
On CentOS 7 this works for me:
[program:cron]
command=/usr/sbin/crond -n -s
user = root
autostart = true
stderr_logfile=/var/log/cron.err.log
stdout_logfile=/var/log/cron.log
-n is for foreground
-s is to log to stdout and stderr
In my case, it turned out I needed to run cron start at run time. I can't put it in my Dockerfile or docker-compose.yml, so I ended up placing it in the Makefile I use for deploy.
Something like:
task-name:
	# docker-compose down && docker-compose build && docker-compose up -d
	docker exec CONTAINERNAME /bin/bash -c "cron start"

Resources