Running a process as the nobody user with gosu - Linux

I am trying to run a process as the nobody user in Linux. It currently runs as root, but since the process doesn't require root access I want to run it as nobody with gosu. The problem is that even after activating the nobody user and running the process with it, "ps aux" shows that all processes are still being run by root. Do I need to do something more after activating the nobody user to make this work? The process I am trying to run as nobody is rails s -b 0.0.0.0.
Below is my Dockerfile:
FROM ruby:3.0.1
EXPOSE $PORT
WORKDIR /srv
COPY Gemfile Gemfile.lock /srv/
COPY . /srv
RUN apt-get update -qq && apt-get install -y build-essential iproute2 libpq-dev nodejs && apt-get clean && bundle install --no-cache
#activating the nobody user account
RUN chsh -s /bin/bash nobody
RUN set -eux; \
apt-get install -y gosu; \
rm -rf /var/lib/apt/lists/*; \
gosu nobody true
COPY docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["server"]
Here is the docker-entrypoint.sh
#!/bin/sh
export BASH_SHELL=$(cat /etc/shells | grep /bash)
export ASH_SHELL=$(cat /etc/shells | grep /ash)
#Setting available Shell to $SHELL_PROFILE
if [ -n "$BASH_SHELL" ];
then
SHELL_PROFILE=$BASH_SHELL
elif [ -n "$ASH_SHELL" ];
then
SHELL_PROFILE=$ASH_SHELL
else
SHELL_PROFILE=sh
fi
rm -f tmp/pids/puma.5070.pid tmp/pids/server.pid
XRAY_ADDRESS="$(ip route | grep default | cut -d ' ' -f 3):2000"
export AWS_XRAY_DAEMON_ADDRESS=$XRAY_ADDRESS
echo "export AWS_XRAY_DAEMON_ADDRESS=$XRAY_ADDRESS" >> /root/.bashrc
case "$*" in
shell)
exec $SHELL_PROFILE
;;
server)
# gosu command to run rails s -b 0.0.0.0 process as nobody user
gosu nobody:nogroup bundle exec rails s -b 0.0.0.0
;;
*)
exec "$@"
;;
esac

Don't bother installing gosu or another tool; just set your Docker image to run as the nobody user (or some other non-root user). Do this at the very end of your Dockerfile, where you otherwise declare the CMD.
# Don't install gosu or "activate a user"; but instead
USER nobody
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["rails", "server", "-b", "0.0.0.0"]
In turn, that means you can remove the gosu invocation from the entrypoint script. I might remove most of it and trim it down to
#!/bin/sh
# Clean up stale pid files
rm -f tmp/pids/*.pid
# (Should this environment variable be set via `docker run`?)
export AWS_XRAY_DAEMON_ADDRESS="$(ip route | grep default | cut -d ' ' -f 3):2000"
# Run whatever the provided command was, in a Bundler context
exec bundle exec "$@"
If you need an interactive shell to debug the image, you can docker run --rm -it the-image bash, which works on many images (provided they (a) honor CMD and (b) have bash installed); you don't need a special artificial shell command, and you don't need to detect which shell is installed in the (fixed) image.
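For reference, a hedged sketch of that debugging workflow (the image and container names below are placeholders):
# one-off shell in a fresh container, overriding the image's CMD
docker run --rm -it the-image bash
# or open a shell inside a container that is already running
docker exec -it the-container sh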

Related

How to avoid changing permissions on node_modules for a non-root user in docker

The issue with my current files is that in my entrypoint.sh file, I have to change the ownership of my entire project directory to the non-administrative user (chown -R node /node-servers). However, when a lot of npm packages are installed, this takes a lot of time. Is there a way to avoid having to chown the node_modules directory?
Background: The reason I create everything as root in the Dockerfile is because this way I can match the UID and GID of a developer's local user. This enables mounting volumes more easily. The downside is that I have to step-down from root in an entrypoint.sh file and ensure that the permissions of the entire project files have all been changed to the non-administrative user.
my docker file:
FROM node:10.24-alpine
# image already has user node and group node, both with id 1000; that's what we will use
# grab gosu for easy step-down from root
# https://github.com/tianon/gosu/releases
ENV GOSU_VERSION 1.14
RUN set -eux; \
\
apk add --no-cache --virtual .gosu-deps \
ca-certificates \
dpkg \
gnupg \
; \
\
dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')"; \
wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch"; \
wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc"; \
\
# verify the signature
export GNUPGHOME="$(mktemp -d)"; \
gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4; \
gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu; \
command -v gpgconf && gpgconf --kill all || :; \
rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc; \
\
# clean up fetch dependencies
apk del --no-network .gosu-deps; \
\
chmod +x /usr/local/bin/gosu; \
# verify that the binary works
gosu --version; \
gosu nobody true
COPY ./ /node-servers
# Setting the working directory
WORKDIR /node-servers
# Install app dependencies
# Install openssl
RUN apk add --update openssl ca-certificates && \
apk --no-cache add shadow && \
apk add libcap && \
npm install -g && \
chmod +x /node-servers/entrypoint.sh && \
setcap cap_net_bind_service=+ep /usr/local/bin/node
# Entrypoint used to load the environment and start the node server
#ENTRYPOINT ["/bin/sh"]
my entrypoint.sh
# In Prod, this may be configured with a GID already matching the container
# allowing the container to be run directly as Jenkins. In Dev, or on unknown
# environments, run the container as root to automatically correct docker
# group in container to match the docker.sock GID mounted from the host
set -x
if [ -z ${HOST_UID+x} ]; then
echo "HOST_UID not set, so we are not changing it"
else
echo "HOST_UID is set, so we are changing the container UID to match"
# get group of notadmin inside container
usermod -u ${HOST_UID} node
CUR_GID=`getent group node | cut -f3 -d: || true`
echo ${CUR_GID}
# if they don't match, adjust
if [ ! -z "$HOST_GID" -a "$HOST_GID" != "$CUR_GID" ]; then
groupmod -g ${HOST_GID} -o node
fi
if ! groups node | grep -q node; then
usermod -aG node node
fi
fi
# gosu drops from root to node user
set -- gosu node "$@"
[ -d "/node-servers" ] && chown -v -R node /node-servers
exec "$#"
You shouldn't need to run chown at all here. Leave the files owned by root (or by the host user). So long as they're world-readable the application will still be able to run; but if there's some sort of security issue or other bug, the application won't be able to accidentally overwrite its own source code.
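A quick sanity check along those lines (a sketch; the image name is a placeholder, the path comes from this question):
# list a source file as seen inside the image
docker run --rm my-image ls -l /node-servers/package.json
# expect something like: -rw-r--r-- 1 root root ... package.json
# the trailing r-- (world-readable) is all the non-root runtime user needs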
You can then go on to simplify this even further. For most purposes, users in Unix are identified by their numeric user ID; there isn't actually a requirement that the user be listed in /etc/passwd. If you don't need to change the node user ID and you don't need to chown files, then the entrypoint script reduces to "switch user IDs and run the main script"; but then Docker can provide an alternate user ID for you via the docker run -u option. That means you don't need to install gosu either, which is a lot of the Dockerfile content.
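To see the numeric-user-ID point in action, a tiny demonstration with the stock alpine image (uid 4242 is arbitrary and has no /etc/passwd entry):
docker run --rm -u 4242 alpine id
# prints something like: uid=4242 gid=0(root) groups=0(root)
# the uid has no name, but the process runs without issue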
All of this means you can reduce the Dockerfile to:
FROM node:10.24-alpine
# Install OS-level dependencies (before you COPY anything in)
RUN apk add openssl ca-certificates
# (Do not install gosu or its various dependencies)
# Set (and create) the working directory
WORKDIR /node-servers
# Copy language-level dependencies in
COPY package.json package-lock.json ./
RUN npm ci
# Copy the rest of the application in
# (make sure `node_modules` is in .dockerignore)
COPY . .
# (Do not call setcap here)
# Set the main command to run
USER node
CMD npm run start
Then when you run the container, you can use Docker options to specify the current user and additional capability.
docker run \
-d \ # in the background
-u $(id -u) \ # as an alternate user
-v "$PWD/data:/node-servers/data" \ # mounting a data directory
-p 8080:80 \ # publishing a port
my-image
Docker grants the NET_BIND_SERVICE capability by default so you don't need to specially set it.
This same permission setup will work if you're using bind mounts to overwrite the application code; again, without a chown call.
docker run ... \
-u $(id -u) \
-v "$PWD:/node-servers" \ # run the application from the host, not the image
-v /node-servers/node_modules \ # with libraries that will not be updated ever
...

Is it possible to map a user inside the docker container to an outside user?

I know that one can use the --user option with Docker to run a container as a certain user, but in my case, my Docker image has a user inside it, let us call that user manager. Now is it possible to map that user to a user on host? For example, if there is a user john on the host, can we map john to manager?
Yes, you can set the user from the host, but you need to modify your Dockerfile a bit to handle the run-time user.
FROM alpine:latest
# Override user name at build. If build-arg is not passed, will create user named `default_user`
ARG DOCKER_USER=default_user
# Create a group and user
RUN addgroup -S $DOCKER_USER && adduser -S $DOCKER_USER -G $DOCKER_USER
# Tell docker that all future commands should run as this user
USER $DOCKER_USER
Now, build the Docker image:
docker build --build-arg DOCKER_USER=$(whoami) -t docker_user .
The new user in Docker will be the host user.
docker run --rm docker_user ash -c "whoami"
Another way is to pass the host user ID and group ID without creating the user in the Dockerfile.
export UID=$(id -u)
export GID=$(id -g)
docker run -it \
--user $UID:$GID \
--workdir="/home/$USER" \
--volume="/etc/group:/etc/group:ro" \
--volume="/etc/passwd:/etc/passwd:ro" \
--volume="/etc/shadow:/etc/shadow:ro" \
alpine ash -c "whoami"
You can read further about users in Docker here and here.
Another way is through an entrypoint.
Example
This example relies on gosu which is present in recent Debian derivatives, not yet in Alpine 3.13 (but is in edge).
You could run this image as follows:
docker run --rm -it \
--env UID=$(id -u) \
--env GID=$(id -g) \
-v "$(pwd):$(pwd)" -w "$(pwd)" \
imagename
tree
.
├── Dockerfile
└── files/
└── entrypoint
Dockerfile
FROM ...
# [...]
ARG DOCKER_USER=default_user
RUN addgroup "$DOCKER_USER" \
&& adduser "$DOCKER_USER" -G "$DOCKER_USER"
RUN wget -O- https://github.com/tianon/gosu/releases/download/1.12/gosu-amd64 |\
install /dev/stdin /usr/local/bin/gosu
COPY files /
RUN chmod 0755 /entrypoint \
&& sed "s/\$DOCKER_USER/$DOCKER_USER/g" -i /entrypoint
ENTRYPOINT ["/entrypoint"]
files/entrypoint
#!/bin/sh
set -e
set -u
: "${UID:=0}"
: "${GID:=${UID}}"
if [ "$#" = 0 ]
then set -- "$(command -v bash 2>/dev/null || command -v sh)" -l
fi
if [ "$UID" != 0 ]
then
usermod -u "$UID" "$DOCKER_USER" 2>/dev/null && {
groupmod -g "$GID" "$DOCKER_USER" 2>/dev/null ||
usermod -a -G "$GID" "$DOCKER_USER"
}
set -- gosu "${UID}:${GID}" "${#}"
fi
exec "$#"
Notes
UID is normally a read-only variable in bash, but it will work as expected if set by the docker --env flag
I chose gosu for its simplicity, but you could make it work with su or sudo; it would need more configuration, however
if you don't want to specify two --env switches, you could do something like --env user="$(id -u):$(id -g)" and, in the entrypoint, uid=${user%:*} gid=${user#*:}; note that at this point the UID variable would be read-only in bash, which is why I switched to lower-case. The rest of the adaptation is left to the reader.
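A sketch of that single-variable adaptation (the lower-case variable names are mine):
docker run --rm -it --env user="$(id -u):$(id -g)" -v "$(pwd):$(pwd)" -w "$(pwd)" imagename
# and inside the entrypoint, split the value back into its parts:
uid=${user%:*}
gid=${user#*:}
set -- gosu "${uid}:${gid}" "$@"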
There is no simple solution that handles all use cases. Solving these problems is continuous work, a part of life in the containerized world.
There is no magical parameter that you could add to a docker exec or docker run invocation and reliably cause the containerized software to no longer run into permissions issues during operations on host-mapped volumes. Unless your mapped directories are chmod-0777-and-come-what-may (DON'T), you will be running into permissions issues and you will be solving them as you go, and this is the task you should try becoming efficient at, instead of trying to find a miracle once-and-forever solution that will never exist.

Set docker image username at container creation time?

I have an OpenSuse 42.3 docker image that I've configured to run a code. The image has a single user(other than root) called "myuser" that I create during the initial Image generation via the Dockerfile. I have three script files that generate a container from the image based on what operating system a user is on.
Question: Can the username "myuser" in the container be set to the username of the user that executes the container generation script?
My goal is to let a user pop into the container interactively and be able to run the code from within the container. The code is just a single binary that executes and has some IO, so I want the user's directory to be accessible from within the container so that they can navigate to a folder on their machine and run the code to generate output in their filesystem.
Below is what I have constructed so far. I tried setting the USER environment variable during the linux script's call to docker run, but that didn't change the user from "myuser" to say "bob" (the username on the host machine that started the container). The mounting of the directories seems to work fine. I'm not sure if it is even possible to achieve my goal.
Linux Container script:
username="$USER"
userID="$(id -u)"
groupID="$(id -g)"
home="${1:-$HOME}"
imageName="myImage:ImageTag"
containerName="version1Image"
docker run -it -d --name ${containerName} -u $userID:$groupID \
-e USER=${username} --workdir="/home/myuser" \
--volume="${home}:/home/myuser" ${imageName} /bin/bash \
Mac Container script:
username="$USER"
userID="$(id -u)"
groupID="$(id -g)"
home="${1:-$HOME}"
imageName="myImage:ImageTag"
containerName="version1Image"
docker run -it -d --name ${containerName} \
--workdir="/home/myuser" \
--v="${home}:/home/myuser" ${imageName} /bin/bash \
Windows Container script:
ECHO OFF
SET imageName="myImage:ImageTag"
SET containerName="version1Image"
docker run -it -d --name %containerName% --workdir="/home/myuser" -v="%USERPROFILE%:/home/myuser" %imageName% /bin/bash
echo "Container %containerName% was created."
echo "Run the ./startWindowsLociStream script to launch container"
The below code has been checked into https://github.com/bmitch3020/run-as-user.
I would handle this in an entrypoint.sh that checks the ownership of /home/myuser and updates the uid/gid of the user inside your container. It can look something like:
#!/bin/sh
set -x
# get uid/gid
USER_UID=`ls -nd /home/myuser | cut -f3 -d' '`
USER_GID=`ls -nd /home/myuser | cut -f4 -d' '`
# get the current uid/gid of myuser
CUR_UID=`getent passwd myuser | cut -f3 -d: || true`
CUR_GID=`getent group myuser | cut -f3 -d: || true`
# if they don't match, adjust
if [ ! -z "$USER_GID" -a "$USER_GID" != "$CUR_GID" ]; then
groupmod -g ${USER_GID} myuser
fi
if [ ! -z "$USER_UID" -a "$USER_UID" != "$CUR_UID" ]; then
usermod -u ${USER_UID} myuser
# fix other permissions
find / -uid ${CUR_UID} -mount -exec chown ${USER_UID}.${USER_GID} {} \;
fi
# drop access to myuser and run cmd
exec gosu myuser "$@"
And here's some lines from a relevant Dockerfile:
FROM debian:9
ARG GOSU_VERSION=1.10
# run as root, let the entrypoint drop back to myuser
USER root
# install prereq debian packages
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
apt-transport-https \
ca-certificates \
curl \
vim \
wget \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Install gosu
RUN dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')" \
&& wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch" \
&& chmod 755 /usr/local/bin/gosu \
&& gosu nobody true
RUN useradd -d /home/myuser -m myuser
WORKDIR /home/myuser
# entrypoint is used to update uid/gid and then run the users command
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD /bin/sh
Then to run it, you just need to mount /home/myuser as a volume and it will adjust permissions in the entrypoint. e.g.:
$ docker build -t run-as-user .
$ docker run -it --rm -v $(pwd):/home/myuser run-as-user /bin/bash
Inside that container you can run id and ls -l to see that you have access to /home/myuser files.
Usernames are not important. What is important are the uid and gid values.
User myuser inside your container will have a uid of 1000 (the first non-root user id). So when you start your container and look at the container process from the host machine, you will see that it is owned by whichever host user has a uid of 1000.
You can override this by specifying the user once you run your container using:
docker run --user 1001 ...
Therefore, if you want the user inside the container to be able to access files on the host machine owned by a user with uid 1005, say, just run the container using --user 1005.
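To make that concrete, a small sketch (the image name and host path are hypothetical):
# on the host: /data/results is owned by uid 1005
docker run --rm --user 1005 -v /data/results:/results my-image touch /results/output.txt
# the write succeeds because the container process runs with the same uid that owns the directory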
To better understand how users map between the container and host, take a look at this wonderful article: https://medium.com/@mccode/understanding-how-uid-and-gid-work-in-docker-containers-c37a01d01cf
First of all (https://docs.docker.com/engine/reference/builder/#arg):
Warning: It is not recommended to use build-time variables for passing
secrets like github keys, user credentials etc. Build-time variable
values are visible to any user of the image with the docker history
command.
But if you still need to do this, read https://docs.docker.com/engine/reference/builder/#arg:
A Dockerfile may include one or more ARG instructions. For example,
the following is a valid Dockerfile:
FROM busybox
ARG user1
ARG buildno
...
and https://docs.docker.com/engine/reference/builder/#user:
The USER instruction sets the user name (or UID) and optionally the
user group (or GID) to use when running the image and for any RUN, CMD
and ENTRYPOINT instructions that follow it in the Dockerfile.
USER <user>[:<group>] or
USER <UID>[:<GID>]
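Putting the two quoted instructions together, a minimal sketch (the default user name and the adduser -D flag are assumptions for a busybox/Alpine-style base):
FROM busybox
ARG user1=default_user
RUN adduser -D "$user1"
USER $user1
CMD ["id"]
Built with docker build --build-arg user1=$(whoami) -t arg-user-demo ., the resulting image runs as a user named after whoever built it.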

Unable to start node app with shell script

I created a Node.js Docker image.
With CMD node myapp.js at the end of my Dockerfile, it starts.
But when I use CMD /root/start.sh, it fails.
This is what my start.sh looks like:
#!/bin/bash
node myapp.js
And here are the important lines of my Dockerfile:
FROM debian:latest
COPY config/start.sh /root/start.sh
RUN chmod +x /root/start.sh
WORKDIR /my/app/directory
RUN apt-get install -y wget && \
wget https://nodejs.org/dist/latest-v5.x/node-v5.12.0-linux-x64.tar.gz && \
tar -C /usr/local --strip-components 1 -xzf node-v5.12.0-linux-x64.tar.gz && \
rm -f node-v5.12.0-linux-x64.tar.gz && \
ln -s /usr/bin/nodejs /usr/bin/node
# works:
CMD node myapp.js
# doesn't work:
CMD /root/start.sh
Using docker logs I get: standard_init_linux.go:175: exec user process caused "no such file or directory"
But I don't understand, because if I add RUN ls /root in my Dockerfile, I can see the file exists.
I also tried with full paths in my script:
#!/bin/bash
/usr/bin/node /my/app/directory/myapp.js
but nothing changed. So what could the problem be?
Use docker run --entrypoint="/bin/bash" -i your_image.
What you used is the shell form of the Dockerfile CMD. As described in the docs, the default shell is /bin/sh, not the /bin/bash you expect from line 1 of start.sh.
Or try using exec form, that is CMD ["/root/start.sh"].
The most common cause I've seen is creating start.sh on a Windows system and saving the file either with a different character encoding or with Windows line endings. /bin/bash^M is not the same as /bin/bash, but you won't see that trailing carriage return in a Windows editor. You also want to save the file in ASCII encoding, not one of the multi-byte UTF encodings.
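If you suspect that is what happened, a quick check and fix from a Linux shell (the path matches the COPY line in the Dockerfile above):
file config/start.sh                 # reports "... with CRLF line terminators" if affected
sed -i 's/\r$//' config/start.sh     # strip the carriage returns (dos2unix does the same)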

Why doesn't the cron service in Dockerfile run?

While searching for this issue I found that cron -f should start the service.
So I have:
RUN apt-get install -qq -y git cron
Next I have:
CMD cron -f && crontab -l > pullCron && echo "* * * * * git -C ${HOMEDIR} pull" >> pullCron && crontab pullCron && rm pullCron
My Dockerfile deploys without errors but the cron job doesn't run. What can I do to start the cron service with an added line?
PS:
I know that the git function in my cron should actually be a hook, but for me (and probably for others) this is about learning how to set crons with Docker :-)
PPS:
Complete Dockerfile (UPDATED):
RUN apt-get update && apt-get upgrade -y
RUN mkdir -p /var/log/supervisor
RUN apt-get install -qq -y nginx git supervisor cron wget
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN wget -O ./supervisord.conf https://raw.githubusercontent.com/..../supervisord.conf
RUN mv ./supervisord.conf /etc/supervisor/conf.d/supervisord.conf
RUN apt-get install software-properties-common -y && apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0x5a16e7281be7a449 && add-apt-repository 'deb http://dl.hhvm.com/ubuntu utopic main' && apt-get update && apt-get install hhvm -y
RUN cd ${HOMEDIR} && git clone ${GITDIR} && mv ./tybalt/* ./ && rm -r ./tybalt && git init
RUN echo "* * * * * 'cd ${HOMEDIR} && /usr/bin/git pull origin master'" >> pullCron && crontab pullCron && rm pullCron
EXPOSE 80
CMD ["/usr/bin/supervisord"]
PPPS:
Supervisord.conf:
[supervisord]
autostart=true
autorestart=true
nodaemon=true
[program:nginx]
command=/usr/sbin/nginx -c /etc/nginx/nginx.conf
[program:cron]
command = cron -f -L 15
autostart=true
autorestart=true
Once crond has been started by supervisord, your cron jobs should be executed. Here are some troubleshooting steps you can take to make sure cron is running:
Is the cron daemon running in the container? Log in to the container and run ps a | grep cron to find out. Use docker exec -ti CONTAINERID /bin/bash to log in to the container.
Is supervisord running?
In my setup for instance, the following supervisor configuration works without a problem. The image is ubuntu:14.04. I have CMD ["/usr/bin/supervisord"] in the Dockerfile.
[supervisord]
nodaemon=true
[program:crond]
command = /usr/sbin/cron
user = root
autostart = true
Try another simple cron job to find out whether the problem is your cron entry or the cron daemon. Add this when logged in to the container with crontab -e:
* * * * * echo "hi there" >> /tmp/test
Check the container logs for any further information on cron:
docker logs CONTAINERID | grep -i cron
These are just a few troubleshooting tips you can follow.
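The same checks can also be run from the host without logging in to the container (CONTAINERID is a placeholder):
docker exec -ti CONTAINERID ps aux | grep -i cron     # is the cron daemon running?
docker exec -ti CONTAINERID crontab -l                # is the crontab entry installed?
docker logs CONTAINERID | grep -i cron                # anything cron-related in the container logs?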
Cron is not running because only the last CMD overrides the first one (as @xuhdev said). It's documented here: https://docs.docker.com/reference/builder/#cmd.
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
If you want to have nginx and cron running in the same container, you will need to use some kind of supervisor (like supervisord or others) that will be the pid 1 process of your container and manage the child processes. I think this project should help: https://github.com/nbraquart/docker-nginx-php5-cron (it seems to do what you're trying to achieve).
Depending on what your cron is here for, there could be other solutions, like building a new image for each commit or each tag, etc.
I've used this with CentOS and it works:
CMD service crond start ; tail -f /var/log/cron
The rest of my Dockerfile just yum installs cronie and touches the /var/log/cron file so it will be there when the CMD runs.
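A sketch of what that Dockerfile might look like (the centos:6 tag and the yum cleanup are my assumptions; on CentOS 7 you would more likely run crond -n in the foreground, as the next answer shows):
FROM centos:6
RUN yum -y install cronie && yum clean all && touch /var/log/cron
# add your crontab entries here, e.g. COPY mycron /etc/cron.d/mycron
CMD service crond start ; tail -f /var/log/cron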
On centos 7 this works for me
[program:cron]
command=/usr/sbin/crond -n -s
user = root
autostart = true
stderr_logfile=/var/log/cron.err.log
stdout_logfile=/var/log/cron.log
-n is for foreground
-s is to log to stdout and stderr
In my case, it turned out I needed to run cron start at run time. I can't put it in my Dockerfile or docker-compose.yml, so I ended up placing it in the Makefile I use for deployment.
Something like:
task-name:
# docker-compose down && docker-compose build && docker-compose up -d
docker exec CONTAINERNAME /bin/bash -c cron start
