Create User and Group on Docker Alpine Linux with root privileges - linux

I have a Dockerfile on an Ubuntu server where I create a user in the www-data group with root privileges.
RUN useradd -G www-data,root -u userid (like 1000) -d /home/user (like www) user
RUN mkdir -p /home/user/.composer && \
    chown -R user:user /home/user
How can I do the same in Alpine Linux?
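For reference, a minimal sketch of the Alpine equivalent, assuming BusyBox's adduser/addgroup and reusing the example values from above (UID 1000, home /home/www, user www); alternatively, apk add shadow gives you the usual useradd syntax:
# BusyBox equivalents of useradd/groupadd; the addgroup guard covers images where www-data already exists
RUN addgroup -S www-data 2>/dev/null || true \
 && adduser -D -u 1000 -h /home/www -G www-data www \
 && addgroup www root \
 && mkdir -p /home/www/.composer \
 && chown -R www:www-data /home/www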

Related

Using SSH inside docker with correct file permissions?

There are a few posts on how to use Docker + SSH. There are also posts on how to edit files mounted in a docker container, such that editing them won't cause their ownership to change to root.
I'm trying to combine the 2 things, so I can SSH into a docker container and edit files without messing up their permissions.
To get the correct file permissions, I use:
- /etc/passwd:/etc/passwd:ro
- /etc/group:/etc/group:ro
in my docker-compose.yml and
docker compose -f commands/dev/docker-compose.yml run \
--service-ports \
--user $(id -u) \
develop \
bash
so that when I start the docker container, my user is the same user as my local computer.
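For reference, a minimal sketch of how those pieces fit together in commands/dev/docker-compose.yml (the image name and project mount are assumptions, not taken from the original file):
services:
  develop:
    image: my-dev-image              # assumed image name
    ports:
      - "9002:22"                    # host port 9002 -> container sshd, as described below
    volumes:
      - ../..:/workspace             # assumed project mount (relative to the compose file)
      - /etc/passwd:/etc/passwd:ro
      - /etc/group:/etc/group:ro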
However, this breaks my SSH setup inside the Docker container:
useradd -rm -d /home/ubuntu -s /bin/bash -g root -G sudo ubuntu
echo 'ubuntu:ubuntu' | chpasswd
# passwd -d ubuntu
apt install -y --no-install-recommends openssh-server vim-tiny sudo
# See: https://stackoverflow.com/questions/22886470/start-sshd-automatically-with-docker-container
sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
mkdir /var/run/sshd
bash -c 'install -m755 <(printf "#!/bin/sh\nexit 0") /usr/sbin/policy-rc.d'
ex +'%s/^#\(ListenAddress\)/\1/g' -scwq /etc/ssh/sshd_config
ex +'%s/^#\(HostKey .*ssh_host_.*_key\)/\1/g' -scwq /etc/ssh/sshd_config
RUNLEVEL=1 dpkg-reconfigure openssh-server
ssh-keygen -A -v
update-rc.d ssh defaults
# Configure sudo
ex +"%s/^%sudo.*$/%sudo ALL=(ALL:ALL) NOPASSWD:ALL/g" -scwq! /etc/sudoers
Here I'm creating a user called ubuntu with password ubuntu for SSH-ing. This lets me SSH in as ubuntu@localhost using the password ubuntu.
The issue is that by mounting the /etc/passwd file into my container, I erase the ubuntu user inside the container. This means when I try to ssh in with ssh -p 9002 ubuntu@localhost, the authentication fails (9002 is what I bind port 22 in the container to on the host).
Does anyone have a solution?
Here's a first pass answer.
I can use:
useradd -rm -d /home/yourusername -s /bin/bash -g root -G sudo yourusername
instead of
useradd -rm -d /home/ubuntu -s /bin/bash -g root -G sudo ubuntu
echo 'ubuntu:ubuntu' | chpasswd
Then I run the SSH server in the container with:
su root
/usr/sbin/sshd -D -o ListenAddress=0.0.0.0 -o PermitRootLogin=yes
I can ssh into the container as root (using the root password "root", which I set with RUN echo 'root:root' | chpasswd in the Dockerfile).
Then I can run su yourusername to switch to my user.
While this works, it is pretty annoying since I need to bake the user name into the Docker container.
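One way to avoid hard-coding the name is to pass it in as a build argument; a sketch, assuming the user-creation lines live in the Dockerfile:
# Take the SSH user name as a build arg instead of hard-coding "ubuntu"
ARG SSH_USER=ubuntu
RUN useradd -rm -d /home/${SSH_USER} -s /bin/bash -g root -G sudo ${SSH_USER} \
 && echo "${SSH_USER}:${SSH_USER}" | chpasswd
and build with something like docker build --build-arg SSH_USER=$(whoami) . so each machine bakes in its own user at build time instead of editing the Dockerfile by hand.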

Docker, why the user and group are different?

I created the following Dockerfile:
FROM nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
ENV CUDA_PATH /usr/local/cuda
ENV CUDA_INCLUDE_PATH /usr/local/cuda/include
ENV CUDA_LIBRARY_PATH /usr/local/cuda/lib64
RUN apt update -yq
RUN apt install -yq curl wget unzip git vim cmake zlib1g-dev g++ gcc sudo build-essential libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev libffi-dev openssh-server
RUN adduser --disabled-password --gecos '' docker && \
adduser docker sudo && \
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
RUN mkdir -p /.cache/pip
RUN mkdir -p /.local/share
RUN mkdir -p /.local/lib
RUN mkdir -p /.local/bin
RUN chown -R docker:docker /.cache/pip
RUN chown -R docker:docker /.local
RUN chown -R docker:docker /.local/lib
RUN chown -R docker:docker /.local/bin
# Configure SSHD.
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
RUN mkdir /var/run/sshd
RUN bash -c 'install -m755 <(printf "#!/bin/sh\nexit 0") /usr/sbin/policy-rc.d'
RUN ex +'%s/^#\(ListenAddress\)/\1/g' -scwq /etc/ssh/sshd_config
RUN ex +'%s/^#\(HostKey .*ssh_host_.*_key\)/\1/g' -scwq /etc/ssh/sshd_config
RUN RUNLEVEL=1 dpkg-reconfigure openssh-server
RUN ssh-keygen -A -v
RUN update-rc.d ssh defaults
RUN ln -s /lib/x86_64-linux-gnu/libc.so.6 /lib64/libc.so.6
RUN ln -s /lib/x86_64-linux-gnu/libc.so.6 /lib/libc.so.6
# Configure sudo.
RUN ex +"%s/^%sudo.*$/%sudo ALL=(ALL:ALL) NOPASSWD:ALL/g" -scwq! /etc/sudoers
USER docker
RUN ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
WORKDIR /home/docker/
RUN chmod a+rwx /home/docker/ && \
wget https://repo.anaconda.com/miniconda/Miniconda3-py37_4.10.3-Linux-x86_64.sh && \
bash Miniconda3-py37_4.10.3-Linux-x86_64.sh -b && rm Miniconda3-py37_4.10.3-Linux-x86_64.sh
ENV PATH /home/docker/.local/bin:$PATH
ENV PATH /home/docker/miniconda3/bin:$PATH
ENV which python3.7
RUN mkdir -p /home/docker/.local/
RUN chown -R docker:docker /home/docker/.local/
RUN chmod -R 777 /home/docker/.local/
RUN chmod -R 777 /.local/lib
RUN chmod -R 777 /.local/bin
RUN chmod -R 777 /.cache/pip/
RUN python3.7 -m pip install pip -U
RUN python3.7 -m pip install tensorflow-gpu==2.5.0 ray[rllib] gym[atari] torch==1.7.1 torchvision==0.8.2 scikit_learn==0.23.1 sacred==0.8.1 PyYAML==5.4.1 tensorboard_logger
# ENV PYTHONPATH "${PYTHONPATH}:/home/docker/.local/lib/python3.7/site-packages/"
RUN sudo ln -s $(which python3.7) /usr/bin/python
RUN ls $(python3.7 -c "import site; print(site.getsitepackages()[0])")
RUN python3.7 -m pip list
RUN python3.7 -m pip uninstall -y enum34
USER docker
RUN mkdir -p /home/docker/app
RUN chown -R docker:docker /home/docker/app
WORKDIR /home/docker/app
Then I built an image. After that, I ran a container from it:
NV_GPU=1 nvidia-docker run -i \
--name $name \
--user docker \
-v `pwd`:/home/docker/app \
-t MyImage:1.0 \
${@:2}
I used the user docker defined in the Dockerfile and mount current files to the workdir. However, it shows the docker user had no permission to create any files
PermissionError: [Errno 13] Permission denied
And the file in /home/docker/app
docker@109c5e6b269a:~/app$ ls -l
total 64
-rw-rw-r-- 1 1002 1003 11342 Oct 13 12:50 LICENSE
-rw-rw-r-- 1 1002 1003 4831 Oct 14 05:49 README.md
drwxrwxr-x 3 1002 1003 4096 Oct 14 08:12 docker
-rwxrw-r-- 1 1002 1003 225 Oct 14 08:36 run_train.sh
drwxrwxr-x 11 1002 1003 4096 Oct 14 03:46 src
drwxrwxr-x 4 1002 1003 4096 Oct 13 12:50 third-party
It shows the user and group are not docker. I tried to change the owner to docker, but that caused errors on my local file system.
How can I address this PermissionError issue?
Thank you.
You are mapping some directory (pwd) to a volume. The problem is that your local directory belongs to a user with UID=1002, but inside the container the user docker maps to a different UID (probably 1000).
One easy solution is to edit the Dockerfile to specify the UID when creating the user, so it matches your local directory.
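A sketch of that first option, assuming the host files are owned by UID 1002 and GID 1003 as the ls output above suggests (HOST_UID/HOST_GID are illustrative build-arg names):
# Create the docker user with the same UID/GID as the mounted directory's owner
ARG HOST_UID=1002
ARG HOST_GID=1003
RUN groupadd -g ${HOST_GID} docker && \
    adduser --disabled-password --gecos '' --uid ${HOST_UID} --gid ${HOST_GID} docker && \
    adduser docker sudo
Built with, e.g., docker build --build-arg HOST_UID=$(id -u) --build-arg HOST_GID=$(id -g) . so the IDs always match the checkout you are going to mount.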
If you want your image to be used by others, one good solution is to create an entrypoint script that modifies the user's UID at container creation time, based on an environment variable.
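A sketch of that second option (the HOST_UID variable name and entrypoint path are assumptions; the container must start as root so usermod can run):
#!/bin/sh
# entrypoint.sh -- remap the docker user to the UID passed at run time,
# e.g. docker run -e HOST_UID=$(id -u) ...
set -e
if [ -n "${HOST_UID}" ] && [ "${HOST_UID}" != "$(id -u docker)" ]; then
    usermod -u "${HOST_UID}" docker          # change the user's UID
    chown -R docker:docker /home/docker      # fix files created at build time
fi
exec runuser -u docker -- "$@"               # drop privileges (runuser ships with util-linux; gosu is an alternative)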

EFS mount on ECS Fargate - Read/write permissions denied for non root user

I have an ECS Fargate container running a Node.js application as a non-root user, with EFS mounted at /.user_data inside the container.
I followed this AWS tutorial; my setup is almost identical.
Here is the Dockerfile:
FROM node:12-buster-slim
RUN apt-get update && \
apt-get install -y build-essential \
wget \
python3 \
make \
gcc \
libc6-dev \
git
# delete old user
RUN userdel -r node
# Run as a non-root user
RUN addgroup "new_user_group" && \
useradd "new_user" --gid "new_user_group" \
--home-dir "/home/new_user"
RUN git clone https://github.com/test-app.git /home/new_user/app
RUN chown -R new_user:new_user_group /home/new_user
RUN mkdir -p /home/new_user/.user_data
RUN chown -R new_user:new_user_group /home/new_user/.user_data
RUN chmod -R 755 /home/new_user/
WORKDIR /home/new_user/app
RUN npm install
RUN npm run build
EXPOSE 1880
USER new_user
CMD [ "npm", "start" ]
When the Node app tries to write inside /.user_data I get a read/write permission denied error.
If I run the container as root, the app is able to read/write data.
I tried adding an access point to EFS with a UID and permissions, but that didn't help either.
Please note: The Dockerfile works fine on my local machine.
Update
Read this blog post - Developers guide to using Amazon EFS with Amazon ECS and AWS Fargate – Part 2 > POSIX permissions
Might be related to the IAM Policy that was assigned to the ECS Task's IAM Role.
"...if the AWS policies do not allow the ClientRootAccess action, your user is going to be squashed to a pre-defined UID:GID that is 65534:65534. From this point on, standard POSIX permissions apply: what this user can do is determined by the POSIX file system permissions. For example, a folder owned by any UID:GID other than 65534:65534 that has 666 (rw for owner and rw for everyone) will allow this reserved user to create a file. However, a folder owned by any UID:GID other than 65534:65534 that has 644 (rw for owner and r for everyone) will NOT allow this squashed user to create a file."
Make sure that your root-dir permissions are set to 777. This way any UID can read/write this dir.
To be less permissive, set the root-dir to 755, which is set by default, see the docs. This provides read-write-execute to the root user, read-execute to group and read-execute to all other users.
A user (UID) can't access a sub-directory if it has no access (read/search permission) on the parent directories.
You can test it easily with Docker; here's a quick example.
Create a Dockerfile -
FROM ubuntu:20.04
# Fetch values from ARGs that were declared at the top of this file
ARG APP_NAME
ARG APP_ARTIFACT_DIR
ARG APP_HOME_DIR="/app"
ARG APP_USER_NAME="appuser"
ARG APP_GROUP_ID="appgroup"
# Define workdir
ENV HOME="${APP_HOME_DIR}"
WORKDIR "${HOME}"
RUN apt-get update -y && apt-get install -y tree
# Define env vars
ENV PATH="${HOME}/.local/bin:${PATH}"
# Run as a non-root user
RUN addgroup "${APP_GROUP_ID}" && \
useradd "${APP_USER_NAME}" --gid "${APP_GROUP_ID}" --home-dir "${HOME}" && \
chown -R ${APP_USER_NAME} .
RUN mkdir -p rootdir && \
mkdir -p rootdir/subdir && \
touch rootdir/root.file rootdir/subdir/sub.file && \
chown -R root:root rootdir && \
chmod 600 rootdir rootdir/root.file && \
chmod -R 775 rootdir/subdir
You should play with chmod 600 and chmod -R 775: try different permission sets such as 777 and 644, and see if it makes sense.
Build an image, run a container, and test the permissions -
docker build -t boyfromnorth .
docker run --rm -it boyfromnorth bash
root@e0f043d9884c:~$ su appuser
$ ls -la
total 12
drwxr-xr-x 1 appuser root 4096 Jan 30 12:23 .
drwxr-xr-x 1 root root 4096 Jan 30 12:33 ..
drw------- 3 root root 4096 Jan 30 12:23 rootdir
$ ls rootdir
ls: cannot open directory 'rootdir': Permission denied

How to set up multiple wordpress sites on linux correctly?

I have set up a Linode to host a few clients' WordPress sites.
I added all sites to
/var/www/html/site1.com/public_html
/var/www/html/site2.com/public_html
/var/www/html/site3.com/public_html
and gave the www-data user permission:
sudo chown -R www-data:www-data /var/www/html/site1.com/public_html
sudo chown -R www-data:www-data /var/www/html/site2.com/public_html
sudo chown -R www-data:www-data /var/www/html/site3.com/public_html
Now the issue is that PHP can write across all of those folders, which means that if one site gets compromised, the attacker can reach the other sites' public_html via PHP.
What is the most secure way to set this up?
A step-by-step guide would help! Thank you so much.
You have to create a separate user for each website, e.g. site1, site2, site3.
Then assign that user and group to each website's files to get the isolation you expect:
sudo chown -R site1:site1 /var/www/html/site1.com/public_html
sudo chown -R site2:site2 /var/www/html/site2.com/public_html
sudo chown -R site3:site3 /var/www/html/site3.com/public_html
Add each user to the www-data group so that WordPress can run regular operations such as update, delete, and install:
sudo usermod -a -G www-data site1
sudo usermod -a -G www-data site2
sudo usermod -a -G www-data site3
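A quick sanity check (a sketch; the test file name is arbitrary): each site user should be able to write to its own public_html but not to the others'.
id site1                                                       # www-data should now appear among the groups
sudo -u site1 touch /var/www/html/site1.com/public_html/test   # should succeed
sudo -u site1 touch /var/www/html/site2.com/public_html/test   # should be denied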

Directory - all permissions that belong in its group?

I have created a group (let's call the user doing this admin):
sudo groupadd mygroup
switched to user test (from admin user):
sudo su - test
cd /home/test/
mkdir external
exit
cd /home/test/
sudo chgrp -R mygroup external
sudo usermod -a -G mygroup admin
sudo usermod -a -G mygroup test
sudo chmod -R g=rwx external
Now I do this:
cd external
mkdir something
mkdir: cannot create directory ‘something’: Permission denied
So how can I make it so that everyone in mygroup has the same access as the owner, i.e. so that I could create any other directory or file inside external, delete it, and so on (without using sudo)?
P.S.
ls -l:
drwxrwxr-x 2 test mygroup 4096 Spa 15 16:24 external
getent group mygroup:
ambulance:x:1002:admin,test
sudo groupadd mygroup
mkdir external
sudo chown -R root:mygroup external
sudo chmod -R 'g+w' external
sudo chmod -R 'g+s' external
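Note that membership added with usermod -a -G only takes effect in a new login session; in the shell where mkdir failed, you may need to refresh the group first, e.g.:
newgrp mygroup          # or log out and back in
id                      # mygroup should now be listed
cd external && mkdir something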
