File permissions in Docker container - Linux

I have a simple Tomcat Dockerfile, shown below.
FROM tomcat:7
MAINTAINER ***
COPY sample.war /opt/tomcat/webapps
USER tomcat
RUN chown -R tomcat:tomcat /tmp/
CMD ["catalina.sh", "run"]
When I create the Docker image, the file permissions come out as:
tomcat tomcat Dec 11 08:04 tmp
My sample app creates a few directories inside the /tmp folder. I expect their owner to be tomcat, but it looks like root. Since I am running the container as the user tomcat, how do I make it create these as the tomcat user?

I tried to build and run the Dockerfile you provided and ran into multiple errors. You are asking about the file permissions your "app" creates, so here is my starting point:
I assume that the "app" is catalina.sh, the process which creates your files in /tmp/. Since we are running the container as the user tomcat, it automatically creates the files with the corresponding ownership. Have a look at the code comments below for more information about what is going on here.
Dockerfile:
FROM httpd
# I switched the image since you would need to configure tomcat to make it run;
# httpd works out of the box without any application-specific config.
# Copy your 'app' / entrypoint into the image and make it executable
COPY ./catalina.sh /
RUN chmod +x /catalina.sh
# Create a new user and group under which the container runs
RUN groupadd -r tomcat \
 && useradd -r -g tomcat tomcat
# Change the initial file permissions so that our new user tomcat
# is allowed to access and fill the directory with files
RUN chown -R tomcat:tomcat /tmp/
# Switch the user under which the container runs
USER tomcat
ENTRYPOINT ["/catalina.sh"]
catalina.sh:
#!/bin/bash
mkdir /tmp/test # create a dir
touch /tmp/yolo.txt # and some files
touch /tmp/test/yolo2.txt # to check the file permissions later on
while true; do echo "sleep"; sleep 2; done
# endless loop so that the container doesn't exit
To check the file permissions, exec into the running container:
docker exec -ti <container_name> bash
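Once inside, a quick way to confirm the ownership of the files catalina.sh created (they should now list tomcat tomcat as owner and group):
ls -la /tmp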

Docker normally runs with root privileges, so I believe you must create a docker group (if it does not exist) and add the user (tomcat, in your case) to that group.
See below how to add a user to the docker group:
Create the docker group.
$ sudo groupadd docker
Add your user to the docker group.
$ sudo usermod -aG docker tomcat
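You can verify the membership afterwards (the change only takes effect with a new login session):
$ id tomcat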
Reference: https://docs.docker.com/engine/installation/linux/linux-postinstall/#manage-docker-as-a-non-root-user
With this, I believe your problem can be solved.

Related

Jenkins user gets changed based on job name, why?

I have installed Jenkins on Linux RHEL 7 and made the changes below to run all my Jenkins jobs as the root user rather than the jenkins user.
The changes I made:
Inside the file /etc/sysconfig/jenkins:
JENKINS_USER="root" (by default it was "jenkins")
Changed the ownership of the files using the commands below:
sudo chown -R root:root /var/lib/jenkins
sudo chown -R root:root /var/cache/jenkins
sudo chown -R root:root /var/log/jenkins
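For the new user to take effect, the Jenkins service has to be restarted; one way to confirm which user the daemon runs as afterwards (this assumes systemd on RHEL 7 and the usual jenkins.war process name):
sudo systemctl restart jenkins
ps -o user= -p "$(pgrep -f jenkins.war | head -n 1)"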
Now when I create a job named "snjiii", it runs correctly, i.e. as the root user:
Console Output
Started by user FSQA ADMIN
Running as SYSTEM
Building on master in workspace /var/lib/jenkins/workspace/snjiii
[snjiii] $ /bin/sh -xe /tmp/jenkins8170584152885597665.sh
+ whoami
root
Finished: SUCCESS
But when I create a different job named "snjiii_newjob_l1_rhl7", it runs as the jenkins user:
Console Output
Started by user FSQA ADMIN
Running as SYSTEM
Building remotely on Slave_1_14 (Redhat_linux) in workspace /var/lib/jenkins/workspace/snjiii_newjob_l1_rhl7
[snjiii_newjob_l1_rhl7] $ /bin/sh -xe /tmp/jenkins2597327522922634209.sh
+ whoami
jenkins
Finished: SUCCESS
I need to create jobs that run as the root user irrespective of the naming convention. How do I do that?
Even though I was running the job on my master, I created a master-slave configuration for it: I created a node with the master's credentials and used it as my slave, i.e. I restricted my project to run on a slave machine that is the master itself. Since the node then communicates using its SSH credentials, the job runs as the user I configured, and this resolved my issue.

How do I change the permissions in OpenShift Container Platform?

I am new to OpenShift.
I have deployed an application in OpenShift. When I checked the logs, there was a permission denied error for some files.
Now I want to change the permissions on the container that is already deployed in OpenShift, but I am getting an "Operation not permitted" warning. How do I fix this?
This is on Linux, running the latest version of MongoDB. I have already tried executing RUN chmod 777 /path/to/directory in my Dockerfile, created the image, and pulled the same image in the YAML file that I deploy to OpenShift. When I check the Docker container, it shows that the permissions are changed for that directory, but when I deploy, the logs still show "permission denied".
FROM node:10.16.3
RUN apt update && apt install -y openjdk-8-jdk
RUN useradd -ms /bin/bash admin
# Set the workdir /var/www/myapp
WORKDIR /var/www/myapp
# Copy the package.json to workdir
COPY package.json .
# Run npm install - install the npm dependencies
RUN npm install
RUN npm install sqlite3
# Copy application source
COPY . .
RUN chown -R admin:admin /var/www/myapp
RUN chmod 775 /var/www/myapp
USER admin
# Copy .env.docker to workdir/.env - use the docker env
#COPY .env.docker ./.env
# Expose application ports - (4300 - for API and 4301 - for front end)
# EXPOSE 4300 4301
EXPOSE 52000
CMD [ "npm", "start" ]
Although the permissions have changed when I run my Dockerfile, I still get permission denied for some files in that directory when I deploy to OpenShift.
By default any container started in OpenShift gets a random user ID. Therefore, images not designed to handle such a random UID will fail with permission errors.
In order to get your image working, I recommend you read the following article from the latest OpenShift documentation: https://docs.openshift.com/container-platform/4.2/openshift_images/create-images.html
Here is the short version that relates to your issue:
SUPPORT ARBITRARY USER IDS
By default, OpenShift Container Platform runs containers using an arbitrarily assigned user ID. This provides additional security against processes escaping the container due to a container engine vulnerability and thereby achieving escalated permissions on the host node.
For an image to support running as an arbitrary user, directories and files that may be written to by processes in the image should be owned by the root group and be read/writable by that group. Files to be executed should also have group execute permissions.
Adding the following to your Dockerfile sets the directory and file permissions to allow users in the root group to access them in the built image:
RUN chgrp -R 0 /some/directory && \
    chmod -R g=u /some/directory
Because the container user is always a member of the root group, the container user can read and write these files. The root group does not have any special permissions (unlike the root user), so there are no security concerns with this arrangement. In addition, the processes running in the container must not listen on privileged ports (ports below 1024), since they are not running as a privileged user.
Adding this to my Dockerfile worked:
RUN chgrp -R 0 /some/directory && \
chmod -R g=u /some/directory
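Applied to the Dockerfile from the question, the same idea would look like this (a sketch using the asker's /var/www/myapp application directory):
RUN chgrp -R 0 /var/www/myapp && \
    chmod -R g=u /var/www/myapp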
The following changes in the Dockerfile solved the issue for me:
#Add user
RUN useradd -rm -d /home/ubuntu -s /bin/bash -g root -G sudo -u 1001 ubuntu
#Add permissions
RUN chown -R ubuntu:root /home/ubuntu && \
chgrp -R 0 /home/ubuntu && \
chmod -R 775 /home/ubuntu
##Specify the user with UID
USER 1001
OpenShift runs the pod with an arbitrary UID by default. You can check which UID is used in the pod using oc rsh <pod name> id. Your pod might not be running as the admin user you specified.
The container you built requires admin permissions, so you should grant the anyuid SCC to the default service account to avoid the permission error.
Further information about SCCs is here: Enable Container Images that Require Root.
E.g., the default service account is used to run the pod by default:
$ oc adm policy add-scc-to-user anyuid -z default
$ oc delete pod <your pod name>
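Once the pod has been recreated, you can confirm that it now runs with the UID you expect:
$ oc rsh <your pod name> id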
FYI, before building your image, reading the General Container Image Guidelines is helpful for building images to be used on OpenShift.

Docker mount volume directory permissions to non-root user using Dockerfile

I have a Dockerfile with an entrypoint that starts a Java app jar in a shell process. I have set USER nobody in the Dockerfile before the entrypoint to make sure the entrypoint shell is executed by the nobody user and not root. The task definition for the container on AWS ECS takes care of the volume and mount parameters. So ideally at container run time /var/log/applogs on the host is mounted onto the container at the same path. But because the user context is switched from root to nobody, the nobody user running the container (executing the entrypoint) doesn't have write permission to the /var/log/applogs/ directory in the container, so the app is unable to create its log file there.
What I see is that the host EC2 directory /var/log/applogs gets mounted just fine into the container run by the nobody user, but since nobody is missing write permissions on this path inside the container, the app is unable to write its log there.
What is the correct way to fix this either on the dockerfile or something else?
You can try this...
Remove the USER nobody entry from the Dockerfile and change your entrypoint into the following script:
#!/bin/bash
# change ownership to user=nobody, group=nobody
chown -R nobody:nobody /var/log/applogs
# open with sudo a new shell as user nobody and call your application
sudo -u nobody /bin/bash -c "java -jar /path/to/your/application.jar"
# alternative
# open with su a new shell as user nobody and call your application
# su -s /bin/bash -c "java -jar /path/to/your/application.jar" nobody
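A minimal Dockerfile sketch wiring this in, assuming the script above is saved as entrypoint.sh and using a placeholder Java base image; note that there is deliberately no USER instruction, so the script starts as root and the chown can succeed before privileges are dropped:
# placeholder base image with a Java runtime
FROM openjdk:11-jre
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# no USER instruction: the entrypoint runs as root, then drops to nobody via sudo/su
ENTRYPOINT ["/entrypoint.sh"]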

How to give non-root user in Docker container access to a volume mounted on the host

I am running my application in a Docker container as a non-root user. I did this since it is one of the best practices. However, while running the container I mount a host volume into it with -v /some/folder:/some/folder. I am doing this because my application running inside the Docker container needs to write files to the mounted host folder. But since I am running my application as a non-root user, it doesn't have permission to write to that folder.
Question
Is it possible to give a non-root user in a Docker container access to the mounted host volume?
If not, is my only option to run the process in the Docker container as root?
There's no magic solution here: permissions inside Docker are managed the same as permissions without Docker. You need to run the appropriate chown and chmod commands to change the permissions of the directory.
One solution is to have your container start as root, use an ENTRYPOINT script to make the appropriate permission changes, and then run your CMD as an unprivileged user. For example, put the following in entrypoint.sh:
#!/bin/sh
# fix ownership of the mounted volume, then run the given command as appuser
chown -R appuser:appgroup /path/to/volume
exec runuser -u appuser -- "$@"
This assumes you have the runuser command available. You can accomplish pretty much the same thing using sudo instead.
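For reference, the sudo-based variant of that last line might look like this (assuming sudo is installed in the image):
exec sudo -u appuser "$@"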
Use the above script by including an ENTRYPOINT directive in your Dockerfile:
FROM baseimage
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/bin/sh", "/entrypoint.sh"]
CMD ["/usr/bin/myapp"]
This will start the container with:
/bin/sh /entrypoint.sh /usr/bin/myapp
The entrypoint script will make the required permissions changes, then run /usr/bin/myapp as appuser.
This will throw an error if the host environment doesn't have appuser or appgroup, so it is better to use a user ID instead of a user name.
Inside your container, run:
appuser$ id
This will show:
uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
From the host environment, run:
mkdir -p /some/folder
chown -R 1000:1000 /some/folder
docker run -v /some/folder:/some/folder [your_container]
Inside your container, check with
ls -lh
to see the user and group names; if they're not root, it should work.
In the specific situation of using an image built from a custom Dockerfile, you can do the following (using example commands for a Debian image):
FROM baseimage
...
RUN useradd --create-home appuser
USER appuser
RUN mkdir /home/appuser/my_volume
...
Then mount the volume using
-v /some/folder:/home/appuser/my_volume
Now appuser has write permissions to the volume as it's in their home directory. If the volume has to be mounted outside of their home directory, you can create it and assign appuser write permissions as an extra step within the Dockerfile, as sketched below.
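A minimal sketch of that extra step, placed after the useradd and before the USER appuser line so that it still runs as root (/some/folder stands in for the mount point from the question):
RUN mkdir -p /some/folder && chown appuser:appuser /some/folder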
I found it easiest to recursively apply Linux ACL (Access Control List) permissions on the host directory so the non-root host user can access the volume contents.
sudo setfacl -m u:$(id -u):rwx -R /some/folder
To check who has access to the folder:
getfacl /some/folder
Writing to the volume will create files and directories with the host user ID, which might not be desirable for host -> container transfer. Writing can be disabled by giving just :rx permission instead of :rwx.
To enable writing, add a mirror ACL policy in the container allowing the container user ID full access to the volume's parent path, as sketched below.
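A minimal sketch of such a mirror policy, run inside the container (this assumes the acl tools are installed there and that the container user has UID 1000):
setfacl -m u:1000:rwx -R /some/folder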

Docker: Permissions for a mounted volume

I have this image in which I mount a volume from the host:
-v /Users/john/workspace:/data/workspace
Inside the container I'm using a user different from root. Now the problem is that it cannot create/modify files inside /data/workspace (permission denied). For now I have solved it by running chmod -R 777 workspace on the host. What would be the Docker way to solve this?
This might be solved with user mapping (issue 7198), but that same thread includes:
Managed to solve this using the new Dockerfile ARGs. It doesn't require doing anything special after the container is built, so I thought I'd share. (Requires Docker 1.9)
In the Dockerfile:
# Setup User to match Host User, and give superuser permissions
ARG USER_ID=0
RUN useradd code_executor -u ${USER_ID} -g sudo
RUN echo 'code_executor ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER ${USER_ID}
Then to build:
docker build --build-arg USER_ID=$(id -u) .
That way, the user in the container can write to the mounted host volume (no chown/chmod required).
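You can then run it with the mount from the question (the image name here is a placeholder):
docker run -v /Users/john/workspace:/data/workspace <image>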
