Permission error running a JAR as non-root user on Docker - linux

I wrote the following Dockerfile:
FROM openjdk:8-jdk-alpine
RUN addgroup -g 999 -S appgroup && \
adduser -u 999 -S appuser -G appgroup
USER appuser
COPY myapp.jar /usr/app/myapp.jar
ENTRYPOINT ["java","-jar","/usr/app/myapp.jar"]
When I run the image, I get the following error:
main ERROR Unable to create file logs/file.log java.io.IOException: Could not create directory /logs
Could you please tell me what I did wrong?

User appuser does not have write permission in the current working directory (/, you can verify this with docker run --rm openjdk:8-jdk-alpine pwd), and your application apparently needs to create a logs/ directory in that working directory.
Possible solutions:
set the WORKDIR in the Dockerfile to a location appuser can write to (/tmp for example, or /home/appuser if it exists)
pass the option --workdir /path/to/a/folder/where/appuser/can/write when issuing the docker run command
create the /logs folder with the right permissions (writable by appuser)
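As an illustration, here is a minimal revision of the Dockerfile above using the WORKDIR approach. The explicit mkdir/chown makes the sketch independent of whether adduser created a home directory; the path is just an example:

```dockerfile
FROM openjdk:8-jdk-alpine
RUN addgroup -g 999 -S appgroup && \
    adduser -u 999 -S appuser -G appgroup
COPY myapp.jar /usr/app/myapp.jar
# Give appuser a writable working directory so the app can create logs/
RUN mkdir -p /home/appuser && chown appuser:appgroup /home/appuser
USER appuser
WORKDIR /home/appuser
ENTRYPOINT ["java","-jar","/usr/app/myapp.jar"]
```

With this change the application creates logs/ under /home/appuser instead of under /.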

Related

How do I change the permissions in openshift container platform?

I am new to Openshift.
I have deployed an application in openshift. When I checked the logs, there is permission denied error for some files.
Now, I want to change the permissions on the container that is already deployed in OpenShift, but I am getting an "Operation not permitted" warning. How do I fix this?
This is on Linux, running the latest version of MongoDB. I have already tried executing RUN chmod 777 /path/to/directory in my Dockerfile, built the image, and pulled the same image in the YAML file that I deploy to OpenShift. However, when I check my docker container, it shows that the permissions are changed for that directory, but when I deploy, I get "permission denied" in my logs.
FROM node:10.16.3
RUN apt update && apt install -y openjdk-8-jdk
RUN useradd -ms /bin/bash admin
# Set the workdir /var/www/myapp
WORKDIR /var/www/myapp
# Copy the package.json to workdir
COPY package.json .
# Run npm install - install the npm dependencies
RUN npm install
RUN npm install sqlite3
# Copy application source
COPY . .
RUN chown -R admin:admin /var/www/myapp
RUN chmod 775 /var/www/myapp
USER admin
# Copy .env.docker to workdir/.env - use the docker env
#COPY .env.docker ./.env
# Expose application ports - (4300 - for API and 4301 - for front end)
# EXPOSE 4300 4301
EXPOSE 52000
CMD [ "npm", "start" ]
Although the permissions are changed when I run the image built from my Dockerfile, when I try to deploy to OpenShift, I get "permission denied" for some files in that directory.
By default, any container started in OpenShift gets a random user ID. Therefore, images not designed to handle such a random UID will fail with permission errors.
To get your image working, I recommend you read the following article from the latest OpenShift documentation: https://docs.openshift.com/container-platform/4.2/openshift_images/create-images.html
Here the short version that relates to your issue:
SUPPORT ARBITRARY USER IDS
By default, OpenShift Container Platform
runs containers using an arbitrarily assigned user ID. This provides
additional security against processes escaping the container due to a
container engine vulnerability and thereby achieving escalated
permissions on the host node.
For an image to support running as an arbitrary user, directories and
files that may be written to by processes in the image should be owned
by the root group and be read/writable by that group. Files to be
executed should also have group execute permissions.
Adding the following to your Dockerfile sets the directory and file
permissions to allow users in the root group to access them in the
built image:
RUN chgrp -R 0 /some/directory && \
chmod -R g=u /some/directory
Because the container user is always a member of the root group, the container user can read and write
these files. The root group does not have any special permissions
(unlike the root user) so there are no security concerns with this
arrangement. In addition, the processes running in the container must
not listen on privileged ports (ports below 1024), since they are not
running as a privileged user.
Adding this to my Dockerfile worked:
RUN chgrp -R 0 /some/directory && \
chmod -R g=u /some/directory
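The effect of g=u (copy the user permission bits onto the group) can be seen with plain chmod on any directory. The chgrp -R 0 part additionally hands the files to the root group, which requires root privileges and is omitted in this sketch:

```shell
# Demonstrate `chmod g=u`: the group permission bits are set to match
# the user (owner) permission bits.
mkdir -p /tmp/gu-demo
chmod 750 /tmp/gu-demo          # user=rwx, group=r-x, other=---
chmod g=u /tmp/gu-demo          # group bits now mirror the user bits
stat -c '%A' /tmp/gu-demo       # prints: drwxrwx---
```

After both commands, whatever the owner can do, any member of the owning group (the root group, in the OpenShift case) can do as well.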
The following changes in the Dockerfile solved the issue for me
#Add user
RUN useradd -rm -d /home/ubuntu -s /bin/bash -g root -G sudo -u 1001 ubuntu
#Add permissions
RUN chown -R ubuntu:root /home/ubuntu && \
chgrp -R 0 /home/ubuntu && \
chmod -R 775 /home/ubuntu
##Specify the user with UID
USER 1001
OpenShift runs the Pod with an arbitrary UID by default. You can check which UID is used in the pod with oc rsh <pod name> id. Your pod might not be running as the admin user you specified.
The container you built requires admin permissions, so you should grant the anyuid SCC to the default service account to avoid the permission error.
Further information about SCCs is here: Enable Container Images that Require Root.
e.g.> when the pod runs using the default service account:
$ oc adm policy add-scc-to-user anyuid -z default
$ oc delete pod <your pod name>
FYI, before building your image, reviewing General Container Image Guidelines
is helpful when building images to be used on OpenShift.

Error running docker as a system user without home directory /dev/null: not a directory

I am trying to run docker commands as a system user without home directory.
For Ex.
sudo -u mysystemuser docker login -u user -p password
However I keep getting errors like
WARNING: Error loading config file:/dev/null/.docker/config.json - stat /dev/null/.docker/config.json: not a directory
Error saving credentials: mkdir /dev/null: not a directory
Is there a way I can specify a config directory for the docker client? I have tried the --graph and -g options to point at the /tmp folder, but it has not worked.
You need to pass --config /path/to/dir at the beginning of your docker command line (before the run, build, or other subcommand you have docker perform). See the CLI documentation for more details: https://docs.docker.com/engine/reference/commandline/cli/
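For example (the paths here are hypothetical; the key point is that --config precedes the subcommand). The DOCKER_CONFIG environment variable, which the CLI also honors, is an equivalent alternative:

```shell
mkdir -p /tmp/docker-cfg
sudo -u mysystemuser docker --config /tmp/docker-cfg login -u user -p password
# equivalent, using the environment variable instead of the flag:
sudo -u mysystemuser DOCKER_CONFIG=/tmp/docker-cfg docker login -u user -p password
```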

File permissions in Docker Container

I have a simple tomcat docker file as below.
FROM tomcat:7
MAINTAINER ***
COPY sample.war /opt/tomcat/webapps
USER tomcat
RUN chown -R tomcat:tomcat /tmp/
CMD ["catalina.sh", "run"]
When I build the docker image, it gives me file permissions like:
tomcat tomcat Dec 11 08:04 tmp
My sample app creates a few directories inside the tmp folder. I expect the owner to be tomcat, but it looks like root. Since I am running the container as user tomcat, how do I make it use the tomcat user to create these?
I tried to build and run the Dockerfile you provided and ran into multiple errors. You are asking about the file permissions your "app" creates. So here is my starting point:
I assume that the "app" is catalina.sh, the process which creates your files in /tmp/. Since we are running the container as the user tomcat, it automatically creates the files with the corresponding permissions. Have a look at the code comments below for more information about what is going on here.
Dockerfile:
FROM httpd
# I switched the image since you would need to configure tomcat to make it run.
# httpd works out of the box without any application-specific config. That's why.
# Copy your 'app' / entrypoint into the image and make it executable.
COPY ./catalina.sh /
RUN chmod +x /catalina.sh
# Create a new user and group under which the container runs.
RUN groupadd -r tomcat \
&& useradd -r -g tomcat tomcat
# Change the initial file permissions so that our new user tomcat
# is allowed to access and fill the directory with files.
RUN chown -R tomcat:tomcat /tmp/
# Switch the user under which the container runs.
# (Note: comments must be on their own line; Dockerfile instructions
# like USER do not support trailing comments.)
USER tomcat
ENTRYPOINT ["/catalina.sh"]
catalina.sh:
#!/bin/bash
mkdir /tmp/test # create a dir
touch /tmp/yolo.txt # and some files
touch /tmp/test/yolo2.txt # to check the file permissions later on
while true; do echo "sleep"; sleep 2; done
# endless loop so that the container doesn't exit
To check the file permissions exec into the running container.
docker exec -ti <container_name> bash
Docker normally runs with root privileges, so I believe you must create a docker group (if it does not exist) and add the user (tomcat, in your case) to the docker group.
See below how to add user to docker group:
Create the docker group.
$ sudo groupadd docker
Add your user to the docker group.
$ sudo usermod -aG docker tomcat
Reference: https://docs.docker.com/engine/installation/linux/linux-postinstall/#manage-docker-as-a-non-root-user
With this, I believe your problem can be solved.

How to give non-root user in Docker container access to a volume mounted on the host

I am running my application in a Docker container as a non-root user, since that is one of the best practices. However, while running the container I mount a host volume into it with -v /some/folder:/some/folder. I do this because my application running inside the container needs to write files to the mounted host folder. But since I run my application as a non-root user, it doesn't have permission to write to that folder.
Question
Is it possible to give a nonroot user in a docker container access to the hosted volume?
If not, is my only option to run the process in docker container as root?
There's no magic solution here: permissions inside docker are managed the same way as permissions outside docker. You need to run the appropriate chown and chmod commands to change the permissions of the directory.
One solution is to have your container run as root, use an ENTRYPOINT script to make the appropriate permission changes, and then run your CMD as an unprivileged user. For example, put the following in entrypoint.sh:
#!/bin/sh
chown -R appuser:appgroup /path/to/volume
exec runuser -u appuser -- "$@"
This assumes you have the runuser command available. You can accomplish pretty much the same thing using sudo instead.
Use the above script by including an ENTRYPOINT directive in your Dockerfile:
FROM baseimage
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/bin/sh", "/entrypoint.sh"]
CMD ["/usr/bin/myapp"]
This will start the container with:
/bin/sh /entrypoint.sh /usr/bin/myapp
The entrypoint script will make the required permissions changes, then run /usr/bin/myapp as appuser.
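The "$@" forwarding is the key mechanism here. A minimal stand-alone sketch (with runuser omitted and a throwaway path, so it can be tried anywhere):

```shell
# Minimal demo of how an ENTRYPOINT script forwards its CMD arguments.
cat > /tmp/demo-entrypoint.sh <<'EOF'
#!/bin/sh
# ...permission fix-up (chown/chmod) would go here...
exec "$@"
EOF
chmod +x /tmp/demo-entrypoint.sh
/tmp/demo-entrypoint.sh echo "hello from CMD"   # prints: hello from CMD
```

Because of exec, the CMD process replaces the script rather than running as a child, so it receives signals (e.g. from docker stop) directly.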
This will throw an error if the host environment doesn't have appuser or appgroup, so it is better to use a user ID instead of a user name:
inside your container, run
appuser$ id
This will show:
uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
From host env, run:
mkdir -p /some/folder
chown -R 1000:1000 /some/folder
docker run -v /some/folder:/some/folder [your_container]
inside your container, check
ls -lh
to see the user and group names; if it's not root, it should work.
In the specific situation of using an image built from a custom Dockerfile, you can do the following (using example commands for a debian image):
FROM baseimage
...
RUN useradd --create-home appuser
USER appuser
RUN mkdir /home/appuser/my_volume
...
Then mount the volume using
-v /some/folder:/home/appuser/my_volume
Now appuser has write permissions to the volume as it's in their home directory. If the volume has to be mounted outside of their home directory, you can create it and assign appuser write permissions as an extra step within the Dockerfile.
I found it easiest to recursively apply Linux ACL (Access Control Lists) permissions on the host directory so the non root host user can access volume contents.
sudo setfacl -m u:$(id -u):rwx -R /some/folder
To check who has access to the folder:
getfacl /some/folder
Writing to the volume will create files and directories with the host user ID, which might not be desirable for host -> container transfers. Writing can be disabled by giving just the :rx permission instead of :rwx.
To enable writing, add a mirror ACL policy in the container allowing the container user ID full access to the volume parent path.
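For example, assuming the container process runs as UID 1000 (a hypothetical value; check with id -u inside the container), the host-side and container-side ACLs might look like this. setfacl comes from the acl package:

```shell
# Host side: grant the current host user read/write on the shared folder.
sudo setfacl -m u:$(id -u):rwx -R /some/folder
# Container side (run as root, e.g. in the Dockerfile or entrypoint):
# mirror the policy for the container user's UID.
setfacl -m u:1000:rwx -R /some/folder
```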

Docker: Permissions for a mounted volume

I have this image in which I mount a volume from the host
-v /Users/john/workspace:/data/workspace
Inside the container I'm using a user different from root. Now the problem is that it cannot create/modify files inside /data/workspace (permission denied). For now I have solved it by doing chmod -R 777 workspace on the host. What would be the Docker way to solve this?
This might be solved with user mapping (issue 7198), but that same thread includes:
Managed to solve this using the new dockerfile args. It doesn't require doing anything special after the container is built, so I thought I'd share. (Requires Docker 1.9)
In the Dockerfile:
# Setup User to match Host User, and give superuser permissions
ARG USER_ID=0
RUN useradd code_executor -u ${USER_ID} -g sudo
RUN echo 'code_executor ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER ${USER_ID}
Then to build:
docker build --build-arg USER_ID=$(id -u) .
That way, the user in the container can write in the mounted host volume (no chown/chmod required)
