File permission issue inside container for non-root user - Linux

I am extending the Docker image of a program from here: I want to change some configs and create my own Docker image. I have written the following Dockerfile, which replaces the server.xml file in the image:
FROM exoplatform/exo-community
COPY server.xml /opt/exo/conf
RUN chmod 777 /opt/exo/conf/server.xml
When I build the image and run a container from it, the program running in the container cannot access server.xml because its owner is the root user, and I see a permission denied error. I tried to change the permissions in the Dockerfile with the chmod command shown above, but I get an Operation not permitted error. The user of the running container is not root, so it cannot access the server.xml file owned by root. How can I resolve this issue?

If this is actually just a config file, I wouldn't build a custom image around it. Instead, use the docker run -v option to inject it at runtime:
docker run \
-v $PWD/server.xml:/opt/exo/conf/server.xml \
... \
exoplatform/exo-community
(You might still hit the same permission issues.)
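Before picking an approach, it can help to confirm which user the image actually runs as, since that user is the one that must be able to read the injected file. A quick check, assuming the image has already been pulled:
# Prints the USER the image was built with (empty output means root)
docker inspect -f '{{.Config.User}}' exoplatform/exo-community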
In your Dockerfile approach, the base image runs as an alternate USER, but a COPY instruction by default creates files owned by root. As of relatively recent Docker (18.03; if you're using Docker 1.13 on CentOS/RHEL 7, this won't work) you should be able to write:
COPY --chown=exo server.xml /opt/exo/conf
Or, if that won't work, you can explicitly switch to the root user and back:
COPY server.xml /opt/exo/conf
USER root
RUN chown exo /opt/exo/conf/server.xml
USER exo
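Either way, you can verify the result after building; a minimal sketch, where the my-exo tag is only an example:
docker build -t my-exo .
# List the file from a throwaway container to confirm its owner and mode
docker run --rm --entrypoint ls my-exo -l /opt/exo/conf/server.xml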

Related

Running Singularity container without root as different user inside

I am trying to run a Docker container with Singularity that executes a Windows program (msconvert from ProteoWizard) via Wine. I am using this image: https://hub.docker.com/r/chambm/pwiz-skyline-i-agree-to-the-vendor-licenses
The Wine installation inside the Docker container is owned by root. Running with Singularity on a standard user account with
singularity run --env WINEDEBUG=-all pwiz.sif wine msconvert
results in:
wine: '/wineprefix64' is not owned by you
So I must use the --fakeroot option to overcome this.
HOWEVER, on the remote HPC system where I wish to run the container, I cannot use --fakeroot because the system admins do not allow it:
FATAL: while extracting pwiz.sif: root filesystem extraction failed: extract command failed: FATAL: configuration disallows users from running sandbox based containers
My workaround is to add a %post section that changes the ownership of the directory to the user of the remote system, and then rebuild the Singularity image on a machine where I have root access. Simply:
Bootstrap: docker
From: chambm/pwiz-skyline-i-agree-to-the-vendor-licenses
%post
useradd -u 1234 user
chown -Rf --no-preserve-root user /wineprefix64
This works for me. But my question is: is there a better way to generalise this, so that any unprivileged user can run the container without having to re-build it manually for their username?
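For reference, the rebuild workflow described above looks roughly like this; it has to run on the machine with root access, and the file names are placeholders:
# Build a new image from the definition file shown above (needs root)
sudo singularity build pwiz-fixed.sif pwiz.def
# The unprivileged user it was rebuilt for can then run it directly
singularity run --env WINEDEBUG=-all pwiz-fixed.sif wine msconvert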

Docker Bind Mount: error while creating mount source path, permission denied

I am trying to run the NVIDIA PyTorch container nvcr.io/nvidia/pytorch:22.01-py3 on a Linux system, and I need to mount a directory of the host system (that I have R/W access to) in the container. I know that I need to use bind mounts, and here's what I'm trying:
I'm in a directory /home/<user>/test, which has the directory dir-to-mount. (The <user> account is mine).
docker run -it -v $(pwd)/dir-to-mount:/workspace/target nvcr.io/nvidia/pytorch:22.01-py3
Here's the error output:
docker: Error response from daemon: error while creating mount source path '/home/<user>/test/dir-to-mount': mkdir /home/<user>/test: permission denied.
ERRO[0000] error waiting for container: context canceled
As far as I know, Docker only needs to create the directory to be mounted if it doesn't exist already. From the Docker docs:
The file or directory does not need to exist on the Docker host already. It is created on demand if it does not yet exist.
I suspected that maybe the docker process does not have access; I tried chmod 777 with dir-to-mount as well as with test, but that made no difference.
So what's going wrong?
[Edit 1]
I am able to mount my user's entire home directory with the same command, but cannot mount other directories inside the home directory.
[Edit 2]
Here are the permissions:
home directory: drwx------
test: drwxrwxrwx
dir-to-mount: drwxrwxrwx
Run the command with sudo as:
sudo docker run -it -v $(pwd)/dir-to-mount:/workspace/target nvcr.io/nvidia/pytorch:22.01-py3
It appears that I can mount my home directory as a home directory (inside of /home/<username>), and this just works.
docker run -it -v $HOME:$HOME nvcr.io/nvidia/pytorch:22.01-py3
I don't know why the /home/<username> path is special; I've looked through the docs but could not find anything relevant.
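For digging further, namei prints the permission bits of every component of a path; each directory needs the execute (x) bit for the accessing user to traverse it. A diagnostic sketch using the paths from the question:
# Long listing of owner, group and mode for every path component
namei -l /home/<user>/test/dir-to-mount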

How to fix denied permission to access a directory if that directory was added during docker build?

I'm using the following Dockerfile to extend a Docker image:
FROM solr:6.6
COPY --chown=solr:solr ./services-core/search/A12Core /A12Core/
Note that solr:6.6 has a USER solr statement.
When running a container built from that Dockerfile I get a permission denied when trying to access a file or directory under /A12Core:
$ docker run -it 2f3c58f093e6 /bin/bash
solr@c091f0cd9127:/opt/solr$ cd /A12Core
solr@c091f0cd9127:/A12Core$ cd conf
bash: cd: conf: Permission denied
solr@c091f0cd9127:/A12Core$ ls -l
total 8
drw-r--r-- 3 solr solr 4096 Aug 31 14:21 conf
-rw-r--r-- 1 solr solr 158 Jun 28 14:25 core.properties
solr@c091f0cd9127:/A12Core$ whoami
solr
solr@c091f0cd9127:/A12Core$
What do I need to do in order to get permission to access the files and folders in the /A12Core directory?
Note that I'm running the docker build from Windows 7. My Docker version is 18.03.0-ce.
Your directory does not have execute permission:
drw-r--r-- 3 solr solr 4096 Aug 31 14:21 conf
Without that, you cannot cd into the directory according to Linux filesystem permissions. You can fix that in your host with a chmod:
chmod +x conf
If you perform this command inside your Dockerfile (with a RUN line), every file it modifies is copied into a new layer, so running it recursively could double the size of your image; hence the suggestion to fix it on your build host if possible.
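A sketch of that host-side fix, using chmod's capital X so that only directories (and files already executable for someone) gain the execute bit:
# Run on the build host, before docker build
chmod -R a+X ./services-core/search/A12Core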
I had another answer here, which was wrong (but still solved your problem :), but now I see the typo in your Dockerfile. Let's take a look at this line.
COPY --chown=solr:solr ./services-core/search/A12Core /A12Core/
The COPY command checks if the target path in the container exists. If not, it creates it, before copying.
It takes A12Core from ./services-core/search.
Then it checks if path /A12Core exists.
Obviously, it does not. So the command creates it, owned by root:root.
Lastly, it copies contents of A12Core to newly created A12Core.
In the end you have everything in /A12Core, but it belongs to root and you can't access it.
Since the solr Docker image already sets USER solr, the way to go would be:
RUN mkdir /A12Core
COPY ./services-core/search/A12Core /A12Core
As the docs say
The USER instruction sets the user name ... the user group ... for any RUN, CMD and ENTRYPOINT instructions that follow it in the Dockerfile.
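Note that on the asker's Docker version a plain COPY still writes the copied contents as root, so combining the pre-created directory with the --chown flag from the question keeps everything owned by solr; a minimal sketch:
FROM solr:6.6
# solr:6.6 sets USER solr, so mkdir runs as solr and the directory is solr-owned
RUN mkdir /A12Core
# --chown keeps the copied contents owned by solr as well
COPY --chown=solr:solr ./services-core/search/A12Core /A12Core/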

How to provide 777 default permission on all files within a given Linux folder

I need any files created in a specific Linux directory to have 777 permissions.
I would like all users to be able to read, write, and execute all files under this folder. What is the best way, or Linux command, to make that happen?
What I am doing is spinning off two separate containers, one for an Nginx server and one for a PHP-FPM app server, to host a Laravel 5.4 app.
Please consider the following scenario. I have a Docker application container A (PHP-FPM) which serves the web application files to Docker container B (Nginx). When I access the website, the web pages are delivered through the web container. Both containers are on the same network, and I share the volumes from my app container to my web container. But when the web container tries to read the files on the app container, I get an error like the one below:
The stream or file "/var/www/storage/logs/laravel.log" could not be
opened: failed to open stream: Permission denied
So I added RUN chmod -R 777 storage to my Dockerfile.
However, that does not solve the issue.
I also tried using SGID by adding one more line to my Dockerfile, RUN chmod -R ug+rwxs storage. Still, the permission issue remains.
On a separate note, the funny thing is that in my Docker container on my Mac this works without any issue (I mean without adding chmod -R 777 to the folder or using SGID in my Dockerfile). But when the same code runs on an Amazon Linux EC2 instance, the permission issue starts to occur.
So how do I fix this ?
The solution is to launch both containers with the same user, identified by the same uid. For instance, you can choose root or any other uid when running the container:
docker run --user root ...
Alternatively, you can switch to another user inside your Dockerfile by adding the following before the CMD or ENTRYPOINT:
USER root
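A minimal sketch of the shared-uid idea, with hypothetical image and volume names, and assuming the web server image is configured to listen on an unprivileged port so it can run as a non-root uid:
# Both containers run as the same uid, so files either one creates
# in the shared volume are accessible to the other
docker run -d --user 1000:1000 -v app-src:/var/www my-php-fpm-image
docker run -d --user 1000:1000 -v app-src:/var/www my-nginx-image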
I solved it by figuring out the user name under which the cache files are created when someone accesses the application URL. I then updated my Dockerfile to set SGID ownership for that user on the root of the app folder where all the source code resides (so all subfolders and files added later in whatever way, even at run-time, remain accessible from the web container for that user), and finally applied chmod 777 to the specific folders that need it.
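A sketch of that setgid approach with placeholder names, assuming www-data turned out to be the user creating the cache files:
# Hand group ownership of the tree to the discovered user's group
chgrp -R www-data /var/www/storage
# setgid on each directory: files created inside inherit the www-data group
find /var/www/storage -type d -exec chmod g+rwxs {} +
# Let the group read and write the existing files
chmod -R g+rw /var/www/storage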

Shared volume/file permissions/ownership (Docker)

I'm having a slightly annoying issue while using a Docker container (I'm on Ubuntu, so no virtualization like VMware or boot2docker). I've built my image and have a running container that has one shared (mounted) directory from my host, and one shared (mounted) file from my host. Here's the docker run command in full:
docker run -dit \
-p 80:80 \
--name my-container \
-v $(pwd)/components:/var/www/components \
-v $(pwd)/index.php:/var/www/index.php \
my-image
This works great, and both /components (and its contents) and the file are shared appropriately. However, when I want to make changes to either the directory (e.g. adding a new file or folder) or the mounted file (or any file in the directory), I'm unable to do so due to incorrect permissions. Running ls -lFh shows that the owner and group for the mounted items have been changed to libuuid:libuuid. Modifying either the file or the parent directory requires root permissions, which impedes my workflow (as I'm working from Sublime Text, not the terminal, I'm presented with a popup for admin privileges).
Why does this occur? How can I work around this / handle this properly? From Managing Data Volumes: Mount a Host File as a Data Volume:
Note: Many tools used to edit files including vi and sed --in-place may result in an inode change. Since Docker v1.1.0, this will produce an error such as “sed: cannot rename ./sedKdJ9Dy: Device or resource busy”. In the case where you want to edit the mounted file, it is often easiest to instead mount the parent directory.
This would seem to suggest that, rather than mounting /components and /index.php, I should mount the parent directory of both. Sounds great in theory, but based on the behavior of the -v option and how it interacts with a directory, it would seem that every file in my parent directory would be altered to be owned by libuuid:libuuid. Additionally, I have lots of things inside the parent directory that are not needed in the container: things like build tools, various files, some compressed folders, etc. Mounting the whole parent directory would seem to be wasteful.
Running chown user:group on /components and /index.php on my host machine allows me to work around this, and the files seem to continue to sync with the container. Is this something I'll need to do every time I run a container with mounted host volumes? I'm guessing there is a more efficient way to do this, and I'm just not finding an explanation for my particular use-case anywhere.
I am using this container for development of a module for another program, and have no desire to manage a data-only container - the only files that matter are from my host; persistence isn't needed elsewhere (like a database, etc).
Dockerfile and /setup script: created on pastebin to avoid an even longer post. Never expires.
After creating the image, this is the run command I'm using:
docker run -dit \
-p 80:80 \
--name my-container \
-v $(pwd)/components:/var/www/wp-content/plugins/my-plugin-directory/components \
-v $(pwd)/index.php:/var/www/wp-content/plugins/my-plugin-directory/index.php \
my-image
It looks like your chown -R nginx:nginx ... commands inside your container are changing the ownership bits on your files to be owned by libuuid on your host machine.
See Understanding user file ownership in docker: how to avoid changing permissions of linked volumes for a basic explanation on how file ownership bits work between your host and your docker containers.
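The underlying point is that ownership is stored on disk as a numeric uid, not a name: the uid that maps to nginx inside the container can map to libuuid on the host. A quick way to see both sides of the mapping (the uid 100 here is only an example):
# Inside the container: the numeric uid behind the nginx user
docker exec my-container id nginx
# On the host: which name, if any, that same uid resolves to
getent passwd 100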
