Automatically changing the docker container file permissions in a directory in Linux

We have a docker container running in Linux VMs. This container writes its logs to a directory inside the container.
Container log directory - /opt/log/
This directory is volume-mounted to the host machine so that all the log files are also available on the host.
Host directory - /var/log/
Here we see the container creating the log files with 600 (-rw-------+) permissions; no group read permission is assigned to these files.
The same permissions are reflected in the host directory. We need group read permission (640, -rw-r-----+) added automatically to all files created in this directory so that other logging agents can read them.
I have also tried setting an ACL on the host to add this permission, but it is not applied to the files created inside this directory:
setfacl -Rdm g::r-- /var/log/
Is there a way to add group read permission automatically for all files created in this host directory?
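
One caveat worth knowing about the ACL route: when a file is created in a directory carrying a default ACL, the ACL mask is derived from the group bits of the mode the creating process requests, so files the container opens with mode 600 end up with an empty mask that hides any group entry. A hedged sketch of a named-group default ACL anyway, with "logreaders" as a hypothetical group name:

# give an existing group read access to current files and directories ...
setfacl -Rm g:logreaders:rX /var/log/
# ... and make it the default for files created later
setfacl -Rdm g:logreaders:rX /var/log/
# caveat: a file created with mode 600 still gets mask ---, so its effective
# group permission stays empty until the writing process requests group bits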

From the following article,
https://dille.name/blog/2018/07/16/handling-file-permissions-when-writing-to-volumes-from-docker-containers/
there is a parameter to set the user ID and the group ID when starting the container, for example:
docker run -it --rm --volume $(pwd):/source --workdir /source --user $(id -u):$(id -g) ubuntu
This controls which user and group the container's process runs as, and therefore the ownership of the files it creates.
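
Building on that, one hedged way to get 640 log files from the container itself is to combine --user with a relaxed umask. This sketch assumes the image's entrypoint can be wrapped in a shell; the image name and start script are placeholders:

# run as a fixed UID:GID and set umask 027 so newly created files come out 640
docker run -d \
  --volume /var/log:/opt/log \
  --user "$(id -u):$(id -g)" \
  --entrypoint sh \
  my-logging-image \
  -c 'umask 027 && exec /opt/app/start.sh'

Note that a umask can only remove permission bits: if the application explicitly opens its log files with mode 600, the umask cannot add group read, and the application itself would have to be reconfigured.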

Related

Permissions between mounted host volume inside container

I have a container with a host directory mounted inside it.
On the host I am running a cronjob which constantly creates new files and directories under the mounted directory.
In the container, Apache serves files from the mounted directory.
Because of the cronjob that creates new files/directories, the site running under the Apache service cannot be loaded due to a permissions issue.
On the host, a different user runs the cronjob, which gives the files/folders the following permissions:
Owner + group are set to that user on the host.
All the files/folders are created with 644 permissions.
Obviously, only if I manually give 777 permissions to the files/folders does the site load correctly without any error.
I have multiple questions:
Is it possible to share groups between HOST and CONTAINERS after creating an image, while the container is running?
Any other suggestions for handling these particular folder/file permissions between the HOST and CONTAINER?
Thanks!
I have tried the following steps:
Setting a default group on the main directory inside the CONTAINER.
Setting default permissions under the main directory inside the CONTAINER.
Since the HOST group does not exist on the CONTAINER, I created a new group inside the CONTAINER using the same GID and added the APACHE user as a member with the following permissions:
*) Write
*) Read
*) Execute
All of the above did not fully solve my requirements.
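
To the first question: one hedged option that avoids rebuilding the image is docker run's --group-add flag, which attaches a supplementary group to the container process by GID. A sketch, with the host path and image name as placeholders:

# read the GID of the group that owns the mounted directory on the host
HOST_GID=$(stat -c '%g' /path/on/host)
# start the container with that GID as a supplementary group
docker run -d \
  --group-add "$HOST_GID" \
  -v /path/on/host:/var/www/html \
  my-apache-image

One more thing worth checking: 644 on a directory is a problem in itself, because a directory needs the execute bit to be traversable, so even its own group cannot enter a 644 directory.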

Docker Bind Mount: error while creating mount source path, permission denied

I am trying to run the NVIDIA PyTorch container nvcr.io/nvidia/pytorch:22.01-py3 on a Linux system, and I need to mount a directory of the host system (that I have R/W access to) in the container. I know that I need to use bind mounts, and here's what I'm trying:
I'm in a directory /home/<user>/test, which has the directory dir-to-mount. (The <user> account is mine).
docker run -it -v $(pwd)/dir-to-mount:/workspace/target nvcr.io/nvidia/pytorch:22.01-py3
Here's the error output:
docker: Error response from daemon: error while creating mount source path '/home/<user>/test/dir-to-mount': mkdir /home/<user>/test: permission denied.
ERRO[0000] error waiting for container: context canceled
As far as I know, Docker only needs to create the directory to be mounted if it doesn't already exist. From the Docker docs:
The file or directory does not need to exist on the Docker host already. It is created on demand if it does not yet exist.
I suspected that maybe the docker process does not have access; I tried chmod 777 with dir-to-mount as well as with test, but that made no difference.
So what's going wrong?
[Edit 1]
I am able to mount my user's entire home directory with the same command, but cannot mount other directories inside the home directory.
[Edit 2]
Here are the permissions:
home directory: drwx------
test: drwxrwxrwx
dir-to-mount: drwxrwxrwx
Run the command with sudo as:
sudo docker run -it -v $(pwd)/dir-to-mount:/workspace/target nvcr.io/nvidia/pytorch:22.01-py3
It appears that I can mount my home directory as a home directory (inside of /home/<username>), and this just works.
docker run -it -v $HOME:$HOME nvcr.io/nvidia/pytorch:22.01-py3
I don't know why the /home/<username> path is special, I've tried looking through the docs but I could not find anything relevant.
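
A hedged way to narrow this down, given that sudo works and a $HOME mount works (the paths mirror the ones above; adjust as needed):

# compare permissions along the path the daemon has to traverse
ls -ld "$HOME" "$HOME/test" "$HOME/test/dir-to-mount"
# check whether the non-sudo client talks to a different daemon
docker context ls
# also check how docker was installed; e.g. a snap-confined daemon can be
# limited in which host paths it may touch
snap list docker 2>/dev/null || true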

File permission issue inside container for non-root user

I am extending a docker image of a program from here, and I want to change some configs and create my own docker image. I have written a Dockerfile as follows, replacing the server.xml file in the image:
FROM exoplatform/exo-community
COPY server.xml /opt/exo/conf
RUN chmod 777 /opt/exo/conf/server.xml
When I create the docker image and run an instance from it, the program running in the container cannot access server.xml, because its owner is the root user, and I see a permission denied error. I tried to change the permissions in the Dockerfile with the chmod command (as above), but I get an Operation not permitted error. The user of the running container is not root, so it cannot access the server.xml file owned by root. How can I resolve this issue?
If this is actually just a config file, I wouldn't build a custom image around it. Instead, use the docker run -v option to inject it at runtime
docker run \
-v $PWD/server.xml:/opt/exo/conf/server.xml \
... \
exoplatform/exo-community
(You might still hit the same permission issues.)
In your Dockerfile approach, the base image runs as an alternate USER but a COPY instruction by default makes files owned by root. As of relatively recent Docker (18.03; if you're using Docker 1.13 on CentOS/RHEL 7 this won't work) you should be able to
COPY --chown=exo server.xml /opt/exo/conf
Or if that won't work, you can explicitly switch to the root user and back
COPY server.xml /opt/exo/conf
USER root
RUN chown exo /opt/exo/conf/server.xml
USER exo
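
If it helps, a quick hedged check that the ownership came out as intended (the image tag is a placeholder):

docker build -t my-exo .
# override the entrypoint so we just list the file and exit
docker run --rm --entrypoint ls my-exo -l /opt/exo/conf/server.xml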

Docker: directory mapped in a volume is not created with the same user as on the host

I'm running a docker container on my server.
On my server (host) I have this folder: /opt/myapp/myFolder
where myFolder has 755 permissions and myuser:mygroup ownership.
I'm using docker-compose to run my container, and I'm mounting that same folder as a volume:
mycontainer:
...
volumes:
- /opt/myapp/myFolder:/opt/myapp/myFolder
...
The problem is that inside my container, the directory "myFolder" keeps the same host permissions (755) but not the same ownership;
the ownership shows up as 65534:65534.
This results in permission denied errors in some other processing inside this folder.
Normally, "myFolder" should keep the same host ownership inside the container.
Note: the user myuser and the group mygroup do exist inside the container.
Suggestions?
Docker doesn't create users and groups to match the mounted folder's ownership.
You can add the user inside your container to the folder's group, using the group id (GID).
Check out the "Docker and file system permissions" article.
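
A hedged sketch of that suggestion in docker-compose terms, assuming the host folder's group has GID 1002 (check with stat -c '%g' /opt/myapp/myFolder) and that no user-namespace remapping is in play:

mycontainer:
  ...
  group_add:
    - "1002"   # placeholder: the GID of mygroup on the host
  volumes:
    - /opt/myapp/myFolder:/opt/myapp/myFolder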

How to provide 777 default permission on all files within a given Linux folder

I need any files created in a specific Linux directory to have 777 permissions.
I would like all users to be able to read, write, and execute all files under this folder. So what is the best way or Linux command to make that happen?
What I am doing is spinning up two separate containers, one for the Nginx server and one for the PHP-FPM app server, to host a Laravel 5.4 app.
Please consider the following scenario. I have a docker application container A (PHP-FPM) which is used to serve the web application files to docker container B (Nginx). When I access the website, the web pages are delivered through the web container. Both containers are on the same network, and I share the volumes from my app container to my web container. But when the web container tries to read the files on the app container, I get an error like the one below:
The stream or file "/var/www/storage/logs/laravel.log" could not be
opened: failed to open stream: Permission denied
So I added RUN chmod -R 777 storage to my dockerfile.
However, it does not solve the issue.
I also tried using SGID by adding one more line to my dockerfile, RUN chmod -R ug+rwxs storage. Still, it does not solve the permission issue.
On a separate note, the funny thing is that on my Mac this works without any issue (that is, without adding chmod -R 777 to the folder or using SGID on a folder in my docker file). But when the same code runs on a Linux EC2 instance (Amazon Linux AMI EC2), the permission issues start to occur.
So how do I fix this?
The solution is to launch both containers using the same user identified by the same uid. For instance you can choose root or any uid when running the container:
docker run --user root ...
Alternatively, you can switch to another user, before startup, inside your Dockerfile by adding the following before the CMD or ENTRYPOINT
USER root
I solved it by figuring out the user name under which the cache files are created when someone accesses the application URL, then updating my dockerfile to set SGID group ownership for that user on the root of the app folder where all the source code resides (so that subfolders and files added later, in whatever way and even at run time, remain accessible to that user from the web container), and finally applying chmod 777 to the specific folders that need it.
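
A hedged sketch of that approach as Dockerfile lines, with www-data and /var/www standing in for the actual user and app root (both placeholders):

# Make the web user the group owner, set SGID on directories so files created
# later inherit the group, then open up the storage folder per the workaround
# described above.
RUN chown -R www-data:www-data /var/www \
 && find /var/www -type d -exec chmod g+s {} + \
 && chmod -R 777 /var/www/storage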
