Data permissions with Docker for Windows (Linux containers)

I'm using Docker for Windows with an ownCloud container, and I shared volumes (set up in docker-compose.yml) in order to persist the data.
Extract from my docker-compose.yml:
owncloud:
  image: jobel/owncloud:1.1
  links:
    - db:db
  ports:
    - 8002:80
  volumes:
    - f:/Program Files/Docker/Docker VM/owncloudconfig:/var/www/html
    - f:/Program Files/Docker/Docker VM/owncloudconfig/apps:/var/www/html/apps
    - f:/Program Files/Docker/Docker VM/owncloudconfig/config:/var/www/html/config
    - f:/Program Files/Docker/Docker VM/owncloudconfig/data:/var/www/html/data
The data is present, but when I try to connect in my web browser, it says: "Your data directory can be read by other users. Please change the directory permissions to 0770 so that the contents cannot be listed by other users." (translated from French)
But how can I change the Windows folder permissions to 0770? ;(
Or what other solutions are there?
Thank you in advance for your answer.

In the Docker for Windows client, you can configure Docker to allow read/write access on specific shared drives. In your case, it is the F: drive. This should fix your permission problem.

The web server running in Docker checks the folder permissions, which have nothing to do with Windows; you need to verify the folder permissions inside the container itself.
Below is an example you can play with; you just need to have Docker execute the Linux commands in the container.
Hope this helps.
E.g.:
$ docker container exec -t local-container ls -ld /var/www/html/config
$ docker container exec -t local-container chmod 0770 /var/www/html/config
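To confirm the change took effect, stat prints the numeric mode directly (a sketch; local-container is the same placeholder name as above):
$ docker container exec -t local-container stat -c '%a %U:%G %n' /var/www/html/config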

Related

Chown breaks in bound volumes with Docker user namespace remapping: "Operation not permitted"

I followed the instructions here for enabling user namespace remapping.
However, now inside a container I can no longer change the ownership of a -v bound volume.
I'm using Docker on Fedora with docker-compose. The volumes are bound like this:
service_name:
  volumes:
    - ./vol/service:/opt/service:Z
  # other config info removed...
I tried adding cap_add:
service_name:
  volumes:
    - ./vol/service:/opt/service:Z
  cap_add:
    - CHOWN
    - SETGID
    - SETUID
  # other config info removed...
That didn't help. The root user inside the container cannot chown the bound folder (in this example, /opt/service) or any of its contents, even after I ran chmod go+rwx vol/service on the host.
Help?
I needed to do two things to fix this problem:
I needed to remove the :Z on the volume binding.
I needed to make sure that the ownership of the bound folder on the host matched the starting UID and GID listed in /etc/subuid and /etc/subgid, respectively. In other words, I had to run (on the host): chown -R 100000:100000 vol/service
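For reference, here's a sketch of how to find that starting UID/GID, assuming the default dockremap remapping user (the 100000:65536 range shown is just the common default):
$ grep dockremap /etc/subuid
dockremap:100000:65536
$ grep dockremap /etc/subgid
dockremap:100000:65536
# The first number is the host UID/GID that maps to root (0) inside the container,
# so that's the owner the bound folder needs on the host:
$ sudo chown -R 100000:100000 vol/service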

How to resolve the file processing issue during docker volume mount in linux?

I am trying to containerize my application. The application basically processes files and places them in a different folder after renaming them. The source folder is /opt/fileprocessing/input and the target is /opt/fileprocessing/output.
Scenario 1 - without a volume mount
When I start my Docker container and place a file in the source folder using the docker cp command, the application processes it and places it successfully in the target folder.
Scenario 2 - with a volume mount to the host
docker run -d -v /opt/input:/opt/fileprocessing/input -v /opt/output:/opt/fileprocessing/output --name new_container processor
When I place a file in the /opt/input folder of the host, the application throws an error that it can't place the file in the destination. If I go inside the container and view the input folder, I see the file there, which confirms that the mount happened successfully. It fails when renaming the file and placing it in the destination (this is an application-level error, so not much help from the message itself).
I tried the following to make it work:
Made sure the host and container users are the same and have the same UID and GID.
The file has 775 permissions set.
The container folder has 777 permissions.
The same file was used as in scenario 1.
Same file name and format as well.
Container OS:
NAME="CentOS Linux"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
Host OS:
NAME="Red Hat Enterprise Linux Server"
VERSION="7.6 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
Scenario 3 - mounted the files in a different way, as below
docker run -d -v /opt/fileprocessing:/opt/fileprocessing --name new_container processor
where the fileprocessing folder on both the container and the host has two subdirectories named input and output.
This way of mounting works for me without any issues.
Please let me know why scenario 2 failed to work and how to fix it.
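One plausible explanation (my assumption; not confirmed in the thread): in scenario 2 the input and output directories are two separate bind mounts, and the rename(2) syscall fails with EXDEV ("Invalid cross-device link") when source and destination sit on different mounts, even when both come from the same host filesystem. Mounting the common parent, as in scenario 3, keeps both subdirectories on a single mount, so the rename succeeds. The mount boundary is easy to see with link(2), which has the same restriction:
$ docker run --rm -v /opt/input:/in -v /opt/output:/out alpine \
    sh -c 'touch /in/f && ln /in/f /out/f'
# fails with EXDEV ("Invalid cross-device link"); rename(2) fails the same way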

Docker: mounting/sharing a single sock file?

I want to share php-fpm.sock between nginx and PHP. The way I've done this right now is like so:
services:
  nginx:
    build:
      context: .
      dockerfile: docker/nginx.docker
    ports:
      - '8080:80'
    volumes:
      - type: volume
        source: php_fpm_sock
        target: /mnt/sock
        consistency: delegated
        read_only: true
        volume:
          nocopy: true
    links:
      - php
  php:
    build:
      context: .
      dockerfile: docker/php.docker
    links:
      - mariadb
    env_file: .env
    volumes:
      - type: volume
        source: php_fpm_sock
        target: /mnt/sock
        consistency: delegated
        read_only: false
        volume:
          nocopy: true
volumes:
  php_fpm_sock:
i.e. I've moved the sock file from its usual location (/var/run/php5-fpm.sock) to /mnt/sock because I can't figure out how to mount a single file and I don't want to mount the whole /var/run dir.
Secondly, I've configured php-fpm as:
[www]
listen = /mnt/sock/php-fpm
;listen.owner = www-data
;listen.group = nginx
; php-fpm runs as `www-data`, nginx runs as `nginx`
listen.mode = 0664
i.e., I've given "other" full read privileges because the nginx user group doesn't exist in the php-fpm container so I don't know how else I can give permissions to just nginx.
This all feels pretty hacky. So my questions are:
How can I share just the sock file between these two containers so that I can move the sock file back to /var/run/php5-fpm.sock?
How can I fix the read permissions on this file so that only nginx can read it? i.e., how can I share Linux user accounts across containers?
I can also add the nginx user into the php-fpm image:
RUN useradd -r -s /usr/sbin/nologin -c "Nginx service" -U nginx -u 101
And then I can set the permissions correctly. The UID must match between both images or it won't work, so I've set it explicitly. If 101 had already been used for PHP, I think I'd be in trouble again, so I don't love this solution either.
You kind of have to do it the way you show. (But mounting the volume over the PHP container's /var/run is probably fine.)
You have three big constraints here:
Mounts inject external content into the container (and can be written to bidirectionally after startup); they don't publish content out.
Docker mounts the volume into the container before the main container process starts, so whatever you're mounting must already exist.
The Compose dependency mechanism essentially enforces the order in which containers are created, but it won't wait for container processes to actually start.
So by the second constraint, you can't mount the PHP-FPM socket into the PHP container, because it doesn't exist yet when you're running docker-compose up. And by the third constraint, you can't mount the socket into the nginx container either, because it's not guaranteed to exist at the moment the container is created. (You can't mount single files out of named volumes either, but in theory you could work around this by bind-mounting a host directory.)
Since a container only runs a single process, there's probably nothing else in /var/run and by its nature it should be okay to hide everything there with an empty volume anyways. The only thing you might need to manually manage is a pid file.
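If you do want the socket in a predictable place, here's a sketch of the bind-mount workaround mentioned above (the host path and image names are assumptions):
$ mkdir -p ./run/php
$ docker run -d -v "$PWD/run/php:/mnt/sock" my-php-image
$ docker run -d -v "$PWD/run/php:/mnt/sock:ro" my-nginx-image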
How can I fix the read permissions on this file so that only nginx can read it? i.e., how can I share Linux user accounts across containers?
You need to manually ensure that the USER each container runs as has the same numeric user ID. Note that it's common (if discouraged) to run containers as root, which bypasses any controls here anyway. There's no built-in way to do this.
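A quick way to verify the IDs line up (the image names here are placeholders):
$ docker run --rm my-nginx-image id nginx   # note the numeric uid
$ docker run --rm my-php-image id nginx     # must print the same numeric uid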

Inside-container file permission issue for non-root user

I am extending a Docker image of a program from here, and I want to change some configs and create my own Docker image. I have written a Dockerfile as follows, replacing the server.xml file in the image:
FROM exoplatform/exo-community
COPY server.xml /opt/exo/conf
RUN chmod 777 /opt/exo/conf/server.xml
When I create the Docker image and run an instance of it, the program running in the container cannot access the file server.xml, because its owner is the root user, and I see a permission denied error. I tried to change the permissions in the Dockerfile with the chmod command, but I get an Operation not permitted error. The user of the running container is not root, and it cannot access the server.xml file owned by root. How can I resolve this issue?
If this is actually just a config file, I wouldn't build a custom image around it. Instead, use the docker run -v option to inject it at runtime:
docker run \
-v $PWD/server.xml:/opt/exo/conf/server.xml \
... \
exoplatform/exo-community
(You might still hit the same permission issues.)
In your Dockerfile approach, the base image runs as an alternate USER, but a COPY instruction by default makes files owned by root. As of relatively recent Docker (18.03; if you're using Docker 1.13 on CentOS/RHEL 7 this won't work), you should be able to:
COPY --chown=exo server.xml /opt/exo/conf
Or if that won't work, you can explicitly switch to the root user and back:
COPY server.xml /opt/exo/conf
USER root
RUN chown exo /opt/exo/conf/server.xml
USER exo

How to provide 777 default permission on all files within a given Linux folder

I need any files created in a specific Linux directory to have 777 permissions.
I would like all users to be able to read, write, and execute all files under this folder. What is the best way, or which Linux command, to make that happen?
What I am doing is spinning up two separate containers, one for the Nginx server and one for the PHP-FPM app server, to host a Laravel 5.4 app.
Please consider the following scenario. I have a Docker application container A (PHP-FPM) which serves the web application files to Docker container B (Nginx). When I access the website, the web pages are delivered through the web container. Both containers are on the same network, and I share the volumes from my app container to my web container. But when the web container tries to read the files on the app container, I get an error like the one below:
The stream or file "/var/www/storage/logs/laravel.log" could not be opened: failed to open stream: Permission denied
So I added RUN chmod -R 777 storage to my Dockerfile.
However, it is not solving the issue.
I also tried using SGID to fix the issue by adding one more line to my Dockerfile: RUN chmod -R ug+rwxs storage. Still, it does not solve the permission issue.
On a separate note, the funny thing is that in my Docker container on my Mac this works without any issue (I mean without adding chmod -R 777 to the folder or using SGID to set permissions on a folder in my Dockerfile). But when the same code runs on a Linux EC2 instance (Amazon Linux AMI EC2), the permission issue starts to occur.
So how do I fix this?
The solution is to launch both containers using the same user, identified by the same UID. For instance, you can choose root or any UID when running the container:
docker run --user root ...
Alternatively, you can switch to another user before startup, inside your Dockerfile, by adding the following before the CMD or ENTRYPOINT:
USER root
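For example (a sketch; the image names are placeholders), both containers launched as the same numeric user:
$ docker run -d --user 1000:1000 --name app my-php-fpm-image
$ docker run -d --user 1000:1000 --name web my-nginx-image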
I solved it by figuring out the user name under which the cache files are created when someone accesses the application URL, then updating my Dockerfile to set SGID group ownership for that user on the root of the app folder where all the source code resides (so all subfolders and files added later, in whatever way, at run time, remain accessible from the web container for that user), and then using chmod 777 on the specific folders that need it.
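A sketch of that SGID approach (the www-data user and the storage path are assumptions, not from the answer):
# run as RUN steps in the Dockerfile, or at container startup:
chown -R www-data:www-data /var/www/storage        # give the web user ownership
chmod -R ug+rwx /var/www/storage                   # user + group read/write/execute
find /var/www/storage -type d -exec chmod g+s {} + # setgid: new files inherit the group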
