Docker sets wrong owner for MongoDB mounted volume - node.js

I have a simple Node.js app with MongoDB using Docker (docker-compose). Everything works just fine, but Mongo's mounted volume is created under the ownership of user 999.
Docker is executed under the permissions of a non-root user.
Here is the mounted volume permissions info:
drwxr-sr-x 4 999 www-data 4,0K Aug 5 21:56 mongo-data
Here is my docker-compose.yml file:
version: "3.3"
services:
api:
.....
mongodb:
image: mongo:latest
container_name: "mongodb"
environment:
- MONGO_DATA_DIR=/data/db
- MONGO_LOG_DIR=/dev/null
volumes:
- ./mongo-data:/data/db
ports:
- 27017:27017
command: mongod --smallfiles --logpath=/dev/null
volumes:
mongo-data:
The next time I execute docker-compose up -d --build, it throws this error:
ERROR: Couldn't connect to Docker daemon at http+docker://localunixsocket - is it running?
If the ownership of the mounted volume is changed, everything is back to normal until next time.
I'll mention that I previously used this kind of configuration with MySQL and Redis and never encountered this issue.
Any ideas on how to fix it?
Thank you!

This also creates issues with managing those files on the host, such as backing them up as a non-privileged user, which is something one might want on a developer PC.
Here's the actual solution!
You can:
docker run --user some-other-user-id:some-group-id
or, in docker-compose, here's a minimal example:
version: '3.5'
services:
  mongo:
    image: mongo:latest
    user: '1000:1000'
    volumes:
      - ./data:/data/db
After running this, the data directory contains files owned only by user/group 1000. Of course, set this to whatever IDs you find appropriate.
I found that it was necessary to create the directory first as the specified user. If the directory doesn't exist yet, it gets created by the Docker daemon (which runs as root, if I'm not mistaken), so it ends up owned by root, and that creates mayhem.
So,
mkdir data
docker-compose up
Enjoy!

On Ubuntu-based images, 999 is the first system-assigned UID for unknown users, with further IDs counting down.
What this could mean is that the directory you are mounting is a network path or was copied from another machine; either way the owning user is unknown on your machine, so the listing shows a raw system-assigned UID.
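In this particular case, 999 is also the UID that the official mongo image assigns to its mongodb user, which you can verify directly (a quick check, assuming the official image):

docker run --rm mongo:latest id mongodb
uid=999(mongodb) gid=999(mongodb) groups=999(mongodb)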
Note that you can use the ADD --chown=X:Y syntax to add files under a user with a defined user ID.
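For example, a minimal Dockerfile sketch (the paths and the 1000:1000 IDs are placeholders):

# copy files into the image owned by UID/GID 1000 instead of root
ADD --chown=1000:1000 ./app /opt/app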

Building off the previous answer: to avoid issues like this in the future, you could consider using managed volumes rather than a specific directory in your filesystem. That keeps all those weird files out of sight and avoids odd permission issues like this. Here's my Docker Compose setup for Mongo:
https://github.com/alexmacarthur/local-docker-db/blob/master/mongo/docker-compose.yml#L6

Related

Is it possible to use a docker volume without overwriting node_modules? [duplicate]

Suppose I have a Docker container and a folder on my host, /hostFolder. Now if I want to add this folder to the Docker container as a volume, I can do this either by using ADD in the Dockerfile or by mounting it as a volume.
So far, so good.
Now /hostFolder contains a sub-folder, /hostFolder/subFolder.
I want to mount /hostFolder into the Docker container (whether read-write or read-only does not matter; both work for me), but I do NOT want /hostFolder/subFolder included. I want to exclude it, and I also want the Docker container to be able to make changes to this sub-folder without those changes appearing on the host as well.
Is this possible? If so, how?
Using docker-compose I'm able to use node_modules locally, but ignore it in the docker container using the following syntax in the docker-compose.yml
volumes:
  - './angularApp:/opt/app'
  - /opt/app/node_modules/
So everything in ./angularApp is mapped to /opt/app, and then I create another mount volume at /opt/app/node_modules/, which is now an empty directory, even if ./angularApp/node_modules on my local machine is not empty.
If you want to have subdirectories ignored by docker-compose but persistent, you can do the following in docker-compose.yml:
volumes:
  node_modules:
services:
  server:
    volumes:
      - .:/app
      - node_modules:/app/node_modules
This will mount your current directory as a shared volume, but mount a persistent Docker volume in place of your local node_modules directory. This is similar to the answer by @kernix, but it allows node_modules to persist between docker-compose up runs, which is likely the desired behavior.
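After the first docker-compose up you can confirm the named volume was created; the name is prefixed with your project directory (myproject here is a placeholder):

docker volume ls
DRIVER    VOLUME NAME
local     myproject_node_modules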
For those trying to get a nice workflow going where node_modules isn't overridden by the local directory, this might help.
Change your docker-compose to mount an anonymous persistent volume to node_modules to prevent your local overriding it. This has been outlined in this thread a few times.
services:
  server:
    build: .
    volumes:
      - .:/app
      - /app/node_modules
This is the important bit we were missing: when spinning up your stack, use docker-compose up -V. Without it, if you added a new package and rebuilt your image, the container would still be using the node_modules from your initial docker-compose launch.
-V, --renew-anon-volumes   Recreate anonymous volumes instead of retrieving
                           data from the previous containers.
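A typical invocation combining a rebuild with fresh anonymous volumes would look something like this:

docker-compose up -d --build -V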
To exclude a file, use the following
volumes:
  - /hostFolder:/folder
  - /dev/null:/folder/fileToBeExcluded
With the docker command line:
docker run \
    --mount type=bind,src=/hostFolder,dst=/containerFolder \
    --mount type=volume,dst=/containerFolder/subFolder \
    ...other-args...
The -v option may also be used (credit to Bogdan Mart), but --mount is clearer and recommended.
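For reference, a roughly equivalent -v form would look like this (the bare second -v creates an anonymous volume over the sub-path):

docker run \
    -v /hostFolder:/containerFolder \
    -v /containerFolder/subFolder \
    ...other-args...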
First, using the ADD instruction in a Dockerfile is very different from using a volume (either via the -v argument to docker run or the VOLUME instruction in a Dockerfile). The ADD and COPY commands just take a copy of the files at the time docker build is run. These files are not updated until a fresh image is created with the docker build command. By contrast, using a volume is essentially saying "this directory should not be stored in the container image; instead use a directory on the host"; whenever a file inside a volume is changed, both the host and container will see it immediately.
I don't believe you can achieve what you want using volumes; you'll have to rethink your directory structure if you want to do this.
However, it's quite simple to achieve using COPY (which should be preferred to ADD). You can either use a .dockerignore file to exclude the subdirectory, or you could COPY all the files then do a RUN rm bla to remove the subdirectory.
Remember that any files you add to the image with COPY or ADD must be inside the build context, i.e. in or below the directory you run docker build from.
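A minimal sketch of the .dockerignore approach, assuming the build context is the directory that contains subFolder:

# .dockerignore (next to the Dockerfile; paths are relative to the build context)
subFolder/

# Dockerfile
FROM node:latest
# subFolder is skipped because it is listed in .dockerignore
COPY . /containerFolder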
For the people who also had the issue that the node_modules folder would still be overwritten from the local system, and the other way around:
volumes:
  node_modules:
services:
  server:
    volumes:
      - .:/app
      - node_modules:/app/node_modules/
This is the solution; the trailing / after node_modules is the fix.
It looks like the old solution doesn't work anymore (at least for me).
Creating an empty folder and mapping the target folder to it helped, though.
volumes:
  - ./angularApp:/opt/app
  - .empty:/opt/app/node_modules/
I found this link which saved me: Working with docker bind mounts and node_modules.
This working solution creates a named volume called exclude in Docker's volume manager. The name exclude is arbitrary, so you can use a custom name for the volume instead.
services:
  node:
    command: nodemon index.js
    volumes:
      - ./:/usr/local/app/
      # the volume below prevents our host system's node_modules from being mounted
      - exclude:/usr/local/app/node_modules/
volumes:
  exclude:
You can find more info about volumes in the official docs - Use a volume with Docker Compose.
To exclude a file contained in a mounted volume from your machine, you have to overwrite it by allocating a volume to that same file.
In your config file:
services:
  server:
    build: .
    volumes:
      - .:/app
An example in your Dockerfile:
# Image Location
FROM node:13.12.0-buster
VOLUME /app/you_overwrite_file

bitnami consul cannot access file or directory using docker desktop volume mount

Running Consul with Docker Desktop using Windows containers and experimental mode turned on works well. However, if I try mounting Bitnami Consul's data directory to a local volume mount, I get the following error:
chown: cannot access '/bitnami/consul'
My compose file looks like this:
version: "3.7"
services:
consul:
image: bitnami/consul:latest
volumes:
- ${USERPROFILE}\DockerVolumes\consul:/bitnami
ports:
- '8300:8300'
- '8301:8301'
- '8301:8301/udp'
- '8500:8500'
- '8600:8600'
- '8600:8600/udp'
networks:
nat:
aliases:
- consul
If I remove the volumes part, everything works just fine, but I cannot persist my data. I followed the instructions in the readme file. They speak of having the proper permissions, but I do not know how to get that to work using Docker Desktop.
Side note
If I mount /bitnami/consul instead of /bitnami, I get the following error:
2020-03-30T14:59:00.327Z [ERROR] agent: Error starting agent: error="Failed to start Consul server: Failed to start Raft: invalid argument"
Another option is to edit the docker-compose.yaml to deploy the consul container as root by adding the user: root directive:
version: "3.7"
services:
consul:
image: bitnami/consul:latest
user: root
volumes:
- ${USERPROFILE}\DockerVolumes\consul:/bitnami
ports:
- '8300:8300'
- '8301:8301'
- '8301:8301/udp'
- '8500:8500'
- '8600:8600'
- '8600:8600/udp'
networks:
nat:
aliases:
- consul
Without user: root the container is executed as non-root (user 1001):
▶ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0c590d7df611 bitnami/consul:1 "/opt/bitnami/script…" 4 seconds ago Up 3 seconds 0.0.0.0:8300-8301->8300-8301/tcp, 0.0.0.0:8500->8500/tcp, 0.0.0.0:8301->8301/udp, 0.0.0.0:8600->8600/tcp, 0.0.0.0:8600->8600/udp bitnami-docker-consul_consul_1
▶ dcexec 0c590d7df611
I have no name!@0c590d7df611:/$ whoami
whoami: cannot find name for user ID 1001
But after adding this line, the container is executed as root:
▶ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ac206b56f57b bitnami/consul:1 "/opt/bitnami/script…" 5 seconds ago Up 4 seconds 0.0.0.0:8300-8301->8300-8301/tcp, 0.0.0.0:8500->8500/tcp, 0.0.0.0:8301->8301/udp, 0.0.0.0:8600->8600/tcp, 0.0.0.0:8600->8600/udp bitnami-docker-consul_consul_1
▶ dcexec ac206b56f57b
root@ac206b56f57b:/# whoami
root
If the container is executed as root, there shouldn't be any issue with permissions on the host volume.
The Consul container is a non-root container; in such cases, the non-root user needs to be able to write to the volume.
When using host directories as a volume, you need to ensure that the directory you are mounting into the container has the proper permissions, in this case write (and traverse) permission for others. You can modify the permissions by running sudo chmod o+wx ${USERPROFILE}\DockerVolumes\consul (or the correct path to the host directory).
This local folder is created the first time you run docker-compose up, or you can create it yourself with mkdir. Once created (manually or automatically), you should give it the proper permissions with chmod.
I am not familiar with Docker Desktop or Windows environments, but you should be able to do the equivalent actions using a CLI.
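For reference, in a POSIX shell the sequence would look roughly like this (a sketch; adjust the path for your environment):

mkdir -p "$USERPROFILE/DockerVolumes/consul"
sudo chmod o+wx "$USERPROFILE/DockerVolumes/consul"
docker-compose up -d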

docker-compose.yml use volume on documents starting with dot (.)

I am using a Docker container with Tomcat to run an application. The application saves some data in a folder whose name starts with a dot (.). In the yml file I have something like this:
volumes:
  - /my/path/folder:/path/.folder
It saves the required folder on the disk, but when I start the container again, it doesn't persist what was saved. Is there a way to do this correctly? I'd prefer not to change the name of the folder.
I can confirm I have the same problem with docker-compose (1.22.0 on Fedora 29). docker-compose doesn't seem to allow dotfiles in the volumes declaration.
This works in docker-compose.yml:
volumes:
  - data:/root/folder
This does not:
volumes:
  - data:/root/.folder
Both paths exist in the container. I've posted on Docker Hub and no one there seems to know either.
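One thing worth trying is the long volume syntax (compose file format 3.2+), which removes any ambiguity in how the short form is parsed; this is only a sketch and I haven't confirmed it fixes the dot-folder case:

volumes:
  - type: volume
    source: data
    target: /root/.folder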

Docker compose volume Permissions linux

I'm trying to run WordPress in a Docker container. My docker-compose.yaml file is:
version: "2"
services:
my-wpdb:
image: mariadb
ports:
- "8081:3306"
environment:
MYSQL_ROOT_PASSWORD: ChangeMeIfYouWant
my-wp:
image: wordpress
volumes:
- ./:/var/www/html
ports:
- "8080:80"
links:
- my-wpdb:mysql
environment:
WORDPRESS_DB_PASSWORD: ChangeMeIfYouWant
When I build the Docker structure, the volume is mounted but belongs to root.
I tried to change that with:
my-wp:
  image: wordpress
  user: 1000:1000 # added
  volumes:
    - ./:/var/www/html
  ports:
    - "8080:80"
  links:
    - my-wpdb:mysql
  environment:
    WORDPRESS_DB_PASSWORD: ChangeMeIfYouWant
Now I can edit files. But then the container doesn't serve the website anymore.
What is the right way to solve this permission issue?
According to the docker-compose and docker run reference, the user option sets the user id (and group id) of the process running in the container. If you set this to 1000:1000, your webserver is not able to bind to port 80 any more. Binding to a port below 1024 requires root permissions. This means you should remove the added user: 1000:1000 statement again.
To solve the permission issue with the shared volume, you need to change the ownership of the directory. Run chown 1000:1000 /path/to/volume. This can be executed inside the container or directly on the host system. The change is persistent and effective immediately (no container restart required).
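If you'd rather run the chown through the container than figure out the host-side path, a sketch (my-wp is the service name from the compose file above):

docker-compose exec my-wp chown -R 1000:1000 /var/www/html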
In general, I think the volume should be in a sub-directory, e.g.
volumes:
  - ./public:/var/www/html
Make sure that the correct user owns ./public. If you start the container and the directory does not exist, Docker creates it for you; in that case, the directory is owned by root and you need to change ownership manually as explained above.
Alternatively, you can run the webserver as an unprivileged user (user: 1000:1000), let the server listen on port 8080, and change the port mapping to
ports:
  - "8080:8080"
As answered in the same question: use the root user in your docker-compose to get full permissions.
Example:
node-app:
  container_name: node-app
  image: node
  user: root
  volumes:
    - ./:/home/node/app
    - ./node_modules:/home/node/app/node_modules
    - ./.env.docker:/home/node/app/.env
NOTE: user: root gives you full permissions on your volume.
I was using Google Cloud Shell and found that the following command enabled the correct permissions for me to use FTP file access with the WordPress Docker container:
sudo chmod 644 -R wordpress-docker-compose

Can't mount mongodb directory volume with docker-compose

In my main project directory I have another local directory, ./pokemondb, where I store my MongoDB data. I know this directory is filled with my data because I have run mongoimport --db stats --collection pokemon --file stats.json and can confirm in the mongo shell that the data is there. I also have a docker-compose file that looks like this:
pokemon-app:
  build: .
  ports:
    - "3000:3000"
  links:
    - mongo
mongo:
  image: mongo:3.2.4
  ports:
    - "27017:27017"
  volumes:
    - "./pokemondb:/data/db"
I run docker-compose up and no errors occur. But the problem is that the MongoDB data directory /data/db doesn't contain the data I tried to pass in through the mount. I can confirm that the data wasn't passed correctly by executing docker exec -ti [mongo container id] bash and checking the /data/db directory with the mongo shell; indeed, nothing is there. What am I doing wrong, and why is my MongoDB data directory not mounting the volume correctly?
EDIT: I found an unexpected solution to my problem. Part of my problem was a fundamental misunderstanding of what Docker volumes are. I previously thought that Docker volumes were meant to copy data from your local machine into a Docker container when it starts up. In fact, Docker volumes are meant to persist data generated in the Docker container onto your local machine. The solution to the original problem above was to create a Dockerfile that copies the data into the image and then imports the data into the database when the container starts up. My final docker-compose file looks like this.
app:
  build: .
  ports:
    - 3000:3000
  links:
    - mongodb
mongodb:
  image: mongo:3.2.4
  ports:
    - 27017:27017
stats:
  build: ./stats
  links:
    - mongodb
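A sketch of what the ./stats/Dockerfile could look like; the stats.json file name and the stats/pokemon database and collection names are taken from the mongoimport command above, and mongodb is the hostname the compose link provides:

# seeds the database when the stats container starts
FROM mongo:3.2.4
COPY stats.json /stats.json
CMD ["mongoimport", "--host", "mongodb", "--db", "stats", "--collection", "pokemon", "--file", "/stats.json"]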
If you're using Docker Toolbox for OSX, then only the home directory is shared with the VM. If your project lives outside your home directory, you need to either add another shared folder to the VirtualBox VM or move the project under your home directory.
