"Permission denied" in Docker container unless --privileged=true - linux

I'm trying to run an nginx container as a service and share 2 volumes between the host machine and container, so that files in one directory are automatically shared with the other paired directory.
My docker-compose.yml is the following:
version: '2'
services:
  nginx:
    image: nginx
    build: .
    ports:
      - "5000:80"
    volumes:
      - /home/user1/share:/share/user1
      - /home/user2/share:/share/user2
    restart: always
The only way I can currently get this to work is by adding privileged: true to the docker-compose file, but I am not allowed to do this due to security requirements.
When trying to access the volume in the container, I get the following error:
[root@host docker-nginx]# docker exec -it dockernginx_nginx_1 bash
root@2d574f9c6131:/# ls /share/user1/
ls: cannot open directory /share/user1/: Permission denied
Even attaching to bash in the container with the following parameters does not let me access the resource (or at least list its contents):
docker exec -it --privileged=true -u 6004:6004 dockernginx_nginx_1 bash
(Note: 6004:6004 happens to be the uid:gid ownership of /share/user1/)
Is there any way of accessing the contents without building the nginx service with elevated privileges?
Perhaps the issue lies in SELinux restrictions enforced in the container?
The container is running Debian GNU/Linux 8 (jessie) and the host is running CentOS Linux 7 (Core)
Related questions:
Permission denied inside Docker container

Docker was running with --selinux-enabled=true, this prohibited me from accessing the contents of directories in the container.
Read more: http://www.projectatomic.io/blog/2016/07/docker-selinux-flag/
The solution was to disable it. This can be done either by (1) changing the daemon configuration or by (2) installing the non-SELinux CentOS package; I went with option 2:
I reinstalled and updated Docker from 1.10 to 1.12.1, making sure not to install docker-engine-selinux.noarch but docker-engine.x86_64 instead, letting the SELinux package be pulled in as a dependency (yum does this automatically). After doing this and starting the Docker daemon, you can verify with ps aux | grep "docker" that docker-containerd is not started with the --selinux-enabled=true option.
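If you prefer option 1 (configuration) instead, here is a minimal sketch, assuming the CentOS package reads its daemon flags from /etc/sysconfig/docker (the file and option name may differ depending on how Docker was installed):
# Check whether the daemon is currently running with SELinux support
ps aux | grep docker | grep -- "--selinux-enabled"
# Hypothetical example: drop the flag from the daemon options and restart
sudo sed -i 's/--selinux-enabled=true//; s/--selinux-enabled//' /etc/sysconfig/docker
sudo systemctl restart docker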

Related

Running docker compose inside Docker Container

I have a Dockerfile I am building. It uses Localstack to spin up a mock AWS environment; at the minute I do this locally with my docker-compose file. So I was thinking I could copy my docker-compose.yml over when building my Dockerfile, run docker-compose up from the Dockerfile, and then be able to run my application from the container created from the Dockerfile.
Here is the docker compose file
version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - AWS_DEFAULT_REGION=us-east-1
      - EDGE_PORT=4566
      - SERVICES=lambda,s3,cloudformation,sts,apigateway,iam,route53,dynamodb
    ports:
      - '4566-4597:4566-4597'
    volumes:
      - "${TEMPDIR:-/tmp/localstack}:/temp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
Here is my Dockerfile
FROM node:16-alpine
RUN apk update
RUN npm install -g serverless; \
npm install -g serverless-localstack;
WORKDIR /app
COPY serverless.yml ./
COPY localstack_endpoints.json ./
COPY docker-compose.yml ./
COPY --from=library/docker:latest /usr/local/bin/docker /usr/bin/docker
COPY --from=docker/compose:latest /usr/local/bin/docker-compose /usr/bin/docker-compose
EXPOSE 3000
RUN docker-compose up
CMD ["sls","deploy" ]
But the error I am receiving is
#17 0.710 Couldn't connect to Docker daemon at http+docker://localhost - is it running?
#17 0.710
#17 0.710 If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
I'm new to Docker. When I researched the error online I saw people saying it needs to be run with sudo, although I think in this case it has something to do with my volumes linking to the host running the container, but I'm really not sure.
Inside the Docker container, the Docker client tries to reach the daemon socket but cannot. So when you run your container, add
-v /var/run/docker.sock:/var/run/docker.sock
and it should fix the problem.
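For example, a sketch assuming you move docker-compose up out of the build step (RUN executes at build time with no access to the socket) and into the container's start command; the image tag my-sls-app is hypothetical:
# Build the image (drop the RUN docker-compose up line from the Dockerfile)
docker build -t my-sls-app .
# Run it with the host's Docker socket mounted, then bring the stack up
# and deploy from inside the container
docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-sls-app \
  sh -c "docker-compose up -d && sls deploy"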
As a general rule, you can't do things in your Dockerfile that affect persistent state or processes running outside the container. Imagine docker building your image, docker pushing it to a registry, and docker pulling it on a new system; if the build step was able to start other running containers, they wouldn't be running with the same image on a different system.
At a more mechanical level, the build sequence doesn't have access to bind-mounted host directories or a variety of other runtime settings. That's why you get the "couldn't connect to Docker daemon" message: the build container isn't running a Docker daemon and it doesn't have access to the host's daemon.
Rather than try to have a container embed the Compose tool and Compose setup, you might find it easier to just distribute a docker-compose.yml file, and make the standard way to run your composite application be running docker-compose up on the host. Access to the Docker socket is incredibly powerful -- you can almost trivially use it to root the host -- and I wouldn't require it to avoid needing a fairly standard tool on the host.
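A sketch of that suggestion, assuming the serverless deploy only needs network access to Localstack (the my-sls-app tag is hypothetical, and --network host is just one way to reach the ports Localstack publishes on the host):
# On the host: start Localstack with the existing docker-compose.yml
docker-compose up -d
# Build an image that only contains the application and the serverless tooling
docker build -t my-sls-app .
# Deploy against Localstack from the application container
docker run --rm --network host my-sls-app sls deploy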

bitnami consul cannot access file or directory using docker desktop volume mount

Running Consul with Docker Desktop using Windows containers and experimental mode turned on works well. However, if I try mounting bitnami consul's data file to a local volume mount, I get the following error:
chown: cannot access '/bitnami/consul'
My compose file looks like this:
version: "3.7"
services:
consul:
image: bitnami/consul:latest
volumes:
- ${USERPROFILE}\DockerVolumes\consul:/bitnami
ports:
- '8300:8300'
- '8301:8301'
- '8301:8301/udp'
- '8500:8500'
- '8600:8600'
- '8600:8600/udp'
networks:
nat:
aliases:
- consul
If I remove the volumes part, everything works just fine, but I cannot persist my data. I followed the instructions in the readme file. They speak of having the proper permissions, but I do not know how to get that to work using Docker Desktop.
Side note
If I do not mount /bitnami but /bitnami/consul, I get the following error:
2020-03-30T14:59:00.327Z [ERROR] agent: Error starting agent: error="Failed to start Consul server: Failed to start Raft: invalid argument"
Another option is to edit the docker-compose.yaml to deploy the consul container as root by adding the user: root directive:
version: "3.7"
services:
consul:
image: bitnami/consul:latest
user: root
volumes:
- ${USERPROFILE}\DockerVolumes\consul:/bitnami
ports:
- '8300:8300'
- '8301:8301'
- '8301:8301/udp'
- '8500:8500'
- '8600:8600'
- '8600:8600/udp'
networks:
nat:
aliases:
- consul
Without user: root the container is executed as non-root (user 1001):
▶ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0c590d7df611 bitnami/consul:1 "/opt/bitnami/script…" 4 seconds ago Up 3 seconds 0.0.0.0:8300-8301->8300-8301/tcp, 0.0.0.0:8500->8500/tcp, 0.0.0.0:8301->8301/udp, 0.0.0.0:8600->8600/tcp, 0.0.0.0:8600->8600/udp bitnami-docker-consul_consul_1
▶ dcexec 0c590d7df611
I have no name!@0c590d7df611:/$ whoami
whoami: cannot find name for user ID 1001
But adding this line, the container is executed as root:
▶ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ac206b56f57b bitnami/consul:1 "/opt/bitnami/script…" 5 seconds ago Up 4 seconds 0.0.0.0:8300-8301->8300-8301/tcp, 0.0.0.0:8500->8500/tcp, 0.0.0.0:8301->8301/udp, 0.0.0.0:8600->8600/tcp, 0.0.0.0:8600->8600/udp bitnami-docker-consul_consul_1
▶ dcexec ac206b56f57b
root@ac206b56f57b:/# whoami
root
If the container is executed as root there shouldn't be any issue with the permissions in the host volume.
The Consul container is a non-root container; in those cases, the non-root user needs to be able to write to the volume.
When using host directories as a volume, you need to ensure that the directory you are mounting into the container has the proper permissions, in this case write (plus execute, for directory traversal) permission for others. You can modify the permissions by running sudo chmod o+wx ${USERPROFILE}\DockerVolumes\consul (or the correct path to the host directory).
This local folder is created the first time you run docker-compose up, or you can create it yourself with mkdir. Once created (manually or automatically), you should give it the proper permissions with chmod.
I am not familiar with Docker Desktop or Windows environments, but you should be able to perform the equivalent actions using a CLI.
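On a Linux host the whole sequence would look roughly like this (a sketch; the path and the 1001 UID are taken from the discussion above, adjust them to your setup):
# Create the host directory that will back /bitnami
mkdir -p ~/DockerVolumes/consul
# Let the non-root container user (UID 1001) write into it
sudo chown -R 1001:1001 ~/DockerVolumes/consul
# ...or, less strictly, grant write+execute to others:
# sudo chmod -R o+wx ~/DockerVolumes/consul
# Then start the stack
docker-compose up -d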

Mounting relative folders in docker-compose when using the docker daemon via a container?

We have previously been running Jenkins in Docker in Docker (DIND) mode, i.e. running a docker daemon inside the Jenkins docker container. But due to many problems (some of which are described in the link above) we've decided to move away from this approach and instead let the container use the host daemon by simply mounting its socket as a volume when starting the container:
-v /var/run/docker.sock:/var/run/docker.sock
But now we run into problems when mounting relative paths with Docker Compose that is started inside the container which worked fine in DIND mode. Consider this docker-compose file:
myimage:
  build: .
  environment:
    LANG: C.UTF-8
  working_dir: /code
  volumes:
    - ../../../:/code
    - ~/.m2/repository:/root/.m2/repository
    - ~/.gradle:/root/.gradle
Previously this mounted all folders, for example the ../../../ folder, from the container, but now it seems to try to mount them from the host. When I check the directory structure on the host, it seems like docker-compose has replicated the directory structure from the container and then tries to mount this folder, which ends up empty.
So my question is, how can one mount relative paths in Docker Compose when using the docker daemon from the host?
You'll need to make sure the relative path on your host is the same inside your Jenkins container for this to work.
It's not really a relative path; docker-compose is doing the best it can to convert a relative path into the absolute path the docker host requires. All paths are evaluated on the docker host when creating a new container; it doesn't know you are running the docker client remotely or inside of a container, and it doesn't know what directory you are currently in.
As another option, you may want to consider switching to named volumes and map the same named volume in your Jenkins container as in your other containers.
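A minimal sketch of that named-volume idea with the plain docker CLI (the volume name build-code and the reuse of the jenkins and myimage images are assumptions for illustration):
# Create a named volume; it is managed by the host daemon, so every
# container that mounts it by name sees the same data
docker volume create build-code
# Start Jenkins with the volume and the host's Docker socket
docker run -d \
  -v build-code:/code \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 8080:8080 jenkins
# Containers started from inside Jenkins reference the same volume name
docker run --rm -v build-code:/code myimage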
Docker has a client-server architecture; when you mount the docker socket from the host, you are simply communicating with the host's docker daemon. Thus all volume paths will be interpreted as paths on the host.
To solve that you need to bind the Jenkins container's directories onto the host and then use the host folders as mount points. Thus simply start the Jenkins container with
-v ./code:<path-to-../../..> -v ./m2-repo:.../.m2/repository
Then change the compose file to use the host folders:
myimage:
  build: .
  environment:
    LANG: C.UTF-8
  working_dir: /code
  volumes:
    - ./code:/code
    - ./m2-repo:/root/.m2/repository
    ...

GitLab-CI: Cannot link to a non running container

I've tried to get my setup to work with GitLab CI. I have a simple .gitlab-ci.yml file:
build_ubuntu:
  image: ubuntu:14.04
  services:
    - rikorose/gcc-cmake:gcc-5
  stage: build
  script:
    - apt-get update
    - apt-get install -y python3 build-essential curl
    - cmake --version
  tags:
    - linux
I want to get an Ubuntu 14.04 LTS with gcc and cmake installed (the apt-get version is too old). If I use it locally (via the docker --link command) everything works, but when the gitlab-ci-runner processes it I get the following warning (which is in my case an error):
Running with gitlab-ci-multi-runner 9.2.0 (adfc387)
on xubuntuci1 (19c6d3ce)
Using Docker executor with image ubuntu:14.04 ...
Starting service rikorose/gcc-cmake:gcc-5 ...
Pulling docker image rikorose/gcc-cmake:gcc-5 ...
Using docker image rikorose/gcc-cmake:gcc-5 ID=sha256:ef2ac00b36e638897a2046c954e89ea953cfd5c257bf60103e32880e88299608 for rikorose/gcc-cmake service...
Waiting for services to be up and running...
*** WARNING: Service runner-19c6d3ce-project-54-concurrent-0-rikorose__gcc-cmake probably didn't start properly.
Error response from daemon: Cannot link to a non running container: /runner-19c6d3ce-project-54-concurrent-0-rikorose__gcc-cmake AS /runner-19c6d3ce-project-54-concurrent-0-rikorose__gcc-cmake-wait-for-service/runner-19c6d3ce-project-54-concurrent-0-rikorose__gcc-cmake
Does anybody know how I can fix this?
Thanks in advance
Tonka
You must start the gitlab-runner container with
--privileged=true
but that is not enough. Any runner containers that are spun up by GitLab after registering need to be privileged too. So you need to go into the gitlab-runner container and edit its configuration:
docker exec -it runner /bin/bash
nano /etc/gitlab-runner/config.toml
and change the privileged flag from false to true:
privileged = true
That will solve the problem!
Note: you can also mount the config.toml as a volume on the container; then you won't have to log into the container to change privileged to true, because you can preconfigure the container before running it.
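A sketch of that approach, assuming you keep a preconfigured config.toml (with privileged = true) in /srv/gitlab-runner/config on the host:
# The runner reads /etc/gitlab-runner/config.toml, so bind-mount the host copy
docker run -d --name gitlab-runner --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest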
In my case, I had to add
variables:
  DOCKER_TLS_CERTDIR: ""

Docker in Docker cannot mount volume

I am running a Jenkins cluster where the master and slave both run as Docker containers.
The host is the latest boot2docker VM running on macOS.
To allow Jenkins to perform deployments using Docker, I have mounted docker.sock and the docker client from the host into the Jenkins container like this:
docker run -v /var/run/docker.sock:/var/run/docker.sock \
  -v $(which docker):/usr/bin/docker \
  -v $HOST_JENKINS_DATA_DIRECTORY/jenkins_data:/var/jenkins_home \
  -v $HOST_SSH_KEYS_DIRECTORY/.ssh/:/var/jenkins_home/.ssh/ \
  -p 8080:8080 jenkins
I am facing issues while mounting a volume into Docker containers that are run from inside the Jenkins container. For example, if I need to run another container from inside the Jenkins container, I do the following:
sudo docker run -v $JENKINS_CONTAINER/deploy.json:/root/deploy.json $CONTAINER_REPO/$CONTAINER_IMAGE
The above runs the container, but the file "deploy.json" is NOT mounted as a file, but instead as a directory. Even if I mount a directory as a volume, I am unable to view its files in the resulting container.
Is this a problem, because of file permissions due to Docker in Docker case?
A Docker container in a Docker container uses the parent HOST's Docker daemon, and hence any volumes that are mounted in the "docker-in-docker" case are still referenced from the HOST, not from the container.
Therefore, the actual path mounted from the Jenkins container "does not exist" on the HOST. Due to this, a new, empty directory is created in the "docker-in-docker" container. The same thing applies when a directory is mounted into a new Docker container inside a container.
A very basic and obvious thing which I missed, but realized as soon as I typed the question.
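Concretely, a sketch using the paths from the question (assuming the Jenkins home is bind-mounted as shown above):
# Fails: /var/jenkins_home/deploy.json is a container path; the host daemon
# can't find it, so it creates an empty directory of that name instead
docker run -v /var/jenkins_home/deploy.json:/root/deploy.json $CONTAINER_IMAGE
# Works: reference the corresponding host path instead
docker run -v $HOST_JENKINS_DATA_DIRECTORY/jenkins_data/deploy.json:/root/deploy.json $CONTAINER_IMAGE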
Lots of good info in these posts but I find none of them are very clear about which container they are referring to. So let's label the 3 environments:
host: H
docker container running on H: D
docker container running in D: D2
We all know how to mount a folder from H into D: start D with
docker run ... -v <path-on-H>:<path-on-D> -v /var/run/docker.sock:/var/run/docker.sock ...
The challenge is: you want path-on-H to be available in D2 as path-on-D2.
But we all got bitten when trying to mount the same path-on-H into D2, because we started D2 with
docker run ... -v <path-on-D>:<path-on-D2> ...
When you share the docker socket on H with D, then running docker commands in D is essentially running them on H. Indeed if you start D2 like this, all works (quite unexpectedly at first, but makes sense when you think about it):
docker run ... -v <path-on-H>:<path-on-D2> ...
The next tricky bit is that for many of us, path-on-H will change depending on who runs it. There are many ways to pass data into D so it knows what to use for path-on-H, but probably the easiest is an environment variable. To make the purpose of such a variable clearer, I start its name with DIND_. Then from H start D like this:
docker run ... -v <path-on-H>:<path-on-D> --env DIND_USER_HOME=$HOME \
--env DIND_SOMETHING=blabla -v /var/run/docker.sock:/var/run/docker.sock ...
and from D start D2 like this:
docker run ... -v $DIND_USER_HOME:<path-on-D2> ...
Another way to go about this is to use either named volumes or data volume containers. This way, the container inside doesn't have to know anything about the host, and both the Jenkins container and the build container reference the data volume the same way.
I have tried doing something similar to what you are doing, except with an agent rather than using the Jenkins master. The problem was the same in that I couldn't mount the Jenkins workspace in the inner container. What worked for me was using the data volume container approach, and the workspace files were visible to both the agent container and the inner container. What I liked about the approach is that both containers reference the data volume in the same way. Mounting directories with an inner container would be tricky, as the inner container now needs to know something about the host that its parent container is running on.
I have detailed blog post about my approach here:
http://damnhandy.com/2016/03/06/creating-containerized-build-environments-with-the-jenkins-pipeline-plugin-and-docker-well-almost/
As well as code here:
https://github.com/damnhandy/jenkins-pipeline-docker
In my specific case, not everything is working the way I'd like it to in terms of the Jenkins Pipeline plugin. But it does address the issue of the inner container being able to access the Jenkins workspace directory.
Regarding your use case related to Jenkins, you can simply fake the path by creating a symlink on the host:
ln -s $HOST_JENKINS_DATA_DIRECTORY/jenkins_data /var/jenkins_home
If you are like me and don't want to mess with the Jenkins setup, or are too lazy to go through all this trouble, here is a simple workaround that got this working for me.
Step 1 - Add the following variables to the environment section of the pipeline
environment {
  ABSOLUTE_WORKSPACE = "/home/ubuntu/volumes/jenkins-data/workspace"
  JOB_WORKSPACE = "\${PWD##*/}"
}
Step 2 - Run your container from the Jenkins pipeline as follows.
steps {
  sh "docker run -v ${ABSOLUTE_WORKSPACE}/${JOB_WORKSPACE}/my/dir/to/mount:/targetPath imageName:tag"
}
Take note of the double quotes in the above statement; Jenkins will not expand the environment variables if the quotes are not formatted properly or if single quotes are used instead.
What does each variable signify?
ABSOLUTE_WORKSPACE is the path of the Jenkins volume which we mounted while starting the Jenkins Docker container. In my case, the docker run command was as follows:
sudo docker run \
-p 80:8080 \
-v /home/ubuntu/volumes/jenkins-data:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
-d -t jenkinsci/blueocean
Thus the variable ABSOLUTE_WORKSPACE = /home/ubuntu/volumes/jenkins-data + /workspace.
The JOB_WORKSPACE command gives us the current workspace directory where your code lives. This is also the root dir of your code base. I just followed this answer for reference.
How does this work?
It is very straightforward. As mentioned in @ZephyrPLUSPLUS's answer (credit where due), the source path for the docker container being run in the Jenkins pipeline is not a path in the current container; rather, the path taken is the host's path. All we are doing here is constructing the path where our Jenkins pipeline is being run and mounting it into our container. Voila!!
This also works via docker-compose and/or named volumes, so you don't need to create a data-only container, but you still need to have the empty directory on the host.
Host setup
Make host-side directories and set permissions to allow Docker containers to access them
sudo mkdir -p /var/jenkins_home/{workspace,builds,jobs} && sudo chown -R 1000 /var/jenkins_home && sudo chmod -R a+rwx /var/jenkins_home
docker-compose.yml
version: '3.1'
services:
  jenkins:
    build: .
    image: jenkins
    ports:
      - 8080:8080
      - 50000:50000
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - workspace:/var/jenkins_home/workspace/
      # Can also do builds/jobs/etc here and below
  jenkins-lts:
    build:
      context: .
      args:
        versiontag: lts
    image: jenkins:lts
    ports:
      - 8081:8080
      - 50001:50000
volumes:
  workspace:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /var/jenkins_home/workspace/
When you docker-compose up --build jenkins (you may want to incorporate this into a ready-to-run example like https://github.com/thbkrkr/jks, where the .groovy scripts pre-configure Jenkins to be useful on startup), you will be able to have your jobs clone into the $JENKINS_HOME/workspace directory and shouldn't get errors about missing files, because the host and container paths match; running further containers from within the Docker-in-Docker setup should then work as well.
Dockerfile (for Jenkins with Docker in Docker)
ARG versiontag=latest
FROM jenkins/jenkins:${versiontag}
ENV JAVA_OPTS="-Djenkins.install.runSetupWizard=false"
COPY jenkins_config/config.xml /usr/share/jenkins/ref/config.xml.override
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
USER root
RUN curl -L http://get.docker.io | bash && \
usermod -aG docker jenkins
# Since the above takes a while make any other root changes below this line
# eg `RUN apt update && apt install -y curl`
# drop back to the regular jenkins user - good practice
USER jenkins
EXPOSE 8080
A way to work around this issue is to mount a directory (inside your docker container in which you mounted your docker socket) using the exact same path for its destination. Then, when you run a container from within that container, you are able to mount anything within that mount's path into the new container using docker -v.
Take this example:
# Spin up your container from which you will use docker
docker run -v /some/dir:/some/dir -v /var/run/docker.sock:/var/run/docker.sock docker:latest
# Now spin up a container from within this container
docker run -v /some/dir:/usr/src/app $CONTAINER_IMAGE
The folder /some/dir is now mounted across your host, the intermediate container, and your destination container. Since the mount's path exists on both the host and the "nearly docker-in-docker" container, you can use docker -v as expected.
It's kind of similar to the suggestion of creating a symlink on the host, but I found this (at least in my case) a cleaner solution. Just don't forget to clean up the dir on the host afterwards! ;)
I had the same problem in GitLab CI. I solved it by using docker cp to do something like a mount:
script:
  - docker run --name ${CONTAINER_NAME} ${API_TEST_IMAGE_NAME}
after_script:
  - docker cp ${CONTAINER_NAME}:/code/newman ./
  - docker rm ${CONTAINER_NAME}
Based on the description mentioned by @ZephyrPLUSPLUS, here is how I managed to solve this:
vagrant@vagrant:~$ hostname
vagrant
vagrant@vagrant:~$ ls -l /home/vagrant/dir-new/
total 4
-rw-rw-r-- 1 vagrant vagrant 10 Jun 19 11:24 file-new
vagrant@vagrant:~$ cat /home/vagrant/dir-new/file-new
something
vagrant@vagrant:~$ docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker /bin/sh
/ # hostname
3947b1f93e61
/ # ls -l /home/vagrant/dir-new/
ls: /home/vagrant/dir-new/: No such file or directory
/ # docker run -it --rm -v /home/vagrant/dir-new:/magic ubuntu /bin/bash
root@3644bfdac636:/# ls -l /magic
total 4
-rw-rw-r-- 1 1000 1000 10 Jun 19 11:24 file-new
root@3644bfdac636:/# cat /magic/file-new
something
root@3644bfdac636:/# exit
/ # hostname
3947b1f93e61
/ # exit
vagrant@vagrant:~$ hostname
vagrant
vagrant@vagrant:~$
So docker is installed on a Vagrant machine. Let's call it vagrant. The directory you want to mount is /home/vagrant/dir-new in vagrant.
The docker run starts a container with hostname 3947b1f93e61. Notice that /home/vagrant/dir-new/ is not mounted in 3947b1f93e61.
Next we use the exact location from vagrant, /home/vagrant/dir-new, as the source of the mount and specify any mount target we want, in this case /magic. Also note that /home/vagrant/dir-new does not exist in 3947b1f93e61.
This starts another container, 3644bfdac636.
Now the contents from /home/vagrant/dir-new in vagrant can be accessed from 3644bfdac636.
I think this is because docker-in-docker is not a child but a sibling, and the path you specify must be the parent's path, not the sibling's path. So any mount will still refer to the path from vagrant, no matter how deep you go with docker-in-docker.
You can solve this by passing in an environment variable.
Example:
.
├── docker-compose.yml
└── my-volume-dir
└── test.txt
In docker-compose.yml
version: "3.3"
services:
test:
image: "ubuntu:20.04"
volumes:
- ${REPO_ROOT-.}/my-volume-dir:/my-volume
entrypoint: ls /my-volume
To test, run:
docker run -e REPO_ROOT=${PWD} \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ${PWD}:/my-repo \
-w /my-repo \
docker/compose \
docker-compose up test
You should see in the output:
test_1 | test.txt
