I'm building a network with Docker Compose and some bash scripts, and I'm having problems during the process. Basically, I have some containers and volumes.
In one of the containers I have to rename a file and copy it into a volume to make it accessible to the other containers.
The problem is that this file is regenerated with a different name every time I start the network (because it's a key), so I don't know its name.
If I try this command on the container:
docker exec -it containerName cp /path_in_container/* /volume/key.pem
Docker gives me an error related to the path. The same thing happens if I use
docker exec -it containerName cp /path_in_container/. /volume/key.pem
If I insert the real name this way:
docker exec -it containerName cp /path_in_container/2164921649_sk /volume/key.pem
I have no problem but, as I already explained, I can't know its name.
I tried to solve the problem by copying the file from the linked volume folder directly on my system but, since the folder is protected, I need to use:
sudo chown -R user:user /tmp/path/*
In this case, the problem is that if I put the chown command in a bash script, I then have to enter the password, and it doesn't always work.
So I would like to either copy the file directly from the container by making a copy of all the files in the folder, or have the bash script run the chown command, before the various copy operations, without entering the password.
Can someone help me?
Thanks
EDIT:
This is the part of the code relevant to the problem:
#Copy TLS-CA certificate
docker exec -it tls-ca cp /tmp/hyperledger/fabric-ca/admin/msp/cacerts/tls-ca-7051.pem /certificates/tls-ca-7051.pem
echo "Start operation for ORG0"
#ENROLL ORDERER
# for identity
docker exec -it rca-org0 fabric-ca-client enroll -d -u https://orderer1-org0:ordererpw@rca-org0:7052 --tls.certfiles /tmp/hyperledger/fabric-ca/admin/msp/cacerts/rca-org0-7052.pem --home /tmp/hyperledger/fabric-ca-enrollment/orderer --mspdir msp
sleep 5
# for TLS
docker exec -it rca-org0 fabric-ca-client enroll -d -u https://orderer1-org0:ordererPW@tls-ca:7051 --enrollment.profile tls --csr.hosts orderer1-org0 --tls.certfiles /certificates/tls-ca-7051.pem --home /tmp/hyperledger/fabric-ca-enrollment/orderer --mspdir tls-msp
sleep 5
#ENROLL ADMIN USER
docker exec -it rca-org0 fabric-ca-client enroll -d -u https://admin-org0:org0adminpw@rca-org0:7052 --tls.certfiles /tmp/hyperledger/fabric-ca/admin/msp/cacerts/rca-org0-7052.pem --home /tmp/hyperledger/fabric-ca-enrollment/admin/ --mspdir msp
sleep 5
#CREATE NECESSARY FOLDERS
docker exec rca-org0 cp /tmp/hyperledger/fabric-ca-enrollment/orderer/tls-msp/keystore/*
chown -R fabrizio:fabrizio /tmp/hyperledger/*
mv /tmp/hyperledger/org0/orderer/tls-msp/keystore/* /tmp/hyperledger/org0/orderer/tls-msp/keystore/key.pem
mkdir -p /tmp/hyperledger/org0/orderer/msp/admincerts
cp /tmp/hyperledger/org0/admin/msp/signcerts/cert.pem /tmp/hyperledger/org0/orderer/msp/admincerts/orderer-admin-cert.pem
mkdir /tmp/hyperledger/org0/msp
mkdir /tmp/hyperledger/org0/msp/{admincerts,cacerts,tlscacerts,users}
cp /tmp/hyperledger/org0/ca/admin/msp/cacerts/rca-org0-7052.pem /tmp/hyperledger/org0/msp/cacerts/org0-ca-cert.pem
cp /tmp/hyperledger/certificates/tls-ca-7051.pem /tmp/hyperledger/org0/msp/tlscacerts/tls-ca-cert.pem
cp /tmp/hyperledger/org0/admin/msp/signcerts/cert.pem /tmp/hyperledger/org0/msp/admincerts/admin-org0-cert.pem
cp ./org0-config.yaml /tmp/hyperledger/org0/msp/config.yaml
In the script you show, you run a series of one-off commands in an existing container, and then need to manage the container filesystem. It might be more straightforward to script a series of docker run commands that use docker run -v bind mounts to inject input files into the container and to get the output files back out.
docker run --rm \
-v "$PWD/cacerts:/cacerts" \
-v "$PWD/certs:/certs" \
image-for-fabric-ca-client \
fabric-ca-client enroll \
-d \
-u https://orderer1-org0:ordererpw@rca-org0:7052 \
--tls.certfiles /cacerts/rca-org0-7052.pem \
--home /certs \
--mspdir msp
If this invocation has the TLS CA certificates used as input in ./cacerts, and the resulting TLS server certificates as output in ./certs, then you've "escaped" Docker space; you can use ordinary shell commands here.
mv ./certs/*_sk ./certs/key.pem
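If you want to be defensive about that glob (for example, if the keystore directory could ever hold more than one file), a small sketch like this keeps only the most recently generated key; the ./certs layout is the same as above:
# pick the newest *_sk file, whatever its generated name is
key_file=$(ls -t ./certs/*_sk | head -n 1)
mv "$key_file" ./certs/key.pem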
Depending on what the fabric-ca-client enroll command actually does, it might be possible to run it as the same user ID as on the host:
docker run \
-u $(id -u) \
-v "$PWD/certs:/certs" \
...
So long as the host ./cacerts directory is world-readable and the ./certs directory is writable by the current user, the main container process will run as the same (numeric) user ID as on the host, and the files will be readable without chown.
In general I'd recommend avoiding docker exec and docker cp in scripts, in much the same way you don't use a debugger like gdb for routine tasks like generating CA certificates.
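As an aside, this is also why your docker exec ... cp /path_in_container/* attempts failed: docker exec does not start a shell inside the container, so nothing expands the wildcard and cp receives a literal * path. If you do need the exec approach, wrapping the command in a shell makes the glob expand inside the container:
docker exec containerName sh -c 'cp /path_in_container/* /volume/key.pem'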
Also consider the possibility that you may need to run this script as root anyway. TLS private keys typically aren't readable by anyone but their owner (mode 0600 or 0400), and you might need to chown the files to the eventual container users, which will require root access. Also note that, in the last docker run invocation, nothing stops you from specifying -u root or mounting a "system" host directory with -v /host-etc:/etc, so it's very easy to use docker run to root the host; on many systems, access to the Docker socket is therefore very reasonably restricted to require sudo access.
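To illustrate the chown point above (the UID:GID here is purely hypothetical; use whatever user the consuming container actually runs as):
sudo chown 7051:7051 ./certs/key.pem
sudo chmod 0400 ./certs/key.pem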
I'm trying to mount a directory in Docker run:
docker run --restart always -t -v /home/dir1/dir2/dir3:/dirX --name [...]
But I get the error:
error while creating mount source path '/home/dir1/dir2/dir3': mkdir /home/dir1/dir2/dir3: permission denied.
All the directories definitely exist, and the strange thing is that when I try to mount dir2 instead of dir3, it works fine:
docker run --restart always -t -v /home/dir1/dir2/:/dirX --name [...] # THIS IS WORKING
All the directories ('dir2' and 'dir3') have the same permissions: drwxr-x---
Any suggestions on what might be the problem? Why does one work and the other doesn't?
Thanks
Check the permissions of the folder you're trying to mount into Docker with ls -la; you might need to modify the permissions with chmod.
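For example, something like this would make dir3 traversable and readable by everyone (adjust the mode to your own security requirements):
sudo chmod o+rx /home/dir1/dir2/dir3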
If you don't want to modify permissions, just add sudo to the beginning of the command.
sudo docker run --restart always -t -v /home/dir1/dir2/dir3:/dirX --name [...]
When I try to run simple docker commands like:
$ docker ps -a
I get an error message:
Got permission denied ... /var/run/docker.sock: connect: permission denied
When I check permissions with
$ ls -al /var/run/
I see this line:
srw-rw---- root docker docker.sock
So, I follow the advice from many forums and add my local user to the docker group:
$ sudo usermod -aG docker $USER
But it does not help. I still get the very same error message. How can I fix it?
For those new to the shell, the command:
$ sudo usermod -aG docker $USER
needs to have $USER defined in your shell. This is often there by default, but you may need to set the value to your login id in some shells.
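For example, you can set it explicitly from your login id (id -un prints the login name of the current user):
$ export USER=$(id -un)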
Changing the groups of a user does not change existing logins, terminals, and shells that a user has open. To avoid performing a login again, you can simply run:
$ newgrp docker
to get access to that group in your current shell.
Once you have done this, the user effectively has root access on the server, so only do this for users that are trusted with unrestricted sudo access.
Reason: The error message means that the current user can't access the docker engine, because the user doesn't have enough permissions to access the UNIX socket used to communicate with the engine.
Quick Fix:
Run the command as root using sudo.
sudo docker ps
Change the permissions of /var/run/docker.sock for the current user.
sudo chown $USER /var/run/docker.sock
Caution: Running sudo chmod 777 /var/run/docker.sock will solve your problem, but it will open the docker socket for everyone, which is a security vulnerability, as pointed out by @AaylaSecura. Hence it shouldn't be used, except for testing purposes on the local system.
Permanent Solution:
Add the current user to the docker group.
sudo usermod -a -G docker $USER
Note: You have to log out and log in again for the changes to take effect.
Refer to this blog to know more about managing Docker as a non-root user.
Make sure your $USER variable is set
$ echo $USER
$ sudo usermod -aG docker $USER
logout
Upon login, restart the docker service
$ sudo systemctl restart docker
$ docker ps
Enter the command below and explore docker without the sudo command:
sudo chmod 666 /var/run/docker.sock
As mentioned earlier in the comments, the changes won't apply until you re-login. If you had been using SSH and opening a new terminal, it would have worked in the new terminal.
But since you were using the GUI and opening a new terminal there, the changes were not applied; that is the reason the error didn't go away.
So the command below did do its job, it's just that a re-login was missed:
sudo usermod -aG docker $USER
You need to manage docker as a non-root user.
To create the docker group and add your user:
Create the docker group.
$ sudo groupadd docker
Add your user to the docker group.
$ sudo usermod -aG docker $USER
Log out and log back in so that your group membership is re-evaluated.
If testing on a virtual machine, it may be necessary to restart the virtual machine for changes to take effect.
On a desktop Linux environment such as X Windows, log out of your session completely and then log back in.
On Linux, you can also run the following command to activate the changes to groups:
$ newgrp docker
Verify that you can run docker commands without sudo.
$ docker run hello-world
As my user is an AD user, I have to add the AD user to the local group by manually editing the /etc/group file. Unfortunately, the adduser commands do not seem to be nsswitch-aware, and do not recognize a user that is not locally defined when adding someone to a group.
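For example, the edited line might look like this (the GID here is hypothetical, use whatever your system assigned to the docker group; the last field is a comma-separated member list):
docker:x:998:your_ad_user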
Then reboot or refresh /etc/group. Now, you can use docker without sudo.
Regards.
Important note on these answers: the docker group is not always "docker"; sometimes it is "dockerroot", for example in the case of a CentOS 7 installation via
sudo yum install -y docker
The first thing you should do, after installing Docker, is
sudo tail /etc/group
it should say something like
......
sshd:x:74:
postdrop:x:90:
postfix:x:89:
yourusername:x:1000:yourusername
cgred:x:996:
dockerroot:x:995:
In this case, it is "dockerroot" not "docker". So,
sudo usermod -aG dockerroot yourusername
logout
When I try to run simple docker commands like: $ docker ps -a
I get an error message: Got permission denied ... /var/run/docker.sock: connect: permission denied.
[…] How can I fix it?
TL;DR: There are two ways. The first one (also mentioned in the question itself) was extensively addressed by other answers, but it comes with security concerns, so I'll elaborate on that issue and develop a second solution that can also be applicable for this fairly sensitive use case.
Just to recall the context, the Docker daemon socket is owned by root:docker:
$ ls -l /var/run/docker.sock
srw-rw---- 1 root docker 0 janv. 28 14:23 /var/run/docker.sock
so with this default setup, one needs to prefix all docker CLI commands with sudo.
To avoid this, one can either:
add one's user account ($USER) to the docker group − but that's quite risky to do on one's personal workstation, as it would amount to providing all programs run by the user with root permissions, without any sudo password prompt or auditing.
See also:
this page in the official Docker documentation:
https://docs.docker.com/engine/security/#docker-daemon-attack-surface
this page that documents the related exploit:
https://fosterelli.co/privilege-escalation-via-docker.html
one can otherwise prepend sudo automatically, without typing sudo docker manually: to this end, one solution consists in adding the following function and alias to ~/.bashrc (see e.g. this thread for details):
__docker() {
    if [[ "${BASH_SOURCE[*]}" =~ "bash-completion" ]]; then
        docker "$@"
    else
        sudo docker "$@"
    fi
}
alias docker=__docker
Then one can test this by opening a new terminal and typing:
docker run --pul〈TAB〉 # → docker run --pull
# autocompletion works
docker run --pull always --rm -it debian:11 # asks for one's password
\docker run --help # bypasses the alias (thanks to the \) and asks for no password
With the help of the command below, I was able to execute docker commands without sudo:
sudo setfacl -m user:$USER:rw /var/run/docker.sock
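You can inspect the resulting ACL with getfacl; the output should now include an entry for your user, something like:
$ getfacl /var/run/docker.sock
# file: var/run/docker.sock
# owner: root
# group: docker
user::rw-
user:youruser:rw-
group::rw-
mask::rw-
other::---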
Bash into the container as the root user:
docker exec -it --user root <dc5> bash
Create the docker group if it's not already created:
groupadd -g 999 docker
Add the user to the docker group:
usermod -aG docker jenkins
Change the permissions:
chmod 777 /var/run/docker.sock
You have to use the pns executor instead of docker.
Run the following patch, which modifies the configmap, and you are all set:
kubectl -n argo patch cm workflow-controller-configmap -p '{"data": {"containerRuntimeExecutor": "pns"}}' ;
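You can confirm the change took effect by reading the value back:
kubectl -n argo get cm workflow-controller-configmap -o jsonpath='{.data.containerRuntimeExecutor}'
# should print: pns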
ref: https://www.youtube.com/watch?v=XySJb-WmL3Q&list=PLGHfqDpnXFXLHfeapfvtt9URtUF1geuBo&index=2&t=3996s
I am logged in on my PC (Fedora 24) as rperez. I have set up Docker so it can be run by this user, so I am running a container as follows:
$ docker run -d \
-it \
-e HOST_IP=192.168.1.66 \
-e PHP_ERROR_REPORTING='E_ALL & ~E_STRICT' \
-p 80:80 \
-v ~/var/www:/var/www \
--name php55-dev reypm/php55-dev
Notice the $ sign, meaning I am running the command as a non-root user (root's prompt uses #). The command above creates the directory /home/rperez/var/www, but its owner is set to root. I believe this is because Docker runs as the root user behind the scenes.
With this setup I am not able to create a file under ~/var/www as rperez, because the owner is root, so ...
What is the right way to deal with this? I have read this and this, but they were not so helpful.
Any help?
As discussed here, this is expected behavior of Docker. You can create the target volume directory before running the docker command, or change the owner to your current user after the directory is created by Docker:
chown $(whoami) -R /path/to/your/dir
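And for the first option, pre-creating the directory as your own user so Docker never has to create it as root, a minimal sketch reusing the bind mount from the question:
mkdir -p ~/var/www   # created by rperez, so rperez stays the owner
docker run -d -t -v ~/var/www:/var/www --name php55-dev reypm/php55-dev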
I hit this same issue (also in a genomics context, for the very same reason) and also found it quite unintuitive. What is the recommended way to "inherit ownership"? Sorry if this is described elsewhere, but I couldn't find it. Is it something like:
docker run ... -u $(id -u):$(id -g) ...
I've played around a lot with permission combinations to make Docker work, but first, my environment:
Ubuntu linux 15.04 and Docker version 1.5.0, build a8a31ef.
I have a directory '/test/dockervolume' and two users, user1 and user2, in a group users:
chown user1.users /test/dockervolume
chmod 775 /test/dockervolume
ls -la
drwxrwxr-x 2 user1 users 4096 Oct 11 11:57 dockervolume
Both user1 and user2 can write and delete files in this directory.
I use the standard docker ubuntu:15.04 image. user1 has id 1000 and user2 has id 1002.
I run docker with the following command:
docker run -it --volume=/test/dockervolume:/tmp/job_output --user=1000 --workdir=/tmp/job_output ubuntu:15.04
Within docker I just do a simple 'touch test' and it works for user1 with id 1000. When I run docker with --user 1002, I can't write to that directory:
I have no name!@6c5e03f4b3a3:/tmp/job_output$ touch test2
touch: cannot touch 'test2': Permission denied
I have no name!@6c5e03f4b3a3:/tmp/job_output$
Just to be clear, both users can write to that directory when not in docker.
So my question is: is this behavior by Docker design, is it a bug, or did I miss something in the manual?
Docker's --user parameter changes just the user id, not the group id, within the container. So, within docker I have:
id
uid=1002 gid=0(root) groups=0(root)
and it is not like in original system where I have groups=1000(users)
So, one workaround might be mapping the passwd and group files into the container:
-v /etc/docker/passwd:/etc/passwd:ro -v /etc/docker/group:/etc/group:ro
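Combined with the run command from the question, the workaround might look like this (assuming you have prepared suitable copies of passwd and group under /etc/docker):
docker run -it \
  -v /etc/docker/passwd:/etc/passwd:ro \
  -v /etc/docker/group:/etc/group:ro \
  --volume=/test/dockervolume:/tmp/job_output \
  --user=1002 --workdir=/tmp/job_output \
  ubuntu:15.04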
The other idea is to mount a temporary directory owned by the user passed to --user, and to copy the files to their final location once the container's work is complete:
TMPDIR=$(mktemp -d); docker run -v "$TMPDIR":/working_dir/ --user=$(id -u) <image>; cp -r "$TMPDIR"/. "$NEWDIR"
This discussion, Understanding user file ownership in docker: how to avoid changing permissions of linked volumes, sheds some light on my question.
For both correct uid and gid mapping try: docker run --user=$(id -u):$(id -g)
Avoid using another user, because the UID will be different and you can't be sure about the user name. You can use root without problems inside the container.