I'm trying to get my Docker container to read and write to a host directory.
I run the container with:
docker run -it -v $(pwd):/file logstash-5.1.2
Inside the container, I can see that /file has the uid of my (non-root) user on the host, and the same permissions as that on the host:
drwxrwxrwx. 2 1156 1156 4096 Jul 21 05:00 file
and that root can't access /file.
root@c642b0c37e09:~# ls /file
ls: cannot open directory /file: Permission denied
I've read posts about creating a user in the container with the same uid as the host, but that seems to be frowned upon.
Why can't root access the directory? I thought it could do everything.
What's the best way to have the container read and write to the mounted directory, which is not owned by root, in Docker?
We're also using Rancher. Does that make it easier? I haven't yet come across something different there, mainly as I'm trying to see if I can do it purely within Docker.
You should change the SELinux context to svirt_sandbox_file_t to let the container access this folder.
If you are sure the folder permissions are already correct, just try:
chcon -R -t svirt_sandbox_file_t /your/host/path
If you are not sure, try:
chown -R userId:groupId /your/host/path
chcon -R -t svirt_sandbox_file_t /your/host/path
Here the chcon command recursively changes the SELinux context of /your/host/path to the svirt_sandbox_file_t type, which containers are allowed to access.
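If you want to confirm what changed, something like this should show the label before and after (the path is the placeholder from above):
ls -dZ /your/host/path
sudo chcon -R -t svirt_sandbox_file_t /your/host/path
ls -dZ /your/host/path    # the type field should now read svirt_sandbox_file_t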
I have a program called HelloWorld belonging to user test
HelloWorld creates a file HelloWorld.pid in /var/run to enforce a single instance.
I used the following command to try to give test access to /var/run:
usermod -a -G root test
However, when I run it, it fails.
Could someone help me?
What are the permissions on /var/run? On my system, /var/run is rwxr-xr-x, which means only the user root can write to it. The permissions do not allow write access by members of the root group.
The normal way of handling this is by creating a subdirectory of /var/run that is owned by the user under which you'll be running your service. E.g.,
sudo mkdir /var/run/helloworld
sudo chown myusername /var/run/helloworld
Note that /var/run is often an ephemeral filesystem that disappears when your system reboots. If you would like your target directory to be created automatically when the system boots you can do that using the systemd tmpfiles service.
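For example, a tmpfiles.d entry roughly like this (the path and user name are just the ones from the example above, and the group is assumed to match the user) recreates the directory on every boot:
# /etc/tmpfiles.d/helloworld.conf
d /var/run/helloworld 0755 myusername myusername -
You can apply it immediately, without rebooting, with:
sudo systemd-tmpfiles --create /etc/tmpfiles.d/helloworld.conf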
Some Linux systems store per-user runtime files in /var/run/user/UID/.
In this case you can create your pid file in /var/run/user/$(id -u test)/HelloWorld.pid.
Alternatively just use /tmp.
You may want to use the user's name as a prefix to the pid filename to avoid collision with other users, for instance /tmp/test-HelloWorld.pid.
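A minimal sketch of the single-instance check using such a pid file (the path follows the suggestion above; the real work is left as a placeholder):
#!/bin/bash
PIDFILE=/tmp/test-HelloWorld.pid
# If the pid file exists and that process is still alive, refuse to start a second instance.
if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    echo "HelloWorld is already running (pid $(cat "$PIDFILE"))" >&2
    exit 1
fi
echo $$ > "$PIDFILE"
trap 'rm -f "$PIDFILE"' EXIT
# ... do the actual work here ...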
I'm kind of fighting with privileges (no troll) for my Docker project. I'm trying to make a user inside Docker able to read/write a volume shared with the host, while a host user should also be able to read/write that directory alongside the Docker user.
In my case, neither the Docker user nor the host user should be root. That means that, on the shared volume, the user running Docker shouldn't be able to reach files that aren't his. However, I discovered that mounting a volume as a user without root privileges does not protect root's files.
Example
For instance, in the following situation
A directory with two files, one owned by root and one not; the user's name is user, he has no root privileges, but he is part of the docker group:
C:/.../directory :
drwxr-x--x root file1
drwxr-x--x user file2
The user runs Docker with the command:
docker run -v /c/.../directory:/volume:rw -e USER_ID=$(id -u) -e GROUP_ID=$(id -g)
And the entrypoint of the Docker image is the following script.sh:
#!/bin/bash
# Give dockeruser the host user's uid/gid, then take ownership of the mounted volume.
usermod -u ${USER_ID} dockeruser
groupmod -g ${GROUP_ID} dockeruser
chown -R dockeruser:dockeruser /volume
exit
The permissions on the host's directory get changed, even for root's file, which I shouldn't have been able to write to:
C:/.../directory :
drwxr-x--x user file1
drwxr-x--x user file2
Is it normal that a user who isn't root can do anything to files that don't belong to him?
I'm pretty much a beginner, so I don't know whether this is a real vulnerability (forcing the user not to be root or sudo apparently changes nothing), or whether I'm just getting it wrong ^^. Feel free to tell me if this isn't the way I should handle it.
Regards,
Waldo
I have a problem with creating new files in a mounted Docker volume.
First, after installing Docker, I added my user to the docker group:
sudo usermod -aG docker $USER
As my $USER, I created a folder:
mkdir -p /srv/redis
And started the container:
docker run -d -v /srv/redis:/data --name myredis redis
When I want to create a file in /srv/redis as the user that created the container, I get a permission error:
mkdir /srv/redis/redisTest
mkdir: cannot create directory ‘/srv/redis/redisTest’: Permission denied
I tried to search other threads but didn't find an appropriate solution.
The question title does not reflect the real problem in my opinion.
mkdir /srv/redis/redisTest
mkdir: cannot create directory ‘/srv/redis/redisTest’: Permission denied
This problem very likely occurs because when you run:
docker run -d -v /srv/redis:/data --name myredis redis
the ownership of the directory /srv/redis changes to root. You can check that with:
ls -lah /srv/redis
This is a normal consequence of mounting an external directory into a container. To regain access you have to run:
sudo chown -R $USER /srv/redis
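If you would rather not re-run chown after every container start, another option (assuming the image is happy to run as an arbitrary non-root user, which the official redis image generally is) is to start the container as your own uid/gid so that files created under /data get your ownership:
docker run -d -v /srv/redis:/data --user $(id -u):$(id -g) --name myredis redis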
I think the /srv/redis/redisTest directory was created by a user inside the redis container, so it belongs to that container user.
Have you already checked with ls -l whether /srv/redis/redisTest belongs to $USER?
This could also be related (as I just found out) to having SELinux activated. This answer on the DevOps Stack Exchange worked for me:
The solution is to simply append a :z to the [docker] run volume argument so that this:
docker run -v /host/foobar:/src_dir /bin/bash
becomes this:
docker run -it -v /host/foobar:/src_dir:z /bin/bash
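Docker also accepts an uppercase variant: :z applies a shared SELinux label so several containers can use the volume, while :Z applies a private, unshared label for a single container:
docker run -it -v /host/foobar:/src_dir:Z /bin/bash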
I am trying to transfer files to my Google Cloud hosted Linux (Debian) instance via secure copy (scp). I did exactly what the documentation told me to do to connect from a local machine to the instance: https://cloud.google.com/compute/docs/instances/connecting-to-instance.
Created an SSH key pair
Added the public key to my instance
I can login successfully by:
ssh -i ~/.ssh/my-keygen [USERNAME]@[IP]
But when I want to copy files to the instance I get a message "permission denied".
scp -r -i ~/.ssh/my-keygen /path/to/directory/ [USERNAME]@[IP]:/var/www/html/
It looks like the user with which I login has no permissions to write files, so I already tried to change the file permissions of /var/www/, but this still gives the permission denied message.
I also tried to add the user to the root group, but this still gives the same problem.
usermod -G root myuser
The command line should be
scp -r -i ~/.ssh/my-keygen /path/to/directory/ [USERNAME]@[IP]:/var/www/html/
Assuming your files are in the local /path/to/directory/ and the /var/www/html/ is on the remote server.
The permissions do not allow writing to /var/www/html/. Writing to /tmp/ should work. From there you can copy the files to the desired destination with sudo, i.e. with root privileges.
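For example (paths and key are the ones from the question, "directory" is the placeholder directory name, and this assumes passwordless sudo on the instance, which is the default on Compute Engine):
scp -r -i ~/.ssh/my-keygen /path/to/directory/ [USERNAME]@[IP]:/tmp/
ssh -i ~/.ssh/my-keygen [USERNAME]@[IP] 'sudo mv /tmp/directory /var/www/html/'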
If SSH isn't working, install the gcloud CLI and run the following locally: gcloud compute scp --recurse /path/to/directory [INSTANCE_NAME]:~ --tunnel-through-iap. This will dump the directory into your /home/[USERNAME]/ folder. Then log into the console and use sudo to move the directory to /var/www/html/.
For documentation, see https://cloud.google.com/sdk/gcloud/reference/compute/scp.
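The follow-up move on the instance could look like this ([INSTANCE_NAME] and the directory name are placeholders; the second command runs on the instance once you are logged in):
gcloud compute ssh [INSTANCE_NAME] --tunnel-through-iap
sudo mv ~/directory /var/www/html/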
I've played a lot with all kinds of permission combinations to make Docker work, but... first, my environment:
Ubuntu Linux 15.04 and Docker version 1.5.0, build a8a31ef.
I have a directory /test/dockervolume and two users, user1 and user2, in a group users:
chown user1.users /test/dockervolume
chmod 775 /test/dockervolume
ls -la
drwxrwxr-x 2 user1 users 4096 Oct 11 11:57 dockervolume
Both user1 and user2 can write and delete files in this directory.
I use standard docker ubuntu:15.04 image. user1 has id 1000 and user2 has id 1002.
I run Docker with the following command:
docker run -it --volume=/test/dockervolume:/tmp/job_output --user=1000 --workdir=/tmp/job_output ubuntu:15.04
Within the container I just do a simple touch test, and it works for user1 with id 1000. When I run the container with --user 1002 I can't write to that directory:
I have no name!@6c5e03f4b3a3:/tmp/job_output$ touch test2
touch: cannot touch 'test2': Permission denied
I have no name!@6c5e03f4b3a3:/tmp/job_output$
Just to be clear, both users can write to that directory when not in Docker.
So my question is: is this behavior by Docker design, is it a bug, or did I miss something in the manual?
Docker's --user parameter changes just the uid, not the group id, inside the container. So, within the container I have:
id
uid=1002 gid=0(root) groups=0(root)
and it is not like on the original system, where I have groups=1000(users).
So, one workaround might be mapping the passwd and group files into the container:
-v /etc/docker/passwd:/etc/passwd:ro -v /etc/docker/group:/etc/group:ro
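A sketch of how those read-only mounts might be combined with the run command from the question (the copies under /etc/docker/ are assumed to already exist, e.g. copied from the host's /etc/passwd and /etc/group):
docker run -it \
  -v /etc/docker/passwd:/etc/passwd:ro \
  -v /etc/docker/group:/etc/group:ro \
  --volume=/test/dockervolume:/tmp/job_output \
  --user=1002 --workdir=/tmp/job_output ubuntu:15.04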
The other idea is to mount a temporary directory owned by the uid passed to --user, and when the container's work is complete copy the files to their final location:
TMPDIR=$(mktemp -d); docker run -v $TMPDIR:/working_dir/ --user=$(id -u) ...; cp -r $TMPDIR/. $NEWDIR
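Spelled out step by step, with the image from the question standing in for whatever the container actually runs (touch is just a stand-in workload, and $NEWDIR is the final destination from the one-liner above), that workaround might look like:
TMPDIR=$(mktemp -d)                                           # scratch directory owned by the current user
docker run -v "$TMPDIR":/working_dir --user=$(id -u) ubuntu:15.04 touch /working_dir/result
cp -r "$TMPDIR"/. "$NEWDIR"                                   # copy the results to their final location
rm -rf "$TMPDIR"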
This discussion Understanding user file ownership in docker: how to avoid changing permissions of linked volumes sheds some light on my question.
For both correct uid and gid mapping try: docker run --user=$(id -u):$(id -g)
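Applied to the command from the question, that would be:
docker run -it --volume=/test/dockervolume:/tmp/job_output --user=$(id -u):$(id -g) --workdir=/tmp/job_output ubuntu:15.04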
Avoid using another user, because the UID will be different and you can't be sure about the user name. You can use root inside the container without problems.