How to access host machine from Docker container? - linux

I have an application written in PHP and I need to work with the host machine: create a Linux user, copy some files, etc.

Run this program as root outside a container on the host system.
One of the core features of Docker is that a container isn't normally allowed to, for example, reset the host root user's password by writing /etc/shadow, or more generally read or write the host filesystem at all. A container is similarly normally forbidden from other system-management tasks like changing the host network configuration. This filesystem isolation, keeping a container process from corrupting key system files, is a primary reason to use Docker at all, and it can't be trivially disabled.
So in particular, "create a user" is precisely the class of dangerous-to-the-host operations that a container process is, by design, forbidden from doing. More generally, "copy files" is merely hard rather than forbidden, but a task whose main purpose is reading and writing host files will generally be much easier to run outside a container.
In theory you can accomplish some of this using bind mounts. For the "copy files" part of the task, in principle you can run something like
docker run --rm \
-v "$PWD/here:/input" \
-v "$PWD/there:/output" \
your-image
and in the container, /input and /output will be the host's ./here and ./there directories.
It's possible to mount the entire host filesystem, -v /:/host for example. You could in theory use this to edit /host/etc/passwd, or possibly even to chroot(8) back into the host system and effectively escape the container. But at this point you're not really getting much benefit from Docker, and it'll be much easier to run the task outside a container.
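For illustration only, a minimal sketch of that (the image name is arbitrary; this effectively hands the container full control of the host, which is exactly why it defeats the point of Docker):
# mount the entire host filesystem and chroot back into it
docker run --rm -it -v /:/host alpine chroot /host /bin/sh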

I did it via SSH & host.docker.internal:
Dockerfile:
RUN apt-get update && apt-get upgrade -y && apt-get install -y ssh
# ...
COPY ./.ssh /root/.ssh
RUN chmod 700 /root/.ssh && chmod 644 /root/.ssh/* && chmod 600 /root/.ssh/id_rsa
docker-compose.yml:
extra_hosts:
- 'host.docker.internal:host-gateway'
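With those pieces in place, the container can reach the host over SSH and run commands there; a minimal sketch (the remote user name and the command are assumptions, not part of the answer):
# from inside the container: run a host-management command over SSH via the gateway alias
ssh -i /root/.ssh/id_rsa -o StrictHostKeyChecking=no someuser@host.docker.internal 'sudo useradd -m newuser'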

Related

Mount point name for a docker volume

With the below command:
docker container run -dit --name testcontainer --mount source=ubervol,target=/vol alpine:latest
the source mount point name is ubervol, pointing to the target /vol that resides within the container, as shown below:
user@machine:~$ docker container exec -it b4fd sh
/ # pwd
/
/ # ls vol
vol
ubervol sits outside the container, at the /var/lib/docker/volumes/ubervol path on the host machine (the machine running the Docker daemon).
With the below Dockerfile:
# Create the folders and volume mount points
RUN mkdir -p /var/log/jenkins
RUN chown -R jenkins:jenkins /var/log/jenkins
RUN mkdir -p /var/jenkins_home
RUN chown -R jenkins:jenkins /var/jenkins_home
VOLUME ["/var/log/jenkins", "/var/jenkins_home"]
my understanding is that each target sits within the container, at /var/log/jenkins and /var/jenkins_home.
What is the source mount point name for each target (/var/log/jenkins and /var/jenkins_home)?
What is the path of this mount point on the host machine?
The location of the volume data on the host is an implementation detail that you shouldn't try to take advantage of. In some environments, like the Docker Desktop for Mac application, the data will be hidden away inside a virtual machine you can't directly access. While I've rarely encountered one, there are also alternate volume drivers that would let you store the content somewhere else.
Every time you docker run a container based on an image that declares a VOLUME, if you don't mount something else with a -v option, Docker will create an anonymous volume and mount it there for you (in the same way as if you didn't specify a --mount source=...). If you start multiple containers from the same image, I believe each gets a new volume (with a different host path, if there is one). The Dockerfile cannot control the location of the volume on the host; the operator could mount a different named volume or a host directory instead.
In practice there's almost no point to using VOLUME in a Dockerfile. You can use docker run -v whether or not there's a VOLUME for the same directory. Its principal effect is to prevent future RUN commands from modifying that directory.
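If you do need to see where a volume's data lives on a native Linux host, docker volume inspect will tell you; a quick sketch using the volume name from the question (treat the path as an implementation detail of the default local driver):
docker volume inspect ubervol --format '{{ .Mountpoint }}'
# typically prints something like /var/lib/docker/volumes/ubervol/_data
# for anonymous volumes created by VOLUME, find their names via the container's mounts:
docker container inspect testcontainer --format '{{ json .Mounts }}'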

Docker "permission denied" in container

I am trying to run a docker image by
docker run -it -v $PWD/examples:/home/user/examples image
which should make $PWD/examples in the host accessible in the container. However when I ls in the container, it keeps giving me
ls: cannot access 'examples': Permission denied
I have tried the answers for similar questions, the z/Z option, chcon -Rt svirt_sandbox_file_t /host/path/ and run --privileged, but none of them has any effect in my case.
In fact, the z option appears to work for the first ls, but when I issue ls a second time it is denied again.
In the comments it turned out that there is probably a USER instruction in the Dockerfile of the image. This user is not allowed to access examples because of the directory's file access permissions.
It is possible to override USER with the docker run option --user.
A quick and dirty solution is to run with --user=root to allow arbitrary access.
Be aware that files written as root inside the container to the examples folder will be owned by root on the host.
A better solution is to look up the owner of examples, call it foo, and specify its user ID and group ID so the container uses exactly the same user:
docker run --user $(id -u foo):$(id -g foo) imagename
Another possible solution is to allow arbitrary access with chmod 666 examples or chmod 644 examples (for a directory you would also need the execute bit, e.g. chmod 755), but most probably you don't want that.
The best way would be to look at the Dockerfile and check the purpose of the USER instruction.
If it only serves the purpose of avoiding root in the container, the best way is to use --user=foo or, more precisely, --user=$(id -u foo):$(id -g foo).
If something in the Dockerfile/image relies on a specific USER, it may be best to change the access permissions of examples.
If you have access to the Dockerfile, you may adjust it to fit your host user/the owner of examples.
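Concretely, assuming the host directory is owned by a user named foo, the check-and-run sequence might look like this (a sketch; the image name and mount path come from the question):
# see who owns the bind-mounted directory on the host
ls -ld examples
# run the container with the same numeric uid:gid so the in-container process can read it
docker run -it --user "$(id -u foo):$(id -g foo)" -v "$PWD/examples:/home/user/examples" image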
Try running the container as privileged:
sudo docker run --privileged=true -itd -v /***/***:/*** ubuntu bash
for example:
sudo docker run --privileged=true -itd -v /home/willie:/wille ubuntu bash

Docker container containing a shell script requiring user input and manipulating the network interfaces of the host

I have a simple shell script which generates delay, jitter and packet loss using the tc-netem package. You can check the script here. My idea is to put it into a container and execute it on my host using an alias, something like echo "alias netsim='sudo docker run -v \$(pwd):/home/netsim/ docker-image'" >> ~/.bashrc, and then when I run netsim on my host it should ask me for all the user input. The container should also control the host network interfaces, and I am not even sure whether this is possible. My Dockerfile so far is the following:
FROM alpine:3.7
WORKDIR /home/netsim
RUN apk update && apk upgrade
COPY netsim.sh random_netsim.sh /home/netsim/
CMD chmod +x /home/netsim/netsim.sh && chmod +x /home/netsim/random_netsim.sh
The reason why I want to do it is because my host is running Ubuntu 18.04 and in the tc-netem version there is a bug with the jitter generation. I was thinking that it would be pretty cool if I can use containers to do the same, without re-compiling/downgrading the tc-netem version.
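For reference, letting a container shape the host's interfaces with tc usually means sharing the host network namespace and granting the NET_ADMIN capability; a minimal sketch, not taken from the question (the image name, script path and the iproute2 requirement are assumptions):
# the image needs the iproute2 package so tc is available inside the container
# -it keeps stdin/tty attached so the script can prompt for user input
alias netsim='sudo docker run -it --rm --network host --cap-add NET_ADMIN docker-image sh /home/netsim/netsim.sh'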

Use sudo inside Dockerfile (Alpine)

I have this Dockerfile ...
FROM keymetrics/pm2:latest-alpine
RUN apk update && \
apk upgrade && \
apk add \
bash
COPY . ./
EXPOSE 1886 80 443
CMD pm2-docker start --auto-exit --env ${NODE_ENV} ecosystem.config.js
How can I execute the CMD command using sudo ?
I need to do this because binding to port 443 is allowed only for a privileged user.
su-exec can be used in Alpine.
To add the package, if it's not already available, add the following to your Dockerfile:
RUN apk add --no-cache su-exec
Inside the scripts you run in Docker, you can then use the following to become another user:
exec su-exec <my-user> <my command>
Alternatively, you could add the more familiar sudo package while building your Dockerfile.
Add the following to your Dockerfile that's FROM alpine:
RUN set -ex && apk --no-cache add sudo
After that you can use sudo:
sudo -u <my-user> <my command>
Sudo isn't shipped with Alpine images normally, and it rarely makes sense to include it inside of any container. What you need isn't sudo to bind to a low numbered port, but the root user itself, and sudo is just a common way to get root access in multi-user environments. If a container included sudo, you would need to either setup the user with a password, or allow commands to run without a password. Regardless of which you chose, you now have a privilege escalation inside the container, defeating the purpose of running the container as a normal user, so you may as well run the container as root at that point.
If the upstream image is configured to run as a non-root user (unlikely since you run apk commands during the build), you can specify USER root in your Dockerfile, and all following steps will run as root by default, including the container entrypoint/cmd.
If you start your container as a different user, e.g. docker run -u 1000 your_image, then to run your command as root, you'd remove the -u 1000 option. This may be an issue if you run your container in higher security environments that restrict containers to run as non-root users.
If your application itself is dropping the root privileges, then including sudo is unlikely to help, unless the application itself has internal calls to sudo. If that's the case, update the application to drop root privileges after binding to the ports.
Most importantly, if the only reason for root inside your container is to bind to low numbered ports, then configure your application inside the container to bind to a high numbered port, e.g. 8080 and 8443. You can map this container port to any port on the host, including 80 and 443, so the outside world does not see any impact. E.g. docker run -p 80:8080 -p 443:8443 your_image. This simplifies your image (removing tools like sudo) and increases your security at the same time.

Mount SMB/CIFS share within a Docker container

I have a web application running in a Docker container. This application needs to access some files on our corporate file server (Windows Server with an Active Directory domain controller). The files I'm trying to access are image files created for our clients and the web application displays them as part of the client's portfolio.
On my development machine I have the appropriate folders mounted via entries in /etc/fstab and the host mount points are mounted in the Docker container via the --volume argument. This works perfectly.
Now I'm trying to put together a production container which will run on a different server and which doesn't rely on the CIFS share being mounted on the host. So I tried adding the appropriate entries to the /etc/fstab file in the container and mounting them with mount -a. I get mount error(13): Permission denied.
A little research online led me to this article about Docker security. If I'm reading this correctly, it appears that Docker explicitly denies the ability to mount filesystems within a container. I tried mounting the shares read-only, but this (unsurprisingly) also failed.
So, I have two questions:
Am I correct in understanding that Docker prevents any use of mount inside containers?
Can anyone think of another way to accomplish this without mounting a CIFS share on the host and then mounting the host folder in the Docker container?
Yes, Docker is preventing you from mounting a remote volume inside the container as a security measure. If you trust your images and the people who run them, then you can use the --privileged flag with docker run to disable these security measures.
Further, you can combine --cap-add and --cap-drop to give the container only the capabilities that it actually needs. (See documentation) The SYS_ADMIN capability is the one that grants mount privileges.
Yes.
There is a closed issue, mount.cifs within a container:
https://github.com/docker/docker/issues/22197
according to which adding
--cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH
to the run options will make mount -t cifs operational.
I tried it out and:
mount -t cifs //<host>/<path> /<localpath> -o user=<user>,password=<user>
within the container then works
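Putting that together, a sketch of the full invocation (the image name, share path and credentials are placeholders):
docker run -it --cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH your-image sh
# then, inside the container:
mkdir -p /mnt/portfolio
mount -t cifs //fileserver/portfolio /mnt/portfolio -o user=svc_user,password=secret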
You could use the smbclient command (part of the Samba package) to access the SMB/CIFS server from within the Docker container without mounting it, in the same way that you might use curl to download or upload a file.
There is a question on StackExchange Unix that deals with this, but in short:
smbclient //server/share -c 'cd /path/to/file; put myfile'
For multiple files there is the -T option, which can create or extract .tar archives; however, this looks like it would be a two-step process (one to create the .tar and another to extract it locally). I'm not sure whether you could use a pipe to do it in one step.
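A sketch of that two-step approach (share name, credentials and paths are placeholders):
# pull a directory from the share into a local tar archive...
smbclient //server/share -U someuser -Tc files.tar path/to/dir
# ...then unpack it where the web application expects the files
tar xf files.tar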
You can use the Netshare Docker volume plugin, which allows mounting remote CIFS/Samba shares as volumes.
Do not make your containers less secure by exposing many ports just to mount a share, or by running them with --privileged.
Here is how I solved this issue:
First mount the volume on the server that runs docker.
sudo mount -t cifs -o username=YourUserName,uid=$(id -u),gid=$(id -g) //SERVER/share ~/WinShare
Change the username, SERVER and WinShare here. This will ask for your sudo password, and then for the password of the remote share.
Let's assume you created the WinShare folder inside your home folder. After running this command you should be able to see all the shared folders and files in WinShare. In addition, since you use the uid and gid options you will have write access without using sudo all the time.
Now you can run your container by using -v tag and share a volume between the server and the container.
Let's say you ran it like the following.
docker run -d --name mycontainer -v /home/WinShare:/home 2d244422164
You should be able to access the windows share and modify it from your container now.
To test it just do:
docker exec -it yourRunningContainer /bin/bash
cd /home
touch testdocfromcontainer.txt
You should see testdocfromcontainer.txt in the windows share.
