How to mount /proc in Docker Container - linux

Why can't I mount the /proc device from the container during the build process?
If I run docker build -t test . with this Dockerfile:
FROM debian:stable-slim
RUN bash -c 'ls {/proc,/dev,/sys}'
I can see that all special devices are populated. But if I try this Dockerfile:
FROM debian:stable-slim
RUN bash -c 'ls {/proc,/dev,/sys}'
RUN mount --bind /proc /mnt
I get the following error:
mount: /mnt: permission denied.
The command '/bin/sh -c mount --bind /proc /mnt' returned a non-zero code: 32
I know it's possible to use --privileged mode with docker run, but my goal is not to access the host's /proc. I just want to mount a /proc filesystem inside a file system that I'm generating in the container with debootstrap, so that I can install some packages, specifically default-jre.
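Roughly, the commands I need a RUN step to execute look like this (the debootstrap suite, mirror, and target path here are illustrative, not my exact setup):
# illustrative sketch only -- suite, mirror, and target path are assumptions
debootstrap stable /rootfs http://deb.debian.org/debian
mount --bind /proc /rootfs/proc        # this is the step that fails inside 'docker build'
chroot /rootfs apt-get install -y default-jre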
My Docker Version: 20.10.8
EDIT
My goal is to create a custom live-cd like here, so I can't use the container's base OS.

Related

How to set a size limit on a folder mounted in a Docker container

I am running a script in a Docker container which creates some files and logs information in them.
The command is:
docker run -t --name a6f97966d3a2552283df -v "/temp/a6f97966d3a2552283df":/usercode ubuntu_16_04:firsttry /usercode/script.sh
I want to limit the size of the folder I have mounted with this command, because the logs may grow very large.
One solution may be to mount a virtual filesystem in the container using the following commands:
mkdir -p /quota
mkdir -p /var/virtual_disks
touch /var/virtual_disks/directory_with_size_limit.ext3
dd if=/dev/zero of=/var/virtual_disks/directory_with_size_limit.ext3 count=51200
mkfs.ext3 /var/virtual_disks/directory_with_size_limit.ext3
mount -o loop,rw,usrquota,grpquota /var/virtual_disks/directory_with_size_limit.ext3 /quota
It's working fine on my local system but not in the container.
Is there any other way of achieving this?
It is now working fine: the host path mounted into the container has to be the same as the mount point of the virtual filesystem.
So the modified command is:
docker run -t --name a6f97966d3a2552283df -v "/quota":/usercode ubuntu_16_04:firsttry /usercode/script.sh

Providing shell access to a particular docker container for a user who is not in the docker group

My task is to provide access to a particular Docker container without adding the user to the docker group.
What I did
1) Made an nsenter-based script which enters the container (let's call it script1.sh):
#!/bin/bash
PID=$(docker inspect --format {{.State.Pid}} kayako-dashboard)
nsenter --target $PID --mount --uts --ipc --net --pid /bin/sh
2) Made the script globally available: sudo ln -s /full/path/to/script1.sh /usr/local/bin/some_new_command
3) Adjusted the target ssh key by adding command="some_new_command" before ssh-rsa in the authorized_keys file.
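For example, the resulting authorized_keys entry might look like this (the key type and key material are placeholders):
# ~/.ssh/authorized_keys of the target user -- key material below is a placeholder
command="some_new_command" ssh-rsa AAAAB3Nza...rest-of-public-key... target_user@workstation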
But when I log in as the target user
sshpass -p <user_password> ssh <target_user>@<docker_host> "some_new_command"
I get the error: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock
What is wrong, and is there any way to fix it?
My kernel version is 4.4.0, so it supports entering the pid namespace.
nsenter version: nsenter from util-linux 2.27.1
P.S. Even if I add <target_user> to the docker group and try to execute some_new_command, I get an error: nsenter: cannot open /proc/<PID>/ns/ipc: Permission denied
P.P.S. If I use sudo, some_new_command executes fine.
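Since the P.P.S. shows the wrapper working under sudo, one possible direction (an assumption, not something confirmed in the question) is to allow only this one command via sudo without a password and force the sudo invocation through the key:
# /etc/sudoers.d/nsenter-wrapper -- assumed approach, not from the question
target_user ALL=(root) NOPASSWD: /usr/local/bin/some_new_command
# and in authorized_keys, force the sudo call instead (key material is a placeholder):
command="sudo /usr/local/bin/some_new_command" ssh-rsa AAAAB3Nza...placeholder... target_user@workstation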

Why are my mounted docker volume files turning into folders inside the container?

The scenario is docker inside/beside docker via a sock binding for the purpose of having an easily deployable and scalable runner agent for C.I./C.D. tools (in this particular case, VSTS). The reason for this set up is that the various projects that I want to test use docker/compose to run tests, and configuring a C.I./C.D. worker to be compatible with docker/compose a bunch of times gets cumbersome and time consuming. (This'll eventually be deployed to 4+ Kubernetes Clusters)
Anyway, the problem:
Steps to replicate
Run the vsts-agent image
docker run \
-it \
-v /var/run/docker.sock:/var/run/docker.sock \
nullvoxpopuli/vsts-agent-with-aws-ecr:latest \
/bin/bash
Run another image (to emulate docker/compose running tests)
echo 'test' > file-test.txt
docker run -it -v file-test.txt:/file-test.txt busybox /bin/sh
Check for the existence of file-test.txt
cd /
ls -la # shows that file-test.txt is a directory
So,
- why are files being mounted as folders inside containers?
- what do I need to do to make the volumes mount correctly?
Solution A - thanks to @BMitch
# On Host machine
docker run -it \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /tmp/vsts/work/:/tmp/vsts/work \
nullvoxpopuli/vsts-agent-with-aws-ecr:latest \
/bin/bash
# In vsts-agent-with-aws-ecr
cd /tmp/vsts/work/
git clone https://NullVoxPopuli@bitbucket.org/group/project.git
cd project/
./scripts/run/eslint.sh
# Success! (this uses docker-compose to map files to the node-based docker image)
Docker creates containers and mounts volumes from the docker host. Any time a file or directory in a volume mount doesn't exist, it gets initialized as an empty directory. So if you run docker commands from inside a container against the docker socket, those commands are interpreted on the docker host, outside the container, where the file doesn't exist. Additionally, docker run requires a full path to the source of the mount when you want a host volume; otherwise the source is interpreted as a named volume.
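If that is what happened here, the stray named volume can be confirmed from the docker host (assuming the default local volume driver):
docker volume ls                      # shows a volume literally named "file-test.txt"
docker volume inspect file-test.txt   # its Mountpoint is an empty directory under /var/lib/docker/volumes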
What you likely want to do at this point is:
docker volume rm file-test.txt
docker run -it -v $(pwd)/file-test.txt:/file-test.txt busybox /bin/sh
If instead you are trying to include a file from inside the container to another container, you can initialize a named volume with input redirection like this:
tar -cC . . | docker run -i --rm -v file-test:/target busybox tar -xC /target
docker run -it -v file-test:/data busybox /bin/sh
That uses tar to copy the contents of the current directory to stdout; the docker command reads that stream on stdin and extracts it into /target inside the container, which is backed by the named volume file-test. Note that I didn't mount the volume at / in this second example, since named volumes are directories and I didn't want to replace the root filesystem.
Another option is to share a volume mount point between multiple containers on the docker host so that files you edit inside one container go to the host where they are mounted into the other container and visible there:
docker run \
-it \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /container-data:/container-data \
nullvoxpopuli/vsts-agent-with-aws-ecr:latest \
/bin/bash
echo 'test' > /container-data/test-file.txt
docker run -it -v /container-data:/container-data busybox /bin/sh
I don't recommend mounting individual files into a container if these files may be modified while the container is running. File changes often result in a changed inode and docker will have the old inode mounted into the container. As a result, changes either inside or outside of the container to the file may not be seen on the other side, and if you modify the file inside the container, that change may be lost when you delete the container. The solution to the inode issue is to mount the entire directory into the container.
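A small illustration of that inode behaviour (assuming GNU sed and coreutils on the host):
stat -c %i file-test.txt                  # note the inode number
sed -i 's/test/changed/' file-test.txt    # GNU sed -i writes a temp file and renames it over the original
stat -c %i file-test.txt                  # the inode has changed; a container that bind-mounted the file
                                          # still holds the old inode and misses the update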

Not able to run shell script in busybox without mounting procfs

I am trying to run a shell script in a busybox rootfs with Linux kernel version 4.4.4. The test script tries to mount procfs:
#!/bin/sh
mount -t proc none /proc
I can run this script with sh test.sh, but if I try to run it as ./test.sh, it says /bin/sh test.sh not found. The strange thing is that after mounting procfs manually,
mount -t proc none /proc
I can run ./test.sh. For busybox I am using the default config with static linking enabled.
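For reference, a common arrangement with a busybox rootfs (an assumption about this setup, not something stated in the question) is to mount the pseudo-filesystems from the init script so they are already in place before any other script runs:
#!/bin/sh
# /etc/init.d/rcS -- run by busybox init at boot (assumed layout)
mount -t proc none /proc
mount -t sysfs none /sys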

Running sshfs fusermount after unshare -r

We have sshfs installed and working, so we can mount a remote directory with the command
sshfs user@remotemachine:dir mount_location
However, if we try this same working command after running unshare -r, we get the error:
fusermount: mount failed: Operation not permitted.
What could we do to run the sshfs mount after the unshare without it failing?
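The failure can be reproduced in a single line (host, directory, and mount point are the placeholders from the question):
# run sshfs inside the new user namespace created by unshare -r
unshare -r sh -c 'sshfs user@remotemachine:dir mount_location'
# fusermount: mount failed: Operation not permitted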
