Docker security issue around ENTRYPOINT

I am experimenting with Docker and trying to understand the concepts around the use of volumes. I have a Tomcat app which writes files to a particular volume.
I wrote a Dockerfile with an ENTRYPOINT of "dosomething.sh".
The issue I have with the entrypoint script is that "dosomething.sh" could potentially contain malicious code that deletes all files on the volume!
Is there a way to guard against this? I was planning on sharing this Dockerfile and script with my dev team too, and the care I have to take for a production rollout appears scary!
One thought is not to have an ENTRYPOINT at all for containers that have volumes.
Experienced folks, please advise on how you deal with this.

If you are using a data volume container to isolate your volume, such containers never run: they are only created (docker create).
That means you need to mount that data volume container into other containers for them to access the volume.
That mitigates the dangerous entrypoint a bit: a simple docker run would have access to nothing, since no -v volume-mounting option would have been set.
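As a rough sketch of that pattern (the image, container name, and path are placeholders, not from the question):
docker create -v /shared/data --name app-data tomcat:8 /bin/true
docker run -d --volumes-from app-data tomcat:8
Only containers explicitly started with --volumes-from (or an equivalent -v option) ever see the data.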
Another approach is to at least declare the script as CMD, not ENTRYPOINT (with the ENTRYPOINT set to [ "/bin/sh", "-c" ]). That way, it is easier to docker run with an alternative command (passed as a parameter, overriding CMD), instead of always executing the script just because it is the ENTRYPOINT.
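A minimal sketch of that layout (the image and script names are just placeholders):
FROM tomcat:8
COPY dosomething.sh /usr/local/bin/dosomething.sh
RUN chmod +x /usr/local/bin/dosomething.sh
ENTRYPOINT ["/bin/sh", "-c"]
CMD ["/usr/local/bin/dosomething.sh"]
With this, a plain docker run executes the script, while docker run myimage "ls /shared" (any quoted command string, with myimage being whatever you tag the build as) overrides CMD and never touches the script.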

Related

linux amazon not running docker daemon [duplicate]

I'm running Jenkins inside a Docker container. I wonder if it's ok for the Jenkins container to also be a Docker host? What I'm thinking about is to start a new docker container for each integration test build from inside Jenkins (to start databases, message brokers etc). The containers should thus be shutdown after the integration tests are completed. Is there a reason to avoid running docker containers from inside another docker container in this way?
Running Docker inside Docker (a.k.a. dind), while possible, should be avoided if at all possible (source provided below). Instead, you want to set up a way for your main container to produce and communicate with sibling containers.
Jérôme Petazzoni — the author of the feature that made it possible for Docker to run inside a Docker container — actually wrote a blog post saying not to do it. The use case he describes matches the OP's exact use case of a CI Docker container that needs to run jobs inside other Docker containers.
Petazzoni lists two reasons why dind is troublesome:
It does not cooperate well with Linux Security Modules (LSM).
It creates a mismatch in file systems that creates problems for the containers created inside parent containers.
In that blog post, he describes the following alternative:
[The] simplest way is to just expose the Docker socket to your CI container, by bind-mounting it with the -v flag.
Simply put, when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Now this container will have access to the Docker socket, and will therefore be able to start containers. Except that instead of starting "child" containers, it will start "sibling" containers.
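For the Jenkins case in the question, that could look roughly like this (the image tag and volume name are assumptions, and the Jenkins image still needs a Docker CLI available inside it to talk to the socket):
docker run -d --name jenkins -p 8080:8080 -v jenkins_home:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock jenkins/jenkins:lts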
I answered a similar question before on how to run a Docker container inside Docker.
Running Docker inside Docker is definitely possible. The main thing is that you run the outer container with extra privileges (starting it with --privileged=true) and then install Docker in that container.
Check this blog post for more info: Docker-in-Docker.
One potential use case for this is described in this entry. The blog describes how to build docker containers within a Jenkins docker container.
However, Docker inside Docker is not the recommended approach to solve this type of problem. Instead, the recommended approach is to create "sibling" containers, as described in this post.
So, running Docker inside Docker was once considered by many to be a good solution for this type of problem. Now, the trend is to use "sibling" containers instead. See the answer by #predmijat on this page for more info.
It's OK to run Docker-in-Docker (DinD) and in fact Docker (the company) has an official DinD image for this.
The caveat however is that it requires a privileged container, which depending on your security needs may not be a viable alternative.
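For reference, a minimal way to start that official image might look like this (the container name is just an example):
docker run -d --name dind --privileged docker:dind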
The alternative of running Docker using sibling containers (aka Docker-out-of-Docker or DooD) does not require a privileged container, but it has a few drawbacks that stem from launching the container from a context different from the one in which it runs (i.e., you launch the container from within a container, yet it runs at the host level, not inside that container).
I wrote a blog describing the pros/cons of DinD vs DooD here.
Having said this, Nestybox (a startup I just founded) is working on a solution that runs true Docker-in-Docker securely (without using privileged containers). You can check it out at www.nestybox.com.
Yes, we can run Docker in Docker. We'll need to attach the Unix socket /var/run/docker.sock, on which the Docker daemon listens by default, as a volume to the parent container using -v /var/run/docker.sock:/var/run/docker.sock.
Sometimes, permission issues may arise on the Docker daemon socket, for which you can run sudo chmod 757 /var/run/docker.sock.
It also requires running the container in privileged mode, so the commands would be:
sudo chmod 757 /var/run/docker.sock
docker run --privileged=true -v /var/run/docker.sock:/var/run/docker.sock -it ...
I was trying my best to run containers within containers, just like you, for the past few days, and wasted many hours. So far most people advised me to do things like using Docker's DinD image (which is not applicable in my case, as I need the main container to be an Ubuntu OS) or to run some privileged command and map the daemon socket into the container, which never worked for me.
The solution I found was to use Nestybox on my Ubuntu 20.04 system, and it works best. It's also extremely simple to set up, provided your local system is Ubuntu (which they support best), as the container runtime is specifically designed for this kind of application. It also has the most flexible options. The free edition of Nestybox is perhaps the best method as of Nov 2022. I highly recommend you try it without bothering with all the tedious setup other people suggest. They have many pre-built solutions to address such specific needs with a simple command line.
Nestybox provides a special runtime environment for newly created Docker containers, and they also provide some Ubuntu/common OS images with Docker and systemd built in.
Their goal is to make the main container function just like a virtual machine, securely. You can literally SSH into your Ubuntu main container without the ability to access anything on the host machine. From your main container you can create all kinds of containers just like a normal local system does. That systemd is very important for setting up Docker conveniently inside the container.
One simple, common command to run a container with sysbox:
docker run --runtime=sysbox-runc -it any_image
If you think that's what you are looking for, you can find out more at their GitHub:
https://github.com/nestybox/sysbox
A quick link to the instructions on how to deploy a simple sysbox runtime environment container: https://github.com/nestybox/sysbox/blob/master/docs/quickstart/README.md

Configure Docker to use a different volume (besides root)

I want to dockerize one project's (ML-based deep learning) /src file. But the issue is the space Docker is using. During the "docker build" stage, the process stopped because my root directory volume ran out of space.
Why is Docker taking so much space?
How should I approach this?
Can I configure the Docker engine to build in another directory (like a normal storage location)?
If I am doing something wrong, then please correct me. Thank you for your valuable time.
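As a rough sketch of one common approach to the third question (the path below is only an example): the Docker daemon's data directory, which holds images, layers, and the build cache, can be moved to a larger disk by setting data-root in /etc/docker/daemon.json and restarting the daemon.
{
  "data-root": "/mnt/docker-data"
}
After editing the file, restart the daemon (for example with sudo systemctl restart docker) so the new location takes effect.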

How to customize golang-docker image to use golang for scripting?

I came across this blog: using go as a scripting language and tried to create a custom image that I can use to run golang scripts i.e.
FROM golang:1.15
RUN go get github.com/erning/gorun
RUN mount binfmt_misc -t binfmt_misc /proc/sys/fs/binfmt_misc
RUN echo ':golang:E::go::/go/bin/gorun:OC' | tee /proc/sys/fs/binfmt_misc/register
It fails with error:
mount: /proc/sys/fs/binfmt_misc: permission denied.
ERROR: Service 'go_saga' failed to build : The command '/bin/sh -c mount binfmt_misc -t binfmt_misc /proc/sys/fs/binfmt_misc' returned a non-zero code: 32
It's a read-only file system, so I can't change the permissions either. The task I'm trying to achieve is well documented here. Please help me with the following questions:
Is that even possible, i.e., mounting /proc/sys/fs/binfmt_misc and writing to the file /proc/sys/fs/binfmt_misc/register?
If yes, how can I do that?
I guess it would be great if we could run golang scripts in the container.
First a quick disclaimer that I haven't done this binfmt trick to run go scripts. I suppose it might work, but I just use go run when I want to run something on the fly.
There's a lot to unpack in this. Container isolation runs an application with a shared kernel in an isolated environment. The namespaces, cgroups, and security settings are designed to prevent one container from impacting other containers or the host.
Why is that important? Because /proc/sys/fs/binfmt_misc is interacting with the kernel, and pushing a change to that would be considered a container escape since you're modifying the underlying host.
The next thing to cover is building an image vs running a container. When you build an image with the Dockerfile, you are defining the image filesystem and some metadata (labels, entrypoint, exposed ports, etc). Each RUN command executes that command inside a temporary container, based on the previous step's result, and when the command finishes it captures the changes to the container filesystem. When you mount another filesystem, that doesn't change the underlying container filesystem, so even if you could, the mount command would be a noop during the image build.
So if this is possible, you'll need to do it inside the running container rather than at build time; that container will need to be privileged, since doing things like mounting filesystems and modifying /proc requires access not normally given to containers, and you'll be modifying the host kernel in the process. You'd need to make the container entrypoint run the mount and register the binfmt_misc entry, and figure out what to do if the entry is already set up/registered, but possibly pointing to a different directory in another container.
As an aside, when dealing with binfmt_misc and containers, the F flag is very important, though in your use case it's important that you don't have it. Typically you need the F flag so the binary is found on the host filesystem rather than searched for within the container filesystem namespace. The typical use case of binfmt_misc and containers is configuring the host to be able to run containers for different architectures, e.g. Docker Desktop can run amd64, arm64, and a bunch of other platforms today using this.
In the end, if you want to run a container as a one-off to run a go command as a script, I'd skip the binfmt_misc trick and make an entrypoint that does a go run instead. But if you're using the container for longer-running processes where you want to periodically run a go file as a script, you'll need to do that in the container, and as a privileged container that has the ability to escape to the host.
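A minimal sketch of that last suggestion (the image tag, directory, and file names are assumptions, not from the question):
FROM golang:1.15
WORKDIR /scripts
ENTRYPOINT ["go", "run"]
Built as, say, go-script, it can then run a host-mounted file on the fly:
docker build -t go-script .
docker run --rm -v "$PWD":/scripts go-script hello.go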

How to Mount Hugepages inside Docker

I have an application running inside Docker that requires hugepages to run. I tried the following set of commands for this:
CMD ["mkdir", "-p" ,"/dev/hugepages"]
CMD ["mount" ,"-t", "hugetlbfs" , "none", "/dev/hugepages"]
CMD ["echo 512" ,">" ,"/proc/sys/vm/nr_hugepages"]
CMD ["mount"]
But I don't see hugepages getting mounted when I run the mount command. Why?
Could anyone please point out whether it is possible to do this?
There are a number of things at hand:
First of all, a Dockerfile only uses a single command (CMD): only the last CMD takes effect, so what you're doing won't work. If you need to do multiple steps when the container is started, consider using an entrypoint script; for example, this is the entrypoint script of the official mysql image.
Second, doing a mount in a container requires additional privileges. You can use --privileged, but that is probably far too broad and gives far too many privileges to the container. You can try running the container with --cap-add SYS_ADMIN instead.
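A rough sketch of such an entrypoint script for the hugepages case (file and application names are placeholders; note that writing to /proc/sys/vm/nr_hugepages from inside a container generally only works when /proc/sys is writable, i.e. in a fully privileged container):
#!/bin/sh
# entrypoint.sh - run the setup steps once at container start, then hand off to the app
set -e
mkdir -p /dev/hugepages
mount -t hugetlbfs none /dev/hugepages
echo 512 > /proc/sys/vm/nr_hugepages
exec "$@"
In the Dockerfile this would be wired up with something like ENTRYPOINT ["/entrypoint.sh"] and CMD ["your-app"], so the setup runs before the application starts.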
Alternative solution
A much cleaner solution could be to mount hugepages on the host and give the container access to that device, e.g.:
docker run --device=/dev/hugepages:/dev/hugepages ....

How to start docker container as server

I would like to run a docker container that hosts a simple web application, however I do not understand how to design/run the image as a server. For example:
docker run -d -p 80:80 ubuntu:14.04 /bin/bash
This will start and immediately shut down the container. Instead we can start it interactively:
docker run -i -p 80:80 ubuntu:14.04 /bin/bash
This works, but now do I have to keep an interactive shell open for every container that is running? I would rather just start it and have it running in the background. A hack would be to use a command that never returns:
docker run -d -p 80:80 {image} tail -F /var/log/kern.log
But now I cannot connect to a shell anymore to inspect what is going on if the application is acting up.
Is there a way to start the container in the background (as we would do for a vm), in a way that allows for attaching/detaching a shell from the host? Or am I completely missing the point?
The final argument to docker run is the command to run within the container. When you run docker run -d -p 80:80 ubuntu:14.04 /bin/bash, you're running bash in the container and nothing more. You actually want to run your web application in a container and to keep that container alive, so you should do docker run -d -p 80:80 ubuntu:14.04 /path/to/yourapp.
But your application probably depends on some configuration in order to run. If it reads its configuration from environment variables, you can use the -e key=value arguments with docker run. If your application needs a configuration file to be in place, you should probably use a Dockerfile to set up the configuration first.
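A minimal sketch of that Dockerfile approach (all names and paths here are placeholders):
FROM ubuntu:14.04
COPY myapp.conf /etc/myapp/myapp.conf
COPY myapp /usr/local/bin/myapp
EXPOSE 80
CMD ["/usr/local/bin/myapp"]
docker build -t mywebapp . followed by docker run -d -p 80:80 mywebapp then starts the server detached in the background.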
This article provides a nice complete example of running a node application in a container.
Users of Docker tend to assume a container is a complete VM, while the Docker design concept is more focused on optimal containerization rather than mimicking a VM within a container.
Both views are valid; however, some implementation details are not easy to get familiar with in the beginning. I am trying to summarize some of the implementation differences in a way that is easier to understand.
SSH
SSH would be the most straightforward way to get inside a Linux VM (or container); however, many dockerized images do not have an SSH server installed. I believe this is for optimization and security reasons.
docker attach
docker attach can be handy if it works out of the box. However, as of this writing it is not stable - https://github.com/docker/docker/issues/8521. It might be associated with the SSH setup, but it's not clear when it will be completely fixed.
docker recommended practices (nsenter, etc.)
Some alternatives (or best practices in some sense) recommended by Docker at https://blog.docker.com/2014/06/why-you-dont-need-to-run-sshd-in-docker/
This practice basically separates mutable elements out of a container and maps them to places on the Docker host so they can be manipulated from outside the container and/or persisted. It could be a good practice in a production environment, but less so now, when most Docker-related projects are around dev and staging environments.
bash command line
"docker exec -it {container id} bash" cloud be very handy and practical tool to get in to the machine.
Some basics
"docker run" creates a new container so previous changes will not be saved.
"docker start" will start an existing container so previous changes will still be in the container, however you need to find the correct container-id among many with a same image-id. Need to "docker commit" to suppress versions if wanted.
Ctrl-C will stop the container when exiting. You will want to append "&" at the end so the container can run background and gives you the prompt when hitting enter key.
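As a quick illustration of those basics (the image and container names are made up):
docker run -d --name web my-image        # creates and starts a brand-new container
docker stop web
docker start web                         # restarts the same container; earlier changes are still inside it
docker commit web my-image:snapshot      # saves the container's current filesystem as a new image tag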
To the original question, you can tail some file, like you mentioned, to keep the process running.
To reach the shell, instead of "attach", you have two options:
docker exec -it <container_id> /bin/bash
Or
run an SSH daemon in the container, map the SSH port, and then SSH into the container.

Resources