Why are Docker Secrets considered safe? - security

I read about Docker swarm secrets and also did some testing.
As far as I understand, secrets can replace sensitive environment variables provided in a docker-compose.yml file (e.g. database passwords). As a result, when I inspect the compose file or the running container I will not see the password. That's fine - but how does it really help?
If an attacker is on my Docker host, he can easily list the contents of /run/secrets:
docker exec -it df2345a57cea ls -la /run/secrets/
and can also look at the data inside:
docker exec -it df2345a57cea cat /run/secrets/MY_PASSWORD
The same attacker can usually open a shell in the running container and look at how it's working...
Also if an attacker is on the container itself he can look around.
So I don't understand why Docker secrets are more secure than writing them directly into the docker-compose.yml file.

A secret stored in the docker-compose.yml file is visible in plain text inside that file, which is typically checked into version control where others can see the value, and it shows up in commands like docker inspect on your containers. From there, it's also visible inside your container.
A Docker secret, conversely, is encrypted on disk on the managers, is only stored in memory on the workers that need it (the file visible in the container is on a tmpfs that lives in RAM), and is not visible in the docker inspect output.
The key part here is that you are keeping your secret outside of your version control system. With tools like Docker EE's RBAC, you can also keep secrets out of view of anyone who doesn't need access, by removing their ability to docker exec into a production container or to use a docker secret in a production environment. That can be done while still giving developers the ability to view logs and inspect containers, which may be necessary for production support.
Also note that you can configure a secret inside the docker container to only be readable by a specific user, e.g. root. And you can then drop permissions to run the application as an unprivileged user (tools like gosu are useful for this). Therefore, it's feasible to prevent the secret from being read by an attacker that breaches an application inside a container, which would be less trivial with an environment variable.
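For illustration, a minimal sketch of that pattern with the plain Docker CLI (the secret name, service name, and image are made up; the source/target/uid/gid/mode keys belong to docker service create --secret):
# Store the secret in the swarm (encrypted at rest on the managers).
echo "s3cr3t-value" | docker secret create db_password -
# Deliver it only to this service's tasks; inside each container it shows up
# at /run/secrets/db_password as a root-owned file with mode 0400.
docker service create --name web \
  --secret source=db_password,target=db_password,uid=0,gid=0,mode=0400 \
  nginx:alpine
An entrypoint running as root can then read the file and drop privileges (e.g. with gosu) before starting the application, so the unprivileged application user never has access to it afterwards.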

Docker secrets are designed for Swarm, not for a single node running a few containers or for docker-compose on one machine (they can be used there, but that is not their main purpose). If you have more than one node, Docker secrets are more secure because a secret is delivered only to the worker machines that will run a container needing it, rather than being deployed to every machine.
See this blog: Introducing Docker Secrets Management

Related

Docker Compose - How to handle credentials securely?

I have been trying to understand how to handle credentials (e.g. database passwords) with Docker Compose (on Linux/Ubuntu) in a secure but not overly complicated way. I have not yet been able to find a definitive answer.
I saw multiple approaches:
Using environment variables to pass credentials. However, this would mean that passwords are stored as plain text both on the system and in the container itself. Storing passwords as plain text isn't something I would be comfortable with. I think most people use this approach - how secure is it? (See the sketch after this question.)
Using Docker secrets. This requires Docker Swarm though which would just add unnecessary overhead since I only have one Docker host.
Using a Password Vault to inject credentials into containers. This approach seems to be quite complicated.
Is there no other secure, standardized way to manage credentials for Docker containers which are created with Docker Compose? Docker secrets without the need of Docker Swarm would be perfect if it existed.
Thank you in advance for any responses.
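For context, the first approach above (plain environment variables) often looks roughly like the following sketch; the db.env file name, the variable names, and the postgres image are only illustrative:
# Keep the credentials in a file that stays out of version control
# (e.g. add db.env to .gitignore) and restrict its permissions.
cat > db.env <<'EOF'
POSTGRES_USER=myapp
POSTGRES_PASSWORD=change-me
EOF
chmod 600 db.env
# The values still end up as plain-text environment variables in the
# container and are visible via docker inspect.
docker run -d --name mydb --env-file ./db.env postgres:alpine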

Does Docker-compose only read the configuration at initialization?

I have a Docker-compose file that has several environment variables in it for database users. We have multiple instances of this application each running on its own server, each with a different database user.
My question is: Is the Docker-compose.yaml file read once after running docker-compose build, and not at any point after?
No. docker-compose reads the YAML file every time you execute docker-compose (build, up, ps, etc.).
But if you're hoping to modify environment variables during an image build or while a container is running - sorry, that isn't going to work.
You can modify environment variables during a service's lifetime (when using swarm), but this will restart its containers. The same happens when you run docker-compose up again on a running project.
However, if you want separate docker-compose files for each of your environments, with their own DB usernames and passwords, that will work when you run docker-compose up.
You can also take advantage of the ability to pass multiple YAML files to docker-compose; this way you can have a "base" YAML file with common definitions and per-environment YAML files where you keep the credentials for each environment.
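For example, a rough sketch of such an invocation (docker-compose.prod.yml is a hypothetical per-environment override file):
# Later files override and extend earlier ones, so credentials can live only
# in the per-environment file, which stays out of version control.
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d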
However, if you are concerned about not exposing passwords, environment variables are not the solution. Check out Docker secrets with Docker swarm, or use an external key store, to keep passwords secure.

Persisting content across docker restart within an Azure Web App

I'm trying to run a ghost docker image on Azure within a Linux Docker container. This is incredibly easy to get up and running using a custom Docker image for Azure Web App on Linux and pointing it at the official docker hub image for ghost.
Unfortunately the official Docker image stores all data under the /var/lib/ghost path, which isn't persisted across restarts, so whenever the container is restarted all my content gets deleted and I end up back at a default Ghost install.
Azure won't let me execute arbitrary commands; you basically point it at a Docker image and it fires off from there, so I can't use the -v command-line parameter to map a volume. The Docker image does have an entry point configured, if that would help.
Any suggestions would be great. Thanks!
Set WEBSITES_ENABLE_APP_SERVICE_STORAGE to true in the app settings, and the home directory will be mapped from your outer Kudu instance:
https://learn.microsoft.com/en-us/azure/app-service/containers/app-service-linux-faq
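If you manage the app with the Azure CLI, that setting can be applied roughly like this (the resource group and app names are placeholders):
az webapp config appsettings set \
  --resource-group <my-resource-group> \
  --name <my-web-app> \
  --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=true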
You have a few options:
You could mount a file share inside the Docker container by creating a custom image, then storing data there. See these docs for more details.
You could switch to the new Azure Container Instances, as they provide volume support.
You could switch to the Azure Container Service. This requires an orchestrator such as Kubernetes and might be more work than you're looking for, but it also offers more flexibility, better reliability and scaling, and other benefits.
You have to use a shared volume that maps the container's /var/lib/ghost directory to a host directory. This way, your data will persist in the host directory.
To do that, use the following command.
$ docker run -d --name some-ghost -p 3001:2368 -v /path/to/ghost/blog:/var/lib/ghost/content ghost:1-alpine
I have never worked with Azure, so I'm not 100 percent sure the following applies, but if you interface with Docker via the CLI there is a good chance it does.
Persistence in Docker is handled with volumes. They are basically mounts in the container's file system tree that point to a directory on the outside. From your text I understand that you want to store the content of the container's /var/lib/ghost path in /home/site/wwwroot on the outside. To do this you would call docker like this (the host path comes first, then the container path):
$ docker run [...] -v /home/site/wwwroot:/var/lib/ghost ghost
Unfortunately, setting the persistent storage (or bring-your-own-storage) to a specific path is currently not supported in Azure Web Apps on Linux.
That said, you can play with SSH and try to configure Ghost to point to /home/ instead of /var/lib/.
I have prepared a Docker image that adds the SSH capability here: https://hub.docker.com/r/elnably/ghost-on-azure; the Dockerfile and code can be found here: https://github.com/ahmedelnably/ghost-on-azure/tree/master/1/alpine.
Try it out by configuring your web app to use elnably/ghost-on-azure:latest, browsing to the site (to start the container), and opening the SSH page on your app's .scm.azurewebsites.net endpoint. To learn more about SSH, check this link: https://aka.ms/linux-ssh.

Docker security concerns using unofficial images

How can I ensure that a Docker container will be secure, especially when using third-party containers or base images?
Is it correct that, when using a base image, it may start arbitrary services or mount arbitrary partitions of the host filesystem under the hood, and potentially send sensitive data to an attacker?
So if I use a third-party container whose Dockerfile appears safe, should I traverse the whole chain of base images (potentially very long) to ensure the container is actually safe and does what it claims to do?
How can I ensure the trustworthiness of a Docker container in a systematic and definite way?
Consider Docker images similar to Android/iOS mobile apps. You are never quite sure whether they are safe to run, but the probability of their being safe is higher when they come from an official source such as Google Play or the App Store.
More concretely, Docker images coming from Docker Hub go through security scans, the details of which are as yet undisclosed. So the chances of pulling a malicious image from Docker Hub are low.
However, one can never be paranoid enough when it comes to security. There are two ways to make sure all images coming from any source are secure:
Proactive security: do a security-focused source code review of the Dockerfile corresponding to each Docker image, including the base images, as you already suggested in the question.
Reactive security: run Docker Bench, open-sourced by Docker Inc., which runs as a privileged container and looks for known malicious runtime activity by containers (a sample invocation is shown below).
In summary: whenever possible, use Docker images from Docker Hub, perform security code reviews of Dockerfiles, and run Docker Bench or any other equivalent tool that can catch malicious activity performed by containers.
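As a rough illustration, Docker Bench is typically launched along these lines (a sketch based on the project's README; the exact mounts and flags vary between versions):
docker run -it --rm --net host --pid host --cap-add audit_control \
  -v /etc:/etc:ro \
  -v /var/lib:/var/lib:ro \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  --label docker_bench_security \
  docker/docker-bench-security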
References:
Docker security scanning formerly known as Project Nautilus: https://blog.docker.com/2016/05/docker-security-scanning/
Docker bench: https://github.com/docker/docker-bench-security
Best practices for Dockerfile: https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/
Docker images are self-contained, meaning that unless you run them in a container with host volumes mounted or a shared network mode, they have no way of accessing your host's network or memory stack.
For example if I run an image inside a container by using the command:
docker run -it --network=none ubuntu:16.04
This will start a container from the ubuntu:16.04 image with nothing mounted from the host's storage and without sharing the host's network stack. You can test this by comparing the network interfaces inside the container and on your host, for example by running ifconfig in both places.
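For instance, a quick check of /proc/net/dev (used here because the base ubuntu image may not ship ifconfig) shows the difference:
# With the default bridge network the container sees lo plus eth0:
docker run --rm ubuntu:16.04 cat /proc/net/dev
# With --network=none only the loopback interface is present:
docker run --rm --network=none ubuntu:16.04 cat /proc/net/dev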
Regarding checking what the image/base image does: the conclusion from the above is that it can do nothing harmful to your host (unless you mount something like /important/directory_on_host into the container and the container then deletes its contents).
You can check what an image/base image contains and does by reviewing its Dockerfile(s) or docker-compose.yml files.

How can I pass secret data to a container

My Tomcat Container needs data that has to be well protected, i.e. passwords for database access and certificates and keys for Single Sign On to other systems.
I've seen some suggestions to use -e or --env-file to pass secret data to a container, but these values can be discovered with docker inspect (--env-file also exposes all the properties of the file in docker inspect).
Another approach is to link a data container holding the secrets to the service container, but I don't like the concept of having this data container in my registry (accessible to a broader range of people). I know I can set up a private registry, but I would need different registries for test and production, and still everyone with access to the production registry could access the secret data.
I'm thinking about setting up my servers with a directory that contains the secret data and mounting that directory into my containers. This would work nicely with test and production servers having different secrets, but it creates a dependency of the containers on my specific servers.
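A rough sketch of that idea (the paths and image name are hypothetical); mounting the host directory read-only limits what the container can do to it:
docker run -d --name sso-tomcat \
  -v /srv/secrets/tomcat:/run/secrets:ro \
  my-tomcat-image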
So my question is: how do you handle secret data, and what's the best solution to this problem?
Update January 2017
Docker 1.13 now has the docker secret command when used with Docker swarm.
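A minimal sketch of that CLI (requires swarm mode; the secret, service, and image names are illustrative):
docker swarm init
echo "s3cr3t" | docker secret create db_password -
docker secret ls
# Attach the secret to a service; it appears in its containers at /run/secrets/db_password
docker service create --name app --secret db_password my-app-image:latest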
See also "Why is ARG in a DOCKERFILE not recommended for passing secrets?".
Original answer (Sept 2015)
The notion of a docker vault, alluded to by Adrian Mouat in his previous answer, was actively discussed in issue 1030 (the discussion continues in issue 13490).
It was, for now, rejected as being out of scope for Docker, but the discussion also included:
We've come up with a simple solution to this problem: A bash script that once executed through a single RUN command, downloads private keys from a local HTTP server, executes a given command and deletes the keys afterwards.
Since we do all of this in a single RUN, nothing gets cached in the image. Here is how it looks in the Dockerfile:
RUN ONVAULT npm install --unsafe-perm
Our first implementation around this concept is available at dockito/vault.
To develop images locally we use a custom development box that runs the Dockito Vault as a service.
The only drawback is that it requires the HTTP server to be running, so no Docker Hub automated builds.
Mount the encrypted keys into the container, then pass the password via a pipe. The difficulty comes with detached mode, which will hang while reading the pipe inside the container. Here is a trick to work around it:
cid=$(docker run -d -i alpine sh -c 'read A; echo "[$A]"; exec some-server')
docker exec -i $cid sh -c 'cat > /proc/1/fd/0' <<< _a_secret_
First, create the docker container with the -i option; the command read A will hang, waiting for input from /proc/1/fd/0.
Then run the second docker command, reading the secret from stdin and redirecting it to the hanging read process.
