How can I pass secret data to a container - security

My Tomcat container needs data that has to be well protected, such as passwords for database access and certificates and keys for single sign-on to other systems.
I've seen suggestions to use -e or --env-file to pass secret data to a container, but this can be discovered with docker inspect (--env-file also exposes all the properties of the file in docker inspect).
Another approach is to link a data container holding the secrets to the service container, but I don't like the idea of having this data container in my registry, where it is accessible to a broader range of people. I know I can set up a private registry, but I would need different registries for test and production, and everyone with access to the production registry could still read the secret data.
I'm thinking about setting up my servers with a directory that contains the secret data and mounting that directory into my containers. This would work nicely with test and production servers having different secrets, but it ties the containers to my specific servers.
So my question is: how do you handle secret data, and what's the best solution to this problem?
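The bind-mount approach described in the question could look roughly like this (all paths, file names, and the image name are hypothetical):

```shell
# On the host: keep the secrets in a root-owned directory, outside version control
mkdir -p /etc/myapp/secrets
echo 'db-password' > /etc/myapp/secrets/db_password
chmod 600 /etc/myapp/secrets/db_password

# Mount the directory read-only into the container; the application
# then reads /run/secrets/db_password at startup
docker run -d \
  -v /etc/myapp/secrets:/run/secrets:ro \
  my-tomcat-image
```

Test and production hosts can carry different contents under the same path, so the image itself stays secret-free.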

Update January 2017
Docker 1.13 now has the command docker secret with docker swarm.
See also "Why is ARG in a DOCKERFILE not recommended for passing secrets?".
Original answer (Sept 2015)
The notion of a docker vault, alluded to by Adrian Mouat in his previous answer, was actively discussed in issue 1030 (the discussion continued in issue 13490).
It was rejected for now as being out of scope for docker, but the discussion also included this suggestion:
We've come up with a simple solution to this problem: A bash script that once executed through a single RUN command, downloads private keys from a local HTTP server, executes a given command and deletes the keys afterwards.
Since we do all of this in a single RUN, nothing gets cached in the image. Here is how it looks in the Dockerfile:
RUN ONVAULT npm install --unsafe-perm
Our first implementation around this concept is available at dockito/vault.
To develop images locally we use a custom development box that runs the Dockito Vault as a service.
The only drawback is that it requires the HTTP server to be running, so no Docker Hub automated builds.

Mount the encrypted keys into the container, then pass the passphrase via a pipe. The difficulty comes with detached mode, which hangs while reading the pipe inside the container. Here is a trick to work around that:
cid=$(docker run -d -i alpine sh -c 'read A; echo "[$A]"; exec some-server')
docker exec -i $cid sh -c 'cat > /proc/1/fd/0' <<< _a_secret_
First, run the container with the -i option; the command read A will hang waiting for input from /proc/1/fd/0.
Then run the second docker command, which reads the secret from stdin and redirects it to the hanging read inside the container.

Why are Docker Secrets considered safe?

I read about Docker swarm secrets and also did some testing.
As far as I understand, secrets can replace sensitive environment variables provided in a docker-compose.yml file (e.g. database passwords). As a result, when I inspect the docker-compose file or the running container, I will not see the password. That's fine - but what does it really help?
If an attacker is on my Docker host, he can easily take a look into /run/secrets:
docker exec -it df2345a57cea ls -la /run/secrets/
and can also look at the data inside:
docker exec -it df27res57cea cat /run/secrets/MY_PASSWORD
The same attacker can usually open a shell on the running container and look at how it is working.
And if an attacker is inside the container itself, he can look around as well.
So I don't understand why Docker secrets are more secure than writing them directly into the docker-compose.yml file.
A secret stored in the docker-compose.yml is visible inside that file, which is typically checked into version control where others can see its values, and it is also visible in commands like docker inspect on your containers. From there, it's visible inside your container as well.
A Docker secret, conversely, is encrypted on disk on the managers, only stored in memory on the workers that need it (the file visible in the containers is a tmpfs kept in RAM), and is not visible in the docker inspect output.
The key part here is that you are keeping your secret outside of your version control system. With tools like Docker EE's RBAC, you are also keeping secrets out of view from anyone that doesn't need access by removing their ability to docker exec into a production container or using a docker secret for a production environment. That can be done while still giving developers the ability to view logs and inspect containers which may be necessary for production support.
Also note that you can configure a secret inside the docker container to only be readable by a specific user, e.g. root. And you can then drop permissions to run the application as an unprivileged user (tools like gosu are useful for this). Therefore, it's feasible to prevent the secret from being read by an attacker that breaches an application inside a container, which would be less trivial with an environment variable.
Docker secrets are designed for Swarm, not for a single node running some containers or a docker-compose setup on one machine (they can be used there, but that is not their main purpose). With more than one node, Docker secrets are more secure than deploying your secrets to every worker machine, because they are distributed only to the machines that need them, based on which containers will run there.
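As a minimal sketch of that workflow (secret, service, and image names are made up):

```shell
# On a swarm manager: create the secret from stdin
echo 'S3cr3tP4ss' | docker secret create db_password -

# Start a service that receives the secret; inside the container it
# appears as an in-memory file at /run/secrets/db_password,
# readable only by root thanks to mode=0400
docker service create \
  --name tomcat \
  --secret source=db_password,target=db_password,mode=0400 \
  tomcat:9
```

The secret never touches the worker's disk and never shows up in docker inspect, which is the difference from an environment variable or a compose-file value.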
See this blog: Introducing Docker Secrets Management

Persisting content across docker restart within an Azure Web App

I'm trying to run a ghost docker image on Azure within a Linux Docker container. This is incredibly easy to get up and running using a custom Docker image for Azure Web App on Linux and pointing it at the official docker hub image for ghost.
Unfortunately the official docker image stores all data under the /var/lib/ghost path, which isn't persisted across restarts, so whenever the container is restarted all my content gets deleted and I end up back at a default Ghost install.
Azure won't let me execute arbitrary commands; you basically point it at a docker image and it fires off from there, so I can't use the -v command line param to map a volume. The docker image does have an entry point configured, if that would help.
Any suggestions would be great. Thanks!
Set WEBSITES_ENABLE_APP_SERVICE_STORAGE to true in the app settings, and the home directory will be mapped in from your outer Kudu instance:
https://learn.microsoft.com/en-us/azure/app-service/containers/app-service-linux-faq
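For reference, the setting can be applied via the Azure CLI like this (resource-group and app names are placeholders):

```shell
# Enable the persistent /home mapping for the web app's container
az webapp config appsettings set \
  --resource-group <my-resource-group> \
  --name <my-web-app> \
  --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=true
```

After the app restarts, content written under /home inside the container survives restarts.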
You have a few options:
You could mount a file share inside the Docker container by creating a custom image, then storing data there. See these docs for more details.
You could switch to the new Azure Container Instances, as they provide volume support.
You could switch to the Azure Container Service. This requires an orchestrator, like Kubernetes, and might be more work than you're looking for, but it also offers more flexibility, provides better reliability and scaling, and other benefits.
You have to use a shared volume that maps the container's /var/lib/ghost directory to a host directory. This way, your data will persist in the host directory.
To do that, use the following command.
$ docker run -d --name some-ghost -p 3001:2368 -v /path/to/ghost/blog:/var/lib/ghost/content ghost:1-alpine
I have never worked with Azure, so I'm not 100 percent sure the following applies, but if you interface with docker via the CLI there is a good chance it does.
Persistence in docker is handled with volumes. They are basically mounts from a directory on the outside into the container's file system tree. From your text I understand that you want to store the content of the container's /var/lib/ghost path in /home/site/wwwroot on the host. Since -v takes host_path:container_path, you would call docker like this:
$ docker run [...] -v /home/site/wwwroot:/var/lib/ghost ghost
Unfortunately setting the persistent storage (or bring your own storage) to a specific path is currently not supported in Azure Web Apps on Linux.
That said, you can play with SSH and try to configure Ghost to point to /home/ instead of /var/lib/.
I have prepared a docker image here: https://hub.docker.com/r/elnably/ghost-on-azure that adds the SSH capability; the Dockerfile and code can be found here: https://github.com/ahmedelnably/ghost-on-azure/tree/master/1/alpine.
Try it out by configuring your web app to use elnably/ghost-on-azure:latest, browsing to the site (to start the container), and going to the SSH page at .scm.azurewebsites.net. To learn more about SSH, check this link: https://aka.ms/linux-ssh.

Docker Swarm on Azure: Correct use of docker4x/logger-azure

I'm using the predefined build of Docker on Azure (Edge Channel), and one of its features is logging. Checking with docker ps on the manager node, I saw this editions_logger container (docker4x/logger-azure), which catches all the container logs and writes them to an Azure storage account.
How do I use this container directly to get the logs of my containers?
My first approach was to find the right storage and share and download the logs directly from the Azure portal.
The second approach was to connect to the container directly using docker exec -ti editions_logger cat /logmnt/xxx.log
Running docker service logs xxx throws "only supported with experimental daemon".
All of these approaches seem quite overcomplicated (and the third one doesn't work at all). Is there a better way?
I checked both approaches on our cluster, but we found a fairly easy way to check the logs for now. The Azure OMS approach is really good and I can recommend it, but the setup is too heavyweight for us at the moment. The logstash approach is good as well.
Luckily the tail command supports wildcards and using this we can view our logs nicely.
docker exec -ti editions_logger bash
cd /logmnt
tail -f service_name*
Thank you very much for the different approaches! I'm looking forward to the new Swarm features (there is already the docker service logs command, so in the future it should be even easier to check the logs).
Another way: we can use volumes to store container logs on the host, then use Logstash to collect the logs from those volumes.
Create a fixed directory D on the host machine, mount each container's logs into a subdirectory of D, and then mount D into the Logstash container. This way, the Logstash container can collect all logs from the other containers.
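A minimal sketch of that layout (directory names and images are hypothetical):

```shell
# Fixed log root on the host
mkdir -p /var/log/app-logs

# Each application container writes its logs into its own subdirectory
docker run -d --name web \
  -v /var/log/app-logs/web:/var/log/myapp \
  my-web-image

# Logstash sees the whole log root read-only and picks up every service's files
docker run -d --name logstash \
  -v /var/log/app-logs:/logs:ro \
  docker.elastic.co/logstash/logstash:7.17.0
```

Inside the Logstash configuration you would then point a file input at /logs/*/*.log.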

Sharing /etc/passwd between two Docker containers

I'm trying to containerize an application which consists of two services. One is a basic server running on a specific port, and the second one is an SSH server. The first service creates users by executing standard unix commands (managing users and their SSH keys), and those users can then log in to the second, SSH service. Each service runs in a separate container, and I use Docker Compose to make it all run together.
The issue is that I'm unable to define Docker volumes so that a user created by the first service can use the second service. I need to share files like /etc/passwd, /etc/shadow and /etc/group between both services.
What I tried:
1) Define volumes for each file
Not supported in Docker Compose (it can only use directories as volumes)
2) Replace /etc/passwd with a symlink to a copy stored in a volume
as described here
3) Set whole /etc as volume
doesn't work (only some files make it from the image into the volume)
is ugly
I would be thrilled by any suggestion or workaround which wouldn't require putting both services in one container.

Execute host commands from within a docker container

I'm looking for a way for a user to be able to execute a limited set of commands on the host, while only accessing it from containers/browser. The goal is to prevent the need for SSH'ing to the host just to run commands occasionally like make start, make stop, etc. These make commands just execute a series of docker-compose commands and are needed sometimes in dev.
The two possible ways in I can think of are:
Via cloud9 terminal inside browser (we'll already be using it). By default this terminal only accesses the container itself of course.
Via a custom mini webapp (e.g. node.js/express) with buttons that map to commands. This would be easy to do if running on the host itself, but I want to keep all code like this as containers.
Although it might not be best practice, it is still possible to control the host from inside a container. If you are running docker-compose commands, you can bind mount the docker socket by using -v /var/run/docker.sock:/var/run/docker.sock on Ubuntu.
If you want to use other system tools, you will have to bind mount all required volumes using -v; this gets really tricky and tedious when you want to use system binaries that rely on /lib/*.so files.
If you need to use sudo commands, don't forget to add the --privileged flag when running the container.
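For example, mounting the socket lets a container drive the host's Docker daemon (the docker:cli image is just one convenient choice):

```shell
# The docker CLI inside the container talks to the host daemon through
# the mounted socket, so this "docker ps" lists the host's containers
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:cli docker ps
```

Note that anyone who can reach the socket effectively has root on the host, so grant this only where that is acceptable.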
Named pipes can be very useful for running commands on the host machine from docker. Your question is very similar to this one.
A solution using named pipes was also given in the same question. I have tried and tested this approach and it works perfectly fine.
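The idea can be sketched like this (paths and the host-side loop are simplified assumptions, not the exact setup from the linked question):

```shell
# On the host: create a named pipe and a small loop that executes
# whatever command line is written into it
mkfifo /tmp/hostpipe
while true; do
  eval "$(cat /tmp/hostpipe)"   # blocks until a writer sends a command
done &

# Start the container with the pipe bind-mounted:
#   docker run -v /tmp/hostpipe:/hostpipe my-image
# Inside the container, writing to the pipe runs the command on the host:
echo 'make start' > /hostpipe
```

Since anything written to the pipe is executed on the host, restrict write permissions on the pipe or have the loop accept only a whitelisted set of command names.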
That approach would go against the docker concept of process/resource encapsulation. With docker you isolate processes completely, both from the host and from each other (unless you link containers or share volumes). From within a docker container you cannot see any processes running on the host, due to process namespaces. Executing processes on the host from within a container runs counter to that methodology.
A container is not supposed to break out and access the host. Docker is (amongst other things) process isolation. You may find various tricks to execute some code on the host, when you set it up, though.
