Docker Compose - How to handle credentials securely?

I have been trying to understand how to handle credentials (e.g. database passwords) with Docker Compose (on Linux/Ubuntu) in a secure but not overly complicated way. I have not yet been able to find a definitive answer.
I saw multiple approaches:
Using environment variables to pass credentials (see the sketch after this question). However, this would mean that passwords are stored as plain text both on the system and in the container itself. Storing passwords as plain text isn't something I would be comfortable with. I think most people use this approach - how secure is it?
Using Docker secrets. This requires Docker Swarm, though, which would just add unnecessary overhead since I only have one Docker host.
Using a Password Vault to inject credentials into containers. This approach seems to be quite complicated.
Is there no other secure, standardized way to manage credentials for Docker containers created with Docker Compose? Docker secrets without the need for Docker Swarm would be perfect, if such a thing existed.
Thank you in advance for any responses.
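For concreteness, the environment-variable approach from the list above typically looks something like the following sketch; the variable name, service name, and use of the official postgres image are only assumptions for illustration:
# .env sits next to docker-compose.yml and is read by Compose automatically
cat > .env <<'EOF'
DB_PASSWORD=changeme
EOF
cat > docker-compose.yml <<'EOF'
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
EOF
docker compose up -d
# This is the weakness described above: the value is recoverable in plain text
docker compose exec db env | grep POSTGRES_PASSWORD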

Related

Best practice for Docker inter-container communication

I have two Docker containers, A and B. On container A a Django application is running. On container B a WebDAV source is mounted.
Now I want to check from container A whether a folder exists in container B (in the WebDAV mount destination).
What is the best solution for something like that? Currently I have solved it by mounting the Docker socket into container A to execute commands from A inside B. I am aware that mounting the Docker socket into a container is a security risk for the host and the whole application stack.
Other possible solutions would be to use SSH, or to share and mount the directory that should be checked. Of course there are further options, such as doing it with HTTP requests.
Because there are so many ways to solve a problem like this, I want to know whether there is a best practice (considering security, implementation effort, and performance) for executing commands from container A in container B.
Thanks in advance
WebDAV provides a file-system-like interface on top of HTTP. I'd just use this directly. It requires almost no setup other than providing the other container's name in configuration (and, if you're using plain docker run, putting both containers on the same network), and it's the same setup in basically all container environments (including Docker Swarm, Kubernetes, Nomad, AWS ECS, ...) as well as in a non-Docker development environment.
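As a rough sketch of what such a folder-existence check could look like over WebDAV from container A, assuming the WebDAV container is reachable under the hostname webdav on a shared Docker network and the credentials sit in environment variables (all of these names are assumptions):
# PROPFIND with "Depth: 0" asks only about the resource itself;
# 207 Multi-Status means the folder exists, 404 means it does not
curl -s -o /dev/null -w '%{http_code}\n' \
  -X PROPFIND -H 'Depth: 0' \
  -u "$WEBDAV_USER:$WEBDAV_PASSWORD" \
  http://webdav/shared/some-folder/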
Of the other options you suggest:
Sharing a filesystem is possible. It leads to potential permission problems which can be tricky to iron out. There are potential security issues if the client container isn't supposed to be able to write the files. It may not work well in clustered environments like Kubernetes.
ssh is very hard to set up securely in a Docker environment. You don't want to hard-code a plain-text password that can be easily recovered from docker history; a best-practice setup would require generating host and user keys outside of Docker and bind-mounting them into both containers (I've never seen a setup like this in an SO question). This also brings the complexity of running multiple processes inside a container.
Mounting the Docker socket is complicated, non-portable across environments, and a massive security risk (you can very easily use the Docker socket to root the entire host). You'd need to rewrite that code for each different container environment you might run in. This should be a last resort; I'd consider it only if creating and destroying containers would need to be a key part of this one container's operation.
Is there a best practice for executing commands from container A in container B?
"Don't." Rearchitect your application to have some other way to communicate between the two containers, often over HTTP or using a message queue like RabbitMQ.
One solution would be to mount the same volume read-only in one container and read-write in the other container.
See this answer: Docker, mount volumes as readonly
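A minimal sketch of that layout with plain docker run; the volume and image names are made up, and the WebDAV-serving image is assumed to write into /data:
docker volume create shared-data
docker run -d --name webdav -v shared-data:/data my-webdav-image        # read-write side
docker run -d --name django -v shared-data:/data:ro my-django-image     # read-only side
# container A can then check for the folder with a plain filesystem test
docker exec django test -d /data/some-folder && echo exists || echo missing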

Docker plugin - passing and storing passwords in a standalone docker setup

I'm working on a docker plugin that needs to access an external service using a password. The password should be configured at plugin install time and be available during the plugin's lifetime.
Currently I'm using env variables, optionally reading the password from a file via VAR=$(cat password_file). This approach is convenient, but doesn't seem like a very good solution as the password can be looked up using docker plugin inspect.
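For reference, the pattern described here probably looks roughly like this (the plugin name and variable are placeholders), and the inspect call is where the value leaks:
docker plugin install example/my-plugin PASSWORD="$(cat password_file)"
# anyone with access to the docker CLI can read the value back from the plugin settings
docker plugin inspect example/my-plugin --format '{{ .Settings.Env }}'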
I wonder what would be the best way to pass and store passwords in a plugin using a standalone Docker setup. Swarm and Kubernetes (and probably other orchestration solutions) support secrets. Unfortunately, standalone Docker doesn't seem to support secrets, and the customer's Docker setup is not under my control :-(
I did look through the documentation and spent time googling the answer, but came up empty. In fact, I saw a few generic threads about storing passwords in containers with no satisfactory answers, but those were from a few years ago, and I was hoping that maybe in 2018 such a basic issue would have a decent solution.
P.S. This is my first question - please be gentle with me.

Why are Docker Secrets considered safe?

I read about Docker Swarm secrets and also did some testing.
As far as I understand, secrets can replace sensitive environment variables provided in a docker-compose.yml file (e.g. database passwords). As a result, when I inspect the docker-compose file or the running container, I will not see the password. That's fine - but how does that really help?
If an attacker is on my Docker host, he can easily take a look into /run/secrets:
docker exec -it df2345a57cea ls -la /run/secrets/
and can also look at the data inside:
docker exec -it df27res57cea cat /run/secrets/MY_PASSWORD
The same attacker can usually open a shell in the running container and look at how it is working.
Also, if an attacker is inside the container itself, he can look around.
So I do not understand why Docker secrets are more secure than writing the passwords directly into the docker-compose.yml file.
A secret stored in the docker-compose.yml is visible inside that file, and that file is typically checked into version control, where others can see the value. The secret is also visible in commands like docker inspect on your containers, and from there it's visible inside your container as well.
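For example, with the container ID from the question above, the environment (and any password passed through it) can be read back directly:
# environment variables, including any password passed this way, are shown in clear text
docker inspect --format '{{ .Config.Env }}' df2345a57cea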
A docker secret, conversely, is encrypted on disk on the managers, is only held in memory on the workers that need it (the file visible inside the container sits on a tmpfs, i.e. it is stored in RAM), and is not visible in the docker inspect output.
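The tmpfs claim is easy to verify from inside a running Swarm service container (container ID reused from the question, assuming grep is available in the image):
docker exec -it df2345a57cea grep secrets /proc/mounts
# the matching mount entries should show the filesystem type "tmpfs", i.e. RAM-backed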
The key part here is that you are keeping your secret outside of your version control system. With tools like Docker EE's RBAC, you are also keeping secrets out of view of anyone who doesn't need access, by removing their ability to docker exec into a production container or to use a docker secret from a production environment. That can be done while still giving developers the ability to view logs and inspect containers, which may be necessary for production support.
Also note that you can configure a secret inside the docker container to only be readable by a specific user, e.g. root. And you can then drop permissions to run the application as an unprivileged user (tools like gosu are useful for this). Therefore, it's feasible to prevent the secret from being read by an attacker that breaches an application inside a container, which would be less trivial with an environment variable.
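A sketch of that pattern in a Swarm stack file; the secret name, the image, and the assumption that the image's entrypoint starts as root and drops privileges (e.g. via gosu) before launching the app are all placeholders:
cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  app:
    image: my-app-image          # hypothetical image; its entrypoint reads the secret
                                 # as root and then execs the app as an unprivileged user
    secrets:
      - source: db_password
        target: db_password
        uid: "0"
        gid: "0"
        mode: 0400               # readable only by root inside the container
secrets:
  db_password:
    external: true               # created beforehand with `docker secret create`
EOF
docker stack deploy -c docker-compose.yml mystack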
Docker secrets are designed for Swarm, not for a single node running some containers or for Docker Compose on one machine (they can be used there, but that is not their main purpose). If you have more than one node, Docker secrets are more secure than copying your secrets onto every worker machine, because they are distributed only to the machines that need them, based on which containers will run there.
See this blog: Introducing Docker Secrets Management

Do we really need security updates on docker images

This question has come to my mind many times and I just wanted everyone to pitch in with their thoughts on it.
The first thing that comes to my mind is that a container is not a VM and is almost equivalent to a process running in isolation on the host instance. So why do we need to keep updating our Docker images with security updates? If we have taken sufficient steps to secure our host instance, the Docker containers should be safe. And even thinking about it from the direction of multi-layered security: if the Docker host is compromised, there is no way to stop the attacker from accessing all the containers running on the host, no matter how many security updates you applied to the Docker image.
Are there any particular scenarios anybody can share where security updates for Docker images have really helped?
Now I understand if somebody wants to update Apache running in the container, but are there reasons to do OS-level security updates for images?
An exploit can be dangerous even if it does not give you access to the underlying operating system. Just being able to do something within the application itself can be a big issue. For example, imagine you could inject scripts into Stack Overflow, impersonate other users, or obtain a copy of the complete user database.
Also, just like any software, Docker (and the underlying OS-provided container mechanism) is not perfect and can have bugs. As such, there may be ways to bypass the isolation and break out of the sandbox.

Docker for a one shot CLI application

Ever since I first heard of Docker, I have thought it might be the solution to several problems we usually face at the lab. I work as a data analyst for a small biology research group. I am using Snakemake to define the (usually big and quite complex) workflows for our analyses.
From Snakemake, I usually call small scripts in R or Python, or even command-line applications such as aligners or annotation tools. In this scenario, it is not uncommon to suffer from dependency hell, hence I was thinking about wrapping some of the tools in Docker containers.
At this moment I am stuck at a point where I do not know whether I have chosen the technology badly, or whether I am simply not able to properly assimilate all the information about Docker.
The problem is related to the fact that you have to run the Docker tools as root, which is something I would not like to do at all, since the initial idea was to make the dockerized applications available to every researcher willing to use them.
On AskUbuntu, the most voted answer proposes adding the final user to the docker group, but it seems that this is not good for security. The security articles from Docker, on the other hand, explain that requiring root to run the tools is good for your security. I have found similar questions on SO, but related to the environment inside the container.
OK, I have no problem with that, but like every moderate-complexity example I happen to find, it all seems oriented towards web application development, where the system starts the container once and can then forget about it.
Things I am considering right now:
Configuring the Docker daemon as a TLS-enabled TCP remote service and providing the corresponding users with certificates. Would there be any overhead in running the applications this way? Any security issues?
Creating images that only make the application available to the host by sharing a /usr/local/bin/ volume or similar. Is this secure? How can you create a daemonized container that does not need to execute anything? The only example I have found implies creating an infinite loop (see the sketch after this list).
The nucleotid.es page seems to do something similar to what I want, but I have not found any reference to security issues. Maybe they are running all the containers inside a virtual machine, where they do not have to worry about these issues, because they do not need to expose the dockerized applications to more people.
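Regarding the second point, the usual trick for a container that stays up without doing anything is simply an idle command as its main process; the image and tool names here are hypothetical, and sleep infinity assumes a coreutils-style sleep:
docker run -d --name toolbox example/bio-tools sleep infinity
# the tools inside can then be invoked on demand
docker exec toolbox samtools --version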
Sorry about my verbosity. I just wanted to write down the mental process (possibly flawed, I know, I know) where I am stuck. To sum up:
Is there any possibility to create a dockerized command line application which does not need to be run using sudo, is available for several people in the same server, and which is not intended to run in a daemonized fashion?
Thank you in advance.
Regards.
If users are able to execute docker run, they can effectively control the host system, because they can map arbitrary host files into a container and they can always be root inside the container via docker run or docker exec. So users should not be able to invoke docker directly. I think the easiest solution here is to create fixed wrapper scripts that run docker with a predefined set of arguments and give users sudo access to those scripts, as sketched below (note that the setuid bit is ignored on interpreted scripts on Linux, so sudo is the practical route).
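A sketch of such a wrapper, assuming sudo rather than the setuid bit; the script name, image, and sudoers rule are made up:
#!/bin/sh
# /usr/local/bin/run-aligner: fixed wrapper around docker run (hypothetical tool/image).
# Give users a sudoers rule like:
#   %researchers ALL=(root) NOPASSWD: /usr/local/bin/run-aligner
# so they can run only this script as root, not arbitrary docker commands.
set -eu
: "${SUDO_USER:?please run this script via sudo}"
exec docker run --rm \
  -u "$(id -u "$SUDO_USER"):$(id -g "$SUDO_USER")" \
  -v "$PWD":/data -w /data \
  example/aligner:latest "$@"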
