Securely store Hash in Docker Image - python-3.x

I am building a series of applications using Docker and want to securely store my API keys, DB access keys, etc. In an effort to make my application more secure, I am storing my configuration file in a password-protected zip archive on a read-only volume. I can use the Python ZipFile package to read the configuration from it, including supplying a password.
However, I don't want to store the password explicitly in the image, for obvious reasons. I have played around with passlib to generate a hash of the password and compare against it. While I am fine with storing the hash in a file in the image, I'd like to generate the hash without the plaintext value ending up in a layer of the image.
Would it be good practice to do this? The Dockerfile I have in mind would look like the following:
FROM my_custom_python_image:3.6
WORKDIR /app
COPY . /app
RUN python -m pip install -r requirements.txt
RUN python create_hash.py --token 'mysecret' >> myhash.txt
# The rest of the file here
And create_hash.py would look like:
from passlib.hash import pbkdf2_sha256
import argparse
# Parse the --token flag from the command line
parser = argparse.ArgumentParser()
parser.add_argument('--token', required=True)
args = parser.parse_args()
hash = pbkdf2_sha256.encrypt(args.token, rounds=200000, salt_size=16)
print(hash)
If my Dockerfile is not stored in the image and the file system is read-only, is the value I pass to --token stored anywhere? If it is, what's a good workaround here? Again, the end goal is to use context.verify(user_token, hash) to pass user_token to ZipFile and not explicitly store the password anywhere.

You should pass these values at deployment/run time, not at build time.
This makes your application more flexible (it can be used in different environments with only parameter changes) and more secure, because the keys are simply not in the image.
How to pass values securely during deployment depends on the deployment environment and its features.
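For example, a minimal sketch of run-time injection (the variable names are hypothetical; the image name is taken from the question):
docker run -e API_KEY="$API_KEY" -e DB_PASSWORD="$DB_PASSWORD" my_custom_python_image:3.6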

Anything in a RUN command will later be visible via docker history.
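For example, anyone who pulls an image built from the Dockerfile above can recover the token (the image name is hypothetical):
docker history --no-trunc my_image
One of the layer entries will show the full command line, including --token 'mysecret'.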
The most secure readily accessible way to provide configuration like passwords to an application like this is to put the configuration file in a host directory with appropriate permissions, then use docker run -v (or a similar option) to mount it into the running container. Depending on how much you trust your host system, passing options as environment variables works well too (anyone who can run docker inspect, or anyone with root access on the host, can see them; but they could read a config file too).
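A minimal sketch of the bind-mount approach (the host path is hypothetical):
docker run -v /opt/myapp/config:/config:ro my_custom_python_image:3.6
The application then reads its configuration from /config inside the container; the :ro flag keeps the mount read-only.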
With your proposed approach, I suspect you will need the actual password (not a hash) to decrypt the file. Also, configuration by its nature changes somewhat independently of the application, which means you could end up needing to rebuild your application just because a database hostname changed, which isn't what you usually want.

Related

Docker Compose - How to handle credentials securely?

I have been trying to understand how to handle credentials (e.g. database passwords) with Docker Compose (on Linux/Ubuntu) in a secure but not overly complicated way. I have not yet been able to find a definitive answer.
I saw multiple approaches:
Using environment variables to pass credentials. However, this would mean that passwords are stored as plain text both on the system and in the container itself. Storing passwords as plain text isn't something I would be comfortable with. I think most people use this approach - how secure is it?
Using Docker secrets. This requires Docker Swarm though which would just add unnecessary overhead since I only have one Docker host.
Using a Password Vault to inject credentials into containers. This approach seems to be quite complicated.
Is there no other secure, standardized way to manage credentials for Docker containers which are created with Docker Compose? Docker secrets without the need for Docker Swarm would be perfect, if that existed.
Thank you in advance for any responses.

Does Docker-compose only read the configuration at initialization?

I have a Docker-compose file that has several environment variables in it for database users. We have multiple instances of this application each running on its own server, each with a different database user.
My question is: is the docker-compose.yaml file read only once, when running docker-compose build, and not at any point after?
No. Docker Compose reads the YAML file every time you execute a docker-compose command (build, up, info, etc.).
But if you are trying to modify environment variables during an image build or while a container is running - sorry, that isn't going to work.
You can modify environment variables during the service's lifetime (when using swarm), but this will restart the containers. The same happens when you run docker-compose up again on a running project.
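For example, on a swarm you can change a service's environment in place, at the cost of restarting its containers (the service and variable names are hypothetical):
docker service update --env-add DB_USER=newuser my_service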
However, if you wish to have a separate docker-compose file for each of your environments, with its own db usernames and passwords, that will work when you run docker-compose up.
You can also take advantage of the ability to pass multiple YAML files to docker-compose. This way you can have a "base" YAML with the common definitions and per-environment YAMLs where the credentials for each environment are kept, as sketched below.
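A minimal sketch of the multiple-files approach (the file names are hypothetical; values in later files override earlier ones):
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d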
However, if you are concerned about exposing passwords, environment variables are not the solution. Check out docker secrets with docker swarm, or use an external key store, to keep passwords secure.

Why are Docker Secrets considered safe?

I read about docker swarm secrets and also did some testing.
As far as I understand, secrets can replace sensitive environment variables provided in a docker-compose.yml file (e.g. database passwords). As a result, when I inspect the docker-compose file or the running container, I will not see the password. That's fine - but what does it really achieve?
If an attacker is on my docker host, he can easily take a look into /run/secrets:
docker exec -it df2345a57cea ls -la /run/secrets/
and can also look at the data inside:
docker exec -it df27res57cea cat /run/secrets/MY_PASSWORD
The same attacker can usually open a shell on the running container and look at how it's working...
Also, if an attacker is on the container itself, he can look around.
So I don't understand why docker secrets are more secure than writing them directly into the docker-compose.yml file.
A secret stored in the docker-compose.yml is visible inside that file, a file that is often also checked into version control where others can see its values, and it will be visible in commands like docker inspect on your containers. From there, it's also visible inside your container.
A docker secret, conversely, is encrypted on disk on the managers, is only stored in memory on the workers that need it (the file visible in the containers is a tmpfs that lives in RAM), and is not visible in the docker inspect output.
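You can check this from the host, assuming the image ships a mount utility (reusing the container ID from the question):
docker exec df2345a57cea sh -c 'mount | grep /run/secrets'
It should report a tmpfs mount, i.e. the secret only ever lives in RAM on the worker.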
The key part here is that you are keeping your secret outside of your version control system. With tools like Docker EE's RBAC, you also keep secrets out of view of anyone who doesn't need access, by removing their ability to docker exec into a production container or to use a docker secret in a production environment. That can be done while still giving developers the ability to view logs and inspect containers, which may be necessary for production support.
Also note that you can configure a secret inside the docker container to be readable only by a specific user, e.g. root. You can then drop permissions and run the application as an unprivileged user (tools like gosu are useful for this). Therefore, it's feasible to prevent the secret from being read by an attacker who breaches the application inside the container, which would be less trivial with an environment variable.
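A minimal sketch of that setup (all names are hypothetical): create the secret, then grant it to the service with root-only file permissions.
echo 's3cr3t' | docker secret create db_password -
docker service create --name app --secret source=db_password,target=db_password,uid=0,gid=0,mode=0400 my_image
The entrypoint can then read /run/secrets/db_password as root before dropping to an unprivileged user with gosu.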
Docker secrets are designed for swarm, not for a single node with a few containers or for Docker Compose on one machine (it can be used there, but that is not its main purpose). If you have more than one node, docker secrets are more secure than distributing your secrets to every worker machine, because each secret is delivered only to the machines that need it, based on which containers will be running there.
See this blog: Introducing Docker Secrets Management

Secure way to access encryption keys and credential data in nodejs environment?

I am running nodejs apps that access resources on different servers, and I need to store the keys somewhere. It is considered a bad idea to check user credentials or encryption keys into a code repository. What is the best way to manage these keys securely?
One option I found is to save them in environment variables and read their values in the script when needed. Two methods I know of to populate env variables are: set them while running the script on the command line (e.g. USER_KEY=12345 node script.js) or read them from a local file on disk. But I want to access these keys on arbitrary CI pipeline machines, so I can't store a file on disk and can't pass them on the command line.
The second option is to read these keys from a remote machine. Is there a known crypto/key management service, or another popular Node.js way, to save credentials in a key store and let the script read them at run time? Is there another way?

How can I pass secret data to a container

My Tomcat container needs data that has to be well protected, i.e. passwords for database access and certificates and keys for single sign-on to other systems.
I've seen some suggestions to use -e or --env-file to pass secret data to a container, but this can be discovered with docker inspect (--env-file also shows all the properties of the file in docker inspect).
Another approach is to link a data container holding the secrets to the service container, but I don't like the concept of having this data container in my registry (accessible to a broader range of people). I know I can set up a private registry, but I would need different registries for test and production, and still everyone with access to the production registry could access the secret data.
I'm thinking about setting up my servers with a directory that contains the secret data and mounting that directory into my containers. This would work nicely with test and production servers having different secrets. But it creates a dependency of the containers on my specific servers.
So my question is: how do you handle secret data, and what's the best solution to this problem?
Update January 2017
Docker 1.13 now has the command docker secret with docker swarm.
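A minimal usage sketch (the names are hypothetical; this requires a node in swarm mode):
printf 'changeit' | docker secret create keystore_password -
docker service create --name tomcat --secret keystore_password my_tomcat_image
The service then reads the value from /run/secrets/keystore_password.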
See also "Why is ARG in a DOCKERFILE not recommended for passing secrets?".
Original answer (Sept 2015)
The notion of a docker vault, alluded to by Adrian Mouat in his previous answer, was actively discussed in issue 1030 (the discussion continues in issue 13490).
For now it was rejected as out of scope for Docker, but the discussion also included this suggestion:
We've come up with a simple solution to this problem: a bash script that, once executed through a single RUN command, downloads private keys from a local HTTP server, executes a given command and deletes the keys afterwards.
Since we do all of this in a single RUN, nothing gets cached in the image. Here is how it looks in the Dockerfile:
RUN ONVAULT npm install --unsafe-perm
Our first implementation around this concept is available at dockito/vault.
To develop images locally we use a custom development box that runs the Dockito Vault as a service.
The only drawback is that it requires the HTTP server to be running, so builds on Docker Hub are not possible.
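Expanded, the same single-RUN pattern looks roughly like this (the key server address is hypothetical, and curl is assumed to be available in the base image):
RUN mkdir -p /root/.ssh \
 && curl -s http://172.17.0.1:8080/id_rsa -o /root/.ssh/id_rsa \
 && npm install --unsafe-perm \
 && rm -rf /root/.ssh
Because the download, the build step and the deletion all happen in a single layer, the key never persists in the image.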
Another option: mount the encrypted keys into the container, then pass the password via a pipe. The difficulty comes with detached mode, which will hang while reading the pipe within the container. Here is a trick to work around that:
cid=$(docker run -d -i alpine sh -c 'read A; echo "[$A]"; exec some-server')
docker exec -i $cid sh -c 'cat > /proc/1/fd/0' <<< _a_secret_
First, the container is started with the -i option; the read A command hangs waiting for input from /proc/1/fd/0.
Then the second docker command reads the secret from stdin and redirects it to the last hanging process.
