How to share dynamically generated secrets between Docker containers - security

I have linked together a couple of Docker containers that use each other's API endpoints. These endpoints are protected by secrets that are generated on container startup. I'm looking for a safe way to share these secrets between the services without doing anything static (e.g. hardcoding them). The services are created and linked together using docker-compose, and it is possible to override a secret with an environment variable, although this is discouraged for production.
What is, in my case, the safest way to distribute these secrets?
Things I have considered:
Using a central data container that stores these secrets in a file. The clients can then link to this container and look up the secret in the file.
The huge downside of this approach is that it limits the containers to running on the same node.
Generating a docker-compose file with these random secrets hardcoded into it before deploying the containers.
The downside of this approach is that you could no longer use the docker-compose file on its own; you would depend on a bash script to generate something as mission-critical as these secrets. It would also not satisfy the sidenote below, that the solution should adapt dynamically to secret changes.
Sidenote
Ultimately, I would prefer it if the solution could also adapt dynamically to secret changes. For example, when a container fails, it will restart automatically, thus also generating a new secret.

Related

Hashicorp Vault - Send command or api call on variable edit

Let's suppose I make a change to one variable in a Key-Value engine in my Hashicorp Vault.
Once I apply the change, it creates a new version of the variable, as expected.
Can I somehow send an API call, or at least run a command, from Hashicorp Vault itself? What I want to achieve is that when I change a variable inside Hashicorp Vault, it triggers a CI/CD build in GitLab CI.
No, Hashicorp Vault itself does not have any event/callback or similar mechanism for when secrets are updated.
However, depending on what storage backend you are using, you may be able to use features of your backend to support this. To give a few examples:
If you are using Consul as the backend for Vault, you can use watches in Consul to monitor for key/value pair changes.
If you are using DynamoDB, you can configure streams to trigger other workflows, like a lambda function to run your pipeline.
If you are using S3 as the storage backend, you can leverage S3 event notifications.
Other backends will have their own mechanisms for this. However, not all backends will support this directly.
If you don't have a backend that supports this, your next best bet would be to periodically poll and check for updated values. For example, you might use scheduled pipelines to do this.
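The polling approach boils down to: read the secret's metadata from the KV v2 API, remember `current_version`, and trigger the pipeline when it increases. A sketch of the version check (the sample payload below is trimmed to the fields that matter):

```javascript
// Sketch of version-change detection against Vault's KV v2 metadata
// endpoint (GET /v1/<mount>/metadata/<path>). The response body carries
// data.current_version, which increments on every secret update.
function extractVersion(metadataResponse) {
  return metadataResponse.data.current_version;
}

// Returns true when the stored version lags behind what Vault reports,
// i.e. when the secret changed since the last poll.
function hasNewVersion(lastSeenVersion, metadataResponse) {
  return extractVersion(metadataResponse) > lastSeenVersion;
}

// Example payload shaped like a KV v2 metadata response (trimmed)
const sample = { data: { current_version: 4, oldest_version: 0 } };
console.log(hasNewVersion(3, sample)); // prints: true (version moved 3 -> 4)
```

In a scheduled GitLab pipeline, the job would fetch the metadata endpoint with a Vault token, compare against the last version it recorded, and hit the GitLab pipeline-trigger API when this check returns true.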

Google Cloud Secrets - Reusing a secret

I am using Google Cloud Secrets in a NodeJS Project. I am moving away from using preset environment variables and trying to find out the best practice to store and reuse secrets.
The 3 main routes I've found to use secrets are:
Fetching all secrets on startup and set them as ENV variables for later use
Fetching all secrets on startup and set as constant variables
Each time a secret is required, fetch it from Cloud Secrets
Google's own best practice documentation mentions 2 conflicting things:
Use ENV variables to set secrets at startup (source)
Don't use ENV variables as they can be accessed in debug endpoints and traversal attacks among other things (source)
My questions are:
Should I store secrets as variables to be re-used or should I fetch them each time?
Does this have an impact on quotas?
The best practice is to load the secret once (at startup, or the first time it is accessed) to optimize performance and avoid API-call latency. And yes, the secret-access quota is consumed on each access.
If a debugger tool is connected to the environment, both in-process variables and env var data can be compromised; the threat is roughly the same either way. Be sure to secure the environment correctly.
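A minimal sketch of the load-once advice: wrap the secret fetch in a cache so the Secret Manager API is only hit on first access. The fetcher below is a stand-in; in a real project it would be an (async) call such as `accessSecretVersion` from `@google-cloud/secret-manager`.

```javascript
// Sketch: cache each secret after the first fetch so repeated reads
// do not count against the access quota. `fetchSecret` is a stand-in
// for a real Secret Manager call (which would be async in practice).
function makeSecretCache(fetchSecret) {
  const cache = new Map();
  return function getSecret(name) {
    if (!cache.has(name)) {
      cache.set(name, fetchSecret(name)); // only the first read hits the API
    }
    return cache.get(name);
  };
}

// Stand-in fetcher that counts how often the "API" is actually called
let apiCalls = 0;
const getSecret = makeSecretCache((name) => {
  apiCalls += 1;
  return `value-of-${name}`;
});

getSecret("DB_PASSWORD");
getSecret("DB_PASSWORD");
console.log(apiCalls); // prints: 1 -- the second read came from the cache
```

The trade-off is the one from the question: a cached value lives in process memory for the lifetime of the app, so a rotated secret is only picked up on restart (or when the cache is deliberately invalidated).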

Is it secure to store private values in a .env file?

I'm trying to build a node.js server with the express framework, and I want to store a private key for admin APIs on my server. I'm now using a .env file to store those values, and in my routes I use them by calling, e.g., process.env.ADMIN_KEY.
Question
Is this a secure way to handle private data, or is there a better way?
It is more secure to store your secrets in a .env file than in the source code itself. But you can do one better. Here are the ways I've seen secrets managed, from least to most secure:
Hard-code the secrets in the code.
Pros: None. Don't do this.
Cons: Your developers will see your production secrets as part of their regular work. Your secrets will be checked into source control. Both are security risks. Also, you have to modify the code to use it in different environments, like dev, test, and production.
Put secrets in environment variables, loaded from a .env file.
Pros: Developers won't see your production secrets. You can use different secrets in dev, test, and production, without having to modify the code.
Cons: Malicious code can read your secrets. The bulk of your application's code is probably open-source libraries. Bad code may creep in without you knowing it.
Put secrets in a dedicated secret manager, like Vault by HashiCorp or Secret Manager by Google Cloud.
Pros: It's harder for malicious code to read your secrets. You get auditing of who accessed secrets when. You can assign fine-grained roles for who updates secrets and who can read them. You can update and version your secrets.
Cons: It's additional technology that you have to learn. It may be an additional piece of software that you need to set up and manage, unless it's included in the cloud platform you're using.
So the choice is really between items 2 and 3 above. Which one you pick will depend on how sensitive your secrets are and how much extra work it would be to use a dedicated secret manager. For example, if your project is running on Google Cloud Platform, the Secret Manager is just one API call away. It may be just as easy on the other major cloud platforms, but I don't have first-hand experience with them.
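For option 2, the `dotenv` package usually does the loading; conceptually it is little more than the sketch below. This is a simplified stand-in that ignores quoting and multiline values, which real dotenv handles:

```javascript
// Simplified sketch of what dotenv-style loading does: parse KEY=VALUE
// lines and copy them into process.env without overwriting variables
// that are already set. Real dotenv also handles quoting, escapes, etc.
function parseEnv(text) {
  const result = {};
  for (const line of text.split("\n")) {
    const trimmed = line.trim();
    if (trimmed === "" || trimmed.startsWith("#")) continue; // skip blanks/comments
    const eq = trimmed.indexOf("=");
    if (eq === -1) continue; // not a KEY=VALUE line
    result[trimmed.slice(0, eq).trim()] = trimmed.slice(eq + 1).trim();
  }
  return result;
}

function loadEnv(text) {
  const parsed = parseEnv(text);
  for (const [key, value] of Object.entries(parsed)) {
    if (!(key in process.env)) process.env[key] = value; // existing env wins
  }
  return parsed;
}

const parsed = loadEnv("# admin credentials\nADMIN_KEY=s3cret\n");
console.log(parsed.ADMIN_KEY); // prints: s3cret
```

Seeing the mechanism makes the security trade-off from option 2 concrete: after loading, the secret sits in `process.env`, where any code in the process, including third-party libraries, can read it.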
The simple answer is YES, .env is used to store keys and secrets. It is not pushed to your repo, i.e. GitHub, Bitbucket, or wherever you store your code. That way it is not exposed.
Here are the tutorial links for correct usage:
managing-environment-variables-in-node-js-with-dotenv
how-secure-is-your-environment-file-in-node-js
Secrets stored in environment variables are at risk of being exposed (for non-private node apps), as, for example, libraries you use might print the environment into the log in case of an error. So it would be safer to store them in a file outside of source control and import it where needed.
https://movingfast.io/articles/environment-variables-considered-harmful/
The simple answer is yes. An additional layer of security can be added by using encrypted values. Also avoid checking your .env file into a public repo.
You can and should store secrets, credentials, or private data in a .env file: it is a secure environment-config section in your projects, useful for storing API keys and app credentials. Only invited collaborators are able to see the contents of your .env file.

Is there a good way to share configuration between apps in Azure?

We have a large system built in Azure apps. It is made up of an App Service for our API and several Functions Apps for backend processing.
What's the best way to allow these apps to share configuration?
We use ARM templates currently to set up the environment variables for each app, which is fine for deploy-time, but there's nothing to keep the config in sync between the apps.
A use case might be a feature flag that controls whether a sub-system is operational. We might want this flag to be used in the API and a Functions App. At present we can manually go in and set the variable in each of the apps, but it would be easier to manage if we only had to do it in one location.
Ideally, any update to the config would be detected by Azure and trigger a restart of the service, as currently happens with the native implementation.
Is there a good, off-the-shelf, way to do this? Or will I be rolling my own with a table in a database and a lightweight function?
One way would be to use the new App Configuration service: https://learn.microsoft.com/en-us/azure/azure-app-configuration/overview.
It is meant for sharing configuration settings across components.
Note it is not meant for secrets, that's what Key Vault is for.
There is guidance/a design pattern for this from Microsoft; it can be found here.
Best practice in architecture: you can use the external configuration store pattern and use a Redis Cache to share the configuration between multiple applications, as described here: https://learn.microsoft.com/en-us/azure/architecture/patterns/external-configuration-store
The approach is to get this data from the app settings for each environment (this can be automated in the CI/CD pipeline). On first connection you store the data in the Redis Cache.
For sensitive data: use Key Vault to store the secrets/keys/certificates.
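The external-configuration-store pattern boils down to: check the shared cache first, fall back to the per-app settings on a miss, and populate the cache so other apps see the same value. With the Redis client and the backing settings stubbed out as Maps (both hypothetical stand-ins), the shape is roughly:

```javascript
// Cache-aside sketch of the external configuration store pattern.
// `cache` stands in for a shared Redis instance and `backingStore`
// for the per-environment app settings; both are hypothetical stubs.
function makeConfigReader(cache, backingStore) {
  return function getSetting(key) {
    if (cache.has(key)) return cache.get(key); // fast path: shared cache
    const value = backingStore.get(key);       // slow path: source of truth
    if (value !== undefined) cache.set(key, value); // populate for other apps
    return value;
  };
}

const cache = new Map();
const backingStore = new Map([["FeatureX:Enabled", "true"]]);
const getSetting = makeConfigReader(cache, backingStore);

getSetting("FeatureX:Enabled"); // miss: loads from the backing store
console.log(cache.has("FeatureX:Enabled")); // prints: true
```

This covers the shared-read side; the restart-on-change requirement from the question still needs a separate notification or polling mechanism, which is what App Configuration provides out of the box.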

Ansible vault password file

I was thinking: since we already have a secret file that we use to access the servers (the ssh private key), how much of a security risk would it be to use this file as the key file for the vault?
The benefit would be that we only have to secure the ssh private key instead of having another key for the vault.
I like your thought of reducing secrets, but I have some concerns of using the ansible private key.
Scenario
Ideally, the private key you mention exists only on your management machine, from which you run your playbooks. The way I see it, the more this key is distributed among other machines/systems, the more likely it is to be compromised. The ansible private key usually gives root access to any provisioned machine in your system, which makes it a very valuable secret. I never provision the ansible private key with ansible itself (which would be a chicken-and-egg problem anyway, at least on the first management machine).
Problem
One potential problem I see with that approach is when developing roles locally, e.g., with vagrant.
You would need to use the private key from your management system locally to decrypt the secrets and run your playbooks against your vagrant boxes.
Also, any other developer who works on the same ansible project would need that private key locally for development.
Potential workaround
My premise is that the private key does not leave the management server. To achieve that, you could develop your roles in a way that local development does not need any secret decryption, e.g. create a local development counterpart for each production group that uses only non-encrypted fake data. That way you would only need to decrypt secrets on your management machine and wouldn't need the private key locally, but of course this means a higher development effort for your ansible project.
I always try to use this approach anyway as much as possible, but from time to time you might find yourself in a scenario where you still need to decrypt some valid API key for your vagrant boxes. In some projects you might want to use your ansible playbooks not only for production servers but also to locally provision vagrant boxes for the developers, which is usually when you need to decrypt a certain number of valid secrets.
Also worth mentioning, with this approach changes to the production secrets could only be made directly on the management server with the private key.
Conclusion
All in all, I think that while it would be theoretically possible to use the private key as the vault password, the benefit of having one less secret is too small compared to the overhead of the extra security concerns.