I was thinking: since we already have a secret file that we use to access the servers (the SSH private key), how much of a security risk would it be to use this file as the key file for the vault?
The benefit would be that we only have to secure the ssh private key instead of having another key for the vault.
I like your idea of reducing the number of secrets, but I have some concerns about using the Ansible private key.
Scenario
Ideally, the private key you are mentioning exists only on your management machine, from which you run your playbooks. The way I see it, the more this key is distributed among other machines/systems, the more likely it is that it gets compromised. The Ansible private key usually gives access to root on any provisioned machine in your system, which makes it a very valuable secret. I never provision the Ansible private key with Ansible itself (which would be kind of chicken-and-egg anyway, at least on the first management machine).
Problem
One potential problem I see with that approach is when developing roles locally, e.g. with Vagrant.
You would need the private key from your management system locally to decrypt the secrets and run your playbooks against your Vagrant boxes.
Also, any other developer who works on the same Ansible project would need that private key locally for development.
Potential workaround
My premise is that the private key does not leave the management server. To achieve that, you could develop your roles in a way that local development does not need any secret decryption, e.g. create a local dev counterpart for each production group which uses only non-encrypted fake data. That way you would only need to decrypt secrets on your management machine and wouldn't need the private key locally, but of course this means a higher development effort for your Ansible project.
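To make that more concrete, here is a sketch of what such a layout could look like (all group, host and file names are made up for the example):

```
inventories/
  production/
    hosts                  # the real servers
    group_vars/
      webservers/
        vars.yml           # references the encrypted values
        vault.yml          # ansible-vault encrypted, only decrypted on the management machine
  dev/
    hosts                  # the local Vagrant boxes
    group_vars/
      webservers/
        vars.yml           # plain, non-encrypted fake data, e.g. api_key: "dummy-key-for-local-dev"
```

With a structure like this, the dev inventory never touches ansible-vault at all, so developers can run the playbooks against Vagrant without any secret material.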
I always try to use this approach anyway as much as possible, but from time to time you might find yourself in a scenario in which you still need to decrypt some valid API key for your Vagrant boxes. In some projects you might want to use your Ansible playbooks not only for production servers, but also to locally provision Vagrant boxes for the developers, which is usually when you need to decrypt a certain number of valid secrets.
Also worth mentioning: with this approach, changes to the production secrets can only be made directly on the management server that holds the private key.
Conclusion
All in all I think that while it would theoretically be possible to use the private key as the vault password, the benefit of having one secret less is too small compared to the overhead and the extra security concerns that come with it.
Related
I'm developing an application and I want it to be open-source.
In production, the application uses the Azure Key Vault Service only to store the database connection string. The connection string is stored in an environment variable on the production server.
Locally, I'm using an InMemory database from EntityFramework. No sensitive data is accessible.
In production, the application also uses the Azure App Configuration Service. Besides letting me update the configuration of an already running application, it allows me to centralize the configuration data of my application.
Locally, I'm using the Azure App Configuration Service too. The READ-ONLY connection string is stored in my User Secrets.
And that's the point I'm struggling with. Is it considered bad practice to share the READ-ONLY App Configuration connection string on GitHub or somewhere else public, even if I don't store any sensitive data?
The Key Vault Service is specifically designed to safely store sensitive data, so in theory the App Configuration Service doesn't hold any sensitive data.
But I can't find any relevant documentation on that topic, and the fact that every tutorial I can find stores the connection string in the user secrets worries me. How can I share my configuration in a safe way and make my project open-source?
From a security perspective you are violating the principle of least privilege by giving the public read access that they don't need.
This could raise several risks:
You or someone else maintaining the App Configuration might "forget" about the public read access and put sensitive data there
An attacker might exploit a security bug in App Configuration itself and escalate read-only permission to read-write, which would not happen if they didn't have read-only access in the first place
You might think that the probability of that happening is marginal (which is probably the case), but it is there, and in security we always stay on the safe side - that's why we have the principle mentioned above, and it is indeed generally considered bad practice to violate it.
Finally, we always need to choose between usability and security, so in the end you might willfully agree to slightly less security if this makes your life easier and potential trouble from the risks does not scare you.
If you would prefer not to expose the connection string, you can think about:
abstracting configuration fetching in a similar way you did for secrets, so that the production app uses App Configuration while local development uses the InMemory database
replacing connection string with Terraform script so that you or any other developer can spin up and populate a dedicated App Configuration instance for local development purposes
We're operating Eclipse Hono and would like to perform zero-downtime updates on all components in our cluster.
For authentication between the different Eclipse Hono components we use the Hono Auth Service.
There we configured a shared secret (HONO_AUTH_SVC_SIGNING_SHARED_SECRET) to be used for signing the issued tokens.
Consuming services (e.g. Command Router / MongoDB Device Registry) are configured with the same secret.
When changing the shared secret we simultaneously need to restart all instances of the mentioned microservices, which leads to a short downtime.
If we performed a rolling update instead, the old instances would not be able to validate the tokens issued by instances already running with the new shared secret.
Does anyone have the same issue, or know how to perform a zero-downtime update?
One option to solve our problem would be the possibility to configure, next to HONO_AUTH_VALIDATION_SHARED_SECRET, another secret (HONO_AUTH_VALIDATION_SHARED_SECRET_FALLBACK) which would be tried if the primary fails.
That way we could perform a rolling update of all components without downtime.
Using a certificate instead of the shared secret has, as far as I can see, the same restriction.
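Just to illustrate the idea, a rollout with such a fallback could look roughly like this (note that HONO_AUTH_VALIDATION_SHARED_SECRET_FALLBACK does not exist in Hono today, it is only the addition proposed above; the values are placeholders):

```
# Step 1: rolling update of the consumers (Command Router, Device Registry, ...)
HONO_AUTH_VALIDATION_SHARED_SECRET=<new-secret>
HONO_AUTH_VALIDATION_SHARED_SECRET_FALLBACK=<old-secret>

# Step 2: rolling update of the Auth service, which now signs with the new secret
HONO_AUTH_SVC_SIGNING_SHARED_SECRET=<new-secret>

# Step 3: once all tokens signed with the old secret have expired,
# remove the fallback variable from the consumers again
```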
Thanks
Chris
I also do not see any option to rotate the shared secret with the current implementation without incurring downtime.
For this to work, Hono's components would need to support configuration of multiple shared secrets for validation of the tokens, as you correctly pointed out. Maybe you want to open an issue for this with Hono?
I'm trying to build a Node.js server with the Express framework, and I want to store a private key for admin APIs on my server. I'm currently using a .env file to store those values, and in my routes I access them via process.env.ADMIN_KEY.
Question
Is this a secure way to handle private data, or is there a better way?
It is more secure to store your secrets in a .env file than in the source code itself. But you can do one better. Here are the ways I've seen secrets managed, from least to most secure:
Hard-code the secrets in the code.
Pros: None. Don't do this.
Cons: Your developers will see your production secrets as part of their regular work. Your secrets will be checked into source control. Both are security risks. Also, you have to modify the code to use it in different environments, like dev, test, and production.
Put secrets in environment variables, loaded from a .env file (a minimal sketch of this option is shown further below).
Pros: Developers won't see your production secrets. You can use different secrets in dev, test, and production, without having to modify the code.
Cons: Malicious code can read your secrets. The bulk of your application's code is probably open-source libraries. Bad code may creep in without you knowing it.
Put secrets in a dedicated secret manager, like Vault by HashiCorp or Secret Manager by Google Cloud.
Pros: It's harder for malicious code to read your secrets. You get auditing of who accessed secrets when. You can assign fine-grained roles for who updates secrets and who can read them. You can update and version your secrets.
Cons: It's additional technology that you have to learn. It may be an additional piece of software that you need to set up and manage, unless it's included in the cloud platform you're using.
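As a minimal sketch of option 2 with the dotenv package (the route, header name and port are made up for the example):

```javascript
// server.js -- minimal sketch of option 2: secrets loaded from a .env file
require('dotenv').config(); // loads ADMIN_KEY from .env into process.env

const express = require('express');
const app = express();

// check the admin key sent by the client against the secret from the environment
app.use('/admin', (req, res, next) => {
  if (req.get('x-admin-key') !== process.env.ADMIN_KEY) {
    return res.status(401).send('unauthorized');
  }
  next();
});

app.listen(3000);
```

Remember to add .env to your .gitignore so it never ends up in source control.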
So the choice is really between items 2 and 3 above. Which one you pick will depend on how sensitive your secrets are and how much extra work it would be to use a dedicated secret manager. For example, if your project is running on Google Cloud Platform, the Secret Manager is just one API call away. It may be just as easy on the other major cloud platforms, but I don't have first-hand experience with them.
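For option 3, a rough sketch of what that one API call looks like with Google Cloud Secret Manager (project and secret names are placeholders):

```javascript
// sketch of option 3: fetching a secret from Google Cloud Secret Manager
const { SecretManagerServiceClient } = require('@google-cloud/secret-manager');

async function getAdminKey() {
  const client = new SecretManagerServiceClient();
  const [version] = await client.accessSecretVersion({
    name: 'projects/my-project/secrets/ADMIN_KEY/versions/latest',
  });
  return version.payload.data.toString('utf8');
}
```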
The simple answer is YES, .env is used to store keys and secrets. It is not pushed to your repo, e.g. GitHub or Bitbucket or wherever you store your code. That way it is not exposed.
Here are the tutorial links for correct usage:
managing-environment-variables-in-node-js-with-dotenv
how-secure-is-your-environment-file-in-node-js
Secrets stored in environment variables are at risk of being exposed (for non-private Node apps), as libraries you use might, for example, print the environment to the log in case of an error. So it would be safer to store them in a file outside of source control and import it where needed.
https://movingfast.io/articles/environment-variables-considered-harmful/
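A minimal sketch of that idea (the path and file names are just examples):

```javascript
// secrets.js -- load secrets from a file that lives outside the repository
// (the path below is only an example)
const fs = require('fs');

const secrets = JSON.parse(fs.readFileSync('/etc/myapp/secrets.json', 'utf8'));

module.exports = secrets; // e.g. secrets.apiKey, secrets.dbPassword
```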
The answer is yes. An additional security check can be added by using encrypted values. Also, avoid checking your .env file into a public repo.
You can and should store secrets, credentials and other private data in a .env file; it is an environment config section for your project, useful for storing API keys and app credentials. Only invited collaborators are able to see the contents of your .env file.
I am using environment variables to store API secrets and data encryption keys. I wonder whether environment variables are the most secure way to store such data. If a hacker gets into my server, can they access environment variables?
It depends on the platform, and it is probably somewhat opinionated, but in general I think environment variables are a good way to store secrets in many scenarios.
If, for example, your application is vulnerable to SQL injection, local file inclusion or some other application-level vulnerability, any secret stored in a database or in a file could easily be compromised. The same attack is probably not possible if environment variables are used; local file inclusion, for example, can't be used to retrieve environment variables.
Using environment variables also helps with version control issues: it helps you avoid checking secrets into your VCS. It may allow you to manage secrets better across environments, allowing only the relevant people to learn those secrets in production.
However, in case of a full compromise of your server, the attacker can of course also inspect environment variables. But if your server is compromised to that level, you have lost anyway.
Examples of better ways to store secrets could probably be listed, but they are specific to the environment and technology stack you are using. For example, in Azure, Key Vault could sometimes be better; in Amazon, a similar facility is the Key Management Service (KMS); etc.
I have linked together a couple of Docker containers that use each other's API endpoints. These API endpoints are protected by secrets that are generated on container startup. I'm looking for a safe way to share these secrets between those services without doing anything static (e.g. hardcoding). These services are created and linked together using docker-compose, and it is possible for a secret to be overridden using an environment variable. This behavior is not encouraged for production however.
What is in my case the safest way to distribute these secrets?
Things I have considered:
Using a central data container which stores these secrets as a file. The clients can then link to this container and look up the secret in the file.
The huge downside of this approach is that it limits the containers to running on the same node.
Generating a docker-compose file with these random secrets hardcoded into them before deploying the containers.
The downside of this approach would be that it wouldn't be possible to simply use the docker-compose file; instead you would have to rely on a bash script to generate something as mission-critical as these secrets (a rough sketch of such a generation step is shown below). This would also not adhere to my sidenote that the solution should adapt dynamically to secret changes.
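As a rough sketch of what that generation step could look like (using Node here rather than bash; file and placeholder names are made up):

```javascript
// generate-compose.js -- bake a freshly generated secret into a compose file
const crypto = require('crypto');
const fs = require('fs');

// generate a fresh random secret for this deployment
const secret = crypto.randomBytes(32).toString('hex');

// docker-compose.template.yml is assumed to contain the placeholder __API_SECRET__
const template = fs.readFileSync('docker-compose.template.yml', 'utf8');
fs.writeFileSync('docker-compose.yml', template.replace(/__API_SECRET__/g, secret));
```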
Sidenote
Ultimately, I would prefer it if the solution could also adapt dynamically to secret changes. For example, when a container fails, it will restart automatically, thus also generating a new secret.