I have 20+ environment variables, and some of them contain very sensitive information, like DB connection strings, passwords, secrets, AWS keys, etc.,
and that can't appear anywhere in the source code.
I'm using dotenv in development, but what about production? Do I have to set each variable before running Node? Is there a better way to do that?
Update
I'm using an Azure VM at the moment, but I'm moving towards AWS.
Regarding AWS, it depends on your deployment.
Both Elastic Beanstalk and Lambda support environment variables.
For an EC2 deployment you can pass some sensitive information, encrypted, through user data, or set up a mechanism that loads the environment variables from a safe place.
For example, you can create an EC2 instance with a role that has access to the S3 bucket where your environment variables reside.
Your user data script can then use this role to fetch and set the environment variables from S3.
When it comes to encryption, you can also check the Key Management Service (KMS).
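For illustration, here is a minimal Node.js sketch of the S3 approach, not the answerer's exact setup: the bucket name, object key, and entry-point module are hypothetical, and credentials are assumed to come from the instance role.

```js
// Sketch: at startup, pull a dotenv-style file from S3 using the EC2
// instance role's credentials and load it into process.env before the
// rest of the app is loaded. Bucket/key names below are hypothetical.
const AWS = require('aws-sdk'); // credentials resolved from the instance role
const s3 = new AWS.S3();

async function loadEnvFromS3() {
  const obj = await s3
    .getObject({ Bucket: 'my-app-config', Key: 'production.env' })
    .promise();

  // Parse simple KEY=VALUE lines and copy them into process.env
  for (const line of obj.Body.toString('utf8').split('\n')) {
    const match = line.match(/^([^#=]+)=(.*)$/);
    if (match) process.env[match[1].trim()] = match[2].trim();
  }
}

loadEnvFromS3()
  .then(() => require('./server')) // start the app only after env is ready
  .catch((err) => { console.error(err); process.exit(1); });
```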
Related
I am using Google Cloud Secret Manager in a Node.js project. I am moving away from preset environment variables and trying to find out the best practice for storing and reusing secrets.
The 3 main routes I've found to use secrets are:
Fetching all secrets on startup and set them as ENV variables for later use
Fetching all secrets on startup and set as constant variables
Each time a secret is required, fetch it from Cloud Secrets
Google's own best practice documentation mentions 2 conflicting things:
Use ENV variables to set secrets at startup (source)
Don't use ENV variables as they can be accessed in debug endpoints and traversal attacks among other things (source)
My questions are:
Should I store secrets as variables to be re-used or should I fetch them each time?
Does this have an impact on quotas?
The best practice is to load the secret once (at startup, or the first time it is accessed) to optimize performance and avoid API call latency. And yes, the secret access quota is consumed on each access.
If a debugger is attached to the environment, both in-memory variables and environment-variable data can be compromised; the threat is roughly the same. Be sure to secure the environment itself correctly.
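A minimal sketch of the "fetch once and cache" approach described above, using the @google-cloud/secret-manager client; the project ID and secret name are hypothetical:

```js
// The secret is read from Secret Manager on first use and kept in memory,
// so the access quota is only hit once per secret per process.
const { SecretManagerServiceClient } = require('@google-cloud/secret-manager');

const client = new SecretManagerServiceClient();
const cache = new Map();

async function getSecret(name) {
  if (!cache.has(name)) {
    const [version] = await client.accessSecretVersion({
      name: `projects/my-project/secrets/${name}/versions/latest`,
    });
    cache.set(name, version.payload.data.toString('utf8'));
  }
  return cache.get(name);
}

// Usage: const dbPassword = await getSecret('DB_PASSWORD');
```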
I have been given a task at work to create an RDS cluster module using Terraform that will allow consumers to spin up their own clusters/DBs etc. This is all fairly straightforward, but it is the second lot of requirements that has me pulling my hair out. The DBAs want to know how to do the following:
Store and rotate the master password in secrets manager.
Create additional dbs, users etc via automation (nothing is to be clickops'd).
Utilise IAM authentication so that users do not have to be created/auth'd.
I have looked at a number of different ways of doing this and, as I'm fairly new to this, nothing stands out as "the best solution". Would anyone be able to give me a rundown of how they may have approached a similar task? Did you store and rotate the password using a Lambda function, or did you assign the master user to an IAM role? Are you using the TF Postgres provider to create roles, or did you write your own code to automate it?
I really appreciate any guidance.
Thanks heaps
The problem described is rather generic, but in my view you could keep almost everything under direct control of Terraform.
Store and rotate the master password in secrets manager.
Secrets Manager is the way to go. However, password rotation will be an issue. When you enable rotation in the AWS console, AWS magically provisions a Lambda for you. If you don't use the console, the command-line steps are a bit more involved, as they require the use of the AWS Serverless Application Repository (SAR). Sadly, official support for SAR is not yet available in Terraform. Thus you would have to use a local-exec provisioner to run the AWS CLI and create the rotation Lambda through SAR, as in the linked documentation.
Create additional dbs, users etc via automation (nothing is to be clickops'd).
As you already pointed out, the TF PostgreSQL provider would be the first thing to consider.
Utilize IAM authentication so that users do not have to be created/auth'd.
This can be enabled using iam_database_authentication_enabled. But you should know that there are some limitations when using IAM auth. Most notably, only PostgreSQL versions 9.6.9 and 10.4 or higher are supported, and your number of connections per second may suffer.
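To make concrete what IAM auth means on the client side, here is a rough Node.js sketch using the aws-sdk RDS signer and pg; the cluster endpoint, region, and user are hypothetical, and the DB user still has to exist with the rds_iam role granted:

```js
// Sketch: connect to an RDS PostgreSQL cluster with a short-lived IAM auth
// token instead of a stored password. Endpoint/user/region are placeholders.
const AWS = require('aws-sdk');
const { Client } = require('pg');

const host = 'my-cluster.cluster-xxxx.ap-southeast-2.rds.amazonaws.com';

const signer = new AWS.RDS.Signer({
  region: 'ap-southeast-2',
  hostname: host,
  port: 5432,
  username: 'app_user', // DB user that has been GRANTed rds_iam
});

signer.getAuthToken({}, async (err, token) => {
  if (err) throw err;
  const client = new Client({
    host,
    port: 5432,
    user: 'app_user',
    password: token,                    // the IAM token acts as the password
    ssl: { rejectUnauthorized: false }, // use the RDS CA bundle in production
  });
  await client.connect();
});
```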
A follow-up on point 1 for anyone in the future who wants to do a similar thing.
I ended up using a cloudformation_stack Terraform resource to create the secret attachment and secret rotation, passing them parameter values from my Terraform resources.
It works perfectly and is easily switched out when/if Terraform introduces these resources.
I'm trying to build a Node.js server with the Express framework, and I want to store a private key for admin APIs on my server. I'm currently using a .env file to store those values, and in my routes I read them via calls like process.env.ADMIN_KEY.
Question
Is this a secure way to handle private data, or is there a better way than this?
It is more secure to store your secrets in a .env file than in the source code itself. But you can do one better. Here are the ways I've seen secrets managed, from least to most secure:
Hard-code the secrets in the code.
Pros: None. Don't do this.
Cons: Your developers will see your production secrets as part of their regular work. Your secrets will be checked into source control. Both are security risks. Also, you have to modify the code to use it in different environments, like dev, test, and production.
Put secrets in environment variables, loaded from a .env file.
Pros: Developers won't see your production secrets. You can use different secrets in dev, test, and production, without having to modify the code.
Cons: Malicious code can read your secrets. The bulk of your application's code is probably open-source libraries. Bad code may creep in without you knowing it.
Put secrets in a dedicated secret manager, like Vault by HashiCorp or Secret Manager by Google Cloud.
Pros: It's harder for malicious code to read your secrets. You get auditing of who accessed secrets when. You can assign fine-grained roles for who updates secrets and who can read them. You can update and version your secrets.
Cons: It's additional technology that you have to learn. It may be an additional piece of software that you need to set up and manage, unless it's included in the cloud platform you're using.
So the choice is really between items 2 and 3 above. Which one you pick will depend on how sensitive your secrets are and how much extra work it would be to use a dedicated secret manager. For example, if your project is running on Google Cloud Platform, the Secret Manager is just one API call away. It may be just as easy on the other major cloud platforms, but I don't have first-hand experience with them.
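If you stay with option 2, the setup is small; a minimal sketch (ADMIN_KEY comes from your question, DATABASE_URL is just an example name):

```js
// Load .env into process.env as early as possible, then read values where
// needed. Keep .env itself out of version control (add it to .gitignore).
require('dotenv').config(); // reads .env from the project root

const adminKey = process.env.ADMIN_KEY;
const dbUrl = process.env.DATABASE_URL; // example variable name

if (!adminKey) {
  throw new Error('ADMIN_KEY is not set'); // fail fast on missing config
}
```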
The simple answer is YES: .env is used to store keys and secrets. It is not pushed to your repo, i.e. GitHub or Bitbucket or wherever you store your code. In that way it is not exposed.
Here are the tutorial links for correct usage:
managing-environment-variables-in-node-js-with-dotenv
how-secure-is-your-environment-file-in-node-js
Secrets stored in environment variables are at risk of being exposed (for non-private Node apps), since, for example, libraries you use might print the environment to the log in case of an error. So it would be safer to store them in a file outside of source control and import it where needed.
https://movingfast.io/articles/environment-variables-considered-harmful/
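A small sketch of that "file outside source control" idea; the path and key names below are hypothetical:

```js
// Secrets live in a JSON file outside the repository and are read once at
// startup, instead of being placed in process.env.
const fs = require('fs');

const secrets = JSON.parse(
  fs.readFileSync('/etc/myapp/secrets.json', 'utf8')
);

module.exports = secrets; // e.g. secrets.dbPassword, secrets.apiKey
```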
Yes, it is. An additional layer of security can be added by using encrypted values. Also, avoid checking your .env file into a public repo.
You can and should store secrets, credentials or private data securely inside a .env file: it is a secure environment config section for your projects, useful for storing API keys and app credentials. Only invited collaborators are able to see the contents of your .env file.
We have a large system built in Azure apps. It is made up of an App Service for our API and several Functions Apps for backend processing.
What's the best way to allow these apps to share configuration?
We use ARM templates currently to set up the environment variables for each app, which is fine for deploy-time, but there's nothing to keep the config in sync between the apps.
A use case might be a feature flag that controls whether a sub-system is operational. We might want this flag to be used in the API and a Functions App. At present we can manually go in and set the variable in each of the apps, but it would be easier to manage if we only had to do it in one location.
Ideally, any update to the config would be detected by Azure and trigger a restart of the service, as currently happens with the native implementation.
Is there a good, off-the-shelf, way to do this? Or will I be rolling my own with a table in a database and a lightweight function?
One way would be to use the new App Configuration service: https://learn.microsoft.com/en-us/azure/azure-app-configuration/overview.
It is meant for sharing configuration settings across components.
Note it is not meant for secrets, that's what Key Vault is for.
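As a rough sketch of what consuming App Configuration from Node.js could look like (both the API app and the Functions apps could share this code); the connection-string variable and key name are hypothetical:

```js
// Read a shared setting (e.g. a feature flag) from Azure App Configuration.
const { AppConfigurationClient } = require('@azure/app-configuration');

const client = new AppConfigurationClient(
  process.env.APP_CONFIG_CONNECTION_STRING
);

async function isSubsystemEnabled() {
  const setting = await client.getConfigurationSetting({
    key: 'SubsystemX:Enabled',
  });
  return setting.value === 'true';
}
```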
There is a guidance/design pattern for this from Microsoft; it can be found here.
Best practice in architecture: you can use the external configuration store pattern and use a Redis cache to share the configuration between multiple applications, as described here: https://learn.microsoft.com/en-us/azure/architecture/patterns/external-configuration-store
The approach is that you get this data from App Settings for each environment (this can be automated in the CI/CD pipeline). On first connection you store the data in the Redis cache.
For sensitive data: use Key Vault to store the secrets/keys/certificates.
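A minimal Node.js sketch of that pattern, assuming a hypothetical REDIS_URL setting and config key names; not a production-ready implementation:

```js
// External configuration store pattern: try the shared Redis cache first,
// fall back to this app's own App Settings (environment variables), and
// write the value back to Redis so other apps can pick it up.
const { createClient } = require('redis');

const redis = createClient({ url: process.env.REDIS_URL });
const ready = redis.connect(); // connect once per process

async function getConfigValue(key) {
  await ready;
  let value = await redis.get(`config:${key}`);  // shared store first
  if (value === null) {
    value = process.env[key];                    // fall back to App Settings
    if (value !== undefined) await redis.set(`config:${key}`, value);
  }
  return value;
}
```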
I am using environment variables to store API secrets and data encryption keys. I wonder whether environment variables are the most secure way to store such data. If a hacker gets into my server, can they access environment variables?
It depends on the platform, and it is probably somewhat opinionated, but in general I think environment variables are a good way to store secrets in many scenarios.
If for example your application is vulnerable to SQL injection, local file inclusion or some other application level vulnerability, any secret stored in a database or in a file could be easily compromised. The same attack is probably not possible if environment variables are used, local file inclusion for example can't be used to retrieve environment variables.
Also using environment variables helps with version control issues, it helps to avoid checking secrets into your VCS. It may allow you to manage secrets better across environments, only allowing relevant people to be able to learn those secrets in production.
However, in case of a full compromise of your server, the attacker can of course also inspect environment variables. But if your server is compromised to that level, you have lost anyway.
Examples of better ways to store secrets could probably be listed, but they are specific to the environment and technology stack you are using. For example, in Azure, Key Vault could sometimes be better; in Amazon, a similar facility is the Key Management Service (KMS), etc.
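For the Azure case, a short Node.js sketch of reading a secret from Key Vault; the vault URL and secret name are hypothetical, and the app is assumed to authenticate via a managed identity or other DefaultAzureCredential source:

```js
// Fetch a secret from Azure Key Vault instead of keeping it in an
// environment variable; no credential needs to be stored on the server.
const { SecretClient } = require('@azure/keyvault-secrets');
const { DefaultAzureCredential } = require('@azure/identity');

const client = new SecretClient(
  'https://my-vault.vault.azure.net',
  new DefaultAzureCredential()
);

async function getEncryptionKey() {
  const secret = await client.getSecret('data-encryption-key');
  return secret.value;
}
```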