Best to download secrets during build, or assign global variables during invocation for cloud functions? - google-secret-manager

We are currently using GCS for our secrets but want to move to Google Secret Manager. We include gsutil copies in our build steps, but I like the idea of avoiding that step in local development and instead getting the secrets from Secret Manager at invocation.
I'm concerned about cold start time increases. Does anyone do it this way, and if so how impactful is it to cold start times?
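Roughly what I have in mind is the pattern below; this is a minimal sketch, not what we run today. It uses the @google-cloud/secret-manager client, and the project name, secret name, and exported handler are placeholders. Only a cold start would pay the Secret Manager round trip; warm invocations reuse the cached global.

```js
const {SecretManagerServiceClient} = require('@google-cloud/secret-manager');

const client = new SecretManagerServiceClient();
let cachedApiKey; // survives for the lifetime of the function instance

async function getApiKey() {
  // Only the first invocation on a cold instance pays the Secret Manager
  // round trip; warm invocations return the cached value immediately.
  if (!cachedApiKey) {
    const [version] = await client.accessSecretVersion({
      name: 'projects/my-project/secrets/api-key/versions/latest',
    });
    cachedApiKey = version.payload.data.toString('utf8');
  }
  return cachedApiKey;
}

exports.handler = async (req, res) => {
  const apiKey = await getApiKey();
  res.send('secret loaded'); // use apiKey for whatever the function needs
};
```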

Related

Hashicorp Vault - Send command or api call on variable edit

Let's suppose I make a change to one variable in a Key-Value engine in my Hashicorp Vault.
Once I apply the change, it will create a new version of my variable, as expected.
Can I somehow have Hashicorp Vault itself send an API call or at least run a command? What I want to achieve is that when I change a variable inside Hashicorp Vault, it triggers a CI/CD build in GitLab CI.
No, Hashicorp Vault itself does not have any event/callback or similar mechanism for when secrets are updated.
However, depending on what storage backend you are using, you may be able to use features of your backend to support this. To give a few examples:
If you are using Consul as the backend for Vault, you can use watches in Consul to monitor for key/value pair changes.
If you are using DynamoDB, you can configure streams to trigger other workflows, like a lambda function to run your pipeline.
If you are using S3 as the storage backend, you can leverage S3 event notifications.
Other backends will have their own mechanisms for this. However, not all backends will support this directly.
If you don't have a backend that supports this, your next best bet would be to periodically poll and check for updated values. For example, you might use scheduled pipelines to do this.
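As a rough illustration of the polling fallback, here is a sketch in Node.js (18+ for the built-in fetch) that checks the KV v2 metadata endpoint for the secret's current_version and fires the GitLab pipeline trigger API when it changes. The Vault address, secret path, project ID, branch, and tokens are all placeholders, and this assumes the KV engine is mounted at secret/ in version 2 mode.

```js
const VAULT_ADDR = 'https://vault.example.com';          // placeholder
const SECRET_PATH = 'v1/secret/metadata/my-app/config';  // KV v2 metadata endpoint
const GITLAB_TRIGGER_URL =
  'https://gitlab.example.com/api/v4/projects/42/trigger/pipeline'; // placeholder

let lastSeenVersion = null;

async function checkSecretVersion() {
  // Read only the metadata; current_version changes whenever the secret is updated.
  const res = await fetch(`${VAULT_ADDR}/${SECRET_PATH}`, {
    headers: {'X-Vault-Token': process.env.VAULT_TOKEN},
  });
  const {data} = await res.json();

  if (lastSeenVersion !== null && data.current_version !== lastSeenVersion) {
    // The secret changed since the last poll: kick off the GitLab pipeline.
    await fetch(GITLAB_TRIGGER_URL, {
      method: 'POST',
      body: new URLSearchParams({
        token: process.env.GITLAB_TRIGGER_TOKEN,
        ref: 'main',
      }),
    });
  }
  lastSeenVersion = data.current_version;
}

// Poll once a minute; a scheduled pipeline or cron job works just as well.
setInterval(() => checkSecretVersion().catch(console.error), 60_000);
```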

Google Cloud Secrets - Reusing a secret

I am using Google Cloud Secrets in a NodeJS Project. I am moving away from using preset environment variables and trying to find out the best practice to store and reuse secrets.
The 3 main routes I've found to use secrets are:
Fetching all secrets on startup and setting them as ENV variables for later use
Fetching all secrets on startup and setting them as constant variables
Fetching each secret from Cloud Secrets every time it is required
Google's own best practice documentation mentions 2 conflicting things:
Use ENV variables to set secrets at startup (source)
Don't use ENV variables as they can be accessed in debug endpoints and traversal attacks among other things (source)
My questions are:
Should I store secrets as variables to be re-used or should I fetch them each time?
Does this have an impact on quotas?
The best practice is to load the secret once (at startup, or the first time it is accessed) to optimize performance and avoid repeated API-call latency. And yes, each access counts against the secret access quota.
If a debugging tool is attached to the environment, both plain variables and environment variables can be compromised; the threat is roughly the same either way. Be sure to secure the environment correctly.
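For illustration, here is a small Node.js sketch of option 2 from the list above: fetch everything once at startup and keep the values in plain variables rather than in process.env. The project and secret names are placeholders. The quota is only touched at boot, one accessSecretVersion call per secret.

```js
const {SecretManagerServiceClient} = require('@google-cloud/secret-manager');

const client = new SecretManagerServiceClient();
const config = {}; // plain module-level object, deliberately not process.env

async function loadSecrets() {
  // One accessSecretVersion call per secret, once, at boot.
  for (const name of ['db-password', 'jwt-signing-key']) {
    const [version] = await client.accessSecretVersion({
      name: `projects/my-project/secrets/${name}/versions/latest`,
    });
    config[name] = version.payload.data.toString('utf8');
  }
}

module.exports = {config, loadSecrets};
// Call loadSecrets() once before the server starts listening,
// then read config['db-password'] etc. wherever a secret is needed.
```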

Is it secure to store private values in a .env file?

I'm trying to build a Node.js server with the Express framework, and I want to store a private key for admin APIs on my server. I'm currently using a .env file to store those values, and in my routes I use them by reading process.env.ADMIN_KEY.
Question
Is this a secure way to handle private data, or is there a better way?
It is more secure to store your secrets in a .env file than in the source code itself. But you can do one better. Here are the ways I've seen secrets managed, from least to most secure:
Hard-code the secrets in the code.
Pros: None. Don't do this.
Cons: Your developers will see your production secrets as part of their regular work, and your secrets will be checked into source control. Both are security risks. You also have to modify the code to use different values in different environments, like dev, test, and production.
Put secrets in environment variables, loaded from a .env file.
Pros: Developers won't see your production secrets. You can use different secrets in dev, test, and production, without having to modify the code.
Cons: Malicious code can read your secrets. The bulk of your application's code is probably open-source libraries. Bad code may creep in without you knowing it.
Put secrets in a dedicated secret manager, like Vault by HashiCorp or Secret Manager by Google Cloud.
Pros: It's harder for malicious code to read your secrets. You get auditing of who accessed secrets when. You can assign fine-grained roles for who updates secrets and who can read them. You can update and version your secrets.
Cons: It's additional technology that you have to learn. It may be an additional piece of software that you need to set up and manage, unless it's included in the cloud platform you're using.
So the choice is really between items 2 and 3 above. Which one you pick will depend on how sensitive your secrets are and how much extra work it would be to use a dedicated secret manager. For example, if your project is running on Google Cloud Platform, the Secret Manager is just one API call away. It may be just as easy on the other major cloud platforms, but I don't have first-hand experience with them.
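As a sketch of how little glue that takes, here is one way to combine options 2 and 3 in a Node.js project: dotenv for local development, Secret Manager when running in production on Google Cloud. The secret name and the NODE_ENV check are assumptions, not a prescribed setup.

```js
const {SecretManagerServiceClient} = require('@google-cloud/secret-manager');

async function getAdminKey() {
  if (process.env.NODE_ENV !== 'production') {
    require('dotenv').config();      // local development: read .env
    return process.env.ADMIN_KEY;
  }
  // Production: one call to Secret Manager, no .env file deployed.
  const client = new SecretManagerServiceClient();
  const [version] = await client.accessSecretVersion({
    name: 'projects/my-project/secrets/ADMIN_KEY/versions/latest',
  });
  return version.payload.data.toString('utf8');
}
```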
The simple answer is yes: a .env file is used to store keys and secrets. It is not pushed to your repo (i.e. GitHub, Bitbucket, or wherever you store your code), so it is not exposed that way.
Here are the tutorial links for correct usage:
managing-environment-variables-in-node-js-with-dotenv
how-secure-is-your-environment-file-in-node-js
Secrets stored in environment variables are at risk of being exposed (for non-private Node apps) because, for example, a library you use might print the environment to the log when an error occurs. It is therefore safer to store them in a file kept outside of source control and import it where needed.
https://movingfast.io/articles/environment-variables-considered-harmful/
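A minimal sketch of that suggestion in Node.js, assuming the secrets live in a JSON file whose path here is just an example: the values never enter process.env, so a library that dumps the environment on error cannot leak them.

```js
const fs = require('fs');

// Path outside the repository; just an example location.
const secretsPath = process.env.SECRETS_FILE || '/etc/myapp/secrets.json';

// The values stay in this object and never enter process.env, so code that
// logs or dumps the environment cannot leak them.
const secrets = JSON.parse(fs.readFileSync(secretsPath, 'utf8'));

module.exports = secrets; // e.g. secrets.adminKey in your routes
```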
Yes, it is. An additional layer of security can be added by using encrypted values. Also, avoid checking your .env file into a public repo.
You can and should store secrets, credentials, or private data in a .env file: it is a secure environment config section for your project, useful for storing API keys and app credentials. Only invited collaborators are able to see the contents of your .env file.

How to share dynamically generated secrets between Docker containers

I have linked together a couple of Docker containers that use each other's API endpoints. These API endpoints are protected by a secret that is generated on container startup. I'm looking for a safe way to share these secrets between those services without doing anything static (e.g. hardcoding). These services are created and linked together using docker-compose, and it is possible for the secret to be overridden using an environment variable; however, that behavior is discouraged for production.
What is in my case the safest way to distribute these secrets?
Things I have considered:
Using a central data container which stores these secrets in a file. The clients can then link to this container and look up the secret in the file.
The huge downside of this approach is that it limits the containers to running on the same node.
Generating a docker-compose file with these random secrets hardcoded into it before deploying the containers.
The downside of this approach is that you could no longer simply use the docker-compose file on its own; you would be relying on a bash script to generate something as mission-critical as these secrets. It would also not satisfy my sidenote that the solution should adapt dynamically to secret changes.
Sidenote
Ultimately, I would prefer it if the solution could also adapt dynamically to secret changes. For example, when a container fails, it will restart automatically, thus also generating a new secret.
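For concreteness, here is a rough Node.js sketch of the shared-file variant of the first idea (which shares its same-node limitation): the API container writes its freshly generated secret to a path on a volume shared with its clients, and clients re-read that path before each call so a secret regenerated after a restart is picked up automatically. The path is a placeholder.

```js
const crypto = require('crypto');
const fs = require('fs');

const SECRET_PATH = '/run/shared/api_secret'; // placeholder path on a shared volume

// In the API container, at startup: generate the secret and publish it.
function publishSecret() {
  const secret = crypto.randomBytes(32).toString('hex');
  fs.writeFileSync(SECRET_PATH, secret, {mode: 0o600});
  return secret;
}

// In a client container: re-read before each call, so a secret regenerated
// after a restart is picked up without any static configuration.
function readSecret() {
  return fs.readFileSync(SECRET_PATH, 'utf8').trim();
}

module.exports = {publishSecret, readSecret};
```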

Prevent azure staging environment from accessing queue messages

After swapping the latest Azure deployment from staging to production, I need to prevent the staging worker role from accessing the queue messages. I can do this by detecting whether the environment is staging or production in code, but can anyone tell me if there is any other way to prevent the staging environment from accessing and processing queue messages?
Thanks for the help!
Mahesh
There is nothing in the platform that would do this. This is an app/code thing. If the app has the credentials (for example, account name and key) to access the queue, then it is doing what it was coded to do.
Have your staging environment use the primary storage key and your production environment use the secondary storage key. When you do the VIP swap you can regenerate the storage key that your now-staging environment is using which will result in it no longer having credentials to access the queue.
Notice that this does introduce a timing issue. If you do the swap first and then change the storage keys then you run the risk of the worker roles picking up messages in between the two operations. If you change the keys first and then do the swap then there will be a second or two where your production service is no longer pulling messages from the queue. It will depend on what your service does as to whether or not this timing issue is acceptable to you.
You can actually detect which Deployment Slot that current instance is running in. I detailed how to do this here: https://stackoverflow.com/a/18138700/1424115
It's really not as easy as it should be, but it's definitely possible.
If this is a question of protecting your DEV/TEST environment from your PRODUCTION environment, you may want to consider separate Azure subscriptions (one for each environment). This guide from Patterns and Practices talks about the advantages of this approach.
http://msdn.microsoft.com/en-us/library/ff803371.aspx#sec29
kwill's answer of regenerating keys is a good one, but I ended up doing this:
Optional - stop the production worker role from listening to the queue by changing an appropriate configuration key which tells it to ignore messages, then rebooting the VM (either through the management portal or by killing the WaHostBootstrapper.exe)
Publish to the staged environment (this will start accessing the queue, which is fine in our case)
Swap staged <-> production via Azure
Publish again, this time to the new staged environment (old live)
You now have both production and staging worker roles running the latest version and servicing the queue(s). This is a good thing for us, as it gives us twice the capacity, and since staging is running anyway we may as well use it!
It's important that you only use staging as a method of publishing to live (as it was intended) - create a whole new environment for testing/QA purposes, which has its own storage account and message queues.
