The Azure Storage sample code for key rotation demonstrates using multiple uniquely named Secrets. However, within KeyVault it is now possible to create multiple versions of a single Secret. I can see no reason why key rotation cannot be achieved using Versions, and on the face of it they seem easier to manage.
Can anyone offer any guidance on why you'd choose multiple Secrets over versions of a single Secret to support key rotation? And perhaps some general guidance on what Versions are intended for, if not this?
Thanks!
Feel free to use a single secret and multiple versions for key rotation, depending on which SDK(s) you are using.
For our Node.js-based apps, we have configuration that points at the full KeyVault secret URIs.
Some configuration entries point only at the short URL for a secret (no version), so the app gets "the latest version".
Other secrets that need rotation require the full versioned URL. We use Azure Table encryption, for example, so each encrypted row in the table uses a key wrapping key from KeyVault. The key wrapping key reference is a full versioned KeyVault URL, since you need that specific version to decrypt the row's data. Over time, different rows will point at different key versions.
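To make the two reference styles concrete, here is a minimal sketch (the vault name and version string are made up) that distinguishes an unversioned secret URI from a versioned one:

```python
from urllib.parse import urlparse

def parse_secret_uri(uri):
    """Split a KeyVault secret URI into (name, version).

    An unversioned URI like https://myvault.vault.azure.net/secrets/db-key
    yields ('db-key', None), meaning "latest version"; a versioned URI has
    an extra path segment pinning one specific version.
    """
    parts = urlparse(uri).path.strip("/").split("/")
    if not parts or parts[0] != "secrets" or len(parts) < 2:
        raise ValueError("not a secret URI")
    name = parts[1]
    version = parts[2] if len(parts) > 2 else None
    return name, version

# Unversioned: the app resolves to the latest version at read time.
print(parse_secret_uri("https://myvault.vault.azure.net/secrets/db-key"))
# Versioned: pins the exact secret version used to wrap a given row.
print(parse_secret_uri(
    "https://myvault.vault.azure.net/secrets/db-key/0123456789abcdef"))
```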
For general scenarios, i.e. rotating from the current version of a key to a new one, just build your configuration system and connection logic to support two keys at a time: if one fails, use the other. Alternatively, consider time-based logic.
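The two-keys-at-a-time fallback can be sketched as follows (the `connect` callable and key names are hypothetical stand-ins for your real connection logic):

```python
def connect_with_fallback(connect, keys):
    """Try each configured key in order and return the first connection
    that succeeds. `connect` is any callable that raises on auth failure."""
    last_error = None
    for key in keys:
        try:
            return connect(key)
        except Exception as e:  # in practice, catch your SDK's auth error
            last_error = e
    raise last_error

# Hypothetical usage: "old-key" has been rotated out, "new-key" works.
def fake_connect(key):
    if key != "new-key":
        raise PermissionError("key rejected")
    return f"connection using {key}"

print(connect_with_fallback(fake_connect, ["old-key", "new-key"]))
```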
Any approach is fine, but consider how nice and "clean" your story will be if you use versions over time instead of proliferating additional secret names.
What is the best practice with Azure Key Vault: is it good practice to fetch keys from the vault within the application every time we use them, or is it better to set them in OS environment variables?
It depends on the key. You should not be extracting the master key, only the associated data keys used to encrypt and decrypt.
There is no hard-and-fast rule here, but you should consider how you rotate keys. If the keys are rotated often, you should consider pulling from Key Vault every time.
Other times it makes more sense to pull once and cache locally, for example when network latency is an issue and it's more cost-effective to pull once and then cache. In this scenario, you will need some mechanism in place to force a new pull if/when the key is rotated.
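A minimal sketch of that pull-once-and-cache pattern (the `fetch` callable stands in for a real Key Vault call; the TTL and forced-refresh mechanics are illustrative assumptions):

```python
import time

class CachedSecret:
    """Cache a secret locally, re-fetching after `ttl` seconds or when
    refresh() is forced (e.g. after a known rotation)."""
    def __init__(self, fetch, ttl=300):
        self._fetch = fetch
        self._ttl = ttl
        self._value = None
        self._fetched_at = 0.0

    def get(self):
        stale = time.monotonic() - self._fetched_at > self._ttl
        if self._value is None or stale:
            self.refresh()
        return self._value

    def refresh(self):
        """Force a new pull, e.g. when you learn the key was rotated."""
        self._value = self._fetch()
        self._fetched_at = time.monotonic()

calls = []
def fetch_from_vault():   # stand-in for the real Key Vault round trip
    calls.append("call")
    return f"secret-v{len(calls)}"

secret = CachedSecret(fetch_from_vault, ttl=300)
print(secret.get())   # first call hits the "vault"
print(secret.get())   # served from cache, no second fetch
secret.refresh()      # force re-pull after a rotation
print(secret.get())
```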
My company is currently moving all our products to Azure, and as part of the migration we are using Azure Key Vault for storing secret keys. We have around 10 to 15 products, and for each product we use review, integration, staging and production environments.
I don't see an option to scope secrets to different environments and products, as in Vault Enterprise, which we currently use. What is the best approach to organizing secret keys for different products and their corresponding environments in Azure Key Vault, so that the secrets are easy to manage?
Note: we have around 5 to 10 keys per environment.
We had good experiences with using dedicated KeyVaults for each environment. The main advantage of using a "KeyVault per Stage" approach is that you can have the same key name in every KeyVault. This really saves you from a lot of complexity when consuming the values later on. Also, if you decide to create a new environment or drop an existing one, you don't have to worry about affecting other environments.
We usually extend this to create a dedicated KeyVault per product as well. That way you have "only what you need" and it is quite transparent. If you have a lot of shared values, you could also create "common" KeyVaults instead.
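As an illustrative sketch of such a naming convention (the `kv-{product}-{stage}` pattern is an assumption, not a standard), a small helper can enforce Key Vault's name constraints (3-24 characters of letters, digits and hyphens) while keeping the secret name identical across stages:

```python
import re

def vault_name(product, stage):
    """Derive a per-product, per-stage vault name (the pattern is an
    assumption). Vault names are limited to 3-24 characters of letters,
    digits and hyphens, so validate before provisioning."""
    name = f"kv-{product}-{stage}".lower()
    if not re.fullmatch(r"[a-z][a-z0-9-]{1,22}[a-z0-9]", name):
        raise ValueError(f"invalid vault name: {name}")
    return name

# The same secret name ("db-password") then lives in every stage's vault,
# so only the vault URI changes between environments:
for stage in ["review", "integration", "staging", "production"]:
    print(f"https://{vault_name('billing', stage)}"
          ".vault.azure.net/secrets/db-password")
```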
If you use Azure Pipelines, it can be very nice to link the KeyVaults to a stage in the pipeline. This also works with YAML pipelines. Again, having the same secret name in each environment helps a lot in this case, since each environment can be identical.
Sidenote: With Azure Pipelines, you could also store some secrets as "secret variables". Probably not enough for your case, but I wanted to make sure you know.
Use variables in the pipeline that hold the values from the Key Vault, then set the variables per product/environment.
For more info:
https://zimmergren.net/using-azure-key-vault-secrets-from-azure-devops-pipeline/
https://stefanstranger.github.io/2019/06/26/PassingVariablesfromStagetoStage/
I am exploring Azure functionality and am wondering whether Azure Table Storage can be an easy way to hold application configuration for an entire environment. It would be easy to see and change (adding list values, etc.). Can someone please advise whether this is a good idea? I would expect this table to hold no more than 2000 rows if all our applications were moved over to Azure.
Partition Key --> Project Name + Component Name (Azure Function/Logic App)
Row Key --> Parameter Key
Value column --> Parameter Value
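The key scheme above can be sketched as a small helper (the "|" separator between project and component is an assumption; pick any delimiter that cannot appear in the names):

```python
def entity_keys(project, component, parameter):
    """Build the PartitionKey/RowKey pair per the scheme above:
    PartitionKey = project + component, RowKey = parameter name."""
    return {
        "PartitionKey": f"{project}|{component}",
        "RowKey": parameter,
    }

# Hypothetical configuration entity for an Azure Function:
entity = entity_keys("Billing", "InvoiceFunction", "MaxRetries")
entity["Value"] = "5"
print(entity)
```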
For securing passwords/keys, I can use Azure Key Vault.
There are different ways of storing application configurations:
Key Vault (as you stated) for sensitive information, e.g. tokens, keys, connection strings. It can be standardized and extended to any type of resource, for ease of storing and retrieving these values.
Application Settings, found under each App Service. This approach assumes you have an App Service for each of your apps.
Release pipeline variables, such as in Azure DevOps Services (AzDo). AzDo has variables that can be global to the release pipeline or specific to each stage.
I am exploring Azure functionality and am wondering if Azure Table Storage can be an easy way for holding application configuration for an entire environment. It would be easy to see and change (adding list values etc.). Can someone please guide me on whether this is a good idea?
Considering Azure Tables is a key/value pair store, it is certainly a good idea to store application configuration values there. The only thing I would recommend is incorporating a caching layer between your application and Table Storage, so that you don't end up making a call to Table Storage every time you need to fetch a setting.
I would expect this table to hold no more than 2000 rows if all our applications were moved over to Azure.
Considering the number of entities will be fewer than 2000, querying them should not be a problem, and your design looks good. For best performance, ensure that you include both PartitionKey and RowKey when querying; at the very least, include PartitionKey in your query.
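As an illustration of why both keys matter: supplying PartitionKey and RowKey together lets the service locate a single entity directly. The OData filter such a key-based query sends can be sketched as follows (the entity values are hypothetical):

```python
def point_query_filter(partition_key, row_key):
    """Build the OData filter for a point query. Single quotes inside
    values are escaped by doubling, per the Table service query syntax."""
    def quote(value):
        return value.replace("'", "''")
    return (f"PartitionKey eq '{quote(partition_key)}' "
            f"and RowKey eq '{quote(row_key)}'")

print(point_query_filter("Billing|InvoiceFunction", "MaxRetries"))
```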
Please see this for more details: https://learn.microsoft.com/en-us/azure/cosmos-db/table-storage-design-guide.
For securing passwords/keys, I can use Azure Key Vault.
That's the way to go for storing sensitive data in Azure.
Have you looked at the App Configuration service?
There are client libraries in .NET, Java, TypeScript and Python to interact with the service that you can leverage in your application.
Is there a limit on number of Keys, Certificates etc in a Key Vault?
There is no documented limit on the number of resources in a Key Vault, only on the number of operations per second.
However, the more resources you have in a vault, the longer it will take to enumerate them all. If you have no need to enumerate them, this may not affect performance, but that is also not documented.
But if you're using a configuration library (common, for example, with ASP.NET applications), it will often fetch all secrets to find the ones it needs. With a limit on operations, this can either fail (too many calls) or take a long time if a retry policy is used. Retries are built in and enabled by default in the Azure.Security.KeyVault.* packages, which we recommend developers use instead of Microsoft.Azure.KeyVault, which will only get critical fixes going forward.
A common workaround is to pack more secrets into something like a JSON blob, though make sure you want anyone with access to that blob to have access to all the secrets contained in it.
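As a sketch of that workaround (the secret names and values here are placeholders), related settings can be bundled into one JSON value that is stored as a single secret, so a configuration library makes one Key Vault call instead of many:

```python
import json

# Pack several related settings into one secret value. Note the trade-off:
# anyone who can read this one secret can read every value inside it.
bundle = json.dumps({
    "db-connection": "Server=example;Password=placeholder",
    "api-token": "abc123",
})

# Store `bundle` as the value of a single Key Vault secret; at startup,
# fetch it once and parse it back into individual settings:
settings = json.loads(bundle)
print(settings["api-token"])
```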
The subject says it all...
Why there are two keys for Azure DocumentDB (primary and secondary)?
This is so that you can expire a key without any system downtime. Say you want to replace your primary key. The procedure is:
1. Configure your service to use the secondary key; if you use the service configuration, you can do this without downtime.
2. Regenerate the primary key.
3. (Optional) Reconfigure your service to use the new primary key.
If there was only one key at a time, your service would be down while you did the key replacement.
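The swap can be sketched as follows (the class and the `regenerate` callable are illustrative stand-ins, not part of any DocumentDB SDK):

```python
class ServiceConfig:
    """Minimal sketch of the primary/secondary rotation procedure.
    `regenerate` stands in for the portal/API call that issues a new key."""
    def __init__(self, primary, secondary):
        self.keys = {"primary": primary, "secondary": secondary}
        self.active = "primary"

    def rotate_primary(self, regenerate):
        # 1. Point the service at the secondary key (no downtime).
        self.active = "secondary"
        # 2. Regenerate the primary key while nothing is using it.
        self.keys["primary"] = regenerate()
        # 3. (Optional) switch back to the fresh primary key.
        self.active = "primary"

    def current_key(self):
        return self.keys[self.active]

cfg = ServiceConfig("old-primary", "secondary")
cfg.rotate_primary(lambda: "new-primary")
print(cfg.current_key())
```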
Good practice is to replace your keys on a regular basis (e.g. every 6 months, or whatever is appropriate based on the sensitivity of your data). You should also replace keys when anyone who has access to them leaves your business or team. Finally, you should obviously replace them if you think they have been compromised in some way, e.g. accidentally written to a log or posted to a public GitHub repo; it happens...
https://securosis.com/blog/my-500-cloud-security-screwup
Both the primary and secondary keys can be regenerated in the Azure portal (note: at the time of writing this is the preview portal). Select your DocumentDB, then the Keys pane. There are two buttons at the top of the pane: