Maximum number of secrets in Azure Key Vault? - azure

Is there a limit on number of Keys, Certificates etc in a Key Vault?

There is no documented limit on the number of objects (keys, secrets, certificates) in a Key Vault - only on the number of operations per second.
However, the more objects you have in a vault, the longer it will take to enumerate them all. If you never need to enumerate them, this may not affect your performance, but that is also not documented.
But if you're using a configuration library - common, for example, with ASP.NET applications - it often fetches all secrets to find the ones it needs. With a limit on the number of operations, this can either fail (too many calls) or take a long while if a retry policy is used. Such a policy is built in and enabled by default in the Azure.Security.KeyVault.* packages, which we recommend developers use instead of Microsoft.Azure.KeyVault, since the latter will only get critical fixes going forward.
A common workaround is to pack multiple secrets into something like a JSON blob stored as a single secret, though make sure you want anyone with access to that blob to have access to all the secrets contained therein.
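As an illustration of that workaround, here is a hedged sketch using the @azure/keyvault-secrets package; the vault URL, the secret name "app-config", and its fields are made up for the example:

```typescript
import { SecretClient } from "@azure/keyvault-secrets";
import { DefaultAzureCredential } from "@azure/identity";

// Placeholder vault URL for the example.
const client = new SecretClient(
  "https://my-vault.vault.azure.net",
  new DefaultAzureCredential()
);

// Write the bundle: one setSecret call instead of one call per value.
await client.setSecret(
  "app-config",
  JSON.stringify({ dbPassword: "...", apiKey: "...", smtpPassword: "..." })
);

// Read it back with a single getSecret call, then parse.
// Anyone who can read "app-config" can read every value inside it.
const bundle = JSON.parse((await client.getSecret("app-config")).value!);
console.log(Object.keys(bundle)); // ["dbPassword", "apiKey", "smtpPassword"]
```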

Related

Azure best practice

What is the best practice with Windows Azure Key Vault? Is it a good practice to extract keys within the application every time we make use of them, or is it better to set them in OS environment variables?
It depends on the key. You should not be extracting the master key, only the associated data keys used to encrypt and decrypt.
There is no hard-and-fast rule here, but you should consider the rotation of keys. If you need the keys to be rotated often, you should consider pulling from Key Vault every time.
There are other times it makes more sense to pull once and cache locally - for example, when network latency is an issue, so it's more cost-effective to pull once and then cache. In this scenario, you will need some mechanism in place to force a new pull if/when the key is rotated.
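Here is a rough sketch of the pull-and-cache approach with a forced refresh interval, using the @azure/keyvault-secrets package; the vault URL and the 5-minute TTL are assumptions, not recommendations:

```typescript
import { SecretClient } from "@azure/keyvault-secrets";
import { DefaultAzureCredential } from "@azure/identity";

const client = new SecretClient(
  "https://my-vault.vault.azure.net", // placeholder vault URL
  new DefaultAzureCredential()
);

const cache = new Map<string, { value: string; fetchedAt: number }>();
const TTL_MS = 5 * 60 * 1000; // refetch after 5 minutes so rotations get picked up

async function getCachedSecret(name: string): Promise<string> {
  const hit = cache.get(name);
  if (hit && Date.now() - hit.fetchedAt < TTL_MS) {
    return hit.value; // fresh enough: no network round trip
  }
  const secret = await client.getSecret(name); // pull from Key Vault
  cache.set(name, { value: secret.value!, fetchedAt: Date.now() });
  return secret.value!;
}
```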

How many access policies can I create and add on the same one Azure container?

I didn't find the answer by searching. I thought I should be able to create many stored access policies on one container (at least thousands), but in my tests my program can only add up to 5 policies on one container.
Then I tried Microsoft Azure Storage Explorer, and it has the same restriction: it can add only up to 5. But I cannot find any description of this. Is there any way to remove this restriction? Thanks.
The access policy limit is indeed 5 stored access policies per container, file share, table, or queue, and there's no way to alter this. The limit is documented in Azure's Storage scalability and performance targets documentation.
Note: you are able to generate Shared Access Signatures independently of stored access policies. These are just more limited (e.g. you cannot revoke such a SAS; it's active until it expires, unless you delete the blob or regenerate the account key).
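To make the limit concrete, here is a sketch using the @azure/storage-blob package (account, container, and policy names are made up). setAccessPolicy replaces the container's entire policy list, and the service rejects a list with more than five entries:

```typescript
import {
  BlobServiceClient,
  StorageSharedKeyCredential,
} from "@azure/storage-blob";

// Placeholder account name and key.
const credential = new StorageSharedKeyCredential("myaccount", "<account-key>");
const service = new BlobServiceClient(
  "https://myaccount.blob.core.windows.net",
  credential
);
const container = service.getContainerClient("images");

// Up to 5 stored access policies per container; a 6th entry fails.
await container.setAccessPolicy(undefined, [
  {
    id: "app-readonly", // referenced later when generating a SAS
    accessPolicy: {
      startsOn: new Date(),
      expiresOn: new Date(Date.now() + 365 * 24 * 60 * 60 * 1000), // 1 year
      permissions: "r",
    },
  },
  // ...up to four more identifiers
]);
```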

Azure KeyVault key rotation

The Azure Storage sample code for key rotation demonstrates using multiple uniquely named Secrets. However, within KeyVault it is now possible to create multiple versions of a single Secret. I can see no reason why key rotation cannot be achieved using versions, and on the face of it it seems like it'd be easier to manage.
Can anyone offer any guidance on why you'd choose multiple Secrets over versions of a single Secret to support key rotation? And potentially any general guidance on what versions are intended for, if not this?
Thanks!
Feel free to use a single secret and multiple versions for key rotation, depending on which SDK(s) you are using.
For our Node.js-based apps, we have configuration that points at full KeyVault secret URIs.
Some configuration values point only at the short URL for a secret (no version), so the app gets "the latest version".
Other secrets that need rotation use the full URL to a specific version. We use Azure Table Encryption, for example, so each encrypted row in the table uses a key wrapping key from KeyVault. The key wrapping key is a full versioned KeyVault URL, since you need that specific secret version to decrypt the table data. Over time, different rows will point at different key versions.
For general scenarios, i.e. rotating the current version of a key to a new one, just build your configuration system and connection logic to support 2 keys at a time - if one fails, use the other - or consider time-based logic.
Any approach is fine, but consider how nice and "clean" your story will be if you use versions over time instead of proliferating additional secret names.
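As a minimal sketch of the two addressing styles with the @azure/keyvault-secrets package (the vault URL, secret name, and version id below are placeholders):

```typescript
import { SecretClient } from "@azure/keyvault-secrets";
import { DefaultAzureCredential } from "@azure/identity";

const client = new SecretClient(
  "https://my-vault.vault.azure.net", // placeholder vault URL
  new DefaultAzureCredential()
);

// No version given: Key Vault returns the latest enabled version,
// so newly rotated-in values are picked up automatically.
const latest = await client.getSecret("table-kek");

// Version pinned: always returns that exact value, which is what you
// want when old data was encrypted under a specific key version.
const pinned = await client.getSecret("table-kek", {
  version: "0123456789abcdef0123456789abcdef", // placeholder version id
});

console.log(latest.properties.version, pinned.properties.version);
```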

Azure blob storage and security best practices

When exploring Azure Storage I've noticed that access to a storage container is done through a shared key. There is concern where I work that if a developer uses this key for an application they're building and then leaves the company, they could still log in to the storage account and delete anything they want. The workaround for this would be to regenerate the secondary key for the account, but then we'd have to change the keys in every application that uses them.
Is it best practice to have an entire storage account per application per environment (dev, test, staging, production)?
Is it possible to secure the storage account behind a virtual network?
Should we use signatures on our containers on a per application basis?
Anybody have experiences similar and have found a good pattern for dealing with this?
I have a slightly different scenario - external applications - but the problem is the same: data access security.
I use Shared Access Signatures (SAS) to grant access to a container.
In your scenario you can create a Stored Access Policy per application on a container and generate a SAS using this Stored Access Policy with a long expiration time; you can revoke it at any point by removing the Stored Access Policy from the container. So in your scenario you can revoke the current SAS and generate a new one when your developer leaves. You can't generate a single SAS for multiple containers, so if your application uses multiple containers you will have to generate multiple SAS tokens.
Usage, from the developer's perspective, stays the same:
You can use the SAS token to create a CloudStorageAccount or CloudBlobClient, so it's almost like a regular access key.
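CloudStorageAccount and CloudBlobClient are from the older .NET SDK; as a rough sketch of the same flow with the current @azure/storage-blob package (account name, key, container, and policy id are all placeholders):

```typescript
import {
  BlobServiceClient,
  StorageSharedKeyCredential,
  generateBlobSASQueryParameters,
  ContainerClient,
} from "@azure/storage-blob";

const credential = new StorageSharedKeyCredential("myaccount", "<account-key>");

// SAS tied to the stored access policy "app-readonly": permissions and
// expiry come from the policy, not from the token itself.
const sas = generateBlobSASQueryParameters(
  {
    containerName: "images",
    identifier: "app-readonly", // the stored access policy id
  },
  credential
).toString();

// The developer only ever sees the SAS URL, never the account key.
const containerClient = new ContainerClient(
  `https://myaccount.blob.core.windows.net/images?${sas}`
);
// e.g. await containerClient.getBlockBlobClient("photo.png").download();

// Revocation: clear the stored access policy and every SAS issued
// against it stops working.
const service = new BlobServiceClient(
  "https://myaccount.blob.core.windows.net",
  credential
);
await service.getContainerClient("images").setAccessPolicy(undefined, []);
```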
Longer term, I would probably think about creating one internal service (an internal API) responsible for generating SAS tokens and renewing them. This way you can have a completely automated system, with access keys disclosed only to this main service. You can then restrict access to this service with a virtual network, certificates, authentication, etc. And if something goes wrong (the developer who wrote that service leaves :-) ), you can regenerate the access keys and change them, but this time only in one place.
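As a very rough sketch of that internal service idea (Express and every name here are assumptions; a real version would add authentication, which is exactly what you'd put behind the virtual network/certificates mentioned above):

```typescript
import express from "express";
import {
  StorageSharedKeyCredential,
  generateBlobSASQueryParameters,
  ContainerSASPermissions,
} from "@azure/storage-blob";

const app = express();
// Only this service ever holds the account key; callers get short-lived SAS.
const credential = new StorageSharedKeyCredential("myaccount", "<account-key>");

app.get("/sas/:container", (req, res) => {
  const sas = generateBlobSASQueryParameters(
    {
      containerName: req.params.container,
      permissions: ContainerSASPermissions.parse("r"), // read-only
      expiresOn: new Date(Date.now() + 60 * 60 * 1000), // valid 1 hour
    },
    credential
  ).toString();
  res.json({ sas });
});

app.listen(3000);
```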
A few things:
Storage account per application (and/or environment) is a good strategy, but you have to be aware of the limit: max 100 storage accounts per subscription.
There is no option to limit access to a storage account with a virtual network.
You can have a maximum of 5 Stored Access Policies on a single container.
I won't get into subjective/opinion answers, but from an objective perspective: if a developer has a storage account key, they have full access to the storage account. And if they leave the company and keep a copy of the key, the only way to block them is to regenerate the key.
You might assume that separating apps with different storage accounts helps. However, keep this in mind: if a developer had access to a subscription, they had access to the keys for every single storage account in that subscription.
When thinking of key regeneration, think about the total surface area of apps having knowledge of the key itself. If storage manipulation is solely a server-side operation, the impact of changing a key is minimal (a small app change in each deployment, along with updating any storage browsing tools you use). If you embedded the key in a desktop/mobile application for direct storage access, you have a bigger problem with having to push out updated clients, but you already have a security problem anyway.

Practical Limit on number of Azure Shared Access Signatures?

I'm looking to avoid having to use a handler/module in my web role to protect images being served from block blob storage on Azure. Shared Access Signatures (SAS) seem to be the way to go.
My question: is there a practical limit on the number of SAS tokens I can issue - could I issue one every minute, say? Is there a performance issue (time to issue a SAS) that would be the limiting factor?
I had initially thought that one SAS per user session would protect me better than a single SAS, but since there is nothing tying a SAS to a user, that won't help...
Shared Access Signatures have an optional component called a "container-level access policy". If you use a container-level access policy, it actually gets stored in blob storage, and there's a limit of five per container.
If you don't use a container-level access policy, you can make as many Shared Access Signatures as you want, and the server isn't even involved. (The signature is generated locally, meaning in your web role instance.) Generating the signature does involve some crypto, so you may eventually peg the CPU, but I suspect it's "fast enough".
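To illustrate that the signing is purely local, here is a sketch (with made-up account and container names) that issues a short-lived, blob-scoped SAS per image; no call to the storage service is involved:

```typescript
import {
  StorageSharedKeyCredential,
  generateBlobSASQueryParameters,
  BlobSASPermissions,
} from "@azure/storage-blob";

const credential = new StorageSharedKeyCredential("myaccount", "<account-key>");

// Pure local computation (an HMAC over the SAS fields): no network call,
// so you can issue one per request without touching the storage service.
function imageUrl(blobName: string): string {
  const sas = generateBlobSASQueryParameters(
    {
      containerName: "images",
      blobName,
      permissions: BlobSASPermissions.parse("r"), // read-only
      expiresOn: new Date(Date.now() + 60 * 1000), // valid for 1 minute
    },
    credential
  ).toString();
  return `https://myaccount.blob.core.windows.net/images/${blobName}?${sas}`;
}
```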
