Azure Key Vault best practice

What is the best practice with Windows Azure Key Vault? Is it good practice to retrieve keys from the vault within the application every time we use them, or is it better to set them in OS environment variables?

It depends on the key. You should not be extracting the master key, only the associated data keys used to encrypt and decrypt.
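As an illustration of that pattern (often called envelope encryption), here is a minimal Python sketch using the azure-keyvault-keys package; the vault URL and key name are placeholders and error handling is omitted. The master key stays in Key Vault and is only used to wrap and unwrap a locally generated data key.

```python
import os
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient
from azure.keyvault.keys.crypto import CryptographyClient, KeyWrapAlgorithm

# Placeholders: substitute your own vault URL and key name.
vault_url = "https://<your-vault-name>.vault.azure.net"
credential = DefaultAzureCredential()

key_client = KeyClient(vault_url=vault_url, credential=credential)
master_key = key_client.get_key("app-master-key")  # this key never leaves Key Vault
crypto_client = CryptographyClient(master_key, credential=credential)

# Generate a local data key and wrap it with the Key Vault master key.
data_key = os.urandom(32)  # use this key to encrypt/decrypt your data locally
wrapped = crypto_client.wrap_key(KeyWrapAlgorithm.rsa_oaep_256, data_key)

# Persist wrapped.encrypted_key alongside your ciphertext; unwrap it later to decrypt.
unwrapped = crypto_client.unwrap_key(KeyWrapAlgorithm.rsa_oaep_256, wrapped.encrypted_key)
assert unwrapped.key == data_key
```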

There is no hard-and-fast rule here, but you should consider key rotation. If your keys need to be rotated often, consider pulling from Key Vault every time.
Other times it makes more sense to pull once and cache locally, for example when network latency is an issue and it is more cost-effective to pull once and then cache. In this scenario, you will need some mechanism in place to force a fresh pull if/when the key is rotated.
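As a rough sketch of such a mechanism in Python with the azure-keyvault-secrets package (the vault URL, secret name, and TTL are illustrative), a small wrapper can cache the value for a fixed interval and expose a refresh() you can call when you know the key has been rotated:

```python
import time
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

class CachedSecret:
    """Caches a Key Vault secret for a fixed TTL; call refresh() to force a new pull."""

    def __init__(self, client: SecretClient, name: str, ttl_seconds: int = 300):
        self._client = client
        self._name = name
        self._ttl = ttl_seconds
        self._value = None
        self._fetched_at = 0.0

    def get(self) -> str:
        if self._value is None or time.monotonic() - self._fetched_at > self._ttl:
            self.refresh()
        return self._value

    def refresh(self) -> None:
        # Pull the latest version from Key Vault (e.g. after a rotation).
        self._value = self._client.get_secret(self._name).value
        self._fetched_at = time.monotonic()

# Placeholders: substitute your own vault URL and secret name.
client = SecretClient(vault_url="https://<your-vault-name>.vault.azure.net",
                      credential=DefaultAzureCredential())
db_password = CachedSecret(client, "db-password", ttl_seconds=600)
```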

Related

AWS QLDB - GDPR support

I am planning to use AWS QLDB for audit data.
Does QLDB support GDPR? Is there any performance impact to this?
There are some fields encrypted using a custom encryption key before being stored in QLDB. I might change the key down the line if the key gets compromised or because of a key rotation policy. So, I may need to read all the old records, decrypt them using the old key, re-encrypt them using the new key, and update them again. Is this possible with QLDB?
How do I do multi-tenancy with QLDB? For example, I have multiple apps writing audit data and would like to have a virtual separation for each app in the same cluster.
Thank you for the question; it touches on some of the concepts at the heart of QLDB.
Does QLDB support GDPR? Is there any performance impact to this?
The QLDB developer guide page on data protection may help provide more information about the AWS shared responsibility model. It may also be helpful to read this AWS blog post about the shared responsibility model and GDPR.
We are currently working on a feature that will allow customers to remove the customer data payload from QLDB revisions. Many customers have asked for this feature in order to accommodate the GDPR “right to be forgotten” requirement. Please do be aware that this is not a claim of “compliance”, as that is something you would need to evaluate independently. We do not anticipate this impacting any read/write performance. If you’re interested in knowing more about this, please reach out to AWS support and they’ll connect you with our team to tell you more about it.
There are some fields encrypted using a custom encryption key before being stored in QLDB. I might change the key down the line if the key gets compromised or because of a key rotation policy. So, I may need to read all the old records, decrypt them using the old key, re-encrypt them using the new key, and update them again. Is this possible with QLDB?
Reading all the old records is possible in QLDB through a few different methods — querying revision history, exporting journal data, or streaming journal data.
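For example, assuming the Python pyqldb driver and an illustrative ledger and table name, querying revision history could look roughly like this:

```python
from pyqldb.driver.qldb_driver import QldbDriver

# Ledger and table names are placeholders.
driver = QldbDriver(ledger_name="audit-ledger")

def read_all_revisions(txn):
    # history() returns every revision of every document in the table,
    # including the data payload and metadata for each revision.
    return list(txn.execute_statement("SELECT * FROM history(AuditRecords)"))

revisions = driver.execute_lambda(read_all_revisions)
```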
However, it is worth noting that QLDB does provide encryption at rest via KMS. You can leverage KMS for key rotations or key expiry as well, and you’ll be able to access the old data with the new key via KMS’s key hierarchy. KMS will allow you to rotate keys without the need to reencrypt all your data.
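For instance, a hedged sketch using boto3 (the key ID is a placeholder) of turning on automatic rotation for a customer-managed KMS key:

```python
import boto3

kms = boto3.client("kms")

# Placeholder key ID. Enabling rotation makes KMS rotate the backing key material
# automatically while the key ID stays the same, so old ciphertexts remain decryptable.
kms.enable_key_rotation(KeyId="1234abcd-12ab-34cd-56ef-1234567890ab")
print(kms.get_key_rotation_status(KeyId="1234abcd-12ab-34cd-56ef-1234567890ab"))
```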
How do I do multi-tenancy with QLDB? For example, I have multiple apps writing audit data and would like to have a virtual separation for each app in the same cluster.
There are a few potential ways to go about this, which ultimately depend on your use case(s). Within a single ledger, you could leverage attributes in each document to differentiate between tenants. You could leverage multiple ledgers in QLDB in a single account within the default quota. It may also be the case that you want even more separation and may consider creating multiple accounts and leveraging something like AWS Control Tower.
All that said, the best approach could depend very heavily on your use case(s), as well as the other AWS products that you're using. You may want to reach out to AWS support on this as well to connect with the relevant Solutions Architect, who could consult on approaches given your specific use case(s).

Maximum number of secrets in Azure Key Vault?

Is there a limit on the number of keys, certificates, etc. in a Key Vault?
There is no documented limit for the number of resources in a Key Vault - only the number of operations per second.
However, the more resources you have in a vault, the longer it will take to enumerate them all. If you have no need to enumerate them, this may not affect performance, but that is also not documented.
But if you're using a configuration library - common, for example, with ASP.NET applications - it often fetches all secrets to find the ones it needs. With a limit on the number of operations per second, this can either fail (too many calls) or take a long while if a retry policy is used. Such a retry policy is built in and enabled by default in the Azure.Security.KeyVault.* packages, which we recommend developers use instead of Microsoft.Azure.KeyVault, which will only get critical fixes going forward.
A common workaround is to stuff multiple secrets into something like a JSON blob stored as a single secret, though make sure you are comfortable with anyone who has access to the blob having access to all the secrets contained therein.
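A minimal sketch of that workaround in Python with azure-keyvault-secrets (the vault URL, secret names, and values are purely illustrative):

```python
import json
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(vault_url="https://<your-vault-name>.vault.azure.net",
                      credential=DefaultAzureCredential())

# Store several related settings as one secret.
bundle = {"DbPassword": "p@ss", "ApiKey": "abc123", "StorageKey": "xyz789"}
client.set_secret("app-secrets", json.dumps(bundle))

# One GET instead of one call per setting; everyone with access sees all of them.
settings = json.loads(client.get_secret("app-secrets").value)
```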

How can I cache data using Azure Key Vault?

I want to use Azure Key Vault for my PaaS application. Is there any way to cache the data instead of making calls to Key Vault every time to retrieve a key?
Here is a code sample to cache and proxy secrets, keys, and certificates from Azure Key Vault.
Links:
https://learn.microsoft.com/en-us/samples/azure/azure-sdk-for-net/azure-key-vault-proxy/ OR
https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/keyvault/samples/keyvaultproxy/src
It is a pretty clean way.
Yes, any of the standard caching mechanisms will still work.
On the first request, your app will look in the cache first, won't find the value, and will call Key Vault for it. You then store the value in the cache so that the next time your application needs it, it is retrieved from the cache.
You could cache in memory or, ideally, in something out of process, such as Redis.
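For example, a minimal cache-aside sketch in Python using azure-keyvault-secrets (the vault URL is a placeholder; swap the dict for Redis or another out-of-process store as needed):

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Placeholder vault URL; in a real app the client would be a long-lived singleton.
client = SecretClient(vault_url="https://<your-vault-name>.vault.azure.net",
                      credential=DefaultAzureCredential())

_cache: dict[str, str] = {}  # replace with an out-of-process store such as Redis if needed

def get_secret(name: str) -> str:
    # Cache-aside: check the cache first, fall back to Key Vault on a miss.
    if name not in _cache:
        _cache[name] = client.get_secret(name).value
    return _cache[name]
```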

Azure KeyVault key rotation

The Azure Storage sample code for key rotation demonstrates using multiple uniquely named Secrets. However, within KeyVault it is now possible to create multiple versions of a single Secret. I can see no reason why key rotation cannot be achieved using Versions and it seems on the face of it like it'd be easier to manage.
Can anyone offer any guidance on why you'd choose multiple Secrets over versions of a single Secret to support key rotation? And potentially any generally guidance on what Versions are intended for if not this?
Thanks!
Feel free to use a single secret and multiple versions for key rotation, depending on which SDK(s) you are using.
For our Node.js-based apps, we have configuration that points at the full KeyVault secret URIs.
Some secrets only point at the short URL for a secret (no version), so the app gets "the latest version".
Other secrets that need rotation will need the full URL including the version. We use Azure Table encryption, for example, so each row in the table, when encrypted, uses a key-wrapping key from Key Vault. The key-wrapping key is referenced by a full, versioned Key Vault URL, since you need that specific secret version to decrypt the table data. Over time, different rows will point at different key versions.
For general scenarios, i.e. rotating the current version of a key to a new one, just build your configuration system and connection logic to support two keys at a time - if one fails, use the other - or consider time-based logic.
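As a rough sketch of both ideas in Python with azure-keyvault-secrets (the secret name, version id, and the decrypt routine are placeholders, not real values):

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(vault_url="https://<your-vault-name>.vault.azure.net",
                      credential=DefaultAzureCredential())

# Short name -> always resolves to the latest version (fine for values you can swap freely).
latest = client.get_secret("table-wrapping-key")

# Pinned version -> exactly the value that encrypted existing data (needed to decrypt it later).
pinned = client.get_secret("table-wrapping-key", version="<version-id>")

def try_decrypt(ciphertext: bytes, keys: list[bytes], decrypt) -> bytes:
    # "Two keys at a time": try the current key first, then fall back to the previous one.
    # `decrypt` is whatever routine your app uses; it should raise on a bad key.
    for key in keys:
        try:
            return decrypt(ciphertext, key)
        except ValueError:
            continue
    raise ValueError("no configured key could decrypt this value")
```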
Any approach is fine, but consider how nice and "clean" your story will be if you use versions over time instead of proliferating additional secret names.

Ansible vault password file

I was thinking: since we already have a secret file that we use to access the servers (the SSH private key), how much of a security risk would it be to use this file as the password file for the vault?
The benefit would be that we only have to secure the SSH private key instead of having another key for the vault.
I like your thought of reducing secrets, but I have some concerns about using the Ansible private key.
Scenario
Ideally, the private key you mention exists only on your management machine, from which you run your playbooks. The way I see it, the more this key is distributed among other machines/systems, the more likely it is to be compromised. The Ansible private key usually gives root access to any provisioned machine in your system, which makes it a very valuable secret. I never provision the Ansible private key with Ansible itself (which would be a chicken-and-egg problem anyway, at least on the first management machine).
Problem
One potential problem I see with this approach is when developing roles locally, e.g. with Vagrant.
You would need the private key from your management system locally to decrypt the secrets and run your playbooks against your Vagrant boxes.
Also, any other developer who works on the same Ansible project would need that private key locally for development.
Potential workaround
My premise is that the private key does not leave the management server. To achieve that, you could develop your roles so that local development does not need any secret decryption, e.g. create a dev counterpart for each production group that uses only non-encrypted fake data (a sketch of such a layout is shown at the end of this section). That way you would only need to decrypt secrets on your management machine and would not need the private key locally, but of course this means a higher development effort for your Ansible project.
I always try to use this approach as much as possible anyway, but from time to time you might find yourself in a scenario in which you still need to decrypt some valid API key for your Vagrant boxes. In some projects you might want to use your Ansible playbooks not only for production servers, but also to locally provision Vagrant boxes for the developers, which is usually when you need to decrypt a certain number of valid secrets.
Also worth mentioning: with this approach, changes to the production secrets can only be made directly on the management server with the private key.
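A rough sketch of what such a layout could look like (group and file names are purely illustrative):

```
inventories/
  production/
    hosts
    group_vars/
      webservers/
        vars.yml        # references variables defined in vault.yml
        vault.yml       # ansible-vault encrypted; only ever decrypted on the management server
  dev/
    hosts               # Vagrant boxes
    group_vars/
      webservers/
        vars.yml        # same variable names, but plain-text fake values
```

On the management server only, ansible.cfg can then point at a dedicated vault password file, while developer machines never need it:

```
# ansible.cfg on the management server
[defaults]
vault_password_file = /path/to/vault_pass.txt
```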
Conclusion
All in all, I think that while it would be theoretically possible to use the private key as the vault password, the benefit of having one less secret is too small compared to the overhead that comes with the extra security concerns.
