Various microservices will need access to various Azure resources, each of which has its own connection string or authentication key requirements.
I'm thinking that Azure Key Vault is the best way to store the connection information for each resource, with the microservices authenticating themselves in order to gain access to the connection information they need.
Is this a best-practice approach?
Yes, I think so. You can secure access to your key vaults by allowing only authorized applications and users, and since your services need to reach several Azure resources, Azure Key Vault is a convenient central place to keep that connection information.
Besides, Azure Key Vault uses Hardware Security Modules (HSMs) by Thales. What is special about HSMs is that they never hand keys out: you create or import a key into an HSM, and later you give data to the HSM, which executes cryptographic operations on that data internally, e.g. encrypting, decrypting, hashing, etc. By the way, those hardware devices are really expensive; with Azure Key Vault you are able to use this protection for a small price. That's one benefit of using Azure Key Vault.
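As a minimal sketch of what this looks like from one of your microservices, assuming the Python SDK and hypothetical vault/secret names, the service authenticates with its managed identity and pulls only the connection strings its access policy allows:

```python
# pip install azure-identity azure-keyvault-secrets
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential picks up the managed identity when running in
# Azure, and falls back to developer credentials locally.
credential = DefaultAzureCredential()

# Hypothetical vault URL; each microservice only needs the vault(s)
# it has been granted access to.
client = SecretClient(
    vault_url="https://orders-service-kv.vault.azure.net",
    credential=credential,
)

# Hypothetical secret name holding a connection string.
sql_conn = client.get_secret("sql-connection-string").value
```

This keeps the credentials out of app settings and source control entirely; the service itself only ever holds an identity, not a secret.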
Reference:
Best practices to use Key Vault
Related
We are developing an Azure multi-tenant web service portal that calls external API services. Each called web service may have OAuth parameters such as endpoint, client ID, secret, etc. Currently, we have two flavors of working code:
one uses SQL to store these parameters; and
the other uses a JSON config file to maintain runtime parameters.
I would like to offer an Azure Key Vault solution, but it seems unwise to have both a client ID and client secret in the same Key Vault.
Additionally, we have many subscribers in our multi-tenant model, and each may have a specific config (for example: Solr collection name, SQL DB connection string, etc.). I am wondering about commingling these runtime parameters versus allowing each customer to have their own vault, which of course requires that the customer share access with us as a SaaS vendor. We want to offer best-practice security to our subscribers, many of which are Fintechs. Any advice is appreciated.
"but it seems unwise to have both a Client ID and Client Secret in the same Key Vault"
Why would you store these in the same database or JSON file instead? That is far less secure.
Have you looked at Azure API Management? It is by far the best way to amalgamate services.
If you are too far down the road, use Key Vault. Use a managed identity (MSI, Managed Service Identity) to connect from your app service / function app, and restrict the access policy to only the key and secret permissions each service needs (e.g., Get and List rather than full read/write). Limit network access via the Key Vault firewall, and make sure all diagnostics are logged.
If you need a per-client model rather than a multi-tenant one, deploy separate instances of the portal or API Management for each client. Per-client is more expensive and trickier to maintain, but far more secure, because you can enforce physical partitioning on a number of fronts.
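To make that trade-off concrete, here is a rough sketch of the shared-vault pattern with tenant-namespaced secrets (Python SDK assumed; the vault URL, tenant IDs, and secret names are all hypothetical). A per-client variant would select a different vault URL per tenant instead of prefixing the secret name:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()

# Shared-vault variant: one vault, secrets namespaced per tenant.
shared_vault = SecretClient(
    vault_url="https://portal-shared-kv.vault.azure.net",
    credential=credential,
)

def tenant_config(tenant_id: str, name: str) -> str:
    # e.g. "contoso--solr-collection-name", "contoso--sql-connstring"
    return shared_vault.get_secret(f"{tenant_id}--{name}").value

solr_collection = tenant_config("contoso", "solr-collection-name")
```

The vault-per-customer model gives stronger isolation (and is easier to defend to a Fintech auditor), at the cost of provisioning and managing one vault per subscriber.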
I'm provisioning a Key Vault in Azure. I wish to grant a development team permissions to be able to access and create keys and secrets and certs in this vault, but not have access to ALL of the keys, secrets and certs in the vault. Is that possible or do I need a separate key vault with separate permissions/access policies?
Thanks!
[Edit 2]
Now you can. For example, for secrets:
https://learn.microsoft.com/en-us/azure/key-vault/general/rbac-guide?tabs=azure-cli#secret-scope-role-assignment
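A hedged sketch of such a secret-scoped role assignment with the Python management SDK (subscription, resource group, vault, secret, and principal IDs are placeholders; the linked guide shows the equivalent Azure CLI command, and the exact parameter shape varies a bit across azure-mgmt-authorization versions):

```python
# pip install azure-identity azure-mgmt-authorization
import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "<subscription-id>"
auth_client = AuthorizationManagementClient(
    DefaultAzureCredential(), subscription_id
)

# Scope the assignment to a single secret inside the vault
# (requires the vault to use the RBAC permission model).
scope = (
    f"/subscriptions/{subscription_id}"
    "/resourceGroups/<rg>/providers/Microsoft.KeyVault"
    "/vaults/<vault>/secrets/<secret>"
)

# Built-in "Key Vault Secrets User" role definition ID.
role_definition_id = (
    f"/subscriptions/{subscription_id}/providers"
    "/Microsoft.Authorization/roleDefinitions"
    "/4633458b-17de-408a-b874-0445c86b69e6"
)

auth_client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # role assignment names must be GUIDs
    {
        "role_definition_id": role_definition_id,
        "principal_id": "<object-id-of-user-or-service-principal>",
    },
)
```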
Anyway, it is still recommended not to do this unless you really need it, and instead to use multiple key vaults split along permission boundaries.
[Edit]
This feature might be coming in the near future. Stay tuned ;)
[Original]
No, you cannot. But you can create as many key vaults as you want :)
Docs:
Important
Key Vault access policies don't support granular, object-level permissions like a specific key, secret, or certificate. When a user is granted permission to create and delete keys, they can perform those operations on all keys in that key vault.
Azure DevTest Labs does this: when you create a lab, it creates one key vault per user, so you can have granularity in the permissions.
For anyone else looking, please refer to the link below. I am not the author or anything, just posting what I've found useful:
https://feedback.azure.com/forums/906355-azure-key-vault/suggestions/32213176-per-secret-key-certificate-access-control
We implemented Azure RBAC for the Key Vault data plane, which allows creating role assignments with an individual key, secret, or certificate as the scope.
Our best practice is to have one key vault per application, per region, per environment to provide complete isolation and avoid blast radius in case of a failure. Consolidation of key vaults is not recommended, and the Key Vault service will not scale that way. Important limitations to consider: Azure RBAC allows a maximum of 2000 role assignments per subscription, and Key Vault a maximum of 2000 operations within 10 seconds.
Documentation:
https://learn.microsoft.com/en-us/azure/key-vault/general/rbac-guide
I would like to use a single key vault shared among multiple teams. I want to maintain certificates, secrets, and keys, and use RBAC on users, groups, and service principals to ensure they only have access to the objects they should. Is that possible?
As mentioned in the comment, you can't.
The granularity of the data-tier access control is not that fine: a user, service principal, or group either has access to all the secrets in a vault or to none of them, and the same goes for keys and certificates.
With the features available in the product at this point, you cannot deploy data-plane RBAC for customized access to Key Vault entities such as individual secrets, keys, and certificates.
You may consider creating multiple instances of vaults to deploy such a pattern. At the same time, you should also voice your opinion on the feedback portal.
The Disable-AzureRmVMDiskEncryption cmdlet (I believe disable = decryption) just needs the name of the VM to disable encryption.
Isn't it a security issue to be able to disable encryption without any key? How can the disks be safeguarded from having encryption disabled, through RBAC?
Isn't it a security issue to be able to disable encryption without any key?
It doesn't look like a security concern because there are two separate concerns at play here:
Protecting data at rest, which is taken care of by Azure Disk Encryption (only if you enable it, as per Azure Data Security and Encryption Best Practices)
Protecting access to the VM itself and its resources, which is taken care of by RBAC.
When you disable disk encryption
It does actually decrypt the currently encrypted data, so that it is no longer encrypted at rest.
Since Azure already knows the Key Encryption Key (KEK) and Disk Encryption Key (DEK) from when you enabled encryption in the first place, it doesn't need to ask for them again in order to decrypt the currently encrypted data.
Here are the details of the decryption flow from Microsoft Docs:
Decryption workflow
How can the disks be safeguarded from having encryption disabled, through RBAC?
The real concern of who can manage the VM in general, or initiate/disable disk encryption, can be controlled by assigning (or removing) the correct roles, such as Owner or Virtual Machine Contributor, using RBAC from the Azure portal, PowerShell, etc.
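One way to enforce that with RBAC, sketched with the Python management SDK: Azure Disk Encryption is driven through a VM extension, so a custom role that can manage VMs but not their extensions cannot disable encryption. The role name and scope here are hypothetical, and the exact model shape varies between azure-mgmt-authorization versions:

```python
# pip install azure-identity azure-mgmt-authorization
import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "<subscription-id>"
client = AuthorizationManagementClient(
    DefaultAzureCredential(), subscription_id
)
scope = f"/subscriptions/{subscription_id}"

# Hypothetical custom role: manage VMs, but no access to VM extensions,
# which is the mechanism Azure Disk Encryption (and disabling it) uses.
client.role_definitions.create_or_update(
    scope,
    str(uuid.uuid4()),
    {
        "role_name": "VM Operator (no encryption changes)",
        "description": "Manage VMs without touching their extensions.",
        "permissions": [
            {
                "actions": ["Microsoft.Compute/virtualMachines/*"],
                "not_actions": [
                    "Microsoft.Compute/virtualMachines/extensions/*"
                ],
            }
        ],
        "assignable_scopes": [scope],
    },
)
```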
For a new web application that is going to be built in Azure, we are thinking of storing sensitive personal documents (scans of passports, educational transcripts, etc.) in Azure Blob storage.
Is this a secure enough approach or is there a better way of doing this? Thanks for your input.
Storage service encryption is enabled by default for all storage accounts. You have the option to use your own encryption keys if you wish.
https://learn.microsoft.com/en-us/azure/storage/common/storage-service-encryption
If you wish to create custom keys managed by Azure Key Vault, you can follow instructions here: https://learn.microsoft.com/en-us/azure/storage/common/storage-service-encryption-customer-managed-keys
However, if you worry about the data as it is being transferred to Azure blob, you will need to use client-side encryption as well. Here is the link for that: https://learn.microsoft.com/en-us/azure/storage/common/storage-client-side-encryption?toc=%2fazure%2fstorage%2fqueues%2ftoc.json
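A sketch of what client-side encryption looks like with the Python storage SDK, modeled on the key-wrapper pattern in the docs linked above (the LocalKeyWrapper class and all names here are illustrative, not SDK types; in practice the wrapping key would live in Key Vault rather than in code):

```python
# pip install azure-storage-blob cryptography
import os
from azure.storage.blob import BlobClient
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

class LocalKeyWrapper:
    """Illustrative key-encryption key implementing the interface the
    storage SDK expects (wrap_key / unwrap_key / algorithm / kid)."""

    def __init__(self, kid: str, key: bytes):
        self.kid = kid
        self.key = key  # keep this in Key Vault in real deployments

    def get_kid(self):
        return self.kid

    def get_key_wrap_algorithm(self):
        return "A256KW"

    def wrap_key(self, cek):
        return aes_key_wrap(self.key, cek)

    def unwrap_key(self, wrapped, algorithm):
        return aes_key_unwrap(self.key, wrapped)

blob = BlobClient.from_connection_string(
    "<connection-string>", "documents", "passport-scan.pdf"
)
blob.require_encryption = True
blob.key_encryption_key = LocalKeyWrapper("local-kek-1", os.urandom(32))

# The SDK encrypts the content client-side before it leaves the machine.
with open("passport-scan.pdf", "rb") as f:
    blob.upload_blob(f, overwrite=True)
```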
Like many things in Azure, it can be secure, but takes effort to do so.
1) As of last year, all storage accounts are encrypted at rest with AES-256 using Microsoft-managed keys. You can also bring your own key, as mentioned here.
2) Employ client side encryption - that way, if the account was compromised, the attacker can't read the data; they also need the key to decrypt the data. This does influence performance, but is often acceptable for most scenarios.
3) Use the storage account's firewall to permit only the addresses that require access to the storage account.
Side note: If you're accessing Storage from an App Service, the outbound IP addresses will not change unless you scale the App Service Plan up or down. Auto-scaling your app service horizontally does not change the outbound IP addresses.
4) Integrate the storage account with Azure KeyVault to automatically rotate the keys and generate SAS tokens, as documented here. I wish this could be done via the portal, as most people aren't aware that this exists.
5) Do not hand out the storage account keys; generate short-lived SAS tokens instead (see the sketch after this list). KeyVault integration can help with this.
6) Enable storage diagnostics metrics and logging. While not a defensive measure by itself, it can be helpful after the fact.
7) Enable soft delete; this may reduce the impact of certain attacks if a breach were to occur.
8) Enable the 'secure transfer required' setting, to permit traffic only over HTTPS.
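For point 5, a minimal sketch of issuing a short-lived, read-only SAS for a single blob (account, container, and blob names are hypothetical; the account key itself stays server-side, ideally fetched from Key Vault, and only the SAS URL is handed to the client):

```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import generate_blob_sas, BlobSasPermissions

# Read-only token that expires in 15 minutes.
sas = generate_blob_sas(
    account_name="portaldocs",
    container_name="documents",
    blob_name="passport-scan.pdf",
    account_key="<account-key>",
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(minutes=15),
)
url = (
    "https://portaldocs.blob.core.windows.net"
    f"/documents/passport-scan.pdf?{sas}"
)
```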