Can I provide an Azure ExpressRoute service key to a 3rd party?

What is encoded in an Azure ExpressRoute service key? Is there a security concern if it is exposed to a 3rd party: can it be used by another person, or on an Azure subscription other than the one where it was created? How can a customer protect this key, and is it one-time use?

The service key is not a secret by virtue of its content. There is a 1:1 mapping between an Azure ExpressRoute circuit (for a subscription) and its service key.
This means that for every new ExpressRoute circuit you create within a single subscription (limit of 10 circuits), there will be a new service key.
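For illustration, a minimal sketch of reading a circuit's service key with the Azure SDK for Python (azure-identity and azure-mgmt-network); the subscription ID, resource group, and circuit names below are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, "<subscription-id>")

# Each ExpressRoute circuit in the subscription carries its own service key
# (the 1:1 mapping described above).
circuit = network_client.express_route_circuits.get(
    resource_group_name="my-network-rg",  # placeholder
    circuit_name="my-er-circuit",         # placeholder
)
print(circuit.service_key)
```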

Related

Azure Key Vault best practices

We are developing an Azure multi-tenant web service portal that calls external API services. Each called web service may have OAuth parameters such as endpoint, client ID, secret, etc. Currently, we have two flavors of working code:
one uses SQL to store these parameters; and
one uses a JSON config file to maintain runtime parameters.
I would like to offer an Azure Key Vault solution, but it seems unwise to have both a client ID and client secret in the same Key Vault.
Additionally, we have many subscribers in our multi-tenant model, and each may have a specific config (for example, a Solr collection name, a SQL DB connection string, etc.), and I am wondering about commingling these runtime parameters versus allowing each customer to have their own vault, which of course requires that the customer share access with us as the SaaS vendor. We want to offer best-practice security to our subscribers, many of which are fintechs. Any advice is appreciated.
"but it seems unwise to have both a Client ID and Client Secret in the same Key Vault"
Why would you store these in the same database or JSON file? That is far less secure.
Have you looked at Azure API Management? It is by far the best way to amalgamate services.
If you are too far down the road, use Key Vault. Use a managed identity (MSI) to connect from your app service / function app, and limit the access policy on keys and secrets to only the operations you need (Get, List, Read, Write). Limit access via the Key Vault firewall. Make sure all diagnostics are logged.
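As a minimal sketch of that pattern, assuming an app service with a system-assigned managed identity that has been granted get/list on secrets, and the azure-identity and azure-keyvault-secrets Python packages (vault and secret names are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential picks up the managed identity at runtime, so no
# client secret has to live in SQL, JSON config, or app settings.
credential = DefaultAzureCredential()
client = SecretClient(
    vault_url="https://<vault-name>.vault.azure.net",
    credential=credential,
)

# Fetch the OAuth client secret for one of the external APIs.
external_api_secret = client.get_secret("external-api-client-secret")  # placeholder name
token_request_payload = {"client_secret": external_api_secret.value}
```

Pair this with the Key Vault firewall rules and diagnostic logging mentioned above.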
If you need a per-client model rather than a multi-tenant one, then deploy separate instances of the portal or API Management for each client. Per-client is more expensive and trickier to maintain, but far more secure because you can enforce physical partitioning on a number of fronts.

Azure Key Vault - Geo Replication?

Does Azure Key Vault support geo-replication between regions? I don't see any options.
https://learn.microsoft.com/en-us/azure/key-vault/general/disaster-recovery-guidance
"The contents of your key vault are replicated within the region and
to a secondary region at least 150 miles away but within the same
geography to maintain high durability of your keys and secrets. See
the Azure paired regions document for details on specific region
pairs."
From @Karthikeyan Vijayakumar's comment above:
However, I have the application deployed in both West US (primary) and East US (secondary), and I want to sync between the regions.
You don't need to replicate your Key Vault instance to make it available to your applications in both regions.
Simply call the URL (https://<vault-instance-name>.vault.azure.net); Azure DNS will dynamically resolve it to the active region. By default, the active region is the region where you created the instance. In the event this region is unavailable, DNS will resolve to the geo-replica hosted in the corresponding paired region.
The problem with this approach is that you are still at the mercy of Microsoft, as the service will be re-established only if they decide to fail over the region.
Long story short: there is no user-managed geo-replication for Azure Key Vault, as there is for Azure SQL, for example. In your case, you need to build a workflow that replicates the values between your primary and secondary key vaults.
Backup and restore: https://learn.microsoft.com/en-us/azure/key-vault/general/backup?tabs=azure-cli
You can use these capabilities to build your workflow.
You can use the vault's change log to track changes to your key vault and trigger a backup/restore, or you can schedule the workflow, for example once a day.
Change tracking is better, as you only replicate the changes and not the entire key vault.
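As a rough sketch of such a workflow, assuming the azure-identity and azure-keyvault-secrets Python packages and two pre-created vaults (the vault names are placeholders); a production version would be triggered by change events rather than copying everything on a schedule:

```python
from azure.core.exceptions import ResourceNotFoundError
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()
primary = SecretClient("https://<primary-vault>.vault.azure.net", credential)
secondary = SecretClient("https://<secondary-vault>.vault.azure.net", credential)

# Copy every enabled secret from the primary vault to the secondary vault,
# writing a new version only when the value actually changed.
for item in primary.list_properties_of_secrets():
    if not item.enabled:
        continue
    secret = primary.get_secret(item.name)
    try:
        current = secondary.get_secret(item.name).value
    except ResourceNotFoundError:
        current = None  # secret does not exist in the secondary vault yet
    if current != secret.value:
        secondary.set_secret(item.name, secret.value)
```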
Regards

Securing Resource Connection Data/Strings For Use by Microservices in Azure

Various microservices will need access to various Azure resources, each of which has various connection string/authentication key requirements.
I'm thinking that Azure Key Vault is the best way to store the connection information for each resource, with the microservices authenticating themselves in order to gain access to the connection information they need.
Is this a best-practice approach?
Yes, I think so. You can secure access to your key vaults by allowing only authorized applications and users. As you need access to various Azure resources, I think it is very convenient to use Azure Key Vault.
Besides, Azure Key Vault uses hardware security modules (HSMs) by Thales. What is special about HSMs is that they do not give you the keys: you create or import a key into an HSM, and later you hand data to the HSM and it executes cryptographic operations on that data, e.g. encrypting, decrypting, hashing, etc. By the way, those hardware devices are really expensive; with Azure Key Vault you are able to use this protection for a small price. That's one benefit of using Azure Key Vault.
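A minimal sketch of that model, assuming the azure-identity and azure-keyvault-keys Python packages and an existing RSA key in the vault (vault and key names are placeholders); the private key never leaves the vault, only the results of the operations come back:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient
from azure.keyvault.keys.crypto import CryptographyClient, EncryptionAlgorithm

credential = DefaultAzureCredential()
key_client = KeyClient(
    vault_url="https://<vault-name>.vault.azure.net",
    credential=credential,
)
key = key_client.get_key("service-encryption-key")  # placeholder key name

# Decryption is performed by the Key Vault / HSM service; the private key
# is never handed to the caller.
crypto = CryptographyClient(key, credential)
encrypted = crypto.encrypt(EncryptionAlgorithm.rsa_oaep, b"my-connection-string")
decrypted = crypto.decrypt(EncryptionAlgorithm.rsa_oaep, encrypted.ciphertext)
assert decrypted.plaintext == b"my-connection-string"
```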
Reference:
Best practices to use Key Vault

How to secure "Azure Storage Queues" for each tenant?

I'm building a queue messaging system in Azure and what I'm trying to do is an outbound message queue container in Azure Storage Queue that allows my desktop Windows Services to get the latest messages from that queue. The problem I'm facing is that I want to have multiple queues per tenant (each Windows service serves one client) in one storage account. As far as I see, there is no way to restrict the connection string access to each queue. On the other hand, it is not practical for me to create one storage account per tenant. What is the best way to restrict client access to one specific queue with the current security methods available in Azure? I was thinking about using Service Bus Queues, but even that doesn't solve the connection string issue I have in the client application.
I think Service Bus is your answer; it allows a multi-subscriber model with topics, subscriptions, and various filters, etc.
Storage queues are very simplistic and are not the right answer for this particular scenario.
Sorry, I'm on my mobile so I haven't got all the relevant docs to hand.
One option is to use AAD identities and Storage's AAD authentication support (which is currently in public preview).
You would need a service principal in Azure AD for each tenant, and you would add each principal to the Storage Queue Data Reader or Storage Queue Data Contributor role on its respective queue. You can then use the principal's credentials to get an access token that is tenant-specific.
Documentation:
https://azure.microsoft.com/en-us/blog/announcing-the-preview-of-aad-authentication-for-storage/
https://joonasw.net/view/azure-ad-authentication-with-azure-storage-and-managed-service-identity
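A sketch of that per-tenant model, assuming each tenant has its own service principal that was granted a data-plane role (e.g. Storage Queue Data Contributor) only on its own queue; the packages are azure-identity and azure-storage-queue, and the IDs and names below are placeholders:

```python
from azure.identity import ClientSecretCredential
from azure.storage.queue import QueueClient

# Credential for one tenant's service principal; it only works on the
# queue(s) that principal was assigned a role on.
tenant_credential = ClientSecretCredential(
    tenant_id="<aad-tenant-id>",
    client_id="<tenant-sp-client-id>",
    client_secret="<tenant-sp-client-secret>",
)

queue = QueueClient(
    account_url="https://<storage-account>.queue.core.windows.net",
    queue_name="tenant-contoso-outbound",  # placeholder per-tenant queue
    credential=tenant_credential,
)

# The Windows service polls its own queue; requests against other tenants'
# queues in the same storage account are rejected.
for message in queue.receive_messages(max_messages=10):
    print(message.content)
    queue.delete_message(message)
```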

Azure:limit the access of ARM PaaS services to certain storage accounts

I have a security question related to Azure, and I could really do with some guidance on what is possible.
I would like to know if it is possible to restrict which services can be called (i.e. which storage account endpoints can be used to write data to) from PaaS services such as Service Fabric or Web Apps (ASE). For example, if I have a web app that writes to storage and someone maliciously altered the code to write to a third-party storage account on Azure, is this something I could mitigate in advance by declaring that this application (i.e. this web app or this Service Fabric cluster) can only talk to a particular set of storage accounts or a particular database, so that even if the code was changed to talk to another storage account, it wouldn't be able to? In other words, can I explicitly define, as part of an environment, which storage items an application can talk to? Is this something that is possible?
Azure storage accounts have access keys and shared access signatures that are used to authenticate REST calls to read and write data. Your app will be able to perform read/write operations against any Azure storage account for which it has an access key or connection string.
It's not possible to set a firewall rule on an Azure App Service app to prevent it from communicating with certain internet or Azure endpoints. You can set NSG rules with an App Service Environment, but those rules can still only open or close access; they cannot restrict traffic to specific DNS names.
Perhaps you should look for a mitigation to this threat in the way applications are deployed, connection strings are managed and code is deployed:
Use Azure role-based access control (RBAC) to limit access to the resources in Azure, so unauthorized persons cannot modify deployments.
Use a secure way of managing your source code. Remember it is not on the PaaS service, because that only holds the binaries.
Use SAS tokens for application access to storage accounts, not the full access keys. For example, a SAS token could be granted write access only, with no read or list access, to a storage account (see the sketch after this list).
If, as a developer, you don't trust the person managing the application deployment, you could even consider signing your application parameters/connection strings. That only protects against tampering though, not against extraction of the connection string.
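For the SAS point above, a hedged sketch using azure-storage-blob, where the operator keeps the account key and hands the application only a write-only, time-limited SAS for a single container (the account, key, and container names are placeholders):

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import ContainerSasPermissions, generate_container_sas

# Generated out-of-band by the operator / deployment pipeline, not by the app.
sas_token = generate_container_sas(
    account_name="<storage-account>",
    container_name="app-output",           # placeholder container
    account_key="<account-key>",           # never shipped to the application
    permission=ContainerSasPermissions(write=True, create=True),  # no read/list/delete
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)

# The application only ever sees the scoped SAS URL:
container_sas_url = (
    "https://<storage-account>.blob.core.windows.net/app-output?" + sas_token
)
```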

Resources