Azure storage and security

For a new web application that is going to be built in Azure, we are thinking of storing sensitive personal documents (scans of passports, educational transcripts, etc.) in Azure Blob Storage.
Is this a secure enough approach, or is there a better way of doing this? Thanks for your input.

Storage service encryption is enabled by default for all storage accounts. You have the option to use your own encryption keys if you wish.
https://learn.microsoft.com/en-us/azure/storage/common/storage-service-encryption
If you wish to use customer-managed keys held in Azure Key Vault, you can follow the instructions here: https://learn.microsoft.com/en-us/azure/storage/common/storage-service-encryption-customer-managed-keys
However, if you are worried about the data as it is being transferred to Azure Blob Storage, you will need to use client-side encryption as well. Here is the link for that: https://learn.microsoft.com/en-us/azure/storage/common/storage-client-side-encryption?toc=%2fazure%2fstorage%2fqueues%2ftoc.json
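To make the client-side option concrete, here is a minimal sketch that encrypts a document locally before upload. It uses the cryptography package's Fernet recipe rather than the SDK's built-in envelope encryption, and the connection string and file names are placeholders, not values from the question:

```python
# Sketch: encrypt locally, then upload only the ciphertext to Blob Storage.
# Requires: pip install azure-storage-blob cryptography
from azure.storage.blob import BlobClient
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # store this key in Key Vault, never next to the data
with open("passport-scan.pdf", "rb") as f:
    ciphertext = Fernet(key).encrypt(f.read())

blob = BlobClient.from_connection_string(
    conn_str="<storage-connection-string>",  # placeholder
    container_name="documents",
    blob_name="passport-scan.pdf.enc",
)
blob.upload_blob(ciphertext, overwrite=True)
```

With this in place, a compromise of the storage account alone does not expose readable documents; the attacker would also need the key.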

Like many things in Azure, it can be made secure, but that takes some effort.
1) All storage accounts are now encrypted at rest with AES-256 using Microsoft-managed keys. You can also bring your own key, as mentioned above.
2) Employ client-side encryption; that way, if the account is compromised, the attacker still can't read the data without the key to decrypt it. This does affect performance, but the overhead is acceptable for most scenarios.
3) Use the storage account's firewall to permit only the IP addresses that require access to the storage account.
Side note: If you're accessing Storage from an App Service, the outbound IP addresses will not change unless you scale the App Service Plan up or down. Auto-scaling your app service horizontally does not change the outbound IP addresses.
4) Integrate the storage account with Azure Key Vault to automatically rotate the keys and generate SAS tokens, as documented here. I wish this could be done via the portal, as most people aren't aware that this exists.
5) Do not hand out the storage account keys; generate short-lived SAS tokens instead (see the sketch after this list). Key Vault integration can help with this.
6) Enable storage diagnostics metrics and logging. While not a defensive measure by itself, it can be helpful after the fact.
7) Enable soft delete; it can reduce the impact of accidental or malicious deletion if a breach does occur.
8) Enable the 'secure transfer required' setting, to permit traffic only over HTTPS.
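To make item 5 concrete, here is a sketch of issuing a short-lived, read-only ad-hoc SAS with the Python SDK; the account name, key, container and blob names are placeholders:

```python
# Sketch: issue a read-only SAS valid for one hour instead of sharing account keys.
from datetime import datetime, timedelta, timezone
from azure.storage.blob import generate_blob_sas, BlobSasPermissions

sas = generate_blob_sas(
    account_name="<account>",                  # placeholder
    container_name="documents",
    blob_name="passport-scan.pdf.enc",
    account_key="<account-key>",               # better: fetch from Key Vault at runtime
    permission=BlobSasPermissions(read=True),  # read only, nothing else
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)
url = f"https://<account>.blob.core.windows.net/documents/passport-scan.pdf.enc?{sas}"
```

Because the token expires on its own, a leaked URL is only useful for a short window, unlike a leaked account key.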

Related

Azure Key Vault best practices

We are developing an Azure multi-tenant web service portal that calls external API services. Each called web service may have OAuth parameters such as endpoint, client ID, secret, etc. Currently we have two flavors of working code: one uses SQL to store these parameters, and the other uses a JSON config file to maintain runtime parameters. I would like to offer an Azure Key Vault solution, but it seems unwise to have both a Client ID and Client Secret in the same Key Vault.
Additionally, we have many subscribers in our multi-tenant model, and each may have a specific config (for example: Solr collection name, SQL DB connection string, etc.). I am wondering about commingling these runtime parameters versus allowing each customer to have their own vault, which of course requires that the customer share access with us as the SaaS vendor. We want to offer best-practice security to our subscribers, many of which are fintechs. Any advice is appreciated.
"but it seems unwise to have both a Client ID and Client Secret in the same Key Vault"
Why would you store these together in the same database or JSON file instead? That is far less secure.
Have you looked at Azure API Management? It is by far the best way to amalgamate services.
If you are too far down the road, use Key Vault. Use a managed identity (MSI, Managed Service Identity) to connect from your app service / function app, and limit the vault's access policy to only the key and secret permissions the app actually needs (e.g., Get and List rather than full read/write). Limit access via the firewall. Make sure all diagnostics are logged.
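For instance, with a managed identity assigned to the app, reading a per-tenant secret requires no stored credentials at all. A minimal sketch with the Python SDK; the vault URL and secret name are placeholders:

```python
# Sketch: read a secret from Key Vault using the app's managed identity.
# Requires: pip install azure-identity azure-keyvault-secrets
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://<vault-name>.vault.azure.net",  # placeholder
    credential=DefaultAzureCredential(),  # resolves to the managed identity when deployed
)
client_secret = client.get_secret("tenant42-api-client-secret").value  # hypothetical name
```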
If you need a per-client model rather than a multi-tenant one, then deploy separate instances of the portal or API Management for each client. Per-client is more expensive and trickier to maintain, but far more secure, because you can enforce physical partitioning on a number of fronts.

Securing Resource Connection Data/Strings For Use by Microservices in Azure

Various microservices will need access to different Azure resources, each of which has its own connection string or authentication key requirements.
I'm thinking that Azure Key Vault is the best way to store the connection information for each resource, with the microservices authenticating themselves in order to gain access to the connection information they need.
Is this a best-practice approach?
Yes, I think so. You can secure access to your key vaults by allowing only authorized applications and users, and since you need access to various Azure resources, Azure Key Vault is a very convenient fit.
Besides, Azure Key Vault uses Hardware Security Modules (HSMs) by Thales. What is special about HSMs is that they never hand out keys: you create or import a key into the HSM, and from then on you send data to the HSM and it executes the cryptographic operations on that data, e.g. encrypting, decrypting, hashing, etc. Those hardware devices are really expensive; with Azure Key Vault you get this protection for a small price. That's one benefit of using Azure Key Vault.
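The SDK makes this model visible: you send the data to Key Vault and get the result back, while the private key stays in the HSM. A sketch, assuming an RSA key named app-key already exists in the vault (the vault URL and key name are placeholders):

```python
# Sketch: the plaintext travels to Key Vault; the private key never leaves the HSM.
# Requires: pip install azure-identity azure-keyvault-keys
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient
from azure.keyvault.keys.crypto import CryptographyClient, EncryptionAlgorithm

credential = DefaultAzureCredential()
key = KeyClient("https://<vault-name>.vault.azure.net", credential).get_key("app-key")
crypto = CryptographyClient(key, credential)

result = crypto.encrypt(EncryptionAlgorithm.rsa_oaep, b"connection-string-to-protect")
ciphertext = result.ciphertext  # decrypt() later sends the ciphertext back to the HSM
```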
Reference:
Best practices to use Key Vault

Azure: limit the access of ARM PaaS services to certain storage accounts

I have a security question related to Azure, and I could really do with some guidance on what is possible.
I would like to know if it is possible to restrict which services can be called (i.e., which storage account endpoints can be used to write data) from PaaS services such as Service Fabric or Web Apps (ASE). For example, if I have a web app that writes to storage and someone maliciously altered the code to write to a third-party storage account on Azure, is this something I could mitigate in advance by declaring that this application (this web app or this SF cluster) can only talk to a particular set of storage accounts or a particular database, so that even if the code were changed to talk to another storage account, it wouldn't be able to? In other words, can I explicitly define, as part of an environment, which storage items an application may talk to? Is this something that is possible?
Azure storage accounts have access keys and shared access signatures that are used to authenticate REST calls to read / write data to them. Your app will be able to perform read / write operations against any storage account for which it holds an access key or connection string.
It's not possible to set any kind of firewall rule on an Azure App Service app to prevent it from communicating with certain internet or Azure endpoints. You can set NSG firewall rules with App Service Environment, but even then you can only open or close access; you can't restrict it to certain DNS names or IP addresses.
Perhaps you should look for a mitigation to this threat in the way applications are deployed, connection strings are managed and code is deployed:
Use Azure role-based access control to limit access to the resources in Azure, so unauthorized persons cannot modify deployments.
Use a secure way of managing your source code. Remember that it is not on the PaaS service, because that only holds the binaries.
Use SAS tokens for application access to storage accounts, not the full access keys. For example, a SAS token could be granted write access but not read or list access to a storage account.
If, as a developer, you don't trust the person managing the application deployment, you could even consider signing your application parameters/connection strings. That only protects against tampering, though, not against extraction of the connection string.
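The signing idea in the last point can be as simple as an HMAC over the configuration value, verified at startup. A minimal sketch with a hypothetical signing key; note it only detects tampering, the value itself remains readable:

```python
# Sketch: detect tampering of a connection string with an HMAC.
# This does NOT hide the value; it only proves it wasn't modified.
import hashlib
import hmac

SIGNING_KEY = b"<signing-key>"  # hypothetical; keep it out of the config being signed

def sign(value: str) -> str:
    return hmac.new(SIGNING_KEY, value.encode(), hashlib.sha256).hexdigest()

def verify(value: str, signature: str) -> bool:
    # compare_digest avoids timing side channels when checking the signature
    return hmac.compare_digest(sign(value), signature)
```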

Azure blob storage and security best practices

When exploring Azure Storage I've noticed that access to a storage container is done through a shared key. There is concern where I work that if a developer uses this key for an application they're building and then leaves the company, they could still log in to the storage account and delete anything they want. The workaround for this would be to regenerate the secondary key for the account, but then we'd have to change the keys in every application that uses them.
Is it best practice to have an entire storage account per application per environment (dev, test, staging, production)?
Is it possible to secure the storage account behind a virtual network?
Should we use signatures on our containers on a per application basis?
Has anybody had similar experiences and found a good pattern for dealing with this?
I have a slightly different scenario (external applications), but the problem is the same: data access security.
I use Shared Access Signatures (SAS) to grant access to a container.
In your scenario you can create a stored access policy per application on a container and generate a SAS from that policy with a long expiration time; you can revoke it at any point by removing the stored access policy from the container. So in your scenario you can revoke the current SAS and generate a new one when your developer leaves. You can't generate a single SAS for multiple containers, so if your application uses multiple containers you would have to generate multiple SAS tokens.
Usage, from the developer's perspective, stays the same: you can use the SAS token to create a CloudStorageAccount or CloudBlobClient, so it's almost like a regular access key. A sketch of the policy-based approach follows below.
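Here is a sketch of that per-application policy approach using the current Python SDK (the CloudStorageAccount/CloudBlobClient types above are from the older .NET library); all account and container names are placeholders:

```python
# Sketch: one stored access policy per application; deleting the policy
# immediately revokes every SAS that was issued from it.
from datetime import datetime, timedelta, timezone
from azure.storage.blob import (
    AccessPolicy,
    ContainerClient,
    ContainerSasPermissions,
    generate_container_sas,
)

container = ContainerClient.from_connection_string("<connection-string>", "app1-data")
policy = AccessPolicy(
    permission=ContainerSasPermissions(read=True, write=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=365),
)
# Note: a container allows at most 5 stored access policies.
container.set_container_access_policy(signed_identifiers={"app1-policy": policy})

sas = generate_container_sas(
    account_name="<account>",
    container_name="app1-data",
    account_key="<account-key>",
    policy_id="app1-policy",  # permissions and expiry come from the stored policy
)
# To revoke: set_container_access_policy(signed_identifiers={}) removes the policy.
```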
Longer term, I would probably think about creating one internal service (an internal API) responsible for generating SAS tokens and renewing them. That way you can have a completely automated system, with the access keys disclosed only to this one service. You can then restrict access to this service with virtual networks, certificates, authentication, etc. And if something goes wrong (the developer who wrote that service leaves :-) ), you can regenerate the access keys and change them, but this time in only one place.
A few things to keep in mind:
A storage account per application (and/or environment) is a good strategy, but you have to be aware of the limit: a maximum of 100 storage accounts per subscription.
There is no option to limit access to a storage account with a virtual network.
You can have a maximum of 5 stored access policies on a single container.
I won't get into subjective / opinion answers, but from an objective perspective: if a developer has a storage account key, then they have full access to the storage account. And if they left the company and kept a copy of the key? The only way to lock them out is to regenerate the key.
You might assume that separating apps across different storage accounts helps. However, keep this in mind: if a developer had access to the subscription, they had access to the keys for every single storage account in that subscription.
When thinking about key regeneration, consider the total surface area of apps that know the key itself. If storage manipulation is solely a server-side operation, the impact of changing a key is minimal (a small app change in each deployment, along with updating any storage-browsing tools you use). If you embedded the key in a desktop/mobile application for direct storage access, you have a bigger problem, since you have to push out updated clients; but then you already had a security problem anyway.

Does Windows Azure have the equivalent of AWS Identity Access Management?

So I have a mobile app that uses AWS's IAM infrastructure that effectively allows me to provide temporary access tokens to anonymous mobile devices, so that they can run queries against AWS services directly from the mobile device.
Does anyone know if Windows Azure has a drop-in replacement for this sort of thing? I've read about Windows Azure Access Control, but all the examples seem to focus on allowing authentication via the likes of Facebook, Twitter or Windows Live. In my case, I don't want the mobile user to have to "log in" anywhere; I just want them to be able to access Azure services such as Table storage without having to go via my server.
Thanks!
You do have the ability to create Shared Access Signatures for all three Windows Azure Storage services (blobs, queues and tables) as well as for Windows Azure Service Bus brokered messages (queues, topics & subscriptions). These SAS URLs are temporary, and you can create them ad hoc with expiration times. After the time expires, the device has to request a new one, likely from your server. This reduces the load, as the devices aren't coming back all the time, but you do still have to run something that generates these SAS URIs for the devices. You can generate a SAS manually against the REST API directly, or you can use one of the SDKs to generate them for you (they also hit the REST API under the hood).
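That token-vending service can be quite small. A hedged sketch with Flask (a blob SAS is used for illustration; the same pattern applies to tables and queues, and every name here is a placeholder):

```python
# Sketch: a tiny endpoint that vends short-lived SAS URLs to mobile devices.
# Requires: pip install flask azure-storage-blob
from datetime import datetime, timedelta, timezone

from azure.storage.blob import BlobSasPermissions, generate_blob_sas
from flask import Flask

app = Flask(__name__)

@app.route("/sas/<blob_name>")
def vend_sas(blob_name):
    # In production, authenticate and rate-limit the device before issuing anything.
    sas = generate_blob_sas(
        account_name="<account>",
        container_name="device-data",
        blob_name=blob_name,
        account_key="<account-key>",
        permission=BlobSasPermissions(write=True),  # write-only, short-lived
        expiry=datetime.now(timezone.utc) + timedelta(minutes=15),
    )
    return {"url": f"https://<account>.blob.core.windows.net/device-data/{blob_name}?{sas}"}
```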
Note that when you create a SAS you have the option of doing so with a stored access policy, or ad hoc. A policy allows you to revoke a SAS at a later time, but you can only have so many of them defined at a time (likely too big a restriction for a mobile scenario if you issue one per device). The ad-hoc approach gives you pretty much as many as you need (I think), but you don't have the ability to revoke one; it just has to expire.
Another option is to look at Windows Azure Mobile Services. This service runs on servers managed by Microsoft, and you can use it to get at just about anything you want; look at the "Custom API" feature in particular. Also, make sure you understand the pricing model of Mobile Services (really, that goes for whichever option you choose).
Nowadays the equivalent is called managed identities in Azure.
