Azure blob storage and security best practices

When exploring Azure Storage I've noticed that access to a storage account is done through a shared key. There is concern where I work that if a developer uses this key for an application they're building and then leaves the company, they could still log in to the storage account and delete anything they want. The workaround would be to regenerate the account keys, but then we'd have to change the keys in every application that uses them.
Is it best practice to have an entire storage account per application per environment (dev, test, staging, production)?
Is it possible to secure the storage account behind a virtual network?
Should we use signatures on our containers on a per application basis?
Has anybody had similar experiences and found a good pattern for dealing with this?

I have a slightly different scenario (external applications), but the problem is the same: data access security.
I use Shared Access Signatures (SAS) to grant access to a container.
In your scenario you can create a stored access policy per application on a container and generate a SAS from that policy with a long expiration time; you can revoke it at any point by removing the stored access policy from the container. So in your scenario you can revoke the current SAS and generate a new one when your developer leaves. You can't generate a single SAS for multiple containers, so if your application uses multiple containers you would have to generate multiple SAS tokens.
Usage, from the developer's perspective, stays the same:
You can use the SAS token to create a CloudStorageAccount or CloudBlobClient, so it's almost like using a regular access key.
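
A minimal sketch of that flow with the classic WindowsAzure.Storage .NET SDK; the container name, policy name, and account URL below are placeholders:

    using System;
    using System.Threading.Tasks;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Auth;
    using Microsoft.WindowsAzure.Storage.Blob;

    class SasIssuer
    {
        // Creates (or replaces) a stored access policy on the container and
        // returns a SAS bound to it. Revoking later = deleting the policy.
        public static async Task<string> IssueSasAsync(string connectionString)
        {
            CloudBlobContainer container = CloudStorageAccount.Parse(connectionString)
                .CreateCloudBlobClient()
                .GetContainerReference("app-data");              // placeholder container

            BlobContainerPermissions permissions = await container.GetPermissionsAsync();
            permissions.SharedAccessPolicies["app-a"] = new SharedAccessBlobPolicy
            {
                Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.Write,
                SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddYears(1)  // long-lived but revocable
            };
            await container.SetPermissionsAsync(permissions);

            // The SAS only references the policy by name; the expiry and
            // permissions live server-side in the stored access policy.
            return container.GetSharedAccessSignature(new SharedAccessBlobPolicy(), "app-a");
        }
    }

On the developer's side the SAS token then stands in for the account key, along the lines of:

    var credentials = new StorageCredentials(sasToken);
    var client = new CloudBlobClient(new Uri("https://myaccount.blob.core.windows.net"), credentials);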
Longer term, I would probably think about creating one internal service (an internal API) responsible for generating SAS tokens and renewing them. That way you get a completely automated system, with the access keys disclosed only to this one service. You can then restrict access to this service with a virtual network, certificates, authentication, etc. And if something goes wrong (the developer who wrote that service leaves :-) ) you can regenerate the access keys and change them, but this time only in one place.
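
Such a service could be as small as one hypothetical Web API endpoint reusing the SasIssuer sketched above; the route, authorization attribute, and the environment-variable lookup are all placeholders:

    using System;
    using System.Threading.Tasks;
    using System.Web.Http;

    [Authorize]  // lock this down with your real auth scheme
    public class SasController : ApiController
    {
        [HttpGet]
        [Route("api/sas/{application}")]
        public async Task<string> Get(string application)
        {
            // A real implementation would map 'application' to its own
            // container and stored access policy before issuing the SAS.
            string connectionString =
                Environment.GetEnvironmentVariable("STORAGE_CONNECTION_STRING");
            return await SasIssuer.IssueSasAsync(connectionString);
        }
    }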
A few things:
Storage account per application (and/or environment) is a good strategy, but be aware of the limit: a maximum of 100 storage accounts per subscription.
There is no option to limit access to a storage account with a virtual network.
You can have a maximum of 5 stored access policies on a single container.

I won't get into subjective / opinion answers, but from an objective perspective: if a developer has a storage account key, then they have full access to the storage account. And if they left the company and kept a copy of the key? The only way to lock them out is to regenerate the key.
You might assume that separating apps with different storage accounts helps. However, just keep this in mind: if a developer has access to a subscription, they have access to the keys for every single storage account in that subscription.
When thinking about key regeneration, consider the total surface area of apps that know the key itself. If storage manipulation is solely a server-side operation, the impact of changing a key is minimal (a small app change in each deployment, along with updating any storage browsing tools you use). If you embedded the key in a desktop or mobile application for direct storage access, you have a bigger problem, since you have to push out updated clients, but then you already have a security problem anyway.


Azure storage and security

For a new web application that is going to be built in Azure, we are thinking of storing sensitive personal documents (scans of passports, educational transcripts, etc.) in Azure Blob storage.
Is this a secure enough approach or is there a better way of doing this? Thanks for your input.
Storage service encryption is enabled by default for all storage accounts. You have the option to use your own encryption keys if you wish.
https://learn.microsoft.com/en-us/azure/storage/common/storage-service-encryption
If you wish to create custom keys managed by Azure Key Vault, you can follow instructions here: https://learn.microsoft.com/en-us/azure/storage/common/storage-service-encryption-customer-managed-keys
However, if you are worried about the data as it is transferred to Azure Blob storage, you will need to use client-side encryption as well. Here is the link for that: https://learn.microsoft.com/en-us/azure/storage/common/storage-client-side-encryption?toc=%2fazure%2fstorage%2fqueues%2ftoc.json
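
A hedged sketch of what client-side encryption looks like with the classic .NET SDK plus the Key Vault extensions package; the key id, container, and file names are placeholders, and in practice the key would be created and kept in Azure Key Vault rather than generated inline:

    using System.IO;
    using System.Security.Cryptography;
    using Microsoft.Azure.KeyVault;                  // SymmetricKey (Key Vault extensions package)
    using Microsoft.Azure.KeyVault.Core;             // IKey
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    class ClientSideEncryptionSketch
    {
        static void Upload(string connectionString)
        {
            // Placeholder key material: a throwaway 256-bit key.
            byte[] keyBytes = new byte[32];
            using (var rng = RandomNumberGenerator.Create())
                rng.GetBytes(keyBytes);
            IKey key = new SymmetricKey("my-key-id", keyBytes);

            CloudBlockBlob blob = CloudStorageAccount.Parse(connectionString)
                .CreateCloudBlobClient()
                .GetContainerReference("documents")           // placeholder
                .GetBlockBlobReference("passport-scan.pdf");  // placeholder

            // The encryption policy makes the SDK envelope-encrypt the
            // content before it ever leaves the client.
            var options = new BlobRequestOptions
            {
                EncryptionPolicy = new BlobEncryptionPolicy(key, null)
            };

            using (FileStream stream = File.OpenRead("passport-scan.pdf"))
            {
                blob.UploadFromStream(stream, null, options, null);
            }
        }
    }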
Like many things in Azure, it can be secure, but takes effort to do so.
1) As of last year, all storage accounts are encrypted at rest with AES-256 using Microsoft-managed keys. You can also bring your own key, as mentioned here.
2) Employ client-side encryption - that way, if the account is compromised, the attacker can't read the data; they would also need the key to decrypt it. This does influence performance, but is acceptable for most scenarios.
3) Use the storage account's firewall to permit only the addresses that require access to the storage account.
Side note: If you're accessing Storage from an App Service, the outbound IP addresses will not change unless you scale the App Service Plan up or down. Auto-scaling your app service horizontally does not change the outbound IP addresses.
4) Integrate the storage account with Azure KeyVault to automatically rotate the keys and generate SAS tokens, as documented here. I wish this could be done via the portal, as most people aren't aware that this exists.
5) Do not use the storage account keys - generate and hand out short-lived SAS tokens (see the sketch after this list). KeyVault integration can help with this.
6) Enable storage diagnostics metrics and logging. While not a defensive measure by itself, it can be helpful after the fact.
7) Enable soft delete; this may reduce the impact of certain attacks if a breach were to occur.
8) Enable the 'secure transfer required' setting, to permit traffic only over HTTPS.
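
To illustrate point 5, a minimal sketch with the classic .NET SDK (the container and blob names are placeholders). An ad-hoc SAS like this cannot be revoked, but the short expiry is what limits the exposure:

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    class ShortLivedTokens
    {
        public static string ShortLivedReadUrl(string connectionString)
        {
            CloudBlockBlob blob = CloudStorageAccount.Parse(connectionString)
                .CreateCloudBlobClient()
                .GetContainerReference("reports")       // placeholder container
                .GetBlockBlobReference("report.pdf");   // placeholder blob

            // Read-only, expires in 15 minutes; the parameters live in the
            // token itself, so no stored access policy slot is consumed.
            string sasToken = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
            {
                Permissions = SharedAccessBlobPermissions.Read,
                SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(15)
            });
            return blob.Uri + sasToken;  // hand this URL to the client
        }
    }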

How many access policies can I create on the same Azure container?

I couldn't find the answer by searching. I thought I would be able to create many stored access policies on one container (at least thousands), but in my tests my program can only add up to 5 policies on one container.
Then I tried Microsoft Azure Storage Explorer, and it has the same restriction: it can also add only up to 5. But I cannot find any documentation about this. Is there any way to remove this restriction? Thanks.
The access policy limit is indeed 5 stored access policies per container, file share, table, or queue, and there's no way to alter this. The limit is documented in Azure's Storage scalability and performance targets document, here.
Note: you can generate Shared Access Signatures independently of stored access policies. These are just more limited (e.g. you cannot revoke such a SAS; it stays active until it expires, unless you delete the blob).
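
As a sketch of that difference (policy and container names are placeholders), deleting a stored access policy immediately invalidates every SAS that was issued against it:

    using System.Threading.Tasks;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    class SasRevocation
    {
        // Revokes every SAS issued from the named stored access policy.
        public static async Task RevokePolicyAsync(string connectionString)
        {
            CloudBlobContainer container = CloudStorageAccount.Parse(connectionString)
                .CreateCloudBlobClient()
                .GetContainerReference("app-data");            // placeholder

            BlobContainerPermissions permissions = await container.GetPermissionsAsync();
            permissions.SharedAccessPolicies.Remove("app-a");  // placeholder policy name
            await container.SetPermissionsAsync(permissions);
        }
    }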

Azure: limit the access of ARM PaaS services to certain storage accounts

I have a security question related to Azure that I could really do with some guidance on the art of what is possible.
I would like to know if it is possible to restrict which services can be called (i.e. which storage account endpoints can be used to write data) from PaaS services such as Service Fabric or web apps (ASE). For example, if I have a web app that writes to storage and someone maliciously altered the code to write to a third-party storage account on Azure, is this something I could mitigate in advance by saying that this application (i.e. this web app or this SF cluster) can only talk to a particular set of storage accounts or a particular database? So that even if the code was changed to talk to another storage account, it wouldn't be able to. In other words, can I explicitly define, as part of an environment, which storage items an application can talk to? Is this something that is possible?
Azure Storage accounts have access keys and shared access signatures that are used to authenticate REST calls that read and write data. Your app will be able to perform read/write operations against any Azure Storage account for which it holds an access key or connection string.
It's not possible to set any kind of firewall rule on an Azure App Service app to prevent it from communicating with certain internet or Azure endpoints. You can set NSG firewall rules with an App Service Environment, but you can still only open or close access, not restrict it to certain DNS names or IP addresses.
Perhaps you should look for a mitigation to this threat in the way applications are deployed, connection strings are managed and code is deployed:
Use Azure Role Based Access control to limit access to the resource in Azure, so unauthorized persons cannot modify deployments
Use a secure way of managing your source code. Remember it is not on the PaaS service, because that only holds the binaries.
Use SAS tokens for application access to storage accounts, not the full access keys. For example, a SAS could be given write access but not read or list access to a storage account (see the sketch after this list).
If, as a developer, you don't trust the person managing the application deployment, you could even consider signing your application parameters/connection strings. That only protects against tampering though, not against extraction of the connection string.
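
A sketch of that write-only idea with the classic .NET SDK (names are placeholders): the holder of this token can upload data but cannot read or enumerate what is already in the container:

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    class WriteOnlyTokens
    {
        public static string WriteOnlySas(string connectionString)
        {
            CloudBlobContainer container = CloudStorageAccount.Parse(connectionString)
                .CreateCloudBlobClient()
                .GetContainerReference("uploads");   // placeholder

            // Write permission only: no Read, no List.
            return container.GetSharedAccessSignature(new SharedAccessBlobPolicy
            {
                Permissions = SharedAccessBlobPermissions.Write,
                SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1)
            });
        }
    }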

How to share Azure Mobile Service tokens across different web app instances

I am planning to have multiple Azure Mobile Service instances, so the first requirement I have is to share the access token of an authenticated user across different app instances. I found this article https://cgillum.tech/2016/03/07/app-service-token-store/ which states that right now we cannot share the tokens, as they are stored locally on the machine, and that placing them in blob storage is not recommended for production apps. What possible solutions do I have at this time?
I have read the blog you mentioned about the App Service Token Store. As it says about where the tokens live:
Internally, all these tokens are stored in your app’s local file storage under D:/home/data/.auth/tokens. The tokens themselves are all encrypted in user-specific .json files using app-specific encryption keys and cryptographically signed as per best practice.
You mention that the article states we cannot share the tokens because they are stored locally on the machine. However, as Azure-runtime-environment states about the persisted files that an Azure Web App can deal with:
They are rooted in d:\home, which can also be found using the %HOME% environment variable.
These files are persistent, meaning that you can rely on them staying there until you do something to change them. Also, they are shared between all instances of your site (when you scale it up to multiple instances). Internally, the way this works is that they are stored in Azure Storage instead of living on the local file system.
Moreover, Azure App Service enables ARR affinity to keep a client's subsequent requests talking to the same instance. You could disable the session affinity cookie, and requests would then be distributed across all the instances. For more details, you could refer to this blog.
Additionally, I have tried disabling ARR affinity and scaling my mobile service to multiple instances, and I could always browse https://[my-website].azurewebsites.net/.auth/me to retrieve information about the current logged-in user.
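
The same check can be made from code. A hypothetical sketch (the host name is a placeholder, and authToken is assumed to be the session token the mobile client received when signing in):

    using System.Net.Http;
    using System.Threading.Tasks;

    class TokenStoreProbe
    {
        // Any instance behind the load balancer should answer this,
        // since the token store lives in shared persisted storage.
        public static async Task<string> GetCurrentUserAsync(string authToken)
        {
            using (var http = new HttpClient())
            {
                // X-ZUMO-AUTH carries the App Service authentication session token.
                http.DefaultRequestHeaders.Add("X-ZUMO-AUTH", authToken);
                return await http.GetStringAsync(
                    "https://my-website.azurewebsites.net/.auth/me");
            }
        }
    }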
Per my understanding, you could handle authentication/authorization yourself with auth middleware in your app, but that requires more work. Since the platform takes care of it for you, I assume you can leverage Easy Auth and the Token Store and scale your mobile service to multiple instances without worrying about it.

Practical Limit on number of Azure Shared Access Signatures?

I'm looking to avoid having to use a handler/module in my web role to protect images served from block blob storage on Azure. Shared Access Signatures (SAS) seem to be the way to go.
My question: is there a practical limit on the number of SAS I can issue? Could I issue one every minute, say? Is there a performance issue (time to issue a SAS) that would be the limiting factor?
I had initially thought that one SAS per user session would protect me better than a single SAS, but since there is nothing tying a SAS to a user, that won't help...
Shared Access Signatures have an optional component called a "container-level access policy." If you use a container-level access policy, it actually gets stored in blob storage and has a limit of five per container.
If you don't use a container-level access policy, you can make as many Shared Access Signatures as you want, and the server isn't even involved. (The signature is generated locally, meaning in your web role instance.) Generating the signature does involve some crypto, so you may eventually peg the CPU, but I suspect it's "fast enough."
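
A rough way to see this for yourself with the classic .NET SDK (names are placeholders): the loop below issues ten thousand signatures without a single network call, since each one is just an HMAC over the string-to-sign computed with the account key:

    using System;
    using System.Diagnostics;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    class SasThroughput
    {
        public static void Measure(string connectionString)
        {
            CloudBlockBlob blob = CloudStorageAccount.Parse(connectionString)
                .CreateCloudBlobClient()
                .GetContainerReference("images")       // placeholder
                .GetBlockBlobReference("photo.jpg");   // placeholder

            var timer = Stopwatch.StartNew();
            for (int i = 0; i < 10000; i++)
            {
                // Purely local signing; no request hits the storage service.
                blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
                {
                    Permissions = SharedAccessBlobPermissions.Read,
                    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(1)
                });
            }
            timer.Stop();
            Console.WriteLine($"10,000 signatures in {timer.ElapsedMilliseconds} ms");
        }
    }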
