I'm looking to avoid having to use a handler/module in my web role to protect images being served up from block blob storage on Azure. Shared Access Signatures (SAS) seem to be the way to go.
My question: is there a practical limit on the number of SAS I can issue? Could I issue one every minute, say? Is there a performance cost (the time to issue a SAS) that would be the limiting factor?
I had initially thought that one SAS per user session would protect me better than a single SAS, but since there is nothing tying a SAS to a user, that won't help...
Shared Access Signatures have an optional component called a "container-level access policy" (also known as a stored access policy). If you use a container-level access policy, it actually gets stored in blob storage, and there is a limit of five per container.
If you don't use a container-level access policy, you can make as many Shared Access Signatures as you want, and the server isn't even involved. (The signature is generated locally, meaning in your web role instance.) Generating the signature does involve some crypto, so you may eventually peg the CPU, but I suspect it's "fast enough."
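To make "generated locally" concrete, here's a minimal sketch using the classic .NET storage SDK (the container and blob names and the connection string are placeholders). GetSharedAccessSignature is just an HMAC-SHA256 computation over the SAS parameters using the account key; it makes no call to the storage service:

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    string connectionString = "UseDevelopmentStorage=true";  // placeholder
    CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
    CloudBlockBlob blob = account.CreateCloudBlobClient()
        .GetContainerReference("images")
        .GetBlockBlobReference("photo.jpg");

    // Signing happens locally in the role instance: no network round trip.
    string sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Read,
        SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(5)
    });

So issuing one per minute, or even one per request, is unlikely to be a bottleneck.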
Related
Is there a limit on the number of Keys, Certificates, etc. in a Key Vault?
There is no documented limit for the number of resources in a Key Vault - only the number of operations per second.
However, the more resources you have in a vault, the longer it will take to enumerate them all. If you never need to enumerate them, this may not affect performance, but that behavior isn't documented either.
But if you're using a configuration library - common, for example, with ASP.NET applications - it will often fetch all secrets to find the ones it needs. With a limit on the number of operations, this can either fail (too many calls) or take a long while if a retry policy is used, which is built in and enabled by default in the Azure.Security.KeyVault.* packages. We recommend developers use those instead of Microsoft.Azure.KeyVault, which will only get critical fixes going forward.
A common workaround is to stuff multiple secrets into something like a JSON blob, though make sure you want anyone with access to that blob to have access to all the secrets contained therein.
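As a sketch of that workaround (the vault URL and secret name are hypothetical), the application makes a single GetSecret call and parses the individual values out of the JSON itself:

    using System;
    using System.Text.Json;
    using Azure.Identity;
    using Azure.Security.KeyVault.Secrets;

    // One secret holds a JSON bundle such as {"DbPassword":"...","ApiKey":"..."}.
    var client = new SecretClient(
        new Uri("https://my-vault.vault.azure.net"),
        new DefaultAzureCredential());
    KeyVaultSecret bundle = client.GetSecret("app-config");

    using JsonDocument doc = JsonDocument.Parse(bundle.Value);
    string dbPassword = doc.RootElement.GetProperty("DbPassword").GetString();

One call instead of one per value keeps you well under the throttling limits, at the cost of coarser access control.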
I couldn't find the answer by searching. I thought I should be able to create many stored access policies on one container (at least thousands), but in my tests my program can only add up to 5 policies on one container.
Then I tried Microsoft Azure Storage Explorer; it has the same restriction and can also only add up to 5. But I cannot find any documentation about this. Is there any way to remove this restriction? Thanks.
The limit is indeed 5 stored access policies per container, file share, table, or queue, and there's no way to alter this. The limit is documented in Azure's Storage scalability and performance targets documentation.
Note: You are able to generate Shared Access Signatures independently of the stored access policies. These are just more limited (e.g. you cannot revoke a SAS; it's active until the time expires, unless you delete the blob).
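For reference, this is roughly how stored access policies are managed with the classic .NET SDK (the container and policy names and the connection string are placeholders). The policies are persisted on the container itself, which is why attempting to save a sixth is rejected by the service, not by the client library:

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    string connectionString = "UseDevelopmentStorage=true";  // placeholder
    CloudBlobContainer container = CloudStorageAccount.Parse(connectionString)
        .CreateCloudBlobClient()
        .GetContainerReference("mycontainer");

    // Policies are part of the container's permissions: read, modify, write back.
    BlobContainerPermissions permissions = container.GetPermissions();
    permissions.SharedAccessPolicies.Add("app1-read", new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Read,
        SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddYears(1)
    });
    container.SetPermissions(permissions);  // fails once 5 policies already exist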
I am planning to have multiple Azure Mobile Service instances, so the first requirement I have is to share the access token of an authenticated user across the different app instances. I found this article https://cgillum.tech/2016/03/07/app-service-token-store/ which states that right now we cannot share the tokens, as they are stored locally on the machine, and that placing them in blob storage is not recommended for production apps. What possible solutions do I have at this time?
I have read the blog you mentioned about the App Service Token Store. Regarding where the tokens live, it says:
Internally, all these tokens are stored in your app’s local file storage under D:/home/data/.auth/tokens. The tokens themselves are all encrypted in user-specific .json files using app-specific encryption keys and cryptographically signed as per best practice.
You found that the article states that right now we cannot share the tokens, as they are stored locally on the machine. However, as the Azure runtime environment documentation states about the persisted files that an Azure Web App can deal with:
They are rooted in d:\home, which can also be found using the %HOME% environment variable.
These files are persistent, meaning that you can rely on them staying there until you do something to change them. Also, they are shared between all instances of your site (when you scale it up to multiple instances). Internally, the way this works is that they are stored in Azure Storage instead of living on the local file system.
Moreover, Azure App Service enables ARR Affinity by default to keep a client's subsequent requests talking to the same instance. You could disable the session affinity cookie, and requests would then be distributed across all the instances.
Additionally, I tried disabling ARR Affinity and scaling my mobile service to multiple instances, and I could still always browse https://[my-website].azurewebsites.net/.auth/me to retrieve information about the currently logged-in user.
Per my understanding, you could handle authentication/authorization yourself with auth middleware in your app, but that requires more work. Since the platform takes care of it for you, I assume you can leverage Easy Auth and the Token Store and scale your mobile service to multiple instances without worrying about anything.
When exploring Azure Storage I've noticed that access to a storage container is done through a shared key. There is concern where I work that if a developer uses this key in an application they're building and then leaves the company, they could still log in to the storage account and delete anything they want. The workaround would be to regenerate the secondary key for the account, but then we'd have to change the key in every application that uses it.
Is it best practice to have an entire storage account per application per environment (dev, test, staging, production)?
Is it possible to secure the storage account behind a virtual network?
Should we use signatures on our containers on a per application basis?
Has anybody had similar experiences and found a good pattern for dealing with this?
I have a slightly different scenario (external applications), but the problem is the same: data access security.
I use Shared Access Signatures (SAS) to grant access to a container.
In your scenario you can create a stored access policy per application on a container and generate a SAS from that stored access policy with a long expiration time; you can revoke it at any point by removing the stored access policy from the container. So when your developer leaves, you can revoke the current SAS and generate a new one. Note that you can't generate a single SAS for multiple containers, so if your application uses multiple containers you would have to generate multiple SAS.
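A rough sketch of that flow with the classic .NET SDK (the container and policy names and the connection string are placeholders). The SAS only references the stored access policy by name, which is what makes revocation work:

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    string connectionString = "UseDevelopmentStorage=true";  // placeholder
    CloudBlobContainer container = CloudStorageAccount.Parse(connectionString)
        .CreateCloudBlobClient()
        .GetContainerReference("app1-data");

    // Generate a SAS whose permissions and expiry come from the stored policy.
    string sas = container.GetSharedAccessSignature(
        new SharedAccessBlobPolicy(),   // empty; constraints come from the policy
        "app1-read");

    // Revocation: removing the policy invalidates every SAS issued from it.
    BlobContainerPermissions perms = container.GetPermissions();
    perms.SharedAccessPolicies.Remove("app1-read");
    container.SetPermissions(perms);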
Usage, from the developer's perspective, stays the same:
You can use the SAS token to create a CloudStorageAccount or a CloudBlobClient, so it's almost like a regular access key.
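For example (the account name and token are placeholders), a consumer that only ever sees the SAS token:

    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Auth;
    using Microsoft.WindowsAzure.Storage.Blob;

    // The application holds only the SAS token, never the account key.
    string sasToken = "?sv=...";  // placeholder, as issued for this app
    var credentials = new StorageCredentials(sasToken);
    var account = new CloudStorageAccount(
        credentials, "mystorageaccount", "core.windows.net", useHttps: true);
    CloudBlobClient blobClient = account.CreateCloudBlobClient();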
Longer term, I would probably think about creating one internal service (an internal API) responsible for generating SAS tokens and renewing them. This way you can have a completely automated system, with the access keys disclosed only to this main service. You can then restrict access to this service with a virtual network, certificates, authentication, etc. And if something goes wrong (say, the developer who wrote that service leaves :-) ), you can regenerate the access keys and change them, but this time in only one place.
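The core of such a service might look like this (the class and method names are hypothetical); the full-access connection string lives only in this service's configuration:

    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    public class SasIssuer
    {
        private readonly CloudBlobClient _client;

        public SasIssuer(string connectionString)
        {
            // Only this service ever sees the account key.
            _client = CloudStorageAccount.Parse(connectionString)
                .CreateCloudBlobClient();
        }

        // Issues a SAS for one application's container, tied to that app's
        // stored access policy so it stays revocable.
        public string IssueSas(string containerName, string policyName)
        {
            CloudBlobContainer container = _client.GetContainerReference(containerName);
            return container.GetSharedAccessSignature(
                new SharedAccessBlobPolicy(), policyName);
        }
    }

Wrap this in whatever authenticated endpoint fits your environment, and key regeneration becomes a change in one place.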
A few things:
A storage account per application (and/or environment) is a good strategy, but you have to be aware of the limit: a maximum of 100 storage accounts per subscription.
There is no option to limit access to a storage account with a virtual network.
You can have a maximum of 5 stored access policies on a single container.
I won't get into subjective/opinion answers, but from an objective perspective: if a developer has a storage account key, then they have full access to the storage account. And if they leave the company and keep a copy of the key? The only way to lock them out is to regenerate the key.
You might assume that separating apps with different storage accounts helps. Just keep this in mind: if a developer had access to the subscription, they had access to the keys for every single storage account in that subscription.
When thinking about key regeneration, think about the total surface area of apps that know the key itself. If storage manipulation is solely a server-side operation, the impact of changing a key is minimal (a small app change in each deployment, along with updating any storage browsing tools you use). If you embedded the key in a desktop/mobile application for direct storage access, you have a bigger problem, since you'd have to push out updated clients - but then you already had a security problem anyway.
I see nothing documented on this, so does anyone know if it is possible to restrict the domains that can access a resource placed in blob storage? When you create a container, your only choices are public or private.
That's right. Currently there is no way to restrict access based on domain or IP address. Your only option for managing security on blob storage is to work with Shared Access Signatures (SAS).
The signature is generated server-side and appended to the blob's URL. It can be limited in time (making the signature valid for only 5 minutes, for example).
This works well in a web application displaying images or videos, for example: even if someone 'steals' your content, the URL becomes invalid after a few minutes. It's not exactly the same as limiting based on IP or domain, but it's still very effective.
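As a sketch (the names and the connection string are placeholders), the page handler signs each image URL just before rendering it:

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    string connectionString = "UseDevelopmentStorage=true";  // placeholder
    CloudBlockBlob blob = CloudStorageAccount.Parse(connectionString)
        .CreateCloudBlobClient()
        .GetContainerReference("images")
        .GetBlockBlobReference("photo.jpg");

    // Read-only, valid for five minutes from now.
    string sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Read,
        SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(5)
    });
    string imageUrl = blob.Uri.AbsoluteUri + sas;  // e.g. for an <img src> attribute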