Preventing cross-linking to images on Azure blob storage

I see nothing documented on this, so does anyone know if it is possible to restrict the domains that can access a resource placed in blob storage? When you make a container your only choices are public or private.

That's right. Currently there is no way to restrict access based on domain or IP. Your only option for managing access to blob storage is to work with Shared Access Signatures (SAS).
The signature is generated server-side and appended to the blob's URL. The signature can be limited in time (making it valid for only 5 minutes, for example).
This works well in a web application that displays images or videos: even if someone 'steals' your content, the URL becomes invalid after a few minutes. Not exactly the same as limiting by IP or domain, but still very effective.
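For illustration, a minimal sketch of that approach using the Python azure-storage-blob SDK (the account, container, blob, and key values below are placeholders; other SDKs expose equivalent calls):

```python
# Issue a short-lived, read-only SAS server-side and append it to the blob URL.
from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

ACCOUNT = "mystorageaccount"   # hypothetical account name
KEY = "<account-key>"          # kept server-side, never sent to the client
CONTAINER = "images"
BLOB = "photo.jpg"

sas = generate_blob_sas(
    account_name=ACCOUNT,
    container_name=CONTAINER,
    blob_name=BLOB,
    account_key=KEY,
    permission=BlobSasPermissions(read=True),                   # read-only
    expiry=datetime.now(timezone.utc) + timedelta(minutes=5),   # valid ~5 minutes
)

# Hand this URL to the page; it stops working once the SAS expires.
url = f"https://{ACCOUNT}.blob.core.windows.net/{CONTAINER}/{BLOB}?{sas}"
```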

Related

How to share an image publicly in Azure without opening myself to bandwidth abuse/costs

I've (I think) thought through every way I can think of to share an image in Azure, and they all leave me open to someone abusing the download and running up my bandwidth costs.
The goal is an AMI-like experience, but that seems to be right out, so I'd settle for a solution that forces the user to copy the image into their own subscription first and then create a Shared Image Gallery from it. But again, without exposing a raw download to the Internet or allowing cross-region intra-Azure pulls that would also cost me money.
- Public blob in an Azure StorageV2 account: exposes you to a bandwidth attack.
- Public blob in an Azure StorageV2 account with the firewall enabled: the Microsoft trusted services that are allowed by default don't seem to include the image service, though I didn't test this myself. If they did, this might work, since the image service blocks cross-location replication from blob storage by default, IIRC.
- Shared Image Gallery: cross-tenant sharing is clunky and not at all feasible for AMI-like scenarios.
- ???
I do not want to go through the process of getting a Marketplace-certified image, which, as far as I can tell, is the only publicly available route to making a truly public image without incurring costs.
Why not just put it in a storage account and use Shared Access Signatures?
Then it's still possible to download it over the Internet if you have the SAS, it's easy to withdraw the SAS, and you can limit it both in time and by IP if needed.
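If IP scoping matters, the SAS itself can carry an allowed address or range. A hedged sketch with the Python azure-storage-blob SDK (all names and the range are made up):

```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

# Hypothetical values; the ip parameter restricts where the SAS may be redeemed from.
sas = generate_blob_sas(
    account_name="mystorageaccount",
    container_name="images",
    blob_name="disk-image.vhd",
    account_key="<account-key>",
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),  # time-limited
    ip="203.0.113.0-203.0.113.255",                          # IP-limited
)
```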

How to delegate permission to generate SAS to 3rd party in Azure

I know nothing about Azure; I'm experienced with AWS and GCP. What I've been tasked to do is set up a system where customers of ours have data in Azure storage containers (blobs), which are not public. The idea is that we would serve content from our customers' storage containers to OUR customers using a SAS link that expires after a short period of time (say, a few seconds).
I'm finding the docs highly confusing and am just hoping I can get a pointer in the right direction. Looking at the Ruby API, it looks like I need to supply an account name and an access secret to generate a SAS. So presumably our customer would have to provide those to us. How would they create that account/key with minimal permissions, so that all we can do is create the SAS link and serve it to our customers?
I'm reading all kinds of things about Active Directory and RBAC and SAS and Service Principals and all this stuff, but I'm not sure how it fits together. I've implemented something similar in AWS, where the customer just creates an IAM user with GetObject permission on the bucket, gives us the key/secret, and we use it to generate a signed URL. But Azure is a whole different beast.
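For orientation, a hedged Python sketch (azure-storage-blob / azure-identity) of the two signing paths in play here: the account-key route inferred above, and a user delegation SAS signed via an Azure AD service principal so the account key never has to be shared. Every name, ID, and role below is hypothetical:

```python
from datetime import datetime, timedelta, timezone
from azure.identity import ClientSecretCredential
from azure.storage.blob import BlobSasPermissions, BlobServiceClient, generate_blob_sas

expiry = datetime.now(timezone.utc) + timedelta(seconds=30)

# Option 1: the customer hands over the storage account key (a full-power secret).
sas_from_key = generate_blob_sas(
    account_name="customeraccount", container_name="data", blob_name="file.bin",
    account_key="<their-account-key>",
    permission=BlobSasPermissions(read=True), expiry=expiry,
)

# Option 2: the customer creates a service principal with a narrow RBAC role
# (e.g. "Storage Blob Data Reader") and we sign a user delegation SAS with it,
# so no account key ever leaves their tenant.
cred = ClientSecretCredential("<tenant-id>", "<client-id>", "<client-secret>")
service = BlobServiceClient("https://customeraccount.blob.core.windows.net", credential=cred)
delegation_key = service.get_user_delegation_key(datetime.now(timezone.utc), expiry)
sas_delegated = generate_blob_sas(
    account_name="customeraccount", container_name="data", blob_name="file.bin",
    user_delegation_key=delegation_key,
    permission=BlobSasPermissions(read=True), expiry=expiry,
)
```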

Azure blob storage and security best practices

When exploring Azure Storage I've noticed that access to a storage container is done through a shared key. There is concern where I work that if a developer uses this key for an application they're building and then leaves the company, they could still log in to the storage account and delete anything they want. The workaround for this would be to regenerate the secondary key for the account, but then we'd have to change the keys in every application that uses them.
- Is it best practice to have an entire storage account per application per environment (dev, test, staging, production)?
- Is it possible to secure the storage account behind a virtual network?
- Should we use signatures on our containers on a per-application basis?
Has anybody had similar experiences and found a good pattern for dealing with this?
I have a slightly different scenario (external applications), but the problem is the same: data access security.
I use Shared Access Signatures (SAS) to grant access to a container.
In your scenario you can create a stored access policy per application on a container and generate a SAS from that policy with a long expiration time; you can revoke it at any point by removing the stored access policy from the container. So you can revoke the current SAS and generate a new one when your developer leaves. You can't generate a single SAS for multiple containers, so if your application uses multiple containers you would have to generate multiple SAS tokens.
Usage, from the developer's perspective, stays the same:
You can use the SAS token to create a CloudStorageAccount or CloudBlobClient, so it's almost like a regular access key.
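A hedged sketch of that flow with the current Python SDK (azure-storage-blob); the connection string, container, and policy names are placeholders:

```python
# Per-application stored access policy plus a SAS issued against it.
# Removing the policy from the container revokes every SAS that references it.
from datetime import datetime, timedelta, timezone
from azure.storage.blob import (
    AccessPolicy, BlobServiceClient, ContainerSasPermissions, generate_container_sas,
)

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("app1-data")

# One long-lived stored access policy per application.
policy = AccessPolicy(
    permission=ContainerSasPermissions(read=True, write=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=365),
)
container.set_container_access_policy(signed_identifiers={"app1-policy": policy})

# The SAS only references the policy; permissions and expiry come from the policy.
sas = generate_container_sas(
    account_name=service.account_name,
    container_name="app1-data",
    account_key="<account-key>",
    policy_id="app1-policy",
)

# The application uses the SAS in place of the account key.
app_container = BlobServiceClient(service.url, credential=sas).get_container_client("app1-data")
```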
Longer term, I would probably think about creating one internal service (an internal API) responsible for generating SAS tokens and renewing them. This way you can have a completely automated system, with the access keys disclosed only to this main service. You can then restrict access to this service with a virtual network, certificates, authentication, etc. And if something goes wrong (the developer who wrote that service leaves :-) ), you can regenerate the access keys and change them, but this time in only one place.
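Roughly the shape such an internal SAS-issuing service could take (Flask is used purely for illustration; the endpoint, names, and the missing caller authentication are all assumptions, not a prescribed design):

```python
# Only this service holds the account key; callers receive a short-lived URL.
from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobSasPermissions, generate_blob_sas
from flask import Flask, jsonify

app = Flask(__name__)
ACCOUNT, KEY = "mystorageaccount", "<account-key>"  # held only by this service

@app.route("/sas/<container>/<path:blob>")
def issue_sas(container, blob):
    # A real deployment would authenticate the caller here (cert, AAD, VNet, ...).
    sas = generate_blob_sas(
        account_name=ACCOUNT, container_name=container, blob_name=blob,
        account_key=KEY, permission=BlobSasPermissions(read=True),
        expiry=datetime.now(timezone.utc) + timedelta(minutes=15),
    )
    return jsonify(url=f"https://{ACCOUNT}.blob.core.windows.net/{container}/{blob}?{sas}")
```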
A few things:
- A storage account per application (and/or environment) is a good strategy, but you have to be aware of the limit: a maximum of 100 storage accounts per subscription.
- There is no option to limit access to a storage account with a virtual network.
- You can have a maximum of 5 stored access policies on a single container.
I won't get into subjective/opinion answers, but from an objective perspective: if a developer has a storage account key, they have full access to the storage account. And if they left the company and kept a copy of the key? The only way to lock them out is to regenerate the key.
You might assume that separating apps into different storage accounts helps. However, keep this in mind: if a developer had access to a subscription, they had access to the keys for every single storage account in that subscription.
When thinking of key regeneration, think about the total surface area of apps having knowledge of the key itself. If storage manipulation is solely a server-side operation, the impact of changing a key is minimal (a small app change in each deployment, along with updating any storage browsing tools you use). If you embedded the key in a desktop/mobile application for direct storage access, you have a bigger problem with having to push out updated clients, but you already have a security problem anyway.

Azure BLOB upload - how to do it best way?

We're developing a web application on Azure, and the question is: how do we upload large images into blob storage efficiently, directly from the browser, and make it secure and reliable?
We're probably experiencing bad performance because we're in Russia and currently using a trial Azure subscription. Maybe the problem will go away with a full subscription?
Anyway, my concern is that our application has to pass our image through the following path:
WebBrowser > (image.jpg) > Azure WebRole [store name in DB] > (image.jpg) > Azure BLOB
So there is overhead involving the WebRole. What I'd like to do is upload my large file to the blob directly and send the image name to the WebRole in parallel:
WebBrowser > (image.jpg) > Azure BLOB
WebBrowser > WebRole [store name in DB]
The problem here is security. I'm talking about uploading user pictures, and I don't want attackers to be able to write into someone else's container.
Is it reasonable at all?
Silverlight is an option, using Shared Access Signatures (special URLs that allow write access on a time-limited basis). See my series of blog posts: http://blog.smarx.com/posts/uploading-windows-azure-blobs-from-silverlight-part-1-shared-access-signatures
+1 for @smarx's suggestion of uploading via Shared Access Signature - that offers a time-limited URL that lets you access a private blob as if it were public. Someone would need to be running a network sniffer to discover a SAS-encoded URL, and even then, it would only be valid for a short period of time.
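A hedged sketch of that flow with the current Python SDK (azure-storage-blob); the account, container, and blob names are placeholders, and the client leg is shown with the same SDK for brevity (a browser would PUT the file to the same URL):

```python
# The web role issues a short-lived, write-only SAS for one blob name,
# and the client then uploads straight to blob storage with it.
from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobClient, BlobSasPermissions, generate_blob_sas

ACCOUNT, KEY, CONTAINER = "mystorageaccount", "<account-key>", "user-uploads"
blob_name = "user123/image.jpg"  # the server picks the name, so users can't overwrite others

# 1) Server side: the signature allows creating/writing this one blob for 10 minutes.
sas = generate_blob_sas(
    account_name=ACCOUNT, container_name=CONTAINER, blob_name=blob_name,
    account_key=KEY,
    permission=BlobSasPermissions(create=True, write=True),
    expiry=datetime.now(timezone.utc) + timedelta(minutes=10),
)
upload_url = f"https://{ACCOUNT}.blob.core.windows.net/{CONTAINER}/{blob_name}?{sas}"

# 2) Client side: upload directly to blob storage, bypassing the web role.
with open("image.jpg", "rb") as data:
    BlobClient.from_blob_url(upload_url).upload_blob(data, overwrite=True)
```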
Just wanted to add that having a trial subscription is no different from a paid subscription, when it comes to performance. That's just a billing thing and has nothing to do with resource allocation.

Practical Limit on number of Azure Shared Access Signatures?

I'm looking to avoid having to use a handler/module in my WebRole to protect images served up from block blob storage on Azure. Shared Access Signatures (SAS) seem to be the way to go.
My question: is there a practical limit on the number of SAS tokens I can issue - could I issue one every minute, say? Is there a performance issue (time to issue a SAS) that would be the limiting factor?
I had initially thought that one SAS per user session would protect me better than a single SAS, but since there is nothing tying a SAS to a user, that won't help...
Shared Access Signatures have an optional component called a "container-level access policy." If you use a container-level access policy, it actually gets stored in blob storage and has a limit of five per container.
If you don't use a container-level access policy, you can make as many Shared Access Signatures as you want, and the server isn't even involved. (The signature is generated locally, meaning in your web role instance.) Generating the signature does involve some crypto, so you may eventually peg the CPU, but I suspect it's "fast enough."
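To illustrate the "generated locally" point, a small sketch (Python azure-storage-blob, dummy key) that signs many SAS tokens without ever contacting the storage service:

```python
# Signing a SAS is just an HMAC over the parameters, so thousands can be
# produced per second with no network call. Account and key are dummies.
import base64
import time
from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

DUMMY_KEY = base64.b64encode(b"0" * 64).decode()  # stand-in for a real account key
expiry = datetime.now(timezone.utc) + timedelta(minutes=5)

start = time.perf_counter()
for i in range(10_000):
    generate_blob_sas(
        account_name="mystorageaccount", container_name="images",
        blob_name=f"photo-{i}.jpg", account_key=DUMMY_KEY,
        permission=BlobSasPermissions(read=True), expiry=expiry,
    )
print(f"10,000 signatures in {time.perf_counter() - start:.2f}s, no network involved")
```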
