ImageResizer with Private Blobs on Azure

I have a system providing access to private blobs based on a user's login credentials. If they have permission, they are given a SAS blob URL to view a document or image stored in Azure.
I want to be able to resize the images, but still maintain the integrity of the short window of access via the SAS.
What is the best approach with ImageResizer? Should I use the AzureReader2 plugin, or should I just use the RemoteReader with the SAS URL?
Thanks

ImageResizer disk-caches the resized result images indefinitely, regardless of restrictions on the source file.
You need to implement your authorization logic within the application using Authorize_Request or Config.Current.Pipeline.AuthorizeImage.
There's no way to pass through the storage provider's authorization unless you disable all caching.
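As a minimal sketch of the AuthorizeImage approach (the CheckUserCanViewImage helper is a hypothetical stand-in for your own permission check, and the exact event signature may differ slightly between ImageResizer versions):

```csharp
using System.Web;
using ImageResizer.Configuration;

public class Global : HttpApplication
{
    protected void Application_Start()
    {
        // Runs for image requests handled by ImageResizer, including ones
        // that would otherwise be served straight from the disk cache.
        Config.Current.Pipeline.AuthorizeImage += (sender, context, e) =>
        {
            // Hypothetical application-specific check against your own auth store.
            if (!CheckUserCanViewImage(context.User, e.VirtualPath))
                e.AllowAccess = false; // denied requests are rejected by ImageResizer
        };
    }

    private static bool CheckUserCanViewImage(System.Security.Principal.IPrincipal user, string virtualPath)
    {
        // Replace with a real lookup (e.g. the same rules used to issue the SAS URL).
        return user != null && user.Identity.IsAuthenticated;
    }
}
```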

Related

Temporary public URL for Azure Blob storage?

I currently work with a VoIP product that allows our customers to record their phone calls. Recorded calls are stored on our servers by default, and a URL pointing to the recording is stored and embedded in our customers' portals.
We are working on a feature that allows our customers to provide their own Azure Blob details, such that recordings are stored in their own container. The only problem we are having is that the container needs to be set to public so that the recording can be embedded dynamically in the browser.
The paths to the recordings contain multiple UUIDs, providing some kind of security through obscurity, although we still aren't too keen on requiring the containers to be public.
Does there exist a method in Azure Blob to generate temporary URLs/tokens for accessing files, such that we can refresh links (daily, for example) so that a bad actor couldn't share a recording with a link that will never cease to be valid?
What you are looking for is a SAS token.
A shared access signature (SAS) provides secure delegated access to resources in your storage account. With a SAS, you have granular control over how a client can access your data. For example:
What resources the client may access.
What permissions they have to those resources.
How long the SAS is valid.
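As a minimal sketch with the Azure.Storage.Blobs SDK (account, container and blob names are placeholders), issuing a short-lived, read-only SAS URL for a single recording might look like this:

```csharp
using System;
using Azure.Storage;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

class SasExample
{
    static Uri GetTemporaryRecordingUrl(string accountName, string accountKey,
                                        string containerName, string blobName)
    {
        var credential = new StorageSharedKeyCredential(accountName, accountKey);
        var blobUri = new Uri($"https://{accountName}.blob.core.windows.net/{containerName}/{blobName}");
        var blobClient = new BlobClient(blobUri, credential);

        var sasBuilder = new BlobSasBuilder
        {
            BlobContainerName = containerName,
            BlobName = blobName,
            Resource = "b",                                 // "b" = blob-level SAS
            ExpiresOn = DateTimeOffset.UtcNow.AddHours(24)  // refresh daily, as suggested above
        };
        sasBuilder.SetPermissions(BlobSasPermissions.Read); // read-only

        // GenerateSasUri signs the token with the shared key credential above.
        return blobClient.GenerateSasUri(sasBuilder);
    }
}
```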

Best approach to create logical separation between users in Azure Blob Storage

I'm trying to understand how cloud storage works with Azure Blob Storage. The use case is a microservice architecture with a few APIs and a frontend where users can upload, download and delete blobs.
I have two types of blobs: one type are profile pictures and assets that may be accessed by all users, and the other type are blobs that a user owns and that only certain users can see/download (users of the same company, website admins...).
There are three concepts whose purpose I'm trying to figure out:
Storage account, that's me, the Azure account holder.
Container, which could be used as one per entity/user.
Blobs
Uploading blobs will only be possible through a frontend of my microservice architecture, so authentication will be service-to-service with the new service I want to build.
Downloading blobs will work by exposing a URL, and (here my doubts start) when the user clicks the URL, I'm going to check against AuthService whether the user has a logged-in session (if not, redirect to the login frontend) and then I need to check whether the user has permission to download this blob.
How can I do this?
My idea: the user clicks the URL, I check with AuthService that the user is logged in, the download service asks for the user's information and then checks the blob's metadata to determine ownership. That means the upload process needs to store information such as entity_id and user_id in the metadata. I don't know...
Did you consider implementing an API/capability in your frontend to generate a SAS URL to the specific blob the user should have access to?
That way this API can verify the user's permissions however you wish, and if the user's request checks out you provide them with a SAS URL that will expire whenever you choose and can grant read/write/delete access (you choose) to a specific blob.
Also, I'd highly recommend separating storage accounts that hold data entirely internal to the system from storage accounts with blobs accessible to users. This is because the SAS URL contains the storage account's DNS name, which exposes it to DDoS and other DNS-based attacks, so in my opinion you should limit their scope to only the blobs you need to let users access anyway.
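As a rough sketch of that idea (the metadata key owner_entity_id and the CurrentUser type are hypothetical, and SAS generation assumes the Azure.Storage.Blobs SDK as in the earlier example):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

public class DownloadUrlService
{
    private readonly BlobContainerClient _container; // constructed with a shared key credential

    public DownloadUrlService(BlobContainerClient container) => _container = container;

    // CurrentUser is a hypothetical type carrying the ids resolved via AuthService.
    public async Task<Uri> GetDownloadUrlAsync(string blobName, CurrentUser user)
    {
        var blob = _container.GetBlobClient(blobName);

        // Ownership info written as blob metadata during upload (hypothetical key name).
        var properties = await blob.GetPropertiesAsync();
        properties.Value.Metadata.TryGetValue("owner_entity_id", out var ownerEntityId);

        if (ownerEntityId != user.EntityId)
            throw new UnauthorizedAccessException("User may not download this blob.");

        var sas = new BlobSasBuilder
        {
            BlobContainerName = _container.Name,
            BlobName = blobName,
            Resource = "b",
            ExpiresOn = DateTimeOffset.UtcNow.AddMinutes(15)
        };
        sas.SetPermissions(BlobSasPermissions.Read);

        return blob.GenerateSasUri(sas); // requires the client to hold the account key
    }
}

public class CurrentUser
{
    public string EntityId { get; set; }
    public string UserId { get; set; }
}
```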

Encryption of CSV before Upload

We have a Windows service which monitors a folder (using the C# FileSystemWatcher) for files and uploads them to a blob. The Windows service retrieves a write-only SAS token, used to create the blob client that uploads to the blob, from a WebAPI endpoint (TLS 1.2) secured with ADFS 2.0, passing the JWT retrieved from the ADFS WS-Trust 1.3 endpoint using a username and password.
My experience is limited in the area of security. I have two questions.
1 - Should there be encryption before I upload the data to the blob? If yes, how can I implement it?
2 - Would retrieving the SAS token from an endpoint, even though it is secured with ADFS and is over HTTPS, pose any kind of security risk?
1 - Should there be encryption before I upload the data to the blob? If yes, how can I implement it?
Per my understanding, if you want extra security in transit and your stored data to be encrypted, you could leverage client-side encryption and refer to this tutorial. In that case, you need to make programmatic changes to your application.
Also, you could leverage Storage Service Encryption (SSE), which does not protect the data in transit, but provides the following benefit:
SSE allows the storage service to automatically encrypt the data when writing it to Azure Storage. When you read the data from Azure Storage, it will be decrypted by the storage service before being returned. This enables you to secure your data without having to modify code or add code to any applications.
I would recommend you simply use HTTPS for your data in transit and SSE to encrypt your blobs at rest. For how to enable SSE, you could refer here. Additionally, you could follow the Azure Storage security guide here.
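If you do opt for client-side encryption, a rough sketch using the Azure.Storage.Blobs and Azure.Security.KeyVault.Keys packages might look like the following (the Key Vault key URL and SAS URL are placeholders, and the exact option names may vary between SDK versions):

```csharp
using System;
using Azure.Identity;
using Azure.Security.KeyVault.Keys.Cryptography;
using Azure.Storage;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

class EncryptedUploadExample
{
    static void UploadEncrypted(Uri blobSasUri, string localFilePath, Uri keyVaultKeyId)
    {
        // A Key Vault key acts as the key-encryption key; the SDK generates a
        // per-blob content-encryption key and wraps it with this key.
        var keyClient = new CryptographyClient(keyVaultKeyId, new DefaultAzureCredential());

        var options = new SpecializedBlobClientOptions
        {
            ClientSideEncryption = new ClientSideEncryptionOptions(ClientSideEncryptionVersion.V2_0)
            {
                KeyEncryptionKey = keyClient,
                KeyWrapAlgorithm = "RSA-OAEP-256"
            }
        };

        // The write-only SAS URL retrieved from the WebAPI endpoint is used as-is;
        // encryption happens locally before the bytes leave the machine.
        var blobClient = new BlobClient(blobSasUri, options);
        blobClient.Upload(localFilePath, overwrite: true);
    }
}
```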
2 - Would retrieving the SAS token from an endpoint, even though it is secured with ADFS and is over HTTPS, pose any kind of security risk?
A SAS provides you with a way to grant limited permissions on resources in your storage account to other clients. For security, you should keep the interval over which your SAS is valid short. You can also limit the IP addresses from which Azure Storage will accept the SAS. Since the endpoint for generating the SAS token is secured with ADFS 2.0 over HTTPS, I would assume it is safe enough.
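For example, the WebAPI endpoint could issue a short-lived, write-only SAS restricted to the uploader's IP, roughly like this (again a sketch against the Azure.Storage.Blobs SDK; the IP value and names are placeholders):

```csharp
using System;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

class WriteOnlySasExample
{
    static Uri IssueUploadSas(BlobClient blobClient, string allowedClientIp)
    {
        var sas = new BlobSasBuilder
        {
            BlobContainerName = blobClient.BlobContainerName,
            BlobName = blobClient.Name,
            Resource = "b",
            ExpiresOn = DateTimeOffset.UtcNow.AddMinutes(10),  // keep the window short
            IPRange = SasIPRange.Parse(allowedClientIp)         // e.g. "203.0.113.10"
        };
        sas.SetPermissions(BlobSasPermissions.Create | BlobSasPermissions.Write); // no read

        return blobClient.GenerateSasUri(sas); // blobClient must be built with the account key
    }
}
```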

Shared access policy for storing images in Azure blob storage

Is it possible to update the expiry time of a shared access policy on a blob container through the Azure portal?
Is it possible to set an indefinite expiry time for a shared access policy? I have an application with an HTML editor where users upload images to Azure Blob Storage, so they can upload images and see them through the generated URI. I used a shared access policy with READ permission so that users can see the images inside the HTML. Is it good practice to set an indefinite expiry time for a shared access policy with READ permission?
I don't want my images to be public; I just want authenticated users to be able to see the images. I don't understand the advantage of using SAS in my case, as any user holding the SAS can see the image (for example a friend who receives the image URI with the SAS). So, is there any advantage? Can anyone explain this to me?
Is it possible to update the expiry time of a shared access policy on a blob container through the Azure portal?
As of today, it is not possible to manage shared access policies on the blob container using Azure Portal. You would need to use some other tools or write code to do so.
Is it possible to set indefinite expiry time for shared access policy?
You can create a shared access policy without specifying an expiry time. What that means is that you would need to specify an expiry time when creating a shared access signature. What you could do (though it is not recommended - more on this below) is use something like 31st December 9999 as expiry date for shared access policy.
Is it good practice to set an indefinite expiry time for a shared access policy with READ permission?
What is recommended is that you set the expiry time to an appropriate value based on your business needs. Generally it is recommended that you keep the expiry time of the shared access signature short, so that the SAS is not misused, as you're responsible for paying for the data in your storage account and the outbound bandwidth.
I don't understand the advantage of using SAS in my case, as any user holding the SAS can see the image (for example a friend who receives the image URI with the SAS). So, is there any advantage? Can anyone explain this to me?
The biggest advantage of SAS is that you can share resources in your storage account without sharing the storage access keys. Furthermore, you can restrict access to these resources by specifying appropriate permissions and expiry. While it is true that anyone with the SAS URL can access the resource (in case your user decides to share the SAS URL with someone else), and it is not a 100% fool-proof solution, there are ways to mitigate this: you could create short-lived SAS URLs and also restrict the usage of SAS URLs to certain IP addresses only (IP ACLing).
You may find this link helpful regarding some of the best practices around shared access signature: https://azure.microsoft.com/en-in/documentation/articles/storage-dotnet-shared-access-signature-part-1/#best-practices-for-using-shared-access-signatures.
Firstly, if you want to change the expiry time of an ad-hoc Shared Access Signature (SAS), you need to regenerate it (re-sign it) and redistribute this to all users of the SAS. This doesn't affect the validity of the existing SAS you're currently using.
If you want to revoke a SAS, you need to regenerate the Access Key for the Storage Account that signed the SAS in the first place (which also revokes all other SASs that it has signed). If you've used the Access Key elsewhere, you'll need to update those references as well.
A good practice is to use an Access Policy rather than ad-hoc SASs, as this provides a central point of control for the:
Start Time
End Time
Access permissions (Read, Write etc)
SASs linked to an Access Policy can be revoked by changing the expiry time to the past. Whilst you can delete the Access Policy to achieve the same effect, the old SAS will become valid again if you re-create the Access Policy with the same name.
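A minimal sketch of working with a stored access policy via Azure.Storage.Blobs (the policy name is a placeholder, and the exact property names may differ between SDK versions); revoking the SASs it signs is just a matter of re-issuing the policy with an expiry in the past:

```csharp
using System;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Sas;

class AccessPolicyExample
{
    const string PolicyName = "read-images"; // hypothetical policy identifier

    // Create (or update) a stored access policy on the container.
    static void SetReadPolicy(BlobContainerClient container, DateTimeOffset expiresOn)
    {
        var identifier = new BlobSignedIdentifier
        {
            Id = PolicyName,
            AccessPolicy = new BlobAccessPolicy
            {
                PolicyStartsOn = DateTimeOffset.UtcNow.AddMinutes(-5),
                PolicyExpiresOn = expiresOn,
                Permissions = "r" // read-only
            }
        };
        container.SetAccessPolicy(permissions: new[] { identifier });
    }

    // A SAS that references the policy inherits its expiry and permissions,
    // so moving the policy's expiry into the past revokes every SAS issued against it.
    static Uri GetSasForBlob(BlobContainerClient container, string blobName)
    {
        var sas = new BlobSasBuilder
        {
            BlobContainerName = container.Name,
            BlobName = blobName,
            Resource = "b",
            Identifier = PolicyName // link the SAS to the stored policy
        };
        return container.GetBlobClient(blobName).GenerateSasUri(sas);
    }
}
```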
It's not possible to set an indefinite expiry time for a SAS, and neither would it be good practice - think of a SAS as being like a shortcut in a file system to a file, one that bypasses the file's permissions. A shortcut can't be revoked per se or modified after you've created it - anyone anywhere in the world who obtains a copy receives the same access.
For example, anyone with access to your application (or anyone with access to the network traffic, if you are using HTTP) could keep a copy of the SAS URL, and access any of the resources in that container - or distribute the URL and allow other unauthorised users to do so.
In your case, without SASs you would have served the images from a web server that required authentication (and maybe authorisation) before responding to requests. This introduces overhead, costs and potential complexity which SASs were partly designed to solve.
As you require authentication/authorisation for the application, I suggest you set up a service that generates SASs dynamically (programmatically) with a reasonable expiry time, and refers your users to those URLs.
Reference: Using Shared Access Signatures (SAS)
Edit: Microsoft Azure Storage Explorer is really useful for managing Access Policies and generating SASs against them.
You can set a very long expiry time, but this is not recommended by Microsoft, and no security expert would ever recommend such a thing, as it defeats the purpose of a SAS.
https://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-shared-access-signature-part-1/
As for the second part of your question, I am not sure how you allow them to upload the images: is it directly, using a SAS at the container level, or do they post to some code in your application, which then connects to Azure Storage and uploads the files?
If you have a back-end service, then you can eliminate SAS and make your service work as a proxy: only your service reads and writes to Azure Storage, using the storage account access keys, and the clients access your service to read and write the images they need. In this case the clients have no direct access to Azure Storage.
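A rough sketch of that proxy approach as an ASP.NET Core endpoint (controller name, route and authorization setup are placeholders):

```csharp
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("images")]
public class ImagesController : ControllerBase
{
    // Built from the account key at startup; the key never reaches the client.
    private readonly BlobContainerClient _container;

    public ImagesController(BlobContainerClient container) => _container = container;

    // Only authenticated users reach this action; the blob is streamed through the service.
    [Authorize]
    [HttpGet("{name}")]
    public async Task<IActionResult> Get(string name)
    {
        var blob = _container.GetBlobClient(name);

        var exists = await blob.ExistsAsync();
        if (!exists.Value)
            return NotFound();

        var download = await blob.DownloadStreamingAsync();
        return File(download.Value.Content,
                    download.Value.Details.ContentType ?? "application/octet-stream");
    }
}
```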

Shared Access Signatures in Azure for client blob access

Here's what I am trying to accomplish:
We have files stored in Azure blobs and need to secure access to them so that only our installed Windows 8 Store app can download these blobs. My first thought was to use some sort of certificate: when the app is installed, it is installed with a certificate that it then passes in the header of the request to the server to obtain the blob.
I read about Shared Access Signatures and it kind of makes sense to me. It seems like an API that the client could use to obtain a temporary token granting access to the blobs. Great. How do I restrict access to the API for obtaining SAS tokens to only our installed client apps?
Thank you.
Using SAS URLs is the proper way to do this; this way you can give out access to a specific resource for a limited amount of time (15 minutes, for example) and with limited permissions (read-only, for example).
Since this app is installed on the user's machine, you can assume the user can see whatever the app is doing, so there is no absolute way to secure your API so that it is only accessed by your app. You can, however, make it a little more difficult to replicate by using an SSL (HTTPS) endpoint and providing some "secret key" only your app knows.
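As a sketch of what the client side of that might look like (the endpoint URL, header name and secret value are all hypothetical; the secret only raises the bar, it does not make the API truly private):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class SasTokenClient
{
    private static readonly HttpClient Http = new HttpClient
    {
        BaseAddress = new Uri("https://api.example.com/") // HTTPS only
    };

    // Ask the backend for a short-lived SAS URL for a given blob.
    static async Task<Uri> GetBlobSasUrlAsync(string blobName)
    {
        var request = new HttpRequestMessage(HttpMethod.Get,
            $"sas?blob={Uri.EscapeDataString(blobName)}");
        request.Headers.Add("X-App-Secret", "app-embedded-secret"); // hypothetical shared secret

        var response = await Http.SendAsync(request);
        response.EnsureSuccessStatusCode();

        // In this sketch the backend returns the SAS URL as plain text.
        return new Uri(await response.Content.ReadAsStringAsync());
    }
}
```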
