Shared Access Signatures in Azure for client blob access

Here's what I am trying to accomplish:
We have files stored in Azure blobs and need to secure access to them so that only our installed Windows 8 Store App can download these blobs. My first thought was to use some sort of certificate: when the app is installed, it is installed with a certificate that it then passes in the header of the request to the server to obtain the blob.
I read about Shared Access Signatures and it kind of makes sense to me. It seems like an API that the client could use to obtain a temporary token granting access to the blobs. Great. How do I restrict access to the API for obtaining SAS tokens to only our installed client apps?
Thank you.

Using SAS URLs is the proper way to do this; this way you can give out access to a specific resource for a limited amount of time (15 minutes, for example) and with limited permissions (only read, for example).
Since this app is installed on the user's machine, you can assume the user can see whatever the app is doing, so there is no absolute way to secure your API so that it can be accessed only by your app. However, you can make it a little more difficult to replicate by using an SSL (HTTPS) endpoint and providing some "secret key" only your app knows.
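To make the token-issuing side concrete, here is a minimal sketch of a server-side method that hands out a short-lived, read-only SAS URL for a single blob. It is written against the current Azure Storage Java SDK (v12) purely for illustration (the .NET SDK follows the same pattern), and the connection string, container and blob names are placeholders, not anything from the question.

```java
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobServiceClientBuilder;
import com.azure.storage.blob.sas.BlobSasPermission;
import com.azure.storage.blob.sas.BlobServiceSasSignatureValues;
import java.time.OffsetDateTime;

public class SasIssuer {
    // The connection string (with the account key) stays on the server;
    // the client app only ever sees the signed URL.
    private static final String CONNECTION = System.getenv("STORAGE_CONNECTION_STRING");

    // Returns a URL granting read-only access to one blob for the next 15 minutes.
    public static String issueReadUrl(String containerName, String blobName) {
        BlobClient blob = new BlobServiceClientBuilder()
                .connectionString(CONNECTION)
                .buildClient()
                .getBlobContainerClient(containerName)
                .getBlobClient(blobName);

        BlobServiceSasSignatureValues sasValues = new BlobServiceSasSignatureValues(
                OffsetDateTime.now().plusMinutes(15),           // expiry
                new BlobSasPermission().setReadPermission(true) // read only
        );

        return blob.getBlobUrl() + "?" + blob.generateSas(sasValues);
    }
}
```

The app then performs a plain HTTPS GET against the returned URL before it expires; once it does, it has to ask your API again.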

Related

Added value of using Secret Manager

I have a pretty standard application written in Java which also runs queries against a DB. The application resides on GCP and the DB on Atlas.
For understandable reasons, I don't want to keep the username and password for the DB in the code.
So option number 1 that I had in mind, is to pass the username and password as environment variables to the application container in GCP.
Option number 2 is using Secret Manager in GCP and store my username and password there, and pass the GCP Credentials as an environment variable to the application container in GCP.
My question is: what is the added value of option number 2, if it has any? It seems that option 2 is even worse from a security aspect, since if some hacker gets the Google credentials, they have access to all of the secrets stored in Secret Manager.
I don't know what the best practices are or what is advised to do in such cases. Thank you for your help.
Having credentials in GCP Secret Manager will help you keep track of all the secrets and changes in a centralized location and access them globally from any of your apps.
For a standard application where one Java app is connecting to a DB, it may not add much value.
You may look into Kubernetes Secrets for that use case.
If your application resides in GCP, you don't need a service account key file (which is your security concern, and you are right; I wrote an article on this).
TL;DR: use ADC (Application Default Credentials) to automatically pick up the service account credential that is provided on Google Cloud components (look at the metadata server for more details).
Then grant this component's identity (default or user-defined, when supported), i.e. the service account email, access to your secrets.
And that's all! You have no secrets in your code or your environment variables: neither the login/password nor the service account key file.
If you have difficulties using ADC in Java, don't hesitate to share your code. I will be able to help you achieve this.
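As an illustration only (the project and secret IDs below are made up), this is roughly what reading a secret with ADC and the Secret Manager Java client looks like, assuming the component's service account has been granted the Secret Accessor role on that secret:

```java
import com.google.cloud.secretmanager.v1.AccessSecretVersionResponse;
import com.google.cloud.secretmanager.v1.SecretManagerServiceClient;
import com.google.cloud.secretmanager.v1.SecretVersionName;

public class DbConfig {
    // SecretManagerServiceClient.create() uses Application Default Credentials:
    // on GCP it picks up the service account attached to the workload from the
    // metadata server, so no key file or password lives in code or env vars.
    public static String readSecret(String projectId, String secretId) throws Exception {
        try (SecretManagerServiceClient client = SecretManagerServiceClient.create()) {
            SecretVersionName name = SecretVersionName.of(projectId, secretId, "latest");
            AccessSecretVersionResponse response = client.accessSecretVersion(name);
            return response.getPayload().getData().toStringUtf8();
        }
    }
}
```

The DB username and password can then be fetched once at startup (for example readSecret("my-project", "atlas-db-password")) and nothing sensitive needs to be passed to the container as an environment variable.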
To use Secret Manager on Google Cloud you need to install the Secret Manager Java SDK libraries. This documentation shows how to get started with the Cloud Client Libraries for the Secret Manager API; you only need to go to the Java section.
These libraries help you access your secrets so that they can be used by your app.
The following link shows how to get details about a secret by viewing its metadata. Keep in mind that viewing a secret's metadata requires the Secret Viewer role (roles/secretmanager.viewer) on the secret, project, folder, or organization.
I recommend creating a dedicated service account to handle the proper permissions for your app, because if you don't have a SA defined, the default SA is what is going to make the request, and that is not secure. You can learn more about how to create a service account in this link.
On the other hand, the following guide contains a good example of finding your credentials automatically, which is more convenient and secure than passing credentials manually.
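For reference, a small sketch of the metadata call mentioned above, using the same Java client (the project and secret names are placeholders):

```java
import com.google.cloud.secretmanager.v1.Secret;
import com.google.cloud.secretmanager.v1.SecretManagerServiceClient;
import com.google.cloud.secretmanager.v1.SecretName;

public class SecretMetadata {
    public static void main(String[] args) throws Exception {
        // Viewing metadata needs roles/secretmanager.viewer on the secret,
        // project, folder, or organization; it does not expose the payload.
        try (SecretManagerServiceClient client = SecretManagerServiceClient.create()) {
            Secret secret = client.getSecret(SecretName.of("my-project", "atlas-db-password"));
            System.out.println("Name: " + secret.getName());
            System.out.println("Created: " + secret.getCreateTime());
        }
    }
}
```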

GCP - Compute Engine to Storage per group/user access ACL with custom Auth system

I've built a simple messaging application with NodeJS on GCP that, at the moment, consists of a single Compute Engine instance. I'm using this to learn the stack and how GCP fits together.
My application has its own user/password registration system and allows users to message each other in 'groups'. These groups can consist of 1...n users, and one user is responsible for adding/removing users from a group. They can do this at any time.
I want to allow the users to upload pics, and ideally I will then store them in Google Storage. But I want to make sure that only users in a room where an image is uploaded can view that file. I know that GCP makes use of IAM roles etc., but with the authentication being in my system, am I expected to update IAM policies every time via the API? In a scaled solution would this work?
My initial thought is that I should do the authentication at an app level, e.g. my Compute Engine instance can talk to Storage, so when a user requests to view an image by its URL - such as example.com/uploads/:id -
I then validate that the current user can view the upload with id :id and, if they can, serve the image from the app. Would this work? Would it be compatible with utilising Google CDN? Is there a preferred solution for doing something like this, bearing in mind I'm not using Firebase (which I understand can use access tokens for auth) but my own authentication based on username/password combos with sessions?
For examples of sharing and collaboration scenarios that involve setting bucket and object ACLs, you may take a look at Sharing and Collaboration. As you mentioned, and as also noted here, you can create a service that authenticates users and redirects them to a URL signed by a service account; this solution also scales as the number of users grows.
If you restrict users from reading the objects directly, you must give Cloud CDN permission to read them by adding the Cloud CDN service account to Cloud Storage's ACLs.
I should also add that Cloud Storage is integrated with Firebase, and you may use Firebase Security Rules for Cloud Storage to authenticate and authorize the users.
So it looks like I actually have 2 options here.
I can use signed urls https://cloud.google.com/storage/docs/access-control/signed-urls#signed-urls and grant temporary access to the files to the users in question. I would just need to regenerate the URL whenever required (a sketch of this is included below).
My second option (even though I said I don't want to migrate) is to use Firebase Auth. I wasn't aware it actually supported email/password migration and validation and is actually free regardless of the number of users. The only thing I'm not sure about here is how Storage is configured in relation to my current GCP project.
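For the first option, here is a minimal sketch of generating a V4 signed URL with the Cloud Storage Java client (the app in the question is NodeJS, whose client has an equivalent getSignedUrl; the bucket and object names are placeholders). Note that signing from a Compute Engine identity may require the service account to be allowed to sign blobs, for example via the Service Account Token Creator role.

```java
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.net.URL;
import java.util.concurrent.TimeUnit;

public class ImageLinks {
    private static final Storage storage = StorageOptions.getDefaultInstance().getService();

    // Called only after the app has checked that the current user belongs to the
    // group that owns the upload; the returned URL expires after 15 minutes.
    public static URL signedUrlFor(String bucket, String objectName) {
        BlobInfo blob = BlobInfo.newBuilder(BlobId.of(bucket, objectName)).build();
        return storage.signUrl(blob, 15, TimeUnit.MINUTES, Storage.SignUrlOption.withV4Signature());
    }
}
```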

Encryption of csv before Upload

We have a Windows service which monitors a folder (using FileSystemWatcher in C#) for files and uploads the files to a blob. The Windows service retrieves the write-only SAS token, which is used to create the blob client that uploads to the blob, from a Web API endpoint (TLS 1.2) secured with ADFS 2.0, by passing the JWT retrieved from the ADFS WS-Trust 1.3 endpoint using a user name and password.
My experience is limited in the area of security. I have two questions.
1- Should the data be encrypted before I upload it to the blob? If yes, how can I implement it?
2- Would retrieving the SAS token from an endpoint, even though it is secured with ADFS and is over HTTPS, pose any kind of security risk?
1- Should the data be encrypted before I upload it to the blob? If yes, how can I implement it?
Per my understanding, if you want extra security during transit and your stored data to be encrypted, you could leverage client-side encryption; refer to this tutorial. In that case, you need to make programmatic changes to your application.
Alternatively, you could leverage Storage Service Encryption (SSE), which does not secure the data in transit but provides the following benefit:
SSE allows the storage service to automatically encrypt the data when writing it to Azure Storage. When you read the data from Azure Storage, it will be decrypted by the storage service before being returned. This enables you to secure your data without having to modify code or add code to any applications.
I would recommend that you just leverage HTTPS for your data in transit and SSE to encrypt your blobs at rest. For how to enable SSE, you could refer here. Additionally, you could refer to the Azure Storage security guide here.
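To illustrate that recommendation, here is a hedged sketch of the upload path once the write-only SAS token has been retrieved. The service in the question is C#, but the pattern is the same; this sketch uses the Azure Storage Java SDK v12, and the account endpoint and container name are placeholders.

```java
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobClientBuilder;

public class CsvUploader {
    // sasToken is the write-only SAS retrieved from the ADFS-protected Web API.
    public static void upload(String sasToken, String localPath, String blobName) {
        BlobClient blob = new BlobClientBuilder()
                .endpoint("https://myaccount.blob.core.windows.net") // HTTPS protects the data in transit
                .containerName("csv-uploads")
                .blobName(blobName)
                .sasToken(sasToken)
                .buildClient();

        // With SSE enabled on the account, the service encrypts the blob at rest
        // with no further code changes needed here.
        blob.uploadFromFile(localPath, true); // overwrite if the blob already exists
    }
}
```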
2- Would retrieving the SAS token from an endpoint, even though it is secured with ADFS and is over HTTPS, pose any kind of security risk?
SAS provides you with a way to grant limited permissions on resources in your storage account to other clients. For security, you should set the interval over which your SAS is valid. Also, you can limit the IP addresses from which Azure Storage will accept the SAS. Per my understanding, since the endpoint generating the SAS token is secured with ADFS 2.0 over HTTPS, I would assume it is safe enough.

Shared access policy for storing images in Azure blob storage

Is it possible to update the expiry time of a shared access policy on a blob container through the Azure portal?
Is it possible to set an indefinite expiry time for a shared access policy? I have an application with an HTML editor where users upload images to Azure blob storage, so they can upload images and see them through the generated URI. I used a shared access policy with READ permission so that users can see the images inside the HTML. Is it good practice to set an indefinite expiry time on a shared access policy with READ permission?
I don't want my images to be public; I just want authenticated users to be able to see the images. I don't understand the advantage of using SAS in my case, as any user having the SAS can see the image (for example my friend who receives the image URI with SAS). So, is there any advantage? Can anyone explain this to me?
Is it possible to update the expiry time of a shared access policy on a blob container through the Azure portal?
As of today, it is not possible to manage shared access policies on the blob container using Azure Portal. You would need to use some other tools or write code to do so.
Is it possible to set an indefinite expiry time for a shared access policy?
You can create a shared access policy without specifying an expiry time. What that means is that you would need to specify an expiry time when creating a shared access signature. What you could do (though it is not recommended - more on this below) is use something like 31st December 9999 as expiry date for shared access policy.
Is it good practice to set an indefinite expiry time on a shared access policy with READ permission?
What is recommended is that you set the expiry time to an appropriate value based on your business needs. Generally it is recommended that you keep the expiry time in the shared access signature to a small value so that the SAS is not misused, as you're responsible for paying for the data in your storage account and the outbound bandwidth.
I don't understand the advantage of using SAS in my case, as any user having the SAS can see the image (for example my friend who receives the image URI with SAS). So, is there any advantage? Can anyone explain this to me?
The biggest advantage of SAS is that you can share resources in your storage account without sharing the storage access keys. Furthermore, you can restrict access to these resources by specifying appropriate permissions and a suitable expiry. While it is true that anyone with the SAS URL can access the resource (in case your user decides to share the SAS URL with someone else), and it is not a 100% fool-proof solution, there are ways to mitigate this. You could create short-lived SAS URLs and also restrict the usage of SAS URLs to certain IP addresses only (IP ACLing).
You may find this link helpful regarding some of the best practices around shared access signature: https://azure.microsoft.com/en-in/documentation/articles/storage-dotnet-shared-access-signature-part-1/#best-practices-for-using-shared-access-signatures.
Firstly, if you want to change the expiry time of an ad-hoc Shared Access Signature (SAS), you need to regenerate it (re-sign it) and redistribute this to all users of the SAS. This doesn't affect the validity of the existing SAS you're currently using.
If you want to revoke a SAS, you need to regenerate the Access Key for the Storage Account that signed the SAS in the first place (which also revokes all other SASs that it has signed). If you've used the Access Key elsewhere, you'll need to update those references as well.
A good practice is to use an Access Policy rather than ad-hoc SASs, as this provides a central point of control for the:
Start Time
End Time
Access permissions (Read, Write etc)
SASs linked to an Access Policy can be revoked by changing the expiry time to the past. Whilst you can delete the Access Policy to achieve the same effect, the old SAS will become valid again if you re-create the Access Policy with the same name.
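As a rough sketch of that pattern with the Azure Storage Java SDK v12 (the policy id, permission string and one-week expiry below are illustrative, not taken from the question):

```java
import com.azure.storage.blob.BlobContainerClient;
import com.azure.storage.blob.models.BlobAccessPolicy;
import com.azure.storage.blob.models.BlobSignedIdentifier;
import com.azure.storage.blob.sas.BlobServiceSasSignatureValues;
import java.time.OffsetDateTime;
import java.util.Collections;

public class PolicyBasedSas {
    public static String createPolicyAndSas(BlobContainerClient container) {
        // Stored access policy: the expiry and permissions live on the container,
        // so they can later be shortened (or moved into the past to revoke the SAS).
        BlobSignedIdentifier policy = new BlobSignedIdentifier()
                .setId("images-read")
                .setAccessPolicy(new BlobAccessPolicy()
                        .setPermissions("r")
                        .setExpiresOn(OffsetDateTime.now().plusDays(7)));
        container.setAccessPolicy(null, Collections.singletonList(policy));

        // The SAS only references the policy by id and carries no expiry of its own.
        return container.generateSas(new BlobServiceSasSignatureValues("images-read"));
    }
}
```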
It's not possible to set an indefinite expiry time for a SAS, and neither would it be good practice to - think of SASs as being like shortcuts in a file system to a file that bypass the file permissions. A shortcut can't be revoked per se or modified after you've created it - anyone anywhere in the world who obtains a copy would receive the same access.
For example, anyone with access to your application (or anyone with access to the network traffic, if you are using HTTP) could keep a copy of the SAS URL, and access any of the resources in that container - or distribute the URL and allow other unauthorised users to do so.
In your case, without SASs you would have served the images from a web server that required authentication (and maybe authorisation) before responding to requests. This introduces overhead, costs and potential complexity which SASs were partly designed to solve.
As you require authentication/authorisation for the application, I suggest you set up a service that generates SASs dynamically (programmatically) with a reasonable expiry time, and refers your users to these URLs.
Reference: Using Shared Access Signatures (SAS)
Edit: Microsoft Azure Storage Explorer is really useful for managing Access Policies and generating SASs against them.
You can set a very long expiry time, but this is not recommended by Microsoft, and no security expert will ever recommend such a thing, as it defeats the idea of SAS.
https://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-shared-access-signature-part-1/
As for the second part of your question, I am not sure how you allow them to upload the images: is it directly using a SAS at the container level, or do they post to some code in your application, which connects to Azure Storage and uploads the files?
If you have a back-end service, then you can eliminate SAS and make your service work as a proxy: only your service will read and write to Azure Storage using the storage account access keys, and the clients will access your service to read and write the images they need. In this case the clients will not have direct access to Azure Storage.

Is there any way to restrict access to an Azure blob to a single IP?

I am trying to limit access to Azure blobs. I presently can provide a link that is time restricted to 5 minutes using Shared Access Signature. However just wondering if there is any mechanism to require more security such as an IP address?
If not, I assume I just have to make the client go via a web role and then check there?
Update: This is supported now! Details are in the answer below describing SAS IP-address whitelisting. The rest is still of interest, so I left it in.
(Originally: No IP filters supported directly - of course you can do this in your own Web Role as you suggest.) But this is why you should be confident with Shared Access Tokens*:
The only way that SAS blob URL could get mass published and attacked in 5 mins is if there was malicious intent from the recipient. So whatever the method of securing it (e.g. IP restriction) you would be vulnerable because you have given an attacker access. They could just download the data and publish that instead if it was IP restricted.
The shared access token combined with the timeout really prevents brute force attacks guessing the URL or any carelessness in leaving it lying about in an unsecured location over time.
So as long as you trust the person you are sharing with and you transport it to them in a secure manner you are fine.
*in most scenarios
Looks like Azure Storage Service has a new shared access signatures feature which allows IP-address whitelisting.
A SAS gives you granular control over the type of access you grant to clients who have the SAS, including:
The interval over which the SAS is valid, including the start time and the expiry time.
The permissions granted by the SAS. For example, a SAS for a blob might grant read and write permissions to that blob, but not delete permissions.
An optional IP address or range of IP addresses from which Azure Storage will accept the SAS. For example, you might specify a range of IP addresses belonging to your organization.
The protocol over which Azure Storage will accept the SAS. You can use this optional parameter to restrict access to clients using HTTPS.
Source: MSDN
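As a hedged sketch of how those restrictions (IP range, HTTPS only, short expiry) look when generating a blob SAS with the Azure Storage Java SDK v12 - the 5-minute window mirrors the question, everything else is illustrative:

```java
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.sas.BlobSasPermission;
import com.azure.storage.blob.sas.BlobServiceSasSignatureValues;
import com.azure.storage.common.sas.SasIpRange;
import com.azure.storage.common.sas.SasProtocol;
import java.time.OffsetDateTime;

public class RestrictedSas {
    // Read-only SAS that Azure Storage will only accept from the given IP,
    // over HTTPS, and only for the next 5 minutes.
    public static String issue(BlobClient blob, String clientIp) {
        BlobServiceSasSignatureValues values = new BlobServiceSasSignatureValues(
                OffsetDateTime.now().plusMinutes(5),
                new BlobSasPermission().setReadPermission(true))
                .setSasIpRange(new SasIpRange().setIpMin(clientIp).setIpMax(clientIp))
                .setProtocol(SasProtocol.HTTPS_ONLY);
        return blob.getBlobUrl() + "?" + blob.generateSas(values);
    }
}
```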
There's no additional mechanism for IP filtering. You can direct all traffic through your Web Role and filter traffic there, or use SAS (as you already suggested).
