I have a web app (ASP.NET Core) which authenticates users with Azure B2C (OIDC). I now want to allow users to access 'protected resources' such as images. My plan is to put these in Azure Blob Storage, but I need to protect them so that only the authorized user can access their own files. I don't want a scenario where anyone who knows the URL of a file can access it; only the logged-in user should.
Is this possible with Azure B2C and Blob storage, and if so, what is the best approach to secure these?
I was thinking of creating a container per user, with their B2C Object ID as the container name, so the structure may look like:
Files/04aaffcc-c725-4ff5-9565-cc2fb3d7b4df/image1.jpg
Files/04aaffcc-c725-4ff5-9565-cc2fb3d7b4df/image2.jpg
Files/04aaffcc-c725-4ff5-9565-cc2fb3d7b4df/movie1.mp4
Files/81f052a1-c8c2-4db5-9872-c16c803d1c3f/image66.jpg
Files/81f052a1-c8c2-4db5-9872-c16c803d1c3f/movie-19.mp4
So I need to restrict access so that only the logged-in user with the correct object ID (e.g. 81f052a1-c8c2-4db5-9872-c16c803d1c3f) can access their own resources (e.g. image66.jpg).
Any ideas on how best to implement this and what constructs Azure supports?
Thanks
I am assuming that users can't access the blob storage files directly; the storage should be abstracted by your service, since the storage implementation can change at any time.
I would also have another folder (named images) inside the objectId container, because there might be different types of files in the future.
Then let's say the service is hosted at http://contoso.com. The image URL provided to the user would be http://contoso.com/userImages/image123.jpg
When someone tries to access the resource, I would read the objectId from the token and grant access accordingly.
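A minimal sketch of that check (in Python for brevity; the ASP.NET Core equivalent reads the same claim from the authenticated principal). It assumes the user's object ID arrives in the standard `oid` claim of the validated B2C token, and that blob paths are laid out with the owner's object ID as the first segment, as in the question:

```python
# Illustrative authorization check: the user's B2C object ID (from the
# validated token) must match the owner segment of the requested blob path.
def is_authorized(token_claims: dict, requested_blob_path: str) -> bool:
    object_id = token_claims.get("oid")  # B2C puts the user's object ID in the 'oid' claim
    if not object_id:
        return False
    # Paths are laid out as "<objectId>/<folder>/<file>", e.g.
    # "81f052a1-.../images/image66.jpg"
    owner_segment = requested_blob_path.split("/", 1)[0]
    return owner_segment == object_id

claims = {"oid": "81f052a1-c8c2-4db5-9872-c16c803d1c3f"}
print(is_authorized(claims, "81f052a1-c8c2-4db5-9872-c16c803d1c3f/images/image66.jpg"))  # True
print(is_authorized(claims, "04aaffcc-c725-4ff5-9565-cc2fb3d7b4df/images/image1.jpg"))   # False
```

The key point is that the blob path is derived server-side from the token, never taken from the client.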
You should think about sharing scenarios as well: you will need another table recording who owns each resource and who it is shared with. ObjectId-based containers are not useful in such cases; a flat container with GUIDs as blob names, plus a table mapping each GUID to the file name and other properties, works better.
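That flat-container-plus-mapping-table idea can be sketched as follows (in-memory dicts stand in for real tables; all names here are illustrative, not an Azure API):

```python
import uuid

# Flat-container layout: blobs are named by GUID, and a mapping table records
# the original file name, the owner, and who the blob is shared with.
blob_metadata = {}   # blob GUID -> {"file_name": ..., "owner": ...}
shares = set()       # (blob GUID, user object ID) pairs

def upload(owner_oid: str, file_name: str) -> str:
    blob_id = str(uuid.uuid4())
    blob_metadata[blob_id] = {"file_name": file_name, "owner": owner_oid}
    return blob_id

def share(blob_id: str, with_oid: str) -> None:
    shares.add((blob_id, with_oid))

def can_read(blob_id: str, oid: str) -> bool:
    meta = blob_metadata.get(blob_id)
    return bool(meta) and (meta["owner"] == oid or (blob_id, oid) in shares)

blob_id = upload("81f052a1-c8c2-4db5-9872-c16c803d1c3f", "movie-19.mp4")
share(blob_id, "04aaffcc-c725-4ff5-9565-cc2fb3d7b4df")
print(can_read(blob_id, "04aaffcc-c725-4ff5-9565-cc2fb3d7b4df"))  # True: shared with this user
print(can_read(blob_id, "someone-else"))                           # False
```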
Use Shared Access Signatures: https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview.
I need to enable one external user to access a single directory in a single container in my data lake, in order to upload some data. From what I see in the documentation, it should be possible simply to use RBAC & ACLs, so that the user can authenticate later on using PowerShell and Connect-AzureAD (or obtain an OAuth2 token).
However, I am having trouble with all those inherited permissions. Once I add a user to my Active Directory, he is not able to see anything unless I give him at least Reader access at the subscription level. This gives him at least Reader permission on all the resources in this subscription, which cannot be removed.
Is it possible to configure this access in such a way, that my user is only able to see a single datalake, single container, and a single folder within this container?
If you want just the one user to access only a single directory/container in your storage account, you should rather look at Shared Access Signatures or Stored Access policies.
For SAS : https://husseinsalman.com/securing-access-to-azure-storage-part-4-shared-access-signature/
For SAS built on top of Stored Access Policies: https://husseinsalman.com/securing-access-to-azure-storage-part-5-stored-access-policy/
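For reference, a stored access policy is defined on the container as a SignedIdentifiers document (set via the Set Container ACL operation). A SAS then references the policy's `Id` instead of embedding the permissions and expiry directly, so the policy can be changed or revoked later without reissuing the token. The dates and policy name below are illustrative values only:

```xml
<?xml version="1.0" encoding="utf-8"?>
<SignedIdentifiers>
  <SignedIdentifier>
    <Id>read-only-policy</Id>
    <AccessPolicy>
      <Start>2024-01-01T00:00:00Z</Start>
      <Expiry>2024-12-31T00:00:00Z</Expiry>
      <Permission>r</Permission>
    </AccessPolicy>
  </SignedIdentifier>
</SignedIdentifiers>
```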
Once you have configured the permissions for just that directory/container, you can send the Shared Access Signature to the user, and he/she can use Azure Storage Explorer to perform file upload/delete and other actions on your container.
Download Azure storage explorer here : https://azure.microsoft.com/en-us/features/storage-explorer/#overview
For how to use Azure Storage Explorer : https://www.red-gate.com/simple-talk/cloud/azure/using-azure-storage-explorer/
More on using Azure storage explorer with azure data lake Gen 2 : https://medium.com/microsoftazure/guidance-for-using-azure-storage-explorer-with-azure-ad-authorization-for-azure-storage-data-access-663c2c88efb
Azure Storage Account
In one of our use cases, we would like to use Azure Storage to share data with customers so that they can upload their data to us.
In that context, we are planning to create a storage account per customer. In order for the customer to access the account, we are planning to share the storage account keys.
We are facing the following issues:
How do we create keys specific to an Azure Storage container, so that a customer can only access that specific container?
Is it possible to have individual keys and access at the container level?
For certain containers, we want to give read-write access.
For others, we want to give only read access.
If I have the storage account keys, does that mean I have access to everything under that storage account?
Is there a better solution to this? Essentially we need an FTP site for customers to upload data.
Sounds like you want to use a shared access signature (SAS):
https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview
A shared access signature (SAS) provides secure delegated access to resources in your storage account without compromising the security of your data. With a SAS, you have granular control over how a client can access your data. You can control what resources the client may access, what permissions they have on those resources, and how long the SAS is valid, among other parameters.
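The controls described in that quoted paragraph appear as query parameters in the SAS token itself. Parsing an illustrative (non-working, fake-signature) SAS string makes the structure visible:

```python
from urllib.parse import parse_qs

# Illustrative SAS token: the signature value is fake, but the parameter
# names (sv, sr, sp, st, se, spr, sig) are the real ones Azure Storage uses.
sas = ("sv=2022-11-02&sr=b&sp=r"
       "&st=2024-01-01T00%3A00%3A00Z&se=2024-01-01T01%3A00%3A00Z"
       "&spr=https&sig=FAKESIGNATURE")

params = {k: v[0] for k, v in parse_qs(sas).items()}
print(params["sp"])   # "r"  -> read-only permission
print(params["sr"])   # "b"  -> scope: a single blob
print(params["se"])   # expiry time: the SAS stops working after this instant
```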
You can't have an access key at the container level; keys are for the whole storage account.
To give access at the container level (or even finer grain) you need a Shared Access Signature. Documentation here
You can have as many SAS tokens as you need, and you can define them with the desired permissions (read, read-write, etc.).
I have two Azure Blob Storage containers, Container-A and Container-B. I would like to grant read-only access to another Azure user for Container-A. The second container, Container-B, should not be visible to that user. The Azure user will be accessing the blobs in Container-A from his Azure virtual machine. How do I achieve this? Reading on the web, it seems that I would need to generate a Shared Access Signature, but I am not sure how.
Exactly, that is the scenario where you want to use SAS.
First, please read the Azure Storage security guidance to make sure that you are aware of all of the available options.
Here is the very helpful guidance on the SAS model.
Second, you need to generate the SAS with policies (please refer to the guidance above). It can be done programmatically (sources are available in the guidance), and then you may give that SAS link to the user any way you want: it can be an online page where the user grabs the string, or you can write a simple tool to generate the SAS. Be aware, however, that SAS tokens have a lifetime and you need to renew them periodically.
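Under the hood, generating a SAS is just computing an HMAC-SHA256 over a service-defined string-to-sign, using the storage account key. The exact field list and ordering depend on the SAS type and API version (see the guidance above), so the `example` string below is a simplified illustration of the idea, not the real format, and the key is fake:

```python
import base64, hashlib, hmac

def sign_sas(account_key_b64: str, string_to_sign: str) -> str:
    # The account key is base64-encoded; the signature is the base64-encoded
    # HMAC-SHA256 of the UTF-8 string-to-sign under the decoded key.
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

# Simplified, illustrative string-to-sign: the real one is a newline-joined
# list of fields (permissions, start, expiry, canonicalized resource, ...).
example = "r\n2024-01-01T00:00:00Z\n2024-01-01T01:00:00Z\n/blob/myaccount/container-a/image66.jpg"
fake_key = base64.b64encode(b"not-a-real-account-key").decode("utf-8")
print(sign_sas(fake_key, example))
```

In practice you would not hand-roll this; the Azure Storage SDKs generate the string-to-sign and signature for you, and this sketch only shows why possession of the account key lets your service mint tokens offline, without calling Azure.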
Currently I'm playing around with Azure and thinking about a multi-tenant web app where users can create an instance of the app, and more users can register to that instance to upload and share files within it. I've created a blob storage service with several containers. However, I'm not sure how customers would feel about sharing their blob service with other users, with files separated only by containers. I would like each user to get his own blob service instead, while the web app is still served by a single web worker role.
This would be easy for instances created by hand; however, I want the blob service to be created automatically when a user registers and creates his instance of the web app. Unfortunately, I haven't found any information about how to accomplish this. I've only found the blob storage API for querying the service, not for creating it.
Can anybody lead me in the right direction? Is this even possible?
You can create a storage account programmatically (see "Create Storage Account": http://msdn.microsoft.com/en-us/library/hh264518.aspx), but I wouldn't recommend creating a different account for each user. The limit on how many storage accounts can be created per subscription is fairly low. (I believe the default is five and you can call to get your quota increased to twenty.)
In general, the recommendation is to go ahead and use the same storage account for all your customers. I believe your concern is about data security, but adding multiple storage accounts doesn't really change the security dynamic. (The trust boundary is still between you and the end user, since only your code will directly access storage.)
Given a stored file on Azure Storage (blobs, tables or queues -- doesn't matter), is it possible to allow access to it for all, but only based on permissions?
For example, I have a big storage of images, and a DB containing users and authorizations. I want user X to only be able to access images Y and Z. So, the URL will be generally inaccessible, unless you provide some sort of a temporary security token along with it. How's that possible?
I know I can shut the storage off from the outside world and allow access to it only through an application checking this stuff, but this would require the application to be on Azure as well, and an on-premise app wouldn't be able to deliver any content from Azure Storage.
It is my understanding that most CDNs provide such a capability, and I sure hope Azure provides a solution for this as well!
Itamar.
I don't think you can achieve this level of access filtering. The only methods I'm aware of are described in this MSDN article:
Managing Access to Containers and Blobs
and in this blog post, which includes a bit of code to implement it:
Using Container-Level Access Policies in Windows Azure Storage
I'm not sure this would fit your need. If I understood it right, I would do it this way:
1. Organize your content in containers that match the roles.
2. In your on-premise application, check whether the user has access and, if yes, generate the right URL to give him temporary access to the resource.
Of course this only works if the users go through a central point to get access to the content in the blob. If they bookmark the generated link, it will fail once the expiration date has passed.
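That expiry behaviour can be sketched with a signed, time-limited link. This is pure illustration of the pattern, not the real SAS format: `SECRET` stands in for the storage account key, and `se`/`sig` mimic the SAS parameter names.

```python
import base64, hashlib, hmac, time

SECRET = b"illustrative-signing-secret"  # stands in for the storage account key

def make_link(path: str, ttl_seconds: int, now=None) -> str:
    # Embed an absolute expiry timestamp and sign (path, expiry) together.
    expiry = int((now if now is not None else time.time()) + ttl_seconds)
    payload = f"{path}|{expiry}".encode()
    sig = base64.urlsafe_b64encode(hmac.new(SECRET, payload, hashlib.sha256).digest()).decode()
    return f"{path}?se={expiry}&sig={sig}"

def link_is_valid(link: str, now=None) -> bool:
    path, query = link.split("?", 1)
    params = dict(p.split("=", 1) for p in query.split("&"))
    expiry = int(params["se"])
    payload = f"{path}|{expiry}".encode()
    expected = base64.urlsafe_b64encode(hmac.new(SECRET, payload, hashlib.sha256).digest()).decode()
    if not hmac.compare_digest(expected, params["sig"]):
        return False  # tampered path or expiry
    return (now if now is not None else time.time()) < expiry

link = make_link("container-a/image66.jpg", ttl_seconds=60, now=1_000_000)
print(link_is_valid(link, now=1_000_030))  # True: still within the 60-second window
print(link_is_valid(link, now=1_000_120))  # False: past expiry
```

Because the expiry is inside the signed payload, a bookmarked link can't be extended by editing the timestamp.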
Good luck.
This is actually possible to implement with Blob storage. Consider (a) a UI that works like an explorer, and (b) users that are already authenticated (you could use Access Control Service, but don't need to).
The explorer-like UI could expose resources appropriate to the authenticated user. The underlying access to these resources would be controlled by Shared Access Signatures at the granularity appropriate for the objects. For example, restricting a user to seeing only one file in a folder, or the whole folder, or granting the ability to create a file in a folder, can all be expressed.
This explorer-like UI would need access to logic that presents the right files for a given user, while also creating the appropriate Shared Access Signatures as needed. Note that this logic would not need to be hosted in Azure; it would just need access to the proper storage key (from the Azure portal).
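The backing logic described above can be sketched roughly as follows. The `ACCESS` table, the account/container names, and `make_signed_url` are all hypothetical stand-ins; real code would look the paths up in a database and sign each URL with the storage key:

```python
import hashlib

# Hypothetical table of which blobs each authenticated user may see.
ACCESS = {
    "user-a": ["folder1/report.pdf", "folder1/photo.jpg"],
    "user-b": ["folder2/movie.mp4"],
}

def make_signed_url(blob_path: str) -> str:
    # Stand-in for real SAS generation: here we just derive a dummy token,
    # whereas real code would sign with the storage account key.
    token = hashlib.sha256(blob_path.encode()).hexdigest()[:16]
    return f"https://myaccount.blob.core.windows.net/files/{blob_path}?sig={token}"

def list_for_user(user: str):
    # The UI shows only this user's files, each paired with a per-blob URL.
    return [(path, make_signed_url(path)) for path in ACCESS.get(user, [])]

for path, url in list_for_user("user-a"):
    print(path, "->", url)
```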