Creating a blob storage service programmatically in Azure

Currently I'm playing around with Azure and thinking about a multi-tenant web app where users can create an instance of the app, within which more users can then register to upload and share files. I've created a blob storage service and several containers. However, I'm not sure how customers will feel about sharing their blob service with other users, with files separated only by containers. I would prefer that each user instead gets his own blob service, while the web app is still served by a single web worker role.
This sounds easy for instances you create by hand, but I want the blob service to be created automatically when a user registers and creates his instance of the web app. Unfortunately, I haven't yet found any information on how to accomplish this; I've only found the blob storage API for querying the service, not for creating it.
Can anybody lead me in the right direction? Is this even possible?

You can create a storage account programmatically (see "Create Storage Account": http://msdn.microsoft.com/en-us/library/hh264518.aspx), but I wouldn't recommend creating a separate account for each user. The limit on how many storage accounts can be created per subscription is fairly low. (I believe the default is five, and you can contact support to get your quota increased to twenty.)
In general, the recommendation is to go ahead and use the same storage account for all your customers. I believe your concern is data security, but adding multiple storage accounts doesn't really change the security dynamic: the trust boundary is still between you and the end user, since only your code accesses storage directly.
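For completeness, here is a minimal sketch of programmatic account creation. Note the linked article describes the older Service Management REST API; this sketch instead assumes the current azure-mgmt-storage SDK for Python, and the subscription id, resource group, account name, and region are all placeholders.

```python
# A sketch only: the linked article covers the older Service Management REST
# API, while this uses the current azure-mgmt-storage SDK for Python. The
# subscription id, resource group, account name, and region are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import StorageAccountCreateParameters, Sku

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# begin_create returns a poller; .result() blocks until provisioning finishes.
poller = client.storage_accounts.begin_create(
    resource_group_name="my-resource-group",   # placeholder
    account_name="tenant12345store",           # 3-24 lowercase alphanumerics, globally unique
    parameters=StorageAccountCreateParameters(
        sku=Sku(name="Standard_LRS"),
        kind="StorageV2",
        location="eastus",
    ),
)
account = poller.result()
print(account.name, account.provisioning_state)
```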

Related

Protecting assets in BLOB storage using Azure B2C

I have a web app (ASP.NET Core) which authenticates users with Azure B2C (OIDC). I now want to allow the users to access 'protected resources' such as images etc. My plan is to put these in Azure blob storage, but I need to protect them so that only the authorized user can access their own images. I don't want the scenario where anyone who knows the URL of a file can access it; only the logged-in user should be able to.
Is this possible with Azure B2C and Blob storage, and if so, what is the best approach to secure these?
I was thinking of creating a container per user, with their B2C Object ID as the container name, so the structure may look like:
Files/04aaffcc-c725-4ff5-9565-cc2fb3d7b4df/image1.jpg
Files/04aaffcc-c725-4ff5-9565-cc2fb3d7b4df/image2.jpg
Files/04aaffcc-c725-4ff5-9565-cc2fb3d7b4df/movie1.mp4
Files/81f052a1-c8c2-4db5-9872-c16c803d1c3f/image66.jpg
Files/81f052a1-c8c2-4db5-9872-c16c803d1c3f/movie-19.mp4
So I need to restrict access so that only the logged-in user with the correct object id (e.g. 81f052a1-c8c2-4db5-9872-c16c803d1c3f) can access their own resources (e.g. image66.jpg).
Any ideas on how best to implement this and what constructs Azure supports?
Thanks
I am assuming that users can't access the blob storage files directly. The storage should be abstracted by your service, since the storage layer and its implementation can change at any time.
I would also put another folder (named images) inside the objectId container, because there might be different types of files in the future.
Then, say the service is hosted at http://contoso.com. The image URL given to the user would be http://contoso.com/userImages/image123.jpg.
When someone tries to access the resource, I would read the objectId from the token and grant access accordingly.
You should think about sharing scenarios as well; you will need another table recording who owns a resource and whom it is shared with. ObjectId-based containers are not useful in such cases; it might be better to use a flat container with GUIDs as image names, plus a mapping from image names to file names and other properties.
Use Shared Access Signatures: https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview.
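As a sketch of the SAS route, not the asker's final design: a container per user named by the B2C object id (lowercase GUIDs are valid container names), with a short-lived read-only SAS minted after the app validates the caller's token. The account name, key, and helper function below are hypothetical.

```python
# Hypothetical helper: mint a short-lived, read-only URL for one blob in the
# caller's own container, after the app has already validated the B2C token.
from datetime import datetime, timedelta, timezone
from azure.storage.blob import generate_blob_sas, BlobSasPermissions

ACCOUNT = "mystorageaccount"   # placeholder
ACCOUNT_KEY = "<account-key>"  # placeholder; must stay server-side

def temporary_image_url(user_object_id: str, blob_name: str) -> str:
    """Issue a 15-minute read-only URL scoped to the caller's own container."""
    sas = generate_blob_sas(
        account_name=ACCOUNT,
        container_name=user_object_id,   # container per user, named by object id
        blob_name=blob_name,
        account_key=ACCOUNT_KEY,
        permission=BlobSasPermissions(read=True),
        expiry=datetime.now(timezone.utc) + timedelta(minutes=15),
    )
    return f"https://{ACCOUNT}.blob.core.windows.net/{user_object_id}/{blob_name}?{sas}"
```

Because the SAS expires quickly, a leaked or bookmarked URL stops working on its own, which addresses the "anyone who knows the URL" concern.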

Is it possible to aggregate Azure resources from different subscriptions?

Our team has Windows Azure MSDN - Visual Studio Premium subscriptions for all our devs. I have been taking advantage of the $100 per month allowance and am building more infrastructure in the cloud.
However, I would like other members of our team to access some of these assets. I am quite new to the Azure infrastructure, so this might be a dumb question, but can they access my blobs? And can I control exactly who can access my blobs?
They can obviously RDP into my VMs; that's not an issue. I assume they can also hit my VMs via the IP address from inside Azure, etc. However, I am more interested in the blobs, mostly because I am starting to upload a lot of utility data (large sample datasets, common software we all install, etc.) and I would like to avoid all of us having to upload it again for each subscription.
As of today (11/8/2013), you cannot "pool" MSDN resources, meaning you can't have 4 subscriptions add up to $400/month and spend it à la carte on cloud services.
You can have one admin (or several) across multiple subscriptions; this lets you view the different subscriptions in the portal and manage them in a single place.
You can also have different deployment profiles, so one Visual Studio instance can deploy to different Azure accounts.
Specific to your question: you have blob access keys, and if you share the storage account name and key, then yes, they can access your data located there.
Yes, it is possible to control access to your blobs by using SAS (Shared Access Signatures).
SAS grants granular access to containers, blobs, tables, and queues.
These should be good resources to start with:
Manage Access to Windows Azure Storage Resources
Create and Use a Shared Access Signature
"However, I would like other members of our team to access certain of the assets. I am quite new to the Azure infrastructure, so this might be a dumb question. But can they access my blobs? and can I control exactly who can access my blobs?"
To answer that question specifically: yes, your team members can access the data stored in any blob storage account in any of your subscriptions. There are two ways to give them access to blob storage:
By giving them the account name/account key: with this, they get full access to the storage account and essentially become owners of it.
By using a Shared Access Signature: if you want to give them restricted access to blob storage, use SAS as described by Dan Dinu. SAS gives you a URL with which anyone in possession of that URL can explore storage (by writing some code); however, it is not possible to tell which user accessed which resource. For that you would need to build something of your own.
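As a sketch of that second option (all account and container names below are placeholders), the snippet issues a read/list container SAS, valid for a week, that teammates can use without ever holding the account key itself:

```python
# A sketch with placeholder names: a read/list SAS for one container that
# teammates can use without ever being given the account key itself.
from datetime import datetime, timedelta, timezone
from azure.storage.blob import generate_container_sas, ContainerSasPermissions

sas = generate_container_sas(
    account_name="teamsharedstore",    # placeholder
    container_name="common-datasets",  # placeholder
    account_key="<account-key>",       # placeholder; never hand out the key itself
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=7),
)
print(f"https://teamsharedstore.blob.core.windows.net/common-datasets?{sas}")
```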

Is this a sensible Azure Blob Storage setup and are there restructuring tools to help me migrate to it?

I think we have gone slightly wrong in the way we have used Azure storage in a SaaS system. We created a storage account per client (security was the prime consideration) and containers per system area, e.g. Vehicle, Work, etc.
Having done further reading, it seems the suggestion is that we should have used one account for all clients. Each client would have a container (so we can create it programmatically) which we then secure, and files would be structured using a "virtual" folder structure, e.g. a container called "Client A", with files for the Jobs (in the Work area of the system) stored as Work/Jobs/{entity id}/blah.pdf. Does this sound sensible?
If so, we now have about 10 accounts that we need to restructure. Are there any tools that will let us easily copy one account's contents into another account's containers? I appreciate we probably can't move the files between accounts (we set them up ages ago, so we can't use the native copy function), so I guess it will be some sort of copy. There are GBs of files across all the accounts.
It may not be such a bad idea to keep different storage accounts per client. The benefits of doing that (to me) are:
Better security as mentioned by you.
You'll be able to achieve better throughput per client, as each client will have their own storage account. If you keep one storage account for all clients and one client starts hitting that account hard, the other clients will be impacted.
Better scalability. Each storage account can hold up to 200 TB of data. So if you keep just one storage account and assume each client consumes 100 GB of data, you'll be able to accommodate only 2,000 clients (I hope my math is right :)). With individual storage accounts, you won't be restricted in that sense.
There are some downsides as well. Some of them are:
Management would be a nightmare. Imagine you have 2,000 customers: you would end up managing 2,000 storage accounts.
You may be limited by Windows Azure. Currently, by default you get about 10 or 20 storage accounts per subscription, and you would need to contact support to manually raise that limit. They can do that for you, but I would imagine you want this to be a self-service model where you can create as many storage accounts as you need without contacting support.
Now, coming to your question about tooling: you could write something of your own that makes use of the Copy Blob functionality, which lets you copy blob data across storage accounts asynchronously. Basically, you would do the following (a sketch follows the list):
First, create a blob container for each client in the target storage account.
Enumerate all blob containers in the source storage account.
For each blob container in the source storage account, enumerate the blobs.
Copy each blob asynchronously to the client's blob container in the target storage account.
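Here is a rough sketch of those four steps using the modern Python azure-storage-blob SDK (the answer itself predates this SDK); the connection strings are placeholders, and paging, retries, and copy-status polling are omitted for brevity.

```python
# Rough sketch of the four steps above; connection strings are placeholders.
from azure.storage.blob import BlobServiceClient

src = BlobServiceClient.from_connection_string("<source-connection-string>")
dst = BlobServiceClient.from_connection_string("<target-connection-string>")

for container in src.list_containers():
    # Step 1: matching container per client in the target account.
    dst_container = dst.get_container_client(container.name)
    if not dst_container.exists():
        dst_container.create_container()

    # Steps 2-4: enumerate source blobs and start asynchronous server-side copies.
    src_container = src.get_container_client(container.name)
    for blob in src_container.list_blobs():
        source_url = src_container.get_blob_client(blob.name).url
        # If the source container is private, this URL must also carry a SAS token.
        dst_container.get_blob_client(blob.name).start_copy_from_url(source_url)
```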
If you're a PowerShell fan, you can look into Cerebrata's Azure Management Cmdlets (http://www.cerebrata.com/Products/AzureManagementCmdlets), which wrap this functionality. I could have recommended Cerebrata's Azure Management Studio as well, but I haven't tried that functionality there just yet. [Disclosure: I'm one of the devs on the Cerebrata team.]
Hope this helps.
Adding to Gaurav Mantri's answer...
You can have a shared storage account for customers and use Shared Access Signatures (SAS) to limit access to a particular container or blob (as well as to tables and queues)...
http://msdn.microsoft.com/en-us/library/windowsazure/hh508996.aspx

Getting Started with Azure Question

I'm trying to get up and running with Windows Azure. I understand that I need to create a "Storage Account". However, what I'm confused about is how I should set it up. For instance, my Azure subscription is set to my company name. I intend to have multiple ASP.NET web applications (web roles) associated with my subscription, and each web application will have its own database.
My question is, should each web application have its own storage account? Or should only one storage account be used for all of my projects?
Thank you!
There's no one way to answer this, but here are some thoughts to help with your decision:
Each storage account is limited to 100 TB. If you feel that you will push this limit across multiple websites, then definitely create multiple storage accounts.
To make billing easier, I'd suggest separate storage accounts.
Storage accounts have a scalability target of a few thousand transactions per second across the entire storage account. For performance purposes, it's probably better to have separate storage accounts.
Consider putting your diagnostic data in a separate storage account. This way, you can safely give your storage account key to a third party like ParaLeap (creators of AzureWatch) for monitoring your app, without giving away the key to real customer data, for instance.
If you need more than 5 storage accounts, you'll need to contact customer support to increase this limit.
Windows Azure Storage is a simple blob storage service, for when your app needs a file store. Any application, not just Azure web roles, can target the storage service. It's kind of like Amazon S3, if you're familiar with that.
Storage services are not required to run Azure applications; you just need a "compute" instance.
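As a quick illustration of that point, the same few lines work from a web role, a desktop app, or a script running anywhere with network access; this sketch uses the modern Python SDK, and the connection string and names are placeholders.

```python
# Minimal sketch: connect to a storage account and upload one blob.
# Connection string, container, and file names are placeholders.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("app-files")
if not container.exists():
    container.create_container()

# Upload a local file into a "virtual folder" path inside the container.
with open("report.pdf", "rb") as data:
    container.upload_blob(name="reports/report.pdf", data=data, overwrite=True)
```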

Allowing access to Azure Storage nodes to select users?

Given a file stored on Azure Storage (blobs, tables, or queues -- it doesn't matter which), is it possible to make it reachable by everyone, but only based on permissions?
For example, I have a big store of images and a DB containing users and authorizations. I want user X to be able to access only images Y and Z. So the URL would be generally inaccessible unless you provide some sort of temporary security token along with it. How is that possible?
I know I can shut the storage off from the outside world and allow access to it only through an application that checks these permissions, but that would require the application to be on Azure as well, and an on-premise app wouldn't be able to deliver any content from Azure Storage.
It is my understanding that most CDNs provide such a capability, and I sure hope Azure provides a solution for this as well!
Itamar.
I don't think you can achieve this level of access filtering. The only methods I'm aware of are described in this MSDN article:
Managing Access to Containers and Blobs
and in this blog post, which includes a small piece of code to implement it:
Using Container-Level Access Policies in Windows Azure Storage
I'm not sure this fits your need. If I understood it right, I would do it this way:
1. Organize your content into containers that match the roles.
2. In your on-premise application, check whether the user has access, and if so, generate the right URL to give them temporary access to the resource.
Of course, this only works if users go through a central point to get access to the content in the blob. If they bookmark the generated link, it will fail once the expiration date has passed.
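A sketch of the container-level access policy idea from those links, using the modern Python SDK rather than the API of that era (all names and keys are placeholders): a revocable stored policy is set on the container, and each SAS handed out references it, so revoking the policy invalidates every outstanding link.

```python
# Sketch: stored access policy per role-container, plus a SAS referencing it.
# Names and keys are placeholders.
from datetime import datetime, timedelta, timezone
from azure.storage.blob import (
    AccessPolicy, BlobServiceClient, ContainerSasPermissions, generate_blob_sas,
)

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("role-readers")   # container per role

# The stored policy carries the permissions and expiry.
policy = AccessPolicy(
    permission=ContainerSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)
container.set_container_access_policy(signed_identifiers={"read-only": policy})

# The SAS only points at the policy, so access can be revoked centrally.
sas = generate_blob_sas(
    account_name=service.account_name,
    container_name="role-readers",
    blob_name="image.jpg",
    account_key="<account-key>",   # placeholder
    policy_id="read-only",
)
```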
Good luck.
This is actually possible to implement with blob storage. Consider (a) a UI that is like an explorer, and (b) that users are already authenticated (this could use the Access Control Service, but doesn't need to).
The explorer-like UI could expose resources that are appropriate to the authenticated user. Underlying access to these resources would be controlled by Shared Access Signatures at the granularity appropriate for the objects: for example, you can express restrictions such as seeing only one file in a folder, seeing the whole folder, or the ability to create a file in a folder.
This explorer-like UI would need access to logic that presents the right files for a given user while also creating the appropriate Shared Access Signatures as needed. Note that this logic would not need to be hosted in Azure; it would just need access to the proper storage key (from the Azure portal).
