Azure StorageV2 public containers

We have stored 200,000+ images in a classic Azure blob account with standard performance. We include the blob URLs in the HTML of our application so the browser downloads the images directly from blob storage. However, this is really slow: a simple 2 KB image can take up to 200 ms to download, and download speeds are irregular.
I created a new storage account, this time V2 with premium performance. However, now I can't make any containers public. The portal returns the error: 'This is a premium 'StorageV2 (general purpose v2)' account. Containers within this storage account must be set to 'private' access level.'
How can I host images in an Azure environment with good performance without having to deploy them on my web role?

Azure Storage V2 with premium performance only supports the private access level. In your case you should consider using a BlockBlobStorage account with premium performance, which does support public access.
Here is the benefit of BlockBlobStorage accounts:
Compared with general-purpose v2 and BlobStorage accounts, BlockBlobStorage accounts provide low and consistent latency, and higher transaction rates.
You can create a BlockBlobStorage account with premium performance directly in the portal.

Azure storage accounts have certain limits (e.g., a 20,000 IOPS limit per account) which might interfere with performance at the scale you are talking about. To check whether this is the root cause, split your images across several storage accounts and see if that fixes the performance.
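The split-across-accounts test above can be sketched as a deterministic sharding function, so each image always resolves to the same account. The account names here are hypothetical placeholders:

```python
import hashlib

# Hypothetical account names -- replace with your real storage accounts.
ACCOUNTS = ["imagesstore1", "imagesstore2", "imagesstore3"]

def account_for_blob(blob_name: str) -> str:
    """Pick a storage account for a blob deterministically, so the same
    image always maps to the same account."""
    digest = hashlib.md5(blob_name.encode("utf-8")).hexdigest()
    index = int(digest, 16) % len(ACCOUNTS)
    return ACCOUNTS[index]

def blob_url(blob_name: str, container: str = "images") -> str:
    """Build the direct download URL for an image, routed to its shard."""
    account = account_for_blob(blob_name)
    return f"https://{account}.blob.core.windows.net/{container}/{blob_name}"
```

Because the mapping is a pure function of the blob name, the HTML generation code needs no lookup table to emit the right URL for each image.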
Alternatively (and probably better), attach an Azure CDN to the storage account to fix the performance issue (and make it even faster):
https://learn.microsoft.com/en-us/azure/cdn/cdn-create-a-storage-account-with-cdn
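Once a CDN endpoint is attached to the storage account, only the host part of each image URL changes; the container/blob path stays the same. A minimal sketch of rewriting existing blob URLs, assuming a hypothetical endpoint name:

```python
from urllib.parse import urlparse, urlunparse

# Hypothetical CDN endpoint host -- yours comes from your Azure CDN profile.
CDN_HOST = "myimages.azureedge.net"

def to_cdn_url(blob_url: str) -> str:
    """Rewrite a direct blob URL to go through the CDN endpoint.
    The container/blob path is preserved; only the host changes."""
    parts = urlparse(blob_url)
    return urlunparse(parts._replace(netloc=CDN_HOST))
```

So a URL such as `https://acct.blob.core.windows.net/images/a.png` becomes `https://myimages.azureedge.net/images/a.png`, and the browser fetches it from the nearest CDN edge.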

Related

Can I use azure functionapp storage account for other purposes like storing files in blob storage?

Can I use an Azure Function App storage account for other purposes, like storing files in blob storage? If yes, is that in accordance with Microsoft guidelines, and will it cause any performance issues, especially when the blob storage grows to GBs?
I am close to production, so please share any suggestions, best practices, or solutions as soon as possible.
Can I use an Azure Function App storage account for other purposes, like storing files in blob storage?
Yes, you can.
If yes, is that in accordance with Microsoft guidelines, and will it cause any performance issues, especially when the blob storage grows to GBs?
It depends. Each Azure Storage account has some pre-defined throughput limits. As long as you stay within those limits, you should be fine.
Having said that, ideally you should have a separate storage account. Considering that creating a storage account doesn't cost you anything until you perform transactions against it, you may be better off creating a separate account to store the data required by your application.
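Using a separate account typically means carrying a second connection string in your app settings alongside the Function App's own AzureWebJobsStorage. A small sketch of pulling the account name out of the standard `Key=Value;` connection string format (the values below are made up, not real credentials):

```python
def parse_connection_string(conn: str) -> dict:
    """Split an Azure Storage connection string of the form
    'DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...'
    into a dict of its parts."""
    parts = {}
    for segment in conn.split(";"):
        if "=" in segment:
            # partition (not split) keeps '=' padding inside AccountKey intact
            key, _, value = segment.partition("=")
            parts[key] = value
    return parts

# Hypothetical example value, not a real credential.
app_data_conn = "DefaultEndpointsProtocol=https;AccountName=appdata;AccountKey=abc123=="
print(parse_connection_string(app_data_conn)["AccountName"])  # appdata
```

Keeping this second connection string in a distinct app setting makes it obvious which account holds the runtime's internal state and which holds your application data.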

Azure Storage and WebApps Relation

I have two web apps on separate plans, each with multiple instances of large size (P3), and it says I get 250 GB of storage on P3.
I also have azure storage to store photos.
I want to know how Azure Storage relates to the web app plans... meaning, if I reduce the web app to S3, where it's only 50 GB, how will that affect storage?
Also, do I get 50 GB for each instance or for the entire plan?
Thank you
Azure App Service plans represent the collection of physical resources used to host your apps.
App Service plans define:
Region (West US, East US, etc.)
Scale count (one, two, three instances, etc.)
Instance size (Small, Medium, Large)
SKU (Free, Shared, Basic, Standard, Premium)
If you scale down your App Service plan to S3, yes, you will get 50 GB of storage.
This storage holds all of your app's resources: deployment files, logs, etc.
You can only store data/files up to the storage available in the pricing tier you choose. To increase the storage, you can scale up your pricing tier.
Also, note that increasing/decreasing instances simply changes the number of VM instances that run your app. You get one pool of storage shared by all the instances, not individual storage per instance.
Before scaling based on instance count, consider that performance is affected by the pricing tier in addition to the instance count: different pricing tiers can have different numbers of cores and amounts of memory, so they perform differently for the same number of instances (this is scaling up or down).
For more details, you may refer the Azure App Service plans in-depth overview and App Service pricing.
Hope this answers your questions.
App Service storage is completely different from Azure Storage (blobs/tables/queues).
App Service Storage
For a given tier size (e.g. S1), you get a specific amount of durable storage, shared across all instances of your web app. So, if you get 50GB for a given tier, and you have 5 instances, all 5 instances share that 50GB storage (and all see and use the same directories/files).
All files in your Web App's allocated storage are manipulated via standard file I/O operations.
App Service Storage is durable (meaning there's no single disk to fail, so you won't lose any stored content) until you delete your web app. Then all resources (including the allocated storage, 50 GB in this example) are removed.
Azure Storage
Azure Storage, such as blobs, is managed completely independently of web apps. You must access each item in storage (a table, a queue, a blob / container) via REST or a language-specific SDK. A single blob can be as large as 4.75TB, far larger than the largest App Service plan's storage limit.
Unlike App Service / Web App storage, you cannot work with a blob with normal file I/O operations. As I mentioned already, you need to work via API/SDK. If, say, you needed to perform an operation on a blob (e.g. opening/manipulating a zip file), you would typically copy that blob down to working storage in your Web App instance (or VM, etc.), manipulate the file there, then upload the updated file back to blob storage.
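The copy-down / manipulate / re-upload pattern might look like the sketch below. The actual download and upload calls depend on your SDK, so they appear only as comments; the local zip manipulation with ordinary file I/O is the runnable part, with a locally created archive standing in for the downloaded blob:

```python
import os
import tempfile
import zipfile

def add_readme_to_zip(zip_path: str) -> list:
    """Append a file to a zip archive in local working storage and
    return the archive's member names."""
    with zipfile.ZipFile(zip_path, "a") as archive:
        archive.writestr("README.txt", "added while in working storage")
        return archive.namelist()

# 1. Download the blob to working storage, e.g. with the Azure SDK
#    (sketch only, not executed here):
#    blob_client.download_blob().readinto(local_file)
workdir = tempfile.mkdtemp()
local_copy = os.path.join(workdir, "archive.zip")
with zipfile.ZipFile(local_copy, "w") as archive:
    archive.writestr("data.csv", "a,b\n1,2\n")  # stand-in for the downloaded blob

# 2. Manipulate it with ordinary file I/O.
names = add_readme_to_zip(local_copy)

# 3. Upload the updated file back to blob storage, e.g.:
#    blob_client.upload_blob(open(local_copy, "rb"), overwrite=True)
print(sorted(names))  # ['README.txt', 'data.csv']
```

The key point is step 2: once the blob is a local file, any ordinary library (here `zipfile`) can work on it, which is exactly what the blob API does not allow in place.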
Azure Storage is durable (triple-replicated within a region), but has additional options for replication to secondary regions, and even further, allowing for read-only access to the secondary region. Azure Storage also supports additional features such as snapshots, public access to private blobs (through Shared Access Policies & Signatures), and global caching via CDN. Azure Storage will remain in place even if you delete your Web App.
Note: There is also Azure File Storage (backed by Azure Storage), which provides a 5TB file share, and acts similarly to the file share provided by Web Apps. However: You cannot mount an Azure File Storage share with a Web App (though you can access it via API/SDK).

Understanding Azure Storage (blobs) with accounts and containers. Test containers?

I am beginning to use Azure Storage (blob specifically) in my application but wanted to know what the norm was in the case of testing versus production storage.
So is it routine to create one storage account? i.e.:
http://<storage-account-name>.blob.core.windows.net/
and then have different containers for each environment? i.e.:
http://<storage-account-name>.blob.core.windows.net/testContainer
http://<storage-account-name>.blob.core.windows.net/productionContainer
so then, with populated data, it would end up looking like:
http://<storage-account-name>.blob.core.windows.net/testContainer/<whateverkey>
http://<storage-account-name>.blob.core.windows.net/productionContainer/<whateverkey>
Or should I be creating two different storage accounts? I had assumed the generated connection string was just for the storage account name, and that later in my logic I would specify the containers and keys when adding data.
Thanks
There is no standard way, but keep in mind: Azure blob storage isn't hierarchical below the container level (though folder paths can be simulated). So using containers to separate test vs. production will hinder your ability to use containers properly within your app (e.g., if you want /images/foo.png, you must now have /productioncontainer/images/foo.png).
Remember that storage accounts are free: you pay only for storage used. So it costs nothing extra to have both a test and a production storage account, and then the only thing that changes is the base address (the storage account name).
You're correct regarding the connection string: you just have accountname.blob.core.windows.net/container/object.
You should use different Storage Accounts - that way, in addition to storage isolation, you can also ensure different security protection for access to your development environment vs. your production environment.
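With separate accounts per environment, only the base address changes between test and production, as noted above. A minimal sketch with hypothetical account names:

```python
# Hypothetical per-environment account names.
ACCOUNTS = {"test": "myapptest", "production": "myappprod"}

def blob_url(environment: str, container: str, blob_name: str) -> str:
    """Build a blob URL for the given environment; only the storage
    account name differs between test and production."""
    account = ACCOUNTS[environment]
    return f"https://{account}.blob.core.windows.net/{container}/{blob_name}"
```

The container and blob paths stay identical across environments, so the app can keep using containers for their real purpose (e.g. `images`, `documents`) rather than for environment separation.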

Azure - Multiple Cloud Services, Single Storage Account

I want to create a couple of cloud services - Int, QA, and Prod. Each of these will connect to separate Db's.
Do these cloud services require "storage accounts"? Conceptually the cloud services have executables and they must be physically located somewhere.
Note: I do not use any blobs/queues/tables.
If so, must I create 3 separate storage accounts, or can I link them all to one?
Storage accounts are more like storage namespaces - each has a URL and a set of access keys. You can use storage from anywhere, whether from the cloud or not, and from one cloud service or many.
As sharptooth pointed out, you need storage for diagnostics with Cloud Services, as well as for attached disks (Azure Drives for cloud services) and for deployments themselves (the cloud service package and configuration are stored in a storage account).
Storage accounts are free: That is, create a bunch, and still only pay for consumption.
There are some objective reasons why you'd go with separate storage accounts:
You feel that you could exceed the advertised 20,000 transactions/second limit of a single storage account (remember that storage diagnostics consume some of this transaction rate, depending on how aggressive your logging is).
You are concerned about security/isolation. You may want your dev and QA folks using an entirely different subscription altogether, with their own storage accounts, to avoid any risk of damaging a production deployment.
You feel that you'll exceed 500 TB (the limit of a single storage account).
Azure Diagnostics uses Azure Table storage under the hood (it's more convenient to use one storage account for every service, but that's not required). Other dependencies of your service might also use some of the Azure Storage services. If you're sure you don't need Azure Storage (and thus don't need persistent storage of the data collected by Azure Diagnostics), then yes, you can go without it.
The service package of your service will be stored and managed by Azure infrastructure - that part doesn't require a storage account.

Is this a sensible Azure Blob Storage setup and are there restructuring tools to help me migrate to it?

I think we have gone slightly wrong in the way we have used Azure Storage in a SaaS system. We created a storage account per client (security was a prime consideration) and containers per system area, e.g., Vehicle, Work, etc.
Having done further reading, it seems the suggestion is that we should have used one account for all clients. Each client would have a container (which we can create programmatically) that we then secure, and files would be organized using a "virtual" folder structure: e.g., a container called "Client A", with files for Jobs (in the Work area of the system) stored like Work/Jobs/{entity id}/blah.pdf. Does this sound sensible?
If so, we now have about 10 accounts that we need to restructure. Are there any tools that will let us easily copy one account's contents into containers in another account? I appreciate we probably can't move the files between accounts (we set them up ages ago, so we can't use the native copy function), so I guess we need some sort of copy. There are GBs of files across all the accounts.
It may not be such a bad idea to keep different storage accounts per client. The benefits of doing that (to me) are:
Better security as mentioned by you.
You'll be able to achieve better throughput / client as each client will have their own storage account. If you keep one storage account for all clients, and if one client starts hitting that account badly other clients will be impacted.
Better scalability. Each storage account can hold up to 200 TB of data, so if you keep just one storage account and assume each client consumes 100 GB of data, you'll only be able to accommodate about 2,000 clients (I hope my math is right :)). With individual storage accounts, you won't be restricted in that sense.
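The back-of-the-envelope math above, spelled out (using the 200 TB per-account limit cited in this answer and an assumed 100 GB per client):

```python
# Back-of-the-envelope capacity math from the answer above.
ACCOUNT_LIMIT_TB = 200   # per-account limit cited in the answer
PER_CLIENT_GB = 100      # assumed average data per client

# Using 1 TB = 1000 GB, as the answer's round figure implies.
clients_per_account = ACCOUNT_LIMIT_TB * 1000 // PER_CLIENT_GB
print(clients_per_account)  # 2000
```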
There are some downsides as well. Some of them are:
Management would be a nightmare. Imagine you have 2,000 customers - you would end up managing 2,000 storage accounts.
You may be limited by Windows Azure. Currently, by default, you get about 10 or 20 storage accounts per subscription, and you would need to contact support to raise that limit manually. They can do that for you, but I would imagine you want a self-service model where you can create as many storage accounts as you want without contacting support.
Now, coming to your question about tooling: you could write something yourself that makes use of the Copy Blob functionality, which lets you copy blob data across storage accounts asynchronously. Basically, this is what you would do:
First create a blob container for each client in the target storage account.
Enumerate all blob containers in source storage account.
For each blob container in source storage account, enumerate the blobs.
Copy each blob asynchronously to target storage account in the client's blob container.
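The enumeration in steps 2-4 can be sketched as a simple traversal that produces the list of copy operations to issue; the actual asynchronous copies would then go through the Copy Blob API via your SDK of choice. The source layout below is hypothetical:

```python
def plan_copy(source_containers: dict) -> list:
    """Given a mapping of container name -> list of blob names in the
    source account, produce the list of (container, blob) copy
    operations to run against the target account."""
    operations = []
    for container, blobs in source_containers.items():   # step 2: enumerate containers
        for blob in blobs:                               # step 3: enumerate blobs
            operations.append((container, blob))         # step 4: issue async copy here
    return operations

# Hypothetical source layout: one container per client system area.
source = {
    "client-a": ["Work/Jobs/1/report.pdf", "Vehicle/2/photo.jpg"],
    "client-b": ["Work/Jobs/3/invoice.pdf"],
}
ops = plan_copy(source)
print(len(ops))  # 3
```

Separating the planning from the copying like this also makes it easy to resume a partially completed migration: re-run the plan, skip the (container, blob) pairs that already exist in the target, and issue copies only for the remainder.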
If you're a PowerShell fan, you can look into Cerebrata's Azure Management Cmdlets (http://www.cerebrata.com/Products/AzureManagementCmdlets), which wrap this functionality. I could have recommended Cerebrata's Azure Management Studio as well, but I haven't tried this functionality there just yet. [Disclosure: I'm one of the devs on the Cerebrata team.]
Hope this helps.
Adding to Gaurav Mantri answer...
You can have a shared storage account for customers and use a Shared Access Signature (SAS) to limit access to a particular container or blob (as well as to tables and queues):
http://msdn.microsoft.com/en-us/library/windowsazure/hh508996.aspx
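Under the hood, a SAS is an HMAC-SHA256 signature over a "string to sign", computed with the base64-decoded account key. The exact string-to-sign format depends on the service version, so the sketch below only illustrates the signing step, with made-up inputs and not a real key:

```python
import base64
import hashlib
import hmac

def sign(string_to_sign: str, account_key_b64: str) -> str:
    """Compute a SAS-style signature: HMAC-SHA256 of the string-to-sign,
    keyed with the base64-decoded account key, then base64-encoded.
    (The real string-to-sign layout varies by service version; this
    only illustrates the signing mechanism, not the exact format.)"""
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

# Hypothetical inputs -- not a real account key or a valid string-to-sign.
fake_key = base64.b64encode(b"not-a-real-key").decode("utf-8")
signature = sign("r\n2024-01-01\n2024-01-02\n/container/blob\n", fake_key)
```

Because the signature is derived from the permissions and expiry baked into the string to sign, the server can validate a SAS without any lookup: it recomputes the signature from the query parameters and compares.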
