Azure Storage and WebApps Relation

I have two web apps on separate plans; each has multiple instances of the Large size (P3), and the tier description says I get 250GB of storage on P3.
I also have Azure Storage to store photos.
I want to know how Azure Storage is related to the web app plans. That is, if I reduce the web app to S3, where it's only 50GB, how will that affect storage?
Also, do I get 50GB for each instance or for the entire plan?
Thank you

Azure App Service plans represent the collection of physical resources used to host your apps.
App Service plans define:
Region (West US, East US, etc.)
Scale count (one, two, three instances, etc.)
Instance size (Small, Medium, Large)
SKU (Free, Shared, Basic, Standard, Premium)
If you scale your App Service plan down to S3, yes, you will get 50GB of storage.
This storage holds all of your app's resources: deployment files, logs, and so on.
You can only store data/files up to the storage available in the pricing tier that you choose. To increase the storage, you can scale up your pricing tier.
Also, note that increasing/decreasing instances simply increases/decreases the number of VM instances that run your app. You get one shared storage allocation for all the instances, not individual storage per instance.
Before scaling based on instance count, consider that scaling is also affected by the pricing tier, not just the instance count. Different pricing tiers have different numbers of cores and amounts of memory, so a higher tier will perform better for the same number of instances (this is scaling up or down).
For more details, you may refer to the Azure App Service plans in-depth overview and App Service pricing.
Hope this answers your questions.
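As a quick way to see the "more instances, one shared storage" behavior from inside a running Windows web app, here is a minimal sketch. It assumes the standard App Service environment variables WEBSITE_INSTANCE_ID (unique per VM instance) and HOME (the root of the plan's shared content, typically D:\home); outside App Service the fallbacks below are used.

```csharp
using System;
using System.IO;

class InstanceInfo
{
    static void Main()
    {
        // Each scaled-out VM instance reports a different ID...
        string instanceId = Environment.GetEnvironmentVariable("WEBSITE_INSTANCE_ID")
                            ?? "(not running in App Service)";

        // ...but HOME resolves to the same shared storage allocation on every instance.
        string home = Environment.GetEnvironmentVariable("HOME") ?? Path.GetTempPath();

        Console.WriteLine($"Instance: {instanceId}");
        Console.WriteLine($"Shared content root: {home}");
    }
}
```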

App Service storage is completely different than Azure Storage (blobs/tables/queues).
App Service Storage
For a given tier size (e.g. S1), you get a specific amount of durable storage, shared across all instances of your web app. So, if you get 50GB for a given tier, and you have 5 instances, all 5 instances share that 50GB storage (and all see and use the same directories/files).
All files in your Web App's allocated storage are manipulated via standard file I/O operations.
App Service Storage is durable (meaning there's no single disk to fail, and you won't lose any info stored), until you delete your web app. Then all resources (including the allocated storage, in this example 50GB) are removed.
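To make the "standard file I/O" point concrete, here is a minimal sketch for a Windows web app. It assumes the shared allocation is rooted at the HOME environment variable (typically D:\home); the data\notes.txt path is just an example. A file written this way is visible to every instance and survives restarts, until the web app itself is deleted.

```csharp
using System;
using System.IO;

class SharedStorageDemo
{
    static void Main()
    {
        // HOME points at the plan's shared, durable storage allocation.
        string home = Environment.GetEnvironmentVariable("HOME") ?? ".";
        string path = Path.Combine(home, "data", "notes.txt");
        Directory.CreateDirectory(Path.GetDirectoryName(path));

        // Plain file I/O, no SDK involved.
        File.AppendAllText(path, $"Written at {DateTime.UtcNow:O}{Environment.NewLine}");
        Console.WriteLine(File.ReadAllText(path));
    }
}
```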
Azure Storage
Azure Storage, such as blobs, is managed completely independently of web apps. You must access each item in storage (a table, a queue, a blob / container) via REST or a language-specific SDK. A single blob can be as large as 4.75TB, far larger than the largest App Service plan's storage limit.
Unlike App Service / Web App storage, you cannot work with a blob with normal file I/O operations. As I mentioned already, you need to work via API/SDK. If, say, you needed to perform an operation on a blob (e.g. opening/manipulating a zip file), you would typically copy that blob down to working storage in your Web App instance (or VM, etc.), manipulate the file there, then upload the updated file back to blob storage.
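A minimal sketch of that copy-down / modify / re-upload pattern, assuming the Azure.Storage.Blobs (v12) .NET SDK; the connection string, container name ("uploads"), and blob name ("archive.zip") are placeholders.

```csharp
using System.IO;
using System.IO.Compression;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

class BlobRoundTrip
{
    static async Task Main()
    {
        var blob = new BlobClient(
            connectionString: "<storage-connection-string>",
            blobContainerName: "uploads",
            blobName: "archive.zip");

        // 1. Copy the blob down to local working storage.
        string localPath = Path.Combine(Path.GetTempPath(), "archive.zip");
        await blob.DownloadToAsync(localPath);

        // 2. Manipulate it with ordinary file APIs (here: add a file to the zip).
        File.WriteAllText("readme.txt", "added by the round-trip example");
        using (var zip = ZipFile.Open(localPath, ZipArchiveMode.Update))
        {
            zip.CreateEntryFromFile("readme.txt", "readme.txt");
        }

        // 3. Push the updated file back up to Blob storage.
        await blob.UploadAsync(localPath, overwrite: true);
    }
}
```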
Azure Storage is durable (triple-replicated within a region), but has additional options for replication to secondary regions, and even further, allowing for read-only access to the secondary region. Azure Storage also supports additional features such as snapshots, public access to private blobs (through Shared Access Policies & Signatures), and global caching via CDN. Azure Storage will remain in place even if you delete your Web App.
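For the "public access to private blobs" point, here is a hedged sketch of generating a time-limited, read-only Shared Access Signature URL with the same v12 SDK. It assumes the BlobClient was created from a connection string that includes the account key (GenerateSasUri requires a shared key credential), and the names are placeholders.

```csharp
using System;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

class SasDemo
{
    static void Main()
    {
        var blob = new BlobClient(
            connectionString: "<storage-connection-string>",
            blobContainerName: "photos",
            blobName: "beach.jpg");

        // One-hour, read-only link to an otherwise private blob.
        Uri sasUrl = blob.GenerateSasUri(
            BlobSasPermissions.Read,
            DateTimeOffset.UtcNow.AddHours(1));

        Console.WriteLine(sasUrl);
    }
}
```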
Note: There is also Azure File Storage (backed by Azure Storage), which provides a 5TB file share, and acts similarly to the file share provided by Web Apps. However: You cannot mount an Azure File Storage share with a Web App (though you can access it via API/SDK).

Related

Azure blob storage streaming performance issue

Until now my application worked with local zip files,
meaning I directly returned a new FileStream()
over a local zip file located on an SSD/network drive path (zip files can be hundreds of GB).
I configured the application to work with Azure Blob Storage, meaning each FileStream that used to be returned is now returned via the Azure Blob SDK call:
GetBlobStreamAsync(ContainerName, BlobName).ConfigureAwait(false).GetAwaiter().GetResult()
I uploaded some zip files to a container in the blob storage and set the connection string in the application to work with that storage account.
The application was deployed and running on a virtual windows machine located in the same region of the Azure Storage Blob.
Note: This is a private cloud network.
When the app streams a zip file from Azure Blob Storage, performance seems to have decreased by at least 8-9 times (problematic with hundreds of GB).
The speed comparison is between the local C: drive of the same Windows virtual machine the application runs on and an Azure Storage account located in the same region.
Note: network bandwidth on the Azure VM is 50 GB.
Solutions that I tried:
Azure blob Premium Performance storage - Didn’t improve performance
.NET Core - would take advantage of performance enhancements (we work with .NET Framework, so this is irrelevant).
Network File System (NFS) 3.0 performance considerations in Azure Blob storage - (Does not work with private cloud).
Hot, Cool, and Archive access tiers for blob data - The default is Hot so we already tried this scenario with no improvements.
Solutions I want to try:
Azure Files Share Storage as a cache solution
.NET Framework configuration - the docs list several quick configuration settings that you can use to make significant performance improvements.
Question:
Does anyone have any suggestions on how I can optimize streaming from Azure Blob Storage?
Azure Files (share) and the Blob service, used as-is, are likely not the right fit for this scenario. There are two possible paths:
Break the single file into multiple files and leverage the Blob service, which handles throughput better than Azure Files. Azure Files performs better with small(er) files, which are typical of user documents (PDFs, Word, Excel, etc.).
Switch over to a more dedicated service designed specifically for large-size data transfer, if breaking up a single file into multiple blobs is not an option.
The recommendation for each option will depend heavily on the implementation details, requirements, and constraints of the system (see below for a starting point).
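Before re-architecting, it may be worth tuning the transfer itself with the Azure.Storage.Blobs (v12) SDK. The StorageTransferOptions values and the classic .NET Framework ServicePointManager settings below are illustrative rather than benchmarked, and the storage names are placeholders; the idea is to pull one large blob as parallel range requests instead of a single sequential stream.

```csharp
using System.IO;
using System.Net;
using System.Threading.Tasks;
using Azure.Storage;
using Azure.Storage.Blobs;

class ParallelBlobDownload
{
    static async Task Main()
    {
        // Classic .NET Framework network tweaks commonly suggested for Azure Storage clients.
        ServicePointManager.DefaultConnectionLimit = 64;
        ServicePointManager.Expect100Continue = false;
        ServicePointManager.UseNagleAlgorithm = false;

        var blob = new BlobClient(
            connectionString: "<storage-connection-string>",
            blobContainerName: "archives",
            blobName: "huge.zip");

        var transferOptions = new StorageTransferOptions
        {
            MaximumConcurrency = 16,                // parallel range requests
            MaximumTransferSize = 8 * 1024 * 1024   // 8 MB per range
        };

        using (var destination = File.Create(Path.Combine(Path.GetTempPath(), "huge.zip")))
        {
            await blob.DownloadToAsync(destination, transferOptions: transferOptions);
        }
    }
}
```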

Azure StorageV2 public containers

We have stored 200,000+ images in a classic Azure blob storage account with standard performance. We include the blob URLs in the HTML of our application so the browser downloads the images directly from blob storage. However, this is really slow: a simple 2KB image can take up to 200ms to download, and download speeds are irregular.
I made a new storage account, now V2 with premium performance. However, now I can't make any public containers anymore. The portal returns the error: 'This is a premium 'StorageV2 (general purpose v2)' account. Containers within this storage account must be set to 'private' access level.'
How can I host images in an Azure environment with good performance without having to deploy them on my web role?
Azure Storage V2 with Premium performance only supports the private access level. You should consider using a BlockBlobStorage account with Premium performance in your case, which does support public access.
And here is the benefit of BlockBlobStorage accounts:
Compared with general-purpose v2 and BlobStorage accounts, BlockBlobStorage accounts provide low and consistent latency, and higher transaction rates.
(When creating the storage account in the portal, pick the BlockBlobStorage account kind with Premium performance.)
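Once the premium BlockBlobStorage account exists (and blob public access is allowed on it), a container with anonymous blob-level read access can be created as in the sketch below, using the Azure.Storage.Blobs (v12) SDK; the connection string and container name are placeholders.

```csharp
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

class PublicContainerDemo
{
    static async Task Main()
    {
        var service = new BlobServiceClient("<blockblob-account-connection-string>");

        // PublicAccessType.Blob = anonymous read access to individual blobs
        // (not to container listings), which is what <img src="..."> URLs need.
        BlobContainerClient container =
            await service.CreateBlobContainerAsync("images", PublicAccessType.Blob);

        System.Console.WriteLine(container.Uri);
    }
}
```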
Azure storage accounts have certain limits (for example, a 20,000 IOPS limit per account) which might interfere with performance at the scale you are talking about. One step you can take to check whether this is the root cause: split your images across several storage accounts and see if that fixes performance.
Alternatively (and probably better) you should use Azure CDN attached to the storage account to fix this performance issue (and even make it faster).
https://learn.microsoft.com/en-us/azure/cdn/cdn-create-a-storage-account-with-cdn
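If you do try splitting the images across several storage accounts, the mapping should be deterministic so that a given image always resolves to the same account. A hedged sketch (the account URLs are hypothetical):

```csharp
using System;

class AccountSharder
{
    // Hypothetical storage accounts the images are spread across.
    static readonly string[] AccountUrls =
    {
        "https://images0.blob.core.windows.net",
        "https://images1.blob.core.windows.net",
        "https://images2.blob.core.windows.net",
    };

    // Deterministic hash so the same blob name always maps to the same account
    // (string.GetHashCode() is randomized per process, so it is avoided here).
    static string UrlFor(string containerName, string blobName)
    {
        int shard = 0;
        foreach (char c in blobName)
            shard = (shard * 31 + c) % AccountUrls.Length;
        return $"{AccountUrls[shard]}/{containerName}/{blobName}";
    }

    static void Main()
    {
        Console.WriteLine(UrlFor("images", "product-12345.jpg"));
    }
}
```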

Backup files to Azure Storage

We are migrating from an on-premises virtual machine to Azure cloud. The virtual machine will eventually be decommissioned and we have many files and folders that we don't want to lose, like old websites and databases, scripts, programs etc.
We use an Azure storage account for storing and retrieving images via blob containers for the live websites.
Q: What is the best and most cost-effective way to back up a large amount of files that are unused in production and rarely accessed, from an on-premises virtual machine to the Azure cloud?
Changing the access tier to Azure Archive storage (if storing the data in blobs) would be your best option. A few notes:
The Archive tier is only available at the blob level, not at the storage account level.
Archive storage is offline and offers the lowest storage costs but also the highest access costs.
Hot, Cool, and Archive tiers can be set at the object (blob) level; see the sketch below.
Additional info can be found here: https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-storage-tiers
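Since tiers are set per blob, moving an existing blob to Archive is a single call with the Azure.Storage.Blobs (v12) SDK; the names below are placeholders, and note that reading an archived blob later requires rehydration, which can take hours.

```csharp
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

class ArchiveDemo
{
    static async Task Main()
    {
        var blob = new BlobClient(
            connectionString: "<storage-connection-string>",
            blobContainerName: "backups",
            blobName: "old-website.zip");

        // Archive = cheapest to store, offline to read until rehydrated to Hot/Cool.
        await blob.SetAccessTierAsync(AccessTier.Archive);
    }
}
```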
The recommendation would be to move those unused files to Azure Archive storage, which is cost-effective and easily accessible when required.
https://azure.microsoft.com/en-us/services/storage/archive/

Azure - Multiple Cloud Services, Single Storage Account

I want to create a couple of cloud services - Int, QA, and Prod. Each of these will connect to a separate DB.
Do these cloud services require "storage accounts"? Conceptually the cloud services have executables and they must be physically located somewhere.
Note: I do not use any blobs/queues/tables.
If so, must I create 3 separate storage accounts, or can I link them all to one?
Storage accounts are more like storage namespaces: each has a URL and a set of access keys. You can use storage from anywhere, whether from the cloud or not, from one cloud service or many.
As @sharptooth pointed out, you need storage for diagnostics with Cloud Services, as well as for attached disks (Azure Drives for cloud services) and for the deployments themselves (storing the cloud service package and configuration).
Storage accounts themselves are free: create a bunch, and you still only pay for consumption.
There are some objective reasons why you'd go with separate storage accounts:
You feel that you could exceed the 20,000 transactions/second advertised limit of a single storage account (remember that storage diagnostics use some of this transaction rate, which is impacted by how aggressive your logging is).
You are concerned about security/isolation. You may want your dev and QA folks using an entirely different subscription altogether, with their own storage accounts, to avoid any risk of damaging a production deployment.
You feel that you'll exceed 500TB (the limit of a single storage account).
Azure Diagnostics uses Azure Table Storage under the hood (and it's more convenient to use one storage account for every service, but it's not required). Other dependencies your service has might also use some of the Azure Storage services. If you're sure that you don't need Azure Storage (and so you don't need persistent storage of data dumped through Azure Diagnostics) - okay, you can go without it.
The service package of your service will be stored and managed by Azure infrastructure - that part doesn't require a storage account.

Azure cloudapp storage

I have a very unique question. In Azure, when you look at the pricing calculator and you're deciding which size of VM to deploy for your cloud service, the pricing calculator at the following URL
http://www.windowsazure.com/en-us/pricing/calculator/?scenario=cloud
shows storage along with the size of the VM. For example, the extra small instance says
"Extra small VM (1GHz CPU, 768MB RAM, 20GB Storage)" while the large instance shows "Large VM (4 x 1.6GHz CPU, 7GB RAM, 1,000GB Storage)".
My question is this: if I link a storage account to this cloud service, do I get the listed storage in my storage account included with my payment for the cloud service? E.g., I have a Large instance with a linked storage account, and in the storage account I have 500GB of data stored. Do I pay $251.06 for the cloud service and an additional $36.91 for the 500GB, or is the storage free because it is under the 1,000GB limit listed as included storage for the cloud service?
Your question is not unique, but rather common. The answer is: you pay for the VM once and for Cloud Storage a second time. The point is that with a Cloud Service (Web and Worker Roles), the storage that comes with the VM is NOT persistent storage. This means that the VM storage (the one that ranges from 20GB to 2TB depending on VM size) can go away at any point in time, while Cloud Storage (the storage account: blobs/tables/queues) is absolutely durable, secure, persistent, and optionally even geo-replicated.
