We usually use Azure Functions + an SPA (e.g. Angular) for a lot of different projects. That means we can technically host the Functions and the web frontend inside the same Azure Storage Account, as long as it is v2, which supports static website hosting.
However, whenever I create an Azure Function App and let it auto-create the storage account, it creates a v1 account. Is there any reason why v1 would be better for Functions than v2?
From Microsoft's docs:
General-purpose v2 accounts: Basic storage account type for blobs, files, queues, and tables. Recommended for most scenarios using Azure Storage.
General-purpose v1 accounts: Legacy account type for blobs, files, queues, and tables. Use general-purpose v2 accounts instead when possible.
I haven't seen any issues running Azure Functions in a v2 Storage Account, so I'm wondering why v1 is still the default option.
Azure Storage V1 has lower transaction costs than V2. Azure Functions, and especially Durable Functions, make heavy use of storage blobs and tables as a synchronization database, which can generate extremely high bills for the storage account. The recommendation is therefore to use Storage V1 due to its lower transaction costs.
See here: https://learn.microsoft.com/en-us/azure/storage/common/storage-account-overview#legacy-storage-account-types
General-purpose v1 accounts may not have the latest features or the lowest per-gigabyte pricing. Consider using it for these scenarios:
Your applications require the Azure classic deployment model.
Your applications are transaction-intensive or use significant geo-replication bandwidth, but don't require large capacity. In this case, a general-purpose v1 account may be the most economical choice.
You use a version of the Azure Storage REST API that is earlier than February 14, 2014, or a client library with a version lower than 4.x, and you can't upgrade your application.
You're selecting a storage account to use as a cache for Azure Site Recovery. Because Site Recovery is transaction-intensive, a general-purpose v1 account may be more cost-effective. For more information, see Support matrix for Azure VM disaster recovery between Azure regions.
As far as I know, the general-purpose v1 storage account is the legacy version of the Azure storage account and can meet most basic needs, which is why it was chosen as the default for Azure Functions.
As we can see from the official docs:
General-purpose v2 storage accounts incorporate all of the functionality of general-purpose v1 and Blob storage accounts.
Upgrading a general-purpose v1 or Blob storage account to general-purpose v2 is permanent and cannot be undone.
So, for the default experience, I think getting a GPv1 storage account is not a problem. If you need more Storage functionality, you can consider upgrading to GPv2.
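If you want a v2 account from the start (e.g. to serve the SPA as a static website next to the Functions), one option is to create the storage account yourself and then point the Function App at it. A minimal sketch using the @azure/arm-storage SDK; the subscription, resource group, account name, and region below are placeholders, not values from the question:

```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { StorageManagementClient } from "@azure/arm-storage";

// Placeholder identifiers: substitute your own subscription and names.
const client = new StorageManagementClient(
  new DefaultAzureCredential(),
  "<subscription-id>"
);

async function createV2Account(): Promise<void> {
  // Explicitly request kind "StorageV2" instead of the "Storage" (v1)
  // kind that the Function App auto-create flow picks.
  const account = await client.storageAccounts.beginCreateAndWait(
    "my-resource-group",
    "myfunctionsstorage",
    {
      location: "westeurope",
      kind: "StorageV2",
      sku: { name: "Standard_LRS" },
    }
  );
  console.log(`Created ${account.name} (kind: ${account.kind})`);
}

createV2Account();
```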
Related
From experience, does it make a difference if an Azure Function App on the Consumption plan uses Blob Storage V1 or V2?
It seems that storage transactions on V1 cost about 1/10 of what they do on V2.
Or am I misunderstanding something?
From experience, does it make a difference if an Azure Function App on the Consumption plan uses Blob Storage V1 or V2?
It would depend on the features you're using. If you're using storage as a simple object store, then it makes no difference whether you use V1 or V2. You would want V2 only if you need features that are available exclusively in V2. Some of these features are:
Support for blob access tiers (Hot, Cool, Archive); see the sketch after this list.
Support for blob versioning.
Support for blob tags, etc.
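For instance, per-blob access tiers are a V2/BlobStorage-only capability; on a GPv1 account the call below is rejected. A minimal sketch with the @azure/storage-blob SDK, with placeholder account, container, and blob names:

```typescript
import { BlobServiceClient, StorageSharedKeyCredential } from "@azure/storage-blob";

// Placeholder account name and key.
const service = new BlobServiceClient(
  "https://myaccount.blob.core.windows.net",
  new StorageSharedKeyCredential("myaccount", "<account-key>")
);

async function demoteToCool(): Promise<void> {
  const blob = service.getContainerClient("backups").getBlobClient("2023-01.tar.gz");
  // Per-blob tiers (Hot/Cool/Archive) only work on GPv2 or BlobStorage
  // accounts; a GPv1 account fails this request.
  await blob.setAccessTier("Cool");
}

demoteToCool();
```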
V2 transactions are definitely much more expensive than V1, so going for V2 makes sense only if you're using features that are available exclusively in V2.
I read somewhere that the Azure Storage team is making V1 and V2 prices the same (not sure whether they are increasing V1 pricing or decreasing V2 pricing); however, looking at the pricing page, I can still see that V2 transactions are about 10 times more expensive than V1.
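To put that ratio in perspective, here is a rough back-of-the-envelope sketch. The per-transaction prices are made-up placeholders chosen only to reflect the roughly 10x gap described above, not current Azure rates:

```typescript
// Illustrative placeholder prices (USD per 10,000 transactions), NOT real
// rates; they only encode the ~10x V1-to-V2 ratio discussed above.
const pricePer10kTxV1 = 0.0004;
const pricePer10kTxV2 = 0.004;

// A busy Functions host (especially Durable Functions) can issue hundreds of
// millions of queue/table transactions per month against its storage account.
const monthlyTransactions = 500_000_000;

const costV1 = (monthlyTransactions / 10_000) * pricePer10kTxV1;
const costV2 = (monthlyTransactions / 10_000) * pricePer10kTxV2;

console.log(`V1: $${costV1.toFixed(2)}/month, V2: $${costV2.toFixed(2)}/month`);
// Same workload, roughly 10x the transaction bill on V2.
```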
We were already using BlobStorage accounts in an OpenShift cluster as PVs for applications and for other microservices. I just learned that point-in-time restore is only supported for general-purpose v2 storage accounts. So, before upgrading to a general-purpose v2 storage account, I want to know what the impacts are, such as whether the access URLs for the storage account's containers change.
There are no impacts if a BlobStorage account is upgraded to a general-purpose v2 account. The container/blob URLs stay the same after the upgrade.
A general-purpose v2 account contains all of the features of the legacy BlobStorage account; this is mentioned in the doc.
There is no issue with existing files and objects on upgrade; it simply enables the additional v2 features. But if you are accessing the account programmatically, you may need to upgrade your client package, otherwise you may get exceptions.
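If you script the upgrade instead of clicking through the portal, it is an in-place change of the account's kind. A sketch with the @azure/arm-storage SDK; I'm assuming a recent SDK version where the update parameters accept kind, and all names are placeholders:

```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { StorageManagementClient } from "@azure/arm-storage";

// Placeholder subscription/resource names.
const client = new StorageManagementClient(
  new DefaultAzureCredential(),
  "<subscription-id>"
);

async function upgradeToV2(): Promise<void> {
  // Patching kind to "StorageV2" upgrades a Storage (v1) or BlobStorage
  // account in place. Endpoints and container/blob URLs stay the same,
  // and the upgrade cannot be undone.
  const account = await client.storageAccounts.update(
    "my-resource-group",
    "legacyblobaccount",
    { kind: "StorageV2" }
  );
  console.log(`${account.name} is now kind ${account.kind}`);
}

upgradeToV2();
```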
In my Azure subscription, I have used both Storage Accounts that are of type BlobStorage and some that say Storage or StorageV2...
I know one difference: my BlobStorage accounts do NOT support Table, File, etc. containers.
But are there other differences that I should be aware of? Is StorageV2 any faster than blob-only storage?
General-purpose v2 storage accounts support the latest Azure Storage features and incorporate all of the functionality of general-purpose v1 and Blob storage accounts. General-purpose v2 accounts deliver the lowest per-gigabyte capacity prices for Azure Storage, as well as industry-competitive transaction prices.
From the description, the v2 general-purpose account takes the features of the Blob storage accounts and combines them with the general-purpose account, plus tiering. And I think the most important thing to the customer is the price. Follow this link; there is an example analysing the price difference between the two account types.
No real 'nuances', just the feature differences stated in the documentation: new storage you want on V2, and older storage you want to migrate to V2 if possible.
Example:
General-purpose V2
Blob tiers: Hot, Cool, Archive
Replication: LRS, ZRS, GRS, RA-GRS
Deployment model: Resource Manager
General-purpose V1
Blob tiers: N/A
Replication: LRS, GRS, RA-GRS
Deployment model: Resource Manager, Classic
Hope this helps.
I deployed a WorkerRole to an Azure Cloud Service (classic) in the new portal. Along with this, I also created an Azure Storage account for a queue.
When I try to add an AutoScale rule, the storage account is not listed. I tried selecting Other Resource and entering the Resource Identifier of the storage account, but no metric names are listed.
Is it by design that a classic Cloud Service and a new Storage account don't work together?
Storage account data (e.g. blobs, queues, containers, tables) is accessible simply with the account name + key. Any app can work with it.
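For example, a data-plane client neither knows nor cares how the account was created; the same code works against Classic- and ARM-created accounts. A minimal sketch with the @azure/storage-blob SDK, with a placeholder account name and key:

```typescript
import { BlobServiceClient, StorageSharedKeyCredential } from "@azure/storage-blob";

// Placeholder credentials: identical for Classic- and ARM-created accounts.
const service = new BlobServiceClient(
  "https://myaccount.blob.core.windows.net",
  new StorageSharedKeyCredential("myaccount", "<account-key>")
);

async function listBlobs(): Promise<void> {
  // Enumerate blobs purely through the data plane (account name + key);
  // no management API (ASM or ARM) is involved at all.
  const container = service.getContainerClient("mycontainer");
  for await (const blob of container.listBlobsFlat()) {
    console.log(blob.name);
  }
}

listBlobs();
```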
However, to manage/enumerate available storage accounts, there are Classic-created and ARM-created accounts, each with different APIs.
The original Azure Service Management (ASM) API doesn't know anything about ARM resources. There's a fairly good chance that, since you're deploying to a Classic cloud service, it's using ASM only and will not be able to enumerate ARM-created storage accounts.
If you create a Classic storage account (which has zero difference in functionality), you should be able to see it as an option for auto-scale.
I have a bit more details on the differences in this answer.
At this time, it is not possible to autoscale anything based on a new "v2" storage account. It has nothing to do with the fact that you are using the classic Azure Cloud Service. I am having the same issue with using Azure App Services. In the end, I just created a classic storage account to use for the autoscaling. There is no difference in how you interact with the different types of storage accounts.
I want to create a couple of cloud services - Int, QA, and Prod. Each of these will connect to a separate DB.
Do these cloud services require "storage accounts"? Conceptually the cloud services have executables and they must be physically located somewhere.
Note: I do not use any blobs/queues/tables.
If so, must I create 3 separate storage accounts, or can they be linked to one?
Storage accounts are more like storage namespaces - each has a URL and a set of access keys. You can use storage from anywhere, whether from the cloud or not, from one cloud service or many.
As @sharptooth pointed out, you need storage for diagnostics with Cloud Services, and also for attached disks (Azure Drives for cloud services) and for the deployments themselves (storing the cloud service package and configuration).
Storage accounts are free: that is, create a bunch, and still pay only for consumption.
There are some objective reasons why you'd go with separate storage accounts:
You feel that you could exceed the advertised 20,000 transactions/second limit of a single storage account (remember that storage diagnostics consume some of this transaction rate, depending on how aggressive your logging is).
You are concerned about security/isolation. You may want your dev and QA folks using an entirely different subscription altogether, with their own storage accounts, to avoid any risk of damaging a production deployment.
You feel that you'll exceed 500 TB (the limit of a single storage account).
Azure Diagnostics uses Azure Table Storage under the hood (and it's more convenient to use one storage account for every service, but it's not required). Other dependencies your service has might also use some of the Azure Storage services. If you're sure that you don't need Azure Storage (and so you don't need persistent storage of data dumped through Azure Diagnostics) - okay, you can go without it.
The service package of your service will be stored and managed by Azure infrastructure - that part doesn't require a storage account.