Impact of upgrading BlobStorage to general-purpose v2 storage - Azure

We are already using BlobStorage accounts in our OpenShift cluster as PVs for applications and for other microservices. I recently learned that point-in-time restore is only supported on general-purpose v2 storage accounts. So, before upgrading to a general-purpose v2 storage account, I want to know what the impacts are - for example, do the access URLs for the storage account's containers change?

There is no impact if a BlobStorage account is upgraded to a general-purpose v2 account. The container and blob URLs remain the same after the upgrade.
A general-purpose v2 account includes all of the features of the legacy BlobStorage account type; this is stated in the Microsoft documentation.

The upgrade does not affect your existing files and objects; it simply enables the additional features of the v2 account type. However, if you access the account programmatically, you may need to upgrade your client library, otherwise you could get exceptions.
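For reference, the upgrade itself can also be done programmatically. A minimal sketch with the azure-mgmt-storage Python SDK, assuming azure-identity is installed and that the subscription ID, resource group, and account name below are placeholders:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storage import StorageManagementClient

    # Placeholders: substitute your own subscription, resource group, and account.
    subscription_id = "<subscription-id>"
    resource_group = "my-resource-group"
    account_name = "mystorageaccount"

    client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

    # Upgrading the account kind is an in-place update; data is not moved.
    client.storage_accounts.update(resource_group, account_name, {"kind": "StorageV2"})

    # The blob endpoint (and therefore every container/blob URL) is unchanged.
    props = client.storage_accounts.get_properties(resource_group, account_name)
    print(props.primary_endpoints.blob)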

Related

Why is Azure Functions creating V1 Storage Accounts?

We usually use Azure Functions + an SPA (e.g. Angular) for a lot of different projects. That means we can technically host the Functions and the web frontend inside the same Azure Storage account, as long as it is v2, which supports static website hosting.
However, whenever I create an Azure Function App and let it auto-create the storage account it creates a v1 account. Is there any reason why v1 would be better for Functions than v2?
From Microsoft's docs:
General-purpose v2 accounts: Basic storage account type for blobs, files, queues, and tables. Recommended for most scenarios using Azure Storage.
General-purpose v1 accounts: Legacy account type for blobs, files, queues, and tables. Use general-purpose v2 accounts instead when possible.
I haven't seen any issues running Azure Functions in a v2 Storage Account so I'm wondering why v1 is still the default option?
Azure Storage V1 has lower transaction costs than V2. Azure Functions, especially Durable Functions, heavily uses storage blobs and tables as a synchronization database, which can generate extremely high bills for the storage account. The recommendation is therefore to use Storage V1 due to its lower costs.
See here: https://learn.microsoft.com/en-us/azure/storage/common/storage-account-overview#legacy-storage-account-types
General-purpose v1 accounts may not have the latest features or the lowest per-gigabyte pricing. Consider using it for these scenarios:
Your applications require the Azure classic deployment model.
Your applications are transaction-intensive or use significant geo-replication bandwidth, but don't require large capacity. In this case, a general-purpose v1 account may be the most economical choice.
You use a version of the Azure Storage REST API that is earlier than February 14, 2014, or a client library with a version lower than 4.x, and you can't upgrade your application.
You're selecting a storage account to use as a cache for Azure Site Recovery. Because Site Recovery is transaction-intensive, a general-purpose v1 account may be more cost-effective. For more information, see Support matrix for Azure VM disaster recovery between Azure regions.
As far as I know, the general-purpose v1 storage account is the legacy type of Azure storage account and can meet most basic needs, so it was chosen as the default when pairing a storage account with an Azure Function.
As we can see from the official docs:
General-purpose v2 storage accounts incorporate all of the functionality of general-purpose v1 and Blob storage accounts.
Upgrading a general-purpose v1 or Blob storage account to general-purpose v2 is permanent and cannot be undone.
So I don't think it's a problem that a GPv1 storage account is created by default. If you need more Storage functionality, you can consider upgrading to GPv2.
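If you want to avoid the v1 default entirely, you can create the storage account yourself as StorageV2 and point the Function App at it. A minimal sketch with a recent azure-mgmt-storage Python SDK, where the subscription, resource group, account name, and region are placeholders:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storage import StorageManagementClient

    client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Create a general-purpose v2 account explicitly instead of letting the
    # Function App auto-create a v1 account. Names and location are placeholders.
    poller = client.storage_accounts.begin_create(
        "my-resource-group",
        "myfunctionstorage",
        {
            "location": "westeurope",
            "kind": "StorageV2",
            "sku": {"name": "Standard_LRS"},
        },
    )
    account = poller.result()
    print(account.kind)  # StorageV2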

Azure Blob storage lifecycle management

I am currently using Azure Blobs to store data for a project. I want Azure to automatically delete old entries (data points) that are older than X days. I have found the following documentation:
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-lifecycle-management-concepts?tabs=azure-portal
It essentially says that this can be done using lifecycle management and defining a new rule.
However, this documentation is over 6 months old and I cannot seem to find an option to select lifecycle management and define a new rule.
Has anyone else encountered this problem or know where I can access lifecycle management for an Azure Blob as of 2020?
Yes, this feature is available today; I just confirmed it on a storage account. You need to make sure you are using a v2 storage account; it will not be present on a v1 or blob-only storage account.
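For reference, the rule described in the question (delete blobs older than X days) can also be applied programmatically. A minimal sketch with the azure-mgmt-storage Python SDK, assuming azure-identity is installed and that the subscription, resource group, and account names are placeholders:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storage import StorageManagementClient
    from azure.mgmt.storage.models import (
        DateAfterModification, ManagementPolicy, ManagementPolicyAction,
        ManagementPolicyBaseBlob, ManagementPolicyDefinition, ManagementPolicyFilter,
        ManagementPolicyRule, ManagementPolicySchema,
    )

    client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # One lifecycle rule: delete block blobs 30 days after last modification.
    rule = ManagementPolicyRule(
        name="delete-old-data",
        enabled=True,
        type="Lifecycle",
        definition=ManagementPolicyDefinition(
            filters=ManagementPolicyFilter(blob_types=["blockBlob"]),
            actions=ManagementPolicyAction(
                base_blob=ManagementPolicyBaseBlob(
                    delete=DateAfterModification(days_after_modification_greater_than=30)
                )
            ),
        ),
    )

    # The policy name must be "default"; the account must be general-purpose v2.
    client.management_policies.create_or_update(
        "my-resource-group",
        "mystorageaccount",
        "default",
        ManagementPolicy(policy=ManagementPolicySchema(rules=[rule])),
    )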
I was experiencing the same issue: the lifecycle management option wasn't available on one storage account, but it was available on others.
Check the performance/access tier. If it's set to Premium, then lifecycle management isn't available. Try creating a storage account with the Standard tier.
If you're using an ARM template, try Standard_RAGRS for the sku parameter.

Difference between new and classic storage accounts in Azure

Azure has Storage accounts and Storage accounts (classic) in the Azure Portal.
What are the differences between them? Is there any reason to migrate from a classic storage account to a new storage account?
Classic storage accounts are created using the existing Service Management APIs (the REST API stack that's been available for the past several years). The newer storage accounts are created with the new Azure Resource Manager (ARM) APIs (which are also wrapped in PowerShell and CLI now). Ultimately they provide the same resources to your apps, but they're created and managed differently, and there are a few nuanced differences (such as the ability to tag resources that are created via ARM scripts).
You can't convert a classic storage account (or any classic resource) to the newer type. You don't really need to anyway, unless you're trying to mix resources from classic and new, such as adding ARM-based virtual machines to a classic-based virtual network, or spinning up an ARM-based VM from a vhd image sitting in a classic storage account (and for that example, you could always just copy the vhd to a new storage account). Note that, for general storage usage (blobs/tables/queues), you just need the URI and the primary (or secondary) key. With those, you can access your storage resources from anywhere, from any VM/website/etc., regardless of whether you're accessing storage from classic or new virtual machines, for example.
Check out this link for a general list of differences between classic and new resources.
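To illustrate the URI-plus-key point above: the data plane doesn't care how the account was created, so the same client code works against a classic or an ARM storage account. A minimal sketch with the azure-storage-blob Python SDK, where the account name, key, and container are placeholders:

    from azure.storage.blob import BlobServiceClient

    # Endpoint + key is all the data plane needs, regardless of how the
    # account was created. Account name, key, and container are placeholders.
    client = BlobServiceClient(
        account_url="https://mystorageaccount.blob.core.windows.net",
        credential="<primary-or-secondary-key>",
    )

    for blob in client.get_container_client("vhds").list_blobs():
        print(blob.name)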
One advantage of the new over the classic storage accounts is Storage Service Encryption (SSE):
Q: I have an existing classic storage account. Can I enable SSE on it?
A: No, SSE is only supported on Resource Manager storage accounts.
Q: How can I encrypt data in my classic storage account?
A: You can create a new Resource Manager storage account and copy your data using AzCopy from your existing classic storage account to your newly created Resource Manager storage account.
There is now a way to migrate Classic resources to the new ARM model. I've done a few myself and it worked as expected. Here's a guide from Microsoft:
https://learn.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-ps-migration-classic-resource-manager
In addition to @David Makogon's answer, the new Azure Storage offers reselling resources to sub-subscriptions.
This means that you are able to buy storage from Azure and sell it to your customers.
You can now migrate classic storage accounts to ARM from within Azure: Settings --> Migrate to ARM. This can be done in the Azure portal or with PowerShell.

Rich ACLs with Azure Storage - delegating to AD?

How do I build a rich storage ACL policy system with Azure storage?
I want to have a blob container that has the following users:
public - read-only against some set of blobs
Uploader - read-write against some subset of blob names; these keys are shared out to semi-trusted build machines
shared admin - full capabilities against this blob subset
Ideally these users are accounts driven through Azure AD, so I can use the full directory service power with them... :)
My understanding of shared access keys is that they are (1) time-limited and (2) have to be created with hand-tooled code. My desire is that I can do something similar to AWS IAM policies on S3... :-)
Something like AWS IAM policies for S3 does not exist for Azure Blob Storage today. Azure recently introduced Role-Based Access Control (RBAC), and it is available for Azure Storage, but it is limited to management activities only, such as creating storage accounts. It is not yet available for data management activities such as uploading blobs.
You may want to look at Azure Rights Management Service (Azure RMS) and see if it is the right solution for your needs. If you search for "Azure RMS Blob" you will find a search result linking to a PDF file that talks about securing blob storage with this service (the link directly downloads the PDF file, so I could not include it here).
If you're looking for a 3rd party service to do this, do take a look at the "Team Edition" of Cloud Portam (a service I am building currently). We recently released the Team Edition. In short, Cloud Portam is a browser-based Azure Explorer and it supports managing Azure Storage, Search Service and DocumentDB accounts. The Team Edition makes use of your Azure AD for user authentication and you can grant permissions (None, Read-Only, Read-Write and Read-Write-Delete) on the Azure resources you manage through this application.
Paul,
While Gaurav is correct in that Azure Storage does not have AD integration today, I wanted to point out a couple of things about shared access signatures from your post:
My understanding of shared access keys is that they are (1) time-limited and (2) have to be created with hand-tooled code
1) A SAS token/URI does not need to have an expiry date on it (it's an optional field), so in that sense they are not time-limited and need not be regenerated unless you change the shared key with which you generated the token.
2) You can use PowerShell cmdlets to create them, e.g.: https://msdn.microsoft.com/en-us/library/dn806416.aspx. Some storage explorers also support creating SAS tokens/URIs without you having to write code for it.
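On point 2, a SAS can also be generated with a few lines of the current azure-storage-blob Python SDK rather than hand-tooled signing code. A minimal sketch, where the account name, key, container, blob, and expiry window are placeholders:

    from datetime import datetime, timedelta
    from azure.storage.blob import ContainerSasPermissions, generate_container_sas

    # Grant the semi-trusted build machines read/write on one container.
    # Account name, key, and container are placeholders.
    sas_token = generate_container_sas(
        account_name="mystorageaccount",
        container_name="builds",
        account_key="<account-key>",
        permission=ContainerSasPermissions(read=True, write=True),
        expiry=datetime.utcnow() + timedelta(hours=8),
    )

    # Hand the resulting URL (or just the token) to the uploader machines.
    blob_url = (
        "https://mystorageaccount.blob.core.windows.net/builds/artifact.zip?"
        + sas_token
    )
    print(blob_url)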

Azure - Multiple Cloud Services, Single Storage Account

I want to create a couple of cloud services - Int, QA, and Prod. Each of these will connect to separate Db's.
Do these cloud services require "storage accounts"? Conceptually the cloud services have executables and they must be physically located somewhere.
Note: I do not use any blobs/queues/tables.
If so, must I create 3 separate storage accounts or link them up to one?
Storage accounts are more like storage namespaces - each has a URL and a set of access keys. You can use storage from anywhere, whether from the cloud or not, from one cloud service or many.
As @sharptooth pointed out, you need storage for diagnostics with Cloud Services, and also for attached disks (Azure Drives for cloud services) and for the deployments themselves (the cloud service package and configuration are stored there).
Storage accounts themselves are free: create a bunch, and you still only pay for consumption.
There are some objective reasons why you'd go with separate storage accounts:
You feel that you could exceed the 20,000 transaction/second advertised limit of a single storage account (remember that storage diagnostics are using some of this transaction rate, which is impacted by your logging-aggressiveness).
You are concerned about security/isolation. You may want your dev and QA folks using an entirely different subscription altogether, with their own storage accounts, to avoid any risk of damaging a production deployment
You feel that you'll exceed 500 TB (the limit of a single storage account)
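If you do consolidate on one account, the same account can serve Int, QA, and Prod from anywhere. A minimal sketch with the azure-storage-blob Python SDK, where the connection string and container names are placeholders, just to show the idea:

    from azure.storage.blob import BlobServiceClient

    # One storage account shared by Int, QA, and Prod; the connection string is a placeholder.
    conn_str = (
        "DefaultEndpointsProtocol=https;AccountName=mystorageaccount;"
        "AccountKey=<key>;EndpointSuffix=core.windows.net"
    )
    service = BlobServiceClient.from_connection_string(conn_str)

    # Keep environments apart by container naming (or use separate accounts,
    # per the reasons listed above).
    for env in ("int", "qa", "prod"):
        container = service.get_container_client(f"deployments-{env}")
        if not container.exists():
            container.create_container()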
Azure Diagnostics uses Azure Table Storage under the hood (and it's more convenient to use one storage account for every service, but it's not required). Other dependencies your service has might also use some of the Azure Storage services. If you're sure that you don't need Azure Storage (and so you don't need persistent storage of data dumped through Azure Diagnostics) - okay, you can go without it.
The service package of your service will be stored and managed by Azure infrastructure - that part doesn't require a storage account.

Resources