Azure Blob Storage - automatically move to Archive storage tier

Is there a feature in Azure to move blobs in Hot/Cool tiers to Archive automatically if they haven't been used in a period of time?
For example, if I have a blob stored in Archive, I access it by rehydrating it to Hot/Cool. Once I am done, is there a way Azure can automatically downtier it?

Moving blobs that have not been accessed to another tier is possible using native functionality, but for the moment this is limited to France Central, Canada East, and Canada Central, as the feature is in preview.
In order to use the Last accessed option, select Access tracking enabled on the Lifecycle Management page in the Azure portal.
Then define a rule based on the Last accessed condition.
You can find more details here
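For reference, a last-access-based rule in the lifecycle policy could look roughly like the snippet below, shown as a Python dict mirroring the policy JSON. The rule name, prefix, and day count are placeholders, and since the feature is in preview the exact property support should be checked against the lifecycle management docs.

```python
# Hypothetical lifecycle rule: archive block blobs under a given prefix once
# they have not been accessed for 30 days (requires access tracking enabled).
last_access_rule = {
    "enabled": True,
    "name": "archive-when-unused-30d",         # placeholder rule name
    "type": "Lifecycle",
    "definition": {
        "filters": {
            "blobTypes": ["blockBlob"],
            "prefixMatch": ["mycontainer/"],   # placeholder container/prefix
        },
        "actions": {
            "baseBlob": {
                "tierToArchive": {"daysAfterLastAccessTimeGreaterThan": 30}
            }
        },
    },
}
```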

This is now generally available from Microsoft as of 2019. Now you can:
Automatically change the blob tier after N days.
Automatically remove the blob after N days.
Azure Blob lifecycle management overview

All tier changes must be performed by you; there is no built-in automatic tier-change mechanism. You'll need to make a specific call to set the tier for each tier change (note - I pointed to the REST API, but various language-specific SDKs wrap the call as well).
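As a rough illustration with the Python SDK (azure-storage-blob), a per-blob tier change is a single Set Blob Tier call; the connection string, container, and blob names below are placeholders:

```python
from azure.storage.blob import BlobClient

# Placeholder connection details - substitute your own.
blob = BlobClient.from_connection_string(
    conn_str="<connection-string>",
    container_name="docs",
    blob_name="report.pdf",
)

# Explicitly move this blob to the Archive tier; setting "Hot" or "Cool" on an
# archived blob is what triggers rehydration in the other direction.
blob.set_standard_blob_tier("Archive")
```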

Please see this Azure Feedback question for updates on automated object lifecycle policies for Azure Storage Blobs (as well as a description of a workaround using Logic Apps). The question pertains to blob TTL, but tiering policies will also be possible with both the workaround and ultimately using the policy framework.

Related

Azure Blob storage lifecycle management

I am currently using Azure Blobs to store data for a project. I want Azure to automatically delete old entries (data points) which are older than X number of days. I have found the following documentation:
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-lifecycle-management-concepts?tabs=azure-portal
It essentially says that this can be done using lifecycle management and defining a new rule.
However, this documentation is over 6 months old and I cannot seem to find an option to select lifecycle management and define a new rule.
Has anyone else encountered this problem or know where I can access lifecycle management for an Azure Blob as of 2020?
Yes, this is a feature available today; I just confirmed it on a storage account. You need to make sure you are using a V2 storage account; it will not be present on a v1 or blob-only storage account.
I was experiencing the same issue, where the option for Lifecycle management wasn't available, but it was available on other storage accounts.
Check the performance/access tier. If it's set to Premium, then Lifecycle management isn't available. Try creating a storage account with Standard.
If you're using an ARM template, try Standard_RAGRS for the sku parameter.
screenshot of storage account in portal:
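If you would rather do this programmatically than through the portal or an ARM template, a rough equivalent with recent versions of the Python management SDK (azure-mgmt-storage) might look like this; the subscription, resource group, account name, and region are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

# Placeholder subscription, resource group, account name, and region.
client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.storage_accounts.begin_create(
    "my-resource-group",
    "mystorageaccount",
    {
        "location": "westeurope",
        "kind": "StorageV2",                # general-purpose v2 - needed for lifecycle management
        "sku": {"name": "Standard_RAGRS"},  # Standard, not Premium
    },
)
account = poller.result()
print(account.name, account.kind, account.sku.name)
```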

What does the "Storage Account" setting of an Azure Function App do?

It has a default selection of "functionb7be452dbab0" in my case, but I can change it to select other storage accounts. There is no documentation that I can see which explains the storage account setting.
It is used for several things:
In Consumption mode, it holds your files using Azure Files, i.e. all your function files live in there.
In addition, the script runtime (based on the WebJobs SDK) uses Blobs, Queues and Tables as part of its infrastructure, e.g. it uses them to synchronize work between multiple instances. It also stores logging information there.
Note that you can easily see all this by using Microsoft Azure Storage Explorer and looking at all the things in there.
As an aside, you can optionally also make use of this storage account for your own queues and blobs that you want to use in your functions.
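If you prefer code to Storage Explorer, a small sketch with the Python storage SDKs (azure-storage-blob and azure-storage-queue) can list what the runtime has created; the connection string is a placeholder for the one in your Function App's AzureWebJobsStorage setting:

```python
from azure.storage.blob import BlobServiceClient
from azure.storage.queue import QueueServiceClient

# Placeholder: the same connection string the Function App uses (AzureWebJobsStorage).
conn = "<function-storage-connection-string>"

# Containers created by the Functions/WebJobs runtime (e.g. azure-webjobs-hosts).
for container in BlobServiceClient.from_connection_string(conn).list_containers():
    print("container:", container.name)

# Queues used by the host for coordination, plus any you use yourself.
for queue in QueueServiceClient.from_connection_string(conn).list_queues():
    print("queue:", queue.name)
```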

Delete blob in azure after certain time

Is it possible to make a blob auto-delete after a certain time?
I need to delete my blobs a few hours after they were uploaded to Azure; I don't need to store them for more than 10 days.
Not at this time, unfortunately. This is something that could be accomplished on top of Azure Storage using WebJobs or something similar, but there is nothing offered by the platform itself.
Since March 2019, this is possible with Lifecycle management support in Azure Blob Storage. See https://stackoverflow.com/a/57305518/347805
Azure Blob storage lifecycle management offers a rich, rule-based policy for GPv2 and Blob storage accounts. Use the policy to transition your data to the appropriate access tiers or expire at the end of the data's lifecycle.
The lifecycle management policy lets you:
Transition blobs to a cooler storage tier (hot to cool, hot to archive, or cool to archive) to optimize for performance and cost
Delete blobs at the end of their lifecycles
Define rules to be run once per day at the storage account level
Apply rules to containers or a subset of blobs (using prefixes as filters)
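To make the quoted policy concrete for the 10-day case in the question, here is a hedged sketch using the Python management SDK (azure-mgmt-storage); the subscription, resource group, and account names are placeholders, and the policy name is expected to be "default":

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Placeholder resource group / account names. The single rule below deletes
# block blobs 10 days after their last modification.
client.management_policies.create_or_update(
    "my-resource-group",
    "mystorageaccount",
    "default",  # lifecycle policies use the fixed name "default"
    {
        "policy": {
            "rules": [
                {
                    "enabled": True,
                    "name": "delete-after-10-days",
                    "type": "Lifecycle",
                    "definition": {
                        "filters": {"blobTypes": ["blockBlob"]},
                        "actions": {
                            "baseBlob": {
                                "delete": {"daysAfterModificationGreaterThan": 10}
                            }
                        },
                    },
                }
            ]
        }
    },
)
```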
In short, it is NOT POSSIBLE to make a blob auto-delete after a certain time by any setting/configuration on the blob itself in Azure at this time.
You will need to rely on other services such as Azure WebJobs or Azure Automation to automate such a task.

Is this a sensible Azure Blob Storage setup and are there restructuring tools to help me migrate to it?

I think we have gone slightly wrong in the way we have used Azure storage in a SaaS system. We created a storage account per client (security was the prime consideration) and containers per system area, e.g. Vehicle, Work, etc.
Having done further reading, it seems a suggestion would be that we should have used one account for all clients. Each client should have a container (so we can programmatically create it) which we then secure. Then files should just be structured using a "virtual" folder structure, e.g. a container called "Client A", with files for the Jobs (in the Work area of the system) stored like Work/Jobs/{entity id}/blah.pdf. Does this sound sensible?
If so, we now have about 10 accounts that we need to restructure. Are there any tools that will let us easily copy one account's contents to another account's containers? I appreciate we probably can't move the files between accounts (as we set them up ages ago, so can't use the native copy function), so I guess some sort of copy. There are GBs of files across all the accounts.
It may not be such a bad idea to keep different storage accounts per client. The benefits of doing that (to me) are:
Better security as mentioned by you.
You'll be able to achieve better throughput per client, as each client will have their own storage account. If you keep one storage account for all clients and one client starts hitting that account hard, other clients will be impacted.
Better scalability. Each storage account can hold up to 200 TB of data. So if you keep just one storage account and assuming each client consumes 100 GB of data, you'll be able to accommodate only 2000 clients (I hope my math is right :)). With individual storage accounts, you won't be restricted in that sense.
There are some downsides as well. Some of them are:
Management would be a nightmare. Imagine you have 2000 customers; you would end up managing 2000 storage accounts.
You may be limited by Windows Azure. Currently, by default you get about 10 or 20 storage accounts per subscription, and you would need to contact support to manually raise that limit. They can do that for you, but I would imagine you would want this to be a self-service model where you would be able to create as many storage accounts as you want without contacting support.
Now coming to your question about tooling, you could possibly write something on your own which makes use of the Copy Blob functionality. This functionality allows you to copy blob data across storage accounts asynchronously. Basically, this is what you would do (sketched in code after the steps below):
First create a blob container for each client in the target storage account.
Enumerate all blob containers in source storage account.
For each blob container in source storage account, enumerate the blobs.
Copy each blob asynchronously to target storage account in the client's blob container.
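Here is a minimal sketch of those steps using the current Python SDK (azure-storage-blob) rather than the original Copy Blob REST call; the connection strings and the "client-a" container name are placeholders, and a SAS token would need to be appended to the source URL if the source blobs are private:

```python
from azure.core.exceptions import ResourceExistsError
from azure.storage.blob import BlobServiceClient

# Placeholders: one source account (per client) and the shared target account.
source = BlobServiceClient.from_connection_string("<client-a-source-connection-string>")
target = BlobServiceClient.from_connection_string("<target-connection-string>")

# Step 1: one container per client in the target account.
client_container = target.get_container_client("client-a")
try:
    client_container.create_container()
except ResourceExistsError:
    pass

# Steps 2-4: enumerate source containers and blobs, then copy asynchronously
# (server-side Copy Blob). The old container name becomes a "virtual folder".
for container in source.list_containers():
    source_container = source.get_container_client(container.name)
    for blob in source_container.list_blobs():
        source_url = source_container.get_blob_client(blob.name).url  # add a SAS token if the source is private
        client_container.get_blob_client(f"{container.name}/{blob.name}").start_copy_from_url(source_url)
```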
If you're a PowerShell fan, you can look into Cerebrata's Azure Management Cmdlets (http://www.cerebrata.com/Products/AzureManagementCmdlets) as well, which wrap this functionality. I could have recommended Cerebrata's Azure Management Studio too, but I haven't tried this functionality there just yet [Disclosure: I'm one of the devs on the Cerebrata team].
Hope this helps.
Adding to Gaurav Mantri's answer...
You can have a shared storage account for customers and use a Shared Access Signature (SAS) to limit access to a particular container or blobs (as well as to tables and queues)...
http://msdn.microsoft.com/en-us/library/windowsazure/hh508996.aspx
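For illustration, generating a container-scoped SAS with the Python SDK (azure-storage-blob) might look like this; the account name, key, container, and expiry are placeholders:

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import ContainerSasPermissions, generate_container_sas

# Placeholder account/container names and key; grants read+list on one
# client's container for 24 hours.
sas_token = generate_container_sas(
    account_name="sharedaccount",
    container_name="client-a",
    account_key="<account-key>",
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=24),
)

url = f"https://sharedaccount.blob.core.windows.net/client-a?{sas_token}"
```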

Getting Started with Azure Question

I'm trying to get up-and-going with Windows Azure. I understand that I need to create a "Storage Account". However, what I'm confused about is, how I should set it up. For instance, my Azure subscription is set to my company name. I intend to have multiple ASP.NET web applications (web roles) associated with my subscription. Each web application will have its own database.
My question is, should each web application have its own storage account? Or should only one storage account be used for all of my projects?
Thank you!
There's no one way to answer this, but here are some thoughts to help your decision:
Each storage account is limited to 100TB. If you feel that you will push the limits of this across multiple websites, then create multiple storage accounts for sure.
To make billing easier, I'd suggest separate storage accounts
Storage accounts have an SLA of a few thousand transactions per second across the entire storage account. For performance purposes, it's probably better to have separate storage accounts
Consider putting your diagnostic data in a separate storage account. This way, you can safely give your Storage Account key to a 3rd-party like ParaLeap (creators of AzureWatch) for monitoring your app, while not giving away the key to real customer data, for instance.
If you need more than 5 storage accounts, you'll need to contact Customer Support to increase this number.
Windows Azure Storage is for simple blob storage; this is for when your app needs a file store. Any application, not just Azure web roles, can target the storage service. It's kind of like Amazon S3, if you're familiar with that.
Storage services are not required to run Azure applications. You just need a "compute" instance.
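As a small illustration of that point, any application that has the storage account's connection string can write a blob, shown here with the (much newer) Python SDK azure-storage-blob; the connection string and names are placeholders:

```python
from azure.storage.blob import BlobClient

# Placeholder connection string / names; works from any application,
# no web or worker role required.
blob = BlobClient.from_connection_string(
    conn_str="<connection-string>",
    container_name="files",
    blob_name="hello.txt",
)
blob.upload_blob(b"hello from anywhere", overwrite=True)
```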

Resources