I'm currently trying to implement lifecycle management on a container where I have nested folders as follows:
container1/1/test/done_filename.txt
container1/1/processed/complete_filename.txt
container1/1/wip_filename.txt
I tried creating a rule filter to delete files under the test folder that start with done_, but I waited 48 hours and they still haven't been removed.
Rule filter:
delete after 1 day
Prefix Match:
file with this prefix: container1/1/test/done_
All the samples I see from Microsoft and elsewhere online only mention "myContainer/prefix". Am I to understand that the lifecycle management feature doesn't support subfolders/full paths? Do all files have to be in the root of the container, or what am I doing wrong here?
Update, showing the rule definition:
I expect the above to delete any file under "container1/1/test/" that starts with "done_", but the file is still there when I check after a few days, even though it has not been modified.
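For reference, the rule as described should be expressible in lifecycle policy terms roughly as follows (a sketch using the azure-mgmt-storage Python package; the subscription, resource group, and account names are placeholders):

```python
# A sketch of the rule described above; the nested dict mirrors the
# lifecycle policy JSON. All angle-bracketed names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.management_policies.create_or_update(
    "<resource-group>",
    "<storage-account>",
    "default",  # the management policy name must be "default"
    {
        "policy": {
            "rules": [
                {
                    "name": "delete-done-files",
                    "enabled": True,
                    "type": "Lifecycle",
                    "definition": {
                        "filters": {
                            "blobTypes": ["blockBlob"],
                            # Full virtual paths are valid prefixes.
                            "prefixMatch": ["container1/1/test/done_"],
                        },
                        "actions": {
                            "baseBlob": {
                                "delete": {"daysAfterModificationGreaterThan": 1}
                            }
                        },
                    },
                }
            ]
        }
    },
)
```

Note that a new or updated policy can take up to 24 hours to go into effect, and the action can take up to another 24 hours to run, so roughly 48 hours can pass before the first delete happens.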
If you have enabled the rule, then your policy and prefix match are fine.
You need to check a few settings that can block the blob delete operation:
1. Make sure you can manually delete these blobs; try deleting one of them by hand (see the sketch after this list). Why? If a blob is leased or has an immutability policy set, the delete operation is not allowed.
2. Check the settings in the Networking pane of your storage account and make sure the option "Allow access from all networks" is selected.
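To illustrate check 1, here is a quick way to probe whether a lease is blocking deletion (a sketch with the azure-storage-blob Python package; the connection string is a placeholder and the blob path is taken from the question above):

```python
# Try deleting a blob and, if that fails, inspect its lease state.
from azure.core.exceptions import HttpResponseError
from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    "<connection-string>",
    container_name="container1",
    blob_name="1/test/done_filename.txt",
)

try:
    blob.delete_blob()
    print("Deleted manually; lifecycle management should be able to delete it too.")
except HttpResponseError as err:
    props = blob.get_blob_properties()
    # A leased blob reports a lease state of "leased"; an immutability
    # policy likewise surfaces here as a failed delete.
    print(f"Delete failed ({err.status_code}); lease state: {props.lease.state}")
```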
I was able to find the solution here: https://learn.microsoft.com/answers/answers/154361/view.html
When you are using selected networks because of security restrictions, as we do, the correct solution is to enable “Allow trusted Microsoft services to access this storage account” under Networking.
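That bypass can also be set programmatically (a sketch with the azure-mgmt-storage Python package; the resource names are placeholders, and the "Deny" default action below is assumed to match the "selected networks" setup described above):

```python
# Enable "Allow trusted Microsoft services to access this storage account"
# while keeping network access restricted to selected networks.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.storage_accounts.update(
    "<resource-group>",
    "<storage-account>",
    {"network_rule_set": {"bypass": "AzureServices", "default_action": "Deny"}},
)
```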
Related
I have a quick question regarding filter prefixes for lifecycle management on an Azure Storage account (general-purpose v2).
The scenario I'm faced with: I have a blob container that contains subdirectories created dynamically by a function that creates blobs depending on certain conditions, so the directory layout depends on that logic.
The problem I want to solve is deleting those blobs after 7 days.
The documentation for lifecycle management says I can set a filter prefix for the container I want the "retention rule" to apply to, so to speak.
So the question related to what I'm trying to do is the following:
When setting the filter prefix for a blob container to "containerName/", as the documentation says to do, will it also look in the subfolders?
In the Microsoft documentation it says:
"A prefix match string like container1/ applies to all blobs in the
container named container1."
Does that also include all the blobs in all the subfolders automatically, or do I have to specify each subfolder after the slash, as it says further down in the same part of the documentation?
I would like to include all blobs in that first container regardless of whether they are in subfolders or not, as the subfolders are created dynamically, as mentioned before.
Does that also include all the blobs in all the subfolders automatically, or do I have to specify each subfolder after the slash, as it says further down in the same part of the documentation?
Yes. When you set the prefix to the container name, all blobs (including those in subfolders) are considered, so you do not need to specify subfolders.
You would specify a subfolder in the prefix only when you want lifecycle management to manage blobs inside that specific subfolder.
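To illustrate the difference, the filters section of a lifecycle rule could take either form (a sketch; containerName and subfolder1 are placeholders):

```python
# Matches every blob in the container, including blobs in all dynamically
# created subfolders; no need to enumerate them.
filters_whole_container = {
    "blobTypes": ["blockBlob"],
    "prefixMatch": ["containerName/"],
}

# Matches only blobs whose names start with this virtual folder path.
filters_single_subfolder = {
    "blobTypes": ["blockBlob"],
    "prefixMatch": ["containerName/subfolder1/"],
}
```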
I have a set of folders in a container named records (in an Azure Storage account). In general, whatever blobs (folders) are present in the records container get deleted by a lifecycle management rule.
Rule: if a blob has existed for more than 30 days, it is deleted.
In my case, however, all blobs (folders) should be deleted except one folder in the container, named Backup.
Is there any way to add a rule that excludes a particular blob (in my case, a folder) from deletion?
The Backup folder shouldn't be deleted when the existing rule runs.
Create a lease for that particular blob, for example using the Azure portal. A lease prevents other processes from modifying or deleting the blob, and that includes lifecycle management rules.
You can also acquire or break a lease using the REST API or one of the many storage SDKs.
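For example, with the azure-storage-blob Python package (a sketch; the connection string is a placeholder, and the blob path assumes a hypothetical file under the Backup folder from the question):

```python
from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    "<connection-string>",
    container_name="records",
    blob_name="Backup/keep_this.txt",  # hypothetical blob inside the Backup folder
)

# lease_duration=-1 acquires an infinite lease, so the blob stays locked
# until the lease is explicitly released or broken.
lease = blob.acquire_lease(lease_duration=-1)

# Later, when you want the blob to be deletable again:
# lease.break_lease()
```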
Another option would be to skip the lifecycle management rules altogether and write a scheduled Azure Function that deletes blobs older than 30 days, except the ones with Backup in their name.
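A minimal sketch of that approach, assuming the logic runs inside a timer-triggered Azure Function (the records container and Backup folder names come from the question; the connection string is a placeholder):

```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import ContainerClient

def purge_old_blobs(connection_string: str) -> None:
    container = ContainerClient.from_connection_string(connection_string, "records")
    cutoff = datetime.now(timezone.utc) - timedelta(days=30)
    for blob in container.list_blobs():
        # Skip everything under the Backup "folder" (a name prefix).
        if blob.name.startswith("Backup/"):
            continue
        if blob.last_modified < cutoff:
            container.delete_blob(blob.name)
```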
Please do note: if you have enabled "Hierarchical namespace", you have the concept of directories, but those cannot be leased. If you have not, realise that folders are a virtual construct (the folder path is just part of each blob's name) and cannot be leased themselves; only the blobs can. See the docs. So in that case you have to take a lease on each blob individually, or write a script that does it once.
While working in the Azure portal I came across the option to execute shell commands (Cloud Shell). It required configuring shell.azure.com the first time.
The first step gives the option of selecting a subscription and creating storage. When I select the required subscription and click on "Create storage", it gives this error:
Error: 409
{"error":{"code":"StorageAccountAlreadyTaken", "message":"The storage account named ... is already taken"}}
Can't create a storage account. Please try again.
I tried multiple times, but to no avail.
I opened "Show advanced settings" and tried various combinations, but there the option to use an existing storage account is disabled, and "Create storage" is disabled as well.
PS: I have rights to create storage accounts on the subscription, so that is not the issue.
I also faced the same issue before. You need to directly edit (manually type) the name of the existing storage account in the box and just ignore the "use existing" checkbox. It seems to be a UI bug.
When you add the existing storage account in the UI, please make sure the Cloud Shell region matches the storage account region. You can see the supported storage regions at https://learn.microsoft.com/en-us/azure/cloud-shell/persisting-shell-storage.
Refer to these similar threads:
Unable to open Cloud Shell because of Storage Account error
Azure Cloud shell requires storage account
I have a test restore point (older than 30 days) that I wanted to delete to save on cost. I tried via the Azure portal, but the only option I found is to delete the backup data, not the restore point of a particular VM.
The VMs are deployed using the classic deployment model.
It is now possible through the portal, PowerShell, and the CLI.
First of all, you need to stop the backup (retain the backup data).
Disable soft delete in the Recovery Services vault properties.
After that, find the appropriate resource group, AzureBackupRG_<location_of_vm>_1, for example AzureBackupRG_westus2_1. Remember to check "Show hidden types" in the resource group.
The last step is to delete the restore points in this resource group. You only delete the restore points, not the backup data in the vault.
PORTAL:
Temporarily stop the backup and retain backup data.
To move virtual machines configured with Azure Backup, do the following steps:
Find the location of your virtual machine.
Find a resource group with the following naming pattern: AzureBackupRG_<location of your VM>_1. For example, AzureBackupRG_westus2_1
In the Azure portal, check Show hidden types.
Find the resource with type Microsoft.Compute/restorePointCollections that has the naming pattern AzureBackup_<name of your VM that you're trying to move>_###########.
Delete this resource. This operation deletes only the instant recovery points, not the backed-up data in the vault.
After the delete operation is complete, you can move your virtual machine.
Move the VM to the target resource group.
Resume the backup.
Link to MS Doc
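If you prefer to script step 5, a rough equivalent with the azure-mgmt-compute Python package might look like this (a sketch; the subscription ID is a placeholder, and the resource group and collection names follow the naming patterns from the steps above):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Deletes only the instant recovery points, not the backed-up data in the
# vault (the same effect as deleting the resource in the portal).
client.restore_point_collections.begin_delete(
    "AzureBackupRG_westus2_1",
    "AzureBackup_<vm-name>_###########",
).result()
```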
As far as I know, it's impossible to delete a single restore point in Azure Backup, according to the "delete backup data" section of the official doc:
Unlike the process for restoring recovery points, when you delete backup data, you can't choose specific recovery points to delete. If you delete your backup data, you delete all associated recovery points.
Moreover, it's not necessary to select specific recovery points to delete, as you can customize the retention range in the backup policy. Retention means how long the data needs to be stored. Refer to this.
I got an 'internal system error' while trying to delete the restore point. This was because the VM had already been deallocated. I turned it back on, was then able to delete the restore point successfully, and then shut the VM down again.
I am cleaning out some old items from my Azure account and cannot remove an older-version Backup vault.
I get the following error when I try to delete it:
Vault cannot be deleted as there are existing resources within the vault. Please ensure there are no backup items, protected servers or backup management servers associated with this vault. Unregister the following containers associated with this vault before proceeding for deletion: COMPUTER-NAME. Unregister all containers from the vault and then retry to delete vault.
Notice the COMPUTER-NAME. That is the name of my computer, but I cannot find the Azure Backup agent installed on that computer. I also cannot find a container with the computer's name in any storage container in my entire Azure account.
Can someone help me figure out how to remove these items?
Thanks in advance.
The first screenshot shows the Backup vault and the error message I get when I try to delete it.
The second screenshot shows the backup items that remain, but I cannot delete them.
The red boxes cover my COMPUTER-NAME.
Looks like my previous answer was turned into a comment due to brevity. Here's an update to make it a better answer anyway. Answer from that link quoted below for reference.
I have not mapped this answer to the corresponding Azure commands, but I was able to find my way to a solution via the Azure Portal. The steps were as follows:
Selected my Recovery Service resource
Under the Manage section, clicked Backup Infrastructure
Under Management Servers, clicked Protected Servers
In the list that followed, clicked the row where the protected server count was greater than 0 (in my case, Azure Backup Agent, because the backup agent was installed on my Windows desktop)
Clicked on my server name in the Protected Server list
Clicked Delete in the card for my protected server
After that completed, I was able to delete the entire vault. These steps may be helpful if you have other Backup Infrastructure resources and possibly even Site Recovery Infrastructure resources associated with a vault.
Update: it seems there's an open issue about Get-AzureRmRecoveryServicesBackupItem not being able to return MARS backup items, which is ultimately what the issue here was.