Azure ADLS Gen2 - Lifecycle management logs

I'm running a policy that moves files to the archive access tier after one day.
Is there a log indicating which policy triggered the move and when/how it was executed?
Perhaps Log Analytics could help here; what would the settings look like?
Please see the code below for the policies I'm testing with. I'm not sure whether the rules with the forward-slash prefix are the ones that work; I'm testing both prefix styles because I was initially getting no results, and I added one policy for modified files and one for last-accessed files. The move works now, but I don't know which policy did it.
Thanks for any answers you may provide.
{
  "rules": [
    {
      "enabled": true,
      "name": "move to archive storage amcont01 accessed",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "tierToArchive": {
              "daysAfterLastAccessTimeGreaterThan": 1
            }
          }
        },
        "filters": {
          "blobTypes": [
            "blockBlob"
          ],
          "prefixMatch": [
            "/amcont01",
            "/amcont01/folder1",
            "amcont01",
            "amcont01/folder1"
          ]
        }
      }
    },
    {
      "enabled": true,
      "name": "move to archive rule amcont01 modified folder1",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "tierToArchive": {
              "daysAfterModificationGreaterThan": 1
            }
          }
        },
        "filters": {
          "blobTypes": [
            "blockBlob"
          ],
          "prefixMatch": [
            "amcont01",
            "amcont01/folder1",
            "/amcont01",
            "/amcont01/folder1"
          ]
        }
      }
    }
  ]
}
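For the Log Analytics side of the question, here is a minimal sketch of what the settings could look like; every subscription, resource-group, account, and workspace ID below is a placeholder. Once blob resource logs are routed to a workspace, tier changes show up as SetBlobTier operations in the StorageBlobLogs table; as far as I can tell the entries do not name the policy rule that triggered the move, so telling the accessed rule from the modified rule still means comparing the operation time against each rule's condition.

# Route blob service logs to a Log Analytics workspace (all IDs are placeholders).
az monitor diagnostic-settings create \
  --name lifecycle-audit \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/blobServices/default" \
  --workspace "<workspace-resource-id>" \
  --logs '[{"category": "StorageWrite", "enabled": true}]'

# Tier moves surface as SetBlobTier write operations
# (the query command may require the log-analytics extension).
az monitor log-analytics query \
  --workspace "<workspace-customer-id>" \
  --analytics-query "StorageBlobLogs | where OperationName == 'SetBlobTier' | project TimeGenerated, Uri, StatusText"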

Related

How to delete blobs from blob storage using Lifecycle management

I have a storage account, and inside it a container "mycontainer" with two virtual folders, preview and final.
I want to configure a lifecycle rule that deletes all blobs from the preview folder created more than a day ago, and along with that another rule that deletes all blobs from final created more than a day ago, but only if the blob has an index tag "candelete": "true".
When I tried configuring the lifecycle rules, blobs get deleted from preview, but not from final.
My rules look like:
{
  "rules": [
    {
      "enabled": true,
      "name": "deletepreview",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "delete": {
              "daysAfterCreationGreaterThan": 1
            }
          }
        },
        "filters": {
          "blobTypes": [
            "blockBlob"
          ],
          "prefixMatch": [
            "mycontainer/preview"
          ]
        }
      }
    },
    {
      "enabled": true,
      "name": "deletefinal",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "delete": {
              "daysAfterCreationGreaterThan": 1
            }
          }
        },
        "filters": {
          "blobIndexMatch": [
            {
              "name": "candelete",
              "op": "==",
              "value": "true"
            }
          ],
          "blobTypes": [
            "blockBlob"
          ],
          "prefixMatch": [
            "mycontainer/final"
          ]
        }
      }
    }
  ]
}
Blob Lifecycle Management
I created a blob1 container with t1 and t2 folders.
There are two ways to manage the blob lifecycle: through the portal and through code. The following method illustrates creating rules through the portal; there is also a code view where the generated code can be edited directly.
In the storage account, go to Lifecycle management -> Add a rule.
Give the rule name, rule scope (limit blobs with filters), blob type (block blobs) and blob subtype (base blobs), then click Next.
Set the conditions as required; in this case base blobs were = created, days = 0 (for demonstration), then = delete the blob.
In the next filter set, give the blob prefix as container_name/folder_name and click Add.
For filters with an index tag, make sure to add the relevant blob index tag while creating the file; it can also be updated after creation under the file's properties (see the CLI sketch after the note below).
Similarly, add another rule with the required details; in the filter set tab give the appropriate blob prefix, add the key/value under Blob index match, then click Add.
With both rules created, the code for them is generated automatically in the code view tab:
{
  "rules": [
    {
      "enabled": true,
      "name": "deletet1",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "delete": {
              "daysAfterCreationGreaterThan": 0
            }
          }
        },
        "filters": {
          "blobTypes": [
            "blockBlob"
          ],
          "prefixMatch": [
            "blob1/t1"
          ]
        }
      }
    },
    {
      "enabled": true,
      "name": "deletet2",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "delete": {
              "daysAfterCreationGreaterThan": 0
            }
          }
        },
        "filters": {
          "blobIndexMatch": [
            {
              "name": "frp",
              "op": "==",
              "value": "true"
            }
          ],
          "blobTypes": [
            "blockBlob"
          ],
          "prefixMatch": [
            "blob1/t2"
          ]
        }
      }
    }
  ]
}
Note: I noticed that it can take more than 24 hours for a rule to be processed, so files may be deleted somewhat later than the configured day count suggests. In my case, since I set the rule to 0 days, they were deleted within a day.
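As a side note on the index-tag step above, the tag the deletet2 rule filters on can also be set from the CLI. A minimal sketch with placeholder names; note that the blob tag commands have historically required the storage-blob-preview extension:

# Tag an existing blob so the deletet2 rule will match it (placeholder account name).
az storage blob tag set \
  --account-name <storage-account> \
  --container-name blob1 \
  --name t2/somefile.txt \
  --tags frp=true \
  --auth-mode login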

Azure Storage - Data Lake Lifecycle Management question

I'm using a lifecycle management policy to move the contents of a container from the cool access tier to archive. I'm trying the following policy in the hope that it will move all files in that container to the archive tier after one day, but it's not working. I've set the selection criterion to "after one day of not being used".
Here's the JSON code:
{ "rules": [ { "enabled": true, "name": "move to cool storage", "type": "Lifecycle", "definition": { "actions": { "baseBlob": { "tierToArchive": { "daysAfterLastAccessTimeGreaterThan": 1 } } }, "filters": { "blobTypes": [ "blockBlob" ], "prefixMatch": [ "/amcont1" ] } } } ] }
I'm checking the container after one day and after two days and nothing has changed; the access tier still remains cool instead of archive. Is there a way to test this interactively?
You also need to start the "prefixMatch" value with the container name itself (no leading slash) for the rule to take effect.
Try the rule given below and check the result; change the values to match your storage account. This should work fine.
{
  "enabled": true,
  "name": "last-accessed-one-day-ago",
  "type": "Lifecycle",
  "definition": {
    "actions": {
      "baseBlob": {
        "tierToArchive": {
          "daysAfterLastAccessTimeGreaterThan": 1
        }
      }
    },
    "filters": {
      "blobTypes": [
        "blockBlob"
      ],
      "prefixMatch": [
        "mycontainer/logfile"
      ]
    }
  }
}
Refer: Move data based on last accessed time
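One thing worth checking that the question doesn't show: daysAfterLastAccessTimeGreaterThan only has an effect when last access time tracking is enabled on the storage account. A sketch of enabling it from the CLI, with placeholder resource names:

# Enable last access time tracking; without it, rules keyed on
# daysAfterLastAccessTimeGreaterThan have no timestamp to act on.
az storage account blob-service-properties update \
  --resource-group <resource-group> \
  --account-name <storage-account> \
  --enable-last-access-tracking true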

Limiting `declarative_net_request` rules to subdomain in Chrome Extension manifest version 3

In Manifest V3, Chrome's team introduced declarativeNetRequest. We can't seem to make sure those rules are applied only to sites of a certain domain:
[
  {
    "id": 1,
    "priority": 1,
    "action": {
      "type": "block"
    },
    "condition": {
      "urlFilter": "main.js",
      "resourceTypes": ["script"]
    }
  }
]
As we have defined them, those rules fire on every webpage you visit. Can we filter the rules by the host of the page they run on, rather than by the destination of the script? We couldn't find any indication of this in the docs or the examples.
Needless to say, any off-docs improvisation in the manifest.json failed to leave a mark. For instance:
{
  ...
  "declarative_net_request": {
    "matches": ["https://SUB_DOMAIN_NAME.domain.com/*"], <====== this
    "rule_resources": [
      {
        "id": "ruleset_1",
        "enabled": true,
        "path": "rules.json"
      }
    ]
  }
}

Unable to add new rule in Storage Management Policy on Azure

I am using the az CLI to add a storage management-policy that deletes blobs in the containers whose last modification is more than 7 days old.
Here is the policy.json file:
"rules": [
{
"name": "expirationRule1",
"enabled": true,
"type": "Lifecycle",
"definition": {
"filters": {
"blobTypes": [ "blockBlob" ],
"prefixMatch": [ "container1" ]
},
"actions": {
"baseBlob": {
"delete": { "daysAfterModificationGreaterThan": 7 }
}
}
}
}
]
}
I create this lifecycle management policy using the command:
az storage account management-policy create --account-name <my_acc_name> --policy <policy_file> --resource-group <my_res_group>
This step succeeds. Now I want to add another rule for a different container. The policy.json remains the same except that prefixMatch is changed to container2 and name to expirationRule2. When I apply this new policy with the same command as above, I can no longer see the older rule; only the new one is present.
Here are the steps:
$ az storage account management-policy create --account-name resacc1 --resource-group resgrp1 --policy /tmp/azure_lifecycle.json
{
  "id": "<some_id_here>",
  "lastModifiedTime": "2021-05-10T10:10:32.261245+00:00",
  "name": "DefaultManagementPolicy",
  "policy": {
    "rules": [
      {
        "definition": {
          "actions": {
            "baseBlob": {
              "delete": {
                "daysAfterLastAccessTimeGreaterThan": null,
                "daysAfterModificationGreaterThan": 7.0
              },
              "enableAutoTierToHotFromCool": null,
              "tierToArchive": null,
              "tierToCool": null
            },
            "snapshot": null,
            "version": null
          },
          "filters": {
            "blobIndexMatch": null,
            "blobTypes": [
              "blockBlob"
            ],
            "prefixMatch": [
              "container1"  << container1 is prefixMatch
            ]
          }
        },
        "enabled": true,
        "name": "expirationRule1",
        "type": "Lifecycle"
      }
    ]
  },
  "resourceGroup": "resgrp1",
  "type": "Microsoft.Storage/storageAccounts/managementPolicies"
}
Now I add the new policy with container2:
$ az storage account management-policy create --account-name resacc1 --resource-group resgrp1 --policy /tmp/azure_lifecycle.json
{
  "id": "<some_id_here>",
  "lastModifiedTime": "2021-05-10T10:11:54.622184+00:00",
  "name": "DefaultManagementPolicy",
  "policy": {
    "rules": [
      {
        "definition": {
          "actions": {
            "baseBlob": {
              "delete": {
                "daysAfterLastAccessTimeGreaterThan": null,
                "daysAfterModificationGreaterThan": 7.0
              },
              "enableAutoTierToHotFromCool": null,
              "tierToArchive": null,
              "tierToCool": null
            },
            "snapshot": null,
            "version": null
          },
          "filters": {
            "blobIndexMatch": null,
            "blobTypes": [
              "blockBlob"
            ],
            "prefixMatch": [
              "container2"  << container2 in prefixMatch
            ]
          }
        },
        "enabled": true,
        "name": "expirationRule2",
        "type": "Lifecycle"
      }
    ]
  },
  "resourceGroup": "resgrp1",
  "type": "Microsoft.Storage/storageAccounts/managementPolicies"
}
Now, after applying the two rules, the show command reveals only a single rule applied on the storage account.
$ az storage account management-policy show --account-name resacc1 --resource-group resgrp1
{
  "id": "<some_id_here>",
  "lastModifiedTime": "2021-05-10T10:11:54.622184+00:00",
  "name": "DefaultManagementPolicy",
  "policy": {
    "rules": [
      {
        "definition": {
          "actions": {
            "baseBlob": {
              "delete": {
                "daysAfterLastAccessTimeGreaterThan": null,
                "daysAfterModificationGreaterThan": 7.0
              },
              "enableAutoTierToHotFromCool": null,
              "tierToArchive": null,
              "tierToCool": null
            },
            "snapshot": null,
            "version": null
          },
          "filters": {
            "blobIndexMatch": null,
            "blobTypes": [
              "blockBlob"
            ],
            "prefixMatch": [
              "container2"
            ]
          }
        },
        "enabled": true,
        "name": "expirationRule2",
        "type": "Lifecycle"
      }
    ]
  },
  "resourceGroup": "resgrp1",
  "type": "Microsoft.Storage/storageAccounts/managementPolicies"
}
Can someone please help me understand how to append a new rule to an already existing policy, or how to create a new policy altogether, so that both rules apply to the containers in my storage account?
Looking at the Az CLI documentation, the only options available to you are creating a new policy or updating an existing policy (i.e. replacing a policy completely). No command is available to add a rule to an existing policy.
You're seeing this behavior because each create call updates the entire policy, overriding the previous policy contents.
What you would need to do is modify your policy.json file to include both rules and then update the policy on the storage account. Or you could get the existing policy using az storage account management-policy show, parse the policy JSON, add the new rule, and then apply the merged policy, as sketched below.
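A minimal sketch of that second approach, assuming jq is available and reusing the account and resource-group names from the question:

# Dump the current policy rules to a file.
az storage account management-policy show \
  --account-name resacc1 --resource-group resgrp1 \
  --query policy > existing.json

# Append the second rule to the rules array (rule body mirrors the question's policy.json).
jq '.rules += [{
  "name": "expirationRule2",
  "enabled": true,
  "type": "Lifecycle",
  "definition": {
    "filters": { "blobTypes": ["blockBlob"], "prefixMatch": ["container2"] },
    "actions": { "baseBlob": { "delete": { "daysAfterModificationGreaterThan": 7 } } }
  }
}]' existing.json > merged.json

# Re-create the policy with both rules; create replaces the whole policy,
# which is exactly what we want here.
az storage account management-policy create \
  --account-name resacc1 --resource-group resgrp1 \
  --policy merged.json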

ARM: Solution Template: Multiple VMs: how to configure content of BackendAddresses array without hard coding ips

I tried to RTFM in the usual place, but none of the examples in 101-application-gateway exhibited this pattern. I have a Solution Template ARM template that deploys N VMs. I need to cause an Azure Application Gateway to be provisioned and configured with BackendAddresses that refer to those N VMs. I am familiar with the copy and copyIndex() pattern, but I don't see how to apply it here. The examples have code like:
"backendAddressPools": [
{
"name": "appGatewayBackendPool",
"properties": {
"BackendAddresses": [
{
"IpAddress": "[parameters('backendIpAddress1')]"
},
{
"IpAddress": "[parameters('backendIpAddress2')]"
}
]
}
}
],
but I would like to do something like:
"backendAddressPools": [
{
"name": "appGatewayBackendPool",
"properties": {
"BackendAddresses": [
{
"IpAddress": "[concat(variables('managedVMPrefix'), copyIndex(),variables('nicName'))]"
}
]
}
}
],
I'm sure that won't work because I need N entries in the BackendAddresses array.
Any ideas how to do this?
Thanks,
Ed
After looking at the reference docs for the copy facility, I realized this is the correct syntax:
"backendAddressPools": [
{
"name": "appGatewayBackendPool",
"properties": {
"copy": [{
"name": "BackendAddresses",
"count": "[parameters('numberOfInstances')]",
"input": {
"IpAddress": "[reference(concat(variables('managedVMPrefix'), copyIndex('BackendAddresses', 1), variables('publicIPAddressName'))).properties.ipAddress]"
}
}]
}
}
]
Given parameters('numberOfInstances') is 3, concat(variables('managedVMPrefix'), copyIndex('BackendAddresses', 1), variables('publicIPAddressName')) resolves to mspVM1publicIp, mspVM2publicIp, mspVM3publicIp; each of those names is then passed through the reference() function to yield 10.0.1.10, 10.0.1.11, 10.0.1.12, producing the following output:
"backendAddressPools": [
{
"name": "appGatewayBackendPool",
"properties": {
"BackendAddresses": [
{
"IpAddress": "[10.0.1.10]"
},
{
"IpAddress": "[10.0.1.11]"
},
{
"IpAddress": "[10.0.1.12]"
}
]
}
}
I must say I think the ARM template syntax is terribly difficult to work with, understand, and maintain, but maybe it gets easier with more experience.
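One small aid while iterating: the template can be validated, and its resolved resources previewed, from the CLI before a real deployment. A sketch with hypothetical file and resource-group names:

# Validate the template and parameters without deploying (hypothetical names).
az deployment group validate \
  --resource-group myResourceGroup \
  --template-file azuredeploy.json \
  --parameters numberOfInstances=3

# Or preview the resolved resources, including the copied BackendAddresses array.
az deployment group what-if \
  --resource-group myResourceGroup \
  --template-file azuredeploy.json \
  --parameters numberOfInstances=3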
