Unable to add a new rule to a Storage Management Policy on Azure

I am using the az CLI to add a storage management policy that deletes blobs in containers that have not been modified for more than 7 days.
Here is the policy.json file:
"rules": [
{
"name": "expirationRule1",
"enabled": true,
"type": "Lifecycle",
"definition": {
"filters": {
"blobTypes": [ "blockBlob" ],
"prefixMatch": [ "container1" ]
},
"actions": {
"baseBlob": {
"delete": { "daysAfterModificationGreaterThan": 7 }
}
}
}
}
]
}
I create this lifecycle management policy using the command:
az storage account management-policy create --account-name <my_acc_name> --policy <policy_file> --resource-group <my_res_group>
This step succeeds. Now I want to add another rule for a different container. The policy.json remains the same, with prefixMatch changed to container2 and name changed to expirationRule2. When I apply this new policy with the same command mentioned above, I can no longer see the older rule; only the new one is visible.
Here are the steps:
$az storage account management-policy create --account-name resacc1 --resource-group resgrp1 --policy /tmp/azure_lifecycle.json
{
"id": "<some_id_here>",
"lastModifiedTime": "2021-05-10T10:10:32.261245+00:00",
"name": "DefaultManagementPolicy",
"policy": {
"rules": [
{
"definition": {
"actions": {
"baseBlob": {
"delete": {
"daysAfterLastAccessTimeGreaterThan": null,
"daysAfterModificationGreaterThan": 7.0
},
"enableAutoTierToHotFromCool": null,
"tierToArchive": null,
"tierToCool": null
},
"snapshot": null,
"version": null
},
"filters": {
"blobIndexMatch": null,
"blobTypes": [
"blockBlob"
],
"prefixMatch": [
"container1" << container1 is prefixMatch
]
}
},
"enabled": true,
"name": "expirationRule1",
"type": "Lifecycle"
}
]
},
"resourceGroup": "resgrp1",
"type": "Microsoft.Storage/storageAccounts/managementPolicies"
}
Now I apply the new policy with container2:
$ az storage account management-policy create --account-name resacc1 --resource-group resgrp1 --policy /tmp/azure_lifecycle.json
{
"id": "<some_id_here>",
"lastModifiedTime": "2021-05-10T10:11:54.622184+00:00",
"name": "DefaultManagementPolicy",
"policy": {
"rules": [
{
"definition": {
"actions": {
"baseBlob": {
"delete": {
"daysAfterLastAccessTimeGreaterThan": null,
"daysAfterModificationGreaterThan": 7.0
},
"enableAutoTierToHotFromCool": null,
"tierToArchive": null,
"tierToCool": null
},
"snapshot": null,
"version": null
},
"filters": {
"blobIndexMatch": null,
"blobTypes": [
"blockBlob"
],
"prefixMatch": [
"container2" << container2 in prefixMatch
]
}
},
"enabled": true,
"name": "expirationRule2",
"type": "Lifecycle"
}
]
},
"resourceGroup": "resgrp1",
"type": "Microsoft.Storage/storageAccounts/managementPolicies"
}
Now, after applying the two rules, the show command reports only a single rule on the storage account.
$ az storage account management-policy show --account-name resacc1 --resource-group resgrp1
{
"id": "<some_id_here>",
"lastModifiedTime": "2021-05-10T10:11:54.622184+00:00",
"name": "DefaultManagementPolicy",
"policy": {
"rules": [
{
"definition": {
"actions": {
"baseBlob": {
"delete": {
"daysAfterLastAccessTimeGreaterThan": null,
"daysAfterModificationGreaterThan": 7.0
},
"enableAutoTierToHotFromCool": null,
"tierToArchive": null,
"tierToCool": null
},
"snapshot": null,
"version": null
},
"filters": {
"blobIndexMatch": null,
"blobTypes": [
"blockBlob"
],
"prefixMatch": [
"container2"
]
}
},
"enabled": true,
"name": "expirationRule2",
"type": "Lifecycle"
}
]
},
"resourceGroup": "resgrp1",
"type": "Microsoft.Storage/storageAccounts/managementPolicies"
}
How can I append a new rule to an already existing policy, or create a separate policy altogether, so that both rules are applied to the containers in my storage account?

Looking at the az CLI documentation, the only options available to you are creating a new policy or updating an existing policy (i.e. replacing the policy completely). No command is available to add a single rule to an existing policy.
You're seeing this behavior because each create call submits the entire policy, which overwrites the previous policy contents.
What you need to do is include both rules in your policy.json file and then update the policy on the storage account. Alternatively, fetch the existing policy using az storage account management-policy show, parse the policy JSON, append the new rule, and then update the policy using az storage account management-policy update.
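The parse-and-append step can be sketched in Python; this is a minimal illustration using the two rules from this question, and it assumes the surrounding az show/update calls are run as described above:

```python
import json

def append_rule(policy: dict, new_rule: dict) -> dict:
    """Append new_rule to policy["rules"], replacing any existing rule with the same name."""
    rules = [r for r in policy["rules"] if r["name"] != new_rule["name"]]
    rules.append(new_rule)
    return {"rules": rules}

# Policy currently on the account (the "policy" object returned by `az ... show`)
existing = {"rules": [
    {"name": "expirationRule1", "enabled": True, "type": "Lifecycle",
     "definition": {
         "filters": {"blobTypes": ["blockBlob"], "prefixMatch": ["container1"]},
         "actions": {"baseBlob": {"delete": {"daysAfterModificationGreaterThan": 7}}}}}
]}

# The second rule, targeting container2
new_rule = {"name": "expirationRule2", "enabled": True, "type": "Lifecycle",
            "definition": {
                "filters": {"blobTypes": ["blockBlob"], "prefixMatch": ["container2"]},
                "actions": {"baseBlob": {"delete": {"daysAfterModificationGreaterThan": 7}}}}}

merged = append_rule(existing, new_rule)
# Write this to policy.json, then run `az storage account management-policy update --policy @policy.json ...`
print(json.dumps(merged, indent=2))
```

Whether you then call create or update, the whole policy document is sent, so the merged file must contain every rule you want to keep.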

Related

How to delete blobs from blob storage using Lifecycle management

I have a storage account, and inside it a container "mycontainer" with two virtual folders, preview and final.
I want to configure a lifecycle rule that deletes all blobs from the preview folder created a day ago. Along with that, I want another rule that deletes all blobs from final created a day ago, but only if the blob has the index tag "candelete": "true".
When I configure the lifecycle rules, blobs get deleted from preview, but not from final.
My rules look like:
{
"rules": [
{
"enabled": true,
"name": "deletepreview",
"type": "Lifecycle",
"definition": {
"actions": {
"baseBlob": {
"delete": {
"daysAfterCreationGreaterThan": 1
}
}
},
"filters": {
"blobTypes": [
"blockBlob"
],
"prefixMatch": [
"mycontainer/preview"
]
}
}
},
{
"enabled": true,
"name": "deletefinal",
"type": "Lifecycle",
"definition": {
"actions": {
"baseBlob": {
"delete": {
"daysAfterCreationGreaterThan": 1
}
}
},
"filters": {
"blobIndexMatch": [
{
"name": "candelete",
"op": "==",
"value": "true"
}
],
"blobTypes": [
"blockBlob"
],
"prefixMatch": [
"mycontainer/final"
]
}
}
}
]
}
Blob Lifecycle Management
I have created a blob1 container containing t1 and t2 folders.
There are two ways to manage blob lifecycle: through the portal and through code. The following steps illustrate creating the rules through the portal; there is also a code view where the generated JSON can be edited directly.
In the storage account, go to Lifecycle management -> Add rule.
Give the rule name, rule scope (limit blobs with filters), blob type (block blobs), and blob subtype (base blob), then click Next.
Set the conditions as required; in this case: Base blobs were Created, days 0 (for demonstration), Then = Delete the blob.
In the next Filter set tab, give the blob prefix as container_name/folder_name and click Add.
For filters with an index tag, make sure to add the relevant blob index tag when creating the file; it can also be updated after creation under the file's properties.
Similarly, add another rule with the required details; in its Filter set tab give the appropriate blob prefix, add the key/value under Blob index match, then click Add.
The two rules are created as shown in the image.
In the Code view tab, the JSON for the created rules is generated automatically as follows:
{
"rules": [
{
"enabled": true,
"name": "deletet1",
"type": "Lifecycle",
"definition": {
"actions": {
"baseBlob": {
"delete": {
"daysAfterCreationGreaterThan": 0
}
}
},
"filters": {
"blobTypes": [
"blockBlob"
],
"prefixMatch": [
"blob1/t1"
]
}
}
},
{
"enabled": true,
"name": "deletet2",
"type": "Lifecycle",
"definition": {
"actions": {
"baseBlob": {
"delete": {
"daysAfterCreationGreaterThan": 0
}
}
},
"filters": {
"blobIndexMatch": [
{
"name": "frp",
"op": "==",
"value": "true"
}
],
"blobTypes": [
"blockBlob"
],
"prefixMatch": [
"blob1/t2"
]
}
}
}
]
}
Note: I noticed that it can take more than 24 hours for a lifecycle rule to be processed, so blobs are sometimes deleted later than expected. In my case, with the rule set to 0 days, they were deleted within a day.

Azure Storage - Data Lake Lifecycle Management question

I am using a lifecycle management policy to move the contents of a container from the cool access tier to archive. I'm trying the following policy in the hope that it will move all files in that container to the archive tier after one day, but it's not working. I've set the selection criterion to "after one day of not being used".
Here's the json code :
{ "rules": [ { "enabled": true, "name": "move to cool storage", "type": "Lifecycle", "definition": { "actions": { "baseBlob": { "tierToArchive": { "daysAfterLastAccessTimeGreaterThan": 1 } } }, "filters": { "blobTypes": [ "blockBlob" ], "prefixMatch": [ "/amcont1" ] } } } ] }
I've checked the container after one day and after two days, and nothing has changed; the access tier remains cool instead of archive. Is there a way to test this interactively?
The prefixMatch value needs to begin with the container name itself, without a leading slash, so "/amcont1" will not match anything. Note also that daysAfterLastAccessTimeGreaterThan only takes effect once last access time tracking is enabled on the storage account.
Try the rule given below and check the result, changing the values to match your storage account. This should work fine.
{
"enabled": true,
"name": "last-accessed-one-day-ago",
"type": "Lifecycle",
"definition": {
"actions": {
"baseBlob": {
"tierToArchive": {
"daysAfterLastAccessTimeGreaterThan": 1
}
}
},
"filters": {
"blobTypes": [
"blockBlob"
],
"prefixMatch": [
"mycontainer/logfile"
]
}
}
}
Refer: Move data based on last accessed time
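Note that the JSON above is a single rule object; the file passed to az storage account management-policy create via --policy needs a top-level "rules" array wrapping it. A minimal sketch of that wrapping, using the rule contents from the answer above:

```python
import json

# The single rule shown in the answer above
rule = {
    "enabled": True,
    "name": "last-accessed-one-day-ago",
    "type": "Lifecycle",
    "definition": {
        "actions": {"baseBlob": {"tierToArchive": {"daysAfterLastAccessTimeGreaterThan": 1}}},
        "filters": {"blobTypes": ["blockBlob"], "prefixMatch": ["mycontainer/logfile"]},
    },
}

# Wrap it in the top-level document shape the CLI expects
policy = {"rules": [rule]}
# Save as policy.json, then pass it with `--policy @policy.json`
print(json.dumps(policy, indent=2))
```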

Azure ADLS v2 - Lifecycle management logs

I'm running a policy that moves files to the archive access tier after one day.
Is there a log indicating which policy triggered the move and when/how it was executed?
Perhaps Log Analytics may be helpful in this question, what would the settings look like?
Please see the code below for the policies I'm testing with; I'm not sure whether the prefix with the leading slash is the one that works. I'm testing both forms since I was initially getting no results, and I also added one policy for modified files and one for accessed files. It works now, but I don't know which policy took effect.
Thanks for any answers you may provide.
{
"rules": [
{
"enabled": true,
"name": "move to archive storage amcont01 accessed",
"type": "Lifecycle",
"definition": {
"actions": {
"baseBlob": {
"tierToArchive": {
"daysAfterLastAccessTimeGreaterThan": 1
}
}
},
"filters": {
"blobTypes": [
"blockBlob"
],
"prefixMatch": [
"/amcont01",
"/amcont01/folder1",
"amcont01",
"amcont01/folder1"
]
}
}
},
{
"enabled": true,
"name": "move to archive rule amcont01 modified folder1",
"type": "Lifecycle",
"definition": {
"actions": {
"baseBlob": {
"tierToArchive": {
"daysAfterModificationGreaterThan": 1
}
}
},
"filters": {
"blobTypes": [
"blockBlob"
],
"prefixMatch": [
"amcont01",
"amcont01/folder1",
"/amcont01",
"/amcont01/folder1"
]
}
}
}
]
}

aws cli : s3api append lifecycle policy to existing ones

I have multiple lifecycle policies for a bucket; whenever I want to create a new one, I have to do the following things manually:
STEP 1:
Run:
aws s3api get-bucket-lifecycle-configuration --bucket my-bucket > lifecycle.json
STEP 2:
then edit the lifecycle.json and append the new policy
STEP 3:
Run:
aws s3api put-bucket-lifecycle-configuration --bucket <bucket_name> --lifecycle-configuration file://lifecycle.json
otherwise, it will replace the old policies.
Is there any way to append the new policy directly to the existing ones?
For Example:
My Existing policies:
{
"Rules": [
{
"Expiration": {
"Days": 365
},
"ID": "Policy 1",
"Filter": {
"Prefix": "dir1/dir2/code/"
},
"Status": "Enabled"
},
{
"Expiration": {
"Days": 90
},
"ID": "Policy 2",
"Filter": {
"Prefix": "Name/Address/code/"
},
"Status": "Enabled"
}
]
}
I want to add this policy:
{
"Rules": [
{
"Expiration": {
"Days": 20
},
"ID": "TEST_POLICY_09082021_ARANI2",
"Filter": {
"Prefix": "backup/ARANI2/files/"
},
"Status": "Enabled"
}
]
}
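The read-append-write portion of the three manual steps above can be scripted; here is a sketch in Python, using the example rules from this question. It assumes the get and put calls are still run via aws s3api as in steps 1 and 3:

```python
import json

def append_s3_rule(config: dict, new_rule: dict) -> dict:
    """Append new_rule to an S3 lifecycle configuration, replacing any rule with the same ID."""
    rules = [r for r in config.get("Rules", []) if r["ID"] != new_rule["ID"]]
    rules.append(new_rule)
    return {"Rules": rules}

# Existing configuration, as dumped by `aws s3api get-bucket-lifecycle-configuration` (step 1)
existing = {"Rules": [
    {"Expiration": {"Days": 365}, "ID": "Policy 1",
     "Filter": {"Prefix": "dir1/dir2/code/"}, "Status": "Enabled"},
    {"Expiration": {"Days": 90}, "ID": "Policy 2",
     "Filter": {"Prefix": "Name/Address/code/"}, "Status": "Enabled"},
]}

# The rule to add
new_rule = {"Expiration": {"Days": 20}, "ID": "TEST_POLICY_09082021_ARANI2",
            "Filter": {"Prefix": "backup/ARANI2/files/"}, "Status": "Enabled"}

merged = append_s3_rule(existing, new_rule)
# Write this to lifecycle.json, then run step 3 (`aws s3api put-bucket-lifecycle-configuration`)
print(json.dumps(merged, indent=2))
```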

How to install AKS with Calico enabled

This definition clearly mentions that you can use the networkPolicy property as part of the networkProfile and set it to Calico, but that doesn't work. AKS creation just times out, with all the nodes in a Not Ready state.
You need to enable the underlying provider feature:
az feature list --query "[?contains(name, 'Container')].{name:name, type:type}" # example to list all features
az feature register --name EnableNetworkPolicy --namespace Microsoft.ContainerService
az provider register -n Microsoft.ContainerService
After that you can just use the REST API / an ARM template to create AKS:
{
"location": "location1",
"tags": {
"tier": "production",
"archv2": ""
},
"properties": {
"kubernetesVersion": "1.12.4", // has to be 1.12.x, 1.11.x doesnt support calico AFAIK
"dnsPrefix": "dnsprefix1",
"agentPoolProfiles": [
{
"name": "nodepool1",
"count": 3,
"vmSize": "Standard_DS1_v2",
"osType": "Linux"
}
],
"linuxProfile": {
"adminUsername": "azureuser",
"ssh": {
"publicKeys": [
{
"keyData": "keydata"
}
]
}
},
"servicePrincipalProfile": {
"clientId": "clientid",
"secret": "secret"
},
"addonProfiles": {},
"enableRBAC": false,
"networkProfile": {
"networkPlugin": "azure",
"networkPolicy": "calico", // set policy here
"serviceCidr": "xxx",
"dnsServiceIP": "yyy",
"dockerBridgeCidr": "zzz"
}
}
}
P.S. Unfortunately, Helm doesn't seem to work at the time of writing (I suspect this is because kubectl port-forward, which Helm relies on, doesn't work either).
