Azure Blob Storage - Define Automation Task in PowerShell

I have defined a scheduled task in the Azure portal to delete blobs older than x days. Now I need to script this in PowerShell, but I couldn't find any information about it.
At this moment (February 2023), Azure Blob Storage automation tasks are in preview.
Does anybody know if defining automation tasks in PowerShell is possible at this moment? Thanks!

You can use an Azure Automation runbook for this.
The following PowerShell deletes blobs older than 10 days ($ak is the storage account key; the values are placeholders):
$a = "<storage account name>"
$ak = "<storage account key>"
$context = New-AzStorageContext -StorageAccountName $a -StorageAccountKey $ak
# List blobs older than 10 days and delete them
Get-AzStorageBlob -Container "<container name>" -Context $context |
    Where-Object { $_.LastModified -le (Get-Date).AddDays(-10) } |
    Remove-AzStorageBlob
If you don't want to name a specific container, you can loop over all of them:
$a = "<storage account name>"
$ak = "<storage account key>"
$context = New-AzStorageContext -StorageAccountName $a -StorageAccountKey $ak
$containers = Get-AzStorageContainer -Context $context
foreach ($con in $containers)
{
    # -Container expects the container name, not the container object
    Get-AzStorageBlob -Container $con.Name -Context $context |
        Where-Object { $_.LastModified -le (Get-Date).AddDays(-10) } |
        Remove-AzStorageBlob
}
You can also restrict the delete to a specific type of blob:
Get-AzStorageBlob -Container "<container name>" -Blob *.txt -Context $context |
    Where-Object { $_.LastModified -le (Get-Date).AddDays(-10) } |
    Remove-AzStorageBlob
In place of txt you can use jpg, etc. Here AddDays(-10) means 10 days old; substitute whatever retention period you need.
Now create an Automation account and create a PowerShell runbook in it. Paste the code above into the runbook and schedule it. This way you get the equivalent of the portal's automation task in PowerShell.
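For the runbook body itself, note that the job must authenticate before the Az.Storage cmdlets will work. A minimal sketch of a complete runbook, assuming the Automation account has a system-assigned managed identity with data access to the storage account (all names are placeholders):
# Authenticate as the Automation account's managed identity
Connect-AzAccount -Identity
$context = New-AzStorageContext -StorageAccountName "<storage account name>" -UseConnectedAccount
# Delete blobs older than 10 days
Get-AzStorageBlob -Container "<container name>" -Context $context |
    Where-Object { $_.LastModified -le (Get-Date).AddDays(-10) } |
    Remove-AzStorageBlob
Using the managed identity avoids embedding the account key in the runbook; granting it the Storage Blob Data Contributor role on the storage account should be sufficient.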

Related

How can I make a text file on Azure Blob Storage by using a PowerShell runbook?

I am using Azure and trying to make a text file on Azure Blob Storage with a PowerShell runbook, like this:
echo "test" > test.txt
But I cannot. I think Set-AzStorageBlobContent is the right cmdlet for this, but I have no idea which arguments should be specified.
What arguments should be added to the following command line?
$ctx = New-AzStorageContext -StorageAccountName "test123" -UseConnectedAccount
Set-AzStorageBlobContent -Container "test_files" -Context $ctx (what should I add here?)
You first need to create a local file, since the Set-AzStorageBlobContent command uploads from a file, e.g.:
Set-AzStorageBlobContent -Container "ContosoUpload" -File ".\PlanningData" -Blob "Planning2015" -Context $ctx
If you need to create temporary files as part of your runbook logic, you can use the Temp folder (that is, $env:TEMP) in the Azure sandbox for runbooks running in Azure. The only limitation is you cannot use more than 1 GB of disk space, which is the quota for each sandbox. When working with PowerShell workflows, this scenario can cause a problem because PowerShell workflows use checkpoints and the script could be retried in a different sandbox.
https://learn.microsoft.com/en-us/azure/automation/automation-runbook-execution#temporary-storage-in-a-sandbox
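Putting this together for the question's scenario, a minimal sketch (the account and container names are taken from the question; it assumes the runbook's connected account has data access to the storage account):
$ctx = New-AzStorageContext -StorageAccountName "test123" -UseConnectedAccount
# Write the content to a temp file in the sandbox, upload it, then clean up
$tempFile = Join-Path $env:TEMP "test.txt"
"test" | Out-File -FilePath $tempFile
Set-AzStorageBlobContent -Container "test_files" -File $tempFile -Blob "test.txt" -Context $ctx
Remove-Item $tempFile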

'Could not get the storage context' error when using Set-AzureStorageBlobContent in VSTS Azure PowerShell task

I am using an Azure PowerShell task in an Azure (VSTS) pipeline to try to upload a file to an Azure Classic container.
As part of setting up the Azure PowerShell task (ver 3.1.10), I added the Azure Classic subscription that this container lives in to the project's Service connections, and selected that subscription in the pipeline task.
When I execute the script, the logs show that the task is both setting and selecting the expected subscription.
However, if I don't explicitly (re-)create and pass in the AzureStorageContext to the Set-AzureStorageBlobContent function, the task fails with:
[error]Could not get the storage context. Please pass in a storage context or set the current storage context.
Is this expected behavior?
Is there any way around having to re-create and pass in the context when it appears that it already exists?
For example, is there an environment variable that might contain the context that was automatically created/selected that I can just pass in?
Update:
I suspect that if the Select-AzureSubscription call seen in the logs used the -Current switch, this would work as I'm expecting it to. However, since that command is run automatically with no way to configure it via the pipeline task, this isn't verifiable. Perhaps this needs to be a feature request?
Excerpts from script:
# Not passing in the context results in the error:
Set-AzureStorageBlobContent -File "$file" -Blob "$blobpath/$blah" -Container $blobname -Properties $properties -Force

# Passing in the context works:
$azureKey = Get-AzureStorageKey "$accountName"
$storagekey = [string]$azureKey.Primary
$context = New-AzureStorageContext "$accountName" -StorageAccountKey $storagekey
Set-AzureStorageBlobContent -File "$file" -Blob "$blobpath/$blah" -Container $blobname -Properties $properties -Context $context -Force
While I probably should just delete this question (upon discovering the "answer"), I'll share what I found after more debugging.
TL;DR: this was mostly me not grokking that an Azure subscription (context) does not correlate to an Azure storage (context).
Is this expected behavior?
Yes.
Simply having a currently set subscription does not mean there's a currently set storage context.
Come to find out, our company has multiple storage accounts in the subscription I was using.
Perhaps the cmdlet would succeed without an explicit context if the subscription had only one storage account. Maybe I will research that later.
Is there any way around having to re-create and pass in the context when it appears that it already exists?
No (perhaps because of the multiple storage accounts in the subscription).
I will have to specify/select the current storage context from the current subscription (as I did in the "Passing in the context works" part in my question).
Here's how I arrived at this:
First, I verified what (if anything) was actually being set as the current [subscription] context, and then explicitly (re-)set it. Running the command still failed, so it wasn't that the subscription wasn't being set (it was).
$current = (Get-AzureSubscription -Current).SubscriptionName
Write-Host "current subscription is $current"
$setCurrent = $false
Write-Host "setCurrent is $setCurrent"
$setCurrent = Select-AzureSubscription -Current -SubscriptionName "CDN Subscription" -PassThru
if ($setCurrent)
{
Write-Host "current set"
$current = (Get-AzureSubscription -Current).SubscriptionName
Write-Host "current subscription is $current"
}
else
{
Write-Host "current not set"
}
It then dawned on me that maybe 'subscription' did not equal 'storage'.
To verify that, I then ran the following:
$current = (Get-AzureSubscription -Current).SubscriptionName
Write-Host "current subscription is $current"
$table = Get-AzureStorageAccount | Format-Table -AutoSize -Property @{Label="Name";Expression={$_.StorageAccountName}},"Label","Location" | Out-String
Write-Host "$table"
The result: 4 storage accounts in the subscription. Ergo, I will need to specify the account I want to upload to.
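As an aside, the classic cmdlets also let you pin a current storage account on the subscription itself, which is what Set-AzureStorageBlobContent falls back to when no -Context is passed. A hedged sketch (the account name is a placeholder):
# Set the subscription's current storage account; subsequent storage cmdlets
# in the session use it when no -Context is supplied
Set-AzureSubscription -SubscriptionName "CDN Subscription" -CurrentStorageAccountName "<storage account name>"
Set-AzureStorageBlobContent -File "$file" -Blob "$blobpath/$blah" -Container $blobname -Properties $properties -Force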

Get container list in one Azure storage account and create the same in another storage account

Team,
Thanks in advance. I'm an IT pro and also learning PowerShell. I have two storage accounts, Storage A and Storage B, in different regions, and I want to create identical containers in the secondary storage account.
I've found the command below, which gets the list of all containers in the primary storage account, and I can then use a foreach loop to create the containers on secondary storage. But I want to make sure that if the secondary storage account already has a container by that name, my command skips it and moves on to create the next container.
Get-AzureStorageContainer -Context $storageAccountContext | ForEach-object { New-AzureStorageContainer -Name $_.Name -Context $Destinationcontext }
"VM" is incorrect argument against the "-Property" switch for the values we are getting from pipe(|).
Please use the correct name of the property. I have used "VM" above only as an example. VM is property in Esxi(VC) environment.
Please run the below command & check the output.
Get-AzureStorageContainer -Context $Destinationcontext
It should give you output something like below:
Property_ID Property_NAME Property_3
Value 1 Value_A Value_1A
Value 2 Value_B Value_2A
So, now the correct syntax should be:
$Vm_to_be_created|Select-object -Property "Property_ID"| ForEach-object { New-AzureStorageContainer -Name $_.Name -Context $Destinationcontext }
:)
Hope this helps!
# Contexts must be created with New-AzureStorageContext, not plain account-name strings
$storageAccountContext = New-AzureStorageContext -StorageAccountName "StorageAccountA" -StorageAccountKey $keyA
$Destinationcontext = New-AzureStorageContext -StorageAccountName "StorageAccountB" -StorageAccountKey $keyB
$Vm_in_A = @(Get-AzureStorageContainer -Context $storageAccountContext)
$Vm_in_b = @(Get-AzureStorageContainer -Context $Destinationcontext)
# Compare by Name and keep only containers present in A but missing from B ("<=" = reference side only)
$Vm_to_be_created = @(Compare-Object -ReferenceObject $Vm_in_A -DifferenceObject $Vm_in_b -Property Name | Where-Object { $_.SideIndicator -eq "<=" })
$Vm_to_be_created | ForEach-Object { New-AzureStorageContainer -Name $_.Name -Context $Destinationcontext }
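A simpler variant that skips the Compare-Object step entirely is to probe the destination for each container and only create the ones that are missing; a sketch, assuming the same two contexts as above:
Get-AzureStorageContainer -Context $storageAccountContext | ForEach-Object {
    # Returns nothing (without terminating) when the container does not exist yet
    if (-not (Get-AzureStorageContainer -Name $_.Name -Context $Destinationcontext -ErrorAction SilentlyContinue)) {
        New-AzureStorageContainer -Name $_.Name -Context $Destinationcontext
    }
}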

Azure can't delete storage account - "in use"

I've used Azure cli v2 to delete a resource group.
All resources get deleted except a single storage account.
There are no other resources in the subscription, no containers in the storage account, yet I get the "in use" error when I try to delete the storage account.
(there are 2 storage accounts now because I've managed to create this situation twice now - neither is deletable)
Steps taken so far:
I've confirmed there are no disks or images assigned to any VMs via the classic console, the arm console, and the az cli.
I've deleted VHDs I found in the storage account, then retried the storage account delete but get same in use error.
I've tried to delete via the cli as well as web console (both arm and classic)
According to the error message, you can use PowerShell to list all the VHDs in the storage account; here is the script:
Login-AzureRmAccount
$RGName = "jason"
$SAName = "jasondisks690"
$ConName = "vhds"
$Keylist = Get-AzureRmStorageAccountKey -ResourceGroupName $RGName -StorageAccountName $SAName
$Key = $Keylist[0].Value
$Ctx = New-AzureStorageContext -StorageAccountName $SAName -StorageAccountKey $Key
$List = Get-AzureStorageBlob -Blob *.vhd -Container $ConName -Context $Ctx
# Emit one object per blob; reusing a single mutable object would make
# every collected entry show the last blob's values
$List | ForEach-Object {
    [PSCustomObject]@{
        BlobName   = $_.Name
        LeaseState = $_.ICloudBlob.Properties.LeaseState
    }
}
Replace $RGName, $SAName and $ConName with your own values.
You can also check the storage account in the new portal and delete all of its containers.
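If the listing shows blobs in a Leased state, the lease is usually what keeps the account "in use"; a hedged sketch that breaks those leases (same $ConName and $Ctx as above) so the blobs, container, and account can then be deleted:
Get-AzureStorageBlob -Blob *.vhd -Container $ConName -Context $Ctx |
    Where-Object { $_.ICloudBlob.Properties.LeaseState -eq "Leased" } |
    ForEach-Object { $_.ICloudBlob.BreakLease() }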
Update:
Here is a workaround:
1. Create a new VM in the same resource group as the problematic storage account.
2. Add the drive to the same resource group, same region, etc.
3. After creation, delete the new VM, then delete the VHD container for the VM in the problematic storage account.
4. After this you should be able to remove the problematic storage account.

How to delete old files in an Azure container

I plan to back up my Azure VHD files by shutting down my VM and then copying the VHDs from the production container to a backup container. How can I automate deleting VHD files that are a week old in the backup container?
If you can accept using PowerShell, then this would do it for you. It registers a scheduled job to run daily and removes page blobs in the specified container (note that Register-ScheduledJob takes a trigger from New-JobTrigger, not New-ScheduledTaskTrigger):
$taskTrigger = New-JobTrigger -Daily -At "12:01 AM"
Register-ScheduledJob -Name DeleteMyOldFiles -Trigger $taskTrigger -ScriptBlock {
    # The job runs in a fresh session, so it must be able to authenticate and
    # resolve a storage context (e.g. imported publish settings, or an explicit
    # -Context on Get-AzureStorageBlob)
    $isOldDate = [DateTime]::UtcNow.AddDays(-7)
    Get-AzureStorageBlob -Container "[YOUR CONTAINER NAME]" |
        Where-Object { $_.LastModified.UtcDateTime -lt $isOldDate -and $_.BlobType -eq "PageBlob" } |
        Remove-AzureStorageBlob
}
This is something not available right out of the box. You would have to write some code yourself. Essentially the steps would be:
List all blobs in your backup container. The blob list returns each blob along with its properties, one of which is the last-modified date (in UTC).
Apply your logic to find blobs last modified "x" or more days ago, then delete those blobs.
A few other things:
You mentioned that your backup container contains VHDs, which are essentially page blobs. Listing blobs also returns the blob type, so you can further filter the list by blob type (= PageBlob).
As far as automating the process goes, you could write this as a PowerShell script and schedule it with Windows Scheduler, or, if you're comfortable writing node.js, implement the same logic there and use the Windows Azure Mobile Services scheduler.
I found this answer when trying to delete a whole container. Using Rick's answer as a template, I came up with this -WhatIf trial run to determine which containers would be deleted:
$ctx = New-AzureStorageContext -StorageAccountName $AzureAccount `
-StorageAccountKey $AzureAccountKey
$isOldDate = [DateTime]::UtcNow.AddDays(-7)
Get-AzureStorageContainer -Context $ctx `
| Where-Object { $_.LastModified.UtcDateTime -lt $isOldDate } `
| Remove-AzureStorageContainer -WhatIf
Then when I was satisfied that the list should be deleted, I used -Force so as to not have to confirm every delete:
Get-AzureStorageContainer -Context $ctx `
| Where-Object { $_.LastModified.UtcDateTime -lt $isOldDate } `
| Remove-AzureStorageContainer -Force
