Azure CLI SQL DB Restore time format

I am writing a PowerShell script using the Azure CLI to do an Azure SQL database restore. This is my script so far:
az login
$AzureSubscription = "SubscriptionName"
az account set --subscription $AzureSubscription
$RGName = "ResourceGroupName"
$SrvName = "AzureSQLServerName"
$RestoreDateTime = (Get-Date).ToUniversalTime().AddHours(-1).ToString()
$RestoreDateTimeString = (Get-Date).ToUniversalTime().AddHours(-1).ToString("yyyy-MM-dd_HH:mm")
$RestoreName = $SrvName + "_" + $RestoreDateTimeString
az sql db restore --dest-name $RestoreName --resource-group $RGName --server $SrvName --name $SrvName --time = $RestoreDateTime
When I run this, I get the following error:
az: error: unrecognized arguments: 7/10/2019 10:39:21 AM
usage: az [-h] [--verbose] [--debug]
[--output {json,jsonc,table,tsv,yaml,none}] [--query JMESPATH]
{sql} ...
I have tried a variety of date-time formats, but I can't seem to get any of them to work. Is there a specific format that is needed? Should I be passing a different value into --time? Any help would be appreciated.

As far as I can tell, the --time parameter wants the datetime formatted as the 'Sortable date/time pattern' (yyyy-MM-ddTHH:mm:ss). Note also the stray = after --time in your call: az consumes the = as the value of --time and then treats the datetime string as an unrecognized extra argument, which is exactly the error you're seeing.
This should do it:
$RestoreDateTime = (Get-Date).ToUniversalTime().AddHours(-1)
$RestoreDateTimeString = '{0:yyyy-MM-dd_HH:mm}' -f $RestoreDateTime
$RestoreName = '{0}_{1}' -f $SrvName, $RestoreDateTimeString
# format the datetime as Sortable date/time pattern 'yyyy-MM-ddTHH:mm:ss'
# see: https://learn.microsoft.com/en-us/dotnet/standard/base-types/standard-date-and-time-format-strings
$azRestoreTime = '{0:s}' -f $RestoreDateTime
az sql db restore --dest-name $RestoreName --resource-group $RGName --server $SrvName --name $SrvName --time $azRestoreTime
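If you want to confirm the restore kicked off, you could also query the new database afterwards. A quick optional check (not part of the fix; assumes the restore request was accepted):
# optional sanity check: the restored database should show up on the server
az sql db show --resource-group $RGName --server $SrvName --name $RestoreName --query status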
Hope that helps


How to stop all running Step Functions of a specific state machine?

I accidentally started very many step functions and now wish to terminate all of them.
Any smart ways to do this using the CLI or web console?
OK, let's do this using the CLI.
You can stop an execution using the following:
aws stepfunctions stop-execution \
--execution-arn <STEP FUNCTION EXECUTION ARN>
But since I started way too many executions, it's helpful to be able to list all running executions of a state machine:
aws stepfunctions list-executions \
--state-machine-arn <STEP FUNCTION ARN> \
--status-filter RUNNING \
--output text
Next, narrow the output so that it lists only the execution ARNs, one per line:
aws stepfunctions list-executions \
--state-machine-arn <STEP FUNCTION ARN> \
--status-filter RUNNING \
--query "executions[*].{executionArn:executionArn}" \
--output text
Now, we put this together into one command using xargs:
aws stepfunctions list-executions \
--state-machine-arn <STEP FUNCTION ARN> \
--status-filter RUNNING \
--query "executions[*].{executionArn:executionArn}" \
--output text | \
xargs -I {} aws stepfunctions stop-execution \
--execution-arn {}
Now all running executions should be shut down. Make sure you do this with care so that you don't mess up production!
On that note, if you use aws-vault to minimize that very risk, the command above would look something like this:
aws-vault exec test-env -- aws stepfunctions list-executions \
--state-machine-arn <STEP FUNCTION ARN> \
--status-filter RUNNING \
--query "executions[*].{executionArn:executionArn}" \
--output text | \
xargs -I {} aws-vault exec test-env -- aws stepfunctions stop-execution \
--execution-arn {}
For me, xargs was giving issues because my execution ARNs were quite long.
aws stepfunctions list-executions \
--state-machine-arn <ARN> \
--status-filter RUNNING \
--query "executions[*].{executionArn:executionArn}" \
--output text | \
while read -r line;
do aws stepfunctions stop-execution --execution-arn "$line"
done
This did the trick for me. Thanks to @Pål Brattberg
For some reason the loop got stuck after each iteration on my Mac; appending the output to a file (>> out.t) solved it:
aws stepfunctions list-executions \
--state-machine-arn arn:aws:states:us-east-1:322348515048:stateMachine:workflow-dev-acknowledge-Awaiter \
--status-filter RUNNING \
--query "executions[*].{executionArn:executionArn}" \
--output text | \
xargs -I {} aws stepfunctions stop-execution \
--execution-arn {} >> out.t
I had 45,000 executions, so stopping them one by one using Pål Brattberg's answer was taking a long time, so I wrote a PowerShell script which runs the stop-execution command in parallel:
$ExecutionBlock = {
    Param([string] $StepFunctionExecutionArn)
    aws stepfunctions stop-execution --execution-arn $StepFunctionExecutionArn
}
if ($(Get-Job -State Running).count -gt 1) {
    Write-Host "There are $($(Get-Job -State Running).count) jobs already running"
}
else {
    Write-Host 'Remove all existing jobs.'
    Get-Job | Remove-Job
    $StateMachineArn = "<Step Function ARN>"
    $StateMachineRegion = "<Step Function Region>"
    Write-Host "Getting Step Function Execution ARNs for state machine with arn = '$StateMachineArn' in region = '$StateMachineRegion'."
    [array]$StepFunctionExecutionArns = aws stepfunctions list-executions --state-machine-arn $StateMachineArn --status-filter RUNNING --query "executions[*].{executionArn:executionArn}" --output text --region $StateMachineRegion
    $MaxThreads = 64
    Write-Host "Starting the jobs. Max $MaxThreads jobs running simultaneously."
    foreach ($StepFunctionExecutionArn in $StepFunctionExecutionArns) {
        Write-Host "Starting execution of job with input '$StepFunctionExecutionArn'."
        While ($(Get-Job -State Running).count -ge $MaxThreads) { Start-Sleep -Milliseconds 15 }
        Start-Job -ScriptBlock $ExecutionBlock -ArgumentList $StepFunctionExecutionArn
        Write-Host "Job with input '$StepFunctionExecutionArn' started."
    }
    Write-Host 'Waiting for jobs to finish.'
    Get-Job -State Running | Wait-Job
    Write-Host 'Writing information from each job.'
    foreach ($job in Get-Job) { Write-Host (Receive-Job -Id $job.Id) }
    Write-Host "Cleaning up all jobs."
    Get-Job | Remove-Job
}
To use the script, replace <Step Function ARN> and <Step Function Region> with the correct values.
My results were 5-6 times faster than serial execution. Tweaking $MaxThreads may give better results.
Notes:
You can check how many executions are still running in the web console by filtering on Running and checking the number next to Executions (you may need to click the Load More button until you get all of them), or from the CLI as shown after these notes.
You may want to change the logging. For example, add the number of executions in the $StepFunctionExecutionArns array.
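If you'd rather count from the command line, something like this should work (a quick sketch; same placeholders as above, and length() is standard JMESPath):
# count executions still in the RUNNING state
aws stepfunctions list-executions --state-machine-arn <Step Function ARN> --status-filter RUNNING --query "length(executions)" --output text --region <Step Function Region>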

Get-AzureRmStorageAccount, Dig into Container files and get "Modified" property

I need to get all Storage Accounts whose last modified date is more than 6 months ago, with a PS script.
I didn't find any cmdlet or function which could provide such information. I thought it would be enough to sort by 'LastModifiedTime', but when I dug deeper I saw that I have a lot of newer files inside the containers, each with a "Modified" property. Question is: how can I access these files with PowerShell? Any cmdlet, function, etc.?
Here is what I used to get SA before:
function check_stores {
    $stores = Get-AzureRmResource -ODataQuery "`$filter=resourcetype eq 'Microsoft.Storage/storageAccounts'"
    $x = (Get-Date).AddDays(-180)
    foreach ($store in $stores) {
        $storename = $store.Name
        $dates = (Get-AzureRmStorageContainer -ResourceGroupName $store.ResourceGroupName -StorageAccountName $store.Name).LastModifiedTime
        if (!($dates -ge $x)) {
            "Storage Account Name: $storename"
        }
    }
}
check_stores
Not sure if you just want to get the blobs whose LastModifiedTime (aka LMT) is within 180 days.
If so, you don't need to check the container LMT, since it is not related to the blob's last modified time (the container LMT only reflects container property modifications).
The following script works with the pipeline. If you don't need to check the container LMT, just remove that check:
$x = (Get-Date).AddDays(-180)
# get all storage accounts of the current subscription
$accounts = Get-AzStorageAccount
foreach ($a in $accounts)
{
    # get containers of the storage account with LMT within 180 days
    $containers = $a | Get-AzStorageContainer | ? { $_.LastModified -ge $x }
    # if you don't need the container LMT check, use: $containers = $a | Get-AzStorageContainer
    # get blobs of those containers with LMT within 180 days
    $blobs = $containers | Get-AzStorageBlob | ? { $_.LastModified -ge $x }
    # add code to handle the blobs
    echo $blobs
}
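If the end goal is still the original one (flag storage accounts where nothing was modified in the last 180 days), a minimal sketch building on the above (untested, current subscription only) could look like this:
$x = (Get-Date).AddDays(-180)
foreach ($a in Get-AzStorageAccount) {
    # any blob modified within the window means the account is still active
    $recent = $a | Get-AzStorageContainer | Get-AzStorageBlob | ? { $_.LastModified -ge $x }
    if (!$recent) { "Storage Account Name: $($a.StorageAccountName)" }
}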

ACR - delete only old images - variable reference not valid

I'm trying to clean up old images in my ACR. It has 8 repositories, so first I want to test it on only one of them... The complicated thing about it is that I need to keep the last 4 images created. So I have this script:
$acrName = ACRttestt
$repo = az acr repository list --name $acrName --top 1
$repo | Convertfrom-json | Foreach-Object {
$imageName = $_
(az acr repository show-tags -n $acrName --repository $_ |
convertfrom-json |) Select-Object -SkipLast 4 | Foreach-Object {
az acr repository delete -n $acrName --image "$imageName:$_"
}
}
But I'm receiving the following error:
At line:9 char:58
+ ... az acr repository delete -n $acrName --image "$imageName:$_"
+                                                   ~~~~~~~~~~~
Variable reference is not valid. ':' was not followed by a valid variable name character. Consider using ${} to delimit the name.
Any ideas?
Thanks in advance
You need to change "$imageName:$_" into "${imageName}:$_": PowerShell parses $imageName: as a scoped variable reference, so the name must be delimited with ${}. The ACR name also needs quotes, and the stray | before the closing parenthesis has to go. Then the script will look like below:
$acrName = "ACRttestt"
$repo = az acr repository list --name $acrName --top 1
$repo | ConvertFrom-Json | ForEach-Object {
    $imageName = $_
    (az acr repository show-tags -n $acrName --repository $_ |
        ConvertFrom-Json) | Select-Object -SkipLast 4 | ForEach-Object {
        az acr repository delete -n $acrName --image "${imageName}:$_"
    }
}
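Once the test run on one repository looks good, extending it to all 8 is mostly a matter of dropping --top 1. A sketch (not from the original answer; assumes your CLI supports --orderby, which makes "last 4 created" explicit since show-tags sorts alphabetically by default, and --yes to skip the per-image confirmation prompt):
# sketch: apply the same retention logic to every repository in the registry
az acr repository list --name $acrName | ConvertFrom-Json | ForEach-Object {
    $imageName = $_
    az acr repository show-tags -n $acrName --repository $_ --orderby time_asc |
        ConvertFrom-Json | Select-Object -SkipLast 4 | ForEach-Object {
            az acr repository delete -n $acrName --image "${imageName}:$_" --yes
        }
}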

Copy container by Azure CLI and wait for result

I tried to find a simple way to copy a container from one storage account to another asynchronously via the Azure CLI, something that can be done by azcopy. I don't have azcopy installed on my machine, but the Azure CLI is.
Question: I understand I need to copy one blob after the other. How do I check that the copy operation is finished?
The following kind of works, but calling az storage blob show one by one takes a very long time (minutes).
$backup = 'somecontainer'
$exists = (az storage container exists --name $backup --account-name an --account-key ak --output tsv) -match 'true'
if (!$exists) {
    az storage container create --name $backup --account-name mt --account-key mk
}
$blobs = az storage blob list --container-name $backup --account-name an --account-key ak | ConvertFrom-Json
# copy one by one
$blobs.name | % {
    $name = $_
    az storage blob copy start --destination-blob $name --destination-container $backup --source-blob $name --source-container $backup --account-name mt --account-key mk --source-account-name an --source-account-key ak
}
# check operation status
$results = $blobs.name | % {
    az storage blob show --container-name $backup --name $_ --account-name mt --account-key mk | ConvertFrom-Json
}
# still unfinished copy operations:
$results | ? { !($_.properties.copy.completiontime) } | % { $_.name }
@stej As @GeorgeChen mentioned, you can use az storage blob copy start-batch:
az storage blob copy start-batch --account-key 00000000 --account-name MyAccount --destination-container MyDestinationContainer --source-account-key MySourceKey --source-account-name MySourceAccount --source-container MySourceContainer
Here is the documentation link:
https://learn.microsoft.com/en-us/cli/azure/storage/blob/copy?view=azure-cli-latest#az-storage-blob-copy-start-batch
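For the "how do I check it finished" part of the question, one listing call per container should beat calling az storage blob show per blob. A sketch (untested; --include c asks for copy metadata to be included in the listing):
# list destination blobs whose server-side copy is still pending, in a single call
$pending = az storage blob list --container-name $backup --account-name mt --account-key mk --include c --query "[?properties.copy.status=='pending'].name" | ConvertFrom-Json
if (!$pending) { 'all copies finished' } else { $pending }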

How to read txt/csv line by line using PowerShell

I have a couple of txt/csv files, and I am trying to feed each line in the files into my API commands in a loop.
e.g. servers.txt - the first file contains servers and resource groups:
server1, rg1
server2, rg2
server3, rg3
etc.
ips.txt - another file contains rules & IPs:
rule1, startip1, endip1
rule2, startip2, endip2
rule3, startip3, endip3
etc.
The issue is: how do I set up PowerShell to read each line in servers.txt and also each line in ips.txt within the same loop?
I had something like this before, but it doesn't seem to work well. Any thoughts?
$list = Get-Content "servers.txt"
foreach ($data in $list) {
    $server_name, $rg = $data -split ',' -replace '^\s*|\s*$'
    Write-Host "Checking if SQL server belongs in subscription"
    $check = $(az sql server list -g $rg --query "[?name == '$server_name'].name" -o tsv)
    Write-Host $check
    # Get current rules and redirect output to file
    az sql server firewall-rule list --resource-group "$rg" --server "$server_name" --output table | Export-Csv -NoTypeInformation current-natrules.csv
    $new_rules = Get-Content "ips.txt"
    foreach ($data in $new_rules) {
        $rule_name, $start_ip, $end_ip = $data -split ',' -replace '^\s*|\s*$'
        # Create rule with new configs
        Write-Host "Assigning new firewall rules..."
        az sql server firewall-rule create --name "$rule_name" --server "$server_name" --resource-group "$rg" --start-ip-address "$start_ip" --end-ip-address "$end_ip" --output table
    }
    # Validating
    Write-Host "Printing new NAT rules to file..."
    az sql server firewall-rule list --resource-group "$rg" --server "$server_name" --output table | Export-Csv -NoTypeInformation new-natrules.csv
}
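A suggestion not in the original post: since both files are comma-separated, Import-Csv with explicit headers would give you named fields instead of manual splitting. A sketch (assumes the file layouts shown above; the Trim() calls handle the space after each comma):
# sketch: read both files with Import-Csv instead of -split, then nest the loops
$servers = Import-Csv "servers.txt" -Header server_name, rg
$rules = Import-Csv "ips.txt" -Header rule_name, start_ip, end_ip
foreach ($s in $servers) {
    foreach ($r in $rules) {
        az sql server firewall-rule create --name $r.rule_name.Trim() --server $s.server_name.Trim() --resource-group $s.rg.Trim() --start-ip-address $r.start_ip.Trim() --end-ip-address $r.end_ip.Trim() --output table
    }
}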
