I am using a Terraform module to create a storage account, and once the storage account is created I want to expose the access key as an output variable:
output "storage_account_primary_access_key" {
value = data.azurerm_storage_account.storage.primary_access_key
}
Further on in my azure-pipelines.yml, I am running the following az command:
az storage blob upload-batch -s drop -d \$web --account-name "" --account-key ""
How can I use Module's output variable in the YML file?
You can use the output command. For example:
terraform output storage_account_primary_access_key
So you may do something like this:
az storage blob upload-batch -s drop -d $web \
--account-name "" \
--account-key "$(terraform output -raw storage_account_primary_access_key)"
You could also assign it to a variable that you can use throughout your pipeline, something along these lines:
- script: echo "##vso[task.setvariable variable=ACCESS_KEY]$(terraform output -raw storage_account_primary_access_key)"
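To make the flow concrete, here is a minimal, runnable sketch of the same pattern. The `terraform_output` function is a stand-in mock for `terraform output -raw storage_account_primary_access_key` (so the sketch runs without Terraform); the variable name `ACCESS_KEY` is an assumption, not something from the original question.

```shell
# Mock of `terraform output -raw storage_account_primary_access_key`
# so this sketch runs without a Terraform state.
terraform_output() { echo "fake-primary-access-key"; }

KEY="$(terraform_output)"

# Azure Pipelines scans stdout for this logging command and exposes
# $(ACCESS_KEY) to later steps in the same job.
SETVAR_LINE="##vso[task.setvariable variable=ACCESS_KEY]$KEY"
echo "$SETVAR_LINE"
```

In a real pipeline the `echo` runs inside a `script:` step after `terraform apply`, and subsequent steps read the value as `$(ACCESS_KEY)`.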
I have an Azure Storage account and in it are multiple Containers. How can I check all of the Containers to see if a specific named blob is in any of them? Also the blobs have multiple directories.
I know there's the az storage blob exists command, but that requires a Container name parameter. Will I have to use the List Containers command first?
Yes, you need to get the list of containers first. I reproduced this in my environment, following the Microsoft documentation, and got the expected results as below.
First, I ran the following to get the list of containers:
$storage_account_name="rithemo"
$key="T0M65s8BOi/v/ytQUN+AStFvA7KA=="
$containers=az storage container list --account-name $storage_account_name --account-key $key
$x=$containers | ConvertFrom-json
$x.name
(Here $key is the storage account key.)
Now getting every blob present in all containers in my Storage account:
$Target = @()
foreach($emo in $x.name )
{
$y=az storage blob list -c $emo --account-name $storage_account_name --account-key $key
$y=$y | ConvertFrom-json
$Target += $y.name
}
$Target
Now checking if given blob exists or not as below:
$s="Check blob name"
if($Target -contains $s){
Write-Host("Blob exists")
}else{
Write-Host("Blob does not exist")
}
Or you can use the az storage blob exists command directly, after getting the container list:
foreach($emo in $x.name )
{
az storage blob exists --account-key $key --account-name $storage_account_name --container-name $emo --name "Check blob name"
}
Yes, you will need to use the List Containers command to get a list of all the containers in your storage account, and then loop through that list, checking each container for the specific blob you are looking for.
Here's an example of how you can accomplish this using the Azure CLI:
# First, get a list of all the containers in your storage account
containers=$(az storage container list --account-name YOUR_STORAGE_ACCOUNT_NAME --output tsv --query '[].name')
# Loop through the list of containers
for container in $containers
do
# Check whether the specific blob exists in the current container.
# Note: az storage blob exists exits 0 whether or not the blob exists,
# so query the "exists" field instead of checking $?.
exists=$(az storage blob exists --account-name YOUR_STORAGE_ACCOUNT_NAME --container-name $container --name YOUR_BLOB_NAME --query exists --output tsv)
# If the blob exists, print a message and break out of the loop
if [ "$exists" = "true" ]; then
echo "Blob found in container: $container"
break
fi
done
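The loop's control flow can be exercised locally with a mock, which also shows why the JSON `exists` field, not the exit code, is the right thing to test (the real `az storage blob exists` exits 0 either way). The `blob_exists` function and the container/blob names here are hypothetical stand-ins:

```shell
# Mock of `az storage blob exists ... --query exists -o tsv`:
# prints "true" or "false" but exits 0 either way, like the real command.
blob_exists() {  # $1 = container, $2 = blob name
  case "$1/$2" in
    docs/report.pdf) echo true ;;
    *) echo false ;;
  esac
}

found=""
for container in docs images backups; do
  if [ "$(blob_exists "$container" report.pdf)" = "true" ]; then
    found="$container"
    break
  fi
done
echo "Blob found in container: $found"
```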
I'm trying to delete all files from an Azure Storage File share and exclude one specific file name. I just can't figure out the pattern.
I've tried this but the pattern doesn't match anything now.
az storage file delete-batch --source "https://myst.file.core.windows.net/myshare" --sas-token $sastoken --pattern "[!importantfile.py]"
You need to pass the pattern parameter in single quotes, as shown below:
az storage file delete-batch --source "https://myst.file.core.windows.net/myshare" --sas-token $sastoken --pattern '[!importantfile.py]*'
If the file share has files with similar names (e.g. test.json, test1.json), the az storage file delete-batch pattern filter cannot reliably exclude a single file from deletion.
Reference: SO thread on how to use patterns with az storage blob delete-batch.
Alternatively, if you want a particular file excluded from deletion in the file share, you can use the PowerShell script below:
connect-azaccount
$accountName = '<accountname>';
$accountKey = '<accountkey>';
$myshare = '<file sharename >';
$notremovalfile = '<file that need to be excluded from deletion>';
$filelist = az storage file list -s $myshare --account-name $accountName --account-key $accountKey
$fileArray = $filelist|ConvertFrom-Json
foreach ($file in $fileArray)
{
if($file.name -ne $notremovalfile)
{
Write-Host $file.name
az storage file delete --account-name $accountName --account-key $accountKey -s $myshare -p $file.name
Write-Host "deleting $($file.name)"
}
}
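The list-filter-delete logic above can be sketched as a self-contained shell loop, with `delete_file` as a mock standing in for `az storage file delete` and a hard-coded file list standing in for the `az storage file list` output (both are assumptions so the sketch runs without a storage account):

```shell
# File to keep, mirroring $notremovalfile in the script above.
notremovalfile="importantfile.py"

# Mock of `az storage file delete` that just records what it was asked to delete.
deleted=""
delete_file() { deleted="$deleted $1"; }

# Hypothetical share contents standing in for `az storage file list` output.
for name in a.txt importantfile.py b.txt; do
  if [ "$name" != "$notremovalfile" ]; then
    delete_file "$name"
  fi
done
echo "deleted:$deleted"
```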
I need to calculate the size of specific containers and folders in ADLS Gen2.
I started with the command az storage fs file list, but I don't understand how to grab the next_marker. It appears on stdout as a warning, not in the output of the command:
WARNING: Next Marker:
WARNING: VBbYvMrBhcCCqHEYSAAAA=
So how do I get this next_marker?
$files=$(az storage fs file list --file-system <container name>\
--auth-mode login --account-name <account name> \
--query "[*].[contentLength]" --num-results 1000 -o tsv)
$files.next_marker is empty.
UPDATE1: Created issues https://github.com/Azure/azure-cli/issues/16893
With the Azure CLI command az storage fs file list, the next_marker is not returned into the variable $files; it is only printed to the console, so you would have to copy and paste it.
As a workaround, you can use az storage blob list (most of the blob storage CLI commands also work against ADLS Gen2). This command has a --show-next-marker parameter that you can use to return next_marker into a variable.
I wrote an Azure CLI script that works well for ADLS Gen2:
$next_token = ""
$blobs=""
$response = & az storage blob list --container-name your_file_system_in_ADLS_Gen2 --account-name your_ADLS_Gen2_account --account-key your_ADLS_Gen2_key --num-results 5 --show-next-marker | ConvertFrom-Json
$blobs += $response.properties.contentLength
$next_token = $response.nextMarker
while ($next_token){
$response = & az storage blob list --container-name your_file_system_in_ADLS_Gen2 --account-name your_ADLS_Gen2_account --account-key your_ADLS_Gen2_key --num-results 5 --marker $next_token --show-next-marker | ConvertFrom-Json
$blobs = $blobs + " " + $response.properties.contentLength
$next_token = $response.nextMarker
}
$blobs
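The marker-driven paging pattern the script relies on can be exercised locally. In this sketch, `list_page` is a mock of a paged `az storage blob list --marker ... --show-next-marker` call: each invocation emits one page of content lengths plus the next marker (empty on the last page). All names and sizes here are made up:

```shell
# Mock paged listing: line 1 = sizes on this page, line 2 = next marker.
list_page() {
  case "$1" in
    "")  echo "100 200"; echo "m1" ;;   # first page, marker m1 follows
    m1)  echo "300";     echo ""  ;;    # last page: empty marker ends paging
  esac
}

marker=""
total=0
while : ; do
  page="$(list_page "$marker")"
  sizes="$(printf '%s\n' "$page" | sed -n 1p)"
  marker="$(printf '%s\n' "$page" | sed -n 2p)"
  for s in $sizes; do total=$((total + s)); done
  [ -z "$marker" ] && break
done
echo "total bytes: $total"
```

The real script does the same thing: feed the previous nextMarker back in via --marker until it comes back empty, accumulating contentLength along the way.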
Please note that you should upgrade your Azure CLI to the latest version; the --show-next-marker parameter may not work in older versions, as per this issue.
I have a user who is putting a lot of whitespace in their filenames, and this is causing a download script to break.
To get the names of the blobs I use this:
BLOBS=$(az storage blob list --container-name $c \
--account-name $AZURE_STORAGE_ACCOUNT --account-key $AZURE_STORAGE_KEY \
--query "[].{name:name}" --output tsv)
What is happening is that a blob like blob with space.pdf gets stored as [blob\twith\tspace.pdf], where \t is a tab. When I iterate over the list to download, I obviously can't get at the file.
How can I do this correctly?
You can use the az storage blob download-batch command.
I tested it in the Azure Cloud Shell; all the blobs, including those whose names contain whitespace, were downloaded.
The command:
c=container_name
AZURE_STORAGE_ACCOUNT=xx
AZURE_STORAGE_KEY=xx
# download the blobs to clouddrive
cd clouddrive
az storage blob download-batch -d . -s $c --account-name $AZURE_STORAGE_ACCOUNT --account-key $AZURE_STORAGE_KEY
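If you do still need to iterate blob names yourself, reading them line by line with `read -r` keeps names containing spaces intact, instead of letting word splitting tear them apart. In this sketch the here-document stands in for the output of `az storage blob list ... --output tsv` (the two names are made up):

```shell
# Stand-in for `az storage blob list ... --query "[].name" -o tsv` output:
# one blob name per line, the first containing spaces.
names="blob with space.pdf
plain.pdf"

count=0
while IFS= read -r name; do
  # $name holds the full blob name, spaces and all;
  # a real script would call `az storage blob download --name "$name"` here.
  count=$((count + 1))
  last="$name"
done <<EOF
$names
EOF
echo "read $count names; last: $last"
```

The key points are quoting `"$name"` everywhere it is used and never iterating with an unquoted `for name in $BLOBS`.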
In my release pipeline I'm using an Azure CLI task to transfer my build files to an Azure Storage blob container:
call az storage blob upload-batch --source "$(System.DefaultWorkingDirectory)/_ClientWeb-Build-CI/ShellArtifact/out/build" --destination "$web" --account-key "****QxjclDGftOY/agnqUDzwNe/gOIAzsQ==" --account-name "*****estx"
This works, but I want to retrieve the account-key dynamically.
When I use:
az storage account keys list -g CustomersV2 -n ****estx
I get an array with 2 objects, each holding a key value:
[
{
"keyName": "key1",
"permissions": "Full",
"value": "f/eybpcl*****************Vm9uT1PwFC1D82QxjclDGftOY/agnqUDzwNe/gOIAzsQ=="
},
{
"keyName": "key2",
"permissions": "Full",
"value": "bNM**********L6OxAemK1U2oudW5WGRQW++/bzD6jVw=="
}
]
How do I use one of the two keys in my upload-batch command?
If you just want one of the two keys, for example the first one, you can set a variable with the key as its value like this:
key=$(az storage account keys list -g CustomersV2 -n ****estx --query [0].value -o tsv)
And then use the variable key in the other command like this:
call az storage blob upload-batch --source "$(System.DefaultWorkingDirectory)/_ClientWeb-Build-CI/ShellArtifact/out/build" --destination "$web" --account-key $key --account-name "*****estx"
Or you can embed the command that gets the key directly in the other command, like this:
call az storage blob upload-batch --source "$(System.DefaultWorkingDirectory)/_ClientWeb-Build-CI/ShellArtifact/out/build" --destination "$web" --account-key $(az storage account keys list -g CustomersV2 -n ****estx --query [0].value -o tsv) --account-name "*****estx"
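The capture-into-a-variable pattern itself is easy to check locally. Here `az_keys_list` is a mock for `az storage account keys list ... --query [0].value -o tsv`, which reduces the JSON array to the first key's bare value; the fake key string is invented for the sketch:

```shell
# Mock of `az storage account keys list -g <rg> -n <account> --query [0].value -o tsv`,
# which prints just the first key's value on one line.
az_keys_list() { printf '%s\n' "f/eybpFAKEKEY=="; }

# Same shape as: key=$(az storage account keys list ... -o tsv)
key=$(az_keys_list)
echo "key captured: $key"
```

The --query [0].value / -o tsv pair is what makes the output a single unquoted token, so command substitution captures it cleanly.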
Update
According to what you said, it seems you run the command in the Windows command prompt, which works differently from a Linux shell or PowerShell: you cannot set an environment variable directly from the output of a command. You can do it like this instead:
az storage account keys list -g CustomersV2 -n ****estx --query [0].value -o tsv > key.txt
set /P key=<key.txt
az storage blob upload-batch --source "$(System.DefaultWorkingDirectory)/_ClientWeb-Build-CI/ShellArtifact/out/build" --destination "$web" --account-key %key% --account-name "*****estx"
Also, in the command prompt you can only reference environment variables as %variable_name%, so "$web" is probably not being interpreted the way you expect in your command.
I created an Azure PowerShell task (version 4) that does:
az login -u **** -p ****
$key = az storage account keys list -g ***** -n **** --query [0].value -o tsv
Write-Host "##vso[task.setvariable variable=something;]$key"
Then I can use the something variable in my Azure CLI task:
call az storage blob upload-batch --source "$(System.DefaultWorkingDirectory)/_ClientWeb-Build-CI/ShellArtifact/out/build" --destination "$web" --account-key $(something) --account-name "*****"
And this works. You'll probably need to put the -u and -p in a variable though.
@Charles thanks a lot for this line (az storage account keys list -g **** -n ****estx --query [0].value -o tsv)!