Azure CLI and SAS Token issue in PowerShell

I generated SAS token in Azure Portal and trying to use it to upload files to blob storage:
az storage blob upload-batch --source ./test --destination '$web' --account-name 'myaccountname' --sas-token '"sp=racwl&st=2022-02-22T17:04:19Z&se=2022-12-23T01:04:19Z&spr=https&sv=2020-08-04&sr=c&sig=mXXXXXXXXXXXXXXXXXXXXXXXXXXONfAA%3D"'
But the above command gives me the following error in PowerShell:
<AuthenticationErrorDetail>Signature fields not well formed.</AuthenticationErrorDetail>
I am literally copying the SAS token from the Azure portal, so how on earth can it be malformed?

We ran the same az storage blob upload-batch command in our local environment (which is running PowerShell v5.1) and were able to upload the files from the local machine to the storage account.
Here is the command we used:
az storage blob upload-batch --account-name <strgAccountName> -s <sourcefilepath> -d 'https://xxxxxx.blob.core.windows.net/cont1' --sas-token '<generatedSAStoken from portal>'
Note: We tried passing the SAS token to the --sas-token flag both in single quotes with a leading '?' appended and without any single quotes at all; in both cases we were able to upload the files from the local machine to the Azure storage container.
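For instance, the single-quoted form with a leading '?' looks like this (account, paths, and token are placeholders, same as above):
az storage blob upload-batch --account-name <strgAccountName> -s <sourcefilepath> -d 'https://xxxxxx.blob.core.windows.net/cont1' --sas-token '?<generatedSAStoken from portal>'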

I don't know what was wrong, but suddenly it started working with the '"sastoken"' format. Thanks for your responses.

Related

Delete Azure storage blob given a URI

I am working on a cleanup script that deletes an Azure image and its underlying storage blob. I can find the storage blob for my image with this command:
az image list --query "[?name=='$IMAGE_NAME'] | [].storageProfile.osDisk.blobUri"
(This is bash, so $IMAGE_NAME gets replaced with the actual image name). The output of the above command is a JSON list of URIs, each looking something like this:
https://storage_account.blob.core.windows.net/container_name/blob_name.vhd
Looking at the documentation for az storage blob delete, I can tell that this blob can be deleted with a command like this:
az storage blob delete --account-name storage_account --container container_name --name blob_name.vhd
So, obviously I can parse the URI and then generate this command. However, this seems odd: what's the point of giving blobs a URI if you can't use them?
So my question is:
Is there a direct az cli command to delete a blob by using its URI?
Better yet, is there a way to delete the blob associated with a given Azure image?
There is no built-in CLI command that deletes a blob by its URL directly. As a workaround, you can use az rest to call the Delete Blob REST API:
# Get a bearer token for the storage data plane
access_token=$(az account get-access-token --resource https://storage.azure.com/ --query accessToken -o tsv)
# The x-ms-date header must be in RFC 1123 (GMT) format
now=$(env LANG=en_US TZ=GMT date '+%a, %d %b %Y %T %Z')
# Call the Delete Blob REST API against the blob's URI
az rest --method delete --uri "$blob_url" --headers "Authorization=Bearer $access_token" "x-ms-date=$now" "x-ms-version=2020-08-04"
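To also resolve the blob URL from the image name in the same script, a rough sketch (assuming exactly one matching image; the [0] index selection and -o tsv are my additions):
blob_url=$(az image list --query "[?name=='$IMAGE_NAME'] | [0].storageProfile.osDisk.blobUri" -o tsv)
and then run the az rest call above against $blob_url.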

How to enable/disable staticwebsite on blob service using script

In order to automate enabling the static website on the blob service programmatically, what should the property field be? I am unable to find something similar to Microsoft.Storage/storageAccounts/blobService/staticWebsite.
Can anyone help me with this?
Indeed, you can do so with some Azure CLI commands.
# To query the current status of the property
az storage blob service-properties show --account-name <your-storage-account> --query 'staticWebsite.enabled'
To toggle the staticWebsite.enabled property, you can use the az storage blob service-properties update command as follows:
az storage blob service-properties update --account-name <your-storage-account> --static-website
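The --static-website flag should also accept explicit true/false values, so (assuming the current CLI syntax, with hypothetical document names) enabling with default documents and disabling again would look something like:
az storage blob service-properties update --account-name <your-storage-account> --static-website true --index-document index.html --404-document error.html
az storage blob service-properties update --account-name <your-storage-account> --static-website false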
The Azure PowerShell equivalents for the above would be the Enable-AzStorageStaticWebsite and Disable-AzStorageStaticWebsite cmdlets.

Copy file from Azure VM to Azure Blob Storage

I'm new to this and am trying to do something which I think is relatively simple.
I download a file from a URL to my Azure VM using wget (it's a large file and I don't want to store it locally). I now want to copy this file to an existing container in blob storage. This is completely defeating me.
It's a single line command in the aws universe
aws s3 sync <file_name> s3://<bucket name>
is there an equivalent in azure?
There are a bunch of ways you can accomplish this, and you don't even have to download this large file to your local computer first and then upload it to blob storage.
For example, you can use the az storage blob copy command, which is part of the Azure CLI, to do so. Here's a sample command:
az storage blob copy start --account-key <your-azure-storage-account-key> --account-name <your-azure-storage-account-name> --destination-blob <name-of-the-blob> --destination-container <name-of-the-container> --source-uri <uri-of-the-file>
You can also accomplish the same thing using the azcopy utility or the Azure PowerShell storage cmdlets; the cmdlet you would want to use is Start-AzStorageBlobCopy.
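If you go the azcopy route, a minimal sketch (the source URL, account, container, and SAS token are all placeholders you'd swap in):
azcopy copy 'https://example.com/largefile.iso' 'https://<your-account>.blob.core.windows.net/<your-container>/largefile.iso?<sas-token>'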

Azure CLI - Blob list command throwing invalid "InvalidResourceName" error while using with SAS token

I am using the Azure CLI on my machine to download and list blobs from Azure. I tried listing the blobs using the account key, which worked as expected. But when I try to list them using a SAS token, I get the exception mentioned below.
Command:
C:\Users\22222>az storage blob list -c containerName --account-name accountName -o table --sas-token sp=r&st=2018-10-16T12:53:16Z&se=2018-10-16T20:53:16Z&spr=https&sv=2017-11-09&sig=d%2asdfasdfewerasdf$#$%#$%#$A%3D&sr=b
Note: I got this SAS Token from Azure portal
The specified resource does not exist.ErrorCode: ResourceNotFound
<?xml version="1.0" encoding="utf-8"?><Error><Code>ResourceNotFound</Code><Message>The specified resource does not exist.
RequestId:a108a8f9-d01e-000d-6a6c-6b0194000000
Time:2018-10-24T07:38:04.5834052Z</Message></Error>
I can also reproduce the issue if the SAS token is not put in quotation marks. Try enclosing the SAS token string in quotation marks:
az storage blob list -c containerName --account-name accountName -o table --sas-token "sp=r&st=2018-10-16T12:53:16Z&se=2018-10-16T20:53:16Z&spr=https&sv=2017-11-09&sig=d%2asdfasdfewerasdf$#$%#$%#$A%3D&sr=b"
Note: Make sure that your SAS token is valid.

Azure SAS for one container

I am able to generate a SAS token for the storage account from the Azure portal, but the problem I am facing is explained below.
The storage account consists of two containers. One container has to be accessible to the users to whom I provide the SAS token, and the other container should be completely private, meaning those users cannot see it.
The problem is that if I generate a SAS token and log into Azure Storage Explorer using that token, I see both containers, but my requirement is to see only one container. Is there any way to grant permission for only one container by generating a SAS token from the Azure portal, without creating a custom application to generate these tokens?
The easiest way to do that would be to use PowerShell:
Set-AzureRmCurrentStorageAccount -ResourceGroupName 'resource group name' -Name 'storage account name'
$sasToken = New-AzureStorageContainerSASToken -Permission r -ExpiryTime (Get-Date).AddHours(2.0) -Name 'container name'
You could issue this command with the -Debug switch, capture the REST call, and then mimic that call using ARMClient, a custom app, or whatever fits your setup.
The Azure CLI alternative:
az storage container generate-sas --account-name ACCOUNT_NAME --account-key ACCOUNT_KEY --https-only --expiry 'YYYY-MM-DD' --name CONTAINER_NAME --permissions r
Valid permissions: (a)dd (c)reate (d)elete (l)ist (r)ead (w)rite
For more information, check out: az storage container generate-sas -h
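As a quick sanity check, the resulting token can then be used against just that container; a sketch with placeholder names (note that listing blobs needs the (l)ist permission, hence rl):
sas=$(az storage container generate-sas --account-name ACCOUNT_NAME --account-key ACCOUNT_KEY --https-only --expiry 'YYYY-MM-DD' --name CONTAINER_NAME --permissions rl -o tsv)
az storage blob list --account-name ACCOUNT_NAME --container-name CONTAINER_NAME --sas-token "$sas" -o table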
