I am able to generate a SAS token for the storage account from the Azure portal, but the problem I am facing is explained below.
The storage account consists of two containers. One container has to be made accessible to the users to whom I will provide the SAS token, and the other container should be completely private, meaning users cannot see it at all.
The problem is that if I generate a SAS token and log into Azure Storage Explorer using it, I see both containers, but my requirement is to see only one. Is there any way to grant permission for only one container by generating a SAS token from the Azure portal, without creating a custom application to generate these tokens?
The easiest way to do that would be to use PowerShell:
Set-AzureRmCurrentStorageAccount -ResourceGroupName 'resource group name' -Name 'storage account name'
$sasToken = New-AzureStorageContainerSASToken -Name 'container name' -Permission r -ExpiryTime (Get-Date).AddHours(2.0)
You could issue this command with the -Debug switch, capture the underlying REST call, and mimic that call using ARMClient, a custom app, or whatever you prefer.
The Azure CLI alternative:
az storage container generate-sas --account-name ACCOUNT_NAME --account-key ACCOUNT_KEY --https-only --expiry 'YYYY-MM-DD' --name CONTAINER_NAME --permissions r
Valid permissions: (a)dd (c)reate (d)elete (l)ist (r)ead (w)rite
For more information, check out: az storage container generate-sas -h
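Putting it together, here is a sketch of scoping a read-only SAS to a single container; the account, key, and container names are placeholders, and GNU date is assumed for computing the expiry:

```shell
# Compute a UTC expiry two hours from now (ISO 8601, minute precision).
expiry=$(date -u -d '+2 hours' +"%Y-%m-%dT%H:%MZ")

# Generate a read-only SAS for just one container; users holding this token
# can read that container but cannot see the rest of the account.
az storage container generate-sas \
  --account-name mystorageaccount \
  --account-key "$ACCOUNT_KEY" \
  --name mycontainer \
  --permissions r \
  --expiry "$expiry" \
  --https-only \
  --output tsv
```

The token printed by `--output tsv` can be appended as a query string to the container URL before handing it to users.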
I am following this tutorial (and the previous ones): https://learn.microsoft.com/en-us/training/modules/connect-an-app-to-azure-storage/9-initialize-the-storage-account-model?pivots=javascript to connect an application to the Azure Storage account.
At step 8, I verify the creation of the container by running the given Azure CLI command, substituting my storage account name:
az storage container list \
--account-name <name>
I get the following output:
There are no credentials provided in your command and environment, we will query for account key for your storage account.
It is recommended to provide --connection-string, --account-key or --sas-token in your command as credentials.
You also can add `--auth-mode login` in your command to use Azure Active Directory (Azure AD) for authorization if your login account is assigned required RBAC roles.
For more information about RBAC roles in storage, visit https://learn.microsoft.com/azure/storage/common/storage-auth-aad-rbac-cli.
In addition, setting the corresponding environment variables can avoid inputting credentials in your command. Please use --help to get more information about environment variable usage.
[]
I am not sure whether the [] at the end of the output means the container is listed or not.
Comments and suggestions are welcome. Thanks!
The message you are getting is due to an auth issue.
There are three solutions. The first is to run the following command before running az storage container list:
az login
The second is to use the --auth-mode option with az storage container list; this is mentioned in the warning prompt itself.
command:
az storage container list --account-name <name> --auth-mode login
This will prompt you for login credentials; once they are provided, the output should list your containers.
Lastly, you can use the account key directly:
az storage container list --account-name <name> --auth-mode key --account-key <key>
You can get your key from the portal under Access keys.
The output of the command should look similar in your case; here I have two containers, named photos and test.
I tried to reproduce this in my environment and got the same message:
There are no credentials provided in your command and environment,
we will query for account key for your storage account. It is
recommended to provide --connection-string, --account-key or
--sas-token in your command as credentials.
You also can add --auth-mode login in your command to use Azure
Active Directory (Azure AD) for authorization if your login account is
assigned required RBAC roles. For more information about RBAC roles in
storage, visit
https://learn.microsoft.com/azure/storage/common/storage-auth-aad-rbac-cli.
In addition, setting the corresponding environment variables can avoid
inputting credentials in your command. Please use --help to get more
information about environment variable usage. []
The [] at the end of the output shows that no containers or files exist in your storage account yet.
I created one container and added files.
Then I ran the same command again and got output successfully.
If you want to suppress the warnings, you can add the --only-show-errors option.
Reference:
az storage container | Microsoft Learn
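If the account genuinely has no containers yet, you can create one and then list again; a sketch with a placeholder account name, assuming your signed-in identity holds a suitable RBAC role for Azure AD auth:

```shell
# Create a container, then list containers using Azure AD authorization.
az storage container create --account-name <name> --name photos --auth-mode login
az storage container list --account-name <name> --auth-mode login --output table
```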
I'm generating a SAS token to access the linked templates in my ARM deployment, and I'm passing the SAS token as a parameter override to the az deployment command. My template deployment fails with the error "Unable to download deployment content from 'https://myLinkedTemplateURL?SASToken'".
First, I fetch the storageAccountKey stored in a keyvault:
$storeKey = az keyvault secret show --name "myStorageSecretName" --vault-name "myKeyVaultName" --query value
$storeKey = $storeKey.Replace('"','')
Then, here are the two ways I'm generating the SAS token:
A SAS token generated by this method makes the deployment succeed:
$context = New-AzureStorageContext -StorageAccountName 'myStorageAccountName' -StorageAccountKey $storeKey
$tokenval = New-AzureStorageContainerSASToken -Container builds -Permission rwdl -Context $context
A SAS token generated by this method makes the deployment fail:
$tokenval = az storage container generate-sas --account-key $storeKey --account-name "myStorageAccountName" --name "testcontainer" --permissions acdlrw --expiry (Get-Date).AddMinutes(30).ToString("yyyy-MM-dTH:mZ")
Also, I observe that the SAS token generated by the second method is shorter than the one from the first.
Can someone please help shed some light on what's the difference between the above two methods and why one fails but the other succeeds?
$tokenval = az storage container generate-sas --account-key $storeKey --account-name "myStorageAccountName" --name "testcontainer" --permissions acdlrw --expiry (Get-Date).AddMinutes(30).ToString("yyyy-MM-dTH:mZ")
As mentioned in the comments, the issue actually is with the SAS expiry date. You are getting the local date and formatting it in ISO 8601 format whereas you need to get the date/time value in UTC and format it.
Please try something like:
$tokenval = az storage container generate-sas --account-key $storeKey --account-name "myStorageAccountName" --name "testcontainer" --permissions acdlrw --expiry (Get-Date).ToUniversalTime().AddMinutes(30).ToString("yyyy-MM-dTH:mZ")
The problem, I believe, is with how you're defining the permissions (`acdlrw`). According to the [`documentation`][1], the permissions must be specified in a particular order. From this link:
> Permissions can be combined to permit a client to perform multiple
> operations with the same signature. **When you construct the SAS, you
> must include permissions in the order that they appear in the table
> for the resource type**.
Based on this, can you try with the permissions in the following order - `racwdl`?
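As a sanity check on the expiry value itself, here is a shell sketch (GNU date assumed) that produces a fully zero-padded UTC timestamp; note that the original format string "yyyy-MM-dTH:mZ" drops the zero-padding on the day, hour, and minute fields, which "yyyy-MM-ddTHH:mmZ" would keep:

```shell
# UTC expiry 30 minutes from now, in zero-padded ISO 8601 (minute precision).
# Roughly equivalent to the PowerShell expression
# (Get-Date).ToUniversalTime().AddMinutes(30).ToString("yyyy-MM-ddTHH:mmZ").
expiry=$(date -u -d '+30 minutes' +"%Y-%m-%dT%H:%MZ")
echo "$expiry"
```

Passing a value in this shape to --expiry, together with permissions in racwdl order, sidesteps both of the suspected pitfalls.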
When I create a new Cosmos DB (SQL API) container, I see events "Write SQL Database" and "Write SQL Container" in the Azure Portal - Activity Log. However, I don't see any event logged when I delete the Container. Am I missing something, or perhaps can someone explain why delete events are not deemed relevant for logging?
Update 05/07:
I just heard back from the support team; here are the details:
If you want to audit the various operations that end users perform, you need to disable key-based metadata write access on your account.
Once it is disabled, Cosmos DB will prevent access to the account via account keys, and only users with the proper role-based access control (RBAC) role and credentials can make changes to any resource within the account.
This is by-design behavior; you can refer to these documents for more detail:
The commands below enable container-delete events to appear in the Activity log.
Disable key-based metadata write access via the CLI:
$subscriptionid = "xxx"
$resourceGroupName = "xxx"
$accountName = "xx"
$databaseName = "xx"
$containerName = "xx"
az account set --subscription $subscriptionid
az cosmosdb update --name $accountName --resource-group $resourceGroupName --disable-key-based-metadata-write-access true
Delete the SQL container via the CLI:
az cosmosdb sql container delete --account-name $accountName --resource-group $resourceGroupName --database-name $databaseName --name $containerName
The delete operation then shows up in the Activity log.
Important note: if you disable key-based metadata write access via the Azure CLI, you can no longer delete a container through the UI (Azure portal); you have to do it via the Azure CLI. If you want to do it through the UI again, re-enable access via the Azure CLI by setting --disable-key-based-metadata-write-access to false.
I followed the tutorial (below *)
and now have a service principal.
How can I use this service principal when reading a blob using Get-AzureStorageBlob?
Get-AzureStorageBlob requires a New-AzureStorageContext; can I use the SP instead of the StorageAccountKey GUID?
Thanks, Peter
https://azure.microsoft.com/en-us/documentation/articles/resource-group-authenticate-service-principal/
As far as I know, you cannot use an SPN for accessing items in blob storage. You will need to use the access keys or SAS tokens.
Recently, Azure has added an option to Manage access rights to Azure Storage data with RBAC. You need to add one of the built-in RBAC roles scoped to the storage account to your service principal.
Storage Blob Data Contributor (Preview)
Storage Blob Data Reader (Preview)
Then, if you want to use the Azure CLI to access Blob Storage with a service principal:
Log in with a service principal
$ az login --service-principal --tenant contoso.onmicrosoft.com -u http://azure-cli-2016-08-05-14-31-15 -p VerySecret
Enable the preview extension
$ az extension add -n storage-preview
Use --auth-mode parameter with your AzureCLI command
$ az storage blob download --account-name storagesamples --container sample-container --name myblob.txt --file myfile.txt --auth-mode login
For more information please see:
Manage access rights to Azure Storage data with RBAC (Preview)
Use an Azure AD identity to access Azure Storage with CLI or PowerShell (Preview)
If your SPN has only the Reader role, you cannot access the storage without a SAS or account key.
You can assign the SPN the Contributor role and use it to create a SAS for other normal users,
then switch to a normal user to access the storage with that SAS.
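The role assignment mentioned above can itself be scripted from the CLI; a sketch with placeholder IDs:

```shell
# Grant the service principal read access to blob data in one storage account.
az role assignment create \
  --assignee <service-principal-app-id> \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"
```

Use "Storage Blob Data Contributor" instead of Reader if the SPN also needs to write blobs or create SAS-equivalent access for others.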
I'm following steps in this tutorial: How to use Blob storage from iOS to generate Shared Access Signatures (SAS). I ran the commands successfully including this one:
azure storage container sas create --container sascontainer
--permissions rw --expiry 2016-09-05T00:00:00
My terminal said:
info: Executing command storage container sas create
+ Creating shared access signature for container sascontainer
I looked at the Azure portal and I don't see that container (sascontainer) created anywhere. According to this article, my understanding is that it will create a container:
--container : The name of the storage container to create.
So, where is it? Shouldn't that command be enough to create that container and make it visible in my Azure portal? I have also looked in the Azure Classic Portal.
azure storage container sas create --container sascontainer
--permissions rw --expiry 2016-09-05T00:00:00
This command will not create a blob container. It will create a Shared Access Signature on a container named sascontainer with Read and Write permission that will expire on 2016-09-05T00:00:00.
To create a blob container, the command you want to use is:
azure storage container create "sascontainer"
Once this command completes successfully, you should be able to see the blob container in the portal.
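Note that the `azure` commands above come from the classic (ASM-era) CLI; with the current `az` CLI, the equivalent pair would look roughly like this (account and container names are placeholders):

```shell
# Create the container, then generate a read/write SAS for it.
az storage container create --account-name <account> --name sascontainer
az storage container generate-sas --account-name <account> --name sascontainer \
  --permissions rw --expiry 2016-09-05T00:00:00Z
```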