Following the instructions for the Azure CLI "quickstart" on creating a blob.
It looks like something in the default storage account is blocking the ability to create new containers; yet, the "defaultAction" is Allow:
The following Azure CLI:
az storage container create --account-name meaningfulname --name nancy --auth-mode login
... results in an error suggesting that the network rules of the storage account might be the cause:
The request may be blocked by network rules of storage account. Please check network rule set using 'az storage account show -n accountname --query networkRuleSet'.
If you want to change the default action to apply when no rule matches, please use 'az storage account update'.
Using the suggestion from the above message, the "show" command on the account-name gives:
> az storage account show -n meaningfulname --query networkRuleSet
{
"bypass": "AzureServices",
"defaultAction": "Allow",
"ipRules": [],
"virtualNetworkRules": []
}
I would think that the Azure CLI would be among the "services" that could bypass and do operations. And, the default action would seem to me to be quite permissive.
I've searched around using the error messages and commands (and variations of them). There does not appear to be much written about this, so either I don't know some quirk of the Azure CLI, or it is so obvious that nobody has written it up. I don't think I'm duplicating an existing question.
Although the selected answer is different, there can be another reason, as in my case: you need to be assigned the right role before you can create a container, as stated by the Microsoft documentation here
Before you create the container, assign the Storage Blob Data
Contributor role to yourself. Even though you are the account owner,
you need explicit permissions to perform data operations against the
storage account.
Also note that
Azure role assignments may take a few minutes to propagate.
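A sketch of that role assignment from the CLI; the assignee and the scope below are placeholders, not values from the question:

```shell
# Grant yourself the data-plane role on the storage account so that
# data operations such as container creation are authorized.
az role assignment create \
  --role "Storage Blob Data Contributor" \
  --assignee "user@example.com" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account-name>"
```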
Not sure if this would be helpful...
If you update the "Firewalls and virtual networks" section of the storage account using the CLI to make it accessible from all networks, it takes some time to take effect. I have observed that it takes around 10-30 seconds.
Try waiting 30 seconds and then retry the az storage container create statement. It should work.
Remove the --auth-mode login from your command. Use it like this:
az storage container create \
--account-name helloworld12345 \
--name images \
--public-access container
If we don't set --auth-mode, the default auth mode key is used, which queries your storage account for the account key:
https://learn.microsoft.com/en-us/azure/storage/blobs/authorize-data-operations-cli
Use --auth-mode login in your command if you have the required RBAC roles. For more information about RBAC roles in storage, visit https://learn.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli.
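A side-by-side sketch of the two auth modes, using the names from this answer; which one succeeds depends on whether the caller can read the account keys or holds an RBAC data role:

```shell
# Key auth (the default): the CLI looks up the account key, which requires
# permission to list keys on the storage account.
az storage container create --account-name helloworld12345 --name images

# Azure AD auth: requires an RBAC data role such as Storage Blob Data Contributor.
az storage container create --account-name helloworld12345 --name images --auth-mode login
```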
The current networkRuleSet configuration is sufficient. I cannot reproduce this issue with the same networkRuleSet configuration as yours, so double-check whether there is a typo in the storage account name when creating a container or querying the networkRuleSet.
By default, storage accounts accept connections from clients on any network. To limit access to selected networks, you must first change the default action.
If you need to allow access to your storage account only from specific IP addresses or subnets while still allowing Azure services, you can add rules like this:
{
"bypass": "AzureServices",
"defaultAction": "Deny",
"ipRules": [
{
"action": "Allow",
"ipAddressOrRange": "100.100.100.100"
}
],
"virtualNetworkRules": [
{
"action": "Allow",
"virtualNetworkResourceId": "subnetID"
}
]
}
With the Azure CLI, set the default action to deny or to allow network access:
az storage account update --resource-group "myresourcegroup" --name "mystorageaccount" --default-action Deny
az storage account update --resource-group "myresourcegroup" --name "mystorageaccount" --default-action Allow
See Change the default network access rule for more details.
Edit
In this case, you set the --auth-mode parameter to login to authorize with Azure AD credentials. You need to ensure that the Azure AD security principal with which you sign in to Azure CLI has permission to do data operations against Blob or Queue storage. For more information about RBAC roles in Azure Storage, see Manage access rights to Azure Storage data with RBAC.
Related
I have a free trial subscription on Azure:
$ az account subscription list
Command group 'account subscription' is experimental and under development. Reference and support levels: https://aka.ms/CLI_refstatus
[
{
"authorizationSource": "RoleBased",
"displayName": "Azure subscription 1",
"id": "/subscriptions/fffffff-ffff-ffff-ffff-ffffffffffff",
"state": "Enabled",
"subscriptionId": "fffffff-ffff-ffff-ffff-ffffffffffff",
"subscriptionPolicies": {
"locationPlacementId": "Public_2014-09-01",
"quotaId": "FreeTrial_2014-09-01",
"spendingLimit": "On"
}
}
]
but when I execute the command (list MariaDB SKUs) I get the following error:
$ az mariadb server list-skus --location eastus
(SubscriptionNotExists) Subscription 'fffffff-ffff-ffff-ffff-ffffffffffff' does not exist.
Code: SubscriptionNotExists
Message: Subscription 'fffffff-ffff-ffff-ffff-ffffffffffff' does not exist.
Works fine under my other account, where I have a pay-as-you-go subscription. The same thing happens with the Go SDK.
If the free trial is the issue it would be great to document it somewhere.
Turns out you have to register resource providers for your subscription before you can use them. For some reason, MariaDB was already registered for one of my accounts but not for the other. The error SubscriptionNotExists is extremely confusing in that regard.
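A sketch of the registration step described above; Microsoft.DBforMariaDB is the provider namespace for Azure Database for MariaDB:

```shell
# Register the MariaDB resource provider for the current subscription,
# then confirm the registration state (registration can take a minute).
az provider register --namespace Microsoft.DBforMariaDB
az provider show --namespace Microsoft.DBforMariaDB --query registrationState
```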
To list the MariaDB SKUs, first make sure that you have logged in successfully using az login.
Then try to execute the command you are using:
az mariadb server list-skus --location eastus
NOTE: Try closing and reopening your terminal, run az login, choose the account that holds the free trial subscription, and then run the command. I don't have a free trial subscription to test this in my environment, but AFAIK the command works for a free trial account as well; see the Microsoft documentation below.
For more details, please refer to the links below for the resources supported by an Azure free trial:
MICROSOFT DOCUMENTATION: Azure free account FAQ, Azure subscription and service limits, quotas, and constraints & az mariadb server list-skus
I have an account key and corresponding account name. How can I find the storage options it has?
Using:
az storage account list
retrieves the accounts that my subscription has access to, and I get the access points:
"primaryEndpoints": {
"blob": "https://MYACCOUNT.blob.core.windows.net/",
"dfs": "https://MYACCOUNT.dfs.core.windows.net/",
"file": "https://MYACCOUNT.file.core.windows.net/",
"internetEndpoints": null,
"microsoftEndpoints": null,
"queue": "https://MYACCOUNT.queue.core.windows.net/",
"table": "https://MYACCOUNT.table.core.windows.net/",
"web": "https://MYACCOUNT.z6.web.core.windows.net/"
}
I want to obtain a similar set of endpoints for an account for which I have only an account key. How can I do this?
Then, if there is a 'blob' access point, I know that I can call:
az storage fs list --account-name "MYACCOUNT" --account-key "MYKEY"
to get the list of blob containers.
Bonus question: how to know whether the key is for a Gen1 or Gen2 type account?
I have an account key and corresponding account name. How can I find the storage options it has? (question from user)
If you are using the CLI, you need to connect to the subscription where the storage account is present and run the commands below. They show the list of storage options/access endpoints and the properties of that particular storage account without using the account key:
az login
az account set --subscription <subscription-id>
az storage account show --name "accountname" --resource-group "resource-groupname"
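To get just the endpoint map shown in the question, the same command can be filtered with --query (the account and resource group names are placeholders):

```shell
# Show only the service endpoints (blob, dfs, file, queue, table, web).
az storage account show \
  --name "accountname" \
  --resource-group "resource-groupname" \
  --query primaryEndpoints
```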
As per the documentation, the az storage fs commands are used to manage file systems in an Azure Data Lake Storage Gen2 account.
Azure doesn't have a mechanism to identify a storage account's generation from an access key. When you create a storage account, Azure generates two 512-bit storage account access keys; these keys authorize access to data in your storage account via Shared Key authorization regardless of the account's generation.
Alternatively, you can use Azure Storage Explorer (portal or desktop version) to check the storage options and the type of storage account: if the HNS (hierarchical namespace) value of the storage account is true, then it is an ADLS Gen2 account.
Using the Azure CLI, use the --query parameter to filter the result:
az storage account show --name $storage_account_name --resource-group $ResourceGroup
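For the bonus question, a CLI-only sketch: the isHnsEnabled property reports whether the hierarchical namespace is on, which is what distinguishes an ADLS Gen2 account (the variable names are placeholders):

```shell
# true       => hierarchical namespace enabled (ADLS Gen2)
# false/null => regular blob storage account
az storage account show \
  --name "$storage_account_name" \
  --resource-group "$ResourceGroup" \
  --query isHnsEnabled
```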
There are some limits for creating resources in an azure subscription, as outlined in the Azure documentation - https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits
How to programmatically get the limits from azure cli/sdk?
Here are a few commands to check resource limits:
az --version shows the installed version.
az login logs you in to Azure.
az network list-usages --location <location> [--subscription <subscription>] lists the number of network resources used in a region against the subscription quota.
For more information, you can check this Document.
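For example, the usage-versus-quota output renders nicely as a table (eastus is just a sample region; az vm list-usage is the analogous command for compute quotas):

```shell
# Network resource usage against the subscription quota in one region.
az network list-usages --location eastus --output table

# The same idea for compute quotas (cores, VM counts, etc.).
az vm list-usage --location eastus --output table
```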
Here are a few commands to check subscription limits:
az account show shows the current Azure subscription.
az account list lists all the Azure subscriptions you have access to.
az account set --subscription "Company Subscription" sets the Azure subscription you want to target.
For more information, you can refer to this Blog
Our CI pipeline needs to back up some files to Azure Blob Storage. I'm using the Azure CLI like this: az storage blob upload-batch -s . -d container/directory --account-name myaccount
When giving the service principal contributor access, it works as expected. However, I would like to lock down permissions so that the service principal is allowed to add files, but not delete, for example. What are the permissions required for this?
I've created a custom role giving it the same permissions as Storage Blob Data Contributor minus delete. This (and also just using the Storage Blob Data Contributor role directly) fails with a Storage account ... not found. Ok, I then proceeded to add more read permissions to the blob service. Not enough, now I'm at a point where it wants to do Microsoft.Storage/storageAccounts/listKeys/action. But if I give it access to the storage keys, then what's the point? With the storage keys the SP will have full access to the account, which I want to avoid in the first place. Why is az storage blob upload-batch requesting keys and can I prevent this from happening?
I've created a custom role giving it the same permissions as Storage Blob Data Contributor minus delete. This (and also just using the Storage Blob Data Contributor role directly) fails with a Storage account ... not found.
I can also reproduce your issue; actually, what you did should work. The trick is the --auth-mode parameter of the command: if you do not specify it, it uses key by default. The command then lists all the storage accounts in your subscription, and when it finds your storage account, it lists the keys of the account and uses a key to upload blobs.
However, the custom role (Storage Blob Data Contributor minus delete) has no permission to list storage accounts, so you get the error.
To solve the issue, just specify --auth-mode login in your command. It will then use the credentials of your service principal to get an access token and use the token to call the REST API - Put Blob to upload blobs; for the principle, see Authorize access to blobs and queues using Azure Active Directory.
az storage blob upload-batch -s . -d container/directory --account-name myaccount --auth-mode login
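If you still want the key-less custom role, a sketch of an upload-only definition (the role name and subscription scope are placeholders; the data actions are the Storage Blob Data Contributor set minus delete):

```shell
# Hypothetical custom role: read, write, and add blobs, but never delete.
az role definition create --role-definition '{
  "Name": "Blob Uploader (no delete)",
  "IsCustom": true,
  "Description": "Upload and read blobs without delete rights.",
  "Actions": [],
  "DataActions": [
    "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read",
    "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write",
    "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action"
  ],
  "AssignableScopes": ["/subscriptions/<subscription-id>"]
}'
```

Combined with --auth-mode login, this avoids ever handing the service principal the account keys.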
I'm trying to use az against my Azure account. My account has two directories: one for personal (default) and one for business. I need to "switch to" the business directory so that az has access to the correct resources. However, I cannot find any way to achieve this via the command line, so when I do az group list I see the resource groups from my personal directory, not the business one.
How can I switch Azure directory from the CLI?
A subscription and a directory are not the same thing. You can have access to several subscriptions in your work directory, for example.
To login to a different (non-default) directory, use the --tenant option with the az login command, passing the FQDN for the directory, e.g.
az login --tenant yourdir.onmicrosoft.com
You can find the FQDN in Azure Portal when listing the directories.
When logged into a directory, you can see list of all your available subscriptions.
# List of the tenants:
az account tenant list
[
{
"id": "/tenants/91358f27-xxxx-xxxxxxxxxxx",
"tenantId": "91358f27-xxxx-xxxxxxxxxxx"
},
{
"id": "/tenants/cf39b7bf-xxxx-xxxxxxxxxxx",
"tenantId": "cf39b7bf-xxxx-xxxxxxxxxxx"
}
]
# Select the tenant ID:
az login --tenant cf39b7bf-xxxx-xxxxxxxxxxx --allow-no-subscriptions
# Set a validated subscription:
az account set --subscription "Pago por uso"
# Verify
az account list -o table
Ugh, never mind. For some reason the CLI calls them subscriptions when the portal calls them directories. So I needed az account set --subscription $SUBSCRIPTION_ID