Throttling issue while listing Azure Storage Accounts

I am using the Azure Java SDK to list the storage accounts in my subscription, but I intermittently get this exception response:
com.microsoft.azure.CloudException: Status code 429,
{"error":{"code":"ResourceCollectionRequestsThrottled","message":"Operation
'Microsoft.Storage/storageAccounts/read' failed as server encountered
too many requests. Please try after '17' seconds. Tracking Id is
'f13c3318-8fb3-4ae1-85a5-975f4c17a512'."}}
Is there a limit on the number of requests one can make to the Azure resource API?

Is there a limit on the number of requests one can make to the Azure resource API?
Yes. The limits are documented here: https://learn.microsoft.com/en-us/azure/azure-subscription-service-limits (please see the "Subscription limits - Azure Resource Manager" section), which also describes the 429 throttling response you are receiving.
Based on the documentation, you are currently allowed to make 15,000 read requests per hour against the Azure Resource Manager API.
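Until the quota window resets, the practical mitigation is to back off and retry, honoring the Retry-After header (or the wait suggested in the error body). A minimal sketch against the raw ARM REST endpoint, assuming a pre-acquired bearer token in an ARM_TOKEN environment variable; the subscription placeholder and api-version are illustrative, not taken from the question:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;

    public class ThrottledArmClient {
        private static final HttpClient CLIENT = HttpClient.newHttpClient();

        // Issues a GET against ARM and retries on 429, honoring the Retry-After header.
        static HttpResponse<String> getWithRetry(String url, String bearerToken) throws Exception {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                    .header("Authorization", "Bearer " + bearerToken)
                    .GET()
                    .build();
            for (int attempt = 0; attempt < 5; attempt++) {
                HttpResponse<String> response = CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
                if (response.statusCode() != 429) {
                    return response;
                }
                // ARM tells you how long to back off; fall back to exponential backoff if absent.
                long waitSeconds = response.headers().firstValue("Retry-After")
                        .map(Long::parseLong)
                        .orElse((long) Math.pow(2, attempt));
                Thread.sleep(Duration.ofSeconds(waitSeconds).toMillis());
            }
            throw new IllegalStateException("Still throttled after 5 attempts: " + url);
        }

        public static void main(String[] args) throws Exception {
            // Hypothetical subscription id; the token acquisition is left to the caller.
            String url = "https://management.azure.com/subscriptions/<subscription-id>"
                    + "/providers/Microsoft.Storage/storageAccounts?api-version=2017-06-01";
            System.out.println(getWithRetry(url, System.getenv("ARM_TOKEN")).body());
        }
    }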

Related

Azure Batch within a VNET that has a Service endpoint policy for Storage

I am struggling to get my Azure batch nodes to start within a Pool that is configured to use a virtual network. The virtual network has been configured with a service endpoint policy that has a "Microsoft.Storage" policy definition and it points at a single storage account. Without the service endpoints defined on the virtual network the Azure batch pool works as expected, but with it the following error occurs and the node never starts.
I have tried creating the Batch account in both pool allocation modes. This did not seem to make a difference: the pool resizes successfully, and then the nodes are stuck in the "Starting" state. In "User Subscription" mode I found the start-up error, because I can see the VM instance in my account:
VM has reported a failure when processing extension 'batchNodeExtension'. Error message: "Enable failed: processing file downloads failed: failed to download file[0]: failed to download file: unexpected status code: actual=403 expected=200" More information on troubleshooting is available at https://aka.ms/VMExtensionCSELinuxTroubleshoot
From what I can determine, this is an Azure VM extension that runs to configure the VM for Azure Batch. My base image is Canonical, ubuntuserver, 18.04-lts (batch.node.ubuntu 18.04). I can see that the extension is attempting to download from:
https://a52a7f3c745c443e8c2cac69.blob.core.windows.net/nodeagentpackage-version9-22-0-2/Ubuntu-18.04/batch_init-ubuntu-18.04-1.8.7.tar.gz (note I removed the SAS token from this URL for posting here)
There are 8 further files that are downloaded, and it looks like this is configuring the Batch agent on the node.
The 403 error indicates that the node cannot connect to this storage account, which makes sense given the service endpoint policy: the policy does not include this storage account, and the account is external to my Azure subscription. I thought I might be able to add it to the service endpoint policy, but I have no way of determining which Azure subscription it belongs to. If I knew that, I thought I could add it like:
Endpoint policies allow you to add specific Azure Storage accounts to an allow list, using the resourceID format. You can also restrict access to all storage accounts in a subscription,
E.g. /subscriptions/subscriptionId (from https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-service-endpoint-policies-overview)
I tried adding security group rules using service tags for Azure Storage, but this did not help. The node still cannot connect, which makes sense given the description of service endpoint policies.
The reason for my interest in this is the following post:
https://github.com/Azure/Batch/issues/66
I am trying to minimise the bandwidth charges from my storage account by using service endpoints.
I have also tried to create my own VM, but I am not sure whether the "batchNodeExtension" script is run automatically for VMs that you're using with Batch.
I would really appreciate any pointers because I am running out of ideas to try!
Batch requires a generic NSG rule allowing all of Storage (a regional variant also works), as specified at https://learn.microsoft.com/en-us/azure/batch/batch-virtual-network#network-security-groups-specifying-subnet-level-rules. Currently it is mainly used to download our agent and to maintain state/get information needed to run tasks.
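If you want to script that rule rather than create it in the portal, a hedged sketch of PUTting an outbound NSG rule with a regional Storage service tag via the ARM REST API might look like this. The subscription, resource group, NSG name, region, rule priority, and the ARM_TOKEN bearer token are all placeholder assumptions:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class AllowStorageOutbound {
        public static void main(String[] args) throws Exception {
            // Hypothetical names; substitute your own subscription, resource group, and NSG.
            String url = "https://management.azure.com/subscriptions/<subscription-id>"
                    + "/resourceGroups/<rg>/providers/Microsoft.Network/networkSecurityGroups/<nsg>"
                    + "/securityRules/AllowStorageOutbound?api-version=2019-11-01";

            // Outbound allow rule using a regional Storage service tag, per the Batch docs.
            String rule = "{ \"properties\": {"
                    + "\"protocol\": \"*\","
                    + "\"sourceAddressPrefix\": \"*\","
                    + "\"sourcePortRange\": \"*\","
                    + "\"destinationAddressPrefix\": \"Storage.UKSouth\","
                    + "\"destinationPortRange\": \"443\","
                    + "\"access\": \"Allow\","
                    + "\"priority\": 200,"
                    + "\"direction\": \"Outbound\" } }";

            HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                    .header("Authorization", "Bearer " + System.getenv("ARM_TOKEN"))
                    .header("Content-Type", "application/json")
                    .PUT(HttpRequest.BodyPublishers.ofString(rule))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }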
I am facing the same problem with Azure Machine Learning. We are trying to fight data exfiltration by using service endpoint policies to prevent sending data to any non-subscription storage accounts.
Since Azure ML compute depends on the Batch service, we were unable to run any ML compute when the SP policy was associated with the compute subnet.
Microsoft stated the following:
Filtering traffic on Azure services deployed into Virtual Networks: At this time, Azure Service Endpoint Policies are not supported for any managed Azure services that are deployed into your virtual network.
https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-service-endpoint-policies-overview#scenarios
I understand from this kind of restriction that any service that uses Azure Batch (which is almost all services in Azure?) cannot use SP policies, which makes them a rather useless feature...
In the end we removed the SP policy completely from our network architecture, and we consider it only for scenarios where you want to restrict access to specific storage accounts.

How to add an Azure Blob Storage endpoint to Traffic Manager [duplicate]

I have a simple bit of JSON to serve in response to an Azure Traffic Manager request, so ideally it would be stored in a blob storage account that is marked with a public read access policy. When I attempt this - using an external endpoint in ATM - I get a 400 HTTP response.
The endpoint shows online in the portal, which is interesting since issuing that URL through the browser also results in a 400 error. I have the health probe pointed at a public blob in the $root container.
My second attempt was to try an Azure Function as the endpoint; in this case the health probe results in a 'stopped' state. From older articles it appeared this would be returned for a Basic App Service plan (this is a Consumption plan), but I presume that's outdated at this point?
What's the resolution here? This shouldn't be this hard!
I checked this issue on my side and encountered the same behavior you mentioned. I then found existing issues about Traffic Manager and Blob Storage and about integration of Azure Functions with Traffic Manager.
Per my understanding, Traffic Manager does not support integration with Blob Storage, you could add your feature request here.
For integrating with Azure Functions, you need to make sure your app is on an eligible plan: only Web Apps at the Standard SKU or above can be used with Traffic Manager. For web apps below the Standard SKU, you could leverage Azure Functions Proxies. Here are some references you could refer to:
Traffic Manager - Web Apps as endpoints
Azure Functions Traffic Manager
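For the Functions Proxies route, a minimal proxies.json sketch that forwards probe and read traffic to a public blob container might look like the following; the storage account name and route are placeholders, not taken from the question:

    {
      "$schema": "http://json.schemastore.org/proxies",
      "proxies": {
        "blobPassthrough": {
          "matchCondition": {
            "methods": [ "GET" ],
            "route": "/{*path}"
          },
          "backendUri": "https://<account>.blob.core.windows.net/{path}"
        }
      }
    }

This keeps the Traffic Manager endpoint pointed at the function app (which it supports) while the actual content continues to live in blob storage.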

How to monitor Azure Classic VM using REST API or via Java SDK?

Hi, I want to monitor an Azure Classic VM using the REST API or the Java SDK. When I tried the REST API with the following URL (this URL worked for an ARM-based Azure VM):
https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/Preprod2-Resource-Group/providers/Microsoft.ClassicCompute/virtualMachines/cloudops-testvm1/providers/microsoft.insights/metrics?api-version=2016-09-01
I'm getting the following error:
{
  "code": "NotFound",
  "message": "Resource provider not found: [Microsoft.ClassicCompute]"
}
Please let me know whether this can be done via the REST API, or point me to an SDK that supports it.
My requirement is to monitor a Classic VM and collect Network In, Network Out, Percentage CPU, Disk Read Operations/Sec, Disk Write Operations/Sec, Disk Write Bytes, and Disk Read Bytes every 5 minutes.
This isn't a supported resource type through the Metrics API. The supported types and metrics are listed here: Supported metrics with Azure Monitor. You can check that link; for now, classic VMs are not supported.
According to your description, you could use the Metric Definitions API instead. It works for classic VMs:
https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.ClassicCompute/virtualMachines/<virtual_machine_name>/providers/microsoft.insights/metricdefinitions?api-version=2015-07-01
Metric information is stored in an Azure storage account, so you could also call the Storage API to get VM metrics. For more information, please refer to this link: Storage Analytics.
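As a sketch, calling the Metric Definitions endpoint from Java with the JDK's built-in HTTP client could look like this; the placeholders and the ARM_TOKEN environment variable holding a bearer token are assumptions:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ClassicVmMetricDefinitions {
        public static void main(String[] args) throws Exception {
            // Placeholders: fill in your subscription, resource group, and classic VM name.
            String url = "https://management.azure.com/subscriptions/<subscription_id>"
                    + "/resourceGroups/<resource_group>/providers/Microsoft.ClassicCompute"
                    + "/virtualMachines/<virtual_machine_name>"
                    + "/providers/microsoft.insights/metricdefinitions?api-version=2015-07-01";

            HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                    .header("Authorization", "Bearer " + System.getenv("ARM_TOKEN"))
                    .GET()
                    .build();
            // The response body is JSON describing which metrics the VM exposes.
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }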

How to check result of Resource Removal Operation for Azure Resource

Using the Resource Management API I can remove an Azure resource (https://learn.microsoft.com/en-us/rest/api/resources/resources#Resources_DeleteById). This API returns 202, meaning the removal is accepted, but the resource is not removed right away. The response header in my case contains an "x-ms-request-id" value. How can I use it to get the status of this operation and find out whether it succeeded? In my case I am removing the Log Analytics Solution resource.
Any help is greatly appreciated.
Azure takes some time to handle your request; you could leverage the Resources Get By Id operation to check whether your Azure resource still exists.
For a simple check, you could use resources.azure.com: choose your resource and inspect the details. I removed my Log Analytics resource and could then verify the result this way (screenshot not reproduced here).
UPDATE
According to your latest comment, I have checked the REST API again and tested the operations on both ASM and ARM; you could refer to them as follows:
For classic Azure Services (ASM)
You could use Get Operation Status with authentication using a management certificate to check the operation status.
For ARM
You could follow this tutorial about tracking asynchronous Azure operations: use the header values returned by the asynchronous REST operation (Azure-AsyncOperation or Location), then request that URL with Azure Active Directory authentication to determine the status of your operation.
Based on your Azure service, you need to use the ARM approach.
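For the ARM case, here is a hedged Java sketch of the pattern from that tutorial: issue the DELETE, read the Azure-AsyncOperation (or Location) header from the 202 response, and poll that URL until the operation finishes. The resource id, api-version, and token are caller-supplied placeholders:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class DeletePoller {
        private static final HttpClient CLIENT = HttpClient.newHttpClient();

        // Deletes a resource by id and polls the async-operation URL until it finishes.
        static void deleteAndWait(String resourceId, String apiVersion, String token) throws Exception {
            String deleteUrl = "https://management.azure.com" + resourceId + "?api-version=" + apiVersion;
            HttpRequest delete = HttpRequest.newBuilder(URI.create(deleteUrl))
                    .header("Authorization", "Bearer " + token)
                    .DELETE()
                    .build();
            HttpResponse<String> accepted = CLIENT.send(delete, HttpResponse.BodyHandlers.ofString());

            // 202 means "accepted"; the polling URL comes back in Azure-AsyncOperation or Location.
            String pollUrl = accepted.headers().firstValue("Azure-AsyncOperation")
                    .or(() -> accepted.headers().firstValue("Location"))
                    .orElse(null);
            if (pollUrl == null) {
                System.out.println("Completed synchronously: " + accepted.statusCode());
                return;
            }
            while (true) {
                Thread.sleep(10_000); // poll every 10 seconds
                HttpRequest poll = HttpRequest.newBuilder(URI.create(pollUrl))
                        .header("Authorization", "Bearer " + token)
                        .GET()
                        .build();
                HttpResponse<String> status = CLIENT.send(poll, HttpResponse.BodyHandlers.ofString());
                // An Azure-AsyncOperation body reports Succeeded/Failed/Canceled; a Location
                // URL keeps returning 202 until the operation completes.
                if (status.statusCode() != 202 && !status.body().contains("\"InProgress\"")) {
                    System.out.println("Finished: " + status.statusCode() + " " + status.body());
                    return;
                }
            }
        }
    }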

How to find Azure Subscription Quota

I googled and could not find an accurate answer.
When I try to deploy the web application to a different data centre, I get the error message below:
Server response = 40652 Cannot move or create server.
Subscription 'xxxxxxxxxxxxx' will exceed server quota.
Could someone please help me understand the following:
Where do I find the maximum quota for a given subscription in the management portal?
Does the error above mean the quota was exceeded for Azure SQL Server or for hosted services?
How many servers can one subscription create?
I used Cerebrata Cloud Management Studio, which shows the subscription details below:
Cores: 9 / 20
Hosted Services: 4 / 20
Storage: 8 / 20
I am not sure whether Azure SQL Server belongs to hosted services, storage, or something else that's not shown above.
Thanks.
For Azure SQL servers, there is a hidden default maximum of 6 Azure SQL servers (not databases). Once you attempt to create the 7th, you will receive this error: New-AzureSqlDatabaseServer : Cannot move or create server. Subscription 'XXXXXX-XXXX-XXXXX-XXXXX-XXXXXXXXXX' will exceed server quota. Submit a billing request to increase the quota limit on SQL Azure Servers.
You can find the quota information in the Azure management portal. Scroll through the list of items in the left bar and at the end you will find "Settings". When you click "Settings" and switch to the "Usage" tab, you can see the quota information (screenshot not reproduced here).
Hope this helps.
I don't think the portal exposes this functionality (or at least I could not find it :)). However, if you're interested in quota information for some of the services, you can retrieve it programmatically by performing the Get Subscription operation on your subscription: http://msdn.microsoft.com/en-us/library/windowsazure/hh403995.aspx. This operation tells you how many cores are available to your subscription and how many storage accounts and cloud services you can create; a sketch follows.
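A hedged Java sketch of that Get Subscription call, assuming you have already uploaded a management certificate and exported it to a local azure-mgmt.pfx file; the file name, PFX_PASSWORD variable, and subscription id placeholder are illustrative:

    import java.io.FileInputStream;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.security.KeyStore;
    import javax.net.ssl.KeyManagerFactory;
    import javax.net.ssl.SSLContext;

    public class GetSubscriptionQuota {
        public static void main(String[] args) throws Exception {
            // The classic (ASM) API authenticates with a management certificate (mutual TLS).
            char[] password = System.getenv("PFX_PASSWORD").toCharArray();
            KeyStore keyStore = KeyStore.getInstance("PKCS12");
            try (FileInputStream pfx = new FileInputStream("azure-mgmt.pfx")) {
                keyStore.load(pfx, password);
            }
            KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
            kmf.init(keyStore, password);
            SSLContext ssl = SSLContext.getInstance("TLS");
            ssl.init(kmf.getKeyManagers(), null, null);

            HttpRequest request = HttpRequest.newBuilder(
                            URI.create("https://management.core.windows.net/<subscription-id>"))
                    .header("x-ms-version", "2013-08-01")
                    .GET()
                    .build();
            // The XML response includes MaxCoreCount, MaxStorageAccounts, MaxHostedServices, etc.
            HttpResponse<String> response = HttpClient.newBuilder().sslContext(ssl).build()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }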
These details are also available through Azure Management Studio/Cloud Storage Studio from Cerebrata (http://www.cerebrata.com), if you're looking for 3rd-party tools. After you add a subscription there, just right-click your subscription node and click "View Subscription Properties".
You could also find the quota information on the pricing page: https://www.windowsazure.com/en-us/offers/commitment-plans (just scroll down to the Usage Quotas section).
One last comment: you can easily increase the quota by contacting customer support; the link is on the pricing page mentioned above.
Update
As far as the quota for SQL Azure is concerned, I could not find a limit on the maximum number of database servers allowed per subscription; however, there is a limit of 149 user databases (150 including the master database) per server: http://msdn.microsoft.com/en-us/library/windowsazure/ee336245.aspx#dcasl. Can you ensure you're not exceeding that quota?
