I am getting this error while creating a new HDInsight cluster - Azure

"You do not have the minimum cores available,12, to create a cluster in Est des Etats-unis, please select different location or subscription".
I tried with two subscriptions and many locations, but I got the same error.

You need to follow this document to request a quota increase for the specific region you are interested in. This is a routine operation; it usually takes a couple of days (faster if you have EA support). Once your request is fulfilled, you can proceed with creating your cluster.
Each Azure subscription has its own quotas you must adhere to. By default you are limited to 10 cores per VM series for most VM types.
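As a sketch of that arithmetic (the node counts and per-VM core figures below are illustrative assumptions, chosen to match the 12-core minimum in the error message):

```python
# Hypothetical sketch: why a 10-core default quota blocks a 12-core
# HDInsight cluster. Node counts and sizes are illustrative assumptions.

def cluster_cores(head_nodes, head_cores, worker_nodes, worker_cores):
    """Total vCPUs a cluster deployment will request."""
    return head_nodes * head_cores + worker_nodes * worker_cores

# A minimal HDInsight layout: 2 head nodes + 1 worker, 4 cores each.
needed = cluster_cores(head_nodes=2, head_cores=4,
                       worker_nodes=1, worker_cores=4)  # 12 cores
default_quota = 10  # typical default per-series limit

print(needed, needed <= default_quota)  # 12 False -> quota increase required
```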


HDInsight Cores Quota increase

I have two different HDInsight deployments that I need to deploy. One uses the D12_v2 VM type and the second uses the DS3_v2 VM type.
Although both VM types use the same number of cores, would the deployments work if I just request a quota increase for the Dv2 series? Note that only a single deployment will exist at any given time.
Although both the VM types use the same number of cores, would the
deployments work if I just request a quota increase of the Dv2-series
type?
No, it won't work that way, because the two sizes belong to different VM series (Dv2 and DSv2). Even though they use the same number of cores, a deployment will fail in that region if your subscription does not have sufficient quota for the relevant VM series, since quota is enforced per series as well as against the total vCPUs available for that region.
You can refer to this Microsoft document for the VM series specifications.
So, for your requirement, you have to create a quota request for both series in that particular region.
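A toy sketch of how per-series quota works, with all quota numbers invented for illustration:

```python
# Sketch (assumed numbers): quotas are tracked per VM series, so a
# D12_v2 deployment draws on the Dv2 quota while DS3_v2 draws on DSv2.

series_quota = {"Dv2": 0, "DSv2": 0}   # cores granted so far (assumption)

def series_of(vm_size):
    # Illustrative mapping for the two sizes in the question.
    return {"D12_v2": "Dv2", "DS3_v2": "DSv2"}[vm_size]

def can_deploy(vm_size, cores_needed):
    return cores_needed <= series_quota[series_of(vm_size)]

# Raising only the Dv2 quota does not help the DS3_v2 deployment:
series_quota["Dv2"] = 16
print(can_deploy("D12_v2", 12))  # True
print(can_deploy("DS3_v2", 12))  # False -> separate DSv2 request needed
```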
Reference for Quota limits of VM:
Request an increase in vCPU quota limits per Azure VM series - Azure supportability | Microsoft Docs
Reference for Quota limits of HDInsights:
CPU Core quota increase request - Azure HDInsight | Microsoft Docs
You should include both VMs in your request.
Please refer to the following document, which provides information about requesting a quota increase for HDInsight. Be sure to ask for HDInsight quota, not regular Compute-VM quota. In the text box entry, state which VMs you need and the request will be processed accordingly.
Requesting quota increases for Azure HDInsight

Error when creating DevOps project on Azure: Cores quota have ran out in this region, kindly choose a different region or VM

When I try to create a DevOps project as described at https://learn.microsoft.com/en-us/azure/devops-project/azure-devops-project-aks, I get the error message below, which apparently hasn't been grammar-checked! I tried different regions and lowered the number of nodes to 1, but I still got the error.
Cores quota have ran out in this region, kindly choose a different
region or VM which requires lesser cores.
I think you are deploying your project to a VM. The error is caused by your cores quota limit. First go to Subscription → Usage + quotas in the Azure portal to check the limit for each region.
In fact, vCPU quotas for virtual machines and virtual machine scale sets are enforced at two tiers for each subscription, in each region: the first tier is the total regional vCPU limit (across all VM series), and the second is the per-VM-series vCPU limit. If you exceed either of these, the VM deployment will not be allowed.
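The two-tier check can be sketched like this (all limit and usage values below are assumptions):

```python
# Sketch of the two-tier vCPU quota check: an allocation must fit the
# regional total AND the per-series limit. All numbers are invented.

regional_limit = 20                      # total regional vCPUs
series_limits = {"Dv2": 10, "B": 10}     # per-series vCPU limits
usage = {"total": 16, "Dv2": 8, "B": 8}  # current usage in the region

def allocation_allowed(series, cores):
    within_region = usage["total"] + cores <= regional_limit
    within_series = usage[series] + cores <= series_limits[series]
    return within_region and within_series  # must pass BOTH tiers

print(allocation_allowed("Dv2", 2))  # True: 18 <= 20 and 10 <= 10
print(allocation_allowed("Dv2", 6))  # False: regional tier exceeded
```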
On the Usage + quotas page, you can look up the current quota for your chosen region using the Quota and Usage filters; the current usage and quota limit values are displayed at the end of each row.
If you need to request an increase to finish your deployment, click the button in the upper right. Detailed steps are here: Request Increase

Operation results in exceeding quota limits of Core. Maximum allowed: 4, Current in use: 4, Additional requested: 4. While in 14 day free trial

I'm using the 14-day Premium free trial. I'm trying to create and run a cluster in Databricks (I'm following the quick start guide). However, I'm getting the following error: "Operation results in exceeding quota limits of Core. Maximum allowed: 4, Current in use: 4, Additional requested: 4." I can't bump up the limit because I am on the free trial. I'm trying to run only 1 worker on the weakest worker type. I've already tried deleting all my subscriptions and made sure that there are no other clusters in use.
Edit: I'm thinking it might be because the worker and the driver each use 4 cores. Is there a way to use Databricks in the free trial?
I think these are your options:
log a support request to ask for more quota (apparently not an option for free/trial subscriptions)
use different VM types for the driver and the workers (like Standard A2 for the driver and Standard D2 for the workers), because different series have separate core limits
use smaller nodes (which I think you've mentioned is not possible), although it might be possible, just not through the portal
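The numbers in the error message suggest this arithmetic (the per-node core counts are the asker's guess, not confirmed figures):

```python
# Sketch of the quota arithmetic behind the error message. Core counts
# per node are assumptions based on the asker's guess that the driver
# and the worker each use a 4-core VM.

free_trial_quota = 4        # "Maximum allowed: 4"
driver_cores = 4            # already allocated: "Current in use: 4"
worker_cores = 4            # "Additional requested: 4"

requested_total = driver_cores + worker_cores
print(requested_total, requested_total <= free_trial_quota)  # 8 False
```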
An Azure Free Trial subscription is limited to VMs with a total of 4 vCPUs.
Note: Free Trial subscriptions are not eligible for limit or quota increases.
If you have a Free Trial subscription, you can upgrade to a Pay-As-You-Go subscription.
Upgrade Azure Free Trial to Pay-As-You-Go
For more details, refer "Azure subscription and service limits, quotas, and constraints".

Allocation failed in Azure and cannot delete VMs

I use Microsoft Azure and I subscribed to HDInsight, located in Japan. A couple of days ago, I mistakenly removed all the VMs I had been using, and I decided to recover them from the VHD files. However, I changed my mind and created new VMs instead of restoring them. I successfully completed the installation of the new VMs, installed Hadoop and Spark, and used them without problems. A few days later, though, when I started my VMs, they were strangely slow to come up and eventually threw the error below.
Provisioning failed. Allocation failed. Please try reducing the VM
size or number of VMs, retry later, or try deploying to a different
Availability Set or different Azure location.. AllocationFailed.
I tried to follow the Azure documentation: I changed the VM size and attempted to delete the VMs I had made, but both deleting and resizing failed with the message below.
Provisioning failed. Delete/Deallocate operation on VM 'hadoop-master' failed because the remaining VMs in the Availability Set 'spark-avs' cannot be allocated together. Changes in Availability Set allocation need to be executed atomically. Please deallocate or delete some or all of these VMs before retrying the current operation.
Please note that this VM is not allocated and won't accrue any charges.
Details: {
"resourceType": "Microsoft.WindowsAzure.ComputeResourceProvider.Core.Strings, CRP.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null",
"ResourceCode": "ComputeAllocationFailure",
"ResourceParameters": []
}. CannotAllocateRemainingVMsInAvailabilitySet
Note that all the VMs I re-installed are in the same availability set, and I selected Korea South as the new region (the previous one was Japan).
I have tried starting them many times, but it always fails.
How can I resolve this issue?
Note: When you create a VM, restart stopped (de-allocated) VMs, resize a VM, or when you add new instances, Microsoft Azure allocates compute resources to your subscription. You may occasionally receive errors when performing these operations even before you reach the Azure subscription limits.
This article “Troubleshoot allocation failures when you create, restart, or resize Linux VMs in Azure” explains the causes of some of the common allocation failures and suggests possible remediation.
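As a toy model of the atomic allocation rule mentioned in the quoted error (capacities are invented for illustration):

```python
# Toy model of the error's atomic allocation rule: an operation on VMs
# in one availability set succeeds only if the remaining VMs can all be
# allocated together. The free-core capacities here are invented.

def reallocate_set(vm_core_counts, cluster_free_cores):
    """All-or-nothing: either every remaining VM fits, or none do."""
    return sum(vm_core_counts) <= cluster_free_cores

avset = [4, 4, 4]                 # remaining VMs' core counts
print(reallocate_set(avset, 16))  # True: 12 <= 16, set reallocates
print(reallocate_set(avset, 8))   # False: the whole operation fails
```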
If you choose the Extra Small, Small, or Medium sizes, you will receive a validation-failed error message.
Note: Head node recommended size is D3 v2, D4 v2, and D12 v2.
For more details, refer “Default node configuration and virtual machine sizes for clusters”.
Before deploying an HDInsight cluster, plan for the desired cluster capacity by determining the needed performance and scale. This planning helps optimize both usability and costs.
Some cluster capacity decisions cannot be changed after deployment. If the performance parameters change, a cluster can be dismantled and re-created without losing stored data.
For more details, refer “Capacity planning for HDInsight clusters”.
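A minimal capacity-planning sketch along these lines (the per-size vCPU counts below are assumptions; verify them against the VM series documentation):

```python
# Hypothetical capacity-planning sketch: estimate a cluster's total vCPU
# demand before deployment so it can be checked against regional quota.
# The per-size core counts are assumptions, not authoritative figures.

vm_cores = {"D3_v2": 4, "D4_v2": 8, "D12_v2": 4}

def planned_vcpus(head_size, head_count, worker_size, worker_count):
    return (vm_cores[head_size] * head_count
            + vm_cores[worker_size] * worker_count)

# Two D12 v2 head nodes plus four D3 v2 workers:
total = planned_vcpus("D12_v2", 2, "D3_v2", 4)
print(total)  # 24
```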

ADF activities and Ondemand HDInsight instances

I am new to using on demand hd insight. I have a basic question -
I have multiple activities running simultaneously in separate ADF pipelines, each using an HDInsight on-demand linked service. How many instances of HDInsight get created? Is it one instance per activity?
I got a bit confused because the documentation states that each instance has a time-to-live value; if a new job arrives within that window, the instance will process it. Does the new job need to come from an activity in the same pipeline that originally created the instance, or is the instance shared across activities in other pipelines?
Also, I just wanted to confirm my understanding that the core count used for on-demand instances does not count towards the subscription usage count.
Sorry if these questions are very basic, but any help is much appreciated.
Partial answers to my questions provided below - refer to comments section above for open points.
The answer about sharing an instance across pipelines is in the documentation: "If the timetolive property value is appropriately set, multiple pipelines can share the instance of the on-demand HDInsight cluster."
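A toy illustration of that time-to-live behaviour (the TTL units and reuse policy here are simplified assumptions):

```python
# Toy sketch of the timetolive behaviour described in the quote: if a
# new job (from ANY pipeline) arrives before the TTL expires, the
# existing on-demand cluster is reused instead of a new one spinning up.

class OnDemandCluster:
    def __init__(self, ttl):
        self.ttl = ttl     # minutes the instance stays alive while idle
        self.idle = 0      # minutes since the last job finished

clusters = []

def submit_job(pipeline_name):
    """Reuse a live cluster regardless of which pipeline submits."""
    for c in clusters:
        if c.idle < c.ttl:          # TTL not expired: reuse it
            c.idle = 0
            return c
    c = OnDemandCluster(ttl=30)     # otherwise provision a new instance
    clusters.append(c)
    return c

a = submit_job("pipeline-A")
b = submit_job("pipeline-B")   # arrives within TTL -> same instance
print(a is b, len(clusters))   # True 1
```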
Regarding my other question on CPU limits for HDInsight: per the Azure limits documentation, on-demand HDInsight cores are restricted to 60 per subscription, and this is separate from the general core limit per subscription.
Interestingly, manually created HDInsight clusters also have a CPU limit, as mentioned in this Stack Overflow link. As of today it is 170 per subscription, obtainable by issuing the PowerShell command Get-AzureHDInsightProperties. Again, I understand this limit is separate from the subscription's general core limit.
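A sketch of the separate quota pools described above (the 60 and 170 figures are the ones quoted here and may well have changed since):

```python
# Sketch: on-demand HDInsight cores and manually created HDInsight
# cores are tracked in pools separate from the general compute quota.
# Limits below are the figures quoted above, which may be outdated.

limits = {"ondemand_hdinsight": 60, "hdinsight": 170}
usage = {"ondemand_hdinsight": 0, "hdinsight": 0}

def reserve(pool, cores):
    """Try to reserve cores from one pool; other pools are unaffected."""
    if usage[pool] + cores > limits[pool]:
        return False
    usage[pool] += cores
    return True

# Two simultaneous on-demand pipelines at 24 cores each fit in 60:
print(reserve("ondemand_hdinsight", 24))  # True
print(reserve("ondemand_hdinsight", 24))  # True
print(reserve("ondemand_hdinsight", 24))  # False: 72 > 60
```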
