HDInsight cores quota increase - Azure

I have two different HDInsight clusters that I need to deploy. One of the deployments uses the D12_v2 VM type and the second uses the DS3_v2 VM type.
Although both VM types use the same number of cores, would the deployments work if I just request a quota increase for the Dv2 series? Note that only a single deployment will exist at a time.

Although both the VM types use the same number of cores, would the deployments work if I just request a quota increase of the Dv2-series type?
No, it won't work that way, because the two sizes belong to different VM series: Dv2 and DSv2. Even though they use the same number of cores, a deployment will fail in a region where your subscription lacks sufficient quota for that VM series, since each series has its own vCPU limit in addition to the total regional vCPU limit.
You can refer to this Microsoft document for the VM series specifications.
So, for your requirement, you have to create a quota request for both series in the particular region.
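To see why the two sizes draw from different quota buckets, here is a small Python sketch that derives the series from a size name. The parsing rule is illustrative only; real series/family names come from the Azure SKU APIs.

```python
import re

def vm_series(size: str) -> str:
    """Derive the quota series (family) from a VM size name.
    Illustrative only: real family names come from the Azure SKU APIs."""
    name = size.removeprefix("Standard_")
    # Letters before the digits plus the version suffix identify the series.
    m = re.match(r"([A-Z]+)(\d+)(?:_(v\d+))?", name)
    letters, _, version = m.groups()
    return letters + (version or "")

print(vm_series("Standard_D12_v2"))   # Dv2
print(vm_series("Standard_DS3_v2"))   # DSv2
```

Both sizes have 4 cores, but they map to different series, so each draws from a separate per-series quota.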
Reference for Quota limits of VM:
Request an increase in vCPU quota limits per Azure VM series - Azure supportability | Microsoft Docs
Reference for quota limits of HDInsight:
CPU Core quota increase request - Azure HDInsight | Microsoft Docs

You should include both VMs in your request.
Please refer to the following document, which explains how to request a quota increase for HDInsight. Be sure to ask for HDInsight quota, not regular Compute-VM quota. In the text box of the request, state which VM sizes you need and the request will be processed accordingly.
Requesting quota increases for Azure HDInsight

Related

Is there a possibility to create Azure Kubernetes Cluster with virtual machine which supports GPU computing for Azure Pass Sponsorship?

I want to create an Azure Kubernetes Service resource which supports GPU computing. I have a huge amount of data and a Docker image which requires NVIDIA drivers. When I attempt to create it, I get:
Size not available
This size is currently unavailable in eastus for this subscription: NotAvailableForSubscription.
I get this message for every location I choose. I suppose the problem is that I am using an Azure Pass Sponsorship. Is there any way to do it on this kind of subscription?
You receive this error when the resource SKU you have selected (such as a Kubernetes cluster or a VM size) is not available in the location you have selected.
You can check product availability in the selected region via Products available by region.
To determine which SKUs are available in a region or zone, use the Get-AzComputeResourceSku command and filter the results by location. You need the latest version of the Az PowerShell module for this command.
Get-AzComputeResourceSku | where {$_.Locations -icontains "centralus"}
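The same filtering can be sketched in Python against hypothetical SKU records. The field names and SKU entries below are made up for illustration; real records come from Get-AzComputeResourceSku, where subscription-level restriction reasons such as NotAvailableForSubscription appear on the SKU.

```python
# Hypothetical SKU records, loosely shaped like Get-AzComputeResourceSku output.
skus = [
    {"name": "Standard_NC6", "locations": ["eastus"],
     "restrictions": ["NotAvailableForSubscription"]},
    {"name": "Standard_D2_v3", "locations": ["eastus", "centralus"],
     "restrictions": []},
]

def usable_in(skus, location):
    """SKUs you can actually deploy in a region: the region must be listed
    and the SKU must carry no subscription-level restrictions."""
    return [s["name"] for s in skus
            if location in s["locations"] and not s["restrictions"]]

print(usable_in(skus, "eastus"))  # ['Standard_D2_v3']
```

Note that a SKU can be listed for a region yet still be restricted for your particular subscription, which is exactly the NotAvailableForSubscription case above.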
Refer to this documentation for more information.
Please refer to this document for a list of common Microsoft Azure limits, quotas, and constraints for the Azure Sponsorship subscription.
The following monthly usage quotas apply. If you need more than these limits, contact customer service at any time so they can understand your needs and adjust the limits appropriately.
Reference: Microsoft Azure Sponsorship Offer

Error when creating DevOps project on Azure: Cores quota have ran out in this region, kindly choose a different region or VM

When I try to create a DevOps project as described at https://learn.microsoft.com/en-us/azure/devops-project/azure-devops-project-aks, I get the error message below, which apparently hasn't been grammar-checked! I tried different regions and lowered the number of nodes to 1, but I still got the error.
Cores quota have ran out in this region, kindly choose a different region or VM which requires lesser cores.
I think you are deploying your project to a VM. The error is caused by your cores quota limit. First go to Subscription → Usage + quotas in the Azure portal to check the limits for your different regions.
In fact, vCPU quotas for virtual machines and virtual machine scale sets are enforced at two tiers for each subscription, in each region. The first tier is the total regional vCPU limit (across all VM series), and the second tier is the per-VM-series vCPU limit. If a deployment would exceed either of these, it is not allowed.
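A minimal sketch of the two-tier check, with illustrative quota numbers (these are not real Azure limits):

```python
def can_deploy(requested_vcpus, series, usage):
    """A deployment must fit under BOTH the total regional vCPU limit
    and the per-series vCPU limit (illustrative numbers, not real quotas)."""
    total_ok = usage["total_used"] + requested_vcpus <= usage["total_limit"]
    series_ok = (usage["series_used"][series] + requested_vcpus
                 <= usage["series_limit"][series])
    return total_ok and series_ok

usage = {
    "total_limit": 20, "total_used": 16,
    "series_limit": {"Dv2": 8}, "series_used": {"Dv2": 8},
}
print(can_deploy(4, "Dv2", usage))  # False: the Dv2 series quota is exhausted
```

This is why changing region or picking a VM size from a less-used series can unblock a deployment even when the regional total still has headroom.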
On the Usage + quotas page, you can search the current quota of your chosen region using the quota and usage filters; the current usage and quota limit values are displayed at the end of each row.
If you need to request an increase to finish your deployment, click the button at the upper right. Detailed steps are here: Request Increase

Operation results in exceeding quota limits of Core. Maximum allowed: 4, Current in use: 4, Additional requested: 4. While in 14 day free trial

I'm using the 14-day Premium free trial. I'm trying to create and run a cluster in Databricks (I'm following the quick start guide). However, I'm getting the following error: "Operation results in exceeding quota limits of Core. Maximum allowed: 4, Current in use: 4, Additional requested: 4." I can't bump up the limit because I am on the free trial. I'm trying to run only 1 worker on the weakest worker type. I've already tried deleting all my subscriptions and made sure that there are no other clusters being used.
Edit: I'm thinking it might be because the worker and the driver each use 4 cores. Is there a way to use Databricks on the free trial?
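If the edit's suspicion is right, the arithmetic is simple addition against the trial's cap, using the numbers from the error message:

```python
def exceeds_quota(max_allowed, in_use, requested):
    """True if the request would push core usage past the quota."""
    return in_use + requested > max_allowed

# From the error: the driver already holds 4 cores, the single worker
# requests 4 more, and the free-trial cap is 4.
print(exceeds_quota(max_allowed=4, in_use=4, requested=4))  # True
```

So even a one-worker cluster needs 8 cores (driver + worker), which a 4-vCPU trial cannot satisfy.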
I think these are your options:
log a support request to ask for more quota (apparently not an option for free/trial subscriptions)
use different VM types for the driver and the workers (like Standard A2 for the driver and Standard D2 for the workers), because different series have separate core limits
use smaller nodes (which you've mentioned is not possible, though it might be possible outside the portal)
An Azure Free Trial subscription is limited to a total of 4 vCPUs.
Note: Free Trial subscriptions are not eligible for limit or quota increases.
If you have a Free Trial subscription, you can upgrade to a Pay-As-You-Go subscription.
Upgrade Azure Free Trial to Pay-As-You-Go
For more details, refer "Azure subscription and service limits, quotas, and constraints".

Microsoft Azure with Kubernetes and Helm "The maximum number of data disks allowed to be attached to a VM of this size is 4."

I'm trying to run different helm charts and I keep running into this error. It's much more cost effective for me to run 3-4 cheaper nodes than 1 or 2 very expensive nodes that can have more disks attached to them.
Is there a way to configure kubernetes or helm to have a disk attach limit or to set the affinity of one deployment to a particular node?
It's very frustrating that all the deployments try to attach to one node and then run out of disk attach quota.
Here is the error:
Service returned an error. Status=409 Code="OperationNotAllowed" Message="The maximum number of data disks allowed to be attached to a VM of this size is 4."
Is there a way to configure kubernetes or helm to have a disk attach limit or to set the affinity of one deployment to a particular node?
For now, ACS Kubernetes provisions PVCs backed by Azure managed disks or blob disks, so the limit is the VM size's data-disk limit.
Azure does not support changing the number of data disks a given VM size can attach. The VM sizes and their maximum data disks are listed here:
For more information about the limits, please refer to this link.
By the way, the maximum capacity of a data disk is 2 TB, so one workaround is to extend each disk toward 2 TB instead of attaching more disks.
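As for pinning a deployment to a particular node, a minimal Kubernetes sketch uses a nodeSelector. The deployment name, image, and node hostname (aks-agent-0) below are assumptions for illustration; any node label works.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pinned-app        # hypothetical deployment name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pinned-app
  template:
    metadata:
      labels:
        app: pinned-app
    spec:
      # Schedule only onto the node whose hostname label matches.
      nodeSelector:
        kubernetes.io/hostname: aks-agent-0
      containers:
        - name: app
          image: nginx
```

Spreading disk-heavy deployments across nodes this way keeps each node under its data-disk attach limit.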

Azure Batch Account quota

I want to increase the quota for creating Batch accounts in the same region.
For example, I have a limit of 3 Batch accounts in the Central US region, but I want to create 2 more Batch accounts in that region.
Is there any extra cost associated with increasing the quota?
Based on the Azure subscription limits, you can have a maximum of 50 Batch accounts per region per subscription. To increase the quota, you will need to contact Azure Support.
Regarding any extra cost, I don't think so. Based on the pricing page, you are not charged for the account per se; rather, you're charged for the compute and other resources you deploy in these accounts to run your batch jobs.
There is no charge for Batch itself, only the underlying compute and other resources consumed to run your batch jobs. For compute, Cloud Services, Linux Virtual Machines or Windows Virtual Machines can be utilised by Batch.
