Azure Container Service DCOS fails to deploy due to QuotaExceeded - azure

I am trying to deploy Azure Container Service with the DC/OS orchestrator (1 master, 1 agent, Standard D2_v2 agent size with 2 cores) in West Europe, using the Azure portal. I pass validation at the end of the process, but when I click 'OK' to deploy I get this error:
"QuotaExceeded", "message": "Operation results in exceeding quota limits of Core. Maximum allowed: 4, Current in use: 0, Additional requested: 6."
Why does the service require 6 cores when I am using the D2_v2 agent size, which needs only 2 cores?
Here are the pictures with configuration and error:
https://imgur.com/a/js8T9
I tried doing the same with the Azure CLI as in this guide and got the same error.
Edit: I am on a free trial subscription.

When we use the Azure Marketplace to deploy ACS DC/OS, Azure creates one master and two agent nodes, so we need at least 6 cores.
But a free trial subscription has a limit of 4 cores.
As a workaround, we can deploy DC/OS without using the Azure Marketplace: create two VMs and install DC/OS on them ourselves, one master and one node.
For more information about deploying DC/OS on VMs, please refer to this article.
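A minimal sketch of that workaround with the Azure CLI, assuming a hypothetical resource group dcos-rg and VM names dcos-master and dcos-agent; two Standard_D1_v2 VMs (1 core each) stay within the 4-core free trial quota, and DC/OS is then installed on them by hand as in the linked article:

az group create --name dcos-rg --location westeurope

# Two 1-core VMs (2 cores total) fit inside the 4-core free-trial limit.
az vm create --resource-group dcos-rg --name dcos-master \
  --image UbuntuLTS --size Standard_D1_v2 --generate-ssh-keys
az vm create --resource-group dcos-rg --name dcos-agent \
  --image UbuntuLTS --size Standard_D1_v2 --generate-ssh-keys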

Because you are deploying a cluster, meaning several nodes linked together; in this case, 3 nodes. You need to use another region or request additional cores for this region (via a support ticket).
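To see how many cores are still available in a region before retrying (or before raising the ticket), the current Azure CLI can report quota usage per region:

az vm list-usage --location westeurope --output table   # shows current core usage vs. quota limit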

Related

Highly available, redundant Redis-cluster over kubernetes

The objective is to create a highly available redis cluster using kubernetes for a nodeJS client. I have already created the architecture as below:
Created a Kubernetes cluster of Kmaster with 3 nodes (slaves).
Then I created statefulsets and persistent volumes (6 - one for each POD).
Then created the Redis pods, 2 on each node (3 masters and 3 replicas, one for each master).
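For reference, the resulting layout can be double-checked with plain kubectl (generic commands, not tied to any particular manifest names):

kubectl get pods -o wide     # shows which node each Redis pod landed on
kubectl get statefulsets     # the Redis StatefulSets
kubectl get pvc              # the persistent volume claims, one per pod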
I need to understand the role of Redis Sentinel here: how does it manage monitoring, scaling, and HA for the redis-cluster pods across the nodes? I understand Sentinel should be on each node and doing its job, but what is the right architecture here?
P.S. I have created a local setup for now, but ultimately this goes on Azure, so any suggestions with respect to Azure are also welcome.
Thanks!
From an Azure perspective, you have two options; if you are set on option two but are looking for the Sentinel architecture piece, there are business continuity and high availability options in both IaaS (Linux VM scale sets) and PaaS services that go beyond the Sentinel component.
The first option is Azure Cache for Redis (PaaS), where you choose and deploy your desired service tier (the Premium tier is required for HA) and connect your client applications. Please see: Azure Cache for Redis FAQ and Caching Best Practice.
The second option is to deploy the solution (as you have detailed) as an IaaS solution built from Azure VMs. There are a number of Redis Linux VM images to choose from in the Azure Marketplace, or you can create a Linux VM OS image from your on-premises solution and migrate it to Azure. The Sentinel component is enabled on each server (master, slavea, slaveb, ...). There are networking and other considerations too. For building a system from scratch, please see: How to Setup Redis Replication (with Cluster-Mode Disabled) in CentOS 8 – Part 1 and How to Setup Redis For High Availability with Sentinel in CentOS 8 – Part 2.
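As a rough sketch of the Sentinel piece on that IaaS route (the master address 10.0.0.4 and the file path are placeholders): every node runs a Sentinel process pointed at the same master, and a quorum of 2 means two Sentinels must agree before a failover is triggered.

cat > /etc/redis/sentinel.conf <<'EOF'
# Watch the master named "mymaster"; quorum of 2 for failover decisions.
sentinel monitor mymaster 10.0.0.4 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1
EOF
redis-sentinel /etc/redis/sentinel.conf   # run on master, slavea and slaveb alike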

Azure service fabric cluster provisioning questions

Once I create the Azure Service Fabric cluster through the Azure portal, I am not sure how long I am supposed to wait for the cluster to be up and running. I am only using a bare-minimum configuration (node type count 1, the Bronze tier with 3 VMs, etc.). Will it take an hour or two, more, or less? Will there be some kind of indication that cluster deployment is done and it is available for me to publish code from Visual Studio? Also, I am not seeing any nodes in the provisioned cluster in the portal.
Thanks.
Raghu/..
Per mckjerral's suggestion, I changed my VM size from A1 Standard to DS1 Standard and the reliability tier from Bronze to Silver; it deployed successfully and I was able to publish my Service Fabric app to it. Thank you for your help.
Raghu/..
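Regarding the original question of knowing when the cluster is ready: besides watching the portal, one way (assuming the Azure CLI and placeholder resource group and cluster names) is to query the cluster resource's state, which typically reads something like WaitingForNodes or Deploying while provisioning and Ready once you can publish to it:

az resource show --resource-group my-sf-rg --name my-sf-cluster \
  --resource-type "Microsoft.ServiceFabric/clusters" \
  --query properties.clusterState --output tsv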

Azure VM downgrade from A8

Is it possible to downgrade an Azure VM A8 (high compute) to a lower size like an A3? I keep getting the following error message when I try. I don't have an availability set set up. Thanks!
"Unable to upgrade the deployment. The requested VM size 'Large' may not be available in the resources supporting the existing deployment. Please try again later, try with a different VM size or smaller number of role instances, or create a deployment under an empty hosted service with a new affinity group or no affinity group binding. The long running operation tracking ID was: b2024fe9e93f6764bec3aa008756f0b7."
I recently discovered (via MS support tickets) that there are different "clusters" within Azure data centers, with different VM size compatibilities. In my case I had some cloud services in older clusters which didn't allow the newer "D-Series" VM sizes I wanted. The only solution was to create brand new cloud service instances from scratch and use Azure traffic manager to achieve a transition from the old servers to new ones.
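For what it's worth, on the current Resource Manager CLI this cluster constraint can be checked up front: the resize-options command lists only the sizes that the hardware behind an existing VM can actually host (resource group and VM names below are placeholders):

az vm list-vm-resize-options --resource-group my-rg --name my-vm --output table
az vm list-sizes --location westeurope --output table   # every size offered in the region, for comparison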

Unable to increase Virtual Machine Size through Azure Management Portal

I am trying to change the virtual machine size from 8 cores, 14 GB to A6 (4 cores, 28 GB) for one of my MSSQL Azure IaaS servers.
I am getting the following error:
"Unable to upgrade the deployment. The requested VM size may not be available in the resources supporting the existing deployment. Please try again later, try with a smaller VM size or smaller number of role instances, or create a deployment under an empty hosted service with a new affinity group or no affinity group binding. The long running operation tracking ID was: 1d8145d1977877978d1d8dffdd045d83."
I understand that there is a limitation on how much one can get from one subscription. However, this is a live server and I have another 4 servers running under the same subscription. Is there any way I can move this virtual machine from one subscription to another?
Otherwise, what is the right approach to increasing the size of this server?
Please advise at the earliest.
I got the answer from Support:
"When customer initially deployed service it got deployed under the cluster which does not support high memory VMs. Since customer is having deployment under the hosted service it cannot be pinned/migrated to a cluster which supports A6 or higher VM size. This is a by design behaviour as of now. Unfortunately, the only way customer can deploy an A6 VM is to delete and recreate the deployment with A6 size under the given hosted service. When customer tries to create it, then he/she will be allocated a cluster which supports A6 or higher VM size."
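That answer applies to the classic (hosted service) model. For comparison, on current Resource Manager VMs the usual way around the same hardware-cluster limitation is to deallocate first, resize, then start again, since a deallocated VM is re-placed on hardware that supports the new size (names below are placeholders):

az vm deallocate --resource-group my-rg --name my-sql-vm
az vm resize --resource-group my-rg --name my-sql-vm --size Standard_A6
az vm start --resource-group my-rg --name my-sql-vm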

Windows Azure: Deploying Cloud Services with A6 vmsize

In my cloud service I have one web role and one worker role. I changed my web role's VM size from Medium to A6.
When I tried to deploy to Windows Azure, I got the following error message:
The VM size (or combination of VM sizes) required by this deployment cannot be provisioned due to deployment request constraints. If possible, try relaxing constraints such as virtual network bindings, deploying to a hosted service with no other deployment in it and to a different affinity group or with no affinity group, or try deploying to a different region.
What does that mean?
Basically, you've asked for one of the new "uber" A6 instances (with additional memory/processor resources) and Azure was unable to provision your request (i.e. provide you with the required amount of cloud computing capacity for an A6 instance).
You could try deploying to a different geographic location or affinity group or just wait and try again.
