Total count of virtual machines if we opt for the Availability Zone option in a scale set of two virtual machines - Azure

I have created an Azure virtual machine scale set with two instances and also opted for the Availability Zone option (selected all three available choices: 1, 2, and 3). After the scale set was deployed, I can see that two virtual machine instances were created and are visible in the Azure portal. Now I need help building my understanding around the two questions below:
1. If only two instances get created in this scenario, how are they spread across the three selected availability zones (1, 2 & 3)?
2. If the same set of instances is created in each zone, how can I see the other remaining 4 instances (as two are already visible) in the portal?

Q1. If only two instances get created in this scenario, how are they spread across the three selected availability zones (1, 2 & 3)?
As far as I know, each instance is placed in one of the zones you selected. With three zones selected, the two instances are spread across those zones, and each zone effectively acts as its own fault domain and update domain. You can use this CLI command to list all the instances and see which zone each one is in:
az vmss list-instances -g group_name -n vmss_name
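For example, to list just the instance names together with the zone each one landed in, you can add a JMESPath query (a sketch reusing the placeholder group and scale set names above; the zones field is populated for zonal scale sets):
az vmss list-instances -g group_name -n vmss_name --query "[].{name:name, zone:zones[0]}" -o table
With two instances spread over zones 1-3, you should see two rows, each showing a different zone.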
Q2. If the same set of instances is created in each zone, how can I see the other remaining 4 instances (as two are already visible) in the portal?
There are no hidden replicas of the instances to see in the portal: a scale set with two instances creates only two VMs, no matter how many zones you select. Their placement across the zones is managed by the Azure platform, so you do not need to worry about that.
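If you want to confirm for yourself that only two VMs exist, you can also read the scale set's capacity (a sketch with the same placeholder names as above):
az vmss show -g group_name -n vmss_name --query "sku.capacity"
This returns 2 for a two-instance scale set, regardless of how many zones were selected.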

Related

Azure VMSS Flexible Orchestration - Custom Resource Names

I have created a VMSS with Flexible orchestration mode and proper names, yet the VMs, NICs, and IPs got randomly generated suffixes.
Is it possible to have VMs and their corresponding resources created automatically with custom names when adding instances through VMSS?
I’d like to have resource names like:
TST-WebServer1-VM
TST-WebServer2-VM
TST-WebServer1-VM-IP
TST-WebServer2-VM-IP
TST-WebServer1-VM-NIC
TST-WebServer2-VM-NIC
and so on.
https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes#instance-naming
When you create a VM and add it to a Flexible scale set, you have full control over instance names within the Azure Naming convention rules. When VMs are automatically added to the scale set via autoscaling, you provide a prefix and Azure appends a unique number to the end of the name.
Apparently you do not actually get full control over the names when instances are added automatically.
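One way to keep full control is to create the IP, NIC, and VM yourself with the names you want and attach the VM to the Flexible scale set, instead of letting autoscaling create it. A rough sketch following the naming scheme above (the resource group, VNet, image, and scale set names are hypothetical):
az network public-ip create -g TST-RG -n TST-WebServer1-VM-IP
az network nic create -g TST-RG -n TST-WebServer1-VM-NIC --vnet-name TST-VNet --subnet default --public-ip-address TST-WebServer1-VM-IP
az vm create -g TST-RG -n TST-WebServer1-VM --image Ubuntu2204 --nics TST-WebServer1-VM-NIC --vmss TST-WebServer-VMSS
Instances created this way keep the names you give them; only autoscaled instances get the prefix-plus-number treatment.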

Azure Kubernetes Service (AKS) and the primary node pool

Foreword
When you create a Kubernetes cluster on AKS you specify the type of VMs you want to use for your nodes (--node-vm-size). I read that you can't change this after you create the Kubernetes cluster, which would mean that you'd be limited to scaling horizontally rather than vertically whenever you add resources.
However, you can create different node pools in an AKS cluster that use different types of VMs for your nodes. So, I thought, if you want to "change" the type of VM that you chose initially, maybe add a new node pool and remove the old one ("nodepool1")?
I tried that through the following steps:
Create a node pool named "stda1v2" with a VM type of "Standard_A1_v2"
Delete "nodepool1" (az aks nodepool delete --cluster-name ... -g ... -n nodepool1
Unfortunately I was met with "Primary agentpool cannot be deleted".
Question
What is the purpose of the "primary agentpool" which cannot be deleted, and does it matter (a lot) what type of VM I choose when I create the AKS cluster (in a real world scenario)?
Can I create other node pools and let the primary one live its life? Will it cause trouble in the future if I have node pools that use larger VMs for their nodes while the primary one is still using "Standard_A1_v2", for example?
The primary node pool is the first node pool in the cluster, and you cannot delete it because that is currently not supported. You can create and delete additional node pools and just let the primary one be as it is. It will not cause any trouble.
For the primary node pool I suggest picking a VM size that makes more sense in the long run (since you cannot change it). The B-series would be a good fit, since they are cheap and their CPU/memory ratio is good for average workloads.
P.S. You can always scale the primary node pool to 0 nodes, or cordon the node and shut it down. You will have to repeat this after each upgrade, but otherwise it will work.
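A sketch of the scale-to-zero step the answer describes (cluster and resource group names are placeholders):
az aks nodepool scale --cluster-name myCluster -g myResourceGroup -n nodepool1 --node-count 0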
It looks like this functionality was introduced around the time of your question, allowing you to add new system nodepools and delete old ones, including the initial nodepool. After encountering the same error message myself while trying to tidy up a cluster, I discovered I had to set another nodepool to a system type in order to delete the first.
There's more info about it here, but in short, Azure node pools are split into two types ('modes' as they call them): System and User. When you create a single pool to begin with, it will be of the System type (favouring system pod scheduling), so it can be good to have a dedicated pool of a node or two for system use, and a second User node pool for the actual app pods.
So if you wish to delete your only System pool, you need to first create another node pool with the --mode switch set to 'System' (with your preferred VM size etc.); then you'll be able to delete the first. (Node pool modes can't be changed after the fact, only set at creation.)
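A minimal sketch of that sequence (cluster, pool, and VM size values are hypothetical):
az aks nodepool add --cluster-name myCluster -g myResourceGroup -n sysnodepool --mode System --node-vm-size Standard_D2s_v3
az aks nodepool delete --cluster-name myCluster -g myResourceGroup -n nodepool1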

Using Packer to Spin a VM and extract the image in an availability set

We have a corporate requirement (due to pricing and whitelisting) to have availability sets in our Azure subscription, and resources like compute should be spun up inside that particular availability set. Since Packer spins up a temporary VM inside a temporary resource group while creating the image, I am confused (since I did not find any documentation around it) about whether we can configure Packer to spin up the temporary VM inside the whitelisted availability set.
One possible way I can think of is to spin up the VM in the resource group which we created for the availability set (since everything in Azure needs to be inside a resource group). That way, I am guessing, it will be tracked as part of billing, but I am still not sure whether the temporary VM will be part of the availability set.
Please help and suggest if there is an alternative way to achieve the same.

Can I put a VM into another resource group than the availabilitySet?

I would like to keep each VM in a separate resource group for ease of lifecycle management. I have a cluster containing n VMs.
So I create one resource group for common things like the public IP and load balancer, and put the availabilitySet declaration into it because it also must be shared between the VMs.
Then I create each VM in a separate resource group and reference the availabilitySet with:
"availabilitySet": {
"id": "[resourceId('Microsoft.Compute/availabilitySets',variables('availabilitySetName'))]"
},
Of course, 'availabilitySetName' is defined.
When I deploy my template, I get an error saying:
{"error":{"code":"BadRequest","message":"Entity resourceGroupName in resource reference id /subscriptions/a719381f-1fa0-4b06-8e29-ad6ea7d3c90b/resourceGroups/TB_PIP_OPSRV_UAT/providers/Microsoft.Compute/availabilitySets/tb_avlbs_opsrv_uat is invalid."}}
I double-checked that the resource and availability set names are specified correctly.
Does it mean that I can't put an availability set in a separate resource group from the VM?
Unfortunately, having a VM use an availabilitySet in a different resource group is not supported :(.
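In other words, the availability set has to live in the same resource group as the VM it is assigned to. A minimal CLI sketch of the supported layout (all names hypothetical):
az vm availability-set create -g myResourceGroup -n myAvSet
az vm create -g myResourceGroup -n myVM --image Ubuntu2204 --availability-set myAvSet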
First of all, let me ask why you want different resource groups? I strongly believe that you're overthinking it with multiple resource groups. A resource group is basically your "entire system", and within the boundaries of one solution you should have only one resource group for production, one for beta/staging, etc., but never mix them.
If you're selling SaaS to your customers it would make sense to have one resource group for each of your customers.
And as you know, a resource group is simply a way for you to link together and manage all of the assets in your solution (VMs, storage, databases, etc.) under one common name. I am very doubtful as to why one would want multiple resource groups in a single solution; however, I am always willing to learn :)
Availability sets
Now, availability sets are a different thing. They have to do with "Update Domains" and "Fault Domains" for your VM instances. Because Azure does not keep 3 separate VMs for you, as it does with most of its PaaS services, you have to manage redundancy yourself to ensure full uptime. Basically, when you add two or more VMs to an availability set, you're ensured that during planned or unplanned maintenance events, at least one of the VMs will be available and meet the SLA.
Trying to combine the two in an effort to prevent downtime may sound like a good idea, but it does not solve any problem that I'm aware of. Like the old saying goes: if it ain't broke, don't fix it :)

Amazon autoscaling scale-down instance

I am learning about and setting up an autoscale configuration for our production application. I want to know which instance ID to use when setting up scale-down. E.g. my configuration uses a maximum of 3 instances. I can put a scale-up policy on instance ID 1, but how can I put a scale-down policy on instance 2 and instance 3, which are still to be started?
PS: I understand that for 2 instances, I can put a policy on instance 1 and it will go down if the load subsides.
You need to specify the AMI ID for the template AMI only; all instances will be scaled up and down based on this single AMI ID. PLEASE NOTE: it is an AMI ID (image ID) that will be used, not an instance ID.
The scale will be a range from 1 to 3 and is configured on your Auto Scaling group with these flags: --min-size 1 --max-size 3. Scaling happens based on the metrics that you supply in the autoscaling policy.
With a minimum size of 1, one instance will always be running; with a maximum size of 3, up to three can run. There is no need to attach scaling policies to individual instances that are not yet running.
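With the current AWS CLI, the setup described above might look roughly like this (group, launch configuration, policy, and zone names are hypothetical):
aws autoscaling create-auto-scaling-group --auto-scaling-group-name my-asg --launch-configuration-name my-launch-config --min-size 1 --max-size 3 --availability-zones us-east-1a
aws autoscaling put-scaling-policy --auto-scaling-group-name my-asg --policy-name scale-down --adjustment-type ChangeInCapacity --scaling-adjustment -1
The scale-down policy removes one instance from the group when triggered, without ever referring to a specific instance ID.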
Here is a complete tutorial:
http://www.cardinalpath.com/autoscaling-your-website-with-amazon-web-services-part-2/
