Create an EKS cluster using reserved instances with Terraform

I want to create an EKS cluster using reserved instances. I tried the solution linked below for EKS, changing the lifecycle policy from on-demand to reserved.
However, when I spin up the cluster, it still says on-demand and the reserved instances are not used.
Note: we already have reserved instances, about 10 of them.

The link to the solution.
As ydaetskcoR pointed out, if you already have RIs, you can just spin up the cluster with on-demand instances. Reserved instances are a billing discount rather than a separate launch type: EC2 automatically applies RI pricing to running on-demand instances that match the reservation's attributes. On the billing page you will then see that the new instances are using the reserved instance hours.
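For illustration, here is a minimal Terraform sketch of what that looks like (the cluster, IAM role, and subnet references are placeholders, and the instance type is an assumption: match it, and the region, to your reservation's attributes so the discount applies):

```hcl
# Minimal sketch: a managed node group launched as ordinary on-demand capacity.
# There is no "reserved" capacity type; the RI discount is applied at billing
# time to on-demand instances that match the reservation.
resource "aws_eks_node_group" "ri_backed" {
  cluster_name    = aws_eks_cluster.this.name # placeholder cluster
  node_group_name = "ri-backed"
  node_role_arn   = aws_iam_role.node.arn     # placeholder IAM role
  subnet_ids      = var.subnet_ids            # placeholder subnets

  capacity_type  = "ON_DEMAND"   # valid values are ON_DEMAND and SPOT only
  instance_types = ["m5.xlarge"] # assumption: match your reserved instance type

  scaling_config {
    desired_size = 10 # e.g. sized to the ~10 existing reservations
    max_size     = 10
    min_size     = 1
  }
}
```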

Related

Azure Kubernetes Service (AKS) and the primary node pool

Foreword
When you create a Kubernetes cluster on AKS, you specify the type of VMs you want to use for your nodes (--node-vm-size). I read that you can't change this after you create the Kubernetes cluster, which would mean that you'd be scaling horizontally instead of vertically whenever you add resources.
However, you can create different node pools in an AKS cluster that use different types of VMs for your nodes. So I thought: if you want to "change" the type of VM that you chose initially, maybe you could add a new node pool and remove the old one ("nodepool1")?
I tried that through the following steps:
1. Create a node pool named "stda1v2" with a VM type of "Standard_A1_v2"
2. Delete "nodepool1" (az aks nodepool delete --cluster-name ... -g ... -n nodepool1)
Unfortunately, I was met with the error "Primary agentpool cannot be deleted".
Question
What is the purpose of the "primary agentpool" which cannot be deleted, and does it matter (a lot) what type of VM I choose when I create the AKS cluster (in a real world scenario)?
Can I create other node pools and let the primary one live its life? Will it cause trouble in the future if I have node pools that use larger VMs for their nodes while the primary one still uses "Standard_A1_v2", for example?
The primary node pool is the first node pool in the cluster, and you cannot delete it because that is currently not supported. You can create and delete additional node pools and just leave the primary one as it is. It will not cause any trouble.
For the primary node pool, I suggest picking a VM size that makes sense in the long run (since you cannot change it). The B-series would be a good fit, since they are cheap and their CPU/memory ratio is good for average workloads.
P.S. You can always scale the primary node pool to 0 nodes, or cordon the node and shut it down. You will have to repeat this after each upgrade, but otherwise it will work.
It looks like this functionality was introduced around the time of your question, allowing you to add new system nodepools and delete old ones, including the initial nodepool. After encountering the same error message myself while trying to tidy up a cluster, I discovered I had to set another nodepool to a system type in order to delete the first.
There's more info about it here, but in short, Azure node pools are split into two types ('modes', as they call them): System and User. When you create a single pool to begin with, it will be of the System type (favouring system pod scheduling -- so it might be good to have a dedicated pool of a node or two for system use, then a second User node pool for the actual app pods).
So if you wish to delete your only system pool, you need to first create another nodepool with the --mode switch set to 'system' (with your preferred VM size etc.), then you'll be able to delete the first (and nodepool modes can't be changed after the fact, only on creation).
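For reference, a hedged Terraform equivalent of that CLI workflow with the azurerm provider (names and sizes are assumptions; note that in Terraform the initial pool is the cluster's inline default_node_pool, so this only mirrors the CLI flow for pools managed as separate resources):

```hcl
# Sketch: a replacement System-mode pool alongside an existing cluster.
# Once this pool is running, the old system pool can be deleted, because
# the cluster always needs at least one System-mode pool.
resource "azurerm_kubernetes_cluster_node_pool" "system2" {
  name                  = "syspool2"                         # assumed name
  kubernetes_cluster_id = azurerm_kubernetes_cluster.this.id # assumed cluster
  mode                  = "System" # defaults to "User"; System pools host system pods
  vm_size               = "Standard_D2s_v3"                  # assumed size
  node_count            = 2
}
```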

Is it possible to change subnet in Azure AKS deployment?

I'd like to move an instance of Azure Kubernetes Service to another subnet in the same virtual network. Is it possible, or is the only way to do this to recreate the AKS instance?
No, it is not possible; you need to redeploy AKS.
Edit (08.02.2023): it's actually possible to some extent now: https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni-dynamic-ip-allocation#configure-networking-with-dynamic-allocation-of-ips-and-enhanced-subnet-support---azure-cli
I'm not sure it can be updated on an existing cluster without recreating it (or the node pool), though.
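For the dynamic IP allocation route mentioned in the edit above, a rough sketch with the azurerm provider (resource names, sizes, and the two subnet references are assumptions) separates the node subnet from the pod subnet at creation time:

```hcl
# Sketch: Azure CNI with dynamic pod IP allocation. Nodes and pods use
# different subnets of the same VNet; both subnet references are assumptions.
resource "azurerm_kubernetes_cluster" "this" {
  name                = "example-aks"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "example"

  default_node_pool {
    name           = "system"
    node_count     = 2
    vm_size        = "Standard_D2s_v3"
    vnet_subnet_id = azurerm_subnet.nodes.id # subnet for the node NICs
    pod_subnet_id  = azurerm_subnet.pods.id  # separate subnet for pod IPs
  }

  network_profile {
    network_plugin = "azure" # dynamic pod IPs require Azure CNI
  }

  identity {
    type = "SystemAssigned"
  }
}
```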
I know it's an old thread, but I'm responding in case someone finds it useful. You cannot change the subnet of the AKS cluster directly. However, you can change the subnets of the underlying components. In our case we had a simple setup of 2 nodes and a load balancer: we created a new subnet and changed the subnet on each of those components. It worked for us, but do check the services and the pods afterwards to make sure everything still works correctly.

alicloud_cs_managed_kubernetes vs alicloud_cs_kubernetes on terraform

I was looking at the Alibaba Cloud provider in Terraform and found two nearly identical resources:
alicloud_cs_managed_kubernetes
https://www.terraform.io/docs/providers/alicloud/r/cs_managed_kubernetes.html
alicloud_cs_kubernetes
https://www.terraform.io/docs/providers/alicloud/r/cs_kubernetes.html
What is the difference between them? I cannot tell the two resources apart.
The biggest difference:
Managed Kubernetes cluster: you can't control the Kubernetes masters; they are managed for you.
Kubernetes cluster: you need to create the masters as well, e.g.:
master_instance_types = ["ecs.n4.small"]
Specifically, the differences between alicloud_cs_managed_kubernetes and alicloud_cs_kubernetes in Terraform can be worked out in detail from the parameter reference in the official documentation.
But the major difference between Kubernetes and Managed Kubernetes is:
You don't need to manage the master nodes in Managed Kubernetes.
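To make the contrast concrete, a hedged sketch of both resources (argument names vary across alicloud provider versions, so treat this as illustrative only):

```hcl
# Self-managed: you provision (and pay for) the master nodes yourself.
resource "alicloud_cs_kubernetes" "self_managed" {
  name                  = "k8s-self-managed"
  master_vswitch_ids    = var.master_vswitch_ids # assumed variable
  master_instance_types = ["ecs.n4.small"]       # masters are your responsibility
  worker_vswitch_ids    = var.worker_vswitch_ids # assumed variable
  worker_instance_types = ["ecs.n4.large"]
  worker_number         = 3
}

# Managed: Alibaba Cloud runs the control plane; there are no master_* arguments.
resource "alicloud_cs_managed_kubernetes" "managed" {
  name                  = "k8s-managed"
  worker_vswitch_ids    = var.worker_vswitch_ids # assumed variable
  worker_instance_types = ["ecs.n4.large"]
  worker_number         = 3
}
```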

Where do I find info on if my Azure VM Reserved instance is properly assigned?

I have only one VM and one reserved VM instance. When I bought the reserved instance, it said that I should check the usage log to see which resource it was associated with. I don't see a usage log, but I do see an activity log. (Same?) Either way, it shows nothing about the reserved instance. This Azure UI is not even close to intuitive. How can I see if the reserved instance is properly associated with my VM?
Thanks.
You can use the API to get the information on your reserved instances:
Using the Reserved Instance usage API
This doc describes how to get the Reserved Instance usage summary as well as Reserved Instance usage details.
GET method URIs:
Reserved Instance usage details:
https://consumption.azure.com/v2/enrollments/{enrollmentNumber}/reservationdetails?startDate={yyyy-mm-dd}&endDate={yyyy-mm-dd}
Reserved Instance usage summary (daily grain):
https://consumption.azure.com/v2/enrollments/{enrollmentNumber}/reservationsummaries?grain=daily&startdate={yyyy-mm-dd}&enddate={yyyy-mm-dd}
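If you want to script the call, here is a hedged sketch using Terraform's hashicorp/http data source (the enrollment number variable and the bearer token are placeholders; the EA consumption API expects the enrollment's API key as the bearer token):

```hcl
# Sketch: fetch Reserved Instance usage details for a date range.
# var.enrollment_number and var.api_key are assumed inputs.
data "http" "reservation_details" {
  url = "https://consumption.azure.com/v2/enrollments/${var.enrollment_number}/reservationdetails?startDate=2023-01-01&endDate=2023-01-31"

  request_headers = {
    Authorization = "Bearer ${var.api_key}"
  }
}

output "reservation_details_json" {
  value     = data.http.reservation_details.response_body
  sensitive = true # usage data includes billing details
}
```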
For info on scope, type, and utilization, go to the Azure Portal >> All Services >> Reservations.

aws_emr_cluster - is it possible to retrieve the instance identifiers

I am creating an EMR cluster using the aws_emr_cluster resource in Terraform.
I need to get access to the instance ID of the underlying EC2 hardware, specifically the MASTER node.
It does not appear in the attributes, nor in the output of terraform show.
The data definitely exists and is available in AWS.
Does anyone know how I can get at this value using Terraform?
You won't be able to access the nodes (EC2 instances) of an EMR cluster through the Terraform resource itself; the same is true for Auto Scaling Groups. If Terraform tracked EMR or ASG nodes, the state file would change every time something changed in the EMR cluster or ASG, so storing per-instance information in state isn't practical for Terraform. Instead, you can use the AWS SDK/CLI/boto3 to see them.
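That said, a hedged workaround from the Terraform side: EMR tags its instances with aws:elasticmapreduce:job-flow-id and aws:elasticmapreduce:instance-group-role, so the aws_instances data source can look the master up after the cluster exists (the resource name below is an assumption):

```hcl
# Sketch: find the MASTER node's EC2 instance ID via the tags EMR applies.
data "aws_instances" "emr_master" {
  instance_tags = {
    "aws:elasticmapreduce:job-flow-id"         = aws_emr_cluster.cluster.id
    "aws:elasticmapreduce:instance-group-role" = "MASTER"
  }

  instance_state_names = ["running"]
}

output "emr_master_instance_ids" {
  value = data.aws_instances.emr_master.ids
}
```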
Thanks.
