aws_emr_cluster - is it possible to retrieve the instance identifiers - terraform

I am creating an EMR cluster using the aws_emr_cluster resource in terraform.
I need to get access to the instance ID of the underlying EC2 hardware, specifically the MASTER node.
It does not appear in the attributes, nor in the output when I run terraform show.
The data definitely exists and is available in AWS.
Does anyone know how I can get at this value, and how to do it using Terraform?

You won't be able to access the individual nodes (EC2 instances) of an EMR cluster through Terraform. The same is true for Auto Scaling Groups.
If Terraform tracked EMR or ASG nodes, the state file would change every time the cluster scaled, so storing per-instance information isn't practical for Terraform.
Instead, you can use the AWS SDK/CLI/boto3 to look them up, for example with aws emr list-instances (see the sketch below).
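If you do need the value back inside Terraform itself, one possible workaround is an aws_instance data source keyed off the tags EMR normally puts on its instances. This is only a sketch, not something the aws_emr_cluster resource documents; it assumes your cluster resource is named example and that the usual aws:elasticmapreduce:* tags are present:

data "aws_instance" "emr_master" {
  filter {
    # EMR tags each instance with the cluster (job flow) ID.
    name   = "tag:aws:elasticmapreduce:job-flow-id"
    values = [aws_emr_cluster.example.id]
  }
  filter {
    # The master node carries the MASTER instance-group role tag.
    name   = "tag:aws:elasticmapreduce:instance-group-role"
    values = ["MASTER"]
  }
}

output "emr_master_instance_id" {
  value = data.aws_instance.emr_master.id
}

Note the data source is resolved at read time, so the cluster (and its master node) must already exist when this is evaluated.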
Thanks.

Related

How to apply ConfigMaps to AKS Clusters via Terraform?

I currently manage 10-15 environments, 100% IaC with Terraform, in Azure. One of our recent projects was to change some log collection settings for all AKS clusters. Here is a link describing how to do it via kubectl - https://learn.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-agent-config#data-collection-settings.
What I've found so far:
Terraform has a kubernetes_config_map resource which I was able to successfully create. (https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/config_map)
My next question is: how do I apply or attach the kubernetes_config_map resource to the AKS cluster? Assume I want this applied to all namespaces. I wasn't able to find a config_map parameter on any of the resources.
We also use helm_release; is it possible to attach/pass that kubernetes_config_map to the helm_release?
Any guidance would be greatly appreciated. Thanks.
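For what it's worth, with the kubernetes provider the "attachment" happens through the provider configuration: pointing the provider at the AKS cluster's credentials determines which cluster receives the ConfigMap. A minimal sketch, assuming an azurerm_kubernetes_cluster resource named aks and the container-azm-ms-agentconfig ConfigMap from the linked Container insights doc:

provider "kubernetes" {
  # Credentials come straight from the AKS resource, so the config map
  # below is created in that specific cluster.
  host                   = azurerm_kubernetes_cluster.aks.kube_config[0].host
  client_certificate     = base64decode(azurerm_kubernetes_cluster.aks.kube_config[0].client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.aks.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config[0].cluster_ca_certificate)
}

resource "kubernetes_config_map" "agent_config" {
  metadata {
    # Name and namespace are fixed by the Container insights agent.
    name      = "container-azm-ms-agentconfig"
    namespace = "kube-system"
  }

  data = {
    "schema-version" = "v1"
    # data-collection settings from the linked doc go here
  }
}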

Create EKS using reserved instances using terraform

I want to create an EKS cluster using reserved instances. I tried the following solution for EKS, replacing the lifecycle policy with reserved instead of on-demand (link below).
However, when I spin up the cluster, it still says on-demand, and the reserved instances are not used.
Note: we already have reserved instances, around 10 of them.
The link to the solution.
As ydaetskcoR pointed out, if you already have RIs reserved then you can just spin up the cluster with on-demand instances. Standard RIs are a billing discount applied automatically to matching on-demand usage, not a separate launch type; on the billing page you will be able to see that the new instances are actually consuming the reserved instance hours.

RDS: How to promote a replica (MariaDB) to be a standalone DB instance using terraform script?

I used the Terraform module terraform-aws-modules/rds/aws (version 2.20.0) to provision a MariaDB master and a replica. I would like to promote the replica to a standalone DB instance. The document at https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html gives instructions for doing it via the AWS console. I would like to do it with a Terraform script. Does anyone have an idea of how to promote a replica to a standalone DB instance using a Terraform script? My Terraform version is v01.3.5.
I am guessing you manage the read replica resource via Terraform.
From the docs: "Removing the replicate_source_db attribute from an existing RDS Replicate database managed by Terraform will promote the database to a fully standalone database."
You can add a condition there to switch it on and off, as in the sketch below.
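A minimal sketch of that toggle, assuming a plain aws_db_instance rather than the module (the module's replicate_source_db input can be driven the same way); the promote_replica variable name is made up for illustration:

variable "promote_replica" {
  type    = bool
  default = false
}

resource "aws_db_instance" "replica" {
  # While false, this stays a replica of the master. Flipping the variable
  # to true sets replicate_source_db to null, which removes the attribute
  # and promotes the instance to a standalone database on the next apply.
  replicate_source_db = var.promote_replica ? null : aws_db_instance.master.identifier

  instance_class = "db.t3.medium"
  # ... remaining settings unchanged
}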

Delete instance from scaleset using terraform

I am trying to remove a particular instance from my scaleset using terraform. I know there is a REST API for this:
https://learn.microsoft.com/en-us/rest/api/compute/virtualmachinescalesets/deleteinstances
However, the page for the Terraform azurerm provider doesn't really mention this anywhere.
https://www.terraform.io/docs/providers/azurerm/r/virtual_machine_scale_set.html
How do I do this with Terraform?
When managing a virtual machine scale set with Terraform, Terraform does not interact with the individual instances at all. Instead, it updates the settings of the scale set to match what you've written in configuration and then lets the scale set itself respond to that new configuration appropriately.
For example, if you wish to have fewer instances of a particular SKU then you might edit your Terraform configuration to have a lower value for the capacity argument for that SKU and run terraform apply. If you accept that plan, Terraform will update the scale set to have a lower capacity and then the remote scale set system will decide how to respond to that.
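As an illustration, a sketch of that capacity change against the azurerm_virtual_machine_scale_set resource from the linked docs (names and sizes here are placeholders):

resource "azurerm_virtual_machine_scale_set" "example" {
  name                = "example-vmss"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  upgrade_policy_mode = "Manual"

  sku {
    name     = "Standard_DS1_v2"
    tier     = "Standard"
    capacity = 2 # lower this and run `terraform apply`; the scale set
                 # itself decides which instances to remove
  }

  # ... os_profile, network_profile, and storage settings unchanged
}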
To delete something Terraform is managing, like the scale set itself, we would remove it from the configuration and run terraform apply. Because Terraform is not managing the individual instances in this scale set, we can't tell Terraform to delete them directly. If you need that sort of control then you'd need to either manage the virtual machines directly with Terraform (not using a scale set at all) or use a separate tool (outside of Terraform) to interact with the API you mentioned.

How to add ECS attributes to an instance using terraform

I heavily use ECS attributes on our containerized infrastructure. I couldn't find Terraform docs for achieving this. Do I need to execute AWS CLI commands manually to apply those attributes after creating the infrastructure?
I'd recommend having the ECS agent set the ECS attributes if you need these.
You can do this by adding ECS_INSTANCE_ATTRIBUTES to the /etc/ecs/ecs.config file or passing them as an environment variable directly to the ECS agent on startup.
If you have a "base" ECS AMI (either one you rolled yourself or the Amazon ECS-optimized AMI) then you probably just want to use user data to set this dynamically from Terraform, as in the sketch below.
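A sketch of that user data approach; the AMI variable, cluster name, and attribute values are placeholders:

resource "aws_instance" "ecs_host" {
  ami           = var.ecs_ami_id # your base ECS AMI
  instance_type = "t3.medium"

  # ECS_INSTANCE_ATTRIBUTES is read by the ECS agent from /etc/ecs/ecs.config
  # at startup and registers the attributes on the container instance.
  user_data = <<-EOF
    #!/bin/bash
    cat >> /etc/ecs/ecs.config <<'CONFIG'
    ECS_CLUSTER=my-cluster
    ECS_INSTANCE_ATTRIBUTES={"stack":"production","role":"web"}
    CONFIG
  EOF
}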
You can also use the aws_ecs_service resource and reference attributes in placement constraints. For example:
placement_constraints {
  type       = "memberOf"
  expression = "attribute:ecs.availability-zone in [us-west-2a, us-west-2b]"
}
