Terraform AWS EKS worker node spot instance

I am following this blog to run Terraform and spin up an EKS cluster:
https://github.com/berndonline/aws-eks-terraform/blob/master/
I just want to change my EC2 worker node type to a spot instance:
https://github.com/berndonline/aws-eks-terraform/blob/master/eks-worker-nodes.tf
I googled and narrowed it down to the launch configuration section.
Any ideas how to change the EC2 type to a spot instance?

Please go through the official documentation for the aws_launch_configuration resource;
it already gives a sample of how to set a spot instance:
resource "aws_launch_configuration" "as_conf" {
image_id = "${data.aws_ami.ubuntu.id}"
instance_type = "m4.large"
spot_price = "0.001"
lifecycle {
create_before_destroy = true
}
}
Notes:
Spot instance prices keep changing depending on usage. If you are not familiar with them, set spot_price to the instance type's on-demand price.
Even if you bid the on-demand price, AWS only charges the current spot price (typically several times cheaper) unless spare capacity runs out, and it never charges more than your bid.
Please also go through the AWS documentation for details: https://aws.amazon.com/ec2/spot/pricing/
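Applied to the blog's eks-worker-nodes.tf, the change comes down to adding spot_price to the worker launch configuration. The sketch below is only illustrative: the resource name, AMI data source, security group, and user data references are placeholders and will differ from the actual file:
resource "aws_launch_configuration" "eks_worker" {
  # Placeholder names - adapt them to the blog's eks-worker-nodes.tf
  image_id             = "${data.aws_ami.eks_worker.id}"
  instance_type        = "m4.large"
  spot_price           = "0.10"   # bid at or below the on-demand price
  iam_instance_profile = "${aws_iam_instance_profile.eks_worker.name}"
  security_groups      = ["${aws_security_group.eks_worker.id}"]
  user_data_base64     = "${base64encode(local.eks_worker_userdata)}"

  lifecycle {
    create_before_destroy = true
  }
}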

Related

Assistance with deploying EC2 instance via terraform

I am new to Terraform and I am just working in a lab environment at the moment, so I hope someone can help me and point me in the right direction as to where I am going wrong. I am following this video for reference: https://www.youtube.com/watch?v=SLB_c_ayRMo&t=2336s. I am running Terraform v1.2.9. I run terraform init and everything initialises, but when I run terraform plan I get this message: "No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed."
This is my code for reference:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }

  required_version = ">= 1.2.0"
}

provider "aws" {
  region     = "us-east-1"
  access_key = "access key"
  secret_key = "secretkey"
}

resource "aws_instance" "my-first-server" {
  ami           = "ami-052efd3df9dad4825"
  instance_type = "t2.micro"

  tags = {
    Name = "MyFirstServer"
  }
}
Any help would be gratefully appreciated to help me on my learning journey.
Terraform creates resources (for example, in AWS) and tracks them in a state file. The state file is one large object, nothing complex.
Look in the AWS console under EC2 in region us-east-1 and check whether your instance is there. If it isn't, the next terraform plan should report that it was deleted outside of Terraform.
One way to check whether it is being tracked properly is to run terraform state list. That command should show aws_instance.my-first-server if you have planned and applied this configuration in the past.
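For example (the resource address below assumes the configuration above has already been applied at least once):
terraform state list
# aws_instance.my-first-server
terraform state show aws_instance.my-first-server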
If you're still stuck, one way to "start from fresh", since it's only a learning exercise and not production work, is to delete the entire state file and start again.
However, I trust Terraform... you must have the instance running somewhere ;)

How do I ignore changes to everything for an ec2 instance except a certain attribute?

I'm creating a Terraform configuration for an already deployed EC2 instance. I want to change only the instance type for this instance. I want something like this:
resource "aws_instance" "ec2" {
ami = "ami-09a4a9ce71ff3f20b"
instance_type = "t2.micro"
lifecycle {
ignore_changes = [
<everything except instance_type>
]
}
}
Unfortunately I can't find a way to do this while Terraform's state does not match the existing resource. However, I have tested that it is possible, but you need to do the operation in stages, starting with telling Terraform what the current state of that EC2 instance is and working from there.
Step 1: Create a Resource Block for The ec2 Instance as it Currently Exists
I would do this with a combination of manual entry (yes, tedious, I know) and utilising terraform import.
You can run terraform plan repeatedly until it reports no changes on the resource, which indicates that the resource block now matches the current state of the instance (see the sketch below).
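As a rough sketch, with a placeholder instance ID:
terraform import aws_instance.ec2 i-0123456789abcdef0
terraform plan   # repeat until aws_instance.ec2 shows no changes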
Step 2. Update the Block With the New Instance Type
Once they are equal, it would then be a matter of simply updating the aws_instance resource block to your desired instance_type.
Step 3. Apply the Changes to the EC2 Instance in a Targeted Way
To ensure that only changes to this instance are applied, you can lean on terraform apply -target to apply just this resource specifically. This prevents any other resources in your plan from being updated.
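For example:
terraform apply -target=aws_instance.ec2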
Step 4. Make Further Adjustments as Required
Once the resource matches the instance you want, go ahead and modify the rest of the resource block to reflect future state changes.

Terraform Throttling Route53

Has anyone experienced issues with Terraform being throttled when using it with AWS Route53 records, making it VERY slow?
I have enabled DEBUG mode and am getting this:
2018-11-30T14:35:08.467Z [DEBUG] plugin.terraform-provider-aws_v1.36.0_x4: 2018/11/30 14:35:08 [DEBUG] [aws-sdk-go] <?xml version="1.0"?>
2018-11-30T14:35:08.467Z [DEBUG] plugin.terraform-provider-aws_v1.36.0_x4: <ErrorResponse xmlns="https://route53.amazonaws.com/doc/2013-04-01/"><Error><Type>Sender</Type><Code>Throttling</Code><Message>Rate exceeded</Message></Error><RequestId>REQUEST_ID</RequestId></ErrorResponse>
2018-11-30T14:35:08.518Z [DEBUG] plugin.terraform-provider-aws_v1.36.0_x4: 2018/11/30 14:35:08 [DEBUG] [aws-sdk-go] DEBUG: Validate Response route53/ListResourceRecordSets failed, will retry, error Throttling: Rate exceeded
Terraform takes more than an hour just to do a simple plan, something which normally takes less than 5 minutes.
My infrastructure is organized like this:
alb.tf:
module "ALB" {
  source = "modules/alb"
}
modules/alb/alb.tf:
resource "aws_alb" "ALB" {
  name    = "alb"
  subnets = var.subnets
  ...
}
modules/alb/dns.tf:
resource "aws_route53_record" "r53" {
  count   = "${length(var.cnames_generic)}"
  zone_id = "HOSTED_ZONE_ID"
  name    = "${element(var.cnames_generic_dns, count.index)}.${var.environment}.${var.domain}"
  type    = "A"

  alias {
    name                   = "dualstack.${aws_alb.ALB.dns_name}"
    zone_id                = "${aws_alb.ALB.zone_id}"
    evaluate_target_health = false
  }
}
modules/alb/variables.tf:
variable "cnames_generic_dns" {
  type = "list"
  default = [
    "hostname1",
    "hostname2",
    "hostname3",
    "hostname4",
    "hostname5",
    "hostname6",
    "hostname7",
    ...
    "hostname25"
  ]
}
So I am using modules to configure Terraform, and inside the modules there are resources (ALB, DNS, ...).
However, it looks like Terraform is describing every single DNS resource (CNAME and A records, of which I have ~1000) in the hosted zone, which is causing it to be throttled?
Terraform v0.10.7
Terraform AWS provider version = "~> 1.36.0"
That's a lot of DNS records! And that is partly the reason why the AWS API is throttling you.
First, I'd recommend upgrading your AWS provider. v1.36 is fairly old and there have been more than a few bug fixes since.
(Next, but not absolutely necessary, is to use TF v0.11.x if possible.)
In your AWS provider block, increase max_retries to at least 10 and experiment with higher values.
Then, use Terraform's -parallelism flag to limit Terraform's concurrency. Try setting it to 5 for starters (see the sketch below).
Last, enable Terraform's debug mode to see if it gives you any more useful info.
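A minimal sketch of those two suggestions; the region and values are only examples:
provider "aws" {
  region      = "eu-west-1"   # example region
  max_retries = 10            # retry throttled API calls more times before failing
}
# and on the command line:
#   terraform plan -parallelism=5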
Hope this helps!
The problem was solved by performing the following actions:
Since we had restructured the DNS records by adding one resource and iterating over variables, this probably caused Terraform to constantly query all DNS records.
We decided to let Terraform finish its refresh (it took 4 hours with lots of throttling).
We manually deleted the DNS records from Route53 for the workspace we were working on.
We commented out the Terraform DNS resources so it would also delete them from the state file.
We then uncommented the Terraform DNS resources and re-ran Terraform so it created them again.
After that, terraform plan ran fine again.
The throttling with Terraform and AWS Route53 appears to be completely resolved after upgrading to a newer AWS provider. We updated the Terraform AWS provider to 1.54.0 like this in our init.tf:
version = "~> 1.54.0"
Here are more details about the issue and suggestions from Hashicorp engineers:
https://github.com/terraform-providers/terraform-provider-aws/issues/7056

After EKS cluster is created by Terraform, next plan sees subnet changes to tags

I intend to use Terraform to stand up my entire monitoring infrastructure in AWS.
So far in my Terraform project I have created the VPC, subnets, and appropriate security groups. I am using the Terraform Registry where possible:
vpc
security-group
iam-role
eks
The issue I am seeing is that after the EKS cluster is deployed, it adds tags to the VPC and subnets that are not known to Terraform. Hence the next time terraform plan is run, it identifies tags that it does not manage and intends to remove them:
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
~ module.vpc.aws_subnet.private[0]
tags.%: "4" => "3"
tags.kubernetes.io/cluster/monitoring: "shared" => ""
~ module.vpc.aws_subnet.private[1]
tags.%: "4" => "3"
tags.kubernetes.io/cluster/monitoring: "shared" => ""
~ module.vpc.aws_vpc.this
tags.%: "4" => "3"
tags.kubernetes.io/cluster/monitoring: "shared" => ""
Plan: 0 to add, 3 to change, 0 to destroy.
------------------------------------------------------------------------
There is an open issue with terraform-provider-aws that includes a local workaround using bash, but does anyone know how to make Terraform aware of these tags, or get them ignored by subsequent plans, in a robust way?
Just add the tags when you call the module. Notice in the example at https://registry.terraform.io/modules/terraform-aws-modules/vpc/aws/1.41.0 that tags are passed there, and the docs describe the argument as "A map of tags to add to all resources", so you can add your tags to that map.
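For example, a sketch of the module call (everything except the tags argument is a placeholder; the cluster name monitoring comes from the plan output above):
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  # ... name, cidr, subnets and other arguments ...

  tags = {
    "kubernetes.io/cluster/monitoring" = "shared"
  }
}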
If you controlled the module, you could try to use the ignore_changes clause in the lifecycle block. Something like:
lifecycle {
  ignore_changes = [
    "tags"
  ]
}
It's going to be much trickier with a module that you don't control though.
So in the end we chose not to use Terraform to deploy the cluster at all;
instead we use eksctl, the community-based tool from Weaveworks.
https://eksctl.io/
It was recommended by an AWS solutions architect when we were at the AWS offices in London for some training.
The config can be stored in source control if needed.
eksctl create cluster -f cluster.yaml
Since EKS does a lot of tagging of infrastructure, our lives are much better now that the state file is not complaining about tags.

How to fix : VALIDATION_ERROR: You must also specify a ServiceAccessSecurityGroup Terraform

I am new to Terraform. I am facing an issue when launching a simple EMR cluster in a private subnet.
It fails with the error message below:
aws_emr_cluster.emr-test-cluster: [WARN] Error waiting for EMR Cluster state to be "WAITING" or "RUNNING": TERMINATED_WITH_ERRORS: VALIDATION_ERROR: You must also specify a ServiceAccessSecurityGroup if you use custom security groups when creating a cluster in a private subnet.
I checked GitHub and it seems this was fixed for the issue that was opened, but I am using the latest version of Terraform (0.11.7).
Below are the GitHub links for the reported issue:
https://github.com/hashicorp/terraform/issues/9518
https://github.com/hashicorp/terraform/pull/9600
Any suggestions on how to fix this will be really helpful
Thank you
The issue was fixed on GitHub in the sense that the error you see was added on purpose: it asks for service_access_security_group whenever emr_managed_master_security_group and emr_managed_slave_security_group are used.
So you need to specify the service_access_security_group parameter in your EMR resource.
Thanks.
As we know, to put EMR components in a private subnet you have to use these security groups:
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-man-sec-groups.html#emr-sg-elasticmapreduce-sa-private
Then in Terraform you have to resolve the cyclic dependency between the security groups (see the source link below) and use a configuration like the following:
ec2_attributes {
  subnet_id                         = element(var.subnet_ids, count.index)
  key_name                          = "${var.ssh_key_id}"
  emr_managed_master_security_group = aws_security_group.EmrManagedMasterSecurityGroup.id
  emr_managed_slave_security_group  = aws_security_group.EmrManagedSlaveSecurityGroup.id
  service_access_security_group     = aws_security_group.ServiceAccessSecurityGroup.id
  #additional_master_security_groups = aws_security_group.allow_ssh.id
  instance_profile                  = aws_iam_instance_profile.example_ec2_profile.arn
}
Source: https://github.com/hashicorp/terraform/pull/9600
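A minimal sketch of how one of those security groups might be declared so that EMR can manage its rules without creating a dependency cycle (the names and the vpc_id variable are assumptions, not part of the original answer; the master and slave groups would be declared the same way):
resource "aws_security_group" "ServiceAccessSecurityGroup" {
  name   = "ServiceAccessSecurityGroup"
  vpc_id = var.vpc_id   # assumed variable

  # Leave the rules to EMR; revoking them on delete avoids the cyclic
  # dependency between the EMR-managed security groups.
  revoke_rules_on_delete = true
}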
