Terraform EKS tagging

I am having an issue with Terraform EKS tagging and can't seem to find a workable solution to tag all the VPC subnets when a new cluster is created.
To provide some context: we have one AWS VPC into which we deploy several EKS clusters. We do not create the VPC or subnets as part of the EKS cluster creation, so the Terraform code that creates a cluster never gets to tag the existing subnets and VPC. Although EKS adds the required tags itself, they are automatically removed the next time we run terraform apply on the VPC.
My attempted workaround is to provide a terraform.tfvars file within the VPC configuration, as follows:
eks_tags = [
  "kubernetes.io/cluster/${var.cluster-1}", "shared",
  "kubernetes.io/cluster/${var.cluster-2}", "shared",
  "kubernetes.io/cluster/${var.cluster-2}", "shared",
]
Then within the VPC and subnet resources, we do something like:
resource "aws_vpc" "demo" {
cidr_block = "10.0.0.0/16"
tags = "${
map(
${var.eks_tags}
)
}"
}
However, the above does not seem to work. I have tried various Terraform 0.11 functions from https://www.terraform.io/docs/configuration-0-11/interpolation.html but none of them helped.
Has anyone been able to resolve this issue?
The idea that we should always create a new VPC and subnets for every EKS cluster is wrong. Surely there has to be a way to tag existing VPC and subnet resources using Terraform?

You can now use the aws provider's ignore_tags configuration block so that the tags created with the aws_ec2_tag resource do not get removed the next time the VPC module is applied.
For example, the provider becomes:
provider "aws" {
profile = "terraform"
region = "us-west-1"
// This is necessary so that tags required for eks can be applied to the vpc without changes to the vpc wiping them out.
// https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/resource-tagging
ignore_tags {
key_prefixes = ["kubernetes.io/"]
}
}
You can then leverage the aws_ec2_tag resource like so in your EKS module, without worrying about the tags getting removed the next time the VPC module is applied.
/**
 * Start of resource tagging logic to update the provided VPC and its subnets with the tags EKS needs.
 * for_each multiplexes each resource block, creating one instance per item in the set produced by
 * toset(); this is what allows setting a tag on each of the subnets in the VPC.
 */
resource "aws_ec2_tag" "vpc_tag" {
  resource_id = data.terraform_remote_state.vpc.outputs.vpc_id
  key         = "kubernetes.io/cluster/${var.cluster_name}"
  value       = "shared"
}

resource "aws_ec2_tag" "private_subnet_cluster_tag" {
  for_each    = toset(data.terraform_remote_state.vpc.outputs.private_subnets)
  resource_id = each.value
  key         = "kubernetes.io/cluster/${var.cluster_name}"
  value       = "shared"
}

resource "aws_ec2_tag" "public_subnet_cluster_tag" {
  for_each    = toset(data.terraform_remote_state.vpc.outputs.public_subnets)
  resource_id = each.value
  key         = "kubernetes.io/cluster/${var.cluster_name}"
  value       = "shared"
}
/**
 * These tags have been decoupled from the EKS module and moved to the more appropriate VPC module.
 */
resource "aws_ec2_tag" "private_subnet_tag" {
  for_each    = toset(data.terraform_remote_state.vpc.outputs.private_subnets)
  resource_id = each.value
  key         = "kubernetes.io/role/internal-elb"
  value       = "1"
}

resource "aws_ec2_tag" "public_subnet_tag" {
  for_each    = toset(data.terraform_remote_state.vpc.outputs.public_subnets)
  resource_id = each.value
  key         = "kubernetes.io/role/elb"
  value       = "1"
}

In our case we have separate scripts to provision the VPC and networking resources, and there we do not add any EKS-specific tags.
For EKS cluster provisioning we have separate scripts which automatically add/update the tags on the cluster.
So in the VPC scripts' provider.tf file we add the condition below, so that those scripts do not remove these tags and everything works properly.
provider "aws" {
region = "us-east-1"
ignore_tags {
key_prefixes = ["kubernetes.io/cluster/"]
}
}

This problem will always exist when two pieces of code with different state files try to act on the same resource.
One way to solve this is to re-import the VPC resource into your VPC state file every time you apply your EKS Terraform code. This will import your tags as well. The same goes for the subnets, but it is a manual and tedious process in the long run.
terraform import aws_vpc.test_vpc vpc-a01106c2
Ref: https://www.terraform.io/docs/providers/aws/r/vpc.html
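For a subnet, the analogous command looks like this (the resource name and subnet ID here are hypothetical):
terraform import aws_subnet.test_subnet subnet-0a1b2c3d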
Cheers!

Related

Terraform: creation of resources with nested loop

I have created a bunch of VPC endpoints in one AWS account and need to share them with other VPCs across different accounts. A breakdown of my Terraform code is provided below.
I created the Route53 private hosted zones (Resource#1, for easy reference) as below:
resource "aws_route53_zone" "private" {
for_each = local.vpc_endpoints
name = each.value.phz
vpc {
vpc_id = var.vpc_id
}
lifecycle {
ignore_changes = [vpc]
}
tags = {
Name = each.key
}
}
I then created the VPC association authorizations (Resource#2). The ID of the VPC with which the endpoints are to be shared is passed in as a variable, as shown below.
resource "aws_route53_vpc_association_authorization" "example" {
for_each = local.vpc_endpoints
vpc_id = var.vpc
zone_id = aws_route53_zone.private[each.key].zone_id
}
Finally, I have created the VPC associations (Resource#3)
resource "aws_route53_zone_association" "target" {
for_each = aws_route53_vpc_association_authorization.example
provider = aws.target-account
vpc_id = aws_route53_vpc_association_authorization.example[each.key].vpc_id
zone_id = aws_route53_vpc_association_authorization.example[each.key].zone_id
}
Everything works fine for the first VPC (vpc-A). But now I need to share the same hosted zones (Resource#1) with a different VPC (vpc-B) and more VPCs in the future. This means I need to repeat the creation of "aws_route53_vpc_association_authorization" (Resource#2) for all new VPCs as well, ideally looping through a list of VPC IDs.
But I am unable to do it, as nested for_each loops are not supported. I tried other options like count + for_each, etc., but nothing helped.
Could you provide some guidance on how to achieve this?
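The usual workaround when for_each can't be nested is to flatten the two dimensions into a single map and iterate over that. A sketch, assuming a hypothetical var.target_vpc_ids list holding the VPC IDs to share with:
locals {
  # One entry per (endpoint, vpc) combination, keyed so for_each gets stable addresses.
  zone_vpc_pairs = {
    for pair in setproduct(keys(local.vpc_endpoints), var.target_vpc_ids) :
    "${pair[0]}-${pair[1]}" => {
      endpoint = pair[0]
      vpc_id   = pair[1]
    }
  }
}

resource "aws_route53_vpc_association_authorization" "example" {
  for_each = local.zone_vpc_pairs
  vpc_id   = each.value.vpc_id
  zone_id  = aws_route53_zone.private[each.value.endpoint].zone_id
}
Adding a VPC to var.target_vpc_ids then creates one more authorization per hosted zone without touching the existing ones.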

Terraform simple script says "Error: Error launching source instance: VPCIdNotSpecified: No default VPC for this user"

I'm getting started with Terraform. I am trying to provision an EC2 instance using the following .tf file. I already have a default VPC in my account in the AZ where I am trying to provision the EC2 instance.
# Terraform Settings Block
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      #version = "~> 3.21" # Optional but recommended in production
    }
  }
}

# Provider Block
provider "aws" {
  profile = "default"
  region  = "us-east-1"
}

# Resource Block
resource "aws_instance" "ec2demo" {
  ami           = "ami-c998b6b2"
  instance_type = "t2.micro"
}
I run the following Terraform commands:
terraform init
terraform plan
terraform apply
aws_instance.ec2demo: Creating...
Error: Error launching source instance: VPCIdNotSpecified: No default VPC for this user. GroupName is only supported for EC2-Classic and default VPC.
status code: 400, request id: 04274b8c-9fc2-47c0-8d51-5b627e6cf7cc
on ec2-instance.tf line 18, in resource "aws_instance" "ec2demo":
18: resource "aws_instance" "ec2demo" {
As the error suggests, Terraform doesn't find a default VPC in the us-east-1 region.
You can provide a subnet_id within your VPC to create your instance, as below:
resource "aws_instance" "ec2demo" {
ami = "ami-c998b6b2"
instance_type = "t2.micro"
subnet_id = "subnet-0b1250d733767bafe"
}
I just created a default VPC in AWS:
AWS VPC console → Actions → Create default VPC
Once that's done, try again:
terraform plan
terraform apply
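If you prefer to keep that in code rather than clicking through the console, the AWS provider also offers an aws_default_vpc resource. Note that on older provider versions it only adopts an existing default VPC rather than creating one, so treat this as a sketch:
resource "aws_default_vpc" "default" {
  tags = {
    Name = "Default VPC"
  }
}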
As of now, if there is a default VPC available in the AWS account, then an instance can be created with the Terraform aws_instance resource without any network spec input.
Official AWS Terraform example:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance#basic-example-using-ami-lookup
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu.id ## Or use static AMI ID for testing.
instance_type = "t3.micro"
}
The error message Error: Error launching source instance: VPCIdNotSpecified: No default VPC for this user. states that the EC2 instance did not find any networking configuration in your Terraform code telling it where to create the instance.
This is probably because of a missing default VPC in your AWS account, and it seems that you are not passing any network config input to the Terraform resource.
Basically, you have two ways to fix this:
Create a default VPC and then use the same code.
Documentation: https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html#create-default-vpc
Another and better way would be to inject the network config into the aws_instance resource. I have used the example from the official aws_instance resource documentation; feel free to update any attributes accordingly.
resource "aws_vpc" "my_vpc" {
cidr_block = "172.16.0.0/16"
tags = {
Name = "tf-example"
}
}
resource "aws_subnet" "my_subnet" {
vpc_id = aws_vpc.my_vpc.id
cidr_block = "172.16.10.0/24"
availability_zone = "us-west-2a"
tags = {
Name = "tf-example"
}
}
resource "aws_network_interface" "foo" {
subnet_id = aws_subnet.my_subnet.id
private_ips = ["172.16.10.100"]
tags = {
Name = "primary_network_interface"
}
}
resource "aws_instance" "foo" {
ami = "ami-005e54dee72cc1d00" # us-west-2
instance_type = "t2.micro"
network_interface {
network_interface_id = aws_network_interface.foo.id
device_index = 0
}
credit_specification {
cpu_credits = "unlimited"
}
}
Another way of passing network config to the EC2 instance is to use subnet_id in the aws_instance resource, as suggested by others.
Is it possible that you deleted the default VPC? If you did, you can recreate it by going to the VPC section → Your VPCs → the Actions dropdown in the top-right corner → Create default VPC.
As per AWS's announcement last year, on August 15, 2022 all migrations were expected to be complete, with no remaining EC2-Classic resources present in any AWS account. From then on you need to specify a subnet_id while creating any new instances and declare it in your configuration.
Example:
resource "aws_instance" "test" {
ami = "ami-xxxxx"
instance_type = var.instance_type
vpc_security_group_ids = ["sg-xxxxxxx"]
subnet_id = "subnet-xxxxxx"

Create EKS node group of instance type Fargate with Terraform

With the eksctl CLI one can create an EKS cluster of type Fargate, which creates nodes of instance type "Fargate".
How can the same be achieved with Terraform? The cluster can be created with node groups, but instance type Fargate does not seem to exist (although eksctl creates it like that):
node_groups = {
  eks_nodes = {
    desired_capacity = 3
    max_capacity     = 3
    min_capacity     = 3
    instance_type    = "Fargate"
  }
}
Thanks!
Have you tried defining a Fargate profile first?
You must define at least one Fargate profile that specifies which pods should use Fargate when they are launched. You also need to create a pod execution role: the components running on the Fargate infrastructure make calls to AWS APIs on your behalf, to do things like pull container images from Amazon ECR or route logs to other AWS services, and they use this role to do so.
Terraform code for an AWS EKS Fargate profile looks like the following:
resource "aws_eks_fargate_profile" "default" {
cluster_name = var.cluster_name
fargate_profile_name = var.fargate_profile_name
pod_execution_role_arn = join("", aws_iam_role.default.arn)
subnet_ids = var.subnet_ids
tags = var.tags
selector {
namespace = var.kubernetes_namespace
labels = var.kubernetes_labels
}
}
Make sure you're using the aws_eks_fargate_profile resource to create the EKS Fargate profile.
Terraform code for the Fargate pod execution role looks like the following:
data "aws_iam_policy_document" "assume_role" {
statement {
effect = "Allow"
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["eks-fargate-pods.amazonaws.com"]
}
}
}
resource "aws_iam_role" "default" {
name = var.role_name
assume_role_policy = join("", data.aws_iam_policy_document.assume_role.json)
tags = var.tags
}
resource "aws_iam_role_policy_attachment" "amazon_eks_fargate_pod_execution_role_policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy"
role = join("", aws_iam_role.default.name)
}
I suggest you check some of the excellent examples from communities like Cloudposse.
They provide complete examples of a Fargate profile and an EKS node group, which seems to be the solution you need to deploy at this moment.
P.S. Try to read how they built the modules; I think you'll reach your goal quickly.
I hope this is useful for you and other users.
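For comparison, a plain (non-Fargate) managed node group is declared with the aws_eks_node_group resource rather than an instance type of "Fargate". A minimal sketch, with the IAM role and sizes as assumptions:
resource "aws_eks_node_group" "default" {
  cluster_name    = var.cluster_name
  node_group_name = "default"
  node_role_arn   = aws_iam_role.node.arn # assumes a separate node IAM role, not the Fargate pod execution role
  subnet_ids      = var.subnet_ids

  instance_types = ["t3.medium"] # regular EC2 instance types; "Fargate" is not a valid value here

  scaling_config {
    desired_size = 3
    max_size     = 3
    min_size     = 3
  }
}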

How to add resource dependencies in terraform

I have created a GCP Kubernetes cluster using Terraform and configured a few Kubernetes resources such as namespaces and Helm releases. I would like Terraform to automatically destroy/recreate all the Kubernetes cluster resources if the GCP cluster is destroyed/recreated, but I can't seem to figure out how to do it.
The behavior I am trying to reproduce is similar to what you would get if you used triggers with null_resource. Is this possible with normal resources?
resource "google_container_cluster" "primary" {
name = "marcellus-wallace"
location = "us-central1-a"
initial_node_count = 3
resource "kubernetes_namespace" "example" {
metadata {
annotations = {
name = "example-annotation"
}
labels = {
mylabel = "label-value"
}
name = "terraform-example-namespace"
#Something like this, but this only works with null_resources
triggers {
cluster_id = "${google_container_cluster.primary.id}"
}
}
}
In your specific case, you don't need to specify any explicit dependencies. They will be set automatically because you have cluster_id = "${google_container_cluster.primary.id}" in your second resource.
In cases where you need to set a manual dependency, you can use the depends_on meta-argument.
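For reference, an explicit dependency would look something like this (redundant in this specific case, since the attribute reference above already creates an implicit one):
resource "kubernetes_namespace" "example" {
  metadata {
    name = "terraform-example-namespace"
  }

  # Explicit ordering against the cluster; only needed when no attribute reference exists.
  depends_on = [google_container_cluster.primary]
}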

terraform aws_instance subnet_id - Error launching source instance: Unsupported: The requested configuration is currently not supported

I am trying to build a VPC and a subnet within that VPC, and then create an AWS instance within that subnet. Sounds simple, but the subnet_id parameter seems to break terraform apply (plan works just fine). Am I missing something?
Extract from main.tf
resource "aws_vpc" "poc-vpc" {
cidr_block = "10.0.0.0/16"
instance_tenancy = "dedicated"
enable_dns_hostnames = "true"
}
resource "aws_subnet" "poc-subnet" {
vpc_id = "${aws_vpc.poc-vpc.id}"
cidr_block = "10.0.1.0/24"
map_public_ip_on_launch = "true"
availability_zone = "${var.availability_zone}"
}
resource "aws_instance" "POC-Instance" {
ami = "${lookup(var.amis, var.region)}"
instance_type = "${var.instance_type}"
availability_zone = "${var.availability_zone}"
associate_public_ip_address = true
key_name = "Pipeline-POC-Key-Pair"
vpc_security_group_ids = ["${aws_security_group.poc-sec-group.id}"]
subnet_id = "${aws_subnet.poc-subnet.id}"
}
If I remove the subnet_id, apply works, but the instance is created in my default VPC. This is not the aim.
Any help would be appreciated. I am a newbie to Terraform, so please be gentle.
I worked this out and wanted to post it up to hopefully save others some time.
The issue is a conflict between subnet_id in the aws_instance resource and instance_tenancy in the aws_vpc resource. Remove instance_tenancy (or set it to default) and all is fixed.
The error message is meaningless; I've asked whether it can be improved.
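For reference, the aws_vpc block from the question with the tenancy fix applied:
resource "aws_vpc" "poc-vpc" {
  cidr_block           = "10.0.0.0/16"
  instance_tenancy     = "default" # or simply omit this argument
  enable_dns_hostnames = "true"
}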
It's also possible that there is a conflict elsewhere in your configuration. I encountered the same Unsupported error for a different reason.
I used AMI ami-0ba5dfee72d5bb9a1, which I found at https://cloud-images.ubuntu.com/locator/ec2/
I just chose anything that is in the same region as my VPC.
Apparently that AMI only supports a* instance types and doesn't support t* or m* instance types.
So I would double-check that:
Your AMI is compatible with your instance type.
Your AMI is in the same region as your VPC or subnet (an AMI lookup like the one sketched below can help here).
There is no other conflicting configuration.
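A data source lookup is one way to avoid hard-coding an AMI ID that may not exist in your region; a sketch using Canonical's public Ubuntu images (the filter values are the standard example and may need adjusting for your case):
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }
}
You would then reference it as in the earlier example: ami = data.aws_ami.ubuntu.id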
The problem is that the Terraform example you likely copied is wrong.
In most cases the culprit is the VPC instance_tenancy property.
Change the VPC's instance_tenancy = "dedicated" to instance_tenancy = "default".
It should then work for any EC2 instance type.
The reason is that dedicated instances are supported only for m5.large or bigger instances, so the VPC and EC2 instance types are in conflict if you are actually creating smaller instances like t3.small or t2.micro. You can look at dedicated instance support here:
https://aws.amazon.com/ec2/pricing/dedicated-instances/
If you want to create your own VPC network and not use the default one, then you also need to create a route table and an internet gateway so you can have access to the created EC2 instance. You will also need to add the following config to create a full VPC network with EC2 instances reachable on the public IP you assigned:
# Internet GW
resource "aws_internet_gateway" "main-gw" {
  vpc_id = "${aws_vpc.poc-vpc.id}"

  tags {
    Name = "poc-vpc"
  }
}

# route tables
resource "aws_route_table" "main-public" {
  vpc_id = "${aws_vpc.poc-vpc.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.main-gw.id}"
  }

  tags {
    Name = "main route"
  }
}

# route associations public
resource "aws_route_table_association" "main-public-1-a" {
  subnet_id      = "${aws_subnet.poc-subnet.id}"
  route_table_id = "${aws_route_table.main-public.id}"
}
