Creating a "random" instance with Terraform - autocreate valid configurations

I'm new to Terraform and would like to create "random" instances.
Some settings, like the OS, setup script, ... will stay the same; mostly the region/zone would change.
How can I do that?
It seems Terraform already knows which combinations are valid. For example, with AWS EC2 or Lightsail it will complain if you choose a wrong combination. I guess this reduces the amount of work, though I'm wondering whether this holds for every provider.
How could you automatically create a valid configuration, with only the region or zone changing each time Terraform runs?
Edit: Config looks like:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  # profile = "default"
  # region  = "us-west-2"
  access_key = ...
  secret_key = ...
}

resource "aws_instance" "example" {
  ami           = "ami-830c94e3"
  instance_type = "t2.micro"
}

Using AWS as an example, aws_instance has two required parameters: ami and instance_type.
Thus to create an instance, you need to provide both of them:
resource "aws_instance" "my" {
ami = "ami-02354e95b39ca8dec"
instance_type = "t2.micro"
}
Everything else will be deduced or set to its default value. As for availability zones and subnets, if they are not explicitly specified they will be chosen "randomly" (AWS decides how to place the instances, so in fact they can all end up in the same AZ).
Thus, to create 3 instances in different subnets and AZs you can simply do:
provider "aws" {
region = "us-east-1"
}
data "aws_ami" "al2_ami" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["amzn2-ami-hvm*"]
}
}
resource "aws_instance" "my" {
count = 3
ami = data.aws_ami.al2_ami.id
instance_type = "t2.micro"
}
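If you want to guarantee that the instances land in different availability zones rather than leaving the placement to AWS, one option is to pin each instance to an AZ explicitly. A minimal sketch, reusing the aws_ami data source above and the aws_availability_zones data source (the resource names here are illustrative):
data "aws_availability_zones" "available" {
  state = "available"
}

resource "aws_instance" "spread" {
  count         = 3
  ami           = data.aws_ami.al2_ami.id
  instance_type = "t2.micro"

  # Cycle through the available AZs so each instance lands in a different one.
  availability_zone = element(data.aws_availability_zones.available.names, count.index)
}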

A declarative system like Terraform unfortunately isn't very friendly to randomness, because it expects the system to converge on a desired state, but random configuration would mean that the desired state would change on each action and thus it would never converge. Where possible I would recommend using "randomization" or "distribution" mechanisms built in to your cloud provider, such as AWS autoscaling over multiple subnets.
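For example, a minimal sketch of that approach, assuming var.subnet_ids lists subnets in different AZs and using a placeholder AMI (the names here are illustrative, not part of the original question):
resource "aws_launch_template" "app" {
  name_prefix   = "app-"
  image_id      = "ami-02354e95b39ca8dec" # placeholder AMI
  instance_type = "t2.micro"
}

resource "aws_autoscaling_group" "app" {
  desired_capacity    = 3
  min_size            = 3
  max_size            = 3
  vpc_zone_identifier = var.subnet_ids # subnets spread over multiple AZs (assumed variable)

  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }
}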
However, pragmatically, Terraform does have a random provider, which models the generation of random numbers as a funny sort of Terraform resource, so that the random results can be preserved from one run to the next in the same way Terraform remembers the ID of an EC2 instance between runs.
The random_shuffle resource can be useful for this sort of "choose any one (or N) of these options" situation.
Taking your example of randomly choosing AWS regions and availability zones, the first step would be to enumerate all of the options your random choice can choose from:
locals {
  possible_regions = toset([
    "us-east-1",
    "us-east-2",
    "us-west-1",
    "us-west-2",
  ])

  possible_availability_zones = tomap({
    "us-east-1" = toset(["a", "b", "e"])
    "us-east-2" = toset(["a", "c"])
    "us-west-1" = toset(["a", "b"])
    "us-west-2" = toset(["b", "c"])
  })
}
You can then pass these inputs into random_shuffle resources to select, for example, one region and then two availability zones from that region:
resource "random_shuffle" "region" {
input = local.possible_regions
result_count = 1
}
resource "random_shuffle" "availability_zones" {
input = local.possible_availability_zones[local.chosen_region]
result_count = 2
}
locals {
local.chosen_region = random_shuffle.region.result[0]
local.chosen_availability_zones = random_shuffle.availability_zones.result
}
You can then use local.chosen_region and local.chosen_availability_zones elsewhere in your configuration.
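For example, one of the chosen AZ letters can be combined with the chosen region to form a full AZ name. A hypothetical resource purely to illustrate the concatenation, since the map above stores only single-letter suffixes and the AMI is a placeholder:
resource "aws_instance" "example" {
  ami               = "ami-02354e95b39ca8dec" # placeholder AMI
  instance_type     = "t2.micro"
  availability_zone = "${local.chosen_region}${local.chosen_availability_zones[0]}"
}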
However, there is one important catch with randomly selecting regions in particular: the AWS provider is designed to require a region, because each AWS region is an entirely distinct set of endpoints, and so the provider won't be able to successfully configure itself if the region isn't known until the apply step, as would be the case if you wrote region = local.chosen_region in the provider configuration.
To work around this, you will need to use the exceptional-use-only -target option to terraform apply, directing Terraform to first focus only on generating the random region and to ignore everything else until that has succeeded:
# First apply with just the random region targeted
terraform apply -target=random_shuffle.region
# After that succeeds, run apply again normally to
# create everything else.
terraform apply
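For completeness, the provider configuration that this two-step workflow is meant to support would then reference the chosen region (a sketch; the value is only known after the first targeted apply):
provider "aws" {
  region = local.chosen_region
}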

Related

terraform: ec2 machine change does not trigger dns change

I've got simple code for setting up many EC2 machines. It does not update the DNS record after a machine is changed; I need to run it a second time, and only then will the DNS be changed. What am I doing wrong?
resource "aws_instance" "ec2" {
for_each = var.instances
ami = each.value.ami
instance_type = each.value.type
ebs_optimized = true
}
resource "cloudflare_record" "web" {
for_each = var.instances
zone_id = var.cf_zone_id
name = "${each.key}.${var.env}.aws.${var.domain}."
value = aws_instance.ec2[each.key].public_ip
type = "A"
ttl = 1
depends_on = [
aws_instance.ec2
]
}
So, one thing about Terraform is that it provisions your infrastructure, but whatever happens to your infrastructure between your latest apply and now won't be reflected in the Terraform state file. If there is indeed a change in your infrastructure, you will see it in the next Terraform plan. There is nothing wrong with what you're doing; you might just have misunderstood how Terraform works. There is a neat concept called "Immutable Infrastructure". Read up on it in HashiCorp's blog: https://www.hashicorp.com/resources/what-is-mutable-vs-immutable-infrastructure

Create EKS node group of instance type Fargate with Terraform

With the eksctl CLI one can create an EKS cluster of type Fargate, which creates nodes of instance type "Fargate".
How can the same be achieved with Terraform? The cluster can be created with node groups, but the instance type Fargate does not seem to exist (although eksctl creates it like that):
node_groups = {
  eks_nodes = {
    desired_capacity = 3
    max_capacity     = 3
    min_capacity     = 3
    instance_type    = "Fargate"
  }
}
Thanks!
Have you tried to define a Fargate profile first?
You must define at least one Fargate profile that specifies which pods should use Fargate when they are launched. You also need to create a pod execution role, because the components running on the Fargate infrastructure make calls to AWS APIs on your behalf to do things like pull container images from Amazon ECR or route logs to other AWS services.
The Terraform code for an AWS EKS Fargate profile looks like the following:
resource "aws_eks_fargate_profile" "default" {
cluster_name = var.cluster_name
fargate_profile_name = var.fargate_profile_name
pod_execution_role_arn = join("", aws_iam_role.default.arn)
subnet_ids = var.subnet_ids
tags = var.tags
selector {
namespace = var.kubernetes_namespace
labels = var.kubernetes_labels
}
}
Make sure you're using the aws_eks_fargate_profile resource to create an eks fargate profile.
The Terraform code for the Fargate pod execution role looks like the following:
data "aws_iam_policy_document" "assume_role" {
statement {
effect = "Allow"
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["eks-fargate-pods.amazonaws.com"]
}
}
}
resource "aws_iam_role" "default" {
name = var.role_name
assume_role_policy = join("", data.aws_iam_policy_document.assume_role.json)
tags = var.tags
}
resource "aws_iam_role_policy_attachment" "amazon_eks_fargate_pod_execution_role_policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy"
role = join("", aws_iam_role.default.name)
}
I suggest you check some examples from communities like Cloudposse. They have a complete example of a Fargate profile and an EKS node group, which seems to be the solution you need to deploy at this moment.
PS: Try to read how they built the modules; I think you'll reach your goal quickly.
I hope this is useful for you and other users.

Reference multiple aws_instance in terraform for template output

We want to deploy services into several regions.
It looks like, because of the AWS provider, we can't just use count or for_each, as the provider can't be interpolated. Thus I need to set this up manually:
resource "aws_instance" "app-us-west-1" {
provider = aws.us-west-1
#other stuff
}
resource "aws_instance" "app-us-east-1" {
provider = aws.us-east-1
#other stuff
}
I would like, when running this, to create a file which contains all the created IPs (for an Ansible inventory).
I was looking at this answer:
https://stackoverflow.com/a/61788089/169252
and trying to adapt it for my case:
resource "local_file" "app-hosts" {
content = templatefile("${path.module}/templates/app_hosts.tpl",
{
hosts = aws_instance[*].public_ip
}
)
filename = "app-hosts.cfg"
}
And then setting up the template accordingly.
But this fails:
Error: Invalid reference
on app.tf line 144, in resource "local_file" "app-hosts":
122: hosts = aws_instance[*].public_ip
A reference to a resource type must be followed by at least one attribute
access, specifying the resource name
I suspect that I can't just reference all the aws_instance resources defined above like this. Maybe referring to all aws_instance resources in this file requires a different syntax.
Or maybe I need to use a module somehow. Can someone confirm this?
Using Terraform v0.12.24.
EDIT: The provider definitions use alias and it's all in the same app.tf, which I naively assumed I could apply in one go with terraform apply (did I mention I am a beginner with Terraform?):
provider "aws" {
alias = "us-east-1"
region = "us-east-1"
}
provider "aws" {
alias = "us-west-1"
region = "us-west-1"
}
My current workaround is to not do a join but simply to list them all individually:
{
  host1 = aws_instance.app-us-west-1.public_ip
  host2 = aws_instance.app-us-east-1.public_ip
  # more hosts
}
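For reference, a sketch of how that workaround slots into the local_file resource from earlier, assuming the map is passed directly as the template variables:
resource "local_file" "app-hosts" {
  content = templatefile("${path.module}/templates/app_hosts.tpl",
    {
      host1 = aws_instance.app-us-west-1.public_ip
      host2 = aws_instance.app-us-east-1.public_ip
      # more hosts
    }
  )
  filename = "app-hosts.cfg"
}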

Terraform EKS tagging

I am having an issue with Terraform EKS tagging and can't seem to find a workable solution to tag all the VPC subnets when a new cluster is created.
To provide some context: we have one AWS VPC into whose subnets we deploy several EKS clusters. We do not create the VPC or subnets as part of the EKS cluster creation, so the Terraform code creating a cluster doesn't get to tag the existing subnets and VPC. Although EKS will add the required tags, they are automatically removed the next time we run terraform apply on the VPC.
My attempted workaround is to provide a terraform.tfvars file within the VPC as follows:
eks_tags = [
  "kubernetes.io/cluster/${var.cluster-1}", "shared",
  "kubernetes.io/cluster/${var.cluster-2}", "shared",
  "kubernetes.io/cluster/${var.cluster-2}", "shared",
]
Then within the VPC and subnet resources, we do something like:
resource "aws_vpc" "demo" {
  cidr_block = "10.0.0.0/16"
  tags = "${
    map(
      ${var.eks_tags}
    )
  }"
}
However, the above does not seem to work. I have tried various Terraform 0.11 functions from https://www.terraform.io/docs/configuration-0-11/interpolation.html but none of them help.
Has anyone been able to resolve this issue?
The idea that we always create a new VPC and subnets for every EKS cluster is wrong. Obviously, there has to be a way to tag existing VPC and subnet resources using Terraform?
You can now use the AWS provider's ignore_tags attribute so that the tags made with the aws_ec2_tag resource do not get removed the next time the VPC module is applied.
For example, the provider becomes:
provider "aws" {
profile = "terraform"
region = "us-west-1"
// This is necessary so that tags required for eks can be applied to the vpc without changes to the vpc wiping them out.
// https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/resource-tagging
ignore_tags {
key_prefixes = ["kubernetes.io/"]
}
}
You can then leverage the aws_ec2_tag resource like so in your EKS module, without worrying about the tag getting removed the next time the VPC module is applied.
/**
  Start of resource tagging logic to update the provided VPC and its subnets with the necessary tags for EKS to work.
  The for_each over toset() effectively multiplexes the resource block, one instance per item in the set. It is what
  allows setting a tag on each of the subnets in the VPC.
*/
resource "aws_ec2_tag" "vpc_tag" {
resource_id = data.terraform_remote_state.vpc.outputs.vpc_id
key = "kubernetes.io/cluster/${var.cluster_name}"
value = "shared"
}
resource "aws_ec2_tag" "private_subnet_cluster_tag" {
for_each = toset(data.terraform_remote_state.vpc.outputs.private_subnets)
resource_id = each.value
key = "kubernetes.io/cluster/${var.cluster_name}"
value = "shared"
}
resource "aws_ec2_tag" "public_subnet_cluster_tag" {
for_each = toset(data.terraform_remote_state.vpc.outputs.public_subnets)
resource_id = each.value
key = "kubernetes.io/cluster/${var.cluster_name}"
value = "shared"
}
/**
These tags have been decoupled from the eks module and moved to the more appropirate vpc module.
*/
resource "aws_ec2_tag" "private_subnet_tag" {
for_each = toset(data.terraform_remote_state.vpc.outputs.private_subnets)
resource_id = each.value
key = "kubernetes.io/role/internal-elb"
value = "1"
}
resource "aws_ec2_tag" "public_subnet_tag" {
for_each = toset(data.terraform_remote_state.vpc.outputs.public_subnets)
resource_id = each.value
key = "kubernetes.io/role/elb"
value = "1"
}
In our case we have separate scripts to provision the VPC and networking resources, and there we do not add EKS-specific tags.
For EKS cluster provisioning we have separate scripts which automatically add/update the tags on the cluster.
So in the VPC scripts' provider.tf file we add the condition below, so that those scripts will not remove these tags and everything works properly.
provider "aws" {
region = "us-east-1"
ignore_tags {
key_prefixes = ["kubernetes.io/cluster/"]
}
}
This problem will always exist when there are two pieces of code with different state files trying to act on the same resource.
One way to solve this is to re-import the VPC resource into your VPC state file every time you apply your EKS Terraform code. This will import your tags as well. The same goes for the subnets, but it is a manual and tedious process in the long run.
terraform import aws_vpc.test_vpc vpc-a01106c2
Ref: https://www.terraform.io/docs/providers/aws/r/vpc.html
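The same command works for each subnet; the resource address and subnet ID below are hypothetical:
terraform import aws_subnet.private_a subnet-0123456789abcdef0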
Cheers!

The Terraform resource "random_pet" is not working

This code will create an EC2 instance with the name "EC2 Instance":
provider "aws" {
region = "eu-west-1"
}
module ec2 {
source = "./ec2_instance"
name = "EC2 Instance"
}
However, if I try to use the random_pet resource, the instance name becomes an empty string.
provider "aws" {
region = "eu-west-1"
}
resource "random_pet" "server" {
length = 4
}
module ec2 {
source = "./ec2_instance"
name = "${random_pet.server.id}"
}
How come?
I'm using the random_pet.server.id code from https://www.terraform.io/docs/providers/random/r/pet.html
UPDATE: By using an output I was able to debug this.
Terraform does not show the value of this resource during a plan. However, when doing an apply it did successfully populate the value (and therefore name the instance). The question then becomes: why does it not work during plan but does during apply?
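For reference, a minimal output of the kind used for that debugging (the output name is illustrative). During a plan the value shows as "known after apply", because random_pet only generates its result when the resource is actually created during apply:
output "pet_name" {
  value = random_pet.server.id
}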
