terraform: ec2 machine change does not trigger dns change

I've got simple code that sets up a number of EC2 machines. It does not update the DNS record after a machine is changed; I need to run it a second time, and only then is the DNS record updated. What am I doing wrong?
resource "aws_instance" "ec2" {
for_each = var.instances
ami = each.value.ami
instance_type = each.value.type
ebs_optimized = true
}
resource "cloudflare_record" "web" {
for_each = var.instances
zone_id = var.cf_zone_id
name = "${each.key}.${var.env}.aws.${var.domain}."
value = aws_instance.ec2[each.key].public_ip
type = "A"
ttl = 1
depends_on = [
aws_instance.ec2
]
}

So, one thing about Terraform is that it provisions your infrastructure, but whatever happens to your infrastructure between your latest apply and now won't be reflected in the Terraform state file. If there is indeed a change in your infrastructure, then you will see it in the next Terraform plan. There is nothing wrong with what you're doing; you might just have misunderstood Terraform. There is a neat concept called "Immutable Infrastructure". Read up on it in HashiCorp's blog: https://www.hashicorp.com/resources/what-is-mutable-vs-immutable-infrastructure
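If you want to see that drift without changing anything, a refresh-only plan (available in Terraform v0.15.4 and later) will show what changed in AWS since the last apply; the DNS record is then updated on the next normal apply. A minimal sketch of that workflow:
# Show what changed in AWS since the last apply, without proposing any changes
terraform plan -refresh-only
# A normal plan/apply will then propose updating cloudflare_record.web with the new public_ip
terraform plan
terraform apply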

Related

When to use ebs_block_device?

I want to create an EC2 instance with Terraform. This instance should have some EBS.
In the documentation I read that Terraform provides two ways to create an EBS:
ebs_block_device
aws_ebs_volume with aws_volume_attachment
I want to know, when should I use ebs_block_device?
Documentation
Unfortunately the documentation isn't that clear (at least for me) about:
When to use ebs_block_device?
What is the exact, actual behavior?
See Resource: aws_instance:
ebs_block_device - (Optional) One or more configuration blocks with additional EBS block devices to attach to the instance. Block device configurations only apply on resource creation. See Block Devices below for details on attributes and drift detection. When accessing this as an attribute reference, it is a set of objects.
and
Currently, changes to the ebs_block_device configuration of existing resources cannot be automatically detected by Terraform. To manage changes and attachments of an EBS block to an instance, use the aws_ebs_volume and aws_volume_attachment resources instead. If you use ebs_block_device on an aws_instance, Terraform will assume management over the full set of non-root EBS block devices for the instance, treating additional block devices as drift. For this reason, ebs_block_device cannot be mixed with external aws_ebs_volume and aws_volume_attachment resources for a given instance.
Research
I read:
No change when modifying aws_instance.ebs_block_device.volume_size, which says that Terraform doesn't show any changes with plan/apply and doesn't change anything in AWS, although changes were made.
AWS "ebs_block_device.0.volume_id": this field cannot be set, which says that Terraform shows an error while running plan.
Ebs_block_device forcing replacement every terraform apply, which says that Terraform replaces all EBS.
aws_instance dynamic ebs_block_device forces replacement, which says that Terraform replaces all EBS, although no changes were made.
adding ebs_block_device to existing aws_instance forces unneccessary replacement, which says that Terraform replaces the whole EC2 instance with all EBS.
aws_instance dynamic ebs_block_device forces replacement, which says that Terraform replaces the whole EC2 instance with all EBS, although no changes were made.
I know that the issues are about different versions of Terraform and of the Terraform AWS provider, and some issues are already fixed, but what is the actual intended behavior?
In almost all issues the workaround/recommendation is to use aws_ebs_volume with aws_volume_attachment instead of ebs_block_device.
Question
When should I use ebs_block_device? What is the use case for this feature?
When should I use ebs_block_device?
When you need a volume other than the root volume, because:
Unlike the data stored on a local instance store (which persists only
as long as that instance is alive), data stored on an Amazon EBS
volume can persist independently of the life of the instance.
When you launch an instance, the root device volume contains the image used to boot the instance.
Instances that use Amazon EBS for the root device automatically have
an Amazon EBS volume attached. When you launch an Amazon EBS-backed
instance, we create an Amazon EBS volume for each Amazon EBS snapshot
referenced by the AMI you use
Here's an example of an EC2 instance with an additional EBS volume.
provider "aws" {
  region = "eu-central-1"
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  ebs_block_device {
    device_name           = "/dev/sdb" # additional (non-root) device
    volume_size           = 8
    volume_type           = "gp3"
    throughput            = 125
    delete_on_termination = false
  }
}
Note: delete_on_termination for root_block_device is set to true by default.
You can read more on AWS block device mapping here.
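If you want the root volume itself to survive instance termination, you can override that default with a root_block_device block; a minimal sketch (same placeholder AMI as above):
resource "aws_instance" "example_keep_root" {
  ami           = "ami-0c55b159cbfafe1f0" # placeholder AMI
  instance_type = "t2.micro"

  root_block_device {
    volume_size           = 8
    volume_type           = "gp3"
    delete_on_termination = false # keep the root volume after the instance is destroyed
  }
}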
EDIT:
aws_volume_attachment is used when you want to attach an existing EBS volume to an EC2 instance. It helps to manage the relationship between the volume and the instance, and ensure that the volume is attached to the desired instance in the desired state.
Here's an example usage:
resource "aws_volume_attachment" "ebs_att" {
device_name = "/dev/sdh"
volume_id = aws_ebs_volume.example.id
instance_id = aws_instance.web.id
}
resource "aws_instance" "web" {
ami = "ami-21f78e11"
availability_zone = "us-west-2a"
instance_type = "t2.micro"
tags = {
Name = "HelloWorld"
}
}
resource "aws_ebs_volume" "example" {
availability_zone = "us-west-2a"
size = 1
}
ebs_block_device, on the other hand, is used when you want to create a new EBS volume and attach it to an EC2 instance at the same time as the instance is being created.
NOTE:
If you use ebs_block_device on an aws_instance, Terraform will assume
management over the full set of non-root EBS block devices for the
instance, and treats additional block devices as drift. For this
reason, ebs_block_device cannot be mixed with external aws_ebs_volume + aws_volume_attachment resources for a given instance.
Source
I strongly suggest using only the aws_ebs_volume resource (together with aws_volume_attachment). When creating an instance, the root block device is created automatically; any extra EBS storage is best managed by Terraform as independent resources.
Why?
Basically you have 2 choices to create an instance with 1 extra disk:
resource "aws_instance" "instance" {
ami = "ami-xxxx"
instance_type = "t4g.micro"
#... other arguments ...
ebs_block_device {
volume_size = 10
volume_type = "gp3"
#... other arguments ...
}
}
OR
resource "aws_instance" "instance" {
ami = "ami-xxxx"
instance_type = "t4g.micro"
#... other arguments ...
}
resource "aws_ebs_volume" "volume" {
size = 10
type = "gp3"
}
resource "aws_volume_attachment" "attachment" {
volume_id = aws_ebs_volume.volume.id
instance_id = aws_instance.instance.id
device_name = "/dev/sdb"
}
The first method is more compact, creates fewer Terraform resources and makes terraform import easier. But what happens if you need to recreate your instance? Terraform will remove the instance and redeploy it from scratch with new volumes. If you set delete_on_termination to false, the volumes will still exist, but they won't be attached to your instance.
On the contrary, when using a dedicated resource, recreating the instance will recreate the attachments (because the instance ID changes) and then reattach your existing volumes to the instance, which is what we need 90% of the time.
Also, if at some point you need to manipulate your volume in the Terraform state (terraform state commands), it will be much easier to do it on the individual resource aws_ebs_volume.
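For example, with the dedicated-resource layout above, inspecting or renaming a volume in state is a single-address operation (the addresses here match the example above):
# Inspect the volume that Terraform manages as its own resource
terraform state show aws_ebs_volume.volume
# Rename it in state without touching the real volume in AWS
terraform state mv aws_ebs_volume.volume aws_ebs_volume.data_volume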
Finally, at some point in your Terraform journey, you will want to industrialize your code by adding loops, variables and so on. A common use case is to make the number of volumes variable: you provide a list of volumes and Terraform creates 1, 2 or 10 volumes according to this list.
And for this you also have 2 options:
variable "my_volume" { map(any) }
my_volume = {
"/dev/sdb": {
"size": 10
"type": "gp3"
}
}
resource "aws_instance" "instance" {
ami = "ami-xxxx"
instance_type = "t4g.micro"
#... other arguments ...
dynamic "ebs_block_device" {
for_each = var.my_volumes
content {
volume_size = ebs_block_device.value["size"]
volume_type = ebs_block_device.value["type"]
#... other arguments ...
}
}
}
OR
resource "aws_instance" "instance" {
ami = "ami-xxxx"
instance_type = "t4g.micro"
#... other arguments ...
}
resource "aws_ebs_volume" "volume" {
for_each = var.
size = 10
type = "gp3"
}
resource "aws_volume_attachment" "attachment" {
volume_id = aws_ebs_volume.volume.id
instance_id = aws_instance.instance.id
device_name = "/dev/sdb"
}

Terraform simple script says "Error: Error launching source instance: VPCIdNotSpecified: No default VPC for this user"

Getting started on Terraform. I am trying to provision an EC2 instance using the following .tf file. I already have a default VPC in my account in the AZ in which I am trying to provision the EC2 instance.
# Terraform Settings Block
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      #version = "~> 3.21" # Optional but recommended in production
    }
  }
}

# Provider Block
provider "aws" {
  profile = "default"
  region  = "us-east-1"
}

# Resource Block
resource "aws_instance" "ec2demo" {
  ami           = "ami-c998b6b2"
  instance_type = "t2.micro"
}
I run the following Terraform commands:
terraform init
terraform plan
terraform apply
aws_instance.ec2demo: Creating...
Error: Error launching source instance: VPCIdNotSpecified: No default VPC for this user. GroupName is only supported for EC2-Classic and default VPC.
status code: 400, request id: 04274b8c-9fc2-47c0-8d51-5b627e6cf7cc
on ec2-instance.tf line 18, in resource "aws_instance" "ec2demo":
18: resource "aws_instance" "ec2demo" {
As the error suggests, it doesn't find a default VPC in the us-east-1 region.
You can provide the subnet_id within your VPC to create your instance as below.
resource "aws_instance" "ec2demo" {
ami = "ami-c998b6b2"
instance_type = "t2.micro"
subnet_id = "subnet-0b1250d733767bafe"
}
I just created a default VPC in AWS: in the VPC console, go to Actions -> Create default VPC. Once it's done, try again:
terraform plan
terraform apply
As of now, if there is a default VPC available in the AWS account, an EC2 instance can be created with the terraform resource aws_instance without any network spec input.
Official AWS-terraform example:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance#basic-example-using-ami-lookup
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu.id ## Or use static AMI ID for testing.
instance_type = "t3.micro"
}
The error message Error: Error launching source instance: VPCIdNotSpecified: No default VPC for this user. means that EC2 did not find any networking configuration in your Terraform code telling it where to create the instance.
This is probably because of the missing default VPC in your AWS account, and it seems that you are not passing any network config input to the terraform resource.
Basically, you have two ways to fix this:
Create a default VPC and then use the same code.
Document: https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html#create-default-vpc
Another, better way would be to inject the network config into the aws_instance resource. I have used the example from the official aws_instance documentation; feel free to update any attributes accordingly.
resource "aws_vpc" "my_vpc" {
cidr_block = "172.16.0.0/16"
tags = {
Name = "tf-example"
}
}
resource "aws_subnet" "my_subnet" {
vpc_id = aws_vpc.my_vpc.id
cidr_block = "172.16.10.0/24"
availability_zone = "us-west-2a"
tags = {
Name = "tf-example"
}
}
resource "aws_network_interface" "foo" {
subnet_id = aws_subnet.my_subnet.id
private_ips = ["172.16.10.100"]
tags = {
Name = "primary_network_interface"
}
}
resource "aws_instance" "foo" {
ami = "ami-005e54dee72cc1d00" # us-west-2
instance_type = "t2.micro"
network_interface {
network_interface_id = aws_network_interface.foo.id
device_index = 0
}
credit_specification {
cpu_credits = "unlimited"
}
}
Another way of passing network config to an EC2 instance is to use subnet_id in the aws_instance resource, as suggested by others.
Is it possible that you deleted the default VPC? If you did, you can recreate it: go to the VPC section -> Your VPCs -> in the top-right corner open the Actions dropdown and select "Create default VPC".
As per the AWS announcement last year: "On August 15, 2022 we expect all migrations to be complete, with no remaining EC2-Classic resources present in any AWS account." From now on you will need to specify the subnet_id while creating any new instances.
Example :
resource "aws_instance" "test" {
ami = "ami-xxxxx"
instance_type = var.instance_type
vpc_security_group_ids = ["sg-xxxxxxx"]
subnet_id = "subnet-xxxxxx"

Creating a "random" instance with Terraform - autocreate valid configurations

I'm new to Terraform and would like to create "random" instances.
Some settings like OS, setup script ... will stay the same. Mostly the region/zone would change.
How can I do that?
It seems Terraform already knows about which combinations are valid. For example with AWS EC2 or lightsail it will complain if you choose a wrong combination. I guess this will reduce the amount of work. I'm wondering though if this is valid for each provider.
How could you automatically create a valid configuration, with only the region or zone changing each time Terraform runs?
Edit: Config looks like:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  # profile = "default"
  # region = "us-west-2"
  access_key = ...
  secret_key = ...
}

resource "aws_instance" "example" {
  ami           = "ami-830c94e3"
  instance_type = "t2.micro"
}
Using AWS as an example, aws_instance has two required parameters: ami and instance_type.
Thus to create an instance, you need to provide both of them:
resource "aws_instance" "my" {
ami = "ami-02354e95b39ca8dec"
instance_type = "t2.micro"
}
Everything else will be deduced or set to its default value. In terms of availability zones and subnets, if not explicitly specified, they will be chosen "randomly" (AWS decides how to place them, so in fact they can all be in one AZ).
Thus, to create 3 instances in different subnets and AZs you can do simply:
provider "aws" {
region = "us-east-1"
}
data "aws_ami" "al2_ami" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["amzn2-ami-hvm*"]
}
}
resource "aws_instance" "my" {
count = 3
ami = data.aws_ami.al2_ami.id
instance_type = "t2.micro"
}
A declarative system like Terraform unfortunately isn't very friendly to randomness, because it expects the system to converge on a desired state, but random configuration would mean that the desired state would change on each action and thus it would never converge. Where possible I would recommend using "randomization" or "distribution" mechanisms built in to your cloud provider, such as AWS autoscaling over multiple subnets.
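As a rough sketch of that provider-side approach (assuming a launch template and a list of subnet IDs, here var.subnet_ids, already exist elsewhere in the configuration), an autoscaling group spread over several subnets lets AWS decide placement for you:
resource "aws_autoscaling_group" "example" {
  desired_capacity = 3
  min_size         = 3
  max_size         = 3

  # One subnet per availability zone; AWS chooses where each instance lands.
  vpc_zone_identifier = var.subnet_ids # assumed to be defined elsewhere

  launch_template {
    id      = aws_launch_template.example.id # assumed to exist
    version = "$Latest"
  }
}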
However, to be pragmatic Terraform does have a random provider, which represents the generation of random numbers as a funny sort of Terraform resource so that the random results can be preserved from one run to the next, in the same way as Terraform remembers the ID of an EC2 instance from one run to the next.
The random_shuffle resource can be useful for this sort of "choose any one (or N) of these options" situation.
Taking your example of randomly choosing AWS regions and availability zones, the first step would be to enumerate all of the options your random choice can choose from:
locals {
  possible_regions = toset([
    "us-east-1",
    "us-east-2",
    "us-west-1",
    "us-west-2",
  ])

  possible_availability_zones = tomap({
    "us-east-1" = toset(["a", "b", "e"])
    "us-east-2" = toset(["a", "c"])
    "us-west-1" = toset(["a", "b"])
    "us-west-2" = toset(["b", "c"])
  })
}
You can then pass these inputs into random_shuffle resources to select, for example, one region and then two availability zones from that region:
resource "random_shuffle" "region" {
input = local.possible_regions
result_count = 1
}
resource "random_shuffle" "availability_zones" {
input = local.possible_availability_zones[local.chosen_region]
result_count = 2
}
locals {
local.chosen_region = random_shuffle.region.result[0]
local.chosen_availability_zones = random_shuffle.availability_zones.result
}
You can then use local.chosen_region and local.chosen_availability_zones elsewhere in your configuration.
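For example, a hypothetical subnet that uses both values (the VPC resource is assumed to exist elsewhere):
resource "aws_subnet" "example" {
  vpc_id     = aws_vpc.example.id # assumed to exist
  cidr_block = cidrsubnet(aws_vpc.example.cidr_block, 8, 0)

  # e.g. "us-west-2" + "b" => "us-west-2b"
  availability_zone = "${local.chosen_region}${local.chosen_availability_zones[0]}"
}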
However, there is one important catch with randomly selecting regions in particular: the AWS provider is designed to require a region, because each AWS region is an entirely distinct set of endpoints, and so the provider won't be able to successfully configure itself if the region isn't known until the apply step, as would be the case if you wrote region = local.chosen_region in the provider configuration.
To work around this will require using the exceptional-use-only -target option to terraform apply, to direct Terraform to first focus only on generating the random region, and ignore everything else until that has succeeded:
# First apply with just the random region targeted
terraform apply -target=random_shuffle.region
# After that succeeds, run apply again normally to
# create everything else.
terraform apply

Terraform Invalid count argument that depends on another resource

I'm getting the following error when trying to do a plan or an apply on a terraform script.
Error: Invalid count argument
on main.tf line 157, in resource "azurerm_sql_firewall_rule" "sqldatabase_onetimeaccess_firewall_rule":
157: count = length(split(",", azurerm_app_service.app_service.possible_outbound_ip_addresses))
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
I understand this is falling over because it doesn't know the count for the number of firewall rules to create until the app_service is created. I can just run the apply with an argument of -target=azurerm_app_service.app_service then run another apply after the app_service is created.
However, this isn't great for our CI process, if we want to create a whole new environment from our terraform scripts we'd like to just tell terraform to just go build it without having to tell it each target to build in order.
Is there a way in terraform to just say go build everything that is needed in order without having to add targets?
Also below is an example terraform script that gives the above error:
provider "azurerm" {
version = "=1.38.0"
}
resource "azurerm_resource_group" "resourcegroup" {
name = "rg-stackoverflow60187000"
location = "West Europe"
}
resource "azurerm_app_service_plan" "service_plan" {
name = "plan-stackoverflow60187000"
resource_group_name = azurerm_resource_group.resourcegroup.name
location = azurerm_resource_group.resourcegroup.location
kind = "Linux"
reserved = true
sku {
tier = "Standard"
size = "S1"
}
}
resource "azurerm_app_service" "app_service" {
name = "app-stackoverflow60187000"
resource_group_name = azurerm_resource_group.resourcegroup.name
location = azurerm_resource_group.resourcegroup.location
app_service_plan_id = azurerm_app_service_plan.service_plan.id
site_config {
always_on = true
app_command_line = ""
linux_fx_version = "DOCKER|nginxdemos/hello"
}
app_settings = {
"WEBSITES_ENABLE_APP_SERVICE_STORAGE" = "false"
}
}
resource "azurerm_sql_server" "sql_server" {
name = "mysqlserver-stackoverflow60187000"
resource_group_name = azurerm_resource_group.resourcegroup.name
location = azurerm_resource_group.resourcegroup.location
version = "12.0"
administrator_login = "4dm1n157r470r"
administrator_login_password = "4-v3ry-53cr37-p455w0rd"
}
resource "azurerm_sql_database" "sqldatabase" {
name = "sqldatabase-stackoverflow60187000"
resource_group_name = azurerm_sql_server.sql_server.resource_group_name
location = azurerm_sql_server.sql_server.location
server_name = azurerm_sql_server.sql_server.name
edition = "Standard"
requested_service_objective_name = "S1"
}
resource "azurerm_sql_firewall_rule" "sqldatabase_firewall_rule" {
name = "App Service Access (${count.index})"
resource_group_name = azurerm_sql_database.sqldatabase.resource_group_name
server_name = azurerm_sql_database.sqldatabase.name
start_ip_address = element(split(",", azurerm_app_service.app_service.possible_outbound_ip_addresses), count.index)
end_ip_address = element(split(",", azurerm_app_service.app_service.possible_outbound_ip_addresses), count.index)
count = length(split(",", azurerm_app_service.app_service.possible_outbound_ip_addresses))
}
To make this work without the -target workaround described in the error message requires reframing the problem in terms of values that Terraform can know only from the configuration, rather than values that are generated by the providers at apply time.
The trick then would be to figure out what values in your configuration the Azure API is using to decide how many IP addresses to return, and to rely on those instead. I don't know Azure well enough to give you a specific answer, but I see on Inbound/Outbound IP addresses that this seems to be an operational detail of Azure App Services rather than something you can control yourself, and so unfortunately this problem may not be solvable.
If there really is no way to predict from configuration how many addresses will be in possible_outbound_ip_addresses, the alternative is to split your configuration into two parts where one depends on the other. The first would configure your App Service and anything else that makes sense to manage along with it, and then the second might use the azurerm_app_service data source to retrieve the data about the assumed-already-existing app service and make firewall rules based on it.
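A rough sketch of that second configuration might look like the following, assuming the app service already exists (created by the first configuration) and that your azurerm provider version exposes possible_outbound_ip_addresses on the data source:
# Read the already-existing App Service created by the first configuration
data "azurerm_app_service" "app_service" {
  name                = "app-stackoverflow60187000"
  resource_group_name = "rg-stackoverflow60187000"
}

resource "azurerm_sql_firewall_rule" "sqldatabase_firewall_rule" {
  # The data source is read at plan time, so this count is known up front
  count               = length(split(",", data.azurerm_app_service.app_service.possible_outbound_ip_addresses))
  name                = "App Service Access (${count.index})"
  resource_group_name = "rg-stackoverflow60187000"
  server_name         = "mysqlserver-stackoverflow60187000"
  start_ip_address    = element(split(",", data.azurerm_app_service.app_service.possible_outbound_ip_addresses), count.index)
  end_ip_address      = element(split(",", data.azurerm_app_service.app_service.possible_outbound_ip_addresses), count.index)
}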
Either way you'll need to run Terraform twice to make the necessary data available. An advantage of using -target is that you only need to do a funny workflow once during initial bootstrapping, and so you could potentially do the initial create outside of CI to get the objects initially created and then use CI for ongoing changes. As long as the app service object is never replaced, subsequent Terraform plans will already know how many IP addresses are set and so should be able to complete as normal.

terraform aws_instance subnet_id - Error launching source instance: Unsupported: The requested configuration is currently not supported

I am trying to build a VPC and a subnet within that VPC, and then create an AWS instance within that subnet. Sounds simple, but the subnet_id parameter seems to break terraform apply (plan works just fine). Am I missing something?
Extract from main.tf
resource "aws_vpc" "poc-vpc" {
cidr_block = "10.0.0.0/16"
instance_tenancy = "dedicated"
enable_dns_hostnames = "true"
}
resource "aws_subnet" "poc-subnet" {
vpc_id = "${aws_vpc.poc-vpc.id}"
cidr_block = "10.0.1.0/24"
map_public_ip_on_launch = "true"
availability_zone = "${var.availability_zone}"
}
resource "aws_instance" "POC-Instance" {
ami = "${lookup(var.amis, var.region)}"
instance_type = "${var.instance_type}"
availability_zone = "${var.availability_zone}"
associate_public_ip_address = true
key_name = "Pipeline-POC-Key-Pair"
vpc_security_group_ids = ["${aws_security_group.poc-sec-group.id}"]
subnet_id = "${aws_subnet.poc-subnet.id}"
}
If I remove the subnet_id the 'apply' works, but the instance is created in my default VPC. This is not the aim.
Any help would be appreciated. I am a newbie to terraform so please be gentle.
I worked this out and wanted to post it up to hopefully save others some time.
The issue is a conflict between subnet_id in the aws_instance resource and instance_tenancy in the aws_vpc resource. Remove instance_tenancy (or set it to "default") and all is fixed.
The error message is meaningless. I've asked whether this can be improved.
It's also possible that there is a conflict elsewhere in your configuration. I encountered the same Unsupported error for a different reason.
I used AMI ami-0ba5dfee72d5bb9a1 that I found from https://cloud-images.ubuntu.com/locator/ec2/
I just chose something that is in the same region as my VPC.
Apparently that AMI only supports a* instance types and doesn't support t* or m* instance types.
So I think you should double-check that:
Your AMI is compatible with your instance type.
Your AMI is in the same region as your VPC or subnet.
There is no other conflicting configuration.
The problem is that the Terraform example you likely copied is WRONG.
In most cases the problem is the VPC's instance_tenancy property.
Change the VPC's instance_tenancy = "dedicated" to instance_tenancy = "default".
It should then work for any EC2 instance type.
The reason is that dedicated instances are supported only for m5.large or bigger instances, so the VPC and EC2 instance types are in conflict if you are actually creating smaller instances like t3.small or t2.micro. You can look at dedicated instances here:
https://aws.amazon.com/ec2/pricing/dedicated-instances/
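In other words, the fix in the original VPC block is just (a sketch of the corrected block from the question):
resource "aws_vpc" "poc-vpc" {
  cidr_block           = "10.0.0.0/16"
  instance_tenancy     = "default" # was "dedicated", which conflicts with small instance types like t2/t3
  enable_dns_hostnames = "true"
}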
If you want to create your own VPC network and not use the default one, then you also need to create a route table and an internet gateway so you can access the created EC2 instance. You will need to add the following config to create a full VPC network with EC2 instances reachable via the public IP you assigned:
# Internet GW
resource "aws_internet_gateway" "main-gw" {
  vpc_id = "${aws_vpc.poc-vpc.id}"
  tags = {
    Name = "poc-vpc"
  }
}

# route tables
resource "aws_route_table" "main-public" {
  vpc_id = "${aws_vpc.poc-vpc.id}"
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.main-gw.id}"
  }
  tags = {
    Name = "main route"
  }
}

# route associations public
resource "aws_route_table_association" "main-public-1-a" {
  subnet_id      = "${aws_subnet.poc-subnet.id}"
  route_table_id = "${aws_route_table.main-public.id}"
}
