Will "aws_ami" with most_recent=true impact future updates? - terraform

Given the config below, what happens if I run apply against the infrastructure after Amazon rolls out a new version of the AMI? Will the test instance be destroyed and recreated?
So the scenario is:
terraform init
terraform apply
wait N months
terraform plan (or apply)
Am I going to see a "forced" recreation of the EC2 instance that was created N months ago using the older version of the AMI, which was "recent" back then?
data "aws_ami" "amazon-linux-2" {
  most_recent = true

  filter {
    name   = "owner-alias"
    values = ["amazon"]
  }

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm*"]
  }
}
resource "aws_instance" "test" {
  depends_on                  = ["aws_internet_gateway.test"]
  ami                         = "${data.aws_ami.amazon-linux-2.id}"
  associate_public_ip_address = true
  iam_instance_profile        = "${aws_iam_instance_profile.test.id}"
  instance_type               = "t2.micro"
  key_name                    = "bflad-20180605"
  vpc_security_group_ids      = ["${aws_security_group.test.id}"]
  subnet_id                   = "${aws_subnet.test.id}"
}

@ydaetskcoR and @sogyals429 have the right answer: yes, once the data source returns a newer AMI id, the next plan will propose replacing the instance. To be more concrete, you can tell Terraform to ignore that change:
resource "aws_instance" "test" {
  # ... (all the stuff at the top)

  lifecycle {
    ignore_changes = [
      ami,
    ]
  }
}
Note: the docs have moved to https://www.terraform.io/docs/language/meta-arguments/lifecycle.html#ignore_changes

Yes, as per what @ydaetskcoR said, you can use the ignore_changes lifecycle argument, and then it will not recreate the instances. https://www.terraform.io/docs/configuration/resources.html#ignore_changes

Related

Terraform AWS EC2 with no change in the code, trying to destroy and create instance

I used the Terraform code below to create an AWS EC2 instance:
resource "aws_instance" "example" {
  ami             = var.ami-id
  instance_type   = var.ec2_type
  key_name        = var.keyname
  subnet_id       = "subnet-05a63e5c1a6bcb7ac"
  security_groups = ["sg-082d39ed218fc0f2e"]

  # root disk
  root_block_device {
    volume_size           = "10"
    volume_type           = "gp3"
    encrypted             = true
    delete_on_termination = true
  }

  tags = {
    Name        = var.instance_name
    Environment = "dev"
  }

  metadata_options {
    http_endpoint               = "enabled"
    http_put_response_hop_limit = 1
    http_tokens                 = "required"
  }
}
After 5 minutes, with no change in the code, when I run terraform plan, it shows that something changed outside of Terraform and tries to destroy and re-create the EC2 instance. Why is this happening, and how can I prevent it?
aws_instance.example: Refreshing state... [id=i-0aa279957d1287100]

Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the last "terraform apply":

  # aws_instance.example has been changed
  ~ resource "aws_instance" "example" {
        id              = "i-0aa279957d1287100"
      ~ security_groups = [
          - "sg-082d39ed218fc0f2e",
        ]
        tags            = {
            "Environment" = "dev"
            "Name"        = "ec2linux"
        }
        # (26 unchanged attributes hidden)

      ~ root_block_device {
          + tags = {}
            # (9 unchanged attributes hidden)
        }

        # (4 unchanged blocks hidden)
    }

Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following plan may include actions to undo or respond to these changes.
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
You must use vpc_security_group_ids instead of security_groups. The security_groups argument expects security group names (EC2-Classic); passing a VPC security group id through it makes Terraform see a permanent difference between config and state on every refresh, which forces the destroy-and-recreate plan above. With vpc_security_group_ids, the id matches what AWS returns and the drift disappears:
resource "aws_instance" "example" {
  ami                    = var.ami-id
  instance_type          = var.ec2_type
  key_name               = var.keyname
  subnet_id              = "subnet-05a63e5c1a6bcb7ac"
  vpc_security_group_ids = ["sg-082d39ed218fc0f2e"]

  # root disk
  root_block_device {
    volume_size           = "10"
    volume_type           = "gp3"
    encrypted             = true
    delete_on_termination = true
  }

  tags = {
    Name        = var.instance_name
    Environment = "dev"
  }

  metadata_options {
    http_endpoint               = "enabled"
    http_put_response_hop_limit = 1
    http_tokens                 = "required"
  }
}

Terraform Change Tags Only If There Is Any Other Change

I'd like to use something like the code below to create/manage common tags for all resources in my projects. For common_var_tags, I'd like the tags to be applied only when there are other changes, so resources end up tagged with who last modified them and when.
Is there any way to do this?
Thanks in advance!
locals {
  common_var_tags = {
    ChangedBy = data.aws_caller_identity.current.arn
    ChangedAt = timestamp()
  }
  common_fix_tags = {
    Project   = "Project"
    Owner     = "Tiger Peng"
    Team      = "DevOps"
    CreatedAt = "2021-06-08"
  }
}
For example, right now I have to comment out local.common_var_tags, because each time I run terraform plan or terraform apply without changing any attribute, the nginx resource is marked as changed due to ChangedAt = timestamp(). I'd like to find a way to apply this tag change only when other attributes have changed.
resource "aws_instance" "nginx" {
  count                  = 1
  ami                    = var.nginx-ami
  instance_type          = var.nginx-instance-type
  subnet_id              = var.frontend-subnets[count.index]
  key_name               = aws_key_pair.key-pair.key_name
  vpc_security_group_ids = [aws_security_group.nginx-sg.id]

  root_block_device {
    delete_on_termination = false
    encrypted             = true
    volume_size           = var.nginx-root-volume-size
    volume_type           = var.default-ebs-type
    tags = merge(
      local.common_fix_tags,
      #local.common_var_tags,
      map(
        "Name", "${var.project}-${var.env}-nginx-${var.zones[count.index]}"
      )
    )
  }

  tags = merge(
    local.common_fix_tags,
    #local.common_var_tags,
    map(
      "Name", "${var.project}-${var.env}-nginx-${var.zones[count.index]}",
      "Role", "Nginx"
    )
  )
}
I had the same problem and found a workaround. It is not a clean solution, but it works, in a way.
First of all, create a lifecycle block on your resource and ignore changes to your "ChangedAt" tag:
resource "aws_instance" "nginx" {
  # ...
  lifecycle {
    ignore_changes = [tags["ChangedAt"]]
  }
}
Then create a local value. Its value must be an md5 hash of all the resource attributes whose changes should trigger an update of the "ChangedAt" tag:
locals {
  hash = md5(join(",", [var.nginx-ami, var.nginx-instance-type, etc]))
}
Finally, create a null_resource that triggers on changes to that local value, with a local-exec provisioner that updates the "ChangedAt" tag:
resource "null_resource" "nginx_tags" {
  triggers = {
    instance = local.hash
  }

  provisioner "local-exec" {
    command = "aws resourcegroupstaggingapi tag-resources --resource-arn-list ${aws_instance.nginx.arn} --tags ChangedAt=${timestamp()}"
  }
}
With that configuration, any change to the variables included in the md5 hash will update your tag.
That defeats the purpose of immutable infrastructure somewhat; you shouldn't have any changes between two successive terraform apply runs. BUT, because this is a quite common pattern when you work with K8s clusters, Terraform AWS provider 2.60 allows you to globally ignore changes on tags:
provider "aws" {
  # ... potentially other configuration ...
  ignore_tags {
    # specific tag
    keys = ["ChangedAt"]
    # or by prefix, to ignore ChangedBy too
    key_prefixes = ["Changed"]
  }
}

Combine two simple objects to a list

I was wondering if anyone could suggest a trick for this. Imagine I have Terraform code that retrieves the latest version of multiple AMIs.
data "aws_ami" "amzn" {
  most_recent = true
  owners      = ["amazon"]
  # filters ...
}

data "aws_ami" "centos" {
  most_recent = true
  owners      = ["12345678"]
  # filters ...
}
What I would like to have is a list with both AWS AMIs, so I can choose between the two when creating an EC2 instance.
resource "aws_instance" "EC2" {
  count         = 100
  ami           = # choose from the list
  instance_type = "t2.micro"
  # ...
}
In this example I have provided only two AMIs, but in reality I will have about 20.
How would a list help you? You'd still need to choose from it somehow. It's better to use a map, so you can pick a specific AMI based on a key.
After you load the data sources, you can define a map using locals:
locals {
  amis = {
    amzn   = data.aws_ami.amzn
    centos = data.aws_ami.centos
  }
}
Then, to access it, you simply address it as follows (note that the key must be a quoted string):
resource "aws_instance" "EC2" {
  count         = 100
  ami           = local.amis["amzn"].id
  instance_type = "t2.micro"
  # ...
}

Rolling update of ASG using launch template

When I update the AMI associated with an aws_launch_template, Terraform creates a new version of the launch template as expected and also updates the aws_autoscaling_group to point to the new version of the launch template.
However, no "rolling update" is performed to switch out the existing instances with new instances based on the new AMI, I have to manually terminate the existing instances and then the ASG brings up new instances using the new AMI.
What changes do I have to make to my config to get Terraform to perform a rolling update?
Existing code is as follows:
resource "aws_launch_template" "this" {
  name_prefix            = "my-launch-template-"
  image_id               = var.ami_id
  instance_type          = "t3.small"
  key_name               = "testing"
  vpc_security_group_ids = [aws_security_group.this.id]

  lifecycle {
    create_before_destroy = true
  }
}
resource "aws_autoscaling_group" "this" {
  name_prefix               = "my-asg-"
  vpc_zone_identifier       = var.subnet_ids
  target_group_arns         = var.target_group_arns
  health_check_type         = "ELB"
  health_check_grace_period = 300
  default_cooldown          = 10
  min_size                  = 4
  max_size                  = 4
  desired_capacity          = 4

  launch_template {
    id      = aws_launch_template.this.id
    version = aws_launch_template.this.latest_version
  }

  lifecycle {
    create_before_destroy = true
  }
}
I recently worked on that exact same scenario.
We used the random_pet resource to generate a human-readable random name that changes whenever the AMI changes.
resource "random_pet" "ami_random_name" {
  keepers = {
    # Generate a new pet name every time we change the AMI
    ami_id = var.ami_id
  }
}
You can then use the random_pet id in an argument that forces recreation of the Auto Scaling group, for example name_prefix:
resource "aws_autoscaling_group" "this" {
  name_prefix               = "my-asg-${random_pet.ami_random_name.id}"
  vpc_zone_identifier       = var.subnet_ids
  target_group_arns         = var.target_group_arns
  health_check_type         = "ELB"
  health_check_grace_period = 300
  default_cooldown          = 10
  min_size                  = 4
  max_size                  = 4
  desired_capacity          = 4

  launch_template {
    id      = aws_launch_template.this.id
    version = aws_launch_template.this.latest_version
  }

  lifecycle {
    create_before_destroy = true
  }
}
ASG instance refresh is also an option: it replaces all old instances with new instances based on the newest version of the launch template (make sure to set the launch template version to $Latest in the ASG settings). Other benefits:
You can set a time to warm up instances before they receive traffic (useful if your bootstrap installation takes a while).
You can specify what percentage of instances to replace in parallel within the ASG, to speed things up.
Below is the Terraform code block. More about the feature here
instance_refresh {
  strategy = "Rolling"
  preferences {
    min_healthy_percentage = 50
  }
  triggers = ["tag"]
}

Terraform (provider AWS) - Auto Scaling group doesn't take effect on a launch template change

I'm unable to make launch templates work with ASGs. It works with launch configurations using a small hack, i.e. interpolating the launch configuration name into the ASG resource, but that doesn't work with launch templates.
The ASG uses the latest version to launch new instances, but doesn't change anything with respect to already-running instances, despite a change in the launch template.
I understand that this is sort of expected, but is there any workaround to make launch templates work with ASGs, or do we need to stick with launch configurations?
TF code snippet -
resource "aws_launch_template" "lc_ec2" {
  image_id               = "${var.ami_id}"
  instance_type          = "${var.app_instance_type}"
  key_name               = "${var.orgname}_${var.environ}_kp"
  vpc_security_group_ids = ["${aws_security_group.sg_ec2.id}"]
  user_data              = "${base64encode(var.userdata)}"

  block_device_mappings {
    device_name = "/dev/xvdv"
    ebs {
      volume_size = 15
    }
  }

  iam_instance_profile {
    name = "${var.orgname}_${var.environ}_profile"
  }

  lifecycle {
    create_before_destroy = true
  }

  tag_specifications {
    resource_type = "instance"
    tags          = "${merge(map("Name", format("%s-%s-lc-ec2", var.orgname, var.environ)), var.tags)}"
  }

  tag_specifications {
    resource_type = "volume"
    tags          = "${merge(map("Name", format("%s-%s-lc-ec2-volume", var.orgname, var.environ)), var.tags)}"
  }

  tags = "${merge(map("Name", format("%s-%s-lc-ec2", var.orgname, var.environ)), var.tags)}"
}
resource "aws_autoscaling_group" "asg_ec2" {
  name                      = "${var.orgname}-${var.environ}-asg-ec2-${aws_launch_template.lc_ec2.name}"
  vpc_zone_identifier       = ["${data.aws_subnet.private.*.id}"]
  min_size                  = 1
  desired_capacity          = 1
  max_size                  = 1
  target_group_arns         = ["${aws_lb_target_group.alb_tg.arn}"]
  default_cooldown          = 100
  health_check_grace_period = 100
  termination_policies      = ["ClosestToNextInstanceHour", "NewestInstance"]
  health_check_type         = "ELB"

  launch_template = {
    id      = "${aws_launch_template.lc_ec2.id}"
    version = "$$Latest"
  }

  lifecycle {
    create_before_destroy = true
  }

  tags = [
    {
      key                 = "Name"
      value               = "${var.orgname}"
      propagate_at_launch = true
    },
    {
      key                 = "Environ"
      value               = "${var.environ}"
      propagate_at_launch = true
    }
  ]
}
There is one hack to achieve this.
AWS CloudFormation supports rolling updates of an Auto Scaling group, and since Terraform supports a CloudFormation stack resource, you can define your ASG as a CloudFormation stack with an UpdatePolicy. However, CloudFormation does not support the $Latest alias for the launch template version, so you will have to parameterize the version and pass in the latest_version attribute of the launch template resource created in your Terraform configuration.
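A minimal sketch of that approach (the stack name, sizes, subnet id, and pause time are illustrative assumptions, not from the original answer):

```hcl
resource "aws_cloudformation_stack" "asg" {
  name = "my-asg-stack"

  # Pass the launch template version in from Terraform, since
  # CloudFormation cannot use the $Latest alias here.
  parameters = {
    LaunchTemplateId      = aws_launch_template.lc_ec2.id
    LaunchTemplateVersion = aws_launch_template.lc_ec2.latest_version
  }

  template_body = <<-EOT
    Parameters:
      LaunchTemplateId:
        Type: String
      LaunchTemplateVersion:
        Type: String
    Resources:
      ASG:
        Type: AWS::AutoScaling::AutoScalingGroup
        Properties:
          MinSize: "1"
          MaxSize: "2"
          DesiredCapacity: "1"
          VPCZoneIdentifier: ["subnet-12345678"]
          LaunchTemplate:
            LaunchTemplateId: !Ref LaunchTemplateId
            Version: !Ref LaunchTemplateVersion
        UpdatePolicy:
          AutoScalingRollingUpdate:
            MinInstancesInService: "1"
            MaxBatchSize: "1"
            PauseTime: PT5M
    EOT
}
```

When the launch template changes, latest_version changes, the stack's parameters change, and CloudFormation performs the rolling update described by the UpdatePolicy.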
