AWS EC2 instance name is hyphenated - terraform

I am using the following Terraform to create a Windows EC2 instance. The instance launches successfully, but in the AWS console the EC2 instance's name shows as just a hyphen (-).
For brevity I have removed some of the TF code:
resource "aws_launch_template" "server_launch_template" {
name = "my-launch-template"
image_id = "my-windows-ami-id"
instance_type = "t3.medium"
key_name = "my-keypair"
vpc_security_group_ids = [var.security_group_id]
iam_instance_profile {
arn = aws_iam_instance_profile.my_instance.arn
}
tag_specifications {
resource_type = "instance"
tags = module.tags.mytags
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "server_autoscaling_group" {
name = "my autoscaling group"
max_size = 1
min_size = 1
desired_capacity = 1
vpc_zone_identifier = [var.subnet_id]
wait_for_capacity_timeout = var.wait_for_capacity
health_check_type = "EC2"
dynamic "tag" {
#some code here
}
launch_template {
id = aws_launch_template.server_launch_template.id
version = "$Latest"
}
lifecycle {
create_before_destroy = true
}
}
How and where do I specify instance name in the launch template?

You can't define dynamic names for instances launched by an autoscaling group.
You can, however, configure a Lambda function to run whenever the autoscaling group launches new instances, and name the instances from the Lambda.
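A minimal sketch of the event wiring for that approach (assuming a hypothetical Lambda named name_instances that calls ec2:CreateTags; the function itself and its aws_lambda_permission are omitted):

# Hedged sketch: fire a Lambda on every successful ASG launch so it can set the Name tag.
resource "aws_cloudwatch_event_rule" "asg_launch" {
  name = "asg-instance-launch"
  event_pattern = jsonencode({
    "source"      = ["aws.autoscaling"]
    "detail-type" = ["EC2 Instance Launch Successful"]
    "detail"      = { "AutoScalingGroupName" = [aws_autoscaling_group.server_autoscaling_group.name] }
  })
}

resource "aws_cloudwatch_event_target" "asg_launch_lambda" {
  rule = aws_cloudwatch_event_rule.asg_launch.name
  arn  = aws_lambda_function.name_instances.arn # hypothetical Lambda that tags detail.EC2InstanceId
}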

This works. As @MarkoE suggested, I added a Name tag in tag_specifications:
resource "aws_launch_template" "server_launch_template" {
name = "my-launch-template"
image_id = "my-windows-ami-id"
instance_type = "t3.medium"
key_name = "my-keypair"
vpc_security_group_ids = [var.security_group_id]
iam_instance_profile {
arn = aws_iam_instance_profile.my_instance.arn
}
tag_specifications {
resource_type = "instance"
tags = merge(module.tags.mytags, { Name = "my-runner-instance" })
}
lifecycle {
create_before_destroy = true
}
}
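An alternative sketch (untested here): propagate the Name tag from the autoscaling group instead of the launch template, by adding a tag block inside aws_autoscaling_group:

tag {
  key                 = "Name"
  value               = "my-runner-instance"
  propagate_at_launch = true
}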

Related

Terraform is asking for VPC ID even though it's implied in the subnet

I have the following simple EC2-creating Terraform script:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }
  required_version = ">= 0.14.9"
}

provider "aws" {
  profile = "default"
  region  = "us-east-1" # Virginia
}

resource "aws_network_interface" "network" {
  subnet_id       = "subnet-0*******"
  security_groups = ["sg-******"]

  attachment {
    instance     = aws_instance.general_instance.id
    device_index = 0
  }
}

resource "aws_instance" "general_instance" {
  ami           = "ami-00874d747dde814fa" # Ubuntu server
  instance_type = "m5.2xlarge"
  key_name      = "my-key"

  root_block_device {
    delete_on_termination = true
    volume_size           = 500
    tags                  = { Name = "Root Volume" }
  }

  # user_data = file("startup.sh") # file directive can install stuff

  tags = {
    Name = "General"
  }
}
I get the following:
Error: Error launching source instance: VPCIdNotSpecified: No default VPC for this user. GroupName is only supported for EC2-Classic and default VPC.
I find this odd because the classic flow is to make a VPC, make a subnet, and then make a network interface. However, I have a VPC I want to use, and it is associated with the subnet I'm requesting. So I'm wondering why Terraform is asking for a VPC ID when it is implied by the subnet.
Thanks in advance
I figured it out already.
resource "aws_instance" "general_instance" {
ami = "ami-00874d747dde814fa" # unbutu server
instance_type = "m5.2xlarge"
key_name = "EC2-foundry"
network_interface {
network_interface_id = aws_network_interface.network.id
device_index = 0
}
root_block_device {
delete_on_termination = true
volume_size = 500
tags = { Name = "Foundry Root Volume" }
}
# user_data = file("startup.sh") # file directive can install stuff
tags = {
Name = "Foundry General"
}
}
The network interface must be attached via the network_interface block inside the aws_instance resource. Once the instance references the ENI, it inherits the subnet (and therefore the VPC) from it, so no default VPC is needed; the separate attachment block on aws_network_interface should then be removed, since the two attachment methods conflict.
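For reference, a simpler variant that needs no separate ENI resource at all is to place the instance in the subnet directly; a minimal sketch reusing the redacted IDs from the question (vpc_security_group_ids takes group IDs, which also avoids the GroupName path mentioned in the error):

resource "aws_instance" "general_instance" {
  ami                    = "ami-00874d747dde814fa" # Ubuntu server
  instance_type          = "m5.2xlarge"
  key_name               = "EC2-foundry"
  subnet_id              = "subnet-0*******" # the subnet implies the VPC
  vpc_security_group_ids = ["sg-******"]
}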

Launch instance on service deployment

I am using Terraform to deploy an ECS EC2 cluster on AWS.
My pipeline creates a new task definition from docker-compose, then updates the service to use this task definition.
Desired count is 1, deployment_minimum_healthy_percent is 100 and deployment_maximum_percent is 200.
The expected behavior is that the autoscaling group launches a new EC2 instance to deploy the new task, then kills the old one.
What happens instead is that I get an error message: "service X was unable to place a task because no container instance met all of its requirements. The closest matching container-instance has insufficient memory available."
No instance is created and the deployment is rolled back. How can I make sure that an extra instance is created when deploying my service?
Here is my Terraform code:
resource "aws_ecs_cluster" "main_cluster" {
name = var.application_name
tags = {
Environment = var.environment_name
}
}
data "aws_ecs_task_definition" "main_td" {
task_definition = var.application_name
}
resource "aws_ecs_service" "main_service" {
name = var.application_name
cluster = aws_ecs_cluster.main_cluster.id
launch_type = "EC2"
scheduling_strategy = "REPLICA"
task_definition = "${data.aws_ecs_task_definition.main_td.family}:${data.aws_ecs_task_definition.main_td.revision}"
desired_count = var.target_capacity
deployment_minimum_healthy_percent = 100
deployment_maximum_percent = 200
health_check_grace_period_seconds = 10
wait_for_steady_state = false
force_new_deployment = true
load_balancer {
target_group_arn = aws_lb_target_group.main_tg.arn
container_name = var.container_name
container_port = var.container_port
}
ordered_placement_strategy {
type = "binpack"
field = "memory"
}
deployment_circuit_breaker {
enable = true
rollback = true
}
lifecycle {
ignore_changes = [desired_count]
}
tags = {
Environment = var.environment_name
}
}
Auto-scaling group:
data "template_file" "user_data" {
template = "${file("${path.module}/user_data.sh")}"
vars = {
ecs_cluster = "${aws_ecs_cluster.main_cluster.name}"
}
}
resource "aws_launch_configuration" "main_lc" {
name = var.application_name
image_id = var.ami_id
instance_type = var.instance_type
associate_public_ip_address = true
iam_instance_profile = "arn:aws:iam::812844034365:instance-profile/ecsInstanceRole"
security_groups = ["${aws_security_group.main_sg.id}"]
root_block_device {
volume_size = "30"
volume_type = "gp3"
}
user_data = "${data.template_file.user_data.rendered}"
}
resource "aws_autoscaling_policy" "main_asg_policy" {
name = "${var.application_name}-cpu-scale-policy"
policy_type = "TargetTrackingScaling"
autoscaling_group_name = aws_autoscaling_group.main_asg.name
estimated_instance_warmup = 10
target_tracking_configuration {
predefined_metric_specification {
predefined_metric_type = "ASGAverageCPUUtilization"
}
target_value = 40.0
}
}
resource "aws_autoscaling_group" "main_asg" {
name = var.application_name
launch_configuration = aws_launch_configuration.main_lc.name
min_size = var.target_capacity
max_size = var.target_capacity * 2
health_check_type = "EC2"
health_check_grace_period = 10
default_cooldown = 30
desired_capacity = var.target_capacity
vpc_zone_identifier = data.aws_subnet_ids.subnets.ids
wait_for_capacity_timeout = "3m"
instance_refresh {
strategy = "Rolling"
preferences {
min_healthy_percentage = 100
}
}
}
The module is published here: https://registry.terraform.io/modules/hboisgibault/ecs-cluster/aws/latest
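One approach that may help (a hedged sketch, not part of the published module): an ECS capacity provider with managed scaling lets ECS itself grow the ASG when a task cannot be placed, instead of waiting for the CPU target-tracking policy above to react.

resource "aws_ecs_capacity_provider" "main_cp" {
  name = "${var.application_name}-cp"

  auto_scaling_group_provider {
    auto_scaling_group_arn = aws_autoscaling_group.main_asg.arn

    managed_scaling {
      status          = "ENABLED"
      target_capacity = 100 # keep the ASG sized to exactly what the tasks need
    }
  }
}

resource "aws_ecs_cluster_capacity_providers" "main_ccp" {
  cluster_name       = aws_ecs_cluster.main_cluster.name
  capacity_providers = [aws_ecs_capacity_provider.main_cp.name]

  default_capacity_provider_strategy {
    capacity_provider = aws_ecs_capacity_provider.main_cp.name
    weight            = 100
  }
}

With this, the service would drop launch_type = "EC2" in favor of the cluster's default capacity provider strategy, and the CPU-based scaling policy would typically be removed so the two scalers don't fight.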

Is it possible to reference the resource name inside the resource itself

I'd like to use a resource name inside the resource itself to avoid string duplication and copy/paste errors.
resource "aws_instance" "bastion-euw3-infra-01" {
ami = "ami-078db6d55a16afc82"
instance_type = "t2.micro"
key_name = "sylvain"
user_data = templatefile("./scripts/cloudinit.yaml", {
hostname = "bastion-euw3-infra-01"
tailscale_authkey = var.tailscale_authkey
})
network_interface {
device_index = 0
network_interface_id = aws_network_interface.bastion-euw3-infra-01.id
}
tags = {
Name = "bastion-euw3-infra-01"
Environment = "infra"
}
lifecycle {
ignore_changes = [user_data]
}
}
Basically I'd like to replace "bastion-euw3-infra-01" inside the resource with some kind of var, e.g.:
resource "aws_instance" "bastion-euw3-infra-01" {
...
user_data = templatefile("./scripts/cloudinit.yaml", {
hostname = ___name___
tailscale_authkey = var.tailscale_authkey
})
...
tags = {
Name = ___name___
Environment = "infra"
}
...
}
Does Terraform provide a way to do this?
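Terraform has no self-reference for a resource's own name (self is only available inside provisioner and connection blocks), but a local value removes the duplication. A minimal sketch:

locals {
  bastion_name = "bastion-euw3-infra-01"
}

resource "aws_instance" "bastion-euw3-infra-01" {
  ami           = "ami-078db6d55a16afc82"
  instance_type = "t2.micro"
  key_name      = "sylvain"

  user_data = templatefile("./scripts/cloudinit.yaml", {
    hostname          = local.bastion_name # single source for the name
    tailscale_authkey = var.tailscale_authkey
  })

  tags = {
    Name        = local.bastion_name
    Environment = "infra"
  }
}

The resource address itself (aws_instance.bastion-euw3-infra-01) still has to be a literal; only the repeated string values can come from the local.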

terraform fails to create eks_node_group

resource "aws_eks_node_group" "n-cluster-group" {
cluster_name = aws_eks_cluster.n-cluster.name
node_group_name = "n-cluster-group"
node_role_arn = aws_iam_role.eks-nodegroup.arn
subnet_ids = [aws_subnet.public.id, aws_subnet.public2.id]
scaling_config {
desired_size = 3
max_size = 6
min_size = 1
}
launch_template {
id = aws_launch_template.n-cluster.id
version = aws_launch_template.n-cluster.latest_version
}
depends_on = [
aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy,
]
resource "aws_launch_template" "n-cluster" {
image_id = "ami-0d45236a5972906dd"
instance_type = "t3.medium"
name_prefix = "cluster-node-"
block_device_mappings {
device_name = "/dev/sda1"
ebs {
volume_size = 20
}
}
Although instances appear to be created successfully, the node group status is CREATE_FAILED, and Terraform reports this as well.
I am wondering what CREATE_FAILED means here.
What am I doing wrong? When using a launch template and an EKS-optimized AMI, should I still specify user_data, and if so, what is the correct way to do this with Terraform?
I managed to solve the issue with the following configurations:
resource "aws_launch_template" "eks_launch_template" {
name = "eks_launch_template"
block_device_mappings {
device_name = "/dev/xvda"
ebs {
volume_size = 20
volume_type = "gp2"
}
}
image_id = <custom_ami_id>
instance_type = "t3.medium"
user_data = filebase64("${path.module}/eks-user-data.sh")
tag_specifications {
resource_type = "instance"
tags = {
Name = "EKS-MANAGED-NODE"
}
}
}
resource "aws_eks_node_group" "eks-cluster-ng" {
cluster_name = aws_eks_cluster.eks-cluster.name
node_group_name = "eks-cluster-ng-"
node_role_arn = aws_iam_role.eks-cluster-ng.arn
subnet_ids = [var.network_subnets.pvt[0].id, var.network_subnets.pvt[1].id, var.network_subnets.pvt[2].id]
scaling_config {
desired_size = var.asg_desired_size
max_size = var.asg_max_size
min_size = var.asg_min_size
}
launch_template {
name = aws_launch_template.eks_launch_template.name
version = aws_launch_template.eks_launch_template.latest_version
}
depends_on = [
aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy,
]
}
The key lies with user_data = filebase64("${path.module}/eks-user-data.sh")
The eks-user-data.sh file should be something like this:
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
/etc/eks/bootstrap.sh <cluster-name>

--==MYBOUNDARY==--
I have tested the above and it works as intended. Thanks all for leading me to this solution.
Adding this to your launch template definition resolves it:
user_data = base64encode(<<-EOF
#!/bin/bash -xe
/etc/eks/bootstrap.sh CLUSTER_NAME_HERE
EOF
)
I guess even an EKS-optimised AMI counts as a custom AMI when used via a launch template.

terraform - Get private IP of AWS launch configuration

I have this:
data "template_file" "init" {
template = "${file("script.tpl")}"
}
resource "aws_launch_configuration" "ec21" {
image_id = "${var.image_id}"
instance_type = "${var.instance_type}"
key_name = "${var.key_name}"
security_groups = ["${aws_security_group.instance.id}"]
user_data = "${data.template_file.init.rendered}"
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "asg" {
launch_configuration = "${aws_launch_configuration.pgetcd.id}"
min_size = "${var.min_instances}"
max_size = "${var.max_instances}"
vpc_zone_identifier = ["${aws_subnet.sb0.id}", "${aws_subnet.sb1.id}", "${aws_subnet.sb2.id}"]
lifecycle {
create_before_destroy = true
}
}
How can I get the private IP of aws_launch_configuration.ec21 and send it via a variable to the template_file of another launch configuration?
I tried:
aws_launch_configuration.ec21.ip.private_ip, but this doesn't work.
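A launch configuration is only a template, so it has no IP of its own; private IPs exist only on the instances the ASG launches from it. One hedged sketch is to read them back with the aws_instances data source, filtering on the tag the ASG automatically puts on its instances (note this only works once the instances are already running, e.g. from a separate configuration or a later apply step):

data "aws_instances" "ec21" {
  instance_tags = {
    "aws:autoscaling:groupName" = aws_autoscaling_group.asg.name
  }
  instance_state_names = ["running"]
}

output "ec21_private_ips" {
  value = data.aws_instances.ec21.private_ips # list of private IPs, one per instance
}

The resulting list (or one element of it) can then be passed into the template for the other launch configuration's user_data.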
