I have this:
data "template_file" "init" {
template = "${file("script.tpl")}"
}
resource "aws_launch_configuration" "ec21" {
image_id = "${var.image_id}"
instance_type = "${var.instance_type}"
key_name = "${var.key_name}"
security_groups = ["${aws_security_group.instance.id}"]
user_data = "${data.template_file.init.rendered}"
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "asg" {
launch_configuration = "${aws_launch_configuration.pgetcd.id}"
min_size = "${var.min_instances}"
max_size = "${var.max_instances}"
vpc_zone_identifier = ["${aws_subnet.sb0.id}", "${aws_subnet.sb1.id}", "${aws_subnet.sb2.id}"]
lifecycle {
create_before_destroy = true
}
}
How can I get the private IP of aws_launch_configuration.ec21 and pass it via a variable to the template_file of another launch configuration?
I tried using aws_launch_configuration.ec21.ip.private_ip, but that doesn't work.
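A launch configuration is just a template, so it has no IP attributes of its own; the private IPs belong to the instances the autoscaling group launches from it. A minimal sketch of one way to look them up (assuming the instances already exist; the data source name is illustrative) is the aws_instances data source, filtered on the tag the ASG puts on its instances:

data "aws_instances" "ec21_instances" {
  # ASGs automatically tag their instances with the group name.
  instance_tags = {
    "aws:autoscaling:groupName" = aws_autoscaling_group.asg.name
  }

  instance_state_names = ["running"]
}

The addresses are then available as data.aws_instances.ec21_instances.private_ips and can be fed into the vars of the second template_file. Note the lookup happens at plan time, so the first group's instances must already be running.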
I am using the following Terraform to create a Windows EC2 instance. The instance is launched successfully, but in the AWS console I see a hyphenated name for the EC2 instance.
For brevity, I have removed some TF code.
resource "aws_launch_template" "server_launch_template" {
name = "my-launch-template"
image_id = "my-windows-ami-id"
instance_type = "t3.medium"
key_name = "my-keypair"
vpc_security_group_ids = [var.security_group_id]
iam_instance_profile {
arn = aws_iam_instance_profile.my_instance.arn
}
tag_specifications {
resource_type = "instance"
tags = module.tags.mytags
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "server_autoscaling_group" {
name = "my autoscaling group"
max_size = 1
min_size = 1
desired_capacity = 1
vpc_zone_identifier = [var.subnet_id]
wait_for_capacity_timeout = var.wait_for_capacity
health_check_type = "EC2"
dynamic "tag" {
#some code here
}
launch_template {
id = aws_launch_template.server_launch_template.id
version = "$Latest"
}
lifecycle {
create_before_destroy = true
}
}
How and where do I specify the instance name in the launch template?
You can't define dynamic, per-instance names for instances launched by an autoscaling group.
You can, however, configure a Lambda function to run whenever the autoscaling group launches new instances, and name the instances from the Lambda, as sketched below.
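A minimal sketch of that wiring, assuming a tagging Lambda already exists elsewhere as aws_lambda_function.tag_instance (a name invented here) and calls ec2:CreateTags on the instance ID it receives in the event:

resource "aws_cloudwatch_event_rule" "asg_launch" {
  name = "asg-instance-launch"

  # Fires on every successful instance launch in this ASG.
  event_pattern = jsonencode({
    source      = ["aws.autoscaling"]
    detail-type = ["EC2 Instance Launch Successful"]
    detail = {
      AutoScalingGroupName = [aws_autoscaling_group.server_autoscaling_group.name]
    }
  })
}

resource "aws_cloudwatch_event_target" "invoke_tagger" {
  rule = aws_cloudwatch_event_rule.asg_launch.name
  arn  = aws_lambda_function.tag_instance.arn # assumed Lambda, defined elsewhere
}

resource "aws_lambda_permission" "allow_eventbridge" {
  statement_id  = "AllowEventBridgeInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.tag_instance.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.asg_launch.arn
}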
This works. As @MarkoE suggested, I added a Name tag in tag_specifications:
resource "aws_launch_template" "server_launch_template" {
name = "my-launch-template"
image_id = "my-windows-ami-id"
instance_type = "t3.medium"
key_name = "my-keypair"
vpc_security_group_ids = [var.security_group_id]
iam_instance_profile {
arn = aws_iam_instance_profile.my_instance.arn
}
tag_specifications {
resource_type = "instance"
tags = merge(module.tags.mytags, { Name = "my-runner-instance" })
}
lifecycle {
create_before_destroy = true
}
}
I am using Terraform to deploy an ECS EC2 cluster on AWS.
My pipeline creates a new task definition from docker-compose, then updates the service to use this task definition.
Desired count is 1, deployment_minimum_healthy_percent is 100 and deployment_maximum_percent is 200.
Expected behavior is that the autoscaling group will launch a new EC2 instance to deploy the new task, then kill the old one.
What happens is that I get an error message: "service X was unable to place a task because no container instance met all of its requirements. The closest matching container-instance has insufficient memory available."
No instance is created and the deployment is rolled back. How can I make sure that an extra instance is created when deploying my service?
Here is my Terraform code:
resource "aws_ecs_cluster" "main_cluster" {
name = var.application_name
tags = {
Environment = var.environment_name
}
}
data "aws_ecs_task_definition" "main_td" {
task_definition = var.application_name
}
resource "aws_ecs_service" "main_service" {
name = var.application_name
cluster = aws_ecs_cluster.main_cluster.id
launch_type = "EC2"
scheduling_strategy = "REPLICA"
task_definition = "${data.aws_ecs_task_definition.main_td.family}:${data.aws_ecs_task_definition.main_td.revision}"
desired_count = var.target_capacity
deployment_minimum_healthy_percent = 100
deployment_maximum_percent = 200
health_check_grace_period_seconds = 10
wait_for_steady_state = false
force_new_deployment = true
load_balancer {
target_group_arn = aws_lb_target_group.main_tg.arn
container_name = var.container_name
container_port = var.container_port
}
ordered_placement_strategy {
type = "binpack"
field = "memory"
}
deployment_circuit_breaker {
enable = true
rollback = true
}
lifecycle {
ignore_changes = [desired_count]
}
tags = {
Environment = var.environment_name
}
}
Auto-scaling group:
data "template_file" "user_data" {
template = "${file("${path.module}/user_data.sh")}"
vars = {
ecs_cluster = "${aws_ecs_cluster.main_cluster.name}"
}
}
resource "aws_launch_configuration" "main_lc" {
name = var.application_name
image_id = var.ami_id
instance_type = var.instance_type
associate_public_ip_address = true
iam_instance_profile = "arn:aws:iam::812844034365:instance-profile/ecsInstanceRole"
security_groups = ["${aws_security_group.main_sg.id}"]
root_block_device {
volume_size = "30"
volume_type = "gp3"
}
user_data = "${data.template_file.user_data.rendered}"
}
resource "aws_autoscaling_policy" "main_asg_policy" {
name = "${var.application_name}-cpu-scale-policy"
policy_type = "TargetTrackingScaling"
autoscaling_group_name = aws_autoscaling_group.main_asg.name
estimated_instance_warmup = 10
target_tracking_configuration {
predefined_metric_specification {
predefined_metric_type = "ASGAverageCPUUtilization"
}
target_value = 40.0
}
}
resource "aws_autoscaling_group" "main_asg" {
name = var.application_name
launch_configuration = aws_launch_configuration.main_lc.name
min_size = var.target_capacity
max_size = var.target_capacity * 2
health_check_type = "EC2"
health_check_grace_period = 10
default_cooldown = 30
desired_capacity = var.target_capacity
vpc_zone_identifier = data.aws_subnet_ids.subnets.ids
wait_for_capacity_timeout = "3m"
instance_refresh {
strategy = "Rolling"
preferences {
min_healthy_percentage = 100
}
}
}
Module is published here https://registry.terraform.io/modules/hboisgibault/ecs-cluster/aws/latest
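One hedged direction, sketched rather than tested: with only a CPU target-tracking policy, nothing grows the ASG when ECS fails to place a task, so the deployment trips the circuit breaker before any scaling happens. An ECS capacity provider with managed scaling lets the cluster scale the ASG on placement pressure; the service would then need a capacity_provider_strategy instead of launch_type = "EC2". Resource names here are illustrative:

resource "aws_ecs_capacity_provider" "main_cp" {
  name = "${var.application_name}-cp"

  auto_scaling_group_provider {
    auto_scaling_group_arn = aws_autoscaling_group.main_asg.arn

    managed_scaling {
      status          = "ENABLED"
      target_capacity = 100 # keep the ASG sized to exactly what the tasks need
    }
  }
}

resource "aws_ecs_cluster_capacity_providers" "main" {
  cluster_name       = aws_ecs_cluster.main_cluster.name
  capacity_providers = [aws_ecs_capacity_provider.main_cp.name]

  default_capacity_provider_strategy {
    capacity_provider = aws_ecs_capacity_provider.main_cp.name
    weight            = 1
  }
}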
I'd like to use a resource name inside the resource itself to avoid string duplication and copy/paste errors.
resource "aws_instance" "bastion-euw3-infra-01" {
ami = "ami-078db6d55a16afc82"
instance_type = "t2.micro"
key_name = "sylvain"
user_data = templatefile("./scripts/cloudinit.yaml", {
hostname = "bastion-euw3-infra-01"
tailscale_authkey = var.tailscale_authkey
})
network_interface {
device_index = 0
network_interface_id = aws_network_interface.bastion-euw3-infra-01.id
}
tags = {
Name = "bastion-euw3-infra-01"
Environment = "infra"
}
lifecycle {
ignore_changes = [user_data]
}
}
Basically I'd like to replace "bastion-euw3-infra-01" inside the resource with some kind of var, e.g.:
resource "aws_instance" "bastion-euw3-infra-01" {
...
user_data = templatefile("./scripts/cloudinit.yaml", {
hostname = ___name___
tailscale_authkey = var.tailscale_authkey
})
...
tags = {
Name = ___name___
Environment = "infra"
}
...
}
Does Terraform provide a way to do this?
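Terraform has no way for a resource to refer to its own name (self is only available inside provisioner and connection blocks), so the usual workaround is to hoist the name into a local and reference that everywhere. A minimal sketch under that assumption:

locals {
  bastion_name = "bastion-euw3-infra-01"
}

resource "aws_instance" "bastion-euw3-infra-01" {
  ami           = "ami-078db6d55a16afc82"
  instance_type = "t2.micro"
  key_name      = "sylvain"

  user_data = templatefile("./scripts/cloudinit.yaml", {
    hostname          = local.bastion_name
    tailscale_authkey = var.tailscale_authkey
  })

  tags = {
    Name        = local.bastion_name
    Environment = "infra"
  }
}

The resource label itself still has to be spelled out, but every other occurrence collapses to local.bastion_name.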
I want to pass var.domain_names as list(list(string)), for example:

domain_names = [
  ["foo.com", ".*foo-1.com", ".*foo-2.com"],
  ["bar.com", ".*bar-1.com"],
  ...
]
So it should create a certificate for foo.com, bar.com, and so on, but add the others, like .*foo-1.com and .*foo-2.com, to the subject_alternative_names.
Please help me solve this. I am using Terraform 0.12.18.
resource "aws_acm_certificate" "certificate" {
domain_name = var.domain_names[count.index]
subject_alternative_names = slice(var.domain_names, 1, length(var.domain_names))
validation_method = var.validation_method
tags = {
Name = var.domain_names[count.index]
owner = "xx"
terraform = "true"
}
lifecycle {
create_before_destroy = true
}
}
You can accomplish this with a map and a for_each loop. For example:
variable "domain_names" {
type = map(list(string))
default = {
"foo.com" = ["foo.com", ".*foo-1.com", ".*foo-2.com"]
"bar.com" = [".*bar-1.com"]
}
}
resource "aws_acm_certificate" "certificate" {
for_each = var.domain_names
domain_name = each.key
subject_alternative_names = each.value
validation_method = var.validation_method
tags = {
Name = each.key
owner = "xx"
terraform = "true"
}
lifecycle {
create_before_destroy = true
}
}
Refer to this blog post for more info on loops and conditionals.
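If the input really must stay a list(list(string)) as in the question, a hedged variant is to derive the map with a for expression first, assuming the first element of each inner list is the primary domain (the local name is invented here):

locals {
  # dl[0] becomes the primary domain; the remaining entries become SANs.
  domain_map = {
    for dl in var.domain_names : dl[0] => slice(dl, 1, length(dl))
  }
}

The resource above would then use for_each = local.domain_map and stay otherwise unchanged.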
I tried to build an ECS cluster with an ALB in front using Terraform. As I used dynamic port mapping, the targets will not be registered as healthy. I played with the health check and success codes; if I set the matcher to 301, everything is fine.
ECS
data "template_file" "mb_task_template" {
template = file("${path.module}/templates/marketplace-backend.json.tpl")
vars = {
name = "${var.mb_image_name}"
port = "${var.mb_port}"
image = "${aws_ecr_repository.mb.repository_url}"
log_group = "${aws_cloudwatch_log_group.mb.name}"
region = "${var.region}"
}
}
resource "aws_ecs_cluster" "mb" {
name = var.mb_image_name
}
resource "aws_ecs_task_definition" "mb" {
family = var.mb_image_name
container_definitions = data.template_file.mb_task_template.rendered
volume {
name = "mb-home"
host_path = "/ecs/mb-home"
}
}
resource "aws_ecs_service" "mb" {
name = var.mb_repository_url
cluster = aws_ecs_cluster.mb.id
task_definition = aws_ecs_task_definition.mb.arn
desired_count = 2
iam_role = var.aws_iam_role_ecs
depends_on = [aws_autoscaling_group.mb]
load_balancer {
target_group_arn = var.target_group_arn
container_name = var.mb_image_name
container_port = var.mb_port
}
}
resource "aws_autoscaling_group" "mb" {
name = var.mb_image_name
availability_zones = ["${var.availability_zone}"]
min_size = var.min_instance_size
max_size = var.max_instance_size
desired_capacity = var.desired_instance_capacity
health_check_type = "EC2"
health_check_grace_period = 300
launch_configuration = aws_launch_configuration.mb.name
vpc_zone_identifier = flatten([var.vpc_zone_identifier])
lifecycle {
create_before_destroy = true
}
}
data "template_file" "user_data" {
template = file("${path.module}/templates/user_data.tpl")
vars = {
ecs_cluster_name = "${var.mb_image_name}"
}
}
resource "aws_launch_configuration" "mb" {
name_prefix = var.mb_image_name
image_id = var.ami
instance_type = var.instance_type
security_groups = ["${var.aws_security_group}"]
iam_instance_profile = var.aws_iam_instance_profile
key_name = var.key_name
associate_public_ip_address = true
user_data = data.template_file.user_data.rendered
lifecycle {
create_before_destroy = true
}
}
resource "aws_cloudwatch_log_group" "mb" {
name = var.mb_image_name
retention_in_days = 14
}
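For context, the dynamic port mapping itself lives in marketplace-backend.json.tpl, which isn't shown. Sketched inline with jsonencode (values illustrative, not the author's actual definition), it corresponds to a hostPort of 0, which tells ECS to assign an ephemeral host port per task:

resource "aws_ecs_task_definition" "mb_inline_example" {
  family = var.mb_image_name

  container_definitions = jsonencode([
    {
      name   = var.mb_image_name
      image  = "${aws_ecr_repository.mb.repository_url}:latest"
      memory = 512 # some per-container (or task-level) memory is required on EC2

      portMappings = [
        {
          containerPort = tonumber(var.mb_port) # tonumber in case the variable is a string
          hostPort      = 0                     # 0 = dynamic host port
        }
      ]
    }
  ])
}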
ALB
locals {
  target_groups = ["1", "2"]
}

resource "aws_alb" "mb" {
  name               = "${var.mb_image_name}-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = ["${aws_security_group.mb_alb.id}"]
  subnets            = var.subnets

  tags = {
    Name = var.mb_image_name
  }
}

resource "aws_alb_target_group" "mb" {
  count       = length(local.target_groups)
  name        = "${var.mb_image_name}-tg-${element(local.target_groups, count.index)}"
  port        = var.mb_port
  protocol    = "HTTP"
  vpc_id      = var.vpc_id
  target_type = "instance"

  health_check {
    path                = "/health"
    protocol            = "HTTP"
    timeout             = "10"
    interval            = "15"
    healthy_threshold   = "3"
    unhealthy_threshold = "3"
    matcher             = "200-299"
  }

  lifecycle {
    create_before_destroy = true
  }

  tags = {
    Name = var.mb_image_name
  }
}

resource "aws_alb_listener" "mb_https" {
  load_balancer_arn = aws_alb.mb.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08"
  certificate_arn   = module.dns.certificate_arn

  default_action {
    type             = "forward"
    target_group_arn = aws_alb_target_group.mb.0.arn
  }
}

resource "aws_alb_listener_rule" "mb_https" {
  listener_arn = aws_alb_listener.mb_https.arn
  priority     = 100

  action {
    type             = "forward"
    target_group_arn = aws_alb_target_group.mb.0.arn
  }

  condition {
    field  = "path-pattern"
    values = ["/health/"]
  }
}
Okay, it looks like the code above is working. I had a different issue with networking.
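For anyone landing here with the same symptom, a hedged guess at the usual networking culprit with dynamic port mapping: the container instances' security group must allow the ephemeral port range from the ALB's security group, because tasks register on random high ports rather than var.mb_port. The rule below reuses names from the question and is illustrative only:

resource "aws_security_group_rule" "alb_to_ecs_ephemeral" {
  type                     = "ingress"
  from_port                = 32768
  to_port                  = 65535 # Docker's default ephemeral range on the ECS-optimized AMI
  protocol                 = "tcp"
  security_group_id        = var.aws_security_group       # the ECS instances' SG, assumed to be an SG ID
  source_security_group_id = aws_security_group.mb_alb.id # the ALB's SG
}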