Argument or block definition required - Terraform error (Linux)

I have an autoscaling group and I need to create an application load balancer to access the application. Below are my two pieces of code, for the autoscaling group and the application load balancer, but I get the error shown at the end.
Autoscaling group
resource "aws_launch_configuration" "OS-Type"{
name_prefix = "OS-Type"
image_id = "ami-0996d3051b72b5b2c"
instance_type = "t2.micro"
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "Dynamic-IN"{
name = "Dynamic-EC2-instance"
min_size = 1
max_size = 4
desired_capacity = 2
health_check_type = "ELB"
launch_configuration = aws_launch_configuration.OS-Type.name
vpc_zone_identifier = [aws_subnet.P-AV1.id, aws_subnet.P-AV2.id]
target_group_arns="aws_lb.App-lb.name"
lifecycle {
create_before_destroy = true
}
}
Application load balancer
resource "aws_lb_target_group" "Target-group"{
name = "Target-group"
port = 80
protocol = "HTTP"
vpc_id = aws_vpc.main.id
}
resource "aws_lb" "App-lb"{
name = "Application-load-balancer"
load_balancer_type = "application"
subnets = [aws_subnet.P-AV1.id , aws_subnet.P-AV2.id]
internal = false
}
resource "aws_autoscaling_attachment" "TG-attach" {
autoscaling_group_name = aws_autoscaling_group.Dynamic-IN.id
alb_target_group_arn = aws_lb_target_group.Target-group.arn
}
I get this error
Error: Argument or block definition required
on autoscalling-group.tf line 20, in resource "aws_autoscaling_group" "Dynamic-IN":
20: target_group.arns="aws_lb.App-lb.name"
An argument or block definition is required here. To set an argument, use the
equals sign "=" to introduce the argument value.
What I have tried
I have also tried aws_lb.App-lb.arns for the target group, but it does not work either way.

Yes, as you suspect, there should not be quotes there:
target_group_arns="aws_lb.App-lb.name"
Also, target_group_arns is a set, not a single item: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/autoscaling_group#target_group_arns
target_group_arns (Optional) A set of aws_alb_target_group ARNs, for use with Application or Network Load Balancing.
Your code should probably be something like:
resource "aws_autoscaling_group" "Dynamic-IN" {
name = "Dynamic-EC2-instance"
min_size = 1
max_size = 4
desired_capacity = 2
health_check_type = "ELB"
launch_configuration = aws_launch_configuration.OS-Type.name
vpc_zone_identifier = [ aws_subnet.P-AV1.id, aws_subnet.P-AV2.id ]
target_group_arns = [ aws_lb_target_group.Target-group.arn ]
lifecycle {
create_before_destroy = true
}
}

Related

How to dynamically attach multiple volumes to multiple instances via terraform in openstack?

I have written a Terraform (v0.14) module that provisions multiple instances in OpenStack with the correct availability_zone, network port, and flavor, and that can create 1 local disk or 1 blockstorage volume based on boolean input variables, and so forth. Now I have received a feature request to dynamically add multiple blockstorage volumes (network/shared storage volumes) to multiple instances.
Example of how everything dynamically works for 1 blockstorage_volume on multiple instances:
# Dynamically create this resource ONLY if boolean "volume_storage" = true
resource "openstack_blockstorage_volume_v3" "shared_storage" {
  for_each          = var.volume_storage ? var.nodes : {}
  name              = "${each.value}-${each.key}.${var.domain}"
  size              = var.volume_s
  availability_zone = each.key
  volume_type       = var.volume_type
  image_id          = var.os_image
}

resource "openstack_compute_instance_v2" "instance" {
  for_each          = var.nodes
  name              = "${each.value}-${each.key}.${var.domain}"
  flavor_name       = var.flavor
  availability_zone = each.key
  key_pair          = var.key_pair
  image_id          = (var.volume_storage == true ? "" : var.os_image)
  config_drive      = true

  # Dynamically create this parameter ONLY if boolean "volume_storage" = true
  dynamic "block_device" {
    for_each = var.volume_storage ? var.volume : {}
    content {
      uuid                  = openstack_blockstorage_volume_v3.shared_storage[each.key].id
      source_type           = "volume"
      destination_type      = "volume"
      delete_on_termination = var.volume_delete_termination
    }
  }

  user_data = data.template_file.cloud-init[each.key].rendered

  scheduler_hints {
    group = openstack_compute_servergroup_v2.servergroup.id
  }

  network {
    port = openstack_networking_port_v2.network-port[each.key].id
  }
}
So let's say I now have 2 instances, to each of which I want to dynamically add 2 extra blockstorage volumes. My first idea was to add extra dynamic resources as a try-out:
# Dynamically create this resource if boolean "multiple_volume_storage" = true
resource "openstack_blockstorage_volume_v3" "multiple_shared_storage" {
  for_each          = var.multiple_volume_storage ? var.multiple_volumes : {}
  name              = each.value
  size              = var.volume_s
  availability_zone = each.key
  volume_type       = var.volume_type
  image_id          = var.os_image
}
Example of 2 extra blockstorage_volumes defined in a .tf file:
variable "multiple_volumes" {
type = map(any)
default = {
dc1 = "/volume/mysql"
dc2 = "/volume/postgres"
}
}
Example of 2 instances defined in a .tf file:
nodes = {
  dc1 = "app-stage"
  dc2 = "app-stage"
}
Here I try to dynamically attach 2 extra blockstorage_volumes to each instance:
resource "openstack_compute_volume_attach_v2" "attach_multiple_shared_storage" {
for_each = var.multiple_volume_storage ? var.multiple_volumes : {}
instance_id = openstack_compute_instance_v2.instance[each.key].id
volume_id = openstack_blockstorage_volume_v3.multiple_shared_storage[each.key].id
}
The openstack_compute_instance_v2.instance[each.key] reference is obviously not correct, since it only creates 1 extra blockstorage volume per instance. Is there a clean/elegant way to solve this, i.e. to attach all the volumes given in variable "multiple_volumes" to every single instance defined in var.nodes? (See the sketch after this question for one possible approach.)
Kind regards,
Jonas
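One possible approach (a sketch, not from the original thread): build the cross-product of the nodes and volumes with setproduct() and key both the volume and the attachment resources off it, so every instance in var.nodes gets its own copy of every volume in var.multiple_volumes. The local name node_volume_pairs and the volume naming scheme are illustrative assumptions:
locals {
  # One entry per (node, volume) combination, e.g. "dc1.dc2"
  node_volume_pairs = {
    for pair in setproduct(keys(var.nodes), keys(var.multiple_volumes)) :
    "${pair[0]}.${pair[1]}" => {
      node_key   = pair[0]
      volume_key = pair[1]
    }
  }
}

# One volume per (instance, volume) pair, since a volume can normally only be
# attached to a single instance at a time
resource "openstack_blockstorage_volume_v3" "multiple_shared_storage" {
  for_each          = var.multiple_volume_storage ? local.node_volume_pairs : {}
  name              = "${each.value.node_key}-${each.value.volume_key}"
  size              = var.volume_s
  availability_zone = each.value.node_key
  volume_type       = var.volume_type
}

# Attach each generated volume to the instance it was created for
resource "openstack_compute_volume_attach_v2" "attach_multiple_shared_storage" {
  for_each    = var.multiple_volume_storage ? local.node_volume_pairs : {}
  instance_id = openstack_compute_instance_v2.instance[each.value.node_key].id
  volume_id   = openstack_blockstorage_volume_v3.multiple_shared_storage[each.key].id
}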

How to re-attach an EBS volume using Terraform

I'm trying to keep an AWS EBS volume as a persistent data store; every week my AMI changes, so I have to spin up a new VM in AWS. At that point I expect my volume to detach from the old VM and attach to the new VM without destroying the EBS volume and its data.
resource "aws_instance" "my_instance" {
count = var.instance_count
ami = lookup(var.ami,var.aws_region)
instance_type = var.instance_type
key_name = aws_key_pair.terraform-demo.key_name
subnet_id = aws_subnet.main-public-1.id
// user_data = "${file("install_apache.sh")}"
tags = {
Name = "Terraform-${count.index + 1}"
Batch = "5AM"
}
}
variable "instances" {
type = map
default = {
"xx" = "sss-console"
"4xx" = "sss-upload-port"
"xxx" = "sss"
}
}
resource "aws_kms_key" "cmp_kms" {
description = "ssss-ebsencrypt"
tags = local.all_labels
}
resource "aws_ebs_volume" "volumes" {
count = var.instance_count
availability_zone = element(aws_instance.my_instance.*.availability_zone, count.index )
encrypted = true
kms_key_id = aws_kms_key.cmp_kms.arn
size = local.volume_size
type = local.volume_type
iops = local.volume_iops
// tags = merge(var.extra_labels, map("Name", "${var.cell}-${element(local.volume_name, count.index)}"))
lifecycle {
// prevent_destroy = true
ignore_changes = [kms_key_id, instance_id]
}
}
resource "aws_volume_attachment" "volumes-attachment" {
depends_on = [aws_instance.my_instance, aws_ebs_volume.volumes]
count = var.instance_count
device_name = "/dev/${element(local.volume_name, count.index)}"
volume_id = element(aws_ebs_volume.volumes.*.id, count.index)
instance_id = element(aws_instance.my_instance.*.id, count.index)
force_detach = true
}
ERROR on terraform apply
Error: Unsupported attribute
on instance.tf line 71, in resource "aws_ebs_volume" "volumes":
71: ignore_changes = [kms_key_id, instance_id]
This object has no argument, nested block, or exported attribute named
"instance_id".
Earlier the same code used to work with Terraform v0.11, but it's not working with v0.12. What is the replacement for this, or how can we re-attach an EBS volume to a different machine without destroying it?
As per the Terraform documentation, no attribute named instance_id is exposed for the aws_ebs_volume resource.
For reference: https://www.terraform.io/docs/providers/aws/d/ebs_volume.html.
You can specify the instance_id at the time of volume attachment using the aws_volume_attachment resource.
You can refer to the answer given at https://gitter.im/hashicorp-terraform/Lobby?at=5ab900eb2b9dfdbc3a237e36 for more information.
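In practice that means dropping instance_id from the ignore_changes list of the aws_ebs_volume resource and leaving the attachment to aws_volume_attachment, which the question already declares. A minimal sketch of the corrected resource, keeping the rest of the arguments as posted:
resource "aws_ebs_volume" "volumes" {
  count             = var.instance_count
  availability_zone = element(aws_instance.my_instance.*.availability_zone, count.index)
  encrypted         = true
  kms_key_id        = aws_kms_key.cmp_kms.arn
  size              = local.volume_size
  type              = local.volume_type
  iops              = local.volume_iops

  lifecycle {
    // instance_id is not an attribute of aws_ebs_volume, so under Terraform
    // 0.12's stricter validation it cannot appear in ignore_changes; the
    // attachment itself is managed by aws_volume_attachment (with
    // force_detach controlling behaviour on replacement).
    // prevent_destroy = true
    ignore_changes = [kms_key_id]
  }
}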

How to attach two target groups to a single ECS service

I am looking for a way to attach two target groups to a single ECS service; in other words, my container exposes two ports, but I am only able to map one port from my service to the LB.
So far I am able to create a new listener and target group. After the target group is created everything looks as expected, but the target group shows "There are no targets registered to this target group".
Here are my target group and listener configurations.
Target group:
resource "aws_lb_target_group" "e_admin" {
name = "${var.env_prefix_name}-admin"
port = 5280
protocol = "HTTP"
vpc_id = "${aws_vpc.VPC.id}"
health_check {
path = "/admin"
healthy_threshold = 2
unhealthy_threshold = 10
port = 5280
timeout = 90
interval = 100
matcher = "401,200"
}
}
Listener:
resource "aws_lb_listener" "admin" {
load_balancer_arn = "${aws_lb.admin_lb.arn}"
port = "5280"
protocol = "HTTP"
default_action {
target_group_arn = "${aws_lb_target_group.e_admin.id}"
type = "forward"
}
}
My question is: how can I add the ECS cluster's autoscaling group, or all the instances running in the ECS cluster, to this target group?
AWS recently announced support for multiple target groups for an ECS service.
The currently unreleased 2.22.0 version of the AWS provider supports this by allowing multiple load_balancer blocks in the aws_ecs_service resource. Example from the acceptance tests:
resource "aws_ecs_service" "with_alb" {
name = "example"
cluster = "${aws_ecs_cluster.main.id}"
task_definition = "${aws_ecs_task_definition.with_lb_changes.arn}"
desired_count = 1
iam_role = "${aws_iam_role.ecs_service.name}"
load_balancer {
target_group_arn = "${aws_lb_target_group.test.id}"
container_name = "ghost"
container_port = "2368"
}
load_balancer {
target_group_arn = "${aws_lb_target_group.static.id}"
container_name = "ghost"
container_port = "4501"
}
depends_on = [
"aws_iam_role_policy.ecs_service",
]
}
According to https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html:
There is a limit of one load balancer or target group per service.
If you want to attach an autoscaling group to the target group, use aws_autoscaling_attachment:
https://www.terraform.io/docs/providers/aws/r/autoscaling_attachment.html
resource "aws_autoscaling_attachment" "asg_attachment_bar" {
autoscaling_group_name = "${aws_autoscaling_group.your_asg.id}"
alb_target_group_arn = "${aws_alb_target_group.e_admin.arn}"
}
You can define multiple target groups for the same ECS service using the load_balancer block.
resource "aws_ecs_service" "ecs_service_1" {
name = "service-1"
cluster = aws_ecs_cluster.ecs_cluster_prod.id
task_definition = aws_ecs_task_definition.ecs_task_definition_1.arn
desired_count = 1
launch_type = "FARGATE"
enable_execute_command = true
# Target group 1
load_balancer {
target_group_arn = aws_lb_target_group.lb_tg_1.arn
container_name = "app"
container_port = 8080
}
# Target group 2
load_balancer {
target_group_arn = aws_lb_target_group.lb_tg_2.arn
container_name = "app"
container_port = 8080
}
network_configuration {
subnets = [aws_subnet.subnet_a.id, aws_subnet.subnet_b.id]
security_groups = [aws_security_group.sg_internal.id]
assign_public_ip = true
}
tags = {
Name = "service-1"
ManagedBy = "terraform"
Environment = "prod"
}
}
You can map the same container and port to both target groups, for example when you have both an external and an internal load balancer.

Delay in creation of launch config in AWS

Using Terraform, I have the following launch config and autoscale group resources defined:
resource "aws_launch_configuration" "lc_name" {
name = "lc_name"
image_id = "ami-035d01348bb6e6070"
instance_type = "m3.large"
security_groups = ["sg-61a0b51b"]
}
####################
# Autoscaling group
####################
resource "aws_autoscaling_group" "as_group_name" {
name = "as_group_name"
launch_configuration = "lc_name"
vpc_zone_identifier = ["subnet-be1088f7","subnet-fa8d6fa1"]
min_size = "1"
max_size = "1"
desired_capacity = "1"
load_balancers = ["${aws_elb.elb_name.name}"]
health_check_type = "EC2"
}
When I run terraform apply, I get:
Error: Error applying plan:
1 error(s) occurred:
aws_autoscaling_group.as_group_name: 1 error(s) occurred:
aws_autoscaling_group.as_group_name: Error creating AutoScaling Group: ValidationError: Launch configuration name not found - A launch configuration with the name: lc_name does not exist
status code: 400, request id: b09191d3-a47c-11e8-8198-198283743bc9
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
If I run apply again, all goes well, strongly implying that there is a delay in the create autoscale group code recognizing a new launch configuration. Is there a way to adjust to this delay?
Update:
Per suggestion, I added a dependency:
resource "aws_launch_configuration" "myLaunchConfig" {
name = "myLaunchConfig"
image_id = "ami-01c068891b0d9411a"
instance_type = "m3.large"
security_groups = ["sg-61a0b51b"]
}
resource "aws_autoscaling_group" "myAutoScalingGroup" {
name = "myAutoScalingGroup"
launch_configuration = "myLaunchConfig"
depends_on = ["myLaunchConfig"]
vpc_zone_identifier = ["subnet-be1088f7","subnet-fa8d6fa1"]
min_size = "1"
max_size = "1"
desired_capacity = "1"
load_balancers = ["${aws_elb.myLoadBalancer.name}"]
health_check_type = "EC2"
}
Still getting an error for the same reason, though it looks a bit different:
Error: aws_autoscaling_group.myAutoScalingGroup: resource depends on non-existent resource 'myLaunchConfig'
As far as Terraform can tell there is no relationship between your autoscaling group and your launch configuration, so it is going to try to create them in parallel, leading to the observed race condition that corrects itself on the next apply.
With Terraform you have two different ways of ordering a dependency chain between resources.
You can use the explicit depends_on syntax to force a resource to wait until another resource is created before it, in turn, is created.
In your case this would be something like:
resource "aws_launch_configuration" "lc_name" {
name = "lc_name"
image_id = "ami-035d01348bb6e6070"
instance_type = "m3.large"
security_groups = ["sg-61a0b51b"]
}
####################
# Autoscaling group
####################
resource "aws_autoscaling_group" "as_group_name" {
name = "as_group_name"
launch_configuration = "lc_name"
vpc_zone_identifier = ["subnet-be1088f7", "subnet-fa8d6fa1"]
min_size = "1"
max_size = "1"
desired_capacity = "1"
load_balancers = ["${aws_elb.elb_name.name}"]
health_check_type = "EC2"
depends_on = ["aws_launch_configuration.lc_name"]
}
Or, and this is generally preferable where possible, if you interpolate a value from one resource then it will automatically wait until that resource is created before creating the second resource.
In your case you would then use something like this:
resource "aws_launch_configuration" "lc_name" {
name = "lc_name"
image_id = "ami-035d01348bb6e6070"
instance_type = "m3.large"
security_groups = ["sg-61a0b51b"]
}
####################
# Autoscaling group
####################
resource "aws_autoscaling_group" "as_group_name" {
name = "as_group_name"
launch_configuration = "${aws_launch_configuration.lc_name.name}"
vpc_zone_identifier = ["subnet-be1088f7", "subnet-fa8d6fa1"]
min_size = "1"
max_size = "1"
desired_capacity = "1"
load_balancers = ["${aws_elb.elb_name.name}"]
health_check_type = "EC2"
}
If you are ever unsure as to the order of things that Terraform will operate on then you might want to take a look at the terraform graph command.

Terraform (provider AWS) - Auto Scaling group doesn't take effect on a launch template change

I am unable to make launch templates work with ASGs. With launch configurations it works using a small hack, i.e. by interpolating the launch configuration name in the ASG resource, but the same doesn't work with launch templates.
The ASG uses the latest version to launch new instances, but doesn't change anything for the already-running instances in spite of a change to the launch template.
I understand that this is sort of expected, but is there any workaround to make launch templates work with an ASG, or do we need to stick with launch configurations?
TF code snippet -
resource "aws_launch_template" "lc_ec2" {
image_id = "${var.ami_id}"
instance_type = "${var.app_instance_type}"
key_name = "${var.orgname}_${var.environ}_kp"
vpc_security_group_ids = ["${aws_security_group.sg_ec2.id}"]
user_data = "${base64encode(var.userdata)}"
block_device_mappings {
device_name = "/dev/xvdv"
ebs {
volume_size = 15
}
}
iam_instance_profile {
name = "${var.orgname}_${var.environ}_profile"
}
lifecycle {
create_before_destroy = true
}
tag_specifications {
resource_type = "instance"
tags = "${merge(map("Name", format("%s-%s-lc-ec2", var.orgname, var.environ)), var.tags)}"
}
tag_specifications {
resource_type = "volume"
tags = "${merge(map("Name", format("%s-%s-lc-ec2-volume", var.orgname, var.environ)), var.tags)}"
}
tags = "${merge(map("Name", format("%s-%s-lc-ec2", var.orgname, var.environ)), var.tags)}"
}
resource "aws_autoscaling_group" "asg_ec2" {
name = "${var.orgname}-${var.environ}-asg-ec2-${aws_launch_template.lc_ec2.name}"
vpc_zone_identifier = ["${data.aws_subnet.private.*.id}"]
min_size = 1
desired_capacity = 1
max_size = 1
target_group_arns = ["${aws_lb_target_group.alb_tg.arn}"]
default_cooldown= 100
health_check_grace_period = 100
termination_policies = ["ClosestToNextInstanceHour", "NewestInstance"]
health_check_type="ELB"
launch_template = {
id = "${aws_launch_template.lc_ec2.id}"
version = "$$Latest"
}
lifecycle {
create_before_destroy = true
}
tags = [
{
key = "Name"
value = "${var.orgname}"
propagate_at_launch = true
},
{
key = "Environ"
value = "${var.environ}"
propagate_at_launch = true
}
]
}
There is one hack to achieve this.
AWS CloudFormation supports rolling updates of an Autoscaling group.
Since Terraform supports a cloudformation stack resource, you can define your ASG as a cloudformation stack with an update policy. However, CloudFormation does not support the $$Latest tag for launch template version, so you will have to parameterize the version and take the input value from the latest_version attribute of the launch template resource created in your terraform configuration file.
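A rough sketch of that workaround (the stack name, template values, subnet placeholder, and rolling-update settings below are illustrative assumptions, not part of the original answer):
resource "aws_cloudformation_stack" "asg_ec2" {
  name = "${var.orgname}-${var.environ}-asg-ec2"

  # Bumping the launch template changes latest_version, which changes this
  # parameter and makes CloudFormation perform the rolling update below.
  parameters = {
    LaunchTemplateId      = "${aws_launch_template.lc_ec2.id}"
    LaunchTemplateVersion = "${aws_launch_template.lc_ec2.latest_version}"
  }

  template_body = <<EOF
Parameters:
  LaunchTemplateId:
    Type: String
  LaunchTemplateVersion:
    Type: String
Resources:
  ASG:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "1"
      MaxSize: "2"
      DesiredCapacity: "1"
      VPCZoneIdentifier:
        - subnet-xxxxxxxx
      LaunchTemplate:
        LaunchTemplateId: !Ref LaunchTemplateId
        Version: !Ref LaunchTemplateVersion
    UpdatePolicy:
      AutoScalingRollingUpdate:
        MinInstancesInService: "1"
        MaxBatchSize: "1"
EOF
}
With this, a change to the launch template flows through latest_version into the stack parameter, and CloudFormation rolls the running instances, which a plain Terraform-managed ASG does not do on its own.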
