I have two target groups, one for port 80 and another for port 443, plus two instances as the members of that NLB, and I need to attach both target groups to each instance. This is how I have configured the attachment:
// Creates the target groups
resource "aws_lb_target_group" "nlb_target_groups" {
  for_each = {
    for lx in var.nlb_listeners : "${lx.protocol}:${lx.target_port}" => lx
  }

  name                 = "${var.vpc_names[var.idx]}-tgr-${each.value.target_port}"
  deregistration_delay = var.deregistration_delay
  port                 = each.value.target_port
  protocol             = each.value.protocol
  vpc_id               = var.vpc_ids[var.idx]
  proxy_protocol_v2    = true

  health_check {
    port                = each.value.health_port
    protocol            = each.value.protocol
    interval            = var.health_check_interval
    healthy_threshold   = var.healthy_threshold
    unhealthy_threshold = var.unhealthy_threshold
  }
}

// Attach the target groups to the instance(s)
resource "aws_lb_target_group_attachment" "tgr_attachment" {
  for_each = {
    for pair in setproduct(keys(aws_lb_target_group.nlb_target_groups), var.elb_members.ids) : "${pair[0]}:${pair[1]}" => {
      target_group = aws_lb_target_group.nlb_target_groups[pair[0]]
      instance_id  = pair[1]
    }
  }

  target_group_arn = each.value.target_group.arn
  target_id        = each.value.instance_id
  port             = each.value.target_group.port
  #target_id = [for tid in range(var.inst_count) : data.aws_instances.nlb_insts.ids[tid]]
}
where var.nlb_listeners is defined like this:
nlb_listeners = [
  {
    protocol    = "TCP"
    target_port = "80"
    health_port = "1936"
  },
  {
    protocol    = "TCP"
    target_port = "443"
    health_port = "1936"
  }
]
and var.elb_members.ids is like this:
"ids" = [
"i-015604f88xxxxxx42",
"i-0e4defceexxxxxxe5",
]
but I’m getting an "Invalid for_each argument" error:

Error: Invalid for_each argument

  on ../../modules/elb/balencer.tf line 46, in resource "aws_lb_target_group_attachment" "tgr_attachment":
  46:   for_each = {
  47:     for pair in setproduct(keys(aws_lb_target_group.nlb_target_groups), var.elb_members.ids) : "${pair[0]}:${pair[1]}" => {
  48:       target_group = aws_lb_target_group.nlb_target_groups[pair[0]]
  49:       instance_id = pair[1]
  50:     }
  51:   }

The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the for_each depends on.
I cannot figure out why it's invalid, or why this for_each cannot determine the values. Any idea what I'm doing wrong here? I'm seriously stuck in the middle of this and would really appreciate any help pointing me in the right direction.
-S
=== Update: 02/23 ==========
@martin-atkins,
I think I understand what you said, but I seem to get the same error even for instances that already exist. Anyway, this is my aws_instance resource:
resource "aws_instance" "inst" {
count = var.inst_count
instance_type = var.inst_type
depends_on = [aws_subnet.snets]
ami = data.aws_ami.ubuntu.id
# the VPC subnet
subnet_id = element(aws_subnet.snets.*.id, count.index)
vpc_security_group_ids = [var.sg_default[var.idx], aws_security_group.secg.id]
user_data = <<-EOF
#!/bin/bash
hostnamectl set-hostname ${var.vpc_names[var.idx]}${var.inst_role}0${count.index + 1}
# Disable apt-daily.service & wait until `apt updated` has been killed
systemctl stop apt-daily.service && systemctl kill --kill-who=all apt-daily.service
while ! (systemctl list-units --all apt-daily.service | egrep -q '(dead|failed)')
do sleep 1; done
EOF
# the public SSH key
key_name = var.key_info
tags = merge(
var.common_tags,
{ "Name" = "${var.vpc_names[var.idx]}${var.inst_role}0${count.index + 1}" }
)
}
Anything else you think can be done to get around that issue?
-S
EC2 instance ids are not allocated until the instance has already been created, so the AWS provider cannot pre-calculate them during the plan phase. Because of that, they are not suitable for use as part of the key of a for_each map.
Instead, you'll need to use some other identifier for those instances that is determined by the configuration itself, rather than by data returned by the provider during apply. You didn't share the configuration for the instances themselves, so I can't make a specific suggestion, but if they are all instances of the same resource created with count or for_each, then a common choice is to use the index of each instance (for count) or the unique key of each instance (for for_each). That way, elements of the map will be added and removed to correspond with the instances themselves being added and removed, as in the sketch below.
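For example, here is a minimal sketch assuming the instances are created with count, as in the aws_instance resource shown in the update above. Every map key is built only from the target group keys and the instance indices, both of which are known at plan time; the apply-time instance id appears only in the map values:

resource "aws_lb_target_group_attachment" "tgr_attachment" {
  for_each = {
    for pair in setproduct(keys(aws_lb_target_group.nlb_target_groups), range(var.inst_count)) :
    "${pair[0]}:${pair[1]}" => {
      target_group = aws_lb_target_group.nlb_target_groups[pair[0]]
      # The instance id is used only in the value, never in the key,
      # so it's fine that it isn't known until apply.
      instance_id = aws_instance.inst[pair[1]].id
    }
  }

  target_group_arn = each.value.target_group.arn
  target_id        = each.value.instance_id
  port             = each.value.target_group.port
}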
I finally worked that out. I'm not sure whether it's the best way of doing it, but it now works for me error-free. Basically, I've changed the data lookup like this:
// List of instance attributes by role
data "aws_instance" "by_role" {
  for_each = {
    for ic in range(var.inst_count) : "${var.inst_role}0${ic + 1}" => ic
  }

  instance_tags = {
    Name = "${var.vpc_names[var.idx]}${each.key}"
  }
  instance_id = aws_instance.inst[substr(each.key, 4, 2) - 1].id
}
and used that in the for_each like this:
for_each = {
  for pair in setproduct(keys(aws_lb_target_group.nlb_target_groups), keys(data.aws_instance.by_role)) :
  "${pair[0]}:${pair[1]}" => {
    target_group = aws_lb_target_group.nlb_target_groups[pair[0]]
    member_name  = data.aws_instance.by_role[pair[1]]
  }
}
Details are here, in my answer to my own question on the Terraform Community forum, in case someone else is facing the same issue.
-San
Related
I created a resource that produces multiple VMs using the for_each argument. I'm having trouble referencing this resource from my web_private_group.
resource "google_compute_instance_group" "web_private_group" {
name = "${format("%s","${var.gcp_resource_name}-${var.gcp_env}-vm-group")}"
description = "Web servers instance group"
zone = var.gcp_zone_1
network = google_compute_network.vpc.self_link
# I've added some attempts I've tried that do not work...
instances = [
//google_compute_instance.compute_engines.self_link
//[for o in google_compute_instance.compute_engines : o.self_link]
{for k, o in google_compute_instance.compute_engines : k => o.self_link}
//google_compute_instance.web_private_2.self_link
]
named_port {
name = "http"
port = "80"
}
named_port {
name = "https"
port = "443"
}
}
# Create Google Cloud VMs
resource "google_compute_instance" "compute_engines" {
  for_each     = var.vm_instances
  name         = "${format("%s", "${var.gcp_resource_name}-${var.gcp_env}-${each.value}")}"
  machine_type = "e2-micro"
  zone         = var.gcp_zone_1
  tags         = ["ssh", "http"]

  boot_disk {
    initialize_params {
      image = "debian-10"
    }
  }
}
variable "vm_instances" {
description = "list of VM's"
type = set(string)
default = ["vm1", "vm2", "vm3"]
}
How can I properly link my compute_engines to my web_private_group resource within the instances = [] block?
Edit: To further clarify, how can I state the fact that there are multiple instances within my compute_engines resource?
You probably just need to use a splat expression as follows:
instances = values(google_compute_instance.compute_engines)[*].id
Moreover, a for expression can be used as well:
instances = [for res in google_compute_instance.compute_engines: res.id]
I have this script, which works great. It creates 3 instances with the specified tags to identify them easily. The issue is that I want to add a remote-exec provisioner (currently commented out) to install some packages. If I were using count, I could have looped over it to run remote-exec on all the instances, but I could not use count because I had to use for_each to loop over a local list. Since count and for_each cannot be used together, how do I loop over the instances to retrieve their IP addresses for use in the remote-exec provisioner?
On DigitalOcean and AWS, I was able to get this to work using host = "${self.public_ip}".
But it does not work on Vultr and gives an Unsupported attribute error.
instance.tf
resource "vultr_ssh_key" "kubernetes" {
name = "kubernetes"
ssh_key = file("kubernetes.pub")
}
resource "vultr_instance" "kubernetes_instance" {
for_each = toset(local.expanded_names)
plan = "vc2-1c-2gb"
region = "sgp"
os_id = "387"
label = each.value
tag = each.value
hostname = each.value
enable_ipv6 = true
backups = "disabled"
ddos_protection = false
activation_email = false
ssh_key_ids = [vultr_ssh_key.kubernetes.id]
/* connection {
type = "ssh"
user = "root"
private_key = file("kubernetes")
timeout = "2m"
host = vultr_instance.kubernetes_instance[each.key].ipv4_address
}
provisioner "remote-exec" {
inline = "sudo hostnamectl set-hostname ${each.value}"
} */
}
locals {
expanded_names = flatten([
for name, count in var.host_name : [
for i in range(count) : format("%s-%02d", name, i + 1)
]
])
}
provider.tf
terraform {
  required_providers {
    vultr = {
      source  = "vultr/vultr"
      version = "2.3.1"
    }
  }
}

provider "vultr" {
  api_key     = "***************************"
  rate_limit  = 700
  retry_limit = 3
}
variables.tf
variable "host_name" {
type = map(number)
default = {
"Manager" = 1
"Worker" = 2
}
}
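With those defaults, local.expanded_names in instance.tf evaluates to ["Manager-01", "Worker-01", "Worker-02"], one entry per instance to create.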
The attribute you are looking for is called main_ip, not ipv4_address. Specifically, it is accessible via self.main_ip in your connection block.
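Applied to the commented-out blocks inside the vultr_instance resource above, that would look something like this (a sketch, keeping the rest of the resource as in the question; note that inline expects a list of commands):

  connection {
    type        = "ssh"
    user        = "root"
    private_key = file("kubernetes")
    timeout     = "2m"
    host        = self.main_ip
  }

  provisioner "remote-exec" {
    # inline expects a list of commands
    inline = ["sudo hostnamectl set-hostname ${each.value}"]
  }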
I'm getting the below error while running terraform plan and apply
Error: Invalid for_each argument
│
│ on main.tf line 517, in resource "aws_lb_target_group_attachment" "ecom-tga":
│ 517: for_each = local.service_instance_map
│ ├────────────────
│ │ local.service_instance_map will be known only after apply
│
│ The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will
│ be created. To work around this, use the -target argument to first apply only the resources that the for_each depends on.
My configuration file is as below
variable "instance_count" {
type = string
default = 3
}
variable "service-names" {
type = list
default = ["valid","jsc","test"]
}
locals {
helper_map = {for idx, val in setproduct(var.service-names, range(var.instance_count)):
idx => {service_name = val[0]}
}
}
resource "aws_instance" "ecom-validation-service" {
for_each = local.helper_map
ami = data.aws_ami.ecom.id
instance_type = "t3.micro"
tags = {
Name = "${each.value.service_name}-service"
}
vpc_security_group_ids = [data.aws_security_group.ecom-sg[each.value.service_name].id]
subnet_id = data.aws_subnet.ecom-subnet[each.value.service_name].id
}
data "aws_instances" "ecom-instances" {
for_each = toset(var.service-names)
instance_tags = {
Name = "${each.value}-service"
}
instance_state_names = ["running", "stopped"]
depends_on = [
aws_instance.ecom-validation-service
]
}
locals {
  service_instance_map = merge([
    for env, value in data.aws_instances.ecom-instances : {
      for id in value.ids :
      "${env}-${id}" => {
        "service-name" = env
        "id"           = id
      }
    }
  ]...)
}
resource "aws_lb_target_group_attachment" "ecom-tga" {
for_each = local.service_instance_map
target_group_arn = aws_lb_target_group.ecom-nlb-tgp[each.value.service-name].arn
port = 80
target_id = each.value.id
depends_on = [aws_lb_target_group.ecom-nlb-tgp]
}
Since I'm passing the count as a variable with a value of 3, I thought Terraform would be able to predict that it needs to create 9 instances, but it seems it didn't and instead threw the error that it is unable to predict.
Do we have any way to bypass this by giving some default values for the instance count prediction, or for that local service_instance_map?
I tried the try function, but still no luck:
Error: Invalid for_each argument
│
│ on main.tf line 527, in resource "aws_lb_target_group_attachment" "ecom-tga":
│ 527: for_each = try(local.service_instance_map,[])
│ ├────────────────
│ │ local.service_instance_map will be known only after apply
│
│ The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will
│ be created. To work around this, use the -target argument to first apply only the resources that the for_each depends on.
My requirement has changed and now I have to create 3 instances in the 3 subnets available in that region. I changed the locals as below, but I hit the same prediction issue:
locals {
  merged_subnet_svc = try(flatten([
    for service in var.service-names : [
      for subnet in aws_subnet.ecom-private.*.id : {
        service = service
        subnet  = subnet
      }
    ]
  ]), {})
}

variable "azs" {
  type    = list(any)
  default = ["ap-south-1a", "ap-south-1b", "ap-south-1c"]
}

variable "private-subnets" {
  type    = list(any)
  default = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
}

resource "aws_instance" "ecom-instances" {
  for_each = {
    for svc in local.merged_subnet_svc : "${svc.service}-${svc.subnet}" => svc
  }

  ami           = data.aws_ami.ecom.id
  instance_type = "t3.micro"

  tags = {
    Name = "ecom-${each.value.service}-service"
  }

  vpc_security_group_ids = [aws_security_group.ecom-sg[each.value.service].id]
  subnet_id              = each.value.subnet
}
In your configuration you've declared that data "aws_instances" "ecom-instances" depends on aws_instance.ecom-validation-service. Since that other object won't exist yet on your first run, Terraform must wait until the apply step to read data.aws_instances.ecom-instances; otherwise it would fail to honor the dependency you've declared, since aws_instance.ecom-validation-service wouldn't exist yet.
To avoid the error message you saw here, you need to make sure that for_each only refers to values that Terraform will know before any objects are actually created. Because EC2 assigns instance ids only once the instance is created, it's not correct to use an EC2 instance id as part of a for_each instance key.
Furthermore, there's no need for a data "aws_instances" block to retrieve instance information here because you already have the relevant instance information as a result of the resource "aws_instance" "ecom-validation-service" block.
With all of that said, let's start from your input variables and build things up again, making sure that we build instance keys only from values we'll know during planning. The variables you have stay essentially the same; I've just tweaked the type constraints a little to match how we're using each one:
variable "instance_count" {
type = string
default = 3
}
variable "service_names" {
type = set(string)
default = ["valid", "jsc", "test"]
}
I understand from the rest of your example that you are intending to create var.instance_count instances for each distinct element of var.service_names. Your setproduct to produce all of the combinations of those is also good, but I'm going to tweak it to assign the instances unique keys that include the service name:
locals {
  instance_configs = tomap({
    for pair in setproduct(var.service_names, range(var.instance_count)) :
    "${pair[0]}${pair[1]}" => {
      service_name = pair[0]
    }
  })
}
This will produce a data structure like the following:
{
  valid0 = { service_name = "valid" }
  valid1 = { service_name = "valid" }
  valid2 = { service_name = "valid" }
  jsc0   = { service_name = "jsc" }
  jsc1   = { service_name = "jsc" }
  jsc2   = { service_name = "jsc" }
  test0  = { service_name = "test" }
  test1  = { service_name = "test" }
  test2  = { service_name = "test" }
}
This matches the shape that for_each expects, so we can use it directly to declare nine aws_instance instances:
resource "aws_instance" "ecom-validation-service" {
for_each = local.instance_configs
instance_type = "t3.micro"
ami = data.aws_ami.ecom.id
subnet_id = data.aws_subnet.ecom-subnet[each.value.service_name].id
vpc_security_group_ids = [
data.aws_security_group.ecom-sg[each.value.service_name].id,
]
tags = {
Name = "${each.value.service_name}-service"
Service = each.value_service_name
}
}
So far this has been mostly the same as what you shared. But this is the point where I'm going to go in a totally different direction: rather than trying to read back the instances it declared using a separate data resource, I'll just gather the same data directly from the aws_instance.ecom-validation-service resource. It's generally best for a Terraform configuration to either manage a particular object or read it, not both at the same time, because that way the necessary dependency ordering is revealed automatically by the references.
Notice that I included an extra tag Service on each of the instances to give a more convenient way to get the service name back. If you can't do that then you could get the same information by trimming the -service suffix from the Name tag, but I prefer to keep things direct where possible.
It seemed like your goal was to have an aws_lb_target_group_attachment instance per EC2 instance, with each one connected to the appropriate target group based on the service name. Because that aws_instance resource has for_each set, aws_instance.ecom-validation-service in expressions elsewhere is a map of objects whose keys are the same as the keys in local.instance_configs. That means the value is also compatible with the requirements for for_each, and so we can use it directly to declare the target group attachments:
resource "aws_lb_target_group_attachment" "ecom-tga" {
for_each = aws_instance.ecom-validation-service
target_group_arn = aws_lb_target_group.ecom-nlb-tgp[each.value.tags.Service].arn
port = 80
target_id = each.value.id
}
I relied on the extra Service tag from earlier to easily determine which service each instance belongs to in order to look up the appropriate target group ARN. each.value.id works here because each.value is always an aws_instance object, which exports that id attribute.
The result of this is two sets of instances that each have keys matching those in local.instance_configs:
aws_instance.ecom-validation-service["valid0"]
aws_instance.ecom-validation-service["valid1"]
aws_instance.ecom-validation-service["valid2"]
aws_instance.ecom-validation-service["jsc0"]
aws_instance.ecom-validation-service["jsc1"]
aws_instance.ecom-validation-service["jsc2"]
...
aws_lb_target_group_attachment.ecom-tga["valid0"]
aws_lb_target_group_attachment.ecom-tga["valid1"]
aws_lb_target_group_attachment.ecom-tga["valid2"]
aws_lb_target_group_attachment.ecom-tga["jsc0"]
aws_lb_target_group_attachment.ecom-tga["jsc1"]
aws_lb_target_group_attachment.ecom-tga["jsc2"]
...
Notice that all of these keys contain only information specified directly in the configuration, and not any information decided by the remote system. That means we avoid the "Invalid for_each argument" error even though each instance still has an appropriate unique key. If you were to add a new element to var.service_names or increase var.instance_count later then Terraform will also see from the shape of these instance keys that it should just add new instances of each resource, rather than renaming/renumbering any existing instances.
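For example, if you later added a hypothetical fourth service "cart" to var.service_names, Terraform would plan only the new instances with the new keys, leaving every existing one untouched:

aws_instance.ecom-validation-service["cart0"]
aws_instance.ecom-validation-service["cart1"]
aws_instance.ecom-validation-service["cart2"]
aws_lb_target_group_attachment.ecom-tga["cart0"]
aws_lb_target_group_attachment.ecom-tga["cart1"]
aws_lb_target_group_attachment.ecom-tga["cart2"]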
➜ terraform -v
Terraform v0.12.24
+ provider.aws v2.60.0
My terraform example.tf:
locals {
  standard_tags = {
    team        = var.team
    project     = var.project
    component   = var.component
    environment = var.environment
  }
}
provider "aws" {
profile = "profile"
region = var.region
}
resource "aws_key_pair" "security_key" {
key_name = "security_key"
public_key = file(".ssh/key.pub")
}
# New resource for the S3 bucket our application will use.
resource "aws_s3_bucket" "project_bucket" {
  # NOTE: S3 bucket names must be unique across _all_ AWS accounts, so
  # this name must be changed before applying this example to avoid naming
  # conflicts.
  bucket = "project-bucket"
  acl    = "private"
}
resource "aws_security_group" "ssh_allow" {
name = "allow-all-ssh"
ingress {
cidr_blocks = [
"0.0.0.0/0"
]
from_port = 22
to_port = 22
protocol = "tcp"
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_security_group" "http_allow" {
name = "allow-all-http"
ingress {
cidr_blocks = [
"0.0.0.0/0"
]
from_port = 80
to_port = 80
protocol = "tcp"
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_instance" "example" {
ami = "ami-08ee2516c7709ea48"
instance_type = "t2.micro"
security_groups = [aws_security_group.ssh_allow.name, aws_security_group.http_allow.name]
key_name = aws_key_pair.security_key.key_name
connection {
type = "ssh"
user = "centos"
private_key = file(".ssh/key")
host = self.public_ip
}
provisioner "local-exec" {
command = "echo ${aws_instance.example.public_ip} > ip_address.txt"
}
provisioner "remote-exec" {
inline = [
"sudo yum -y install nginx",
"sudo systemctl start nginx"
]
}
depends_on = [aws_s3_bucket.project_bucket, aws_key_pair.security_key]
dynamic "tag" {
for_each = local.standard_tags
content {
key = tag.key
value = tag.value
propagate_at_launch = true
}
}
}
And when I run terraform plan, I get the following error:
➜ terraform plan
Error: Unsupported block type
on example.tf line 94, in resource "aws_instance" "example":
94: dynamic "tag" {
Blocks of type "tag" are not expected here.
There isn't a block type called tag defined in the schema for the aws_instance resource type. There is an argument called tags, which I think is the way to get the result you were looking for here:
tags = local.standard_tags
I expect you are thinking of the tag block in aws_autoscaling_group, which deviates from the usual design of tags arguments in AWS provider resources because for this resource type in particular each tag has the additional attribute propagate_at_launch. That attribute only applies to autoscaling groups because it decides whether instances launched from the autoscaling group will inherit a particular tag from the group itself.
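For comparison, here is a minimal sketch of where your dynamic "tag" block would be valid, in an aws_autoscaling_group; the other arguments shown are illustrative placeholders, and a real group also needs a launch template or launch configuration:

resource "aws_autoscaling_group" "example" {
  # Illustrative placeholder arguments; adjust to your environment.
  name               = "example"
  min_size           = 1
  max_size           = 1
  availability_zones = ["us-east-1a"]

  # Each tag block carries propagate_at_launch, which decides whether
  # instances launched from the group inherit that tag.
  dynamic "tag" {
    for_each = local.standard_tags
    content {
      key                 = tag.key
      value               = tag.value
      propagate_at_launch = true
    }
  }
}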
Unfortunately, since the aws_instance resource's tags attribute is a map, within the current HCL constructs it cannot be expressed as repeatable blocks the way the tag attribute of aws_autoscaling_group is in the Dynamic Nested Blocks section here: https://www.hashicorp.com/blog/hashicorp-terraform-0-12-preview-for-and-for-each/
But from your comment, it seems you're trying to set the tags attribute with a map of key/value pairs? In this case, this is certainly doable 😄 You should be able to set the field directly with tags = local.standard_tags.
Or, if you intend to set the tags attribute from a list of key/value pairs, a for expression can work as well by doing something like:
locals {
  standard_tags = [
    {
      name   = "a"
      number = 1
    },
    {
      name   = "b"
      number = 2
    },
    {
      name   = "c"
      number = 3
    },
  ]
}
resource "aws_instance" "test" {
...
tags = {
for tag in local.standard_tags:
tag.name => tag.number
}
}
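With the example local value above, that for expression produces tags = { a = 1, b = 2, c = 3 }.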
I have a Terraform template that creates multiple EC2 instances.
I then created a few Elastic Network Interfaces in the AWS console and added them to the Terraform template.
Now I want to map the appropriate ENI to each instance, so I added locals and variables as below.
locals {
  instance_ami = {
    A = "ami-11111"
    B = "ami-22222"
    C = "ami-33333"
    D = "ami-4444"
  }
}
variable "instance_eni" {
description = "Pre created Network Interfaces"
default = [
{
name = "A"
id = "eni-0a15890a6f567f487"
},
{
name = "B"
id = "eni-089a68a526af5775b"
},
{
name = "C"
id = "eni-09ec8ad891c8e9d91"
},
{
name = "D"
id = "eni-0fd5ca23d3af654a9"
}
]
}
resource "aws_instance" "instance" {
for_each = local.instance_ami
ami = each.value
instance_type = var.instance_type
key_name = var.keypair
root_block_device {
delete_on_termination = true
volume_size = 80
volume_type = "gp2"
}
dynamic "network_interface" {
for_each = [for eni in var.instance_eni : {
eni_id = eni.id
}]
content {
device_index = 0
network_interface_id = network_interface.value.eni_id
delete_on_termination = false
}
}
}
I am getting the error below:
Error: Error launching source instance: InvalidParameterValue: Each network interface requires a
unique device index.
	status code: 400, request id: 4a482753-bddc-4fc3-90f4-2f1c5e2472c7
I think Terraform is trying to attach all 4 ENIs to a single instance.
What should be done to attach the ENIs to the individual instances?
The configuration you shared in your question is asking Terraform to manage four instances, each of which has four network interfaces associated with it. That's problematic in two different ways:
All four of the network interfaces on each instance are configured with the same device_index, which is invalid and is what the error message here is reporting.
Even if you were to fix that, it would then try to attach the same four network interfaces to four different EC2 instances, which is also invalid: each network interface can be attached to only one instance at a time.
To address that and get the behavior you wanted, you only need one network_interface block, whose content is different for each of the instances:
locals {
  instance_ami = {
    A = "ami-11111"
    B = "ami-22222"
    C = "ami-33333"
    D = "ami-4444"
  }
}

variable "instance_eni" {
  description = "Pre created Network Interfaces"
  default = [
    {
      name = "A"
      id   = "eni-0a15890a6f567f487"
    },
    {
      name = "B"
      id   = "eni-089a68a526af5775b"
    },
    {
      name = "C"
      id   = "eni-09ec8ad891c8e9d91"
    },
    {
      name = "D"
      id   = "eni-0fd5ca23d3af654a9"
    }
  ]
}

locals {
  # This expression is transforming the instance_eni
  # value into a more convenient shape: a map from
  # instance key to network interface id. You could
  # also choose to just change directly the
  # definition of variable "instance_eni" to already
  # be such a map, but I did it this way to preserve
  # your module interface as given.
  instance_network_interfaces = {
    for ni in var.instance_eni : ni.name => ni.id
  }
}

resource "aws_instance" "instance" {
  for_each      = local.instance_ami
  ami           = each.value
  instance_type = var.instance_type
  key_name      = var.keypair

  root_block_device {
    delete_on_termination = true
    volume_size           = 80
    volume_type           = "gp2"
  }

  network_interface {
    device_index          = 0
    network_interface_id  = local.instance_network_interfaces[each.key]
    delete_on_termination = false
  }
}
Now each instance has only one network interface, with each one attaching to the corresponding ENI ID given in your input variable. Referring to each.key and each.value is how we can create differences between each of the instances declared when using resource for_each; we don't need any other repetition constructs inside unless we want to create nested repetitions, like having a dynamic number of network interfaces for each instance.