So, in my old Terraform 0.11 code, I have a file where, in my output module's locals section, I'm building:
this_assigned_nat_ip = google_compute_instance.this_public.*.network_interface.0.access_config.0.assigned_nat_ip
Which later gets fed to the output statement. This module can create N instances, so what it used to do was give me the NAT IP from the first access_config block on the first network interface of every instance we created. (Someone local wrote the code, so we know there's only going to be one network interface with one access_config block.)
How do I translate that to 0.12? I'm unsure of the syntax to keep the nesting.
Update:
Here's a chunk of the raw data out of terraform show from 0.11 (slightly sanitized):
module.gcp_bob_servers_ams.google_compute_instance.this_public.0:
  machine_type                                        = n1-standard-2
  min_cpu_platform                                    =
  network_interface.#                                 = 1
  network_interface.0.access_config.#                 = 1
  network_interface.0.access_config.0.assigned_nat_ip =
  network_interface.0.access_config.0.nat_ip          = 1.2.3.4
  network_interface.0.access_config.0.network_tier    = PREMIUM
terraform show of the equivalent host in 0.12:
# module.bob.module.bob_gcp_ams.module.atom_d.google_compute_instance.this[1]:
resource "google_compute_instance" "this" {
    allow_stopping_for_update = true

    network_interface {
        name               = "nic0"
        network            = "https://www.googleapis.com/compute/v1/projects/stuff-scratch/global/networks/scratch-public"
        network_ip         = "10.112.112.6"
        subnetwork         = "https://www.googleapis.com/compute/v1/projects/stuff-scratch/regions/europe-west4/subnetworks/scratch-europe-west4-x-public-subnet"
        subnetwork_project = "stuff-scratch"

        access_config {
            nat_ip       = "35.204.132.177"
            network_tier = "PREMIUM"
        }
    }

    scheduling {
        automatic_restart   = true
        on_host_maintenance = "MIGRATE"
        preemptible         = false
    }
}
If I understand correctly, this_assigned_nat_ip is a list of IPs. You should be able to get the same thing in Terraform 0.12 by doing:
this_assigned_nat_ip = [for i in google_compute_instance.this_public : i.network_interface[0].access_config[0].nat_ip]
I did not test it, so I might have a small syntax error, but the for expression is the key to getting this done. (Note that in the 0.12 state shown above the attribute is nat_ip; assigned_nat_ip no longer appears.)
Turns out this[*].network_interface[*].access_config[*].nat_ip[*] gave me what I needed. Given there's only ever going to be one address on the interface, it comes out fine.
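Putting it together, a minimal sketch of an idiomatic 0.12 locals/output pair (assuming, as stated in the question, exactly one network interface with one access_config per instance) might look like:

```terraform
locals {
  # [*] splats over all instances created by count; the single NIC and
  # access_config block are indexed directly since there is only one of each.
  this_assigned_nat_ip = google_compute_instance.this_public[*].network_interface[0].access_config[0].nat_ip
}

output "this_assigned_nat_ip" {
  value = local.this_assigned_nat_ip
}
```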
I'm setting up an Azure CDN Front Door Profile using Terraform.
I'm having a problem with Terraform thinking that my routes have been changed every time I run a plan, even though they haven't been modified:
  # azurerm_cdn_frontdoor_route.main-fe-resources will be updated in-place
  ~ resource "azurerm_cdn_frontdoor_route" "main-fe-resources" {
      ~ cdn_frontdoor_origin_group_id = "/subscriptions/e68adbb2-af8e-4b01-a7e8-2bf599d6d818/resourcegroups/ci-redacted-frontdoor/providers/Microsoft.Cdn/profiles/ci-redacted-frontdoor/origingroups/main-fe" -> "/subscriptions/e68adbb2-af8e-4b01-a7e8-2bf599d6d818/resourceGroups/ci-redacted-frontdoor/providers/Microsoft.Cdn/profiles/ci-redacted-frontdoor/originGroups/main-fe"
        id                            = "/subscriptions/e68adbb2-af8e-4b01-a7e8-2bf599d6d818/resourceGroups/ci-redacted-frontdoor/providers/Microsoft.Cdn/profiles/ci-redacted-frontdoor/afdEndpoints/ci-main/routes/main-fe-resources"
        name                          = "main-fe-resources"
        # (8 unchanged attributes hidden)
        # (2 unchanged blocks hidden)
    }
The problem seems to be related to casing discrepancies between "resourceGroups" / "resourcegroups" and "originGroups" / "origingroups".
I've tried lowercasing the origin group ID in the Terraform script, but Terraform then complains that the ID doesn't contain the required string "originGroups".
I'm creating the routes like so:
resource "azurerm_cdn_frontdoor_route" "main-fe-resources" {
  name                          = "main-fe-resources"
  cdn_frontdoor_endpoint_id     = azurerm_cdn_frontdoor_endpoint.main.id
  cdn_frontdoor_origin_group_id = azurerm_cdn_frontdoor_origin_group.main-fe.id
  cdn_frontdoor_origin_ids      = []
  cdn_frontdoor_rule_set_ids    = []
  enabled                       = true
  forwarding_protocol           = "MatchRequest"
  https_redirect_enabled        = true
  patterns_to_match             = ["/assets-2022/*", "/_next/*"]
  supported_protocols           = ["Http", "Https"]
}
Any ideas?
So it does appear to be a bug in the provider. I originally created the routes manually, then added them to the Terraform state. I found that if I delete the routes and let Terraform recreate them then the problem goes away.
It's not an ideal solution, but at least the Terraform plans no longer detect changes when there aren't any.
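If recreating the routes isn't an option, one untested stopgap is to tell Terraform to ignore the attribute whose casing keeps flip-flopping. This is a sketch only, and the trade-off is that it will also mask genuine changes to the origin group reference:

```terraform
resource "azurerm_cdn_frontdoor_route" "main-fe-resources" {
  # ... existing arguments as in the question ...

  lifecycle {
    # Suppress the perpetual resourcegroups/resourceGroups casing diff.
    ignore_changes = [cdn_frontdoor_origin_group_id]
  }
}
```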
I'm trying to extract IP addresses from a range with Terraform.
For example, I defined this range 192.168.1.10-192.168.1.20 as a string and I would like to get a list like this: [192.168.1.10,192.168.1.11,…,192.168.1.20].
I checked for Terraform functions but didn’t find a way to do that.
Is this possible?
For further context, I am deploying MetalLB in a Kubernetes cluster and need to define the VIP range as a string like this 192.168.1.10-192.168.1.20.
The Kubernetes cluster is deployed on OpenStack and I need to configure Neutron OpenStack port to accept all IP addresses from this range:
resource "openstack_networking_port_v2" "k8s_worker_mgmt_port" {
  name           = "k8s_worker_mgmt_port"
  network_id     = data.openstack_networking_network_v2.k8s_openstack_mgmt_network_name.id
  admin_state_up = "true"

  allowed_address_pairs {
    ip_address = "192.168.1.10"
  }

  allowed_address_pairs {
    ip_address = "192.168.1.11"
  }

  ....
}
If you can rely on the first three octets of the IP range being the same, then you can get away with using a combination of the split, slice, join, range, and formatlist functions to do this natively inside Terraform, with something like the following:
variable "ip_range" {
  default = "192.168.1.10-192.168.1.20"
}
locals {
  ip_range_start = split("-", var.ip_range)[0]
  ip_range_end   = split("-", var.ip_range)[1]

  # Note that this naively only works for IP ranges sharing the same first three octets
  ip_range_first_three_octets = join(".", slice(split(".", local.ip_range_start), 0, 3))
  ip_range_start_fourth_octet = split(".", local.ip_range_start)[3]
  ip_range_end_fourth_octet   = split(".", local.ip_range_end)[3]

  list_of_final_octet  = range(local.ip_range_start_fourth_octet, local.ip_range_end_fourth_octet)
  list_of_ips_in_range = formatlist("${local.ip_range_first_three_octets}.%s", local.list_of_final_octet)
}
output "list_of_ips_in_range" {
  value = local.list_of_ips_in_range
}
This outputs the following:
list_of_ips_in_range = [
  "192.168.1.10",
  "192.168.1.11",
  "192.168.1.12",
  "192.168.1.13",
  "192.168.1.14",
  "192.168.1.15",
  "192.168.1.16",
  "192.168.1.17",
  "192.168.1.18",
  "192.168.1.19",
]
Note that Terraform's range function excludes its end value, which is why the output above stops at .19; to include the .20 endpoint you would add 1 to range's second argument. If you instead need to offset the range so that you end up with IP addresses from .11 to .20 from the same input, then you can do that by changing local.list_of_final_octet like so:
list_of_final_octet = range(local.ip_range_start_fourth_octet + 1, local.ip_range_end_fourth_octet + 1)
Unfortunately, Terraform doesn't have any built-in functions for doing more elaborate CIDR math beyond the cidrhost, cidrnetmask, cidrsubnet, and cidrsubnets functions, so if you have more complex requirements you may need to delegate this to an external script that can calculate the result and be called via the external data source.
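As a sketch of that last option, an external data source can shell out to a helper script and hand the result back to Terraform. Here expand_range.py is a hypothetical script, not part of the question:

```terraform
data "external" "ip_range" {
  # expand_range.py would read {"range": "192.168.1.10-192.168.1.20"} as JSON
  # on stdin and print e.g. {"ips": "192.168.1.10,192.168.1.11,..."} on stdout.
  # External data sources may only return string values, hence joining with commas.
  program = ["python3", "${path.module}/expand_range.py"]

  query = {
    range = var.ip_range
  }
}

locals {
  list_of_ips_in_range = split(",", data.external.ip_range.result.ips)
}
```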
terraform version 0.11.13
Error: Error refreshing state: 1 error(s) occurred:
data.aws_subnet.private_subnet: data.aws_subnet.private_subnet: value of 'count' cannot be computed
VPC code generated the error above:
resources.tf
data "aws_subnet_ids" "private_subnet_ids" {
  vpc_id = "${module.vpc.vpc_id}"
}

data "aws_subnet" "private_subnet" {
  count = "${length(data.aws_subnet_ids.private_subnet_ids.ids)}"
  #count = "${length(var.private-subnet-mapping)}"
  id = "${data.aws_subnet_ids.private_subnet_ids.ids[count.index]}"
}
After changing the above code to use count = "${length(var.private-subnet-mapping)}", I successfully provisioned the VPC. But the output of vpc_private_subnets_ids is empty:
vpc_private_subnets_ids = []
The code that provisioned the VPC but produced an empty list of vpc_private_subnets_ids:
resources.tf
data "aws_subnet_ids" "private_subnet_ids" {
  vpc_id = "${module.vpc.vpc_id}"
}

data "aws_subnet" "private_subnet" {
  #count = "${length(data.aws_subnet_ids.private_subnet_ids.ids)}"
  count = "${length(var.private-subnet-mapping)}"
  id = "${data.aws_subnet_ids.private_subnet_ids.ids[count.index]}"
}
outputs.tf
output "vpc_private_subnets_ids" {
  value = ["${data.aws_subnet.private_subnet.*.id}"]
}
The output of vpc_private_subnets_ids:
vpc_private_subnets_ids = []
I need the values of vpc_private_subnets_ids. After the VPC was successfully provisioned using the line count = "${length(var.private-subnet-mapping)}", I changed the code back to count = "${length(data.aws_subnet_ids.private_subnet_ids.ids)}". After terraform apply, I got the values of the list vpc_private_subnets_ids without the above error.
vpc_private_subnets_ids = [
subnet-03199b39c60111111,
subnet-068a3a3e76de66666,
subnet-04b86aa9dbf333333,
subnet-02e1d8baa8c222222
......
]
I cannot use count = "${length(data.aws_subnet_ids.private_subnet_ids.ids)}" when I provision the VPC, but I can use it after the VPC is provisioned. Any clue?
The problem here seems to be that your VPC isn't created yet, and so the data "aws_subnet_ids" "private_subnet_ids" data source read must wait until the apply step. That in turn means the number of subnets isn't known, so the number of data "aws_subnet" "private_subnet" instances isn't predictable, and Terraform returns this error.
If this configuration is also the one responsible for creating those subnets, then the better design would be to refer to the subnet objects directly. If your module.vpc is also the module creating the subnets, then I would suggest exporting the subnet ids as an output from that module. For example:
output "subnet_ids" {
  value = "${aws_subnet.example.*.id}"
}
Your calling module can then just get those ids directly from module.vpc.subnet_ids, without the need for a redundant extra API call to look them up:
output "vpc_private_subnets_ids" {
  value = ["${module.vpc.subnet_ids}"]
}
Aside from the error about count, the configuration you showed also has a race condition: the data "aws_subnet_ids" "private_subnet_ids" block depends only on the VPC itself, not on the individual subnets, so Terraform can potentially read that data source before the subnets have been created. Exporting the subnet ids through a module output means that any reference to module.vpc.subnet_ids indirectly depends on all of the subnets, so those downstream actions will wait until all of the subnets have been created.
As a general rule, a particular Terraform configuration should either be managing an object or reading that object via a data source, and not both together. If you do both together then it may sometimes work but it's easy to inadvertently introduce race conditions like this, where Terraform can't tell that the data resource is attempting to consume the result of another resource block that's participating in the same plan.
I have written a Terraform script to create a few Azure Virtual Machines.
The number of VMs created is based upon a variable called type in my .tfvars file:
type = [ "Master-1", "Master-2", "Master-3", "Slave-1", "Slave-2", "Slave-3" ]
My variables.tf file contains the following local:
count_of_types = "${length(var.type)}"
And my resources.tf file contains the code required to actually create the relevant number of VMs from this information:
resource "azurerm_virtual_machine" "vm" {
  count                 = "${local.count_of_types}"
  name                  = "${replace(local.prefix_specific,"##TYPE##",var.type[count.index])}-VM"
  location              = "${azurerm_resource_group.main.location}"
  resource_group_name   = "${azurerm_resource_group.main.name}"
  network_interface_ids = ["${azurerm_network_interface.main.*.id[count.index]}"]
  vm_size               = "Standard_B2ms"
  tags                  = "${local.tags}"
}
Finally, in my output.tf file, I output the IP address of each server:
output "public_ip_address" {
  value = ["${azurerm_public_ip.main.*.ip_address}"]
}
I am creating a Kubernetes cluster with 1x Master and 1x Slave VM. For this purpose, the script works fine - the first IP output is the Master and the second IP output is the Slave.
However, when I move to 8+ VMs in total, I'd like to know which IP refers to which VM.
Is there a way of amending my output to include the type local, or just the server's hostname alongside the Public IP?
E.g. 54.10.31.100 // Master-1.
Take a look at formatlist (one of the string manipulation functions), which can be used to iterate over instance attributes and list tags and other attributes of interest.
output "ip-address-hostname" {
  value = "${formatlist(
    "%s:%s",
    azurerm_public_ip.resource_name.*.fqdn,
    azurerm_public_ip.resource_name.*.ip_address
  )}"
}
Note that this is just draft pseudocode; you may have to tweak it and create additional data sources in your TF file for effective enumeration.
More reading is available at https://www.terraform.io/docs/configuration/functions/formatlist.html
Raunak Jhawar's answer pointed me in the right direction, and therefore got the green tick.
For reference, here's the exact code I used in the end:
output "public_ip_address" {
  value = "${formatlist("%s: %s", azurerm_virtual_machine.vm.*.name, azurerm_public_ip.main.*.ip_address)}"
}
This resulted in output listing each VM's name alongside its public IP address, exactly as I wanted.
I am not able to target a single aws_volume_attachment with its corresponding aws_instance via -target.
The problem is that the aws_instance is taken from a list by using count.index, which forces terraform to refresh all aws_instance resources from that list.
In my concrete case I am trying to manage a consul cluster with terraform.
The goal is to be able to reinit a single aws_instance resource via the -target flag, so I can upgrade/change the whole cluster node by node without downtime.
I have the following tf code:
### IP suffixes
variable "subnet_cidr" {
  default = "10.10.0.0/16"
}

// I want nodes with addresses 10.10.1.100, 10.10.1.101, 10.10.1.102
variable "consul_private_ips_suffix" {
  default = {
    "0" = "100"
    "1" = "101"
    "2" = "102"
  }
}
###########
# EBS
#
// Get existing data EBS volumes via Name tag
data "aws_ebs_volume" "consul-data" {
  count = "${length(keys(var.consul_private_ips_suffix))}"

  filter {
    name   = "volume-type"
    values = ["gp2"]
  }

  filter {
    name   = "tag:Name"
    values = ["${var.platform_type}.${var.platform_id}.consul.data.${count.index}"]
  }
}
#########
# EC2
#
resource "aws_instance" "consul" {
  count = "${length(keys(var.consul_private_ips_suffix))}"
  ...
  private_ip = "${cidrhost(aws_subnet.private-b.cidr_block, lookup(var.consul_private_ips_suffix, count.index))}"
}

resource "aws_volume_attachment" "consul-data" {
  count       = "${length(keys(var.consul_private_ips_suffix))}"
  device_name = "/dev/sdh"
  volume_id   = "${element(data.aws_ebs_volume.consul-data.*.id, count.index)}"
  instance_id = "${element(aws_instance.consul.*.id, count.index)}"
}
This works perfectly fine for initializing the cluster.
Now I make a change in my user_data init script of the consul nodes and want to rollout node by node.
I run terraform plan -target=aws_volume_attachment.consul-data[0] to reinit node 0.
This is when I run into the above-mentioned problem: terraform refreshes all aws_instance resources because of instance_id = "${element(aws_instance.consul.*.id, count.index)}".
Is there a way to "force" tf to target a single aws_volume_attachment with only its corresponding aws_instance resource?
At the time of writing, this sort of usage is not possible because, as you've seen, an expression like aws_instance.consul.*.id creates a dependency on all of the instances before the element function is applied.
The -target option is not intended for routine use and is instead provided only for exceptional circumstances such as recovering carefully from an unintended change.
For this specific situation it may work better to use the ignore_changes lifecycle setting to prevent automatic replacement of the instances when user_data changes, like this:
resource "aws_instance" "consul" {
  count = "${length(keys(var.consul_private_ips_suffix))}"
  ...
  private_ip = "${cidrhost(aws_subnet.private-b.cidr_block, lookup(var.consul_private_ips_suffix, count.index))}"

  lifecycle {
    ignore_changes = ["user_data"]
  }
}
With this set, Terraform will detect but ignore changes to the user_data attribute. You can then get the gradual replacement behavior you want by manually tainting the resources one at a time:
$ terraform taint aws_instance.consul[0]
On the next plan, Terraform will then see that this resource instance is tainted and produce a plan to replace it. This gives you direct control over when the resources are replaced, so you can therefore ensure that e.g. the consul leave step gets a chance to run first, or whatever other cleanup you need to do.
This workflow is recommended over -target because it makes the replacement step explicit. -target can be confusing in a collaborative environment because there is no evidence of its use, and thus no clear explanation of how the current state was reached. taint, on the other hand, explicitly marks your intention in the state where other team members can see it, and then replaces the resource via the normal plan/apply steps.
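For what it's worth, on later Terraform versions (v0.15.2 and up) the standalone taint command is superseded by the -replace planning option, which expresses the same intent in a single reviewable plan/apply:

```shell
# Modern equivalent of taint + apply: plan and perform the replacement in one step
terraform apply -replace='aws_instance.consul[0]'
```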