I have a data management issue that I have been banging my head against. I have some OpenStack resources managed outside Terraform whose IP addresses I need to find, then add to a list that can be handed to Ansible or cloud-init. There is an arbitrary number of these resources, rather than a fixed count or a fixed set of names.
I have the names of the resources, so I am looking them up via for_each:
data "openstack_networking_port_v2" "ports" {
  for_each = toset(var.assigned_ports)
  name     = "${each.key}port"
}
which results in a data source for each resource like this:
data.openstack_networking_port_v2.ports["host1port"]
data.openstack_networking_port_v2.ports["host2port"]
data.openstack_networking_port_v2.ports["host3port"]
where the content includes the IP address I'm after via a field (below is truncated for brevity):
data "openstack_networking_port_v2" "ports" {
  admin_state_up = true
  all_fixed_ips = [
    "10.1.2.3",
  ]
  all_security_group_ids = [
    "2cccdd5f-dec0-4f2e-80a3-ceefbb3625ff",
  ]
}
I would like to build a local that is a list of these IP addresses so I can use it elsewhere, but I am struggling to get anywhere, especially as the IP address I am after is element 0 of a list, e.g.:
data.openstack_networking_port_v2.ports["host3port"].all_fixed_ips[0]
any help would be greatly appreciated.
I managed to solve it by creating a local like below:
locals {
  ips = [for ip in data.openstack_networking_port_v2.ports : ip.all_fixed_ips[0]]
}
I had tried something similar before, but was incorrectly applying the splat operator, which works on lists rather than the map that for_each produces:
data.openstack_networking_port_v2.ports[*]
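For completeness, an equivalent way to write the same local, iterating explicitly over the map's values (just a sketch of the same idea):

```hcl
locals {
  # for_each produces a map, so values() yields the individual data
  # source objects; each one's first fixed IP is collected into a list.
  ips = [for p in values(data.openstack_networking_port_v2.ports) : p.all_fixed_ips[0]]
}
```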
I can't quite work out how to add a "remote-exec" provisioner to my module, where I would like it to copy configuration scripts from the project directory and execute them. When I add this to the module I can't seem to have it target the VM instance, and as the VM has multiple network cards, I would like to target just the primary card.
I have used this to deploy a Linux VM via Terraform on an on-premises vSphere instance.
provider "vsphere" {
  user           = var.vsphere_user
  password       = var.vsphere_password
  vsphere_server = var.vsphere_server
  # If you have a self-signed cert
  allow_unverified_ssl = true
}
This is the sample Linux deployment script outlining the network part, which allows configuring multiple network cards on a VM:
resource "vsphere_virtual_machine" "Linux" {
  count      = var.is_windows_image ? 0 : var.instances
  depends_on = [var.vm_depends_on]
  name       = "%{if var.vmnameliteral != ""}${var.vmnameliteral}%{else}${var.vmname}${count.index + 1}${var.vmnamesuffix}%{endif}"
  ........
  dynamic "network_interface" {
    for_each = keys(var.network) # data.vsphere_network.network[*].id # other option
    content {
      network_id   = data.vsphere_network.network[network_interface.key].id
      adapter_type = var.network_type != null ? var.network_type[network_interface.key] : data.vsphere_virtual_machine.template.network_interface_types[0]
    }
  }
  ........
  // Copy the file to execute
  provisioner "file" {
    source      = var.provisioner_file_source      // eg ./scripts/*
    destination = var.provisioner_file_destination // eg /tmp/filename
    connection {
      type     = "ssh" // for Linux it's SSH
      user     = var.provisioner_ssh_username
      password = var.provisioner_ssh_password
      host     = self.vsphere_virtual_machine.Linux.*.guest_ip_address
    }
  }
  // Run the script
  provisioner "remote-exec" {
    inline = [
      "chmod +x ${var.provisioner_file_destination}",
      "${var.provisioner_file_destination} args",
    ]
    connection {
      type     = "ssh" // for Linux it's SSH
      user     = var.provisioner_ssh_username
      password = var.provisioner_ssh_password
      host     = self.vsphere_virtual_machine.Linux.*.guest_ip_address
    }
  }
} // end of resource "vsphere_virtual_machine" "Linux"
So I have tried a self. reference, but so far self.vsphere_virtual_machine.Linux.*.guest_ip_address just returns the entire array of guest IPs.
Is anyone able to point me in the right direction, or to a good guide on Terraform modules?
The first issue I notice is that the vsphere_virtual_machine resource doesn't have a property guest_ip_address; it is guest_ip_addresses. This does indeed return a list, so you would need to figure out how to select the IP you want from it. I'm not sure whether the ordering is predictable in vSphere; if I recall correctly, it isn't.
The simplest approach, without knowing what you're trying to accomplish, would probably be to use default_ip_address, as this returns a single address and selects the IP for the "most likely" scenario.
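As a sketch (borrowing the connection block from your question), that would look like:

```hcl
# Sketch only: default_ip_address is the single "primary" guest address
# chosen by the provider, so no list indexing is needed.
connection {
  type     = "ssh"
  user     = var.provisioner_ssh_username
  password = var.provisioner_ssh_password
  host     = self.default_ip_address
}
```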
It looks like you're also setting up the host to be multi-homed, which adds complexity. If default_ip_address doesn't give you what you want, you'll need to resort to a more complex expression to find your IP. Perhaps you could use the sort function so the ordering is more predictable. You may also be able to "find" the IP using a for expression.
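A rough sketch of that for-expression idea, assuming you know the prefix of the network you care about (the 10.0.1. prefix here is purely hypothetical; substitute your own):

```hcl
locals {
  # Hypothetical: keep only guest addresses in the management range,
  # then take the first match. Adjust the regex to your own subnet.
  mgmt_ip = [
    for ip in vsphere_virtual_machine.Linux[0].guest_ip_addresses :
    ip
    if can(regex("^10\\.0\\.1\\.", ip))
  ][0]
}
```

Inside a provisioner block on the resource itself you would iterate over self.guest_ip_addresses instead.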
Regarding building modules: if the above code is in a module, the first thing I would recommend is avoiding the use of count; the reasoning is explained in the following resources. HashiCorp has a lot of good documentation directly on their site. Also, the folks at Gruntwork have a blog series that they developed into a book called Terraform: Up and Running. I recommend checking those out.
I have the following use case: I'm using a combination of Azure DevOps pipelines and Terraform to synchronize our TAP for Grafana (v7.4). The intention is that we can tweak and tune our dashboards on Test, and push the changes to Acceptance (and Production) via the pipelines.
I've got one pipeline that pulls in the state of the Test environment and writes it to a set of json files (for the dashboards) and a single json array (for the folders).
The second pipeline should use these resources to synchronize the Acceptance environment.
This works flawlessly for the dashboards, but I'm hitting a snag putting the dashboards in the right folder dynamically. Here's my latest working code:
resource "grafana_folder" "folders" {
  for_each = toset(var.grafana_folders)
  title    = each.key
}

resource "grafana_dashboard" "dashboards" {
  for_each    = fileset(path.module, "../dashboards/*.json")
  config_json = file("${path.module}/${each.key}")
}
The folder resource pushes the folders based on a list of names that I pass via variables. This creates the folders correctly.
The dashboard resource pushes the dashboards correctly, based on all dashboard files in the specified folder.
But now I'd like to make sure the dashboards end up in the right folder. The provider specifies that I need to do this based on the folder UID, which is generated when the folder is created. So I'd like to take the output from the grafana_folder resource and use it in the grafana_dashboard resource. I'm trying the following:
resource "grafana_folder" "folders" {
  for_each = toset(var.grafana_folders)
  title    = each.key
}

resource "grafana_dashboard" "dashboards" {
  for_each    = fileset(path.module, "../dashboards/*.json")
  config_json = file("${path.module}/${each.key}")
  folder      = lookup(transpose(grafana_folder.folders), "Station_Details", "Station_Details")
  depends_on  = [grafana_folder.folders]
}
If I read the Grafana provider GitHub correctly, the grafana_folder resource should output a map of [uid, title]. So I figured that if I transpose that map and (by way of a test) look up a folder title that I know exists, I can prove the concept.
This gives the following error:
on main.tf line 38, in resource "grafana_dashboard" "dashboards":
  38: folder = lookup(transpose(grafana_folder.folders), "Station_Details", "Station_Details")

Invalid value for "default" parameter: the default value must have the same type as the map elements.
Both Uid and Title should be strings, so I'm obviously overlooking something.
Does anyone have an inkling where I'm going wrong and/or have suggestions on how I can do this (better)?
I think the problem this error is trying to report is that grafana_folder.folders is a map of objects, so passing it to transpose doesn't really make sense. It seems to be succeeding because Terraform has found some clever way to do automatic type conversions and produce some result, but that result (due to the signature of transpose) is a map of lists rather than a map of strings, and so "Station_Details" (a string, rather than a list) isn't a valid fallback value for that lookup.
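To illustrate the type mismatch:

```hcl
locals {
  # transpose swaps the keys and values of a map of lists:
  #   transpose({ a = ["x", "y"] })  =>  { x = ["a"], y = ["a"] }
  # so any lookup() into its result needs a *list* default, such as
  # ["Station_Details"], rather than the string "Station_Details".
  example = transpose({ a = ["x", "y"] })
}
```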
My limited familiarity with folders in Grafana leaves me unsure as to what to suggest instead, but I expect the final expression will look something like the following:
folder = grafana_folder.folders[SOMETHING].id
SOMETHING here will be an expression that allows you to know for a given dashboard which folder key it ought to belong to. I'm not seeing an answer to that from what you shared in your question, but just as a placeholder to make this a complete answer I'll suggest that one option would be to make a local map from dashboard filename to folder name:
locals {
  # A local value probably isn't actually the right answer here, but
  # I'm just showing it as a placeholder for one possible way to map
  # from dashboard filename to folder name. These names should all be
  # elements of var.grafana_folders in order for this to work.
  dashboard_folders = {
    "example1.json" = "example-folder"
    "example2.json" = "example-folder"
    "example3.json" = "another-folder"
  }
}
resource "grafana_dashboard" "dashboards" {
  for_each    = fileset("${path.module}/dashboards", "*.json")
  config_json = file("${path.module}/dashboards/${each.key}")
  folder      = grafana_folder.folders[local.dashboard_folders[each.key]].id
}
I have three environments for my infrastructure, all of them the same but in various sizes. I understand this is a good use case for Terraform workspaces, and indeed it works well in that regard. But please correct me if this is not the right way to go.
Now my only issue is managing the DNS within the workspaces. I use the Google provider, which works with two types of resources: a google_dns_managed_zone, which represents the zone, and a google_dns_record_set for each DNS record.
Note that the record set type needs to have a reference to the managed zone type.
With that in mind, I need to manage the DNS zone from the production environment. I can't share that resource in the other workspaces because I should be able to destroy the dev or staging workspace without destroying the DNS zone.
I try to solve that issue with count, used as a boolean as shown in the code below. I find this pretty hackish, but it's what I have found in the Terraform community; any improvement is welcome.
That allows me to have the zone and the production records (like the MX shown below as an example) present only in the prod workspace.
But then I am stuck when it comes to managing record sets only in a specific workspace. I need that for example in the case of creating an nginx in the dev workspace and automatically create a DNS record set for it, like dev.example.com.
For that I need to access the managed zone resource. As shown below, I use terraform_remote_state to access the resource from the prod workspace. To the extent of my understanding, that works through an output, which you can see below. When I select the prod workspace, I can indeed output the managed zone, and if I then select another workspace, the remote state successfully retrieves the managed zone from prod. But Terraform fails on the output line, since the resource only exists in the prod workspace and therefore cannot be output from any other workspace.
So it's a bit of nonsense, and I don't understand whether there is a better way to achieve this. I did a fair bit of research and asked the community, but could not find an answer. It seems to me that managing DNS is common to all infrastructures and should be pretty well covered. What am I doing wrong, and how should it be done?
locals {
  environment = "${terraform.workspace}"
  dns_zone_managers = {
    "dev"     = "0"
    "staging" = "0"
    "prod"    = "1"
  }
  dns_zone_manager = "${lookup(local.dns_zone_managers, local.environment)}"
}
resource "google_dns_managed_zone" "base_zone" {
  name     = "base_zone"
  dns_name = "example.com."
  count    = "${local.dns_zone_manager}"
}

resource "google_dns_record_set" "mx" {
  name         = "${google_dns_managed_zone.base_zone.dns_name}"
  managed_zone = "${google_dns_managed_zone.base_zone.name}"
  type         = "MX"
  ttl          = 300
  rrdatas = [
    "10 spool.mail.example.com.",
    "50 fb.mail.example.com."
  ]
  count = "${local.dns_zone_manager}"
}

data "terraform_remote_state" "dns" {
  backend   = "local"
  workspace = "prod"
}

output "dns_zone_name" {
  value = "${google_dns_managed_zone.base_zone.*.name[0]}"
}
Then I can introduce record sets in a specific workspace only, using count again and referring to the managed zone through the remote state like so:
resource "google_dns_record_set" "a" {
  count        = "${terraform.workspace == "dev" ? 1 : 0}"
  name         = "dev"
  managed_zone = "${data.terraform_remote_state.dns.dns_zone_name}"
  type         = "A"
  ttl          = 300
  rrdatas      = ["1.2.3.4"]
}
I'm trying to configure a list at the top of my file to list all the SQS resources that should subscribe to an SNS topic. It throws a "resource variables must be three parts: TYPE.NAME.ATTR" error.
I used locals because they seem to support interpolated values while variables do not.
locals {
  update-subscribers = [
    "${var.prefix}-${terraform.workspace}-contribution-updates"
  ]
}
Here is a snippet of my SNS topic subscription.
resource "aws_sns_topic_subscription" "subscription" {
  count                  = "${length(locals.update-subscribers.*)}"
  topic_arn              = "${aws-sns-update-topic.topic.arn}"
  protocol               = "sqs"
  endpoint               = "arn:aws:sqs:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:${element(locals.update-subscribers, count.index)}"
  endpoint_auto_confirms = true
}
It would be nice to be able to use my variable list so I can switch between workspaces without any issues on the AWS side. All the examples I can find point to a static list of CIDR settings, while I want my list to be based on interpolated strings. I also tried:
locals.contribution-update-subscribers[count.index]
Terraform did not like that either. How should my file be set up to support this, or can it even be supported?
There are two problems with the configuration given here:
The object name for accessing local values is called local, not locals.
You don't need to (and currently, cannot) use the splat syntax to count the number of elements in what is already a list.
Addressing both of these would give the following configuration, which I think should work:
resource "aws_sns_topic_subscription" "subscription" {
  count                  = "${length(local.update-subscribers)}"
  topic_arn              = "${aws_sns_update_topic.topic.arn}"
  protocol               = "sqs"
  endpoint               = "arn:aws:sqs:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:${local.update-subscribers[count.index]}"
  endpoint_auto_confirms = true
}
Although dashes are allowed in identifiers in the Terraform language, to allow the use of different naming schemes in other systems, the idiomatic style is to use underscores for names defined within Terraform itself, such as your local value name update-subscribers.
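As an aside: on Terraform 0.12 or later (newer than the syntax in your question), for_each avoids the index-shuffling problems count has when the list changes. A sketch, reusing the names from your configuration:

```hcl
resource "aws_sns_topic_subscription" "subscription" {
  # Keyed by queue name rather than list position, so adding or
  # removing one subscriber doesn't churn the others.
  for_each = toset(local.update-subscribers)

  topic_arn              = aws_sns_update_topic.topic.arn
  protocol               = "sqs"
  endpoint               = "arn:aws:sqs:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:${each.value}"
  endpoint_auto_confirms = true
}
```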
I have a remote state attribute called subnets which is stored in: data.terraform_remote_state.alb.subnets
Depending on what I'm deploying, this attribute either exists or doesn't exist.
When I try to create an ECS cluster, it requires an input of the subnet groups, for which I would like to use either:
data.terraform_remote_state.alb.subnets
or
var.vpc_subnets (the subnets of the VPC)
Unfortunately, because of the way the interpolation works, it needed to be hacked together:
"${split(",", length(var.vpc_subnets) == 0 ? join(",",data.terraform_remote_state.alb.subnets) : join(",",var.vpc_subnets))}"
(Referring to: https://github.com/hashicorp/terraform/issues/12453)
However, because Terraform does not seem to lazily evaluate ternary operators, it throws the following error even when the length of var.vpc_subnets is NOT zero:
Resource 'data.terraform_remote_state.alb' does not have attribute 'subnets' for variable 'data.terraform_remote_state.alb.subnets'
How can I properly handle remote state resources that could be undefined?
EDIT: Typo: Subnet->Subnets
Managed to figure it out.
When using Terraform Remote State, you have the ability to set a default: https://www.terraform.io/docs/providers/terraform/d/remote_state.html
This works in my situation when data.terraform_remote_state.alb.subnets does not return a value. I can preset the value to "" and use locals to check for it.
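A sketch of what that looks like, assuming an S3 backend (the bucket, key, and region values here are placeholders):

```hcl
data "terraform_remote_state" "alb" {
  backend = "s3"
  config {
    bucket = "my-state-bucket"        # placeholder
    key    = "alb/terraform.tfstate"  # placeholder
    region = "us-east-1"              # placeholder
  }

  # Used when the named output is absent from the remote state,
  # so references to it no longer fail.
  defaults {
    subnets = ""
  }
}
```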
Will it be subnet or subnets?
Suppose you have below data source:
data "terraform_remote_state" "alb" {
  backend = "s3"
  config {
    name = "alb"
  }
}
You need to check whether the remote state actually has an output named subnet or subnets; you need to confirm the key name yourself.