Use vsphere_vmfs_datastore to create one DS per volume - terraform

Following this v0.11 example from the official documentation, I had to make minor changes to it to make it work with v0.12 (provider.vsphere is v1.11.0).
resource "vsphere_vmfs_datastore" "datastore" {
  name           = "test"
  host_system_id = data.vsphere_host.esxi_host.id
  disks          = data.vsphere_vmfs_disks.available.disks
}
This, however, creates a single datastore spanning all of the discovered volumes. What I want is one new datastore per discovered volume.
I tried adding count = 2 to the resource above; with 2 volumes that attempts to create 2 datastores (the good), but each one still spans both volumes (the bad).
Ideally, vsphere_vmfs_datastore would count the number of volumes returned by vsphere_vmfs_disks (so that I don't have to set it), loop through the list, and create one datastore on each. That makes me think this resource block should sit inside a loop, with each datastore getting a unique name and using data.vsphere_vmfs_disks.available.disks[N], but I don't know how to do that in Terraform 0.12 (there are relatively few examples and still some bugs).

Would the following work for you? It still uses the count method, but passes the count index into an index expression to select a single disk from the data.vsphere_vmfs_disks.available.disks list.
resource "vsphere_vmfs_datastore" "datastore" {
  count          = 2
  name           = "test${count.index}"
  host_system_id = data.vsphere_host.esxi_host.id

  disks = [
    data.vsphere_vmfs_disks.available.disks[count.index]
  ]
}
Unfortunately I don't have an ESXi host I can test against, but the logic should still apply.
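To avoid hard-coding the count, you could also take this one step further and size count from the data source itself; an equally untested sketch along the same lines:

```hcl
resource "vsphere_vmfs_datastore" "datastore" {
  # One datastore per discovered disk; no hard-coded count.
  count          = length(data.vsphere_vmfs_disks.available.disks)
  name           = "test${count.index}"
  host_system_id = data.vsphere_host.esxi_host.id

  disks = [
    data.vsphere_vmfs_disks.available.disks[count.index]
  ]
}
```

With this variant, adding or removing volumes on the host changes the number of datastores automatically on the next plan.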


terraform to append consul_key values in json

I have a project in which I have to use Terraform, and at the end of the Terraform run I need to append values to Consul keys stored under a path. I have the following:
resource "consul_keys" "write" {
  datacenter = "dc1"
  token      = "xxxx-x-x---xxxxxx--xx-x-x-x"

  key {
    path = "path/to/name"
    value = jsonencode([
      {
        cluster_name = "test", "region" : "us-east1"
      },
      {
        cluster_name = "test2", "region" : "us-central1"
      }
    ])
  }
}
But if I run Terraform again with new values, it deletes all of the previous values and replaces them with the new ones.
Is there any way I can keep appending values while keeping the previous values as they are?
The consul_keys resource type in the hashicorp/consul provider only supports situations where it is responsible for managing the entirety of the value of each of the given keys. This is because the underlying Consul API itself treats each key as a single atomic unit, and doesn't support partial updates of the sort you want to achieve here.
If you are able to change the design of the system that is consuming these values, a different way to get a comparable result would be to set aside a particular key prefix as a collection of values that the consumer will merge together after reading them. Consul's Read Key API includes a mode recurse=true which allows you to provide a prefix to read all of the entries with a given prefix in a single request.
By designing your key structure this way, you can use separate keys for the data that Terraform will provide and for the data provided by each other system that generates data under this shared prefix. These different systems can therefore each maintain their own designated sub-key, and don't need to take any special extra steps to preserve existing data already stored at that location.
If you are using consul-template then consul-template's ls function wraps the multi-key lookup I described above.
If you are reading the data from Consul in some other Terraform configuration, the consul_key_prefix data source similarly implements the operation of fetching all key/value pairs under a given prefix.
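As a rough illustration of that consuming side in Terraform (the prefix and key layout here are hypothetical), reading and merging everything under a shared prefix might look something like this:

```hcl
# Hypothetical prefix; each producing system owns one sub-key under it.
data "consul_key_prefix" "app_settings" {
  path_prefix = "apps/clusters/"
}

locals {
  # subkeys is a map of sub-key name => raw value; decode each JSON
  # value and flatten the per-producer lists into one combined list.
  all_clusters = flatten([
    for name, raw in data.consul_key_prefix.app_settings.subkeys : jsondecode(raw)
  ])
}
```

Each producer can then write its own sub-key (Terraform via consul_keys, other systems via the Consul API) without ever touching its neighbors' data.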

How to put Dashboards in the right folder dynamically using the Terraform Grafana provider

I have the following use-case: I'm using a combination of Azure DevOps pipelines and Terraform to synchronize our TAP (Test, Acceptance, Production) environments for Grafana (v7.4). The intention is that we can tweak and tune our dashboards on Test, and push the changes to Acceptance (and Production) via the pipelines.
I've got one pipeline that pulls in the state of the Test environment and writes it to a set of json files (for the dashboards) and a single json array (for the folders).
The second pipeline should use these resources to synchronize the Acceptance environment.
This works flawlessly for the dashboards, but I'm hitting a snag putting the dashboards in the right folder dynamically. Here's my latest working code:
resource "grafana_folder" "folders" {
  for_each = toset(var.grafana_folders)
  title    = each.key
}

resource "grafana_dashboard" "dashboards" {
  for_each    = fileset(path.module, "../dashboards/*.json")
  config_json = file("${path.module}/${each.key}")
}
The folder resource pushes the folders based on a variable list of names that I pass in via variables. This generates the folders correctly.
The dashboard resource pushes the dashboards correctly, based on all dashboard files in the specified directory.
But now I'd like to make sure the dashboards end up in the right folders. The provider documentation says I need to do this based on the folder UID, which is generated when the folder is created. So I'd like to take the output from the grafana_folder resource and use it in the grafana_dashboard resource. I'm trying the following:
resource "grafana_folder" "folders" {
  for_each = toset(var.grafana_folders)
  title    = each.key
}

resource "grafana_dashboard" "dashboards" {
  for_each    = fileset(path.module, "../dashboards/*.json")
  config_json = file("${path.module}/${each.key}")
  folder      = lookup(transpose(grafana_folder.folders), "Station_Details", "Station_Details")
  depends_on  = [grafana_folder.folders]
}
If I read the Grafana provider's GitHub repository correctly, the grafana_folder resource should output a map of [uid, title]. So I figured that if I transpose that map and (by way of a test) look up a folder title that I know exists, I can test the concept.
This gives the following error:
on main.tf line 38, in resource "grafana_dashboard" "dashboards":
  38: folder = lookup(transpose(grafana_folder.folders), "Station_Details", "Station_Details")

Invalid value for "default" parameter: the default value must have the same type as the map elements.
Both Uid and Title should be strings, so I'm obviously overlooking something.
Does anyone have an inkling where I'm going wrong and/or have suggestions on how I can do this (better)?
I think the problem this error is trying to report is that grafana_folder.folders is a map of objects, so passing it to transpose doesn't really make sense. It seems to succeed because Terraform has found some clever way to automatically convert types and produce a result, but that result (due to the signature of transpose) is a map of lists rather than a map of strings, and so "Station_Details" (a string, rather than a list) isn't a valid fallback value for that lookup.
My limited familiarity with folders in Grafana leaves me unsure as to what to suggest instead, but I expect the final expression will look something like the following:
folder = grafana_folder.folders[SOMETHING].id
SOMETHING here will be an expression that allows you to know for a given dashboard which folder key it ought to belong to. I'm not seeing an answer to that from what you shared in your question, but just as a placeholder to make this a complete answer I'll suggest that one option would be to make a local map from dashboard filename to folder name:
locals {
  # A local value probably isn't actually the right answer
  # here, but I'm just showing it as a placeholder for one
  # possible way to map from dashboard filename to folder
  # name. These names should all be elements of
  # var.grafana_folders in order for this to work.
  dashboard_folders = {
    "example1.json" = "example-folder"
    "example2.json" = "example-folder"
    "example3.json" = "another-folder"
  }
}
resource "grafana_dashboard" "dashboards" {
  for_each    = fileset("${path.module}/dashboards", "*.json")
  config_json = file("${path.module}/dashboards/${each.key}")
  folder      = grafana_folder.folders[local.dashboard_folders[each.key]].id
}
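If the dashboard files can be organized into one subdirectory per folder, the mapping could instead be derived from the paths themselves rather than maintained by hand; a sketch, assuming a dashboards/&lt;folder-name&gt;/&lt;dashboard&gt;.json layout where the subdirectory names match var.grafana_folders:

```hcl
resource "grafana_dashboard" "dashboards" {
  # Match one level of subdirectory: dashboards/<folder>/<file>.json
  for_each    = fileset("${path.module}/dashboards", "*/*.json")
  config_json = file("${path.module}/dashboards/${each.key}")

  # dirname() extracts the subdirectory name, which must equal an
  # element of var.grafana_folders for this lookup to succeed.
  folder = grafana_folder.folders[dirname(each.key)].id
}
```

That way, moving a dashboard between folders is just a file move, with no map to keep in sync.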

What problem does the keepers map for the random provider solve?

I am trying to understand the use case for the keepers feature of the Terraform random provider. I read the docs, but it's not clicking for me. What is a concrete example or situation where the keepers map would be used, and why? The example from the docs is reproduced below.
resource "random_id" "server" {
  keepers = {
    # Generate a new id each time we switch to a new AMI id
    ami_id = "${var.ami_id}"
  }

  byte_length = 8
}

resource "aws_instance" "server" {
  tags = {
    Name = "web-server ${random_id.server.hex}"
  }

  # Read the AMI id "through" the random_id resource to ensure that
  # both will change together.
  ami = "${random_id.server.keepers.ami_id}"

  # ... (other aws_instance arguments) ...
}
The keepers record the data that the random string's lifetime is tied to: the generated value stays the same, essentially deterministic, until one of the keeper values changes, at which point the resource is regenerated.
Note that a random_id without any keepers is also stable once created: Terraform saves the result in its state and won't change it on its own. What keepers add is a way to declare when a fresh value should be generated.
That stability matters because while you might want randomness when you first create the server, you probably don't want so much randomness that it constantly changes. Once you apply your plan, your infrastructure should remain stable, and subsequent plans should generate no changes as long as everything else remains the same.
When it comes time to make changes to this server - such as, in this case, changing the image it's built from - you may well want the server name to automatically change to a new random value to represent that this is no longer the same server as before. Using the AMI ID in the keepers for the random ID therefore means that when your AMI ID changes, a new random ID will be generated for the server's Name as well.
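To see the effect in isolation, here's a minimal sketch using random_pet (the variable name is assumed): the generated name stays stable across applies until var.ami_id changes, at which point the resource is replaced and a new name comes out:

```hcl
variable "ami_id" {
  type = string
}

resource "random_pet" "server" {
  keepers = {
    # Changing this value forces a brand-new pet name.
    ami_id = var.ami_id
  }
}

output "server_name" {
  # Stable across applies until var.ami_id changes.
  value = "web-server-${random_pet.server.id}"
}
```

Running apply twice with the same ami_id produces no changes; changing ami_id replaces random_pet.server and everything that reads from it.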

Terraform doesn't seem to pick up manual changes

I have a very frustrating Terraform issue. I made some changes to my Terraform script, which failed when I applied the plan. I've gone through a bunch of machinations and probably made the situation worse, as I ended up manually deleting a bunch of AWS resources while trying to resolve this.
So now I am unable to use Terraform at all; refresh, plan, and destroy all produce the same error.
The Situation
I have a list of Fargate services, and a set of maps which correlate different features of the Fargate services, such as the "Target Group" for the load balancer (I've provided some code below). The problem appears to be that Terraform is not picking up that these resources have been manually deleted, or is somehow getting confused because they don't exist. At this point, if I run a refresh, plan, or destroy, I get an error stating that a specific list is empty, even though it isn't (or should not be).
In the failed run I added a new service to the list below along with a new url (see code below)
Objective
At this point I would settle for destroying the entire environment (it's my dev environment); ideally, though, I want to get the system working again so that Terraform detects the changes and works properly.
Terraform Script is Valid
I have reverted my Terraform scripts back to the last known good version. I have run the good version against our staging environment and it works fine.
Configuration Info
MacOS Mojave 10.14.6 (18G103)
Terraform v0.12.24.
provider.archive v1.3.0
provider.aws v2.57.0
provider.random v2.2.1
provider.template v2.1.2
The Terraform state file is being stored in a S3 bucket, and terraform init --reconfigure has been called.
What I've done
I was originally getting a similar error, but in a different location. After many hours of Googling and trying things (which I didn't write down), I decided to manually remove the AWS resources associated with the problematic code (the ALB, target groups, and security groups).
Example Terraform Script
Unfortunately I can't post the actual script as it is private, but I've posted what I believe are the pertinent parts and have redacted some info. The reason I mention this is that any syntax-type error you might see would be caused by this redaction; as I stated above, the script works fine when run in our staging environment.
globalvars.tf
In the root directory. In the failed Terraform run I added a new name ("edd") to the service_names list, as the first element. In service_name_map_2_url I added the new entry (edd = "edd") as the last entry. I'm not sure whether adding these elements in a different order is the problem, although it really shouldn't be, since I access the map by name and not by index.
variable "service_names" {
  type        = list(string)
  description = "This is a list/array of the images/services for the cluster"
  default = [
    "alert",
    "alert-config"
  ]
}

variable "service_name_map_2_url" {
  type        = map(string)
  description = "This map contains the base URL used for the service"
  default = {
    alert        = "alert"
    alert-config = "alert-config"
  }
}
alb.tf
In modules/alb. In this module we create an ALB and then a target group for each service, which looks like this. The items from globalvars.tf are passed into this script
locals {
  numberOfServices = length(var.service_names)
}

resource "aws_alb" "orchestration_alb" {
  name            = "orchestration-alb"
  subnets         = var.public_subnet_ids
  security_groups = [var.alb_sg_id]

  tags = {
    environment = var.environment
    group       = var.tag_group_name
    app         = var.tag_app_name
    contact     = var.tag_contact_email
  }
}

resource "aws_alb_target_group" "orchestration_tg" {
  count                = local.numberOfServices
  name                 = "${var.service_names[count.index]}-tg"
  port                 = 80
  protocol             = "HTTP"
  vpc_id               = var.vpc_id
  target_type          = "ip"
  deregistration_delay = 60

  tags = {
    environment = var.environment
    group       = var.tag_group_name
    app         = var.tag_app_name
    contact     = var.tag_contact_email
  }

  health_check {
    path                = "/${var.service_name_map_2_url[var.service_names[count.index]]}/health"
    port                = var.app_port
    protocol            = "HTTP"
    healthy_threshold   = 2
    unhealthy_threshold = 5
    interval            = 30
    timeout             = 5
    matcher             = "200-308"
  }
}
output.tf
This is the output from alb.tf. Other things are output as well, but this is the one that matters for this issue.
output "target_group_arn_suffix" {
  value = aws_alb_target_group.orchestration_tg.*.arn_suffix
}
cloudwatch.tf
In modules/cloudwatch. I attempt to create a dashboard
data "template_file" "Dashboard" {
  template = file("${path.module}/dashboard.json.template")

  vars = {
    ...
    alert-tg         = var.target_group_arn_suffix[0]
    alert-config-tg  = var.target_group_arn_suffix[1]
    edd-cluster-name = var.ecs_cluster_name
    alb-arn-suffix   = var.alb-arn-suffix
  }
}
Error
When I run terraform refresh (or plan or destroy) I get the following error (I get the same error for alert-config as well)
Error: Invalid index

  on modules/cloudwatch/cloudwatch.tf line 146, in data "template_file" "Dashboard":
 146:   alert-tg = var.target_group_arn_suffix[0]
    |----------------
    | var.target_group_arn_suffix is empty list of string

The given key does not identify an element in this collection value.
AWS Environment
I have manually deleted the ALB, the dashboard, and all target groups. I would expect (and this has worked in the past) that Terraform would detect this and update its state file appropriately, such that when running a plan it would know it has to create the ALB and target groups.
Thank you
Terraform trusts its state as the single source of truth. Using Terraform in the presence of manual changes is possible, but problematic.
If you manually remove infrastructure, you need to run terraform state rm [resource path] on the manually removed resource.
Gruntwork has what they call The Golden Rule of Terraform:
The master branch of the live repository should be a 1:1 representation of what’s actually deployed in production.
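For the resources in this question, that cleanup might look roughly like the following (the exact resource addresses are hypothetical; check terraform state list for the real ones in your state file):

```
# Find the addresses of the manually deleted resources...
terraform state list | grep -E 'aws_alb|aws_alb_target_group'

# ...then drop each one from state so Terraform stops assuming it exists.
terraform state rm 'module.alb.aws_alb.orchestration_alb'
terraform state rm 'module.alb.aws_alb_target_group.orchestration_tg'
```

After that, a plan should propose creating the ALB and target groups fresh instead of failing on references to resources that no longer exist.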

Configure interpolated list in Terraform variable to create SNS subscriptions

I'm trying to configure a list at the top of my file naming all the SQS resources that should subscribe to an SNS topic. It throws a "resource variables must be three parts: TYPE.NAME.ATTR" error.
I used locals because it seems they support interpolated values while variables did not.
locals {
  update-subscribers = [
    "${var.prefix}-${terraform.workspace}-contribution-updates"
  ]
}
Here is a snippet of my sns topic subscription.
resource "aws_sns_topic_subscription" "subscription" {
  count     = "${length(locals.update-subscribers.*)}"
  topic_arn = "${aws-sns-update-topic.topic.arn}"
  protocol  = "sqs"
  endpoint  = "arn:aws:sqs:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:${element(locals.update-subscribers, count.index)}"

  endpoint_auto_confirms = true
}
It would be nice to be able to use my variable list so I can switch between workspaces without any issues on the AWS side. All the examples I can find point to a static list of CIDR settings, while I want my list to be based on interpolated strings. I also tried
locals.contribution-update-subscribers[count.index]
Terraform did not like that either. How should my file be set up to support this, or can it be supported at all?
There are two problems with the configuration given here:
The object name for accessing local values is called local, not locals.
You don't need to (and currently, cannot) use the splat syntax to count the number of elements in what is already a list.
Addressing both of these would give the following configuration, which I think should work. (I've also assumed your topic is an aws_sns_topic resource named "topic" and corrected that reference accordingly, since aws-sns-update-topic isn't a valid resource reference.)
resource "aws_sns_topic_subscription" "subscription" {
  count     = "${length(local.update-subscribers)}"
  topic_arn = "${aws_sns_topic.topic.arn}"
  protocol  = "sqs"
  endpoint  = "arn:aws:sqs:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:${local.update-subscribers[count.index]}"

  endpoint_auto_confirms = true
}
Although dashes are allowed in identifiers in the Terraform language, to allow the use of different naming schemes in other systems, the idiomatic style is to use underscores for names defined within Terraform itself, such as your local value name update-subscribers.
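As an aside, in Terraform 0.12 and later the same thing can be written more directly with for_each and without the interpolation quoting (again assuming the topic resource is aws_sns_topic.topic, and with the local value renamed to use underscores):

```hcl
resource "aws_sns_topic_subscription" "subscription" {
  for_each  = toset(local.update_subscribers)
  topic_arn = aws_sns_topic.topic.arn
  protocol  = "sqs"
  endpoint  = "arn:aws:sqs:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:${each.value}"

  endpoint_auto_confirms = true
}
```

With for_each, each subscription is tracked in state by queue name rather than by list index, so adding or removing a name doesn't shuffle the other subscriptions.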
