I am trying to understand the use case for the keepers feature of the Terraform random provider. I read the docs but it's not clicking for me. What is a concrete example of a situation where the keepers map would be used, and why? The example from the docs is reproduced below.
resource "random_id" "server" {
keepers = {
# Generate a new id each time we switch to a new AMI id
ami_id = "${var.ami_id}"
}
byte_length = 8
}
resource "aws_instance" "server" {
tags = {
Name = "web-server ${random_id.server.hex}"
}
# Read the AMI id "through" the random_id resource to ensure that
# both will change together.
ami = "${random_id.server.keepers.ami_id}"
# ... (other aws_instance arguments) ...
}
The keepers are values that Terraform stores alongside the generated random value. As long as they stay the same, the previously generated value is kept; when any of them changes, a new random value is generated in its place.
A random_id without any keepers is generated once, saved in state, and then simply never changes on its own: once you apply your plan, your infrastructure remains stable and subsequent plans generate no changes as long as everything else remains the same. That stability is usually what you want, but it also means the random part of the server's Name in this example would never change, even when the server itself changes in a way that effectively makes it a different server.
When it comes time to make changes to this server - such as, in this case, changing the image it's built from - you may well want the server name to automatically change to a new random value to represent that this is no longer the same server as before. Using the AMI ID in the keepers for the random ID therefore means that when your AMI ID changes, a new random ID will be generated for the server's Name as well.
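Note that keepers is just an arbitrary map of strings, so you can list several values, and a change to any one of them forces a new random value. A small sketch of that idea (var.instance_type is an assumed variable here, not part of the original example):

resource "random_id" "server" {
  keepers = {
    # A change to either of these values forces a new random id
    ami_id        = var.ami_id
    instance_type = var.instance_type # assumed variable, for illustration only
  }

  byte_length = 8
}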
Context: I'm developing a TF provider (here's the official guide from HashiCorp).
I run into the following situation:
# main.tf
resource "foo" "example" {
id = "foo-123"
name = "foo-name"
lastname = "foo-lastname"
}
resource "bar" "example" {
id = "bar-123"
parent_id = foo.example.id
parent_name = foo.example.name
parent_lastname = foo.example.lastname
}
where I have to declare parent_name and parent_lastname explicitly (effectively duplicating them) to be able to read the values that are needed for the read/create requests for the Bar resource.
Is it possible to use a fancy trick with d *schema.ResourceData in
func resourceBarRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
to avoid the duplication in my TF config, i.e. to have just:
resource "foo" "example" {
id = "foo-123"
name = "foo-name"
lastname = "foo-lastname"
}
resource "bar" "example" {
id = "bar-123"
parent_id = foo.example.id
}
and infer foo.example.name and foo.example.lastname based only on foo.example.id in resourceBarRead(), so that I don't have to duplicate those fields in both resources?
Obviously this is a minimal example; let's assume I really do need both foo.example.name and foo.example.lastname to send a read/create request for the Bar resource. In other words, can I iterate through another resource in the TF state / main.tf file, based on a target ID, to find its other attributes? It seems like a useful feature, however it's not mentioned in HashiCorp's guide, so I guess it's undesirable and I have to duplicate those fields.
In Terraform's provider/resource model, each resource block is independent of all others unless the user explicitly connects them using references like you showed.
However, in most providers it's sufficient to pass only the id (or similar unique identifier) attribute downstream to create a relationship like this, because the other information about that object is already known to the remote system.
Without knowledge about the particular remote system you are interacting with, I would expect that you'd be able to use the value given as parent_id either directly in an API call (and thus have the remote system connect it with the existing object), or to make an additional read request to the remote API to look up the object using that ID and obtain the name and lastname values that were saved earlier.
If those values only exist in the provider's context and not in the remote API then I don't think there will be any alternative but to have the user pass them in again, since that is the only way that local values (as opposed to values persisted in the remote API) can travel between resources.
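For what it's worth, if you do implement that extra lookup inside the provider (so that bar only needs parent_id from the user), you could still expose parent_name and parent_lastname as computed attributes in bar's schema. The user-facing configuration would then be the shorter form you're after, something like this sketch:

resource "bar" "example" {
  id        = "bar-123"
  parent_id = foo.example.id

  # parent_name and parent_lastname are no longer set here; the provider
  # looks them up from the remote API using parent_id and exposes them
  # as computed attributes, e.g. bar.example.parent_name.
}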
I have the following use case: I'm using a combination of Azure DevOps pipelines and Terraform to synchronize our TAP (Test, Acceptance, Production) environments for Grafana (v7.4). The intention is that we can tweak and tune our dashboards on Test, and push the changes to Acceptance (and Production) via the pipelines.
I've got one pipeline that pulls in the state of the Test environment and writes it to a set of json files (for the dashboards) and a single json array (for the folders).
The second pipeline should use these resources to synchronize the Acceptance environment.
This works flawlessly for the dashboards, but I'm hitting a snag putting the dashboards in the right folder dynamically. Here's my latest working code:
resource "grafana_folder" "folders" {
for_each = toset(var.grafana_folders)
title = each.key
}
resource "grafana_dashboard" "dashboards" {
for_each = fileset(path.module, "../dashboards/*.json")
config_json = file("${path.module}/${each.key}")
}
The folder resource pushes the folders based on a list of names that I pass in via a variable. This generates the folders correctly.
The dashboard resource pushes the dashboards correctly, based on all dashboard files in the specified folder.
But now I'd like to make sure the dashboards end up in the right folder. The provider specifies that I need to do this based on the folder UID, which is generated when the folder is created. So I'd like to take the output from the grafana_folder resource and use it in the grafana_dashboard resource. I'm trying the following:
resource "grafana_folder" "folders" {
for_each = toset(var.grafana_folders)
title = each.key
}
resource "grafana_dashboard" "dashboards" {
for_each = fileset(path.module, "../dashboards/*.json")
config_json = file("${path.module}/${each.key}")
folder = lookup(transpose(grafana_folder.folders), "Station_Details", "Station_Details")
depends_on = [grafana_folder.folders]
}
If I read the Grafana Provider github correctly, the grafana_folder resource should output a map of [uid, title]. So I figured if I transpose that map, and (by way of test) lookup a folder title that I know exists, I can test the concept.
This gives the following error:
  on main.tf line 38, in resource "grafana_dashboard" "dashboards":
  38:   folder = lookup(transpose(grafana_folder.folders), "Station_Details", "Station_Details")

Invalid value for "default" parameter: the default value must have the same
type as the map elements.
Both Uid and Title should be strings, so I'm obviously overlooking something.
Does anyone have an inkling where I'm going wrong and/or have suggestions on how I can do this (better)?
I think the problem this error is trying to report is that grafana_folder.folders is a map of objects, so passing it to transpose doesn't really make sense. It seems to succeed because Terraform finds some clever way to automatically convert types and produce a result, but that result (due to the signature of transpose) is a map of lists rather than a map of strings, and so "Station_Details" (a string, rather than a list) isn't a valid fallback value for that lookup.
My limited familiarity with folders in Grafana leaves me unsure as to what to suggest instead, but I expect the final expression will look something like the following:
folder = grafana_folder.folders[SOMETHING].id
SOMETHING here will be an expression that allows you to know for a given dashboard which folder key it ought to belong to. I'm not seeing an answer to that from what you shared in your question, but just as a placeholder to make this a complete answer I'll suggest that one option would be to make a local map from dashboard filename to folder name:
locals {
  # a local value probably isn't actually the right answer
  # here, but I'm just showing it as a placeholder for one
  # possible way to map from dashboard filename to folder
  # name. These names should all be elements of
  # var.grafana_folders in order for this to work.
  dashboard_folders = {
    "example1.json" = "example-folder"
    "example2.json" = "example-folder"
    "example3.json" = "another-folder"
  }
}

resource "grafana_dashboard" "dashboards" {
  for_each = fileset("${path.module}/dashboards", "*.json")

  config_json = file("${path.module}/dashboards/${each.key}")
  folder      = grafana_folder.folders[local.dashboard_folders[each.key]].id
}
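As one possible refinement of that placeholder, if you can organize the dashboard files into one sub-directory per Grafana folder (with directory names matching the entries in var.grafana_folders), the mapping can be derived from the path instead of being maintained in a separate local map. A sketch of that idea:

resource "grafana_dashboard" "dashboards" {
  # each.key becomes e.g. "example-folder/example1.json"
  for_each = fileset("${path.module}/dashboards", "*/*.json")

  config_json = file("${path.module}/dashboards/${each.key}")

  # The directory portion of the path names the folder this dashboard
  # belongs in, so it must match a key of grafana_folder.folders.
  folder = grafana_folder.folders[dirname(each.key)].id
}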
I have a very frustrating Terraform issue. I made some changes to my Terraform script which failed when I applied the plan. I've gone through a bunch of machinations and probably made the situation worse, as I ended up manually deleting a bunch of AWS resources while trying to resolve this.
So now I am unable to use Terraform at all; refresh, plan, and destroy all produce the same error.
The Situation
I have a list of Fargate services, and a set of maps which correlate different features of the fargate services such as the "Target Group" for the load balancer (I've provided some code below). The problem appears to be that Terraform is not picking up that these resources have been manually deleted or is somehow getting confused because they don't exist. At this point if I run a refresh, plan or destroy I get an error stating that a specific list is empty, even though it isn't (or should not be).
In the failed run I added a new service to the list below, along with a new URL (see code below).
Objective
At this point I would settle for destroying the entire environment (it's my dev environment); ideally, however, I want to just get the system working so that Terraform will detect the changes and work properly.
Terraform Script is Valid
I have reverted my Terraform scripts back to the last known good version. I have run the good version against our staging environment and it works fine.
Configuration Info
MacOS Mojave 10.14.6 (18G103)
Terraform v0.12.24.
provider.archive v1.3.0
provider.aws v2.57.0
provider.random v2.2.1
provider.template v2.1.2
The Terraform state file is being stored in a S3 bucket, and terraform init --reconfigure has been called.
What I've done
I was originally getting a similar error, but it was in a different location. After many hours of Googling and trying things (which I didn't write down), I decided to manually remove the AWS resources associated with the problematic code (the ALB, target groups, and security groups).
Example Terraform Script
Unfortunately I can't post the actual script as it is private, but I've posted what I believe are the pertinent parts, with some info redacted. I mention this because any syntax error you might see would be caused by the redaction; as I stated above, the script works fine when run in our staging environment.
globalvars.tf
In the root directory. In the failed Terraform run I added a new name ("edd") to the service_names list (as the first element), and added the new entry (edd = "edd") to service_name_map_2_url as the last entry. I'm not sure whether the fact that I added these elements in a different 'order' is the problem, although it really shouldn't be, since I access the map via the name and not by index.
variable "service_names" {
type = list(string)
description = "This is a list/array of the images/services for the cluster"
default = [
"alert",
"alert-config"
]
}
variable service_name_map_2_url {
type = map(string)
description = "This map contains the base URL used for the service"
default = {
alert = "alert"
alert-config = "alert-config"
}
}
alb.tf
In modules/alb. In this module we create an ALB and then a target group for each service, which looks like this. The items from globalvars.tf are passed into this module.
locals {
  numberOfServices = length(var.service_names)
}

resource "aws_alb" "orchestration_alb" {
  name            = "orchestration-alb"
  subnets         = var.public_subnet_ids
  security_groups = [var.alb_sg_id]

  tags = {
    environment = var.environment
    group       = var.tag_group_name
    app         = var.tag_app_name
    contact     = var.tag_contact_email
  }
}

resource "aws_alb_target_group" "orchestration_tg" {
  count = local.numberOfServices

  name                 = "${var.service_names[count.index]}-tg"
  port                 = 80
  protocol             = "HTTP"
  vpc_id               = var.vpc_id
  target_type          = "ip"
  deregistration_delay = 60

  tags = {
    environment = var.environment
    group       = var.tag_group_name
    app         = var.tag_app_name
    contact     = var.tag_contact_email
  }

  health_check {
    path                = "/${var.service_name_map_2_url[var.service_names[count.index]]}/health"
    port                = var.app_port
    protocol            = "HTTP"
    healthy_threshold   = 2
    unhealthy_threshold = 5
    interval            = 30
    timeout             = 5
    matcher             = "200-308"
  }
}
output.tf
This is the output from alb.tf; other things are output as well, but this is the one that matters for this issue.
output "target_group_arn_suffix" {
value = aws_alb_target_group.orchestration_tg.*.arn_suffix
}
cloudwatch.tf
In modules/cloudwatch. I attempt to create a dashboard
data "template_file" "Dashboard" {
template = file("${path.module}/dashboard.json.template")
vars = {
...
alert-tg = var.target_group_arn_suffix[0]
alert-config-tg = var.target_group_arn_suffix[1]
edd-cluster-name = var.ecs_cluster_name
alb-arn-suffix = var.alb-arn-suffix
}
}
Error
When I run terraform refresh (or plan or destroy) I get the following error (I get the same error for alert-config as well)
Error: Invalid index

  on modules/cloudwatch/cloudwatch.tf line 146, in data "template_file" "Dashboard":
  146:    alert-tg = var.target_group_arn_suffix[0]
    |----------------
    | var.target_group_arn_suffix is empty list of string

The given key does not identify an element in this collection value.
AWS Environment
I have manually deleted the ALB, dashboard, and all target groups. I would expect (and this has worked in the past) that Terraform would detect this and update its state file appropriately, such that when running a plan it would know it has to create the ALB and target groups.
Thank you
Terraform trusts its state as the single source of truth. Using Terraform in the presence of manual change is possible, but problematic.
If you manually remove infrastructure, you need to run terraform state rm [resource path] on the manually removed resource.
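For example, with the configuration in the question you would remove the deleted ALB and target groups from state before planning again. Run terraform state list to see the exact resource addresses Terraform is tracking (they will include your module path), then remove each address that no longer exists, along the lines of terraform state rm module.alb.aws_alb.orchestration_alb (the module name here is assumed; use the addresses reported by terraform state list).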
Gruntwork has what they call The Golden Rule of Terraform:
The master branch of the live repository should be a 1:1 representation of what’s actually deployed in production.
Following this v0.11 example from the official documentation, I had to make minor changes to it to make it work with v0.12 (provider.vsphere is v1.11.0).
resource "vsphere_vmfs_datastore" "datastore" {
name = "test"
host_system_id = "${data.vsphere_host.esxi_host.id}"
disks = data.vsphere_vmfs_disks.available.disks
}
This, however, creates a single datastore comprising all discovered volumes. What I want is one new datastore per discovered volume.
I tried using count = 2 in the above resource; with 2 volumes that attempts to create 2 datastores (the good), but each one still comprises both volumes (the bad).
vsphere_vmfs_datastore should count the number of volumes returned by vsphere_vmfs_disks (so that I don't have to set it), loop through the list, and create one datastore on each. This makes me think the resource block should be inside a loop, where each datastore would get a unique name and use data.vsphere_vmfs_disks.available.disks[N], but I don't know how to do that in Terraform 0.12 (there are relatively few examples and still some bugs).
Would the following work for you? It still uses the count method but passes in the count index to select a disk inside of the data.vsphere_vmfs_disks.available.disks data source.
resource "vsphere_vmfs_datastore" "datastore" {
count = 2
name = "test${count.index}"
host_system_id = data.vsphere_host.esxi_host.id
disks = [
data.vsphere_vmfs_disks.available.disks[count.index]
]
}
Unfortunately I don't have an ESXi host I can test against, but the logic should still apply.
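To avoid hard-coding the count, you could also derive it from the data source, assuming the disk list can be resolved at plan time (if it can't, Terraform won't be able to compute count and you'd need to keep it explicit or target the data source first):

resource "vsphere_vmfs_datastore" "datastore" {
  # One datastore per discovered disk
  count = length(data.vsphere_vmfs_disks.available.disks)

  name           = "test${count.index}"
  host_system_id = data.vsphere_host.esxi_host.id

  disks = [
    data.vsphere_vmfs_disks.available.disks[count.index],
  ]
}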
I wrote a new module for a launch_configuration and autoscaling group with Terraform. Everything works fine, but my only concern is the name_prefix value I get after the resources are created.
Is there a way to make the name shorter?
From example-asg-20180303000844652900000002 to something much shorter, like:
example-asg-201803030
I know this suffix is generated automatically by Terraform, but it would be best if the name were shorter.
Thanks in advance for the help.
Currently, the only option available to you is to construct the name attribute manually instead of using name_prefix: interpolate a timestamp (or whatever you like) onto the name yourself, and add ignore_changes = ["name"] so that changes in the timestamp don't cause your resource to be renamed every time you run terraform apply.
resource "some_resource" "foo" {
# Change 8 for your desired substring length of timestamp
name = "my-thing-${substr(replace(timestamp(), "/[-:]/", ""), 0, 8)}"
lifecycle {
ignore_changes = ["name"]
}
}
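One note: the ignore_changes = ["name"] string form shown above is the Terraform 0.11 syntax; on 0.12 and later the same thing is written with a bare attribute reference, ignore_changes = [name].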