Terraform - Adding exceptions to ignore_changes

I currently have a module which contains a block with many configurable properties:
resource "example_resource" "instance1" {
  block1 {
    property1 = var.variable1 # Should generate a diff if changed
    property2 = var.variable2 # Ignore
    property3 = var.variable3 # Ignore
    ...
    propertyN = var.variableN # Ignore
  }

  lifecycle {
    ignore_changes = [
      block1[0].property2, block1[0].property3, ..., block1[0].propertyN
    ]
  }
}
Once the resource has been created, many of the properties within block1 are likely to change due to interaction with the user. I want to ignore such changes when running terraform plan, apart from a few exceptions which should still generate a diff if changed in the future. (For example, in the above resource a change to property1 should generate a diff, but changes to the other properties should not.)
Ignoring such changes can be done using ignore_changes within the lifecycle block, but it does not seem to support this directly: adding the entire block1 argument causes all changes within it to be ignored, so instead every ignored property has to be added to ignore_changes one by one, as in the example above.
Maintaining that list by hand makes things harder to maintain, since it has to be edited every time a property is added to or removed from the block. So is it possible to configure ignore_changes to ignore all changes and only list the required exceptions?
P.S.
I do not believe this question is specific to a certain resource, but the one I am trying to apply this concept to is the Azure App Service resource, specifically its site_config block.

The simplest approach I can think of is to just list out all the individual properties possible in that block, except the ones you don't want to ignore changes on. This would still be tedious and ugly.
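Concretely, for the site_config block from the question, that brute-force version would look roughly like the sketch below. Only a handful of real site_config property names are shown for illustration; the full list would have to be copied from the provider docs, and anything left off the list (e.g. always_on) would still produce a diff:
resource "azurerm_app_service" "example" {
  # ... name, location, resource_group_name, app_service_plan_id ...

  site_config {
    # ...
  }

  lifecycle {
    ignore_changes = [
      site_config[0].app_command_line,
      site_config[0].ftps_state,
      site_config[0].http2_enabled,
      site_config[0].min_tls_version,
      site_config[0].scm_type,
      site_config[0].websockets_enabled,
      # ... one entry per remaining site_config property ...
    ]
  }
}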
Here's a more clever (untested) approach to try:
# get the existing resource
data "example_resource" "instance1" {
}

resource "example_resource" "instance1" {
  block1 {
    ...
  }

  lifecycle {
    # transform the list of properties so the values all start with block1[0].
    ignore_changes = [for prop in local.ignore_change_props : "block1[0].${prop}"]
  }
}

locals {
  # these properties we want to exclude from ignore_changes
  change_exceptions = ["property1", "property10"]

  # get all the properties from the data block as a map, then remove the properties to be excluded
  ignore_change_props = setsubtract(keys(data.example_resource.instance1.block1[0]), local.change_exceptions)
}

Related

How do I apply a CRD from github to a cluster with terraform?

I want to install a CRD with Terraform. I was hoping it would be as easy as doing this:
data "http" "crd" {
url = "https://raw.githubusercontent.com/kubernetes-sigs/application/master/deploy/kube-app-manager-aio.yaml"
request_headers = {
Accept = "text/plain"
}
}
resource "kubernetes_manifest" "install-crd" {
manifest = data.http.crd.body
}
But I get this error:
can't unmarshal tftypes.String into *map[string]tftypes.Value, expected map[string]tftypes.Value
Trying to convert it to yaml with yamldecode also doesn't work because yamldecode doesn't support multi-doc yaml files.
I could use exec, but I was already doing that while waiting for the kubernetes_manifest resource to be released. Does kubernetes_manifest only support a single resource or can it be used to create several from a raw text manifest file?
From the kubernetes_manifest docs (emphasis mine):
"Represents one Kubernetes resource by supplying a manifest attribute"
That sounds to me like it does not support multiple resources / a multi-doc yaml file.
However you can manually split the incoming document and yamldecode the parts of it:
locals {
  yamls = [for data in split("---", data.http.crd.body) : yamldecode(data)]
}

resource "kubernetes_manifest" "install-crd" {
  count    = length(local.yamls)
  manifest = local.yamls[count.index]
}
Unfortunately on my machine this then complains about
'status' attribute key is not allowed in manifest configuration
for exactly one of the 11 manifests.
And since I have no clue about Kubernetes, I have no idea what that means or whether or not it needs fixing.
Alternatively you can always use a null_resource with a script that fetches the yaml document and uses bash tools or python or whatever is installed to convert and split and filter the incoming yaml.
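A minimal sketch of that fallback, assuming kubectl is installed on the machine running Terraform (the resource name and trigger key are made up; the URL is the same one fetched above):
resource "null_resource" "install_crd" {
  triggers = {
    # re-run the command whenever the manifest URL changes
    manifest_url = "https://raw.githubusercontent.com/kubernetes-sigs/application/master/deploy/kube-app-manager-aio.yaml"
  }

  provisioner "local-exec" {
    # kubectl handles multi-document yaml natively, so no splitting is needed here
    command = "kubectl apply -f ${self.triggers.manifest_url}"
  }
}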
I got this to work using the kubectl provider. Eventually kubernetes_manifest should work as well, but it is currently (v2.5.0) still in beta and has some bugs. This example only uses kind+name, but for full uniqueness it should also include the API version and the namespace params.
resource "kubectl_manifest" "cdr" {
# Create a map { "kind--name" => yaml_doc } from the multi-document yaml text.
# Each element is a separate kubernetes resource.
# Must use \n---\n to avoid splitting on strings and comments containing "---".
# YAML allows "---" to be the first and last line of a file, so make sure
# raw yaml begins and ends with a newline.
# The "---" can be followed by spaces, so need to remove those too.
# Skip blocks that are empty or comments-only in case yaml began with a comment before "---".
for_each = {
for pair in [
for yaml in split(
"\n---\n",
"\n${replace(data.http.crd.body, "/(?m)^---[[:blank:]]*(#.*)?$/", "---")}\n"
) :
[yamldecode(yaml), yaml]
if trimspace(replace(yaml, "/(?m)(^[[:blank:]]*(#.*)?$)+/", "")) != ""
] : "${pair.0["kind"]}--${pair.0["metadata"]["name"]}" => pair.1
}
yaml_body = each.value
}
Once Hashicorp fixes kubernetes_manifest, I would recommend using the same approach. Do not use count+element(), because if the ordering of the elements changes, Terraform will delete/recreate many resources unnecessarily.
resource "kubernetes_manifest" "crd" {
for_each = {
for value in [
for yaml in split(
"\n---\n",
"\n${replace(data.http.crd.body, "/(?m)^---[[:blank:]]*(#.*)?$/", "---")}\n"
) :
yamldecode(yaml)
if trimspace(replace(yaml, "/(?m)(^[[:blank:]]*(#.*)?$)+/", "")) != ""
] : "${value["kind"]}--${value["metadata"]["name"]}" => value
}
manifest = each.value
}
P.S. Please support the Terraform feature request for a multi-document yamldecode. It would make things far easier than the regex above.
The kubectl provider's kubectl_file_documents data source can split a multi-resource yaml (---) for you (docs):
# fetch a raw multi-resource yaml
data "http" "knative_serving_crds" {
  url = "https://github.com/knative/serving/releases/download/knative-v1.7.1/serving-crds.yaml"
}

# split raw yaml into individual resources
data "kubectl_file_documents" "knative_serving_crds" {
  content = data.http.knative_serving_crds.body
}

# apply each resource from the yaml one by one
resource "kubectl_manifest" "knative_serving_crds" {
  depends_on = [kops_cluster_updater.updater]
  for_each   = data.kubectl_file_documents.knative_serving_crds.manifests
  yaml_body  = each.value
}

Terraform repeat blocks

I have a rather trivial question, I think, but I cannot see the answer.
The PagerDuty Terraform provider allows defining a list of targets.
We live a "Full Ownership, Full Empowerment" culture, so each team has its own .tf file assigned where it can rule over its garden.
This is a typical team file:
# create the teams terraform landscape
locals {
  # some team locals
}

resource "pagerduty_service" "teletubbies" {
  # ...
}

resource "pagerduty_escalation_policy" "teletubbies" {
  # ...
  rule {
    escalation_delay_in_minutes = 10
    target {
      type = "schedule_reference"
      id   = pagerduty_schedule.draco.id
    }
  }
  # PLACE OF QUESTION
}

resource "pagerduty_schedule" "teletubbies" {
  # ...
}

resource "pagerduty_service_integration" "teletubbies" {
  # ...
}

resource "pagerduty_extension" "teletubbies" {
  # ...
}
Now, I marked a PLACE OF QUESTION in my code.
Our teams are actually motivated to operate their service "alone". But you know how it works, some things just do not work out. I want to add to each team's policy one or two more rules that will trigger when the owning team does not react to the alert (a safety net).
As I also want clean code, I do not want to copy-paste the same code around 15 times (15 teams).
I fancy something like this:
resource "pagerduty_escalation_policy" "teletubbies" {
#...
#owning teams rules
template(escalation_rules)
}
I have found nothing that kind of "just loads text". I have seen that with TF 0.11 or so you had the template_file data source, but that has been deprecated. I have also seen the newer templatefile(), but it seems to only work with an assignment.
The challenge I faced here is that e.g. templatefile() wants an assignment; you cannot just do
resource "pagerduty_escalation_policy" "teletubbies" {
#...
#owning teams rules
templatefile(file, vars)
}
because it will complain until you do
resource "pagerduty_escalation_policy" "teletubbies" {
#...
#owning teams rules
xxxx = templatefile(file, vars)
}
which again is not what the provider wants of course.
Does anyone have an idea how I can plainly render some definitions (without an assignment)?
I checked modules, but somehow this is also not what I need. I just need a "take a longer string and paste it neatly into the place I want" function.
Thank you for any pointers
PS:
terraform {
  required_version = "~> 0.14.4"

  required_providers {
    pagerduty = {
      source  = "pagerduty/pagerduty"
      version = "~> 1.8.0"
    }
  }
}
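For what it's worth, the usual way to stamp out repeated nested blocks like this without copy-pasting them is a dynamic block iterating over a shared local. A rough sketch, under the assumption that the safety-net rules can be described as a list of objects (the local name, delays, and schedule references below are made up):
locals {
  # shared safety-net rules appended to every team's policy
  safety_net_rules = [
    { delay = 30, schedule_id = pagerduty_schedule.duty_managers.id },
    { delay = 60, schedule_id = pagerduty_schedule.platform_oncall.id },
  ]
}

resource "pagerduty_escalation_policy" "teletubbies" {
  # ...
  rule {
    escalation_delay_in_minutes = 10
    target {
      type = "schedule_reference"
      id   = pagerduty_schedule.teletubbies.id
    }
  }

  # PLACE OF QUESTION: one dynamic block instead of pasted rule blocks
  dynamic "rule" {
    for_each = local.safety_net_rules
    content {
      escalation_delay_in_minutes = rule.value.delay
      target {
        type = "schedule_reference"
        id   = rule.value.schedule_id
      }
    }
  }
}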

How can I use a variable as an attribute name in terraform 3.0?

Is it possible to somehow create an arbitrary attribute from a variable? Here is what I am trying to achieve.
How I currently do it (now deprecated in 3.0.0):
resource "aws_lb_listener_rule" "example" {
condition {
field = var.condition_field
values = var.condition_values
}
}
The new syntax requires a nested block named after the condition field. But my condition field is stored in a variable:
resource "aws_lb_listener_rule" "example" {
condition {
var.condition_field {
values = var.condition_values
}
}
}
Is it possible to somehow create an arbitrary attribute from a variable?
or: Can I store a nested attribute block in a variable?
Background on my question: I am currently trying to upgrade from 2.70.0 to 3.0.0 and there are quite a few breaking changes in my system. One of them includes the aws_lb_listener_rule. If it is not possible to create the attribute from the variable I would have to either pin the version or change the module API used by a ton of projects.
It actually seems like it is not possible to do that. The closest thing I have found that allows me to use 3.0.0 without changing my module variables, and with that all the Terraform scripts that use the module, is dynamic condition blocks.
dynamic "condition" {
for_each = var.field == "path-pattern" ? [var.field] : []
content {
path_pattern {
values = var.patterns
}
}
}
This is repeated for all possible var.field values.
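For example, the host-header case would be another copy of the same shape (a sketch; the var.field and var.patterns names from above are reused purely for illustration):
dynamic "condition" {
  for_each = var.field == "host-header" ? [var.field] : []
  content {
    host_header {
      values = var.patterns
    }
  }
}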

Terraform & OpenStack - Zero downtime flavor change

I'm using openstack_compute_instance_v2 to create instances in OpenStack. There is a lifecycle setting create_before_destroy = true present, and it works just fine when I e.g. change the volume size, where the instances need to be replaced.
But when I do a flavor change, which can be done by using the resize instance option in OpenStack, it does just that and doesn't care about any HA. All instances in the cluster are unavailable for 20-30 seconds before the resize finishes.
How can I change this behaviour?
Some setting like serial from Ansible, or some other option, would come in handy, but I can't find anything.
Just any solution that would allow me to say "at least half of the instances need to be online at all times".
Terraform version: 12.20.
TF plan: https://pastebin.com/ECfWYYX3
The OpenStack Terraform provider knows that it can update the flavor by using a resize API call instead of having to destroy the instance and recreate it.
Unfortunately there's currently no lifecycle option that forces mutable attributes to do a destroy/create or create/destroy (when coupled with the create_before_destroy lifecycle customisation), so you can't easily force this to replace the instance instead.
One option in these circumstances is to find a parameter that can't be modified in place (these are noted by the ForceNew flag on the schema in the underlying provider source code for the resource) and then have a change in the mutable parameter also cascade a change to the immutable parameter.
A common example here would be replacing an AWS autoscaling group when the launch template (which is mutable compared to the immutable launch configurations) changes so you can immediately roll out the changes instead of waiting for the ASG to slowly replace the instances over time. A simple example would look something like this:
variable "ami_id" {
default = "ami-123456"
}
resource "random_pet" "ami_random_name" {
keepers = {
# Generate a new pet name each time we switch to a new AMI id
ami_id = var.ami_id
}
}
resource "aws_launch_template" "example" {
name_prefix = "example-"
image_id = var.ami_id
instance_type = "t2.small"
vpc_security_group_ids = ["sg-123456"]
}
resource "aws_autoscaling_group" "example" {
name = "${aws_launch_template.example.name}-${random_pet.ami_random_name.id}"
vpc_zone_identifier = ["subnet-123456"]
min_size = 1
max_size = 3
launch_template {
id = aws_launch_template.example.id
version = "$Latest"
}
lifecycle {
create_before_destroy = true
}
}
In the above example a change to the AMI triggers a new random pet name, which changes the ASG name, which is an immutable field, so this triggers replacing the ASG. Because the ASG has the create_before_destroy lifecycle customisation, it will create a new ASG, wait for the minimum number of instances to pass EC2 health checks, and then destroy the old ASG.
For your case you can also use the name parameter on the openstack_compute_instance_v2 resource as that is an immutable field as well. So a basic example might look like this:
variable "flavor_name" {
default = "FLAVOR_1"
}
resource "random_pet" "flavor_random_name" {
keepers = {
# Generate a new pet name each time we switch to a new flavor
flavor_name = var.flavor_name
}
}
resource "openstack_compute_instance_v2" "example" {
name = "example-${random_pet.flavor_random_name}"
image_id = "ad091b52-742f-469e-8f3c-fd81cadf0743"
flavor_name = var.flavor_name
key_pair = "my_key_pair_name"
security_groups = ["default"]
metadata = {
this = "that"
}
network {
name = "my_network"
}
}
So. At first I started digging into how, as ydaetskcoR proposed, to use a random instance name.
Name wasn't an option, both because in OpenStack it is a mutable parameter and because I have a fixed naming scheme which I can't change.
I started to look for other parameters that I could modify to force the instance to be recreated instead of modified. I found out about personality.
https://www.terraform.io/docs/providers/openstack/r/compute_instance_v2.html#instance-with-personality
But it didn't work either, mainly because personality no longer seems to be supported:
The use of personality files is deprecated starting with the 2.57 microversion. Use metadata and user_data to customize a server instance.
https://docs.openstack.org/api-ref/compute/
Not sure whether Terraform doesn't support it or there are other issues, but I went with user_data. I already used user_data in the compute instance module, so adding some flavor data there shouldn't be an issue.
So, within user_data I've added the following:
user_data = "runcmd:\n - echo ${var.host["flavor"]} > /tmp/tf_flavor"
No need for random pet names, no need to change instance names. Just change their "personality" by adding the flavor name somewhere. This does force the instance to be recreated when the flavor changes.
So. Instead of simply:
# module.instance.openstack_compute_instance_v2.server[0] will be updated in-place
~ resource "openstack_compute_instance_v2" "server" {
I have now:
-/+ destroy and then create replacement
+/- create replacement and then destroy
Terraform will perform the following actions:
# module.instance.openstack_compute_instance_v2.server[0] must be replaced
+/- resource "openstack_compute_instance_v2" "server" {

Collection or Template in Terraform HCL

I'm trying to find directions on how to do a pretty simple thing in HCL. I have one block like this:
resource "aws_elastic_beanstalk_environment" "qa" {
  name = "qa1"
  # insert settings here
}
And I want to insert a collection of settings where that comment is. But the config is not an array; it should be something like:
desired_block "settings" {
setting {}
setting {}
}
How would I inject something like desired block?
Instead of creating multiple blocks you can pass an array of settings and it would work, like:
resource "aws_elastic_beanstalk_environment" "qa" {
name = "qa1"
settings = ["${var.settings_array}"]
}
Here var.settings_array is an array of settings, like [<settings1>, <settings2>, ...].
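A sketch of what that variable might contain, assuming the namespace/name/value keys of the aws_elastic_beanstalk_environment setting schema (the specific namespaces and values are illustrative):
variable "settings_array" {
  type = "list"

  default = [
    {
      namespace = "aws:autoscaling:asg"
      name      = "MinSize"
      value     = "1"
    },
    {
      namespace = "aws:elasticbeanstalk:environment"
      name      = "EnvironmentType"
      value     = "LoadBalanced"
    },
  ]
}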
