Terraform: can I pass ignore_changes to the module?

Can I pass ignore_changes to a Terraform module?
In my case, I do not want to update the autoscaling group when the AMI is updated.
After a brief review, it seems that it cannot be done: https://github.com/hashicorp/terraform/issues/21546
Of course, I could keep two copy-pasted module versions, one with ignore_changes and one without, but that doesn't look good. Maybe I'm just missing something?

As of 2022-02-16, it seems this isn't possible. There are two GitHub issues documenting this: the one you pointed out, and a more generic issue about handling interpolation in lifecycle attributes: https://github.com/hashicorp/terraform/issues/3116. As of now, the reasoning seems to be:
phinze:
The real issue here is that lifecycle blocks cannot contain interpolated values. This is because lifecycle modifications can change the shape of the graph, which makes handling computed values for them extra tricky. It's something we can theoretically do, but requires some thought and effort. Tagging as an enhancement.
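To make the restriction concrete: a lifecycle block like the following is rejected before planning, because ignore_changes must be a static list of attribute references rather than an expression (a minimal sketch; the resource and variable names are illustrative):

resource "aws_autoscaling_group" "example" {
  # ...

  lifecycle {
    # Not allowed: lifecycle arguments cannot reference variables,
    # so this fails validation before planning even starts.
    ignore_changes = var.attributes_to_ignore
  }
}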
As a workaround (albeit not an optimal one), we can hardcode the values into ignore_changes at the module scope and then use count with a variable, e.g. create_resource_with_ignore_changes = 1, to get the resource provisioned with ignore_changes from the module scope. This isn't what the question asked for; instead it hardcodes ignore_changes into the module and toggles provisioning via a count switch. Below is an example of how this could work:
variables.tf
variable "create_resource_with_ignore_changes" {
type = number
description = "Choose whether to create a version that uses hardcoded ignore_changes"
default = 1
}
calling-a-module.tf
module "servers" {
source = "./app-cluster"
create_resource_with_ignore_changes = var.create_resource_with_ignore_changes
}
inside-servers-module.tf
resource "a_terraform_resource" "example" {
count = var.create_resource_with_ignore_changes
# ...
lifecycle {
ignore_changes = [
# your hardcoded changes to ignore here
]
}
}
A benefit of this approach is that you can support different configurations while still using a single module. You can also nest other logic into the count argument, for instance by creating a string and checking whether it matches a named configuration with a ternary operator:
resource "a_terraform_resource" "example" {
count = var.my_resource_config == "someHardcodedNamedConfig" ? 1 : 0
# ...
}
This also got me wondering whether it's possible to conditionally control just the lifecycle block. There's an SO post that answers that question: Terraform conditionally apply lifecycle block.
Unfortunately this also isn't possible, for reasons similar to those behind this question, making a duplicated resource (one with and one without the lifecycle block) the currently feasible workaround, as sketched below.
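For reference, that duplicate-resource workaround follows the same count pattern. A minimal sketch, where var.use_ignore_changes is an assumed boolean toggle and exactly one of the two variants is created:

resource "a_terraform_resource" "with_ignore" {
  count = var.use_ignore_changes ? 1 : 0
  # ... same arguments as the variant below ...

  lifecycle {
    ignore_changes = [
      # hardcoded attributes to ignore
    ]
  }
}

resource "a_terraform_resource" "without_ignore" {
  count = var.use_ignore_changes ? 0 : 1
  # ... same arguments as the variant above ...
}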

Related

How to resolve the Terraform error "Invalid count argument"?

I'm trying to add a New Relic One synthetic monitor using a common "monitor" module we use in Terraform, where I also want to attach a new alert condition policy. It works fine if I create the resources one by one, but when I try to apply all the changes at once it shows the error below.
Error: Invalid count argument
on .terraform/modules/monitor/modules/synthetics/syn_alert.tf line 11, in resource "newrelic_alert_policy" "policy":
11: count = var.policy_id != null ? 0 : var.create_alerts == true ? 1 : var.create_multilocation_alerts == true ? 1 : 0
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
I expected this to work, since it worked when I applied step by step. I tried to treat it as a resource dependency problem, so I also added depends_on with the required resources, like
depends_on = [newrelic_alert_policy.harvester_ping_failure_alert_policy, newrelic_alert_channel.slack_channel]
but it still doesn't work as expected.
This error suggests that one of the input variables you included here has a value that won't be known until the apply step:
var.policy_id
var.create_alerts
var.create_multilocation_alerts
You didn't show exactly how you're defining those variables in the calling module block, but I'm guessing that policy_id is the problematic one, because you've probably assigned it an attribute from a managed resource in the parent module, and the remote object corresponding to that resource instance hasn't been created yet, so its ID isn't known yet.
If that's true, you'll need to define this differently so that the choice about whether to declare this object is a separate value from the ID itself, and then make sure that choice is not based on the outcome of any managed resource elsewhere in the configuration.
One way to do that would be like this:
variable "project" {
type = object({
id = string
})
default = null
}
This means that the decision about whether or not to set this object can be represented by the "nullness" of the entire object, even though the id attribute inside a non-null object might be unknown.
module "monitor" {
# ...
project = {
id = whatever_resource_type.name.id
}
}
If the object whose ID you're passing in here is itself a resource instance with an id attribute, as I showed above, then you can also make this more concise by assigning the whole object at once:
module "monitor" {
# ...
project = whatever_resource_type.name
}
Terraform will check to make sure that whatever_resource_type.name has an id attribute, and if so it will use it to populate the id attribute of the variable inside the module.
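Inside the module, the creation decision can then depend only on whether the object is null, which is known at plan time even when the id inside it is not. A sketch of how that might look, adapting the resource type from the question (these expressions are assumptions, not the module's actual code):

resource "newrelic_alert_policy" "policy" {
  # The nullness of the whole object is decided in configuration,
  # so this count can be resolved during planning.
  count = var.project == null ? 1 : 0
  # ...
}

locals {
  # Prefer the caller-provided ID; otherwise use the one created here.
  # one() returns null for an empty splat, so this is safe either way.
  policy_id = var.project != null ? var.project.id : one(newrelic_alert_policy.policy[*].id)
}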

Create resource via terraform but do not recreate if manually deleted?

I want to initially create a resource using Terraform, but if the resource is later deleted outside of TF, e.g. manually by a user, I do not want Terraform to re-create it. Is this possible?
In my case the resource is a blob in Azure Blob Storage. I tried using ignore_changes = all, but that didn't help: every time I ran terraform apply, it would recreate the blob.
resource "azurerm_storage_blob" "test" {
name = "myfile.txt"
storage_account_name = azurerm_storage_account.deployment.name
storage_container_name = azurerm_storage_container.deployment.name
type = "Block"
source_content = "test"
lifecycle {
ignore_changes = all
}
}
The requirement you've stated is not supported by Terraform directly. To achieve it you will need to either implement something completely outside of Terraform or use Terraform as part of some custom scripting written by you to perform a few separate Terraform steps.
If you want to implement it by wrapping Terraform then I will describe one possible way to do it, although there are various other variants of this that would get a similar effect.
My idea would be to implement a sort of "bootstrapping mode" which your custom script enables only for the initial creation; for subsequent work you would not use the bootstrapping mode. Bootstrapping mode would be a combination of an input variable to activate it and an extra step after using it.
variable "bootstrap" {
type = bool
default = false
description = "Do not use this directly. Only for use by the bootstrap script."
}
resource "azurerm_storage_blob" "test" {
count = var.bootstrap ? 1 : 0
name = "myfile.txt"
storage_account_name = azurerm_storage_account.deployment.name
storage_container_name = azurerm_storage_container.deployment.name
type = "Block"
source_content = "test"
}
This alone would not be sufficient, because normally if you were to run Terraform once with -var="bootstrap=true" and then again without it, Terraform would plan to destroy the blob after noticing it's no longer present in the configuration.
So to make this work we need a special bootstrap script which wraps Terraform like this:
  terraform apply -var="bootstrap=true"
  terraform state rm azurerm_storage_blob.test
That second terraform state rm command above tells Terraform to forget about the object it currently has bound to azurerm_storage_blob.test. That means that the object will continue to exist but Terraform will have no record of it, and so will behave as if it doesn't exist.
If you run the bootstrap script, you will have the blob existing but with Terraform unaware of it. You can then run terraform apply as normal (without setting the bootstrap variable) and Terraform will both ignore the object previously created and not plan to create a new one, because the resource will now have count = 0.
This is not a typical use-case for Terraform, so I would recommend considering other possible solutions to meet your use-case, but I hope the above is useful as part of that design work.
If you have a resource defined in a Terraform configuration, then Terraform will always try to create it. I don't know the details of your setup, but maybe you want to move the blob creation into a CLI script and run Terraform and the script in the desired order.

Terraform ignore_changes for resource output

Is there any way to ignore changes to a resource's output? Or to tell Terraform not to refresh it?
A Terraform resource I'm using returns a state_info output (map of string) that can be modified by processes outside of Terraform. I want to ignore these changes. Is this possible?
resource "aiven_vpc_peering_connection" "this" {
lifecycle {
ignore_changes = [
state_info
]
}
}
state_info is getting set to null outside of Terraform. I'm using state_info in other Terraform resources, and subsequent terraform plan runs fail with "aiven_vpc_peering_connection.this.state_info is empty map of string".
The ignore_changes mechanism instructs Terraform to disregard a particular argument when it's comparing values in the configuration with values in the prior state snapshot, so it doesn't have any effect for attributes that are only saved in the prior state due to them not being explicitly configurable.
It sounds like what you want is instead to have Terraform disregard a particular argument when it's updating the prior state to match remote objects (the "refresh" step), so that the result would end up being a mixture of new content from the remote API and content previously saved in the state. Terraform has no mechanism to achieve that: the values stored in the state after refreshing are exactly what the provider returned. This guarantee can be important for some resource types because retaining an old value for one argument while allowing others to change could make the result inconsistent, if e.g. the same information is presented in multiple different ways.
The closest you can get to what you described is to use the value as exported by the upstream resource and then specify ignore_changes on the resource where you ultimately use that value, telling Terraform to ignore the changes in the upstream object when comparing the downstream object with its configuration.
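Applied to this example, that might look like the following. This is only a sketch: the downstream resource type and the state_info key are assumptions about where the value is consumed, not taken from the question.

resource "aws_vpc_peering_connection_accepter" "example" {
  # Hypothetical: assumes state_info carries the AWS peering connection ID.
  vpc_peering_connection_id = aiven_vpc_peering_connection.this.state_info["aws_vpc_peering_connection_id"]
  auto_accept               = true

  lifecycle {
    # Ignore upstream-driven changes here, where the value is used,
    # rather than trying to freeze state_info on the exporting resource.
    ignore_changes = [vpc_peering_connection_id]
  }
}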

Terraform aws_ssm_parameter null/empty with ignore_changes

I have a Terraform config that looks like this:
resource "random_string" "foo" {
length = 31
special = false
}
resource "aws_ssm_parameter" "bar" {
name = "baz"
type = "SecureString"
value = random_string.foo.result
lifecycle {
ignore_changes = [value]
}
}
The idea is that on the first terraform apply, the value of foo is stored as baz in SSM, and on subsequent applies I can reference aws_ssm_parameter.bar.value. What I see is that it works on the first run and stores the newly created random value, but on subsequent runs aws_ssm_parameter.bar.value is empty.
If I create an aws_ssm_parameter data source, it can pull the value correctly, but that doesn't work on the first apply, when the parameter doesn't exist yet. How can I modify this config so that it both creates the value and lets me read the value stored in baz from the same config?
(Sorry not enough karma to comment)
To fix the chicken-and-egg problem, you could add depends_on = [aws_ssm_parameter.bar] to a data block, but this introduces some awkwardness (especially if you need to call destroy often in your workflow). It's not particularly recommended (see here).
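A sketch of that arrangement, reusing the parameter from the question:

data "aws_ssm_parameter" "bar" {
  name = "baz"

  # Forces this read to happen only after the managed parameter exists,
  # at the cost of deferring the read on every subsequent plan.
  depends_on = [aws_ssm_parameter.bar]
}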
It doesn't really make sense that it's returning empty, though, so I wonder if you've hit a different bug. Does the value actually get posted to SSM (i.e. can you see it when you run aws ssm get-parameter ...)?
Edit: I just tested your example code above with:
output "bar" {
  value = aws_ssm_parameter.bar.value
}
and it seems to work fine. Maybe you need to update Terraform or your plugins?
Oh, I forgot about this question, but it turns out I did figure out the problem.
The issue was that I was creating the SSM parameter inside a module that was itself used from another module. Because I didn't output anything related to the parameter, Terraform seemed to drop it from state on subsequent plans after it was created. Exposing it as an output on the module fixed the issue.
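For reference, the fix amounts to adding an output like this in the inner module (a sketch with an assumed output name):

# In the module that declares aws_ssm_parameter.bar:
output "bar_value" {
  value     = aws_ssm_parameter.bar.value
  sensitive = true # the parameter is a SecureString
}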

In Terraform 0.12, how to skip creation of resource, if resource name already exists?

I am using Terraform version 0.12. I have a requirement to skip resource creation if a resource with the same name already exists.
I did the following for this:
Read the list of custom images:
data "ibm_is_images" "custom_images" {
}
Check if the image already exists:
locals {
  custom_vsi_image = contains([for x in data.ibm_is_images.custom_images.images : "true" if x.visibility == "private" && x.name == var.vnf_vpc_image_name], "true")
}

output "abc" {
  value = "${local.custom_vsi_image}"
}
Create the image only if it does not already exist:
resource "ibm_is_image" "custom_image" {
count = "${local.custom_vsi_image == true ? 0 : 1}"
depends_on = ["data.ibm_is_images.custom_images"]
href = "${local.image_url}"
name = "${var.vnf_vpc_image_name}"
operating_system = "centos-7-amd64"
timeouts {
create = "30m"
delete = "10m"
}
}
This works fine the first time with terraform apply: it finds that the image does not exist, so it creates it.
When I run terraform apply a second time, it deletes the custom_image resource created above. Any idea why it deletes the resource when run the second time?
Also, how can I create a resource based on some condition (like only when it does not exist)?
In Terraform, you're required to decide explicitly what system is responsible for the management of a particular object, and conversely which systems are just consuming an existing object. There is no way to make that decision dynamically, because that would make the result non-deterministic and -- for objects managed by Terraform -- make it unclear which configuration's terraform destroy would destroy the object.
Indeed, that non-determinism is why you're seeing Terraform in your situation flop between trying to create and then trying to delete the resource: you've told Terraform to only manage that object if it doesn't already exist, and so the first time you run Terraform after it exists Terraform will see that the object is no longer managed and so it will plan to destroy it.
If your goal is to manage everything with Terraform, an important design task is to decide how object dependencies flow within and between Terraform configurations. In your case, it seems like there is a producer/consumer relationship between a system that manages images (which may or may not be a Terraform configuration) and one or more Terraform configurations that consume existing images.
If the images are managed by Terraform then that suggests either that your main Terraform configuration should assume the image does not exist and unconditionally create it -- if your decision is that the image is owned by the same system as what consumes it -- or it should assume that the image does already exist and retrieve the information about it using a data block.
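In the latter case, the consuming configuration would just look the image up by name, e.g. (a sketch using the IBM provider's image data source; the variable name follows the question):

data "ibm_is_image" "custom_image" {
  name = var.vnf_vpc_image_name
}

# Then reference data.ibm_is_image.custom_image.id wherever the image is needed.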
A possible solution here is to write a separate Terraform configuration that manages the image and then only apply that configuration in situations where that object isn't expected to already exist. Then your configuration that consumes the existing image can just assume it exists without caring about whether it was created by the other Terraform configuration or not.
There's a longer overview of this situation in the Terraform documentation section Module Composition, and in particular the sub-section Conditional Creation of Objects. That guide is focused on interactions between modules in a single configuration, but the same underlying principles apply to dependencies between configurations (via data sources) too.
