How to access old state value in new Terraform run

I am using Terraform to execute a script that applies some configuration which is not handled by a provider.
This is done using a null_resource.
The executed script generates a configuration with a name based on the current values. Terraform does not keep track of anything created within a null_resource.
The created configuration should also be updatable, but in order to do that I need to know the values used in the prior execution.
The old configuration can then be targeted by its name and deleted, and the new configuration can be added.
So two consecutive runs would look like this:
The first execution has the variable setting var.name = "ValA". It creates a configuration resource called "config-ValA" via a REST API call.
The next execution has the variable setting var.name = "ValB". It first deletes the configuration "config-ValA" and then creates "config-ValB". But at this point I no longer have access to the var.name value from the "ValA" execution.
I imagine something like this:
resource "null_resource" "start_script" {
  # Not actual code! I would like to access the old value with var.name.old,
  # but I don't know how.
  provisioner "local-exec" {
    command     = "../scripts/dosth.sh ${resource.name.value} ${resource.name.value.old} ${var.name} ${var.name.old}"
    interpreter = ["/bin/bash", "-c"]
  }
}
Is there a way to access the "old" value of a resource/variable in general, or from the context of a null_resource in particular?
Workarounds I know I could already use:
Save the old value in a separate store, e.g. SSM Parameter Store in AWS, and read it back as an input for the new run.
Extract the value from the prior state (e.g. with terraform state show, or terraform output -raw for an output value) and use it as an input in the new run.
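The first workaround could be sketched roughly like this (illustrative names throughout; note that reading and writing the same parameter from one configuration has ordering caveats, and the parameter must already exist before the first run):

```hcl
# Read the value that the *previous* run stored.
data "aws_ssm_parameter" "previous_name" {
  name = "/myapp/config-name"
}

resource "null_resource" "start_script" {
  triggers = {
    name = var.name
  }

  provisioner "local-exec" {
    # New value first, previous run's value second.
    command     = "../scripts/dosth.sh ${var.name} ${data.aws_ssm_parameter.previous_name.value}"
    interpreter = ["/bin/bash", "-c"]
  }
}

# Store the value used in *this* run so the next run can read it.
resource "aws_ssm_parameter" "current_name" {
  name      = "/myapp/config-name"
  type      = "String"
  value     = var.name
  overwrite = true
}
```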

Within the Terraform language itself (.tf files) we only work with the desired new state, and don't have any access to the prior state. Terraform evaluates the configuration first and then, with the help of the associated provider, compares the result of evaluating the configuration to the prior state in order to determine what needs to change.
If you want to respond to changes between the prior state and the desired new state then your logic will need to live inside a provider, rather than inside a module. A provider is the component that handles the transition from one state to another, and so only a provider can "see" both the prior state and the desired state represented by the configuration at the same time, during the planning phase.
The community provider scottwinkler/shell seems to have a resource type shell_script which can act as an adapter to forward the provider requests from Terraform Core to some externally-defined shell scripts. It seems that if you implement an update script then it can in principle access both the prior state and the new desired state, although I'm not personally familiar with the details because I've not actually used this provider. I suggest reviewing the documentation and the code to understand more about how it works, if you choose to use it.
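For illustration, a shell_script resource might be wired up roughly like this (a sketch based on that provider's documented interface; the script paths are placeholders, and you should check the provider's documentation for the exact contract, e.g. how the prior state is passed to the update script):

```hcl
terraform {
  required_providers {
    shell = {
      source = "scottwinkler/shell"
    }
  }
}

resource "shell_script" "config" {
  lifecycle_commands {
    create = file("${path.module}/scripts/create.sh")
    read   = file("${path.module}/scripts/read.sh")
    update = file("${path.module}/scripts/update.sh") # can see old and new state
    delete = file("${path.module}/scripts/delete.sh")
  }

  environment = {
    NAME = var.name
  }
}
```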

Related

Is it OK for a provider to update resource ID on an update()

Context: I'm implementing a Terraform Provider (see https://developer.hashicorp.com/terraform/tutorials/providers/provider-setup).
Is it OK for a provider to change the resource ID in update()? In other words, to call d.SetId()?
I did a quick search and I don't think many TF providers do this (they call d.SetId() in create() / read() only), but I wonder whether there are any downsides / limitations around that (from the user's perspective the resource still has its resource_foo.bar reference, so changing the internal ID might be OK).
From Terraform Core's perspective there is nothing special about the id attribute and the rules for changing it are just the same as for any other attribute.
Unfortunately, the old Terraform SDK (SDKv2) does treat this attribute as special: it overloads it as a signal for whether the object exists at all, and therefore provides the special setter d.SetId for it instead of the normal d.Set API.
Because of this, it is generally not safe to change the ID during an Update function when using SDKv2. The rest of this answer includes some more detail about why, but if the answer "no" is enough for you then feel free to skip the rest. 😄
Terraform Core gives the old SDK some extra latitude in its implementation of the provider protocol, because that SDK was originally designed for very old versions of Terraform (v0.11 and earlier). Changing id during update therefore won't cause an error, but it will cause Terraform to emit a warning to its internal logs, which you can see if you run Terraform with the environment variable TF_LOG=warn.
The warning will be something like this:
[WARN] Provider registry.terraform.io/example/example produced an unexpected new value for example_something.foo, but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
.id: was "old value", but now "new value"
If any other resource in the configuration includes a reference to example_something.foo.id then it will appear to Terraform that the plan for that resource has changed between plan and apply, thereby causing an example of the sort of "confusing errors from downstream operations" that warning speaks of:
resource "something_else" "example" {
  foo = example_something.foo.id
}
Error: Provider produced inconsistent final plan
When expanding the plan for something_else.example to include
new values learned so far during apply, provider
registry.terraform.io/something/something produced an invalid
new value for .foo: was "old value", now "new value".
This is a bug in the provider, which should be reported in
the provider's own issue tracker.
Because Terraform Core tried to tolerate the invalid answer from the first provider, the problem ended up being blamed on the second provider instead. This would therefore cause bug report noise for developers of other providers, and so is best avoided.
The newer Terraform Plugin Framework is designed around modern Terraform and doesn't have all the legacy limitations of SDKv2, so for providers written using that framework the id attribute is not special and follows the same rules as for any other attribute, just like Terraform Core.
The rules for changing an attribute during apply are:
During the planning step you must either set the attribute to its new value immediately or you must mark it as unknown so that Terraform will show it as (known after apply) in the plan. You can achieve this using the Plan Modification features of the framework.
During the apply step a provider is allowed to choose any value for an attribute that was marked as unknown during the planning step, but is required to return exactly the same value if the attribute was not marked as unknown during the planning step.
The old SDK is unable to follow these rules because it offers no way to mark the id attribute as unknown during planning. The only way to successfully change the ID of an object after it was already created would be to change the ID to the concrete new value during the planning phase, by implementing a CustomizeDiff function. This would be possible only if the configuration includes enough information to already know what the ID will be before reaching the apply step.

How do terraform providers implement resource lifecycle (resource's STOPPED status)?

Context: I'm developing a terraform provider.
HashiCorp's tutorial describes how to manage a resource that is pretty much in one of two states: CREATED and DELETED (i.e. GET resources/{resource_id} returns 404). It's easy to support a CREATING state (meaning creation is in progress) as well, by adding retries / waits in the resourceCreate() method.
What about supporting more advanced states like STOPPED? Would the right approach be to add a status attribute that can be either CREATED or STOPPED, set it to CREATED in the Terraform configuration file (or make the attribute computed), and make the status attribute updatable?
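To illustrate the proposal in the question, the user-facing configuration for such a resource might look like this (purely hypothetical resource type and attribute names):

```hcl
resource "example_instance" "vm" {
  name = "demo"

  # Optional + computed attribute: the provider's Update function would call
  # the stop/start API when this value changes, and Read would refresh it
  # from the remote API.
  status = "STOPPED"
}
```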

How to create a resource group that can be shared between modules in terraform?

What is the proper way to create a resource group in Terraform for Azure that can be shared across different modules? I've been banging my head against this for a while and it's not working. As you can see in this image, I have a resource group in a separate folder. In my main.tf file I load the modules appservice and cosmosdb. I can't seem to figure out how to make the appservice and cosmosdb .tf files reference the resource group in that location. How is this done? Any suggestions would be greatly appreciated. Thank you.
In general, it is not recommended to organize your code into modules that contain only a single resource, as you have done here. However, in this situation you would need to provide the exported resource attributes as an output of that module. In your resource_group module:
output "my_env_rg" {
  value       = azurerm_resource_group.rg
  description = "The my-env-rg Azure resource group."
}
Then, the output containing the map of exported resource attributes for the resource becomes accessible in a config module where you have declared the module. For example, in your root module config (presumably containing your main.tf referenced in the question):
module "azure_resource_group" {
  # Local module paths must begin with "./" or "../".
  source = "./resource-group"
}
would make the output accessible with the namespace module.<MODULE NAME>.<OUTPUT NAME>. In this case, that would be:
module.azure_resource_group.my_env_rg
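From there, other modules declared in the same root module can accept the exported attributes as input variables (illustrative variable names):

```hcl
module "appservice" {
  source = "./appservice"

  # Pass the shared resource group's exported attributes into the module.
  resource_group_name = module.azure_resource_group.my_env_rg.name
  location            = module.azure_resource_group.my_env_rg.location
}
```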
There are two different kinds of sharing that require different solutions. You need to decide which kind you're looking for, because your example isn't very illustrative.
The first is where you want to make a pattern of creating things that you want to use twice. The goal is to create many different things, each with different parameters. The canonical example is a RDS instance or an EC2 instance. Think of the Terraform module as a function where you execute it with different inputs in different places and use the different results independently. This is exactly what Terraform modules are for.
The second is where you want to make a thing and reference it in multiple places. The canonical example is a VPC. You don't want to make a new VPC for every autoscaling group - you want to reuse it.
Terraform doesn't have a good built-in way of stitching the outputs from one set of resources into another set as inputs; Atlas and CloudFormation do. If you're not using those, then you have to stitch them together yourself. I have always written a wrapper around Terraform that enables me to do this (and other things, like validation and authentication): save the outputs to a known place and then reference them as inputs later.
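One way to do the "save the outputs to a known place" step without a custom wrapper is the terraform_remote_state data source (backend details here are illustrative):

```hcl
# Read the outputs of the VPC configuration's state from its backend.
data "terraform_remote_state" "vpc" {
  backend = "s3"

  config = {
    bucket = "my-terraform-state"
    key    = "vpc/terraform.tfstate"
    region = "us-east-1"
  }
}

# The other configuration's root-level outputs become inputs here, e.g.:
# subnet_ids = data.terraform_remote_state.vpc.outputs.private_subnet_ids
```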

Removing last record terraform state in a terraform workspace

I am using terraform cloud workspaces.
By mistake, I uploaded the wrong Terraform state to my workspace. Now my terraform plan is using it, but I don't want that, since it is not the state I wanted.
Let me explain with this image: I want to use the state from 10 months ago and not the new state I got recently.
I want to go back to my old state, since some resources are missing from the new one and are therefore being recreated in my terraform plan.
I am trying to import every resource separately by executing the terraform import <RESOURCE-INSTANCE> <ID> command like this:
terraform import azurerm_app_service_plan.fem-plan /subscriptions/d37e88d5-e443-4285-98c9-91bf40d514f9/resourceGroups/rhd-spec-prod-rhdhv-1mo5mz5r2o4f6/providers/Microsoft.Web/serverfarms/fem-plan-afd1
Acquiring state lock. This may take a few moments...
azurerm_app_service_plan.fem-plan: Importing from ID "/subscriptions/d37e88d5-e443-4285-98c9-91bf40d514f9/resourceGroups/rhd-spec-prod-rhdhv-1mo5mz5r2o4f6/providers/Microsoft.Web/serverfarms/fem-plan-afd1"...
azurerm_app_service_plan.fem-plan: Import prepared!
Prepared azurerm_app_service_plan for import
azurerm_app_service_plan.fem-plan: Refreshing state... [id=/subscriptions/d37e88d5-e443-4285-98c9-91bf40d514f9/resourceGroups/rhd-spec-prod-rhdhv-1mo5mz5r2o4f6/providers/Microsoft.Web/serverfarms/fem-plan-afd1]
Error: Cannot import non-existent remote object
While attempting to import an existing object to
azurerm_app_service_plan.fem-plan, the provider detected that no object exists
with the given id. Only pre-existing objects can be imported; check that the
id is correct and that it is associated with the provider's configured region
or endpoint, or use "terraform apply" to create a new remote object for this
resource.
Releasing state lock. This may take a few moments...
But the output says that the resource does not exist, because Terraform is using my latest new state, in which that resource is not included.
How can I get back to my state from 10 months ago?
If someone can point me in the right direction I will appreciate it.
Maybe you can use the terraform refresh command to update the state file to match the current state of the resources. You can find the details of the command in the Terraform documentation.

How to list all of the attributes in a terraform_remote_state datasource?

I've been trying to debug an issue related to a terraform_remote_state datasource.
Goal: Show all of the attributes in data terraform_remote_state my_remote_state
Attempt: terraform state show data.terraform_remote_state.my_remote_state
Expected: show all attributes for the datasource
Result:
No instance found for the given address!
This command requires that the address references one specific instance.
To view the available instances, use "terraform state list". Please modify
the address to reference a specific instance.
I ran terraform state list to view the listed addresses, but the data source's address wasn't listed. If anyone can help me with this I would greatly appreciate it.
References: https://www.terraform.io/docs/commands/state/show.html
The commands terraform state show and terraform state list will show you the resources contained within the state for your current backend. Remote state is not state managed by your current backend.
If you have access to the definition for the other backend, simply change to that project/repo, initialise, make sure you're in the correct workspace, then use terraform state show and terraform state list there instead.
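If you only need to see what the remote state exposes, another option is to surface its outputs from your own configuration and inspect them with terraform output or terraform console (a sketch, assuming the data source is already declared in your config):

```hcl
output "remote_state_outputs" {
  # Everything the other configuration exported as root-level outputs.
  value = data.terraform_remote_state.my_remote_state.outputs
}
```

Note that terraform_remote_state only exposes the root module outputs of the other configuration, not the attributes of all its resources.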