Context: I'm developing a Terraform provider.
HashiCorp's tutorial describes how to manage a resource that can essentially be in two states: CREATED and DELETED (i.e., one where GET resources/{resource_id} returns 404). It's easy to support a CREATING state (meaning creation is in progress) as well, by adding retries / waits in the resourceCreate() method.
What about supporting more advanced states like STOPPED? Is the usual approach to add a status attribute that can be either CREATED or STOPPED, set it to CREATED in the Terraform configuration file (or make the attribute computed), and make the status attribute updatable?
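For illustration, the configuration shape this question envisions might look like the following (the example_server resource type and its attribute names are hypothetical):

```hcl
resource "example_server" "foo" {
  name = "foo"

  # Hypothetical updatable attribute. If omitted, the provider could
  # mark it as computed and default it to "CREATED".
  status = "STOPPED"
}
```

Updating status in place would then translate to a stop/start API call in the provider's update function, rather than a destroy-and-recreate.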
Context: I'm implementing a Terraform Provider (see https://developer.hashicorp.com/terraform/tutorials/providers/provider-setup).
Is it OK for a provider to edit a resource's ID in update(), in other words to call d.SetId()?
I did a quick search and I don't think many TF providers do it (they call d.SetId() in create() / read() only), but I wonder whether there are any downsides / limitations around that. From the user's perspective the resource still has its resource_foo.bar reference, so changing the internal ID might be OK.
From Terraform Core's perspective there is nothing special about the id attribute and the rules for changing it are just the same as for any other attribute.
Unfortunately, the old Terraform SDK (SDKv2) does treat this attribute as special: it overloads it as a signal for whether the object exists at all, and therefore provides the special setter d.SetId for it instead of the normal d.Set API.
Because of this, it is generally not safe to change the ID during an Update function when using SDKv2. The rest of this answer includes some more detail about why, but if the answer "no" is enough for you then feel free to skip the rest. 😄
Terraform Core gives the old SDK some extra latitude in its implementation of the provider protocol because that SDK was originally designed for very old versions of Terraform (Terraform v0.11 and earlier), so changing id during update won't cause an error but it will cause Terraform to emit a warning to its internal logs, which you can see if you run Terraform with the environment variable TF_LOG=warn.
The warning will be something like this:
[WARN] Provider registry.terraform.io/example/example produced an unexpected new value for example_something.foo, but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
.id: was "old value", but now "new value"
If any other resource in the configuration includes a reference to example_something.foo.id then it will appear to Terraform that the plan for that resource has changed between plan and apply, thereby causing an example of the sort of "confusing errors from downstream operations" that warning speaks of:
resource "something_else" "example" {
  foo = example_something.foo.id
}
Error: Provider produced inconsistent final plan
When expanding the plan for something_else.example to include
new values learned so far during apply, provider
registry.terraform.io/something/something produced an invalid
new value for .foo: was "old value", now "new value".
This is a bug in the provider, which should be reported in
the provider's own issue tracker.
Because Terraform Core tried to tolerate the invalid answer from the first provider, the problem ended up being blamed on the second provider instead. This would therefore cause bug report noise for developers of other providers, and so is best avoided.
The newer Terraform Plugin Framework is designed around modern Terraform and doesn't have all the legacy limitations of SDKv2, so for providers written using that framework the id attribute is not special and follows the same rules as for any other attribute, just like Terraform Core.
The rules for changing an attribute during apply are:
During the planning step you must either set the attribute to its new value immediately or you must mark it as unknown so that Terraform will show it as (known after apply) in the plan. You can achieve this using the Plan Modification features of the framework.
During the apply step a provider is allowed to choose any value for an attribute that was marked as unknown during the planning step, but is required to return exactly the same value if the attribute was not marked as unknown during the planning step.
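For example, if a provider built on the framework marks id as unknown during planning, Terraform's plan output renders it roughly like this (resource and attribute values here are illustrative):

```
# example_something.foo will be updated in-place
~ resource "example_something" "foo" {
    ~ id = "old value" -> (known after apply)
  }
```

During apply the provider is then free to return any concrete value for id, since the plan made no promise about it.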
The old SDK is unable to follow these rules because it offers no way to mark the id attribute as unknown during planning. The only way to successfully change the ID of an object after it was already created would be to change the ID to the concrete new value during the planning phase, by implementing a CustomizeDiff function. This would be possible only if the configuration includes enough information to already know what the ID will be before reaching the apply step.
I am using Terraform to execute a script to apply some configuration that is not handled via a provider.
This is done using a null_resource.
The executed script will generate a configuration with a name based on current values. Terraform does not keep track of anything created within a null_resource.
The created configuration should also be updatable. But in order to do that I need to know the values used in the prior execution.
Then the old configuration can be targeted by its name, deleted and the new configuration can be added.
So two consecutive runs would look like this:
The first execution has the variable setting var.name = "ValA". It creates a configuration resource called "config-ValA" via a REST API call.
The next execution has var.name = "ValB". It first deletes the configuration "config-ValA" and then creates "config-ValB". But at this point I no longer have access to the var.name state from the "ValA" execution.
I imagine something like this:
resource "null_resource" "start_script" {
  # Not actual code! I would like to access the old value with var.name.old
  # but I don't know how.
  provisioner "local-exec" {
    command     = "../scripts/dosth.sh ${resource.name.value} ${resource.name.value.old} ${var.name} ${var.name.old}"
    interpreter = ["/bin/bash", "-c"]
  }
}
Is there a way to access the "old" value of a resource/variable in general, or from the context of a null_resource in particular?
Workarounds I know I could already use:
Save the old value in a separate store, e.g. SSM Parameter Store in AWS, and access it as input for the new run.
Use terraform state show <address> (or terraform output -raw <name> for output values) to extract the value from the state and use it as an input in the new run.
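Another workaround, relying on documented Terraform behavior (the script names below are hypothetical): store the value in a null_resource trigger, and read the old value from self in a destroy-time provisioner. Triggers are kept in state, changing one forces replacement, and a when = destroy provisioner runs against the old instance, where self still carries the previous trigger values:

```hcl
resource "null_resource" "start_script" {
  # Changing var.name replaces this resource: the destroy-time
  # provisioner runs first with the OLD value, then the create-time
  # provisioner runs with the NEW value.
  triggers = {
    name = var.name
  }

  provisioner "local-exec" {
    command     = "../scripts/create.sh ${self.triggers.name}"
    interpreter = ["/bin/bash", "-c"]
  }

  provisioner "local-exec" {
    when        = destroy
    command     = "../scripts/delete.sh ${self.triggers.name}"
    interpreter = ["/bin/bash", "-c"]
  }
}
```

This covers the delete-then-create-on-change flow, though it does not give a true in-place update with both values visible at the same time.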
Within the Terraform language itself (.tf files) we only work with the desired new state, and don't have any access to the prior state. Terraform evaluates the configuration first and then, with the help of the associated provider, compares the result of evaluating the configuration to the prior state in order to determine what needs to change.
If you want to respond to changes between the prior state and the desired new state then your logic will need to live inside a provider, rather than inside a module. A provider is the component that handles the transition from one state to another, and so only a provider can "see" both the prior state and the desired state represented by the configuration at the same time, during the planning phase.
The community provider scottwinkler/shell seems to have a resource type shell_script which can act as an adapter to forward the provider requests from Terraform Core to some externally-defined shell scripts. It seems that if you implement an update script then it can in principle access both the prior state and the new desired state, although I'm not personally familiar with the details because I've not actually used this provider. I suggest reviewing the documentation and the code to understand more about how it works, if you choose to use it.
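For reference, a shell_script resource in that provider is configured roughly like this (attribute names are reproduced from the provider's documentation as I recall them, so verify them before use):

```hcl
resource "shell_script" "config" {
  lifecycle_commands {
    create = file("${path.module}/scripts/create.sh")
    read   = file("${path.module}/scripts/read.sh")
    update = file("${path.module}/scripts/update.sh")
    delete = file("${path.module}/scripts/delete.sh")
  }

  environment = {
    NAME = var.name
  }
}
```

The update and delete scripts receive the resource's previous state, which is what would make the prior value accessible.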
I'm working on terraform-provider-ovh where I need to reload configuration only after all the other resources performed their changes.
I can't issue a refresh after every resource makes its own changes, as that creates an async job and would result in conflicts; plus, reloading the configuration is a very long operation. I can't find any way to trigger a "handler", an "async operation", or some kind of provider-scoped post-processing.
My current idea is to create a dedicated resource for "refresh" which can call api to see if there were any changes made and trigger refresh if there are any waiting changes. The problem is that it also needs to be triggered after all the others are done, and I would really want to avoid requiring user to explicitly define depends_on pointing to all resources that might trigger the refresh.
Any ideas for how to solve this reasonably would be very welcome.
I am working on a project which contains a module with nearly 100 exec resources guarded by the creates attribute to ensure that each exec resource is idempotent.
On applying the Puppet class, it logs only the resources that are actually applied, but not the resources that are skipped.
I am trying to refactor the module and want to get a list of all resources in it along with their status (applied, not applied, etc.), and, if a resource is not applied, the reason why.
Is there any way to produce such a report in Puppet?
Thanks
Following is my requirement:
Whenever a site is created, we add some custom attributes to it with the help of a GroupListener.
So assume you are creating a "Liferay Test" site: it will then have a fixed custom attribute "XYZ" whose value is set in the GroupListener's onAfterCreate method.
I can see this value in the custom fields under site settings.
Now, based on these values, we create groups in another system (outside Liferay, using web services).
So far so good.
Whenever we delete the site, we need to remove the equivalent groups from the other system via a web service.
But while deleting the site, we are not able to retrieve the custom attributes in the GroupListener.
On further debugging, by adding an Expando listener, I observed that the Expando listeners are called first, and only then the delete method of GroupLocalService/GroupListener.
Hence we are not able to delete the groups present in the other system.
So I was wondering if we can define an ordering for the listeners.
Note: since we were not getting the custom attributes in the listeners, we implemented GroupLocalServiceImpl, and with this we get the custom attributes in the delete method on our local environment, but not on our stage environment, which uses clustering.
You shouldn't use ModelListeners for this kind of change; rather, create ServiceWrappers, e.g. wrap the interesting methods in GroupLocalService (for creation as well as deletion).
This will also enable you to react to failures to create records in your external system, etc.