context: I'm adding a new resource foo to Terraform provider.
The interesting detail about foo is that the corresponding POST API request requires passing zero or more secret values (best represented as a TypeMap), and afterwards these secrets are pretty much useless (both READ and UPDATE work without them). I don't think it really makes sense to store these secrets in the Terraform configuration / state, which is why I'd like to set this secrets_map attribute to either null or an empty map.
So I started with this definition for this resource:
"secrets_map": {
Type: schema.TypeMap,
Elem: &schema.Schema{
Type: schema.TypeString,
},
Sensitive: true,
Optional: true,
Computed: true,
ForceNew: false,
},
I picked Optional + Computed over Required: true because d.Set("secrets_map", make(map[string]string)) was failing, but I'm still running into some issues. Is there an example of such a setup?
My ideal workflow would be:
Import works and doesn't require the user to set secrets_map at all.
Initially a user sets the secrets_map attribute so the provider can read the secrets and pass them in the CREATE request, and terraform apply / plan confirm everything is in sync. The user then deletes this block from their TF configuration file and runs terraform apply to update secrets_map in the TF state, after which terraform plan reports no changes.
I also found that DiffSuppressFunc doesn't really work for TypeMap, which is a bummer: Is it possible to use DiffSuppressFunc for a TypeMap in Terraform SDK v2?
My newest idea is to set secrets_map to null in fooRead().
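Roughly, the sketch I have in mind for the read function is below (hedged: as far as I can tell SDKv2 has no way to write a true null from a read function, so an empty map is the closest approximation; whether this plays nicely with Optional + Computed is part of what I'm asking):

import (
    "context"

    "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func fooRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
    // ...read and d.Set(...) the non-secret attributes from the API as usual...

    // The API never returns the secrets, so normalize secrets_map to an empty map,
    // in the hope that a removed secrets_map block no longer shows up as a diff.
    if err := d.Set("secrets_map", map[string]interface{}{}); err != nil {
        return diag.FromErr(err)
    }
    return nil
}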
Related
Context: I'm developing a TF Provider using Terraform SDKv2.
I've noticed there's a prevent_destroy lifecycle attribute that should protect against accidental deletion of important resources (ones that should only be deleted in exceptional cases).
However, it looks like it won't prevent issues where the resource is accidentally removed from the TF configuration (or some other silly developer error), since the prevent_destroy attribute will be removed along with it and therefore not applied.
We then noticed there's a deletion_protection attribute for the aws_rds_cluster resource in the AWS TF Provider. Is that the intended mechanism, or are there other built-in mechanisms to address this?
For example, GCP's Best practices for using Terraform mentions:
resource "google_sql_database_instance" "main" {
name = "primary-instance"
settings {
tier = "D0"
}
lifecycle {
prevent_destroy = true
}
}
so it sounds like prevent_destroy might be good enough. That said, there's a deletion_protection attribute for the google_sql_database_instance resource too. What's even more interesting, for google_bigquery_table the deletion_protection attribute defaults to true, and there's a note:
On newer versions of the provider, you must explicitly set deletion_protection=false (and run terraform apply to write the field to state) in order to destroy an instance. It is recommended to not set this field (or set it to true) until you're ready to destroy.
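For illustration, here's a sketch of how I understand the two guards combining (attribute names per the GCP provider docs quoted above; the values are made up), since they seem to protect against different failure modes:

resource "google_sql_database_instance" "main" {
  name = "primary-instance"

  # Provider-side guard: the value is written to state, so it still applies
  # even if someone deletes this whole resource block from the configuration.
  deletion_protection = true

  settings {
    tier = "D0"
  }

  lifecycle {
    # Terraform-side guard: only effective while this block is present in the config.
    prevent_destroy = true
  }
}

If I understand correctly, prevent_destroy only applies while the lifecycle block still exists in the configuration, whereas deletion_protection is kept in state, so the provider can still refuse a destroy even after the block has been removed.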
Context: I'm working on a new TF Provider using SDKv2.
I'm adding a new data plane resource which has a very weird API. Namely, there are some sensitive attributes (specific to this resource, so they can't be set under the provider block -- think DataDog / Slack API secrets that this resource needs in order to interact with those services under the hood) that I need to pass on creation but that are not necessary later on (not even for the update operation). My minimal code sample:
resource "foo" "bar" {
name = "abc"
sensitive_creds = {
"datadog_api_secret" = "abc..."
// might pass "slack_api_secret" instead
}
...
}
How can I implement this in Terraform so as to avoid state drift, etc.?
So far I can see 3 options:
Make the user pass it on creation and don't save "sensitive_creds" to TF state. Then make the user set sensitive_creds = {} to avoid state drift on the next terraform plan run.
Make the user pass it on creation and don't save "sensitive_creds" to TF state. Then make the user add ignore_changes = [sensitive_creds] to their Terraform configuration, as sketched after this list.
Save "sensitive_creds" to TF state and live with it, since users are likely to encrypt their TF state anyway.
The most typical compromise is for the provider to still save the user's specified value to the state during create and then to leave it unchanged in the "read" operation that would normally update the state to match the remote system.
The result of this compromise is that Terraform can still detect when the user has intentionally changed the secret value in the configuration, but Terraform will not be able to detect changes made to the value outside of Terraform.
This is essentially your option 3. The Terraform provider protocol requires that the values saved to the state after create exactly match anything the user has specified in the configuration, so your first two options would violate the expected protocol and thus be declared invalid by Terraform Core.
Since you are using SDKv2, you can potentially "get away with it", because Terraform Core permits that older SDK to violate some of the rules as a pragmatic way to deal with the fact that SDKv2 was designed for older versions of Terraform and therefore doesn't implement the type system correctly. However, Terraform Core will still emit warnings into its own logs noting that your provider produced an invalid result, and there may be error messages raised against downstream resources if their configuration is derived from the value of your sensitive_creds argument.
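As a minimal SDKv2-style sketch of that compromise (the resource and attribute names come from your example; the apiClient type and its CreateFoo/GetFoo methods are placeholders rather than a real API):

import (
    "context"

    "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func resourceFooCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
    client := meta.(*apiClient) // placeholder client type

    // Send the secrets only with the create call.
    creds := d.Get("sensitive_creds").(map[string]interface{})
    id, err := client.CreateFoo(ctx, d.Get("name").(string), creds)
    if err != nil {
        return diag.FromErr(err)
    }
    d.SetId(id)

    // Note: sensitive_creds is not overwritten here, so whatever the user wrote
    // in the configuration is what ends up in state, as the protocol expects.
    return resourceFooRead(ctx, d, meta)
}

func resourceFooRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
    client := meta.(*apiClient) // placeholder client type

    foo, err := client.GetFoo(ctx, d.Id())
    if err != nil {
        return diag.FromErr(err)
    }
    if err := d.Set("name", foo.Name); err != nil {
        return diag.FromErr(err)
    }

    // Deliberately no d.Set("sensitive_creds", ...): the API never returns the
    // secrets, so the value the user configured stays untouched in state.
    return nil
}

The effect is what the previous paragraphs describe: edits to sensitive_creds in the configuration still show up as a diff, while changes made to the secret outside of Terraform go undetected.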
I am working with Terraform 0.11 and the AWS provider; I am looking for a way to prevent a few resources from being destroyed during the destroy phase. So I used the following approach.
lifecycle {
  prevent_destroy = true
}
When I run a "terraform plan" I get the following error.
the plan would destroy this resource, but it currently has
lifecycle.preven_destroy set to true. to avoid this error and continue with the plan.
either disable or adjust the scope.
All that I am looking for is a way to avoid destroying one of the resources and its dependencies during the destroy command.
AFAIK this feature is not yet supported.
You need to remove that resource from the state file and then re-import it afterwards:
terraform plan | grep <resource> | grep id
terraform state rm <resource>
terraform destroy
terraform import <resource> <ID>
The easiest way to do this would be to comment out all of the resources that you want to destroy and then do a terraform apply.
I've found the most practical way to manage this is through a combination of variables that allow the resource in question to be conditionally created (or not) via the use of count, alongside having all other resources depend on the associated Data Source instead of the conditionally created resource.
A good example of this is a Route 53 Hosted Zone which can be a pain to destroy and recreate if you manage your domain outside of AWS and need to update your nameservers, waiting for DNS propagation each time you spin it up.
1. By specifying some variable
variable "should_create_r53_hosted_zone" {
type = bool
description = "Determines whether or not a new hosted zone should be created on apply."
}
2. you can use it alongside count on the resource to conditionally create it.
resource "aws_route53_zone" "new" {
count = var.should_create_r53_hosted_zone ? 1 : 0
name = "my.domain.com"
}
3. Then, by following up with a call to the associated Data Source
data "aws_route53_zone" "existing" {
name = "my.domain.com"
depends_on = [
aws_route53_zone.new
]
}
4. you can give all other resources consistent access to the resource's attributes regardless of whether or not your flag has been set.
resource "aws_route53_record" "rds_reader_endpoint" {
zone_id = data.aws_route53_zone.existing.zone_id
# ...
}
This approach is only slightly better than commenting / uncommenting resources during apply, but at least gives some consistent, documented way of working around it.
I am doing some learning with Terraform and Sentinel.
I can't get some of the basic functionality working.
I have a policy here:
import "tfconfig"
default_foo = rule { tfconfig.variables.foo.default is "bar" }
default_number = rule { tfconfig.variables.number.default is 42 }
main = rule { default_foo and default_number }
and a variables file here:
variable "foo" {
default = "bar"
}
variable "number" {
default = 42
}
But when I run:
sentinel apply policy.sentinel
I get the following error:
policy.sentinel:1:1: Import "tfconfig" is not available.
Any ideas? I have been looking for a solution for a number of hours now.
Thanks.
In order to use the Terraform-specific imports in the Sentinel SDK, you need to use mock data to produce a data structure to test against.
When you run Terraform via Terraform Cloud, a successful plan will produce a Sentinel mocks file that contains the same data that Terraform Cloud would itself use when evaluating policies against that plan, and so you can check that mock data into your repository as part of your test suite for your policies.
You can use speculative plans (run terraform plan on the command line with the remote backend enabled) to create mock data for intentionally-invalid configurations that you want to test your policy against, without having to push those invalid configurations into your version control system.
You can use sentinel test against test cases whose JSON definitions include a mock object referring to those mock files, and then the policies evaluated by those test cases will be able to import tfconfig, tfplan and tfstate and get an equivalent result to if the policies were run against the original plan in Terraform Cloud.
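As an illustration (the file layout and paths here are just an assumption, not something prescribed above), such a test case is a JSON file that points at the mock data and states the expected rule results, roughly like:

{
  "mock": {
    "tfconfig": "../../testdata/mock-tfconfig.sentinel"
  },
  "test": {
    "main": true
  }
}

By default the Sentinel CLI's sentinel test command looks for these files under test/<policy-name>/ next to the policy file.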
I had my mock data and everything and I was still getting this error.
Runtime error while running the policy:
ensure-policy.sentinel:3:1: Import "tfplan" is not available
I realized I had forgotten the 'sentinel.json' file at the root of my policies dir, which tells Sentinel where to look for the mock data:
https://www.terraform.io/docs/cloud/sentinel/mock.html
The page above shows how they recommend laying out a default directory structure, but you still need that .json file to tell Sentinel where to look.
And here's roughly what the sentinel.json file should look like:
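(Sketch only, based on the page linked above rather than an exact copy of the file: it's essentially a mapping from import names to mock files, and the exact syntax may differ between Sentinel versions, so double-check against the docs.)

{
  "mock": {
    "tfconfig": "mock-tfconfig.sentinel",
    "tfplan": "mock-tfplan.sentinel",
    "tfstate": "mock-tfstate.sentinel"
  }
}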
I'm modifying a Terraform provider's provider.go file locally for development testing purposes. I need to add an efs endpoint, something that looks like this:
"efs": {
Type: schema.TypeString,
Optional: true,
Default: "",
Description: descriptions["efs_endpoint"],
},
I'm trying to put it under the endpointsSchema function.
My question:
What is required to successfully build Terraform locally with my changes?
Do I need to manually build the plugin and place it under home/user/.terraform.d/plugins (link)? Or would make dev for Terraform be enough?
I solved the problem by modifying the endpoints as in this PR.
Terraform searches for plugins in $GOPATH/bin. My advice is to run terraform from $GOPATH/bin and have the plugin in the same directory. For some reason, Terraform wasn't able to pick up the plugin properly otherwise.
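For what it's worth, a build along these lines matches that advice (the repository path is an assumption based on the pre-Go-modules GOPATH layout; adjust it to wherever your provider source lives):

# Build the provider binary straight into $GOPATH/bin, next to the terraform binary you run.
cd "$GOPATH/src/github.com/terraform-providers/terraform-provider-aws"
go build -o "$GOPATH/bin/terraform-provider-aws" .

# Older plugin discovery also checks the user plugin directory, so this can work too:
# cp "$GOPATH/bin/terraform-provider-aws" ~/.terraform.d/plugins/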