What's the common pattern for sensitive attributes of a Terraform resource that are only required on creation?

Context: I'm working on a new TF Provider using SDKv2.
I'm adding a new data-plane resource which has a very weird API. Namely, there are some sensitive attributes (specific to this resource, so they can't be set under the provider block -- think DataDog / Slack API secrets that this resource needs in order to interact with those services under the hood) that I need to pass on creation but that are not necessary later on (not even for the update operation). My minimal code sample:
resource "foo" "bar" {
name = "abc"
sensitive_creds = {
"datadog_api_secret" = "abc..."
// might pass "slack_api_secret" instead
}
...
}
How can I implement this in Terraform to avoid state drift, etc.?
So far I can see 3 options:
1. Make the user pass it on create and don't save sensitive_creds to TF state; the user then has to set sensitive_creds = {} to avoid a state drift on the next terraform plan run.
2. Make the user pass it on create and don't save sensitive_creds to TF state; the user then adds ignore_changes = [sensitive_creds] to their Terraform configuration (see the sketch after this list).
3. Save sensitive_creds to TF state and live with it, since users are likely to encrypt TF state anyway.

The most typical compromise is for the provider to still save the user's specified value to the state during create and then to leave it unchanged in the "read" operation that would normally update the state to match the remote system.
The result of this compromise is that Terraform can still detect when the user has intentionally changed the secret value in the configuration, but Terraform will not be able to detect changes made to the value outside of Terraform.
This is essentially your option 3. The Terraform provider protocol requires that the values saved to the state after create exactly match anything the user has specified in the configuration, so your first two options would violate the expected protocol and thus be declared invalid by Terraform Core.
Since you are using SDKv2, you can potentially "get away with it": Terraform Core permits that older SDK to violate some of the rules, as a pragmatic way to deal with the fact that SDKv2 was designed for older versions of Terraform and therefore doesn't implement the type system correctly. However, Terraform Core will still emit warnings into its own logs noting that your provider produced an invalid result, and there may be error messages raised against downstream resources if their configuration is derived from the value of your sensitive_creds argument.

Related

Azure - Create Function App hostkey with Terraform azapi/bicep/powershell

I'm working on automating the rotation of my Azure function app's host key, which is used to maintain a more secure connection between my API Management and my function apps. The issue is that I cannot figure out how to accomplish this, given the lack of clear documentation. I found a document on how to create a key for a specific function within the function app, but not for the host level. I've tried using the web UI resource manager to figure out what the proper values are, but host seems to have no values available by GET request to help me see what the formatting needs to be. In fact, I can't find any reference to my function app's host keys anywhere in the resource manager UI. (Of course I can in the portal.)
I don't care if it's PowerShell, Bicep, ARM, Terraform azapi, whatever; I'd just like to find a way to accomplish the creation of a new host key so that I can control its rotation with Terraform. Does anyone know how to accomplish this?
Right now my attempt looks like
resource "azapi_resource" "function_host_key" {
type = "Microsoft.Web/sites/host/functionkeys#2018-11-01"
name = "${azurerm_windows_function_app.api_function.name}-host-key"
parent_id = "${azurerm_windows_function_app.api_function.id}/host"
body = jsonencode({
properties = {
name = "test-key-terraform"
value = "asdfasdfasdfasdfasdfasdfasdf"
}
})
}
I also tried
resource "azapi_resource" "function_host_key" {
type = "Microsoft.Web/sites#2018-11-01"
name = "${azurerm_windows_function_app.api_function.name}-host-key"
parent_id = "${azurerm_windows_function_app.api_function.id}/functionsAppKeys"
location = var.region
}
since it said the body was invalid, but this also throws an error due to there being no body. I'm wondering if this just isn't possible.
I also just tried
resource "azapi_resource" "function_host_key" {
type = "Microsoft.Web/host/functionkeys#2018-11-01"
name = "${azurerm_windows_function_app.api_function.name}-host-key"
parent_id = "${azurerm_windows_function_app.api_function.id}/host"
location = var.region
}
and the result said the parent_id was invalid:
`parent_id` is invalid: expect ID of `Microsoft.Web/host`
so I'm not sure what that parent_id should be.
I found an example through a bash/PowerShell script using the Azure REST API, but I get a 403 error when I attempt it. I can only assume that's because my function app is secured, but I'm not sure of a good way to confirm that.
There must be a way to create a key programmatically...
UPDATE
I believe that this has been purposely made impossible to do with Terraform now, and I need to, as gross and backwards as it may be, use a CLI command in my pipeline. I understand you can do this, but my opinion is that if I am using Terraform, I should have Terraform manage something, rather than have random CLI commands outside of Terraform doing things that TF should be able to manage.
I created a key using az functionapp keys set and that worked, and the output explicitly stated that the type of resource which was created was Microsoft.Web/sites/host/functionKeys, so I went to the Azure Resource Explorer to see what versions were available for this type, since it clearly exists... and found that, nope, Azure does not have it listed.
What confuses me is that I see this being done with ARM templates and I believe that my code matches theirs, except I'm using azapi... and I get a not-found error. Giving up for now.

How to manage locally generated stateful files in Terraform

I have a Terraform (1.0+) script that generates a local config file from a template based on some inputs, e.g.:
locals {
  config_tpl = templatefile("${path.module}/config.tpl", {
    foo = "bar"
  })
}
resource "local_file" "config" {
  content  = local.config_tpl
  filename = "${path.module}/config.yaml"
}
This file is then used by a subsequent command run from a local-exec block, which in turn also generates local config files:
resource "null_resource" "my_command" {
provisioner "local-exec" {
when = create
command = "../scripts/my_command.sh"
working_dir = "${path.module}"
}
depends_on = [
local_file.config,
]
}
my_command.sh generates infrastructure for which there is no Terraform provider currently available.
All of the generated files should form part of the configuration state, as they are required later during upgrades and ultimately to destroy the environment.
I also would like to run these scripts from a CI/CD pipeline, so naturally you would expect the workspace to be clean on each run, which means the generated files won't be present.
Is there a pattern for managing files such as these? My initial thought is to create a cloud storage bucket, zip the files up, and store them there before pulling them back down whenever they're needed. However, this feels even more dirty than what is already happening, and it seems like there is the possibility of running into dependency issues.
Or, am I missing something completely different to solve issues such as this?
The problem you've encountered here is what the warning in the hashicorp/local provider's documentation is discussing:
Terraform primarily deals with remote resources which are able to outlive a single Terraform run, and so local resources can sometimes violate its assumptions. The resources here are best used with care, since depending on local state can make it hard to apply the same Terraform configuration on many different local systems where the local resources may not be universally available. See specific notes in each resource for more information.
The short and unfortunate answer is that what you are trying to do here is not a problem Terraform is designed to address: its purpose is to manage long-lived objects in remote systems, not artifacts on your local workstation where you are running Terraform.
In the case of your config.yaml file you may find it a suitable alternative to use a cloud storage object resource type instead of local_file, so that Terraform will just write the file directly to that remote storage and not affect the local system at all. Of course, that will help only if whatever you intend to have read this file is also able to read from the same cloud storage, or if you can write a separate glue script to fetch the object after terraform apply is finished.
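For example, a minimal sketch of that idea assuming an AWS S3 backend; the bucket name and object key are placeholders, and the content comes from the same local value as in the question:

resource "aws_s3_object" "config" {
  # Hypothetical bucket; substitute whatever remote storage your consumers can read.
  bucket  = "my-config-bucket"
  key     = "config.yaml"
  content = local.config_tpl
}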
There is no straightforward path to treating the result of a provisioner as persistent data in the state. If you use provisioners then they are always, by definition, one-shot actions taken only during creation of a resource.

Terraform using both required_providers and provider blocks

I am going through a terraform guide, where the author is spinning up a docker setup using the docker_image and docker_container resources.
In the sample code the main.tf file includes both the required_providers and the provider blocks, as follows:
terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"
    }
  }
}

provider "docker" {}
Why are they both needed?
Shouldn't terraform be able to understand the need for a docker provider, only by this line?
provider "docker" {}
When considering Terraform providers there are two related notions to think about: the provider itself, and a configuration for the provider.
As an analogy, the provider kreuzwerker/docker here is a bit like a class you're importing from another library, giving it the local name docker. I'll use a pseudo-JavaScript syntax just to make this a bit more concrete:
var docker = require("kreuzwerker/docker");
However, all we have here so far is the class itself. In order to use it we need to create an instance of it, which in Terraform's vernacular is called a "configuration". Again, using pseudo-JavaScript syntax:
var dockerInstance = new docker({});
Terraform's syntax here is decidedly less explicit than this pseudo-JavaScript form, but we can make the distinction more visible by adding a second instance of the provider to the configuration, which in Terraform we do by assigning it a configuration "alias":
provider "docker" {
alias = "example"
host = "ssh://user#remote-host:22"
}
This is like creating a second instance of the provider "class" in our pseudo-JavaScript example:
var dockerInstance2 = new docker({
  host: 'ssh://user@remote-host:22'
});
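To complete the analogy, a resource can then select that second configuration (the "instance") explicitly with the provider meta-argument; the docker_image resource here is just an illustrative example:

resource "docker_image" "remote_nginx" {
  # Use the aliased configuration declared above rather than the default one.
  provider = docker.example
  name     = "nginx:latest"
}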
Another variant that shows the distinction is when a module inherits a provider configuration from its calling module. In that case, it's as if the calling module were implicitly passing the provider configuration (instance) into the module, but the child module still needs to import the provider "class" so Terraform can see that we're talking about kreuzwerker/docker as opposed to any other provider that might have the name "docker".
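As a hedged sketch of that inheritance mechanism (module name and path are made up): the child module declares which provider "class" and alias slots it expects, and the calling module passes a specific configuration into those slots:

# In the child module:
terraform {
  required_providers {
    docker = {
      source                = "kreuzwerker/docker"
      configuration_aliases = [docker.remote]
    }
  }
}

# In the calling module:
module "containers" {
  source = "./modules/containers"
  providers = {
    docker.remote = docker.example
  }
}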
Terraform has some automatic "magic" behaviors that try to make simpler cases implicit, but unfortunately that comes at the cost of making it harder to understand what's going on when things get more complicated. Providers and provider configurations are a particularly hard example of this, because providers have been in the Terraform language for a long time and the current incarnation of the language is trying to stay broadly backward-compatible with the simple uses while still allowing for the newer features like having third-party providers installable from multiple namespaces.
The particularly confusing assumption here is that if you don't declare a particular provider Terraform will create an implicit required_providers declaration assuming that you mean a provider in the hashicorp/ namespace, which makes it seem as though required_providers is only for third-party providers. In fact though, that is largely a backward-compatibility mechanism and so I'd suggest always writing out the required_providers entries, even for the providers in the hashicorp/ namespace, so that less-experienced readers don't need to know about this special backward-compatibility behavior. In your case though, the provider you're using is in a third-party namespace anyway and so the required_providers entry is mandatory.
The source needs to be provided since this isn't one of the "official" HashiCorp providers. There could be multiple providers with the name "docker" in the provider registry, so providing the source is needed in order to tell Terraform exactly which provider to download.
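Following that advice, an explicit declaration might look something like this (the version constraint and the extra aws entry are purely illustrative):

terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "~> 3.0"   # hypothetical constraint
    }
    aws = {
      # Worth spelling out even though hashicorp/ is the implied default namespace.
      source = "hashicorp/aws"
    }
  }
}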

Terraform provisioner trigger only for new instances / only run once

I have conditional provision steps I want to run for all compute instances created, but only run once.
I know I can put the provisioning within the compute resource, but then it cannot be conditional.
If I put it in a null_resource, I need a trigger, and I don't know how to trigger on only the newly created resources (i.e. if I already have 1 instance, and want to scale to 2, I want to only run provisioning on the 2nd being created, not run again on the 1st which is already provisioned).
How can I get a variable that only gives me the id or ip of the instance just created, as opposed to all of them?
Below is an example of the provisioner.
resource "null_resource" "provisioning" {
count = var.condition ? length(var.instance_ips) : 0
triggers = {
instance_ids = join(",", var.instance_ips)
}
connection {
agent = false
timeout = "4m"
host = var.instance_ips[count.index]
user = "user"
private_key = var.ssh_private_key
}
provisioner "remote-exec" {
inline = [ do something, then remove the public key from authorized_keys ]
}
}
PS: the reason it can only run once (as opposed to running again and doing nothing if already provisioned) is that I want to destroy the provisioning public key after I'm done. Since it is a TF-generated key pair and the private key ends up in the state file, I want to make sure someone who gets access to the key pair still cannot access the instance.
Once the public key is removed from authorized_keys, a provisioner running a second time will just fail to connect, time out and fail.
I found that I can use on_failure = continue, but then if it actually fails for legitimate reasons it would continue too.
I could also use a key pair that is generated locally with a local-exec provisioner so it doesn't show up in the state file, but then the key is a file, which is not much different if someone gets access to it; the file needs to stay on the machine, which may not work well with a cloud resource manager environment that is recreated on an as-needed basis.
And I'm sure there are other ways to provision a file or script, but in this case it contains instance dependency data generated by TF that I don't want left in a cloud-init.
So I come down to needing to figure out a way to use a trigger that only contains the new instance(s).
Any ideas how to do this?
https://www.terraform.io/docs/provisioners/
This documentation lists provisioners as a last resort and provides some suggestions on how to avoid having to use them, for various common resources.
Execute the script from user_data, which is specifically designed for run-once provisioning actions. Since user_data supports all regular Terraform interpolation, you can use that opportunity to pass environment variables or selectively include/exclude parts of a script if you need conditional logic.
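For illustration, a minimal sketch assuming AWS, with a hypothetical template file and variables for the provisioning script:

resource "aws_instance" "example" {
  ami           = var.ami_id      # hypothetical variables
  instance_type = "t3.micro"

  # Rendered once at creation; interpolation lets you pass values into the
  # script or toggle parts of it conditionally.
  user_data = templatefile("${path.module}/provision.sh.tpl", {
    enable_extra_step = var.condition
    instance_role     = "worker"
  })
}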
The downside is that any change in user_data results in recreating the instances, or creating a new launch configuration/template.

terraform lifecycle prevent destroy

I am working with Terraform V11 and the AWS provider; I am looking for a way to prevent destroying a few resources during the destroy phase, so I used the following approach.
lifecycle {
  prevent_destroy = true
}
When I run a "terraform plan" I get the following error.
the plan would destroy this resource, but it currently has
lifecycle.preven_destroy set to true. to avoid this error and continue with the plan.
either disable or adjust the scope.
All that I am looking for is a way to avoid destroying one of the resources and its dependencies during the destroy command.
AFAIK this feature is not yet supported.
You need to remove that resource from the state file before destroying and then re-import it afterwards:
terraform plan | grep <resource> | grep id
terraform state rm <resource>
terraform destroy
terraform import <resource> <ID>
The easiest way to do this would be to comment out all of the resources that you want to destroy and then do a terraform apply.
I've found the most practical way to manage this is through a combination of variables that allow the resource in question to be conditionally created (or not) via the use of count, alongside having all other resources depend on the associated data source instead of the conditionally created resource.
A good example of this is a Route 53 Hosted Zone which can be a pain to destroy and recreate if you manage your domain outside of AWS and need to update your nameservers, waiting for DNS propagation each time you spin it up.
1. By specifying some variable
variable "should_create_r53_hosted_zone" {
type = bool
description = "Determines whether or not a new hosted zone should be created on apply."
}
2. you can use it alongside count on the resource to conditionally create it.
resource "aws_route53_zone" "new" {
count = var.should_create_r53_hosted_zone ? 1 : 0
name = "my.domain.com"
}
3. Then, by following up with a call to the associated Data Source
data "aws_route53_zone" "existing" {
name = "my.domain.com"
depends_on = [
aws_route53_zone.new
]
}
4. you can give all other resources consistent access to the resource's attributes regardless of whether or not your flag has been set.
resource "aws_route53_record" "rds_reader_endpoint" {
zone_id = data.aws_route53_zone.existing.zone_id
# ...
}
This approach is only slightly better than commenting / uncommenting resources during apply, but at least gives some consistent, documented way of working around it.
