Issue passing tfvars to tf files - terraform

Another terraform question...
I have a tfvars variable called "IsMultiAz", and its value is set to "true" (a boolean, but written as a string, as was recommended).
When I use this in RDS to control the multi_az argument, I get this error:
aws_db_instance.kong_private_rds: Error modifying DB Instance kongprivateinstance: InvalidParameterCombination: No modifications were requested
What is the correct way to pass boolean variables from tfvars to my tf files?

Terraform 0.12 adds rich support for data types. Give it a try: upgrade your Terraform to 0.12+ and use a boolean without string quotes.
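A minimal sketch of what that looks like on 0.12+ (the resource arguments shown besides multi_az are illustrative):

```hcl
# variables.tf — declare the variable with a real bool type
variable "IsMultiAz" {
  type    = bool
  default = true
}

# main.tf — pass it through directly, no quotes
resource "aws_db_instance" "kong_private_rds" {
  # ... other required arguments ...
  multi_az = var.IsMultiAz
}
```

In terraform.tfvars you would then write `IsMultiAz = true` rather than `IsMultiAz = "true"`.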

Why do all attributes in access_config become required, when the documentation says they are optional?

I am new to Terraform and its CDK, and I am confused about the following:
When I try to run the tf.json generated by cdktf synth using cdktf deploy, terraform plan, or terraform apply, the console keeps telling me that all attributes inside access_config are required and emits errors. But I checked the documentation, and it says these fields are optional.
So I want to know: is this a bug, or is the documentation wrong?
If you are checking the documentation for the correct version of Terraform and terraform plan/apply still reports these attributes as required, then you should add them to your config. It may be that the documentation is not up to date.
After discussing with my colleagues, I managed to solve the problem. For access_config, you have to fill in the attributes, but you can leave them blank if you don't want to give them a value:
"access_config":[{
"nat_ip":"google_compute_address.some_name.address",
"public_ptr_domain_name":"",
"network_tier":""
}]
Populating access_config is required in the Terraform CDK, in contrast to Terraform HCL. In Terraform, where we use HCL to write the configuration, access_config can be left empty, but in the Terraform CDK it needs to be populated with parameters, which may themselves be left blank.
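For comparison, a minimal sketch of the plain-HCL form where the block can stay empty (the instance attributes are illustrative):

```hcl
resource "google_compute_instance" "example" {
  name         = "example-vm"
  machine_type = "e2-small"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = "default"
    # In HCL an empty block is enough to request an ephemeral external IP.
    access_config {}
  }
}
```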

Azure policy Terraform import '' expected type 'string', got unconvertible type '[]interface {}'

I created an Azure policy to enforce labels on pods in a Kubernetes cluster, as below.
Policy Name: Enforce labels on pods in Kubernetes cluster
I am trying to import the policy using the command below:
terraform import azurerm_policy_set_definition.test /subscriptions/<SubscriptionId>/providers/Microsoft.Authorization/policySetDefinitions/<Id>
Whenever I try to import the resource using terraform, I get the error below:
Error: setting `policy_definition_reference`: policy_definition_reference.0.parameters.labelsList: '' expected type 'string', got unconvertible type '[]interface {}'
Any help much appreciated.
It's similar to this issue on GitHub. I also think it looks like the parameters need to be fixed. You can see that policy_definition_reference.0.parameters is a mapping of the parameter values for the referenced policy rule, and each member needs a string value. But in the Azure Policy Set Definition, the parameters property is an object in JSON format.
Maybe Terraform cannot convert an object into a string.
And I think you also need to change the format of the input in the Azure portal: it should be the same as the namespace, without quotes and separated with the character ;.
If you hover over the ⓘ for List of labels, it gives you a hint about how to submit values to the field. You need to have the labels separated with a ;. I suspect Terraform is incorrectly interpreting the values as JSON.
EDIT: I misspoke; this bug was fixed sooner than I thought, in azurerm 2.29.0.
This could be because of this bug in the azurerm provider. A fix is in the works, but they mention it will likely not be released until the next major release of the azurerm provider (3.0).
I ran into this issue and had to change existing initiative definitions to have parameters at the initiative level, rather than per policy inside policy_definition_reference, and have the policy parameters reference the parameters of the initiative.
I'm not sure this is the best way around it, as modifying initiative parameters requires that the initiative has no policy assignments.
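A hedged sketch of that restructuring (the names are illustrative, the policy definition ID is a placeholder, and the schema assumes an azurerm 2.x provider where policy_definition_reference takes a parameter_values JSON string):

```hcl
resource "azurerm_policy_set_definition" "test" {
  name         = "enforce-pod-labels"
  policy_type  = "Custom"
  display_name = "Enforce labels on pods in Kubernetes cluster"

  # Declare the parameter once, at the initiative level.
  parameters = <<PARAMS
{
  "labelsList": {
    "type": "Array",
    "metadata": { "displayName": "List of labels" }
  }
}
PARAMS

  policy_definition_reference {
    policy_definition_id = "/providers/Microsoft.Authorization/policyDefinitions/<Id>"
    # Reference the initiative-level parameter instead of embedding
    # the array per policy.
    parameter_values = <<VALUES
{
  "labelsList": { "value": "[parameters('labelsList')]" }
}
VALUES
  }
}
```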

Terraform. Map in tfvars in a module

I'm writing a terraform module that will select a value from a map, based on a key passed to it.
Nothing unusual.
The values in that map are secret however, and do not change based on who is calling the module.
A simple way I thought to keep those secrets secret would be to define the variable as a map in variables.tf in the module, put the keys/values in terraform.tfvars in the module, .gitignore terraform.tfvars, and encrypt it to terraform.tfvars.gpg or something.
But that doesn't work: because I have no default for the variable in the module, Terraform expects the variable to be set by the caller.
I can define the variable in the caller without a value, add it to the call, and either specify --var-file manually or include a terraform.tfvars in the caller. But that means the user has to remember a magic --var-file invocation, or terraform.tfvars gets duplicated everywhere my new module is used.
Remembering magic strings and duplication are both bad options.
Is it possible for a module to use its own tfvars to fill in variables not passed to it?
There is no way to use an automatic .tfvars file with a non-root module. Child modules always get all of their values from the calling module block (with default values inserted where appropriate); .tfvars is only for assigning values to root module variables.
Another option with similar characteristics to what you're describing is to use a data file in either JSON or YAML format inside the module directory, and load it in using the file function and one of the decoding functions. For example:
locals {
  # This file must be created before using this module by
  # decrypting secrets.yaml.gpg in this directory.
  secrets = yamldecode(file("${path.module}/secrets.yaml"))
}
If the caller neglects to decrypt into the .gitignored file before using the module then the file call will fail with a "file not found" error message.
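Building on that, one hypothetical way the module could then select a value from the decrypted map by the key the caller passes in (variable and output names are made up):

```hcl
variable "secret_key" {
  type        = string
  description = "Which entry in secrets.yaml to return"
}

output "secret_value" {
  # Look the caller's key up in the decrypted secrets map.
  value     = local.secrets[var.secret_key]
  sensitive = true
}
```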
I'm not totally sure I understand, but taking a stab: are you using AWS? A fun solution I've used in the past is SSM parameters.
data "aws_ssm_parameter" "foo" {
name = "foo"
}
...
value = data.aws_ssm_parameter.foo.value
...
The SSM parameter could be created outside of Terraform and looked up in your module (with policies granting access depending on the caller via IAM, or whatever).

Terraform replace function doesn't work in conditional

I have code which checks whether the key in the loop contains the word Ops and, if so, assigns the provider either aws.peer or aws.default:
provider = "${replace(each.key, "Ops", "") != each.key ? "aws.peer" : "aws.default"}"
After I run it it returns:
Error: Invalid provider reference
On ../../modules/Stack/Peering/main.tf line 13: Provider argument requires a provider name followed by an optional alias, like "aws.foo".
I'm not sure why.
Provider selection is not allowed to be dynamic in Terraform. If you share more of your script, we might be able to give you a workaround that is specific to the solution you are building.
Provider selections cannot be dynamic like this. Although it didn’t produce an error in Terraform 0.11, it also didn’t work: Terraform 0.11 just ignored it and treated it as a literal string, just as the terraform 0.12upgrade tool showed. Terraform 0.12 has an explicit validation check for it to give you better feedback that it’s not supported.
The connections between resources and their providers happens too early for Terraform to be able to evaluate expressions in that context, because the provider must be known in order to understand the other contents of the block.
Resource with a possible workaround:
https://discuss.hashicorp.com/t/defining-provider-aliases-with-string-interpolation-not-working-in-terraform-0-12/1569/4
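The linked workaround boils down to writing one statically-configured copy of the resource per provider alias and using for_each to route each key to the right copy; a rough sketch (the resource type and variable names are illustrative):

```hcl
# Keys containing "Ops" go through the peer provider...
resource "aws_vpc_peering_connection_accepter" "peer" {
  for_each = { for k, v in var.connections : k => v if length(regexall("Ops", k)) > 0 }
  provider = aws.peer

  vpc_peering_connection_id = each.value
  auto_accept               = true
}

# ...everything else goes through the default alias.
resource "aws_vpc_peering_connection_accepter" "default" {
  for_each = { for k, v in var.connections : k => v if length(regexall("Ops", k)) == 0 }
  provider = aws.default

  vpc_peering_connection_id = each.value
  auto_accept               = true
}
```

The provider argument in each block is a static reference, which is what Terraform requires; the dynamism moves into the for_each filter instead.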

terraform.tfvars vs variables.tf difference [duplicate]

This question already has answers here:
What is the difference between variables.tf and terraform.tfvars?
(5 answers)
Closed 3 years ago.
I've been researching this but can't find the distinction. A variables.tf file can store variable defaults/values, like a terraform.tfvars file can.
What's the difference between the two, and when would you need one over the other? My understanding is that you pass the var file as an argument to terraform on the command line.
There is already a thread about this, and the only benefit seems to be passing the tfvars file as an argument, since you can "potentially" assign variables in a variables.tf file too.
Is this the correct thinking?
The distinction between these is of declaration vs. assignment.
variable blocks (which can actually appear in any .tf file, but are in variables.tf by convention) declare that a variable exists:
variable "example" {}
This tells Terraform that this module accepts an input variable called example. Stating this makes it valid to use var.example elsewhere in the module to access the value of the variable.
There are several different ways to assign a value to this input variable:
Include -var options on the terraform plan or terraform apply command line.
Include -var-file options to select one or more .tfvars files to set values for many variables at once.
Create a terraform.tfvars file, or files whose names end in .auto.tfvars, which are treated the same as -var-file arguments but are loaded automatically.
For a child module, include an expression to assign to the variable inside the calling module block.
A variable can optionally be declared with a default value, which makes it optional. Variable defaults are used for situations where there's a good default behavior that would work well for most uses of the module/configuration, while still allowing that behavior to be overridden in exceptional cases.
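A minimal illustration of the declaration/assignment split (the variable name is made up):

```hcl
# variables.tf — declaration, with an optional default
variable "instance_count" {
  type    = number
  default = 1
}
```

```hcl
# terraform.tfvars — assignment, loaded automatically for the root module
instance_count = 3
```

With both files present, terraform plan uses 3; delete the tfvars line and it falls back to the default of 1.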
The various means for assigning variable values are for dealing with differences. What that means will depend on exactly how you are using Terraform, but for example if you are using the same configuration multiple times to deploy different "copies" of the same infrastructure (environments, etc) then you might choose to have a different .tfvars file for each of these copies.
Because terraform.tfvars and .auto.tfvars are automatically loaded without any additional options, they behave similarly to defaults, but the intent of these is different. When running Terraform in automation, some users have their automation generate a terraform.tfvars file or .auto.tfvars just before running Terraform in order to pass in values the automation knows, such as what environment the automation is running for, etc.
The difference between the automatically-loaded .tfvars files and variable defaults is more clear when dealing with child modules. .tfvars files (and -var, -var-file options) only apply to the root module variables, while variable defaults apply when that module is used as a child module too, meaning that variables with defaults can be omitted in module blocks.
A variables.tf file is used to define a variable's type and, optionally, to set a default value.
A terraform.tfvars file is used to set the actual values of the variables.
You could set default values for all your variables and not use tfvars files at all.
The objective of splitting the definitions from the values is to allow the definition of a common infrastructure design, and then to apply specific values per environment.
Using multiple tfvars files, passed as arguments, lets you set different values per environment: secrets, VM size, number of instances, etc.