GCP terraform provider - possible documentation bug?

The documentation for google_compute_subnetwork > private_ip_google_access states that private_ip_google_access is an exported attribute, as opposed to being an argument, which I assume means that it cannot be specified in my Terraform code. However, I have just run a successful terraform apply using this Terraform HCL code:
resource "google_compute_subnetwork" "subnetwork" {
name = "${var.subnetname}"
ip_cidr_range = "${var.subnet_range}"
network = "${var.network}"
region = "${var.region}"
private_ip_google_access = "true"
}
So one of the following must be true:
* I am misunderstanding what it means to be an attribute. My assumption up to now has been that arguments can be specified, attributes cannot. Am I wrong in this assumption?
* The documentation incorrectly states that private_ip_google_access is an attribute whereas actually it should be an argument.
Which of those is true?

You are right on both counts: your understanding is correct, and the documentation is mistaken here.
A resource has two sets of elements: arguments for input and attributes for output.
In this case, since you are able to set private_ip_google_access when declaring the resource, it is actually an argument and not an attribute.
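To make the distinction concrete, here is a minimal sketch; gateway_address is assumed here to be one of the exported attributes listed in the provider documentation for this resource type, and the literal values are placeholders:
# Arguments: inputs you set when declaring the resource.
resource "google_compute_subnetwork" "example" {
  name          = "example-subnet"
  ip_cidr_range = "10.0.0.0/24"
  network       = "default"
  region        = "europe-west1"
}

# Attributes: values exported by the resource after creation, which can only be read.
output "subnet_gateway" {
  value = "${google_compute_subnetwork.example.gateway_address}"
}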

Related

How to resolve the Terraform error "Invalid count argument"?

I'm trying to add a New Relic One Synthetic monitor using the common "monitor" module we use in Terraform, where I also want to attach a new alert condition policy. This works fine if I create the resources one by one, but when I apply all the changes at once I get the error below.
Error: Invalid count argument
on .terraform/modules/monitor/modules/synthetics/syn_alert.tf line 11, in resource "newrelic_alert_policy" "policy":
11: count = var.policy_id != null ? 0 : var.create_alerts == true ? 1 : var.create_multilocation_alerts == true ? 1 : 0
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
I expected this to work, since it works when applied step by step. I also looked into resource dependencies as a possible solution and added depends_on with the required resources, like
depends_on = [newrelic_alert_policy.harvester_ping_failure_alert_policy,newrelic_alert_channel.slack_channel]
but it is still not working as expected.
This error suggests that one of the input variables you included here has a value that won't be known until the apply step:
* var.policy_id
* var.create_alerts
* var.create_multilocation_alerts
You didn't show exactly how you're defining those variables in the calling module block, but I'm guessing that policy_id is the problematic one, because you've probably assigned it an attribute from a managed resource instance in the parent module, and the remote object corresponding to that resource instance hasn't been created yet, so its ID isn't known yet.
If that's true, you'll need to define this differently so that the choice about whether to declare this object is made as a separate value from the ID itself, and then make sure the choice about whether to declare is not based on the outcome of any managed resource elsewhere in the configuration.
One way to do that would be like this:
variable "project" {
type = object({
id = string
})
default = null
}
This means that the decision about whether or not to set this object can be represented by the "nullness" of the entire object, even though the id attribute inside a non-null object might be unknown.
module "monitor" {
# ...
project = {
id = whatever_resource_type.name.id
}
}
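Inside the child module, any count expression can then test only whether var.project is null, which Terraform knows during planning even when the id inside it isn't known yet. A hypothetical sketch of what that might look like (the resource details here are illustrative, not taken from your module):
resource "newrelic_alert_policy" "policy" {
  # Create a policy only when no existing object was passed in.
  # The nullness of var.project is known at plan time, even though
  # var.project.id may not be known until apply.
  count = var.project == null ? 1 : 0

  name = "example-policy"
  # ...
}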
If the object whose ID you're passing in here is itself a resource instance with an id attribute, as I showed above, then you can also make this more concise by assigning the whole object at once:
module "monitor" {
# ...
project = whatever_resource_type.name
}
Terraform will check to make sure that whatever_resource_type.name has an id attribute, and if so it will use it to populate the id attribute of the variable inside the module.

module.db is a list of object, known only after apply

rds.tf:
module "db" {
  count          = var.environment == "dev" || var.environment == "qa" ? 1 : 0
  source         = "../rds"
  identifier     = var.db_name
  engine         = var.rds_engine
  engine_version = var.rds_engine_version
  # ...
}
output.tf:
output "rds_instance_endpoint" {
description = "The connection endpoint"
value = module.db.db_instance_endpoint
}
Error:
Error: Unsupported attribute
on outputs.tf line 28, in output "rds_instance_endpoint":
28: value = module.db.db_instance_endpoint
module.db is a list of object, known only after apply
Can't access attributes on a list of objects. Did you mean to access attribute "db_instance_endpoint" for a specific element of the list, or across all elements of the list?
I am getting the above error when declaring count on the db module in rds.tf.
If I remove count it works fine; I am not sure what this error means.
The error message in this case is the sentence "Can't access attributes on a list of objects", not the sentence "module.db is a list of object, known only after apply"; the latter is just additional context to help you understand the error message that follows.
In other words, Terraform is telling you that module.db is a list of objects and that it's invalid to use .db_instance_endpoint ("accessing an attribute") on that list.
The root problem here is that you have a module object that may or may not exist, and so you need to explain to Terraform how you want to handle the situation where it doesn't exist.
One answer would be to change your output value to return a set of instance endpoints that would be empty in environments other than the dev and QA environments:
output "rds_instance_endpoints" {
description = "The connection endpoints for any database instances in this environment."
value = toset(module.db[*].db_instance_endpoint)
}
Here I used the "splat" operator to take the db_instance_endpoint attribute value for each instance of the module, which currently means that it will either be a single-element set or an empty set depending on the situation. This approach most directly models the underlying implementation and would give you the freedom to add additional instances of this module in future if you need to, but you might consider the fact that this is a multi-instance module to be an implementation detail which should be encapsulated in the module itself.
The other main option, which does hide that implementation detail of the underlying module being a list, would be to have your output value be null in the situation where there are no instances of the module, or to be a single string when there is one instance. For that, we can slightly adapt the above example to use the one function instead of the toset function:
output "rds_instance_endpoints" {
description = "The connection endpoint for the database instance in this environment, if any."
value = one(module.db[*].db_instance_endpoint)
}
The one function has three possible outcomes depending on the number of elements in the given collection:
* If the collection has no elements, it will return null.
* If the collection has one element, it will return just the value of that element.
* If the collection has more than one element, it will fail with an error message. But note that this outcome is impossible in your case, because count can only be set to zero or one.
Null values can potentially be annoying to deal with if this data is being returned to a calling module for use elsewhere, but sometimes it's the most reasonable representation of the described infrastructure and so worth that additional complexity. For root module output values in particular, Terraform will treat a null value as if the output value were not set at all, hiding it in the messaging from terraform apply, and so this second strategy is often the best choice in situations where your primary motivation is to return this information in the user interface for a human to use for some subsequent action.
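If this output is consumed by a calling module rather than a human, the null case can be handled there with a conditional expression. A minimal sketch, assuming a hypothetical parent module that calls the module above as module.db_stack:
locals {
  # Fall back to a placeholder when no database exists in this environment.
  db_endpoint = (
    module.db_stack.rds_instance_endpoints != null
    ? module.db_stack.rds_instance_endpoints
    : "none"
  )
}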

Terraform aws_ssm_parameter null/empty with ignore_changes

I have a Terraform config that looks like this:
resource "random_string" "foo" {
length = 31
special = false
}
resource "aws_ssm_parameter" "bar" {
name = "baz"
type = "SecureString"
value = random_string.foo.result
lifecycle {
ignore_changes = [value]
}
}
The idea is that on the first terraform apply the value of foo will be stored in the baz parameter in SSM via the bar resource, and then on subsequent applies I'll be able to reference aws_ssm_parameter.bar.value. However, what I see is that it works on the first run and stores the newly created random value, but on subsequent runs aws_ssm_parameter.bar.value is empty.
If I create an aws_ssm_parameter data source, that can pull the value correctly, but it doesn't work on the first apply, when the parameter doesn't exist yet. How can I modify this config so that it both creates the value and lets me read back what is stored in baz in SSM, all in the same configuration?
(Sorry not enough karma to comment)
To fix the chicken-egg problem, you could add depends_on = [aws_ssm_parameter.bar] to a data resource, but this introduces some awkwardness (especially if you need to call destroy often in your workflow). It's not particularly recommended (see here).
It doesn't really make sense that it's returning empty, though, so I wonder if you've hit a different bug. Does the value actually get posted to SSM (i.e. can you see it when you run aws ssm get-parameter ...)?
Edit- I just tested your example code above with:
output "bar" {
value = aws_ssm_parameter.bar.value
}
and it seems to work fine. Maybe you need to update tf or plugins?
Oh, I forgot about this question, but it turns out I did figure out the problem.
The issue was that I was creating the SSM parameter inside a module that was itself used from another module. Because I didn't output anything related to this parameter, its value seemed to get dropped by Terraform on subsequent plans after it was created. Exposing it as an output on the module fixed the issue.
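A minimal sketch of what that fix might look like, assuming the nested module layout described above (the output name and the sensitive flag are illustrative):
# Inside the module that declares aws_ssm_parameter.bar:
output "bar_value" {
  value     = aws_ssm_parameter.bar.value
  sensitive = true # avoid printing the SecureString value in plan output
}

# The calling module can then reference module.<name>.bar_value instead of
# reaching into the nested module's resources directly.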

Terraform outputs 'Error: Variables not allowed' when doing a plan

I've got a variable declared in my variables.tf like this:
variable "MyAmi" {
type = map(string)
}
but when I do:
terraform plan -var 'MyAmi=xxxx'
I get:
Error: Variables not allowed
on <value for var.MyAmi> line 1:
(source code not available)
Variables may not be used here.
Minimal code example:
test.tf
provider "aws" {
}
# S3
module "my-s3" {
source = "terraform-aws-modules/s3-bucket/aws"
bucket = "${var.MyAmi}-bucket"
}
variables.tf
variable "MyAmi" {
type = map(string)
}
terraform plan -var 'MyAmi=test'
Error: Variables not allowed
on <value for var.MyAmi> line 1:
(source code not available)
Variables may not be used here.
Any suggestions?
This error can also occur when trying to set a variable's value from a dynamic resource (e.g. an output from a child module):
variable "some_arn" {
description = "Some description"
default = module.some_module.some_output # <--- Error: Variables not allowed
}
Using a locals block instead of the variable solves this issue:
locals {
  some_arn = module.some_module.some_output
}
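References elsewhere in the configuration then point at local.some_arn instead of var.some_arn, for example (a sketch, shown here as an output):
output "some_arn" {
  value = local.some_arn
}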
I had the same error, but in my case I forgot to enclose variable values inside quotes (" ") in my terraform.tfvars file.
This is logged as an issue on the official terraform repository here:
https://github.com/hashicorp/terraform/issues/24391
I see two things that could be causing the error you are seeing (see the terraform plan documentation).
When running terraform plan, it will automatically load any .tfvars files in the current directory. If your .tfvars file is in another directory you must provide it as a -var-file parameter. You say in your question that your variables are in a file variables.tf which means the terraform plan command will not automatically load that file. FIX: rename variables.tf to variables.tfvars
When using the -var parameter, you should ensure that what you are passing into it will be properly interpreted by HCL. If the variable you are trying to pass in is a map, then it needs to be parse-able as a map.
Instead of terraform plan -var 'MyAmi=xxxx' I would expect something more like terraform plan -var 'MyAmi={"us-east-1":"ami-123", "us-east-2":"ami-456"}'.
See this documentation for more on declaring variables and specifically passing them in via the command line.
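Equivalently, the map could be assigned in a .tfvars file and passed with -var-file; the file name and AMI IDs below are placeholders:
# my-amis.tfvars
MyAmi = {
  "us-east-1" = "ami-123"
  "us-east-2" = "ami-456"
}
and then run terraform plan -var-file=my-amis.tfvars.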
I had the same error, but my problem was missing quotes around the default value of the variable:
variable "environment_name" {
description = "Enter Environment name"
default= test
}
This is how I resolved the issue:
variable "environment_name" {
description = "Enter Environment name"
default= "test"
}
Check the Terraform version.
I had something similar: the module was written for version 1.0 and I was using Terraform version 0.12.
I had this error in Terraform when trying to pass a list that included my data source into the module:
The given value is not suitable for module. ...
In my case I was passing the wrong thing to the module:
security_groups_allow_to_msk_on_port_2181 = concat(var.security_groups_allow_to_msk_2181, [data.aws_security_group.client-vpn-sg])
It expected the id only and not the whole object. So instead this worked for me:
security_groups_allow_to_msk_on_port_2181 = concat(var.security_groups_allow_to_msk_2181, [data.aws_security_group.client-vpn-sg.id])
Also be sure of what type of object you are receiving: is it a list? Watch out for the types. I got the same error message when the first argument was also enclosed in brackets ([]), since it was already a list.

When is the equal sign (=) required in Terraform when assigning a map (TF 0.11)

I wasn't able to solve this question by any other method, so I have to ask this here...
What is the logic behind using the equal sign (=) when assigning a map to a value in Terraform 0.11?
With =
resource "pseude_resource" "pseudo_name" {
value = {
key1 = value1
key2 = value2
}
}
Without =
resource "pseude_resource" "pseudo_name" {
value {
key1 = value1
key2 = value2
}
}
The = seems to be required when using arrays ([]), but not when using a map. What is the reason behind this, and why on earth? Can the = just be omitted?
In the Terraform language, there are two distinct constructs that have quite similar-looking syntax in the common case.
Arguments that expect maps
An argument in Terraform is a single name/value pair where the provider dictates (in the resource type schema) what type of value it expects. You are free to create a value of that expected type any way you like, whether it be as a literal value or as a complex expression.
The argument syntax is, in general:
name = value
If a particular argument is defined as being a map then one way you can set it is with a literal value, like this:
tags = {
  Name = "foo bar baz"
}
...but you can also use a reference to some other value that is compatible with the map type:
# Tags are the same as on some other VPC object
tags = aws_vpc.example.tags
...or you can combine maps together using built-in Terraform functions:
tags = merge(local.default_tags, var.override_tags)
Generally speaking, you can use any expression whose result is a map with the expected element type.
In Terraform 0.11 these non-literal examples all need to be presented in the template interpolation syntax ${ ... }, but the principle is the same nonetheless.
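For example, in Terraform 0.11 the merge call above would need to be wrapped in interpolation (a sketch of the 0.11 form):
tags = "${merge(local.default_tags, var.override_tags)}"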
Nested blocks
Whereas an argument sets some specific configuration setting for the object whose block it's embedded in, the nested block syntax conventionally declares the existence of another object that is related to the one the block is embedded in. Sometimes this really is a separate physical object, such as a rule associated with a security group, and other times it's a more conceptual "object", such as versioning in aws_s3_bucket which models the versioning feature as a separate "object" because the presence of it activates the feature.
The nested block syntax follows the same conventions as the syntax for the top-level resource, variable, terraform, etc blocks:
block_type "label" {
nested_argument = value
}
Each block type will have a fixed number of labels it expects, which in many cases is no labels at all. Because each block represents the declaration of a separate object, the block structure is more rigid and must always follow the shape above; it's not possible to use arbitrary expressions in this case because Terraform wants to validate that each of the blocks has correct arguments inside it during its static validation phase.
Because the block syntax and the map literal syntax both use braces { }, they have a similar look in the configuration, but they mean something quite different to Terraform. With the block syntax, you can expect the content of the block to have a fixed structure with a specific set of argument names and nested block types decided by the resource type schema. With a map argument, you are free to choose whichever map keys you like (subject to any validation rules the provider or remote system might impose outside of the Terraform schema).
Recognizing the difference
Unfortunately today the documentation for providers is often vague about exactly how each argument or nested block should be used, and sometimes even omits the expected type of an argument. The major providers are very large codebases and so their documentation has variable quality due to the fact that they've been written by many different people over several years and that ongoing improvements to the documentation can only happen gradually.
With that said, the provider documentation will commonly use the words "nested block" or "block type" in the description of something that is a nested block, and will refer to some definition elsewhere on the page for exactly which arguments and nested blocks belong inside that block. If the documentation states that a particular argument is a map or implies that the keys are free-form then that suggests that it's an argument expecting a map value. Another clue is that block type names are conventionally singular nouns (because each block describes a single object) while arguments that take maps and other collections conventionally use plural nouns.
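As a concrete illustration of those conventions, consider aws_instance (assuming its usual schema): tags is an argument expecting a map, while ebs_block_device is a nested block type; the AMI ID and other values below are placeholders:
resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"

  # Argument expecting a map: free-form keys, assigned with =.
  tags = {
    Name = "example"
  }

  # Nested block type: fixed argument names decided by the resource schema.
  ebs_block_device {
    device_name = "/dev/sdh"
    volume_size = 8
  }
}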
If you find specific cases where the documentation is ambiguous about whether a particular name is a nested block type or an argument, it can be helpful to open an issue for it in the provider's repository to help improve the documentation. Terraform's documentation pages have an "Edit This Page" link in the footer which you can use to propose simple (single-page-only) edits as a pull request in the appropriate repository.
The longer-form explanation of these concepts is in the documentation section Arguments and Blocks.
The confusion comes from the behavior of the HCL language used by Terraform. The functionality isn't very well documented in Terraform, but in HCL you can define a list by using repeating blocks, which is how resources like aws_route_table define inline routes, e.g.
resource "aws_route_table" "r" {
vpc_id = "${aws_vpc.default.id}"
route {
cidr_block = "10.0.1.0/24"
gateway_id = "${aws_internet_gateway.main.id}"
}
route {
ipv6_cidr_block = "::/0"
egress_only_gateway_id = "${aws_egress_only_internet_gateway.foo.id}"
}
tags = {
Name = "main"
}
}
which would be equivalent to
resource "aws_route_table" "r" {
vpc_id = "${aws_vpc.default.id}"
route = [
{
cidr_block = "10.0.1.0/24"
gateway_id = "${aws_internet_gateway.main.id}"
},
{
ipv6_cidr_block = "::/0"
egress_only_gateway_id = "${aws_egress_only_internet_gateway.foo.id}"
}
]
tags = {
Name = "main"
}
}
You want to make sure you're using the = when you're assigning a value to something, and only use the repeating block syntax when you're working with lists of blocks. Also, from my experience I recommend NOT using inline blocks when a standalone resource is available, as in the sketch below.
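For example, instead of inline route blocks, each route can be declared as a standalone aws_route resource (a sketch using 0.11-style interpolation to match the examples above; the resource name is illustrative):
resource "aws_route" "internet_access" {
  route_table_id         = "${aws_route_table.r.id}"
  destination_cidr_block = "10.0.1.0/24"
  gateway_id             = "${aws_internet_gateway.main.id}"
}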
Some very limited documentation for HCL can be found in the README of its repository.
