Does Terraform locals visibility scope span children modules? - terraform

I've found that I can access a local coming from my root Terraform module in its child Terraform modules.
I thought that a local was scoped to the very module it's declared in.
See: https://developer.hashicorp.com/terraform/language/values/locals#using-local-values
A local value can only be accessed in expressions within the module where it was declared.
Seems like the documentation says locals shouldn't be visible outside their module. At my current level of Terraform knowledge, I can't see what would be wrong with a root module's locals being visible in its children.
Does Terraform locals visibility scope span children (called) modules?
Why is that?
Is it intentional (by design) that a root local is visible in child modules?
Details added later:
The Terraform version I use is 1.1.5.
My sample project:
.
├── childmodulecaller.tf
├── main.tf
└── child
    └── some.tf
main.tf
locals {
  a = 1
}
childmodulecaller.tf
locals {
  b = 2
}

module "child" {
  for_each = toset(try(local.a + local.b == 3, false) ? ["name"] : [])
  source   = "./child"
}
some.tf
resource "local_file" "a_file" {
filename = "${path.module}/file1"
content = "foo!"
}
Now I see that my question was based on a misinterpreted observation.
I'm not sure it's still of any value, but I'm leaving it here, explained.
Perhaps it can help someone else avoid the confusion I experienced and describe in my corrected answer below.

Each module has an entirely distinct namespace from others in the configuration.
The only way values can pass from one module to another is using input variables (from caller to callee) or output values (from callee to caller).
Local values from one module are never automatically visible in another module.
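For illustration, a minimal sketch of both directions, using hypothetical names:

# Caller to callee: pass a value in through an input variable.
# In the child module (e.g. child/variables.tf):
variable "a" {
  type = number
}

# In the calling module:
module "child" {
  source = "./child"
  a      = local.a
}

# Callee to caller: expose a value through an output.
# In the child module (e.g. child/outputs.tf):
output "result" {
  value = var.a + 1
}

# Back in the calling module, reference it as module.child.result.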

EDIT: Corrected answer
After reviewing my sample Terraform project code, I see that my finding was wrong. The local a from main.tf that I access in childmodulecaller.tf is actually accessed in a module block, but still within the scope of my root module (I understand that is because childmodulecaller.tf is directly in the root module's configuration directory). So I confused a module block in the calling parent with the called child module.
My experiments, like changing child/some.tf in the following way:
resource "local_file" "a_file" {
filename = "${path.module}/file1"
content = "foo!"
}
output "outa" {
value = local.a
}
output "outb" {
value = local.b
}
cause Error: Reference to undeclared local value when terraform validate is run (similar to what Mark B already mentioned in the question comments for Terraform version 1.3.0).
So no, the scope of Terraform locals does not span child (called) modules.
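To actually make the root module's local a available inside the child module, it has to be passed explicitly; a minimal sketch, assuming a variable named a is added to the child:

# child/some.tf: declare an input variable instead of referencing local.a
variable "a" {
  type = number
}

output "outa" {
  value = var.a
}

# childmodulecaller.tf: pass the root module's local in explicitly
module "child" {
  source = "./child"
  a      = local.a
}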
Initial wrong answer:
I think I've understood why locals appeared visible in child modules.
It's because called (child) modules are included in the configuration of the root (parent) module.
To call a module means to include the contents of that module into the configuration with specific values for its input variables.
https://developer.hashicorp.com/terraform/language/modules/syntax#calling-a-child-module
So yes, it's by design and not a bug; it just may not be clear from the locals documentation. The root (parent) module's locals appear visible in the child module parts of the configuration, which are essentially also parts of the root (parent) module, being included into it.

Related

Locals depends_on - Terraform

I have a module a in Terraform which creates a text file. I need to use that text file in another module b. I am using locals to pull the content of that text file, like below, in module b:
locals {
  ports = split("\n", file("ports.txt"))
}
But Terraform expects this file to be present at the start itself and throws the error below:
Invalid value for "path" parameter: no file exists at
path/ports.txt; this function works only with files
that are distributed as part of the configuration source code, so if this file
will be created by a resource in this configuration you must instead obtain
this result from an attribute of that resource.
What am I missing here? Any help on this would be appreciated. Is there any depends_on for locals? How can I make this work?
Modules are called from within other modules using module blocks. Most arguments correspond to input variables defined by the module. To reference a value from one module, you need to declare an output in that module; then you can use the output value from other modules.
For example, suppose you have a text file in module a.
.tf file in module a
output "textfile" {
value = file("D:\\Terraform\\modules\\a\\ports.txt")
}
.tf file in module b
variable "externalFile" {
}
locals {
ports = split("\n", var.externalFile)
}
# output "b_test" {
# value = local.ports
# }
.tf file in the root module
module "a" {
source = "./modules/a"
}
module "b" {
source = "./modules/b"
externalFile = module.a.textfile
depends_on = [module.a]
}
# output "module_b_output" {
# value = module.b.b_test
# }
For more reference, you could read https://www.terraform.io/docs/language/modules/syntax.html#accessing-module-output-values
As the error message reports, the file function is only for files that are included on disk as part of your configuration, not for files generated dynamically during the apply phase.
I would typically suggest avoiding writing files to local disk as part of a Terraform configuration, because one of Terraform's main assumptions is that any objects you manage with Terraform will persist from one run to the next, but that could only be true for a local file if you always run Terraform in the same directory on the same computer, or if you use some other more complex approach such as a network filesystem. However, since you didn't mention why you are writing a file to disk I'll assume that this is a hard requirement and make a suggestion about how to do it, even though I would consider it a last resort.
The hashicorp/local provider includes a data source called local_file which will read a file from disk in a similar way to how a more typical data source might read from a remote API endpoint. In particular, it will respect any dependencies reflected in its configuration and defer reading the file until the apply step if needed.
You could coordinate this between modules then by making the output value which returns the filename also depend on whichever resource is responsible for creating the file. For example, if the file were created using a provisioner attached to an aws_instance resource then you could write something like this inside the module:
output "filename" {
value = "D:\\Terraform\\modules\\a\\ports.txt"
depends_on = [aws_instance.example]
}
Then you can pass that value from one module to the other, which will carry with it the implicit dependency on aws_instance.example to make sure the file is actually created first:
module "a" {
source = "./modules/a"
}
module "b" {
source = "./modules/b"
filename = module.a.filename
}
Then finally, inside the module, declare that input variable and use it as part of the configuration for a local_file data resource:
variable "filename" {
type = string
}
data "local_file" "example" {
filename = var.filename
}
Elsewhere in your second module you can then use data.local_file.example.content to get the contents of that file.
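For example, to recreate the original split-on-newlines logic from the file contents (a sketch using the names above):

locals {
  # Same as the original logic, but driven by the data source
  # so reading can be deferred until the file exists.
  ports = split("\n", data.local_file.example.content)
}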
Notice that dependencies propagate automatically aside from the explicit depends_on in the output "filename" block. It's a good practice for a module to encapsulate its own behaviors so that everything needed for an output value to be useful has already happened by the time a caller uses it, because then the rest of your configuration will just get the correct behavior by default without needing any additional depends_on annotations.
But if there is any way you can return the data inside that ports.txt file directly from the first module instead, without writing it to disk at all, I would recommend doing that as a more robust and less complex approach.
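For instance, a hypothetical sketch where module a exposes the port list directly as an output, so nothing is written to disk at all:

# In module a: return the data itself rather than a file path.
output "ports" {
  # Assumption: the same values that would have been written to ports.txt.
  value = ["8080", "8443", "9000"]
}

# In module b: consume it as an ordinary list input.
variable "ports" {
  type = list(string)
}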

Terraform Child module dependency on a calling resource

I am trying to create dependencies between multiple submodules, which should be able to create their resources individually, as well as when they depend on each other.
Basically, I am trying to create multiple VMs, and based on the IP addresses and VIP IP address returned as output, I want to create the LBaaS pool and LBaaS pool members.
I have kept the project structure as below:
- Root_Folder
  - main.tf               // creates all the VMs
  - output.tf
  - variable.tf
  - calling_module.tf
  - modules
    - lbaas-pool
      - lbaas-pool.tf     // creates lbaas pool
      - variable.tf
      - output.tf
    - lbaas-pool-members
      - lbaas-pool-members.tf  // creates lbaas pool member
      - variable.tf
      - output.tf
calling_module.tf contains the references to the lbaas-pool and lbaas-pool-members modules, as these two modules depend on the output of the resources generated by the main.tf file.
It is giving the error below:
A managed resource has not been declared.
Since the resource has not been generated yet, running terraform plan and apply tries to load a resource object which has not been created. I am not sure how, with this structure, to declare an implicit module dependency between the resources so that each module can work individually as well as part of the complete stack when required.
Expected behaviour:
Outputs from main.tf should create the dependency automatically in Terraform version 0.14, but from the error above it seems that is not the case.
Let's say you have a module that takes an instance ID as an input, so in modules/lbaas-pool you have this inside variable.tf
variable "instance_id" {
type = string
}
Now let's say you define that instance resource in main.tf:
resource "aws_instance" "my_instance" {
...
}
Then to pass that resource to any modules defined in calling_module.tf (or in any other .tf file in the same folder as main.tf), you would do so like this:
module "lbaas-pool" {
src="modules/lbaas-pool"
instance_id = aws_instance.my_instance.id
...
}
Notice how there is no output defined at all here. Any output at the root level is for exposing outputs to the command line console, not for sending things to child modules.
Also notice how there is no data source defined here. You are not writing a script that will run in a specific order, you are writing templates that tell Terraform what you want your final infrastructure to look like. Terraform reads all that, creates a dependency graph, and then deploys everything in the order it determines. At the time of running terraform plan or apply anything you reference via a data source has to already exist. Terraform doesn't create everything in the root module, then load the submodule and create everything there, it creates things in whatever order is necessary based on the dependency graph.
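To chain the two child modules together in the same way, the lbaas-pool module can expose an output that lbaas-pool-members consumes; a hypothetical sketch (resource and attribute names assumed, not taken from the question):

# modules/lbaas-pool/output.tf (hypothetical resource name)
output "pool_id" {
  value = openstack_lb_pool_v2.pool.id
}

# calling_module.tf: the reference creates the implicit dependency
module "lbaas-pool-members" {
  source  = "./modules/lbaas-pool-members"
  pool_id = module.lbaas-pool.pool_id
}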

Terragrunt and common variables

I'm trying to do something fairly simple, but can't seem to get my head around it. I have the following structure:
- terragrunt.hcl
- dummy/
  - main.tf
  - terragrunt.hcl
I'm looking to set some common variables at the root level and use them in main.tf. How would I go about declaring the variables at the root Terragrunt level and having them available downstream?
I've tried setting them as inputs in the root, but then I have to explicitly declare variables at the dummy level for the inputs to get picked up. I'm looking to somehow define these things at the root level and not repeat variable declarations at the dummy/ level. Is this doable?
You can indeed do this, as documented here:
https://terragrunt.gruntwork.io/docs/reference/built-in-functions/#read_terragrunt_config
You can merge all inputs defined in some file above any module.
From the docs:
read_terragrunt_config(config_path, [default_val]) parses the terragrunt config at the given path and serializes the result into a map that can be used to reference the values of the parsed config. This function will expose all blocks and attributes of a terragrunt config.
For example, suppose you had a config file called common.hcl that contains common input variables:
inputs = {
  stack_name = "staging"
  account_id = "1234567890"
}
You can read these inputs in another config by using read_terragrunt_config, and merge them into the inputs:
locals {
  common_vars = read_terragrunt_config(find_in_parent_folders("common.hcl"))
}

inputs = merge(
  local.common_vars.inputs,
  {
    # additional inputs
  }
)
This function also takes in an optional second parameter which will be returned if the file does not exist:
locals {
  common_vars = read_terragrunt_config(find_in_parent_folders("i-dont-exist.hcl", "i-dont-exist.hcl"), {inputs = {}})
}

inputs = merge(
  local.common_vars.inputs, # This will be {}
  {
    # additional inputs
  }
)
Per the Terragrunt documentation: "Currently you can only reference locals defined in the same config file. Terragrunt does not automatically include locals defined in the parent config of an include block into the current context."
However, one way you can do this is as follows:
Create a file containing the common variables (e.g. myvars.hcl)
Load that in the child terragrunt:
locals {
  myvars = read_terragrunt_config(find_in_parent_folders("myvars.hcl"))
  foo    = local.myvars.locals.foo
}
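The local can then be used like any other, for example to feed the module's inputs (a sketch, assuming a foo variable exists downstream):

inputs = {
  foo = local.foo
}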
Hope that helps!
Other tools like Ansible have a directory hierarchy where a child can refer to, or override, the value of a variable set at the parent level.
Terraform does not have such a mechanism; each directory containing .tf files is a separate Terraform module. So a directory hierarchy cannot be used to pass, inherit, or reference Terraform variables.
It's perhaps better to let go of the idea of "downstream or upstream".
One way to define common variables and share them among other modules is data-only modules. An extension of this, making the common variables available more widely, is to use a Terraform registry, although that is not the intended use.
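A minimal sketch of a data-only module (hypothetical paths and values):

# modules/common/main.tf: no resources, only outputs
output "stack_name" {
  value = "staging"
}

output "account_id" {
  value = "1234567890"
}

# Any caller can then include it:
module "common" {
  source = "../modules/common"
}

# ...and reference module.common.stack_name, module.common.account_id, etc.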

Terraform - why this is not causing circular dependency?

The Terraform registry AWS VPC example terraform-aws-vpc/examples/complete-vpc/main.tf has the code below, which seems to me to be a circular dependency.
data "aws_security_group" "default" {
name = "default"
vpc_id = module.vpc.vpc_id
}
module "vpc" {
source = "../../"
name = "complete-example"
...
# VPC endpoint for SSM
enable_ssm_endpoint = true
ssm_endpoint_private_dns_enabled = true
ssm_endpoint_security_group_ids = [data.aws_security_group.default.id] # <-----
...
data.aws_security_group.default refers to "module.vpc.vpc_id" and module.vpc refers to "data.aws_security_group.default.id".
Please explain why this does not cause an error and how come module.vpc can refer to data.aws_security_group.default.id?
In the Terraform language, a module creates a separate namespace but it is not a node in the dependency graph. Instead, each of the module's Input Variables and Output Values are separate nodes in the dependency graph.
For that reason, this configuration contains the following dependencies:
The data.aws_security_group.default resource depends on module.vpc.vpc_id, which is specifically the output "vpc_id" block in that module, not the module as a whole.
The vpc module's variable "ssm_endpoint_security_group_ids" variable depends on the data.aws_security_group.default resource.
We can't see the inside of the vpc module in your question here, but the above is okay as long as there is no dependency connection between output "vpc_id" and variable "ssm_endpoint_security_group_ids" inside the module.
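To make that concrete, a hypothetical sketch of what the relevant parts inside the vpc module might look like when there is no such connection:

# Inside the vpc module (hypothetical resource names)

output "vpc_id" {
  # Depends only on the VPC resource itself.
  value = aws_vpc.this.id
}

resource "aws_vpc_endpoint" "ssm" {
  # Depends on the input variable, but nothing here feeds back
  # into the "vpc_id" output, so there is no cycle.
  vpc_id             = aws_vpc.this.id
  service_name       = "com.amazonaws.us-east-1.ssm"
  security_group_ids = var.ssm_endpoint_security_group_ids
}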
I'm assuming that such a connection does not exist, and so the evaluation order of objects here would be something like this:
1. aws_vpc.example in module.vpc is created (I just made up a name for this because it's not included in your question).
2. The output "vpc_id" in module.vpc is evaluated, referring to module.vpc.aws_vpc.example, and producing module.vpc.vpc_id.
3. data.aws_security_group.default in the root module is read, using the value of module.vpc.vpc_id.
4. The variable "ssm_endpoint_security_group_ids" for module.vpc is evaluated, referring to data.aws_security_group.default.
5. aws_vpc_endpoint.example in module.vpc is created, including a reference to var.ssm_endpoint_security_group_ids.
Notice that in all of the above I'm talking about objects in modules, not modules themselves. The modules serve only to create separate namespaces for objects, and then the separate objects themselves (which includes individual variable and output blocks) are what participate in the dependency graph.
Normally this design detail isn't visible: Terraform normally just uses it to potentially optimize concurrency by beginning work on part of a module before the whole module is ready to process. In some interesting cases like this though, you can also intentionally exploit this design so that an operation for the calling module can be explicitly sandwiched between two operations for the child module.
Another reason why we might make use of this capability is when two modules naturally depend on one another, such as in an experimental module I built that hides some of the tricky details of setting up VPC peering connections:
locals {
  vpc_nets = {
    us-west-2 = module.vpc_usw2
    us-east-1 = module.vpc_use1
  }
}

module "peering_usw2" {
  source = "../../modules/peering-mesh"

  region_vpc_networks = local.vpc_nets
  other_region_connections = {
    us-east-1 = module.peering_use1.outgoing_connection_ids
  }

  providers = {
    aws = aws.usw2
  }
}

module "peering_use1" {
  source = "../../modules/peering-mesh"

  region_vpc_networks = local.vpc_nets
  other_region_connections = {
    us-west-2 = module.peering_usw2.outgoing_connection_ids
  }

  providers = {
    aws = aws.use1
  }
}
(the above is just a relevant snippet from an example in the module repository.)
In the above case, the peering-mesh module is carefully designed to allow this mutual referencing, internally deciding for each pair of regional VPCs which one will be the peering initiator and which one will be the peering accepter. The outgoing_connection_ids output refers only to the aws_vpc_peering_connection resource and the aws_vpc_peering_connection_accepter refers only to var.other_region_connections, and so the result is a bunch of concurrent operations to create aws_vpc_peering_connection resources, followed by a bunch of concurrent operations to create aws_vpc_peering_connection_accepter resources.
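In outline, the inside of such a module might look like this (a hypothetical sketch under assumed input names, not the actual module code):

# Initiator side: create outgoing connections to peer VPCs.
resource "aws_vpc_peering_connection" "outgoing" {
  for_each = var.peer_vpc_ids # hypothetical input: map of name => VPC id

  vpc_id      = var.vpc_id
  peer_vpc_id = each.value
}

# Expose only the initiator-side resources.
output "outgoing_connection_ids" {
  value = { for k, pc in aws_vpc_peering_connection.outgoing : k => pc.id }
}

# Accepter side: depends only on the other module instance's output,
# so all connections are created first, then all accepters.
resource "aws_vpc_peering_connection_accepter" "incoming" {
  for_each = merge(values(var.other_region_connections)...)

  vpc_peering_connection_id = each.value
  auto_accept               = true
}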

Referring to resources named with variables in Terraform

I'm trying to create a module in Terraform that can be instantiated multiple times with different variable inputs. Within the module, how do I reference resources when their names depend on an input variable? I'm trying to do it via the bracket syntax ("${aws_ecs_task_definition[var.name].arn}") but I just guessed at that.
(Caveat: I might be going about this in completely the wrong way)
Here's my module's (simplified) main.tf file:
variable "name" {}
resource "aws_ecs_service" "${var.name}" {
name = "${var.name}_service"
cluster = ""
task_definition = "${aws_ecs_task_definition[var.name].arn}"
desired_count = 1
}
resource "aws_ecs_task_definition" "${var.name}" {
family = "ecs-family-${var.name}"
container_definitions = "${template_file[var.name].rendered}"
}
resource "template_file" "${var.name}_task" {
template = "${file("task-definition.json")}"
vars {
name = "${var.name}"
}
}
I'm getting the following error:
Error loading Terraform: Error downloading modules: module foo: Error loading .terraform/modules/af13a92c4edda294822b341862422ba5/main.tf: Error reading config for aws_ecs_service[${var.name}]: parse error: syntax error
I was fundamentally misunderstanding how modules worked.
Terraform does not support interpolation in resource names (see the relevant issues), but that doesn't matter in my case, because the resources of each instance of a module are in the instance's namespace. I was worried about resource names colliding, but the module system already handles that.
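For illustration, a sketch of the same module with static resource labels (using the names from the question, and the data source form of template_file); each instance of the module gets its own namespace, so the labels don't need to vary:

variable "name" {}

resource "aws_ecs_service" "service" {
  name            = "${var.name}_service"
  cluster         = ""
  task_definition = aws_ecs_task_definition.task.arn
  desired_count   = 1
}

resource "aws_ecs_task_definition" "task" {
  family                = "ecs-family-${var.name}"
  container_definitions = data.template_file.task.rendered
}

data "template_file" "task" {
  template = file("${path.module}/task-definition.json")
  vars = {
    name = var.name
  }
}

# Calling the module twice creates module.foo.aws_ecs_service.service and
# module.bar.aws_ecs_service.service, with no name collision:
#
# module "foo" {
#   source = "./my-module"
#   name   = "foo"
# }
# module "bar" {
#   source = "./my-module"
#   name   = "bar"
# }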
The Terraform documentation does not make clear its use of "NAME" (the configuration label) versus the "name" values used for the actual resources created by the infrastructure vendor (like AWS or Google Cloud).
Additionally, the argument isn't always name =; sometimes it's, say, endpoint = or even resource_group_name = or whatever.
And there are a couple of ways to generate multiple "name" values: using count, variables, etc., or inside tfvars files and running terraform apply -var-file=foo.tfvars.
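For example, a minimal sketch of generating several "name" values with count (hypothetical values, remaining arguments elided):

variable "names" {
  type    = list(string)
  default = ["alpha", "beta"]
}

resource "aws_ecs_service" "service" {
  count = length(var.names)
  name  = "${var.names[count.index]}_service"
  # ...
}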
