I have the following folder structure:
infrastructure
├───security-groups
│       main.tf
│       config.tf
│       security_groups.tf
│
└───instances
        main.tf
        config.tf
        instances.tf
I would like to reference the security group ID created in the security-groups folder.
I have tried to output the required ids in the security_groups.tf file with
output "sg_id" {
value = "${aws_security_group.server_sg.id}"
}
And then in the instances file add it as a module:
module "res" {
source = "../security-groups"
}
The problem with this approach is that when I do terraform apply in the instances folder, it tries to create the security groups as well (which I have already created by running terraform apply in the security-groups folder), and it fails because the SGs already exist.
What would be the easiest way to reference the resources created in a different folder, without changing the structure of the code?
Thank you.
To refer to an existing resource you need to use a data block. You won't refer directly to the resource block in the other folder, but instead specify a name or ID or whatever other unique identifier it has. So for a security group, you would add something like
data "aws_security_group" "sg" {
name = "the-security-group-name"
}
to your instances folder, and refer to that entity to associate your instances with that security group.
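For example (a minimal sketch; the AMI and instance type are placeholders, and data.aws_security_group.sg is the data block above), an instance could be attached to the existing group like this:
resource "aws_instance" "server" {
  ami           = "ami-12345678" # placeholder AMI ID
  instance_type = "t3.micro"

  # Attach the instance to the security group looked up above.
  vpc_security_group_ids = [data.aws_security_group.sg.id]
}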
You should also consider whether you actually want to just apply Terraform to the whole tree instead of each folder separately. Then you can refer between all your managed resources directly, like you are trying to do, and you don't have to run terraform apply as many times.
While lxop's answer describes the better practice, if you really do need to refer to an output in another local folder, you can do it like this:
data "terraform_remote_state" "sg" {
backend = "local"
config = {
path = "../security-groups/terraform.tfstate"
}
}
and then refer to it using e.g.
locals {
  sgId = data.terraform_remote_state.sg.outputs.sg_id
}
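Any other configuration in the instances folder can then use that local like a normal value; for example (a sketch, with placeholder instance arguments):
resource "aws_instance" "server" {
  ami           = "ami-12345678" # placeholder AMI ID
  instance_type = "t3.micro"

  # The ID comes from the security-groups folder's state file.
  vpc_security_group_ids = [local.sgId]
}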
I've found that I can access a local coming from my root Terraform module in its children Terraform modules.
I thought that a local is scoped to the very module it's declared in.
See: https://developer.hashicorp.com/terraform/language/values/locals#using-local-values
A local value can only be accessed in expressions within the module where it was declared.
The documentation seems to say that locals shouldn't be visible outside their module. At my current level of Terraform knowledge, I can't see what could be wrong with a root module's locals being visible in its children.
Does the visibility scope of Terraform locals span children (called) modules?
Why is that?
Is it intentional (by design) that a root module's local is visible in child modules?
Details added later:
The Terraform version I use is 1.1.5.
My sample project:
.
├── childmodulecaller.tf
├── main.tf
└── child
    └── some.tf
main.tf
locals {
  a = 1
}
childmodulecaller.tf
locals {
  b = 2
}

module "child" {
  for_each = toset(try(local.a + local.b == 3, false) ? ["name"] : [])
  source   = "./child"
}
some.tf
resource "local_file" "a_file" {
filename = "${path.module}/file1"
content = "foo!"
}
Now I see that my question was based on a wrongly interpreted observation.
I'm not sure if it is still of any value, but I'm leaving it explained here.
Perhaps it can help someone else understand the same thing and avoid the confusion I experienced, which I explain in my corrected answer.
Each module has an entirely distinct namespace from others in the configuration.
The only way values can pass from one module to another is using input variables (from caller to callee) or output values (from callee to caller).
Local values from one module are never automatically visible in another module.
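For example, the explicit way to make a root local available inside a child module is to pass it through an input variable (a minimal sketch; the variable name a and the output doubled are hypothetical):
# root module
locals {
  a = 1
}

module "child" {
  source = "./child"
  a      = local.a # pass the root local in explicitly
}

# child/variables.tf
variable "a" {
  type = number
}

# child/outputs.tf - values flow back to the caller only via outputs
output "doubled" {
  value = var.a * 2
}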
EDIT: Corrected answer
After reviewing my sample Terraform project code, I see that my finding was wrong. The local a from main.tf that I access in childmodulecaller.tf is actually accessed in a module block, but still within the scope of my root module (I understand that is because childmodulecaller.tf is directly in the root module's configuration directory). So I confused a module block in the calling parent with the child module being called.
My experiments, such as changing child/some.tf in the following way:
resource "local_file" "a_file" {
filename = "${path.module}/file1"
content = "foo!"
}
output "outa" {
value = local.a
}
output "outb" {
value = local.b
}
cause Error: Reference to undeclared local value
when terraform validate is issued (similar to what Mark B already mentioned in the question comments for Terraform version 1.3.0).
So no, the scope of Terraform locals does not span children (called) modules.
Initial wrong answer:
I think I've understood why locals are visible in children modules.
It's because children (called) modules are included into the configuration of the root (parent) module.
To call a module means to include the contents of that module into the configuration with specific values for its input variables.
https://developer.hashicorp.com/terraform/language/modules/syntax#calling-a-child-module
So yes, it's by design and not a bug; it just may not be clear from the locals documentation. The root (parent) module's locals are visible in the child module parts of the configuration, which are essentially also parts of the root (parent) module, since they are included into it.
What is the syntax to assign the managed disk id from a newly created virtual machine to an output? I would like to use it as the "source_resource_id" of an "azurerm_managed_disk" resource.
I have tried the following within outputs.tf:
output "manageddisk" {
value = azurerm_virtual_machine.vm.storage_os_disk.[0].managed_disk_id
}
However, this results in the following error:
╷
│ Error: Invalid attribute name
│
│ On outputs.tf line 17: An attribute name is required after a dot.
Not all arguments can be used as attributes; arguments and attributes are two separate things. Terraform's docs distinguish this by having an "Argument Reference" and an "Attributes Reference" section for a given resource; only the attributes of a resource can be accessed.
One pathway is to create an azurerm_managed_disk resource separately, which does expose the identifier of the managed disk. You can then use that managed disk resource's id attribute as an input to the virtual machine you're provisioning:
output "manageddisk" {
value = azurerm_managed_disk.example.id
}
From here you have the output of the managed disk, so you can use it for whatever you'd like.
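If the goal is to use it as the source_resource_id of another azurerm_managed_disk (as in the question), that id can be paired with the "Copy" create option. A sketch, assuming a resource group and a source disk both named example:
resource "azurerm_managed_disk" "copy" {
  name                 = "copied-disk" # hypothetical name
  location             = azurerm_resource_group.example.location
  resource_group_name  = azurerm_resource_group.example.name
  storage_account_type = "Standard_LRS"

  # Copy the contents of the existing managed disk.
  create_option      = "Copy"
  source_resource_id = azurerm_managed_disk.example.id
}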
We inherited a Terraform + modules layout like this, where the databases for an environment (AWS RDS) are provisioned slightly differently depending on whether Terraform is invoked on the main branch or on any feature/* branch in our CI/CD pipelines.
☡ tree -P main.tf
.
├── feature-database
│   ├── dev
│   │   └── main.tf
│   └── modules
│       └── database
│           └── main.tf
└── main-database
    ├── dev
    │   └── main.tf
    └── modules
        └── database
            └── main.tf
8 directories, 4 files
The feature-database module provisions an RDS instance from a snapshot of the RDS instance created by main-database - apart from this difference, everything else in the feature-database module is an exact copy-paste of main-database.
It seems like a code smell to have two very similar modules (i.e. */modules/database/main.tf) that are 95% identical to each other. We have concerns about maintenance, testing, and deployments with this approach, and we want to restructure to make it DRY.
So the questions naturally are:
What would be a good (ideally Terraform-native) way to manage these differences in provisioning depending on the environment? Is conditional execution a possibility, or do we just accept this as an overhead and manage it as different, mostly-identical modules?
Are there some out-of-the-box solutions with tools/approaches to help with something like this?
Non-idempotent operations such as creating and using snapshots/images are unfortunately not an ideal situation for Terraform's execution model, since they lend themselves more to an imperative execution model ("create a new instance using this particular snapshot" (where this particular is likely to change for each deployment) vs. "there should be an instance").
However, it is possible in principle to write such a thing. Without seeing the details of those modules it's hard to give specific advice, but at a high-level I'd expect to see the unified module have an optional input variable representing a snapshot ID, and then have the module vary its behavior based on whether that variable is set:
variable "source_snapshot_id" {
type = string
# This means that the variable is optional but
# it doesn't have a default value.
default = null
}
resource "aws_db_instance" "example" {
# ...
# If this variable isn't set then the value here
# will be null, which is the same as not setting
# snapshot_identifier at all.
snapshot_identifier = var.source_snapshot_id
}
The root module would then need to call this module twice and wire the result of the first instance into the second instance. Perhaps that would look something like this:
module "main_database" {
source = "../modules/database"
# ...
}
resource "aws_db_snapshot" "example" {
db_instance_identifier = module.main_database.instance_id
db_snapshot_identifier = "${module.main_database.instance_id}-feature-snapshot"
}
module "feature_database" {
source = "../modules/database"
source_snapshot_id = aws_db_snapshot.example.db_snapshot_identifier
# ...
}
On the first apply of this configuration, Terraform would first create the "main database", then immediately create a snapshot of it, and then create the "feature database" using that snapshot. In order for that to be useful the module would presumably need to encapsulate some actions to put some schema and possibly some data into the database, or else the snapshot would just be of an empty database. If those actions involve some other resources alongside the main aws_db_instance then you can encapsulate the correct ordering by declaring additional dependencies on the instance_id output value I presumed in my example above:
output "instance_id" {
# This reference serves as an implicit dependency
# on the DB instance itself.
value = aws_db_instance.example.id
# ...but if you have other resources that arrange
# for the database to have interesting data inside
# it then you'll likely want to declare those
# dependencies too, so that the root module won't
# start trying to create a snapshot until the
# database contents are ready
depends_on = [
aws_db_instance_role_association.example,
null_resource.example,
# ...
]
}
I've focused on the general Terraform patterns here rather than on the specific details of RDS, because I'm not super familiar with these particular resource types, but hopefully even if I got any details wrong above you can still see the general idea here and adapt it to your specific situation.
I am trying to create a dependency between multiple submodules, which should be able to create their resources individually as well as when they depend on each other.
Basically, I am trying to create multiple VMs, and based on the IP addresses and the VIP address returned as output, I want to create the LBaaS pool and LBaaS pool members.
I have kept the project structure as below:
- Root_Folder
  - main.tf // create all the vm's
  - output.tf
  - variable.tf
  - calling_module.tf
  - modules
    - lbaas-pool
      - lbaas-pool.tf // create lbaas pool
      - variable.tf
      - output.tf
    - lbaas-pool-members
      - lbaas-pool-members.tf // create lbaas pool member
      - variable.tf
      - output.tf
calling_module.tf contains the references to the lbaas-pool and lbaas-pool-members modules, as these two modules depend on the output of the resources generated by the main.tf file.
It is giving the below error:
A managed resource has not been declared.
The resource has not been generated yet, so while running the terraform plan and apply commands Terraform tries to load a resource object that has not been created. I am not sure how, with this structure, to declare the implicit dependency between the resources and the modules so that each module can work individually as well as, when required, as part of the complete stack.
Expected behaviour:
The main.tf output parameters should create the dependency automatically in Terraform version 0.14, but from the above error it seems that is not the case.
Let's say you have a module that takes an instance ID as an input, so in modules/lbaas-pool you have this inside variable.tf
variable "instance_id" {
type = string
}
Now let's say you define that instance resource in main.tf:
resource "aws_instance" "my_instance" {
...
}
Then to pass that resource to any modules defined in calling_module.tf (or in any other .tf file in the same folder as main.tf), you would do so like this:
module "lbaas-pool" {
src="modules/lbaas-pool"
instance_id = aws_instance.my_instance.id
...
}
Notice how there is no output defined at all here. Any output at the root level is for exposing outputs to the command line console, not for sending things to child modules.
Also notice how there is no data source defined here. You are not writing a script that will run in a specific order, you are writing templates that tell Terraform what you want your final infrastructure to look like. Terraform reads all that, creates a dependency graph, and then deploys everything in the order it determines. At the time of running terraform plan or apply anything you reference via a data source has to already exist. Terraform doesn't create everything in the root module, then load the submodule and create everything there, it creates things in whatever order is necessary based on the dependency graph.
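To make that concrete, dependent child modules are wired together by feeding one module's outputs into another module's input variables; the reference itself is what creates the implicit dependency. A sketch, assuming hypothetical pool_id output and input names:
module "lbaas-pool" {
  source      = "./modules/lbaas-pool"
  instance_id = aws_instance.my_instance.id
}

module "lbaas-pool-members" {
  source = "./modules/lbaas-pool-members"

  # Referencing the other module's output makes Terraform create
  # the pool before the pool members, with no explicit ordering needed.
  pool_id = module.lbaas-pool.pool_id
}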
I have created some VMs with a main.tf, and terraform generates a cluster.tfstate file.
Now because of refactoring, I move the VM resource definitions into a module, and refer to this module in main.tf. When I run terraform apply --state=./cluster.tfstate, will terraform destroy and recreate these VMs?
I would expect it will not. Is my understanding correct?
Let's try this using the example given in the aws_instance documentation:
# Create a new instance of the latest Ubuntu 14.04 on an
# t2.micro node with an AWS Tag naming it "HelloWorld"
provider "aws" {
region = "us-west-2"
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"] # Canonical
}
resource "aws_instance" "web" {
ami = "${data.aws_ami.ubuntu.id}"
instance_type = "t2.micro"
tags {
Name = "HelloWorld"
}
}
If we terraform apply this, we get an instance that is referenced within Terraform as aws_instance.web:
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
If we move this definition to a module ubuntu_instance, the directory structure might look like this with the above code in instance.tf:
.
├── main.tf
└── ubuntu_instance
    └── instance.tf
Now you intend to create the same instance as before, but internally Terraform now names this resource module.ubuntu_instance.aws_instance.web.
If you attempt to apply this, you would get the following:
Plan: 1 to add, 0 to change, 1 to destroy.
The reason this happens is that Terraform has no idea that the old and new code refer to the same instance. When you refactor into a module, as far as Terraform can tell you are removing one resource (aws_instance.web) and declaring a new one (module.ubuntu_instance.aws_instance.web), and thus Terraform destroys the old resource and creates a new one.
Terraform maps your code to real resources in the state file. When you create an instance, you can only know which real instance maps to your aws_instance because of the state file. So the proper way (as mentioned by Jun) is to refactor your code and then tell Terraform to move the mapping to the real instance from aws_instance.web to module.ubuntu_instance.aws_instance.web. Then when you apply, Terraform will leave the instance alone because it matches what your code says. The article Jun linked to is a good discussion of this.
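In current Terraform that move is done with the terraform state mv command; roughly (adjust the resource addresses, and point it at your cluster.tfstate file if you keep state in a non-default location):
terraform state mv aws_instance.web module.ubuntu_instance.aws_instance.web
After that, terraform plan should report no changes for the instance.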