Terraform child module dependency on a calling resource

I am trying to create a dependency between multiple submodules, where each module should be able to create its resources individually and also when they depend on each other.
Basically, I am trying to create multiple VMs and then, based on the IP addresses and the VIP address returned as outputs, create the LBaaS pool and LBaaS pool members.
I have laid out the project structure as below:
- Root_Folder
  - main.tf // creates all the VMs
  - output.tf
  - variable.tf
  - calling_module.tf
  - modules
    - lbaas-pool
      - lbaas-pool.tf // creates the LBaaS pool
      - variable.tf
      - output.tf
    - lbaas-pool-members
      - lbaas-pool-members.tf // creates the LBaaS pool members
      - variable.tf
      - output.tf
calling_module.tf contains the references to the lbaas-pool and lbaas-pool-members modules, as these two modules depend on the outputs of the resources created by main.tf.
It is giving the error below:
A managed resource has not been declared.
As the resource has not been generated yet, running terraform plan or apply tries to load a resource object that has not been created. I am not sure how, with this structure, to declare an implicit dependency between the modules and the resources so that each module can work individually as well as when the complete stack is required.
Expected behaviour:
The outputs of main.tf should create the dependency automatically in Terraform 0.14, but the error above suggests that is not the case.

Let's say you have a module that takes an instance ID as an input, so in modules/lbaas-pool you have this inside variable.tf:

variable "instance_id" {
  type = string
}
Now let's say you define that instance resource in main.tf:

resource "aws_instance" "my_instance" {
  # ...
}
Then to pass that resource to any modules defined in calling_module.tf (or in any other .tf file in the same folder as main.tf), you would do so like this:
module "lbaas-pool" {
src="modules/lbaas-pool"
instance_id = aws_instance.my_instance.id
...
}
Notice how there is no output defined here at all. Outputs at the root level are for exposing values to the command line console, not for sending things to child modules.
Also notice how there is no data source defined here. You are not writing a script that runs in a specific order; you are writing templates that tell Terraform what you want your final infrastructure to look like. Terraform reads all of that, builds a dependency graph, and then deploys everything in the order the graph determines. At the time you run terraform plan or apply, anything you reference via a data source has to already exist. Terraform doesn't create everything in the root module, then load the submodule and create everything there; it creates things in whatever order the dependency graph requires.
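Inside the child module you then refer only to the input variable, and Terraform wires the dependency into the graph for you. Here is a minimal sketch of what modules/lbaas-pool might contain, using an AWS target group attachment as a hypothetical stand-in for your LBaaS pool member (the target_group_arn variable and the port are assumptions for illustration):

# modules/lbaas-pool/variable.tf
variable "instance_id" {
  type = string
}

variable "target_group_arn" {
  type = string
}

# modules/lbaas-pool/lbaas-pool.tf
resource "aws_lb_target_group_attachment" "member" {
  target_group_arn = var.target_group_arn
  target_id        = var.instance_id # wired from aws_instance.my_instance.id by the caller
  port             = 80
}

Because instance_id is populated from aws_instance.my_instance.id in the calling module, the dependency graph already orders the instance before the pool member; no depends_on and no root-level output is needed.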

Related

Create resource via terraform but do not recreate if manually deleted?

I want to initially create a resource using Terraform, but if the resource later gets deleted outside of TF - e.g. manually by a user - I do not want Terraform to re-create it. Is this possible?
In my case the resource is a blob in Azure Blob storage. I tried using ignore_changes = all, but that didn't help: every time I ran terraform apply, Terraform would recreate the blob.
resource "azurerm_storage_blob" "test" {
name = "myfile.txt"
storage_account_name = azurerm_storage_account.deployment.name
storage_container_name = azurerm_storage_container.deployment.name
type = "Block"
source_content = "test"
lifecycle {
ignore_changes = all
}
}
The requirement you've stated is not supported by Terraform directly. To achieve it you will need to either implement something completely outside of Terraform, or wrap Terraform in custom scripting of your own that performs a few separate Terraform steps.
If you want to implement it by wrapping Terraform then I will describe one possible way to do it, although there are various other variants of this that would get a similar effect.
My idea would be a sort of "bootstrapping mode" which your custom script enables only for the initial creation; for subsequent work you would not use the bootstrapping mode. Bootstrapping mode is a combination of an input variable to activate it and an extra step after using it:
variable "bootstrap" {
type = bool
default = false
description = "Do not use this directly. Only for use by the bootstrap script."
}
resource "azurerm_storage_blob" "test" {
count = var.bootstrap ? 1 : 0
name = "myfile.txt"
storage_account_name = azurerm_storage_account.deployment.name
storage_container_name = azurerm_storage_container.deployment.name
type = "Block"
source_content = "test"
}
This alone would not be sufficient because normally if you were to run Terraform once with -var="bootstrap=true" and then again without it Terraform would plan to destroy the blob, after noticing it's no longer present in the configuration.
So to make this work we need a special bootstrap script which wraps Terraform like this:
terraform apply -var="bootstrap=true"
terraform state rm azurerm_storage_blob.test
That second terraform state rm command above tells Terraform to forget about the object it currently has bound to azurerm_storage_blob.test. That means that the object will continue to exist but Terraform will have no record of it, and so will behave as if it doesn't exist.
If you then run the bootstrap script, you will have the blob existing but with Terraform unaware of it. You can therefore run terraform apply as normal (without setting the bootstrap variable) and Terraform will both ignore the previously created object and not plan to create a new one, because count will now be 0.
This is not a typical use-case for Terraform, so I would recommend to consider other possible solutions to meet your use-case, but I hope the above is useful as part of that design work.
If you have a resource defined in your Terraform configuration, then Terraform will always try to create it. I can't imagine what your setup is, but maybe you want to move the blob creation into a CLI script and run Terraform and the script in the desired order.

Terraform - why this is not causing circular dependency?

The Terraform Registry AWS VPC example terraform-aws-vpc/examples/complete-vpc/main.tf has the code below, which seems to me to be a circular dependency:
data "aws_security_group" "default" {
name = "default"
vpc_id = module.vpc.vpc_id
}
module "vpc" {
source = "../../"
name = "complete-example"
...
# VPC endpoint for SSM
enable_ssm_endpoint = true
ssm_endpoint_private_dns_enabled = true
ssm_endpoint_security_group_ids = [data.aws_security_group.default.id] # <-----
...
data.aws_security_group.default refers to module.vpc.vpc_id, and module.vpc refers to data.aws_security_group.default.id.
Please explain why this does not cause an error, and how module.vpc can refer to data.aws_security_group.default.id.
In the Terraform language, a module creates a separate namespace but it is not a node in the dependency graph. Instead, each of the module's Input Variables and Output Values are separate nodes in the dependency graph.
For that reason, this configuration contains the following dependencies:
- The data.aws_security_group.default resource depends on module.vpc.vpc_id, which is specifically the output "vpc_id" block in that module, not the module as a whole.
- The vpc module's variable "ssm_endpoint_security_group_ids" depends on the data.aws_security_group.default resource.
We can't see the inside of the vpc module in your question here, but the above is okay as long as there is no dependency connection between output "vpc_id" and variable "ssm_endpoint_security_group_ids" inside the module.
I'm assuming that such a connection does not exist, and so the evaluation order of objects here would be something like this:
1. aws_vpc.example in module.vpc is created (I just made up a name for this because it's not included in your question).
2. The output "vpc_id" in module.vpc is evaluated, referring to module.vpc.aws_vpc.example and producing module.vpc.vpc_id.
3. data.aws_security_group.default in the root module is read, using the value of module.vpc.vpc_id.
4. The variable "ssm_endpoint_security_group_ids" for module.vpc is evaluated, referring to data.aws_security_group.default.
5. aws_vpc_endpoint.example in module.vpc is created, including a reference to var.ssm_endpoint_security_group_ids.
Notice that in all of the above I'm talking about objects in modules, not modules themselves. The modules serve only to create separate namespaces for objects, and then the separate objects themselves (which includes individual variable and output blocks) are what participate in the dependency graph.
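A minimal sketch of what those module internals could look like (the resource names and CIDR are hypothetical, matching the made-up aws_vpc.example above); the key point is that output "vpc_id" does not depend on var.ssm_endpoint_security_group_ids, so no cycle exists:

variable "ssm_endpoint_security_group_ids" {
  type = list(string)
}

resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"
}

output "vpc_id" {
  # Depends only on the VPC resource, not on any input variable.
  value = aws_vpc.example.id
}

resource "aws_vpc_endpoint" "example" {
  # This is the object that actually consumes the variable, so it can
  # wait until the data source in the calling module has been read.
  vpc_id             = aws_vpc.example.id
  service_name       = "com.amazonaws.us-west-2.ssm"
  vpc_endpoint_type  = "Interface"
  security_group_ids = var.ssm_endpoint_security_group_ids
}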
Normally this design detail isn't visible: Terraform just uses it to potentially optimize concurrency, beginning work on part of a module before everything the module depends on is ready. In some interesting cases like this one, though, you can also intentionally exploit this design so that an operation for the calling module is explicitly sandwiched between two operations for the child module.
Another reason why we might make use of this capability is when two modules naturally depend on one another, such as in an experimental module I built that hides some of the tricky details of setting up VPC peering connections:
locals {
  vpc_nets = {
    us-west-2 = module.vpc_usw2
    us-east-1 = module.vpc_use1
  }
}

module "peering_usw2" {
  source              = "../../modules/peering-mesh"
  region_vpc_networks = local.vpc_nets

  other_region_connections = {
    us-east-1 = module.peering_use1.outgoing_connection_ids
  }

  providers = {
    aws = aws.usw2
  }
}

module "peering_use1" {
  source              = "../../modules/peering-mesh"
  region_vpc_networks = local.vpc_nets

  other_region_connections = {
    us-west-2 = module.peering_usw2.outgoing_connection_ids
  }

  providers = {
    aws = aws.use1
  }
}
(the above is just a relevant snippet from an example in the module repository.)
In the above case, the peering-mesh module is carefully designed to allow this mutual referencing, internally deciding for each pair of regional VPCs which one will be the peering initiator and which one will be the peering accepter. The outgoing_connection_ids output refers only to the aws_vpc_peering_connection resource and the aws_vpc_peering_connection_accepter refers only to var.other_region_connections, and so the result is a bunch of concurrent operations to create aws_vpc_peering_connection resources, followed by a bunch of concurrent operations to create aws_vpc_peering_connection_accepter resources.

In Terraform 0.12, how to skip creation of resource, if resource name already exists?

I am using Terraform version 0.12. I have a requirement to skip resource creation if a resource with the same name already exists.
I did the following for this:
1. Read the list of custom images:

data "ibm_is_images" "custom_images" {
}
2. Check if the image already exists:

locals {
  custom_vsi_image = contains([for x in data.ibm_is_images.custom_images.images : "true" if x.visibility == "private" && x.name == var.vnf_vpc_image_name], "true")
}

output "abc" {
  value = local.custom_vsi_image
}
3. Create the image only if it does not already exist:

resource "ibm_is_image" "custom_image" {
  count            = local.custom_vsi_image ? 0 : 1
  depends_on       = [data.ibm_is_images.custom_images]
  href             = local.image_url
  name             = var.vnf_vpc_image_name
  operating_system = "centos-7-amd64"

  timeouts {
    create = "30m"
    delete = "10m"
  }
}
This works fine the first time with terraform apply: it finds that the image does not exist, so it creates it.
When I run terraform apply for the second time, it deletes the custom_image resource created above. Any idea why it is deleting the resource when run for the 2nd time?
Also, how do I create a resource based on some condition (like only when it does not exist)?
In Terraform, you're required to decide explicitly what system is responsible for the management of a particular object, and conversely which systems are just consuming an existing object. There is no way to make that decision dynamically, because that would make the result non-deterministic and -- for objects managed by Terraform -- make it unclear which configuration's terraform destroy would destroy the object.
Indeed, that non-determinism is why you're seeing Terraform in your situation flop between trying to create and then trying to delete the resource: you've told Terraform to only manage that object if it doesn't already exist, and so the first time you run Terraform after it exists Terraform will see that the object is no longer managed and so it will plan to destroy it.
If your goal is to manage everything with Terraform, an important design task is to decide how object dependencies flow within and between Terraform configurations. In your case, it seems like there is a producer/consumer relationship between a system that manages images (which may or may not be a Terraform configuration) and one or more Terraform configurations that consume existing images.
If the images are managed by Terraform then that suggests either that your main Terraform configuration should assume the image does not exist and unconditionally create it -- if your decision is that the image is owned by the same system as what consumes it -- or it should assume that the image does already exist and retrieve the information about it using a data block.
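For the "assume it already exists" approach, the consuming configuration looks the image up by name instead of declaring it. A minimal sketch, assuming the IBM provider's ibm_is_image data source and your existing variable:

data "ibm_is_image" "custom_image" {
  # Assumes an image with this name already exists and is visible
  # to this configuration.
  name = var.vnf_vpc_image_name
}

Consumers would then refer to data.ibm_is_image.custom_image.id, and no resource block (and therefore no conditional count) is needed at all.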
A possible solution here is to write a separate Terraform configuration that manages the image and then only apply that configuration in situations where that object isn't expected to already exist. Then your configuration that consumes the existing image can just assume it exists without caring about whether it was created by the other Terraform configuration or not.
There's a longer overview of this situation in the Terraform documentation section Module Composition, and in particular the sub-section Conditional Creation of Objects. That guide is focused on interactions between modules in a single configuration, but the same underlying principles apply to dependencies between configurations (via data sources) too.

Terraform config file : Using variable inside a variable

I am trying to create a Terraform configuration file to create an EC2 instance. I am using the variables.tf file to hold all my variables. It works for most cases, but there are two cases which I am not able to achieve. Any pointers are much appreciated.
1. Using a variable for the aws_instance name. Using var.service_name or "${service_name}" does not work:
resource "aws_instance" var.service_name {
ami = "ami-010fae13a16763bb4"
instance_type = "t2.micro"
.....
}
This post explains that resource names cannot be variables, but it is pretty old, and I am not sure if this is still the case.
2. Using a variable inside another variable. For example, I have connection parameters defined like this:
connection {
  host        = "${aws_instance.terraformtest.public_ip}"
  type        = "ssh"
  user        = "ec2-user"
  private_key = "${file("C:/Users/phani/Downloads/microservices.pem")}"
}
This works; I am using the IP generated from the aws_instance resource. But if I use it like this, it doesn't work:
host = "${aws_instance.${service_name}.public_ip}"
Am I missing something, or is there a workaround?
Resource names in Terraform are required to be constants. In practice this usually isn't a problem, because a resource name only needs to be unique within a given module and so it shouldn't ever be necessary for a caller to customize what a resource is named in a child module. In the case where a particular resource is the primary object declared by a module, a common idiom is to name the resource "main", like resource "aws_instance" "main".
In situations where you need to declare several instances whose definitions share a common configuration, you can use for_each or count to produce multiple instances from a same resource block. In that case, the resource addresses include an additional "index" component, like aws_instance.example["foo"] or aws_instance.example[0], where the index can be a variable to dynamically select one of the instances produced by a single resource block.
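A minimal sketch of the for_each approach (the service_names variable and its defaults are hypothetical; the AMI is the one from your question):

variable "service_names" {
  type    = set(string)
  default = ["web", "api"]
}

resource "aws_instance" "example" {
  # One instance per service name, addressable as aws_instance.example["web"], etc.
  for_each = var.service_names

  ami           = "ami-010fae13a16763bb4"
  instance_type = "t2.micro"

  tags = {
    Name = "${each.key}_service"
  }
}

Elsewhere you can then select one of them dynamically, which covers your host case: aws_instance.example[var.service_name].public_ip.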

Referring to resources named with variables in Terraform

I'm trying to create a module in Terraform that can be instantiated multiple times with different variable inputs. Within the module, how do I reference resources when their names depend on an input variable? I'm trying to do it via the bracket syntax ("${aws_ecs_task_definition[var.name].arn}") but I just guessed at that.
(Caveat: I might be going about this in completely the wrong way)
Here's my module's (simplified) main.tf file:
variable "name" {}
resource "aws_ecs_service" "${var.name}" {
name = "${var.name}_service"
cluster = ""
task_definition = "${aws_ecs_task_definition[var.name].arn}"
desired_count = 1
}
resource "aws_ecs_task_definition" "${var.name}" {
family = "ecs-family-${var.name}"
container_definitions = "${template_file[var.name].rendered}"
}
resource "template_file" "${var.name}_task" {
template = "${file("task-definition.json")}"
vars {
name = "${var.name}"
}
}
I'm getting the following error:
Error loading Terraform: Error downloading modules: module foo: Error loading .terraform/modules/af13a92c4edda294822b341862422ba5/main.tf: Error reading config for aws_ecs_service[${var.name}]: parse error: syntax error
I was fundamentally misunderstanding how modules worked.
Terraform does not support interpolation in resource names (see the relevant issues), but that doesn't matter in my case, because the resources of each instance of a module are in the instance's namespace. I was worried about resource names colliding, but the module system already handles that.
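A minimal sketch of that namespacing (the module source path and instance names are hypothetical):

# Root module: two instances of the same child module.
module "foo" {
  source = "./modules/service"
  name   = "foo"
}

module "bar" {
  source = "./modules/service"
  name   = "bar"
}

# Inside ./modules/service the resource label stays constant:
#   resource "aws_ecs_service" "main" {
#     name = "${var.name}_service"
#     ...
#   }
# The two copies are addressed as module.foo.aws_ecs_service.main and
# module.bar.aws_ecs_service.main, so the labels never collide.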
The Terraform documentation does not make clear the difference between its use of "NAME" (the label on a resource block) and the "name" values used for the actual resources created by the infrastructure vendor (like AWS or Google Cloud).
Additionally, the argument isn't always name =; sometimes it's endpoint =, or even resource_group_name =, or whatever.
And there are a couple of ways to generate multiple "name" values: using count, variables, etc., or inside tfvars files and running terraform apply -var-file=foo.tfvars.
