Terraform: depends_on for a resource created with count - terraform

I have created a set of NAT gateways with count:
resource "aws_nat_gateway" "nat_gateway_ec1_dev" {
count = 3
}
And I would like to use it as a dependency while creating a route table, in which I am also using count:
resource "aws_route_table" "route_table_ics_ec1_dev_private" {
vpc_id = module.vpc_dev.vpc_id
count = 3
depends_on = [
##HOW TO ADD NAT GATEWAY DEPENDCIE HERE
]
}
My question is: how can I add the NAT gateway dependency in the route_table resource? Since both resources are created with count, I can't statically specify the name here.

We don't usually need to use depends_on because in most cases the dependencies between objects are implied by data flow between them. In this case, this would become true when you write the route block describing the route to the NAT gateway:
resource "aws_route_table" "route_table_ics_ec1_dev_private" {
vpc_id = module.vpc_dev.vpc_id
count = 3
route {
cidr_block = "0.0.0.0/0"
nat_gateway_id = aws_nat_gateway.nat_gateway_ec1_dev[count.index].id
}
}
Because the configuration for that route depends on the id of the NAT gateway, Terraform can see that it must wait until after the NAT gateway is created before it starts creating the route table.
depends_on is for more complicated situations where the data flow between objects is insufficient because the final result depends on some side-effects that are implied by the remote API rather than explicit in Terraform. One example of such a situation is where an object doesn't become usable until an access policy is applied to it in a separate step, such as with an S3 bucket and an associated bucket policy:
resource "aws_s3_bucket" "example" {
# ...
}
resource "aws_s3_bucket_policy" "example" {
bucket = aws_s3_bucket.example.bucket
policy = # ...
}
In the above, Terraform can understand that it must create the bucket before creating the policy, but if something elsewhere in the configuration is also using that S3 bucket then it might be necessary for it to declare an explicit dependency on the policy to make sure that the necessary access rules will be in effect before trying that operation:
# Service cannot access the data from the S3 bucket
# until the policy has been activated.
depends_on = [aws_s3_bucket_policy.example]
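To make the fragment above concrete, here is a minimal sketch of a resource it might live in; the consuming Lambda function is purely hypothetical and not part of the original answer:

resource "aws_lambda_function" "consumer_example" {
  # ... hypothetical function that reads objects from the bucket at runtime ...

  environment {
    variables = {
      BUCKET = aws_s3_bucket.example.bucket
    }
  }

  # The function cannot access the data in the S3 bucket
  # until the policy has been activated.
  depends_on = [aws_s3_bucket_policy.example]
}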
Neither count nor for_each makes any difference to depends_on: dependencies between resources in Terraform are always between entire resource and data blocks, not between the individual instances created from them. Therefore in your case, if an explicit dependency on the NAT gateway were needed (which it isn't), you would write it the same way, regardless of the fact that count is set on that resource:
# Not actually needed, but included for the sake of example.
depends_on = [aws_nat_gateway.nat_gateway_ec1_dev]
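For completeness, here is how that line would sit inside the route table resource from the question (everything here is taken from the configuration shown earlier, with only the explicit dependency added):

resource "aws_route_table" "route_table_ics_ec1_dev_private" {
  vpc_id = module.vpc_dev.vpc_id
  count  = 3

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat_gateway_ec1_dev[count.index].id
  }

  # Not actually needed, but included for the sake of example.
  depends_on = [aws_nat_gateway.nat_gateway_ec1_dev]
}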

Related

Terraform: How to obtain VPCE service name when it was dynamically created

I am trying to obtain (via Terraform) the DNS name of a dynamically created VPC endpoint using a data source, but the problem I am facing is that the service name is not known until the resources have been created. See notes below.
Is there any way of retrieving this information, as a hard-coded service name just doesn't work for automation?
E.g. this will not work, as the service_name is dynamic:
resource "aws_transfer_server" "sftp_lambda" {
count = local.vpc_lambda_enabled
domain = "S3"
identity_provider_type = "AWS_LAMBDA"
endpoint_type = "VPC"
protocols = ["SFTP"]
logging_role = var.loggingrole
function = var.lambda_idp_arn[count.index]
endpoint_details = {
security_group_ids = var.securitygroupids
subnet_ids = var.subnet_ids
vpc_id = var.vpc_id
}
tags = {
NAME = "tf-test-transfer-server"
ENV = "test"
}
}
data "aws_vpc_endpoint" "vpce" {
count = local.vpc_lambda_enabled
vpc_id = var.vpc_id
service_name = "com.amazonaws.transfer.server.c-001"
depends_on = [aws_transfer_server.sftp_lambda]
}
output "transfer_server_dnsentry" {
value = data.aws_vpc_endpoint.vpce.0.dns_entry[0].dns_name
}
Note: The VPCE was created automatically from an AWS SFTP transfer server resource that was configured with endpoint type of VPC (not VPC_ENDPOINT which is now deprecated). I had no control over the naming of the endpoint service name. It was all created in the background.
Minimum AWS provider version: 3.69.0 required.
Here is an example CloudFormation script to set up an SFTP transfer server using Lambda as the IDP.
This will create the VPCE automatically.
So my aim here is to output the DNS name from the auto-created VPC endpoint using Terraform, if at all possible.
example setup in cloudFormation
data source: aws_vpc_endpoint
resource: aws_transfer_server
I had a response from HashiCorp Terraform Support on this, and this is what they suggested:
You can get the service name of the SFTP-server-created VPC endpoint by reading the following exported attribute of the vpc_endpoint_service resource [a].
NOTE: There are certain setups that cause AWS to create additional resources outside of what you configured. The AWS SFTP transfer service is one of them. This behavior is outside Terraform's control and is more due to how AWS designed the service.
You can bring that VPC endpoint back under Terraform's control, however, by importing the VPC endpoint it creates on your behalf AFTER the transfer service has been created, via the VPCe ID [b].
If you want more ideas of pulling the service name from your current AWS setup, feel free to check out this example [c].
Hope that helps! Thank you.
[a] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_endpoint_service#service_name
[b] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_endpoint#import
[c] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_endpoint#gateway-load-balancer-endpoint-type
There is a way forward, like I shared earlier with the imports, but it is not going to be fully automated unfortunately.
Optionally, you can use a provisioner [1] and the aws ec2 describe-vpc-endpoint-services --service-names command [2] to get the service names you need.
I'm afraid that's the last workaround I can provide; as explained in our doc here [3], as much as we'd like it to, Terraform isn't able to solve all use cases.
[1] https://www.terraform.io/language/resources/provisioners/remote-exec
[2] https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/describe-vpc-endpoint-services.html
[3] https://www.terraform.io/language/resources/provisioners/syntax
I've finally found the solution:
data "aws_vpc_endpoint" "transfer_server_vpce" {
count = local.is_enabled
vpc_id = var.vpc_id
filter {
name = "vpc-endpoint-id"
values = ["${aws_transfer_server.transfer_server[0].endpoint_details[0].vpc_endpoint_id}"]
}
}
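With that in place, the DNS name can be output much like the earlier attempt, just without the hard-coded service name; a sketch, assuming the count stays as above:

output "transfer_server_dnsentry" {
  value = data.aws_vpc_endpoint.transfer_server_vpce[0].dns_entry[0].dns_name
}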

Terraform AzureRM Continually Modifying API Management with Proxy Configuration for Default Endpoint

We are terraforming our Azure API Management instance.
...
resource "azurerm_api_management" "apim" {
name = "the-apim"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
...
hostname_configuration {
proxy {
host_name = "the-apim.azure-api.net"
negotiate_client_certificate = true
}
}
}
...
We need to include the hostname_configuration block so that we can switch negotiate_client_certificate to true for the default endpoint.
This does the job; however, every time Terraform runs, it plans to modify the APIM instance by adding the hostname_configuration block again:
+ hostname_configuration {
    + proxy {
        + host_name                    = "the-apim.azure-api.net"
        + negotiate_client_certificate = true
      }
  }
Is there a way to prevent this from happening? In the portal I can see this value is set to true.
I suggest you try pairing this with lifecycle > ignore_changes.
The ignore_changes feature is intended to be used when a resource is created with references to data that may change in the future, but should not affect said resource after its creation. In some rare cases, settings of a remote object are modified by processes outside of Terraform, which Terraform would then attempt to "fix" on the next run. In order to make Terraform share management responsibilities of a single object with a separate process, the ignore_changes meta-argument specifies resource attributes that Terraform should ignore when planning updates to the associated remote object.
In your case, the hostname_configuration is considered a "nested block" or "attribute as block" in Terraform, so the usage of ignore_changes is not so straightforward (you can't just add the property name, as you would if you wanted to ignore changes in your resource_group_name for example, which is directly a property). From a GitHub issue back in 2018, it seems you could use the TypeSet hash of the nested block in the ignore_changes section.
Even though I can't test this, my suggestion for you is:
deploy your azurerm_api_management resource normally with the hostname_configuration block;
check the state file for your resource and get the TypeSet hash of the hostname_configuration part; it should look similar to hostname_configuration.XXXXXX;
add an ignore_changes section passing the above:
resource "azurerm_api_management" "apim" {
# ...
lifecycle {
ignore_changes = [
"hostname_configuration.XXXXXX",
]
}
}
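As an aside (my assumption, not part of the original answer): on current Terraform versions, ignore_changes takes attribute references rather than quoted strings, so an alternative sketch would ignore drift in the whole block by naming it directly:

resource "azurerm_api_management" "apim" {
  # ...

  lifecycle {
    # Assumes Terraform 0.12+ reference syntax; ignores any drift
    # in the entire hostname_configuration block.
    ignore_changes = [
      hostname_configuration,
    ]
  }
}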
Sometimes such issues occur due to bugs in the provider; probably it is not storing the configuration in the state file, or not reading the stored state for this block. Try upgrading to the latest available provider version and see if that sorts the issue.
If that does not solve it, you can try defining this configuration as a separate resource, as per the Terraform documentation: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/api_management
It's possible to define Custom Domains both within the azurerm_api_management resource via the hostname_configurations block and by using the azurerm_api_management_custom_domain resource. However it's not possible to use both methods to manage Custom Domains within an API Management Service, since there'll be conflicts.
So please try removing that hostname_configuration block and adding it as a separate resource, as per this documentation: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/api_management_custom_domain
This will most likely fix the issue.
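A rough sketch of what that separate resource could look like, reusing the host name from the question; the argument names follow the linked azurerm docs, but the inner block name (proxy vs. gateway) differs between provider versions, so treat this as an assumption to verify:

resource "azurerm_api_management_custom_domain" "example" {
  api_management_id = azurerm_api_management.apim.id

  proxy {
    host_name                    = "the-apim.azure-api.net"
    negotiate_client_certificate = true
  }
}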

Create azurerm_sql_firewall_rule for an azurerm_app_service_plan in Terraform

I want to whitelist the IP addresses of an App Service Plan on a managed SQL Server.
The problem is that the azurerm_app_service_plan resource exposes its IP addresses as a comma-separated value in the possible_outbound_ip_addresses attribute.
I need to create one azurerm_sql_firewall_rule for each of these IPs.
If I try the following approach, Terraform gives an error:
locals {
  staging_app_service_ip = {
    for v in split(",", azurerm_function_app.prs.possible_outbound_ip_addresses) : v => v
  }
}

resource "azurerm_sql_firewall_rule" "example" {
  for_each            = local.staging_app_service_ip
  name                = "my_rules_${each.value}"
  resource_group_name = data.azurerm_resource_group.example.name
  server_name         = var.MY_SERVER_NAME
  start_ip_address    = each.value
  end_ip_address      = each.value
}
I then get the error:
The "for_each" value depends on resource attributes that cannot be
determined until apply, so Terraform cannot predict how many instances
will be created. To work around this, use the -target argument to
first apply only the resources that the for_each depends on.
I'm not sure how to work around this.
For the time being, I have added the IP addresses as a variable and am setting its value manually.
What would be the correct approach to create these firewall rules?
I'm trying to deal with the same issue. My way around it is to perform a multi-step setup.
In the first step I run a Terraform configuration that creates the database, app service, API Management, and some other resources. Next I deploy the app. Lastly I run Terraform again, but this time the second configuration creates the SQL firewall rules and the API Management API from the deployed app's Swagger definition.
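For reference, the -target workaround that the error message itself suggests would look roughly like this (the resource address is taken from the question's locals block); the function app is applied first so its outbound IPs are known before the firewall rules are planned:

terraform apply -target=azurerm_function_app.prs
terraform apply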

Terraform unable to recognise pre-existing subnet in AWS

I am trying to set up an EC2 instance with an Elastic IP using Terraform. I am trying to use an existing VPC and subnets for the new EC2 instance.
But Terraform is unable to recognise the existing subnet.
I am using the pre-existing subnet like this:
variable "subnet_id" {}
data "aws_subnet" "my-subnet" {
id = "${var.subnet_id}"
}
When I run terraform plan I get this error:
Error: InvalidSubnetID.NotFound: The subnet ID 'subnet-02xxxxxxxxxx7' does not exist
status code: 400, request id: c4b6142b-5dfd-458c-959d-e5440b89c9fd
on ec2.tf line 3, in data "aws_subnet" "my-subnet":
3: data "aws_subnet" "my-subnet" {
This subnet was created by Terraform in the past. So why does it say it doesn't exist?
Suggested debug:
Create 2 new Terraform files:
First file: create a simple subnet (or a VPC and then a subnet, whatever).
Second file: try to retrieve the subnet ID like you posted.
The idea here is not to change anything else, meaning: same region, same creds, same everything.
Possible outcomes:
1) You're not able to get the subnet ID - then you should be looking at things like the Terraform version, provider version, stuff like that.
2) You get the subnet ID, which means something in your creds, region, or copy & paste of the name - basically human error - is causing this blockade, and you should revisit how you're doing things with emphasis on typos and permissions.
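A minimal sketch of that two-file experiment, with the VPC ID and CIDR block as placeholders:

# First file: create a throwaway subnet.
resource "aws_subnet" "debug" {
  vpc_id     = "vpc-0example"
  cidr_block = "10.0.99.0/24"
}

output "debug_subnet_id" {
  value = aws_subnet.debug.id
}

# Second file: look the subnet up the same way the question does.
variable "subnet_id" {}

data "aws_subnet" "debug_lookup" {
  id = var.subnet_id
}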
We can use the data sources aws_vpc, aws_subnet and aws_subnet_ids:
data "aws_vpc" "default" {
default = true
}
data "aws_subnet_ids" "selected" {
vpc_id = data.aws_vpc.default.id
}
data "aws_subnet" "selected" {
for_each = data.aws_subnet_ids.selected.ids
id = each.value
}
And we can use them like in this LB example below:
resource "aws_alb" "alb" {
...
subnets = [for subnet in data.aws_subnet.selected : subnet.id]
...
}
This provides a reference to the VPC and subnets so you can pass the IDs to other resources. Terraform does not manage the VPC or subnets when you do this; it simply references them.
It's irrelevant whether the subnet was initially created by Terraform or not.
The data source is attempting to find the subnet in the current state file. Your plan command returns the error because the subnet is not in your state file.
Try importing the subnet and then re-running the plan.
$ terraform import aws_subnet.public_subnet subnet-02xxxxxxxxxx7

How does Terraform know which resource it should run first to spin up infrastructure?

I'm using Terraform to spin up AWS DMS. To spin up DMS, we need subnet groups, a DMS replication task, DMS endpoints, and a DMS replication instance. I've configured everything using the Terraform documentation. My question is: how will Terraform know which task needs to be completed first in order to spin up the dependent resources?
Do we need to declare it somewhere in Terraform, or is Terraform intelligent enough to run them in the right order?
Terraform uses references in the configuration to infer ordering.
Consider the following example:
resource "aws_s3_bucket" "example" {
bucket = "terraform-dependencies-example"
acl = "private"
}
resource "aws_s3_bucket_object" "example" {
bucket = aws_s3_bucket.example.bucket # reference to aws_s3_bucket.example
key = "example"
content = "example"
}
In the above example, the aws_s3_bucket_object.example resource contains an expression that refers to aws_s3_bucket.example.bucket, and so Terraform can infer that aws_s3_bucket.example must be created before aws_s3_bucket_object.example.
These implicit dependencies created by references are the primary way to create ordering in Terraform. In some rare circumstances we need to represent dependencies that cannot be inferred by expressions, and so for those exceptional circumstances only we can add additional explicit dependencies using the depends_on meta argument.
One situation where that can occur is with AWS IAM policies, where the graph created naturally by references is missing an edge that matters.
Due to AWS IAM's data model, we must first create a role and then assign a policy to it as a separate step, but the objects assuming that role (in this case, an AWS Lambda function just for example) only take a reference to the role itself, not to the policy. With the dependencies created implicitly by references then, the Lambda function could potentially be created before its role has the access it needs, causing errors if the function tries to take any actions before the policy is assigned.
To address this, we can use depends_on in the aws_lambda_function resource block to force that extra dependency and thus create the correct execution order:
resource "aws_iam_role" "example" {
# ...
}
resource "aws_iam_role_policy" "example" {
# ...
}
resource "aws_lambda_function" "exmaple" {
depends_on = [aws_iam_role_policy.example]
}
For more information on resource dependencies in Terraform, see Resource Dependencies in the Terraform documentation.
Terraform will automatically create the resources in an order in which all dependencies can be fulfilled.
E.g.: if you set a security group ID in your DMS definition as "${aws_security_group.my_sg.id}", Terraform recognizes this dependency and creates the security group prior to the DMS resource.
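A minimal sketch of that implicit dependency; the resource names and arguments here are illustrative, not taken from an actual DMS configuration:

resource "aws_security_group" "my_sg" {
  # ...
}

resource "aws_dms_replication_instance" "example" {
  # ...
  # This reference alone is enough for Terraform to create the
  # security group before the replication instance.
  vpc_security_group_ids = [aws_security_group.my_sg.id]
}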
