I am trying to set up an EC2 instance with an Elastic IP using Terraform, reusing an existing VPC and subnets for the new instance. But Terraform is unable to recognise the existing subnet.
I am using the pre-existing subnet like this:
variable "subnet_id" {}
data "aws_subnet" "my-subnet" {
id = "${var.subnet_id}"
}
When I run terraform plan I get this error -
Error: InvalidSubnetID.NotFound: The subnet ID 'subnet-02xxxxxxxxxx7' does not exist
status code: 400, request id: c4b6142b-5dfd-458c-959d-e5440b89c9fd
on ec2.tf line 3, in data "aws_subnet" "my-subnet":
3: data "aws_subnet" "my-subnet" {
This subnet was created by terraform in the past. So why does it say it doesn't exist?
Suggested debug:
Create 2 new terraform files:
First file, create a simple subnet (or VPC then subnet whatever)
Second file, try to retrieve the subnet id like you posted (a minimal sketch of both files follows the possible outputs below).
The idea here is not to change anything else, meaning, same region, same creds, same everything.
Possible outputs:
1) you're not able to get the subnet ID - then you should be looking at things like the terraform version, provider version, stuff like that
2) you get the subnet ID, which means something in your creds, region, or copy & paste of the ID (basically human error) is causing this blockade, and you should revisit how you're doing things with emphasis on typos and permissions.
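For example, a minimal sketch of those two files, assuming the same provider, region, and credentials for both (all names and CIDRs here are made up):
# Config 1: create a throwaway VPC and subnet, and output the subnet id
resource "aws_vpc" "debug" {
  cidr_block = "10.99.0.0/16"
}

resource "aws_subnet" "debug" {
  vpc_id     = aws_vpc.debug.id
  cidr_block = "10.99.1.0/24"
}

output "debug_subnet_id" {
  value = aws_subnet.debug.id
}

# Config 2 (separate directory/state): look the subnet up again by its id
variable "subnet_id" {}

data "aws_subnet" "debug" {
  id = var.subnet_id
}

output "debug_subnet_cidr" {
  value = data.aws_subnet.debug.cidr_block
}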
We can use the data sources aws_vpc, aws_subnet, and aws_subnet_ids:
data "aws_vpc" "default" {
default = true
}
data "aws_subnet_ids" "selected" {
vpc_id = data.aws_vpc.default.id
}
data "aws_subnet" "selected" {
for_each = data.aws_subnet_ids.selected.ids
id = each.value
}
And we can use them like in this LB example below:
resource "aws_alb" "alb" {
...
subnets = [for subnet in data.aws_subnet.selected : subnet.id]
...
}
This provides a reference to the VPC and subnets so you can pass the ID to other resources. Terraform does not manage the VPC or subnets when you do this, it simply references them.
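To tie this back to the original EC2 + Elastic IP question, a rough sketch of an instance placed in one of those referenced subnets (the AMI, instance type, and resource names are placeholders):
resource "aws_instance" "web" {
  ami           = "ami-xxxxxxxx"                          # placeholder AMI
  instance_type = "t3.micro"
  subnet_id     = values(data.aws_subnet.selected)[0].id  # one of the referenced subnets
}

resource "aws_eip" "web" {
  vpc      = true   # older provider syntax; newer releases use domain = "vpc"
  instance = aws_instance.web.id
}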
It's irrelevant whether the subnet was initially created by Terraform or not.
The data source is attempting to find the subnet in the current state file. Your plan command returns the error because the subnet is not in your state file.
Try importing the subnet and then re-running the plan.
$ terraform import aws_subnet.public_subnet subnet-02xxxxxxxxxx7
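Note that terraform import only attaches the existing subnet to a resource block that must already exist in your configuration; a minimal sketch (the CIDR and VPC values are placeholders you would replace with the real subnet's settings):
resource "aws_subnet" "public_subnet" {
  vpc_id     = "vpc-xxxxxxxx"   # placeholder: the VPC the subnet actually lives in
  cidr_block = "10.0.1.0/24"    # placeholder: the subnet's actual CIDR block
}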
I'm using Terraform to deploy some dev and prod VMs on our VMware vCenter infrastructure and use vsphere tags to define responsibilities of VMs. I therefore added the following to the (sub)module:
resource "vsphere_tag" "tag" {
name = "SYS-Team"
category_id = "Responsibility"
description = "Systems group"
}
...
resource "vsphere_virtual_machine" "web" {
tags = [vsphere_tag.tag.id]
...
}
Now, when I destroy e.g. the dev infra, it also deletes the prod vsphere tag and leaves the VMs without the tag.
I tried to skip the deletion with a lifecycle block, but then I would need to delete each resource separately, which I don't like.
lifecycle {
prevent_destroy = true
}
Is there a way to add an existing tag without having the resource managed by Terraform? Something hardcoded without having the tag included as a resource like:
resource "vsphere_virtual_machine" "web" {
tags = [{
name = "SYS-Team"
category_id = "Responsibility"
description = "Systems group"
}
]
...
}
You can use Terraform's data sources to refer to things that are either not managed by Terraform or are managed in a different Terraform context when you need to retrieve some output from the resource such as an automatically generated ID.
In this case you could use the vsphere_tag data source to look up the id of the tag:
data "vsphere_tag_category" "responsibility" {
name = "Responsibility"
}
data "vsphere_tag" "sys_team" {
name = "SYS-Team"
category_id = data.vsphere_tag_category.responsibility.id
}
...
resource "vsphere_virtual_machine" "web" {
tags = [data.vsphere_tag.sys_team.id]
...
}
This will use a vSphere tag that has either been created externally or managed by Terraform in another place, allowing you to easily run terraform destroy to destroy the VM but keep the tag.
If I understood correctly, the real problem is this:
when I destroy e.g. the dev infra, it also deletes the prod vsphere tag
That should not be happening!
Any cross-environment deletion is a red flag.
It feels like the problem is in your pipeline, not your Terraform code ...
In the deployment pipelines I create, the dev resources are not mixed with prod, and IF they are mixed, you are just asking for trouble; your team should make redesigning that a high priority.
You do ask:
Is there a way to add an existing tag without having the resource managed by Terraform?
Yes you can use PowerCLI for that:
https://blogs.vmware.com/PowerCLI/2014/03/using-vsphere-tags-powercli.html
the command to add tags is really simple:
New-Tag -Name "jsmith" -Category "Owner"
You could even integrate that into Terraform code with a null_resource, something like:
resource "null_resource" "create_tag" {
provisioner "local-exec" {
when        = create   # create-time is the default, shown here for clarity
command     = "New-Tag -Name 'jsmith' -Category 'Owner'"
interpreter = ["PowerShell"]
}
}
Recently I figured out that my AKS cluster holds a subnet which is too small. Therefore I'm trying to add a second subnet and node pool, which is possible with the Azure CNI nowadays, and then later create a single properly sized subnet and migrate back to it.
During terraform plan all goes well with a valid response, but while applying it throws an error.
Error: Error Creating/Updating Subnet "me-test-k8s-subnet2" (Virtual Network "me-test-k8s-vnet" / Resource Group "me-test-k8s-rg"): network.SubnetsClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="NetcfgInvalidSubnet" Message="Subnet 'me-test-k8s-subnet2' is not valid in virtual network 'me-test-k8s-vnet'." Details=[]
on main.tf line 28, in resource "azurerm_subnet" "subnet2":
28: resource "azurerm_subnet" "subnet2" {
My original cluster was created with this Terraform configuration:
name = "${var.cluster_name}-rg"
location = "${var.location}"
}
resource "azurerm_virtual_network" "network" {
name = "${var.cluster_name}-vnet"
location = "${azurerm_resource_group.rg.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
address_space = ["10.1.0.0/16"]
}
resource "azurerm_subnet" "subnet" {
name = "${var.cluster_name}-subnet"
resource_group_name = "${azurerm_resource_group.rg.name}"
address_prefixes = ["10.1.0.0/24"]
virtual_network_name = "${azurerm_virtual_network.network.name}"
}
To make things easier, I decided to first add the subnet to the network without the node pool. This brings me to this terraform plan:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# azurerm_subnet.subnet2 will be created
+ resource "azurerm_subnet" "subnet2" {
+ address_prefix = (known after apply)
+ address_prefixes = [
+ "10.2.0.0/22",
]
+ enforce_private_link_endpoint_network_policies = false
+ enforce_private_link_service_network_policies = false
+ id = (known after apply)
+ name = "me-test-k8s-subnet2"
+ resource_group_name = "me-test-k8s-rg"
+ virtual_network_name = "me-test-k8s-vnet"
}
Hope that someone can explain to me why this error occurs.
Best,
Pim
When creating a subnet in a virtual network, you have to make sure it does not fall outside the network's address range.
You are just outside the range of your network: 10.1.0.0/16
First host: 10.1.0.1
Last host: 10.1.255.254
And you are trying to create subnet 10.2.0.0/22.
To avoid overlapping with the subnets that are already created, 10.1.4.0/22 could be used, for instance.
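A sketch of what the subnet2 block could look like with an in-range prefix (assuming 10.1.4.0/22 is otherwise unused in your vnet):
resource "azurerm_subnet" "subnet2" {
  name                 = "me-test-k8s-subnet2"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.network.name
  address_prefixes     = ["10.1.4.0/22"]   # inside 10.1.0.0/16, not overlapping the existing 10.1.0.0/24
}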
As mentioned in my comment and in the other answer, Azure is throwing this error because you are trying to add a 10.2.0.0/22 subnet to a 10.1.0.0/16 network, i.e. 10.2.0.0/22 is not part of that network.
I also want to point out that running a plan does not submit the actual API calls to Azure to make the changes, which is why things looked fine when you ran your plan but Azure complained when you tried to apply it. I think the explanation in this tutorial is good. The applicable excerpts are:
Once you are happy with your declared configuration, you can ask
Terraform to generate an execution plan for it. The plan command in
the CLI is used to generate an execution plan from a configuration.
The execution plan tells you what changes Terraform would need to make
to bring your current infrastructure to the declared state in your
configuration.
If you accept the plan you can instruct Terraform to apply changes. Terraform will make the API calls required to implement the changes. If anything goes wrong terraform will not attempt to automatically rollback the infrastructure to the state it was in before running apply. This is because apply adheres to the plan
You might also run into a similar error if you try to deploy another vnet into a subscription where there already is a vnet with the same address space.
Currently I have AWS infrastructure that was created manually. I want to import the configuration of the VPC and subnets. With
terraform import aws_vpc.example vpc-id
I can get the CIDR block of the VPC. With the CIDR block I also want to get all the subnet IDs. Is there a way to get all subnet IDs with the import command, or do I have to manually enter every ID?
I cannot find any documentation on how all subnet values can be imported. If there is some, please share.
Thank you in advance!
If you already have the vpc_id, then you can use the aws_subnet_ids data source to automatically get the IDs of its subnets.
Example from docs:
data "aws_subnet_ids" "example" {
vpc_id = aws_vpc.example.id
}
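If the goal is just to list them (for example, to then run terraform import for each subnet), a small sketch of an output built on that data source:
output "all_subnet_ids" {
  value = data.aws_subnet_ids.example.ids
}
After an apply, terraform output all_subnet_ids will print the full set of subnet IDs for that VPC.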
I have created a set of NAT gateways with count:
resource "aws_nat_gateway" "nat_gateway_ec1_dev" {
count = 3
}
And I would like to use this as a dependency while creating the route table, in which I am also using count:
resource "aws_route_table" "route_table_ics_ec1_dev_private" {
vpc_id = module.vpc_dev.vpc_id
count = 3
depends_on = [
## HOW TO ADD THE NAT GATEWAY DEPENDENCY HERE
]
}
My question: how can I add the NAT gateway dependency in the route_table resource? Since both resources are created with count, I can't statically specify the name here.
We don't usually need to use depends_on because in most cases the dependencies between objects are implied by data flow between them. In this case, this would become true when you write the route block describing the route to the NAT gateway:
resource "aws_route_table" "route_table_ics_ec1_dev_private" {
vpc_id = module.vpc_dev.vpc_id
count = 3
route {
cidr_block = "0.0.0.0/0"
nat_gateway_id = aws_nat_gateway.nat_gateway_ec1_dev[count.index].id
}
}
Because the configuration for that route depends on the id of the NAT gateway, Terraform can see that it must wait until after the NAT gateway is created before it starts creating the route table.
depends_on is for more complicated situations where the data flow between objects is insufficient because the final result depends on some side-effects that are implied by the remote API rather than explicit in Terraform. One example of such a situation is where an object doesn't become usable until an access policy is applied to it in a separate step, such as with an S3 bucket and an associated bucket policy:
resource "aws_s3_bucket" "example" {
# ...
}
resource "aws_s3_bucket_policy" "example" {
bucket = aws_s3_bucket.example.bucket
policy = # ...
}
In the above, Terraform can understand that it must create the bucket before creating the policy, but if something elsewhere in the configuration is also using that S3 bucket then it might be necessary for it to declare an explicit dependency on the policy to make sure that the necessary access rules will be in effect before trying that operation:
# Service cannot access the data from the S3 bucket
# until the policy has been activated.
depends_on = [aws_s3_bucket_policy.example]
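As a concrete (hypothetical) illustration of where that goes, aws_s3_bucket_object here stands in for "something that only works once the policy is in effect":
resource "aws_s3_bucket_object" "example" {
  bucket  = aws_s3_bucket.example.bucket
  key     = "example.txt"
  content = "hello"

  # Explicit dependency: the access granted by the bucket policy is a
  # side-effect Terraform cannot infer from data flow alone.
  depends_on = [aws_s3_bucket_policy.example]
}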
Neither count nor for_each makes any difference to depends_on: dependencies between resources in Terraform are always for entire resource and data blocks, not for the individual instances created from them. Therefore, in your case, if an explicit dependency on the NAT gateway were needed (which it isn't) then you would write it the same way, regardless of the fact that count is set on that resource:
# Not actually needed, but included for the sake of example.
depends_on = [aws_nat_gateway.nat_gateway_ec1_dev]
Is it possible to add just a new VM using Terraform? All the examples/samples and everything I have used Terraform for so far have me adding the VNet, Subnet, Network Interface, VM, Storage, etc. all at the same time, referencing the resources created within the script when creating other resources. For example, Terraform the Network Interface and then reference that when creating the VM.
What about if you already have the VNet, Subnet, etc. and just want to add, for example, a new Network Interface? Every time I try to do this and just reference what I think is the correct id, the plan stage works but then the apply fails with an "autorest:DoErrorUnlessStatusCode 400" error on the PUT call.
Is it just not possible to do this unless the resources were originally created using Terraform?
Yes, you can. You can get the id from a created subnet with an output, like:
output "subnetid" {
value = "${azurerm_subnet.xxx.id}"
}
In your next template you can use this id in the subnet_id field.
The value of "${azurerm_subnet.xxx.id}" is based on the resource group/vnet/subnet, so if you know how it is built you can also link to resources that are not created in Terraform.
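One way to consume that output from the next template is the terraform_remote_state data source; a sketch assuming the first template uses a local state file at the (made-up) path below:
data "terraform_remote_state" "network" {
  backend = "local"

  config = {
    path = "../network/terraform.tfstate"   # hypothetical path to the first template's state
  }
}

# With Terraform 0.12+ syntax, data.terraform_remote_state.network.outputs.subnetid
# can then be used wherever a subnet_id is expected.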
You can use data sources to reference existing resources, as below. You can then interpolate these to create your VM.
data "azurerm_resource_group" "existing_deploy_rg" {
name = "RG"
}
data "azurerm_virtual_network" "existing_vnet" {
name = "existing-vnet"
resource_group_name = "${data.azurerm_resource_group.existing_deploy_rg.name}"
}
data "azurerm_subnet" "existing_subnet" {
name = "existing-subnet"
resource_group_name = "${data.azurerm_resource_group.existing_deploy_rg.name}"
virtual_network_name = "${data.azurerm_virtual_network.existing_vnet.name}"
}
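For example, a sketch of a new network interface attached to that existing subnet, which you could then reference from the VM (the names and IP allocation here are placeholders):
resource "azurerm_network_interface" "new_nic" {
  name                = "new-vm-nic"
  location            = "${data.azurerm_resource_group.existing_deploy_rg.location}"
  resource_group_name = "${data.azurerm_resource_group.existing_deploy_rg.name}"

  ip_configuration {
    name                          = "ipconfig1"
    subnet_id                     = "${data.azurerm_subnet.existing_subnet.id}"
    private_ip_address_allocation = "Dynamic"
  }
}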