Cannot destroy one module using Terraform destroy

I have created a few instances using a Terraform module:
resource "google_compute_instance" "cluster" {
count = var.num_instances
name = "redis-${format("%03d", count.index)}"
...
attached_disk {
source =
google_compute_disk.ssd[count.index].name
}
}
resource "google_compute_disk" "ssd" {
count = var.num_instances
name = "redis-ssd-${format("%03d", count.index)}"
...
zone = data.google_compute_zones.available.names[count.index % length(data.google_compute_zones.available.names)]
}
resource "google_dns_record_set" "dns" {
count = var.num_instances
name = "${var.dns_name}-${format("%03d",
count.index +)}.something.com"
...
managed_zone = XXX
rrdatas = [google_compute_instance.cluster[count.index].network_interface.0.network_ip]
}
module "test" {
source = "/modules_folder"
num_instances = 2
...
}
How can I destroy one of the instances and its dependencies, say instance[1] + ssd[1] + dns[1]? I tried to destroy only one of them using
terraform destroy -target module.test.google_compute_instance.cluster[1]
but it did not destroy ssd[1], and it tried to destroy both DNS records:
module.test.google_dns_record_set.dns[0]
module.test.google_dns_record_set.dns[1]
If I run
terraform destroy -target module.test.google_compute_disk.ssd[1]
it also tries to destroy both instances and both DNS records:
module.test.google_compute_instance.cluster[0]
module.test.google_compute_instance.cluster[1]
module.test.google_dns_record_set.dns[0]
module.test.google_dns_record_set.dns[1]
How do I destroy only instance[1], ssd[1], and dns[1]? I feel my code may have a bug; maybe count.index has some problem that triggers the unexpected destroys?
I am using Terraform v0.12.29.

I'm a bit confused as to why you want to run terraform destroy; what you'd normally do is decrement num_instances and then run terraform apply.
If you do a terraform destroy, the next terraform apply will put you right back to whatever you have configured in your Terraform source.
It's a bit hard to see what's going on without more of your source, but setting num_instances on the module and using it to drive count inside the module's resources feels wonky.
I would recommend you upgrade Terraform and use count or for_each directly on the module rather than within it (this was introduced in Terraform 0.13.0); see https://www.hashicorp.com/blog/terraform-0-13-brings-powerful-meta-arguments-to-modular-workflows
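As a rough sketch of that 0.13+ pattern (the keys and the suffix variable here are illustrative assumptions, not your exact config), each module instance then manages one instance/disk/DNS trio, and removing a key from the set destroys only that trio on the next apply:
module "test" {
  source        = "/modules_folder"
  for_each      = toset(["000", "001"]) # one module instance per key
  num_instances = 1
  suffix        = each.key # hypothetical variable the module would use to build unique names
}
# Removing "001" from the set and running `terraform apply` destroys only
# that key's VM, disk, and DNS record.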

Remove resources one at a time, passing multiple -target flags if needed:
terraform destroy -target RESOURCE_TYPE.NAME -target RESOURCE_TYPE2.NAME
where RESOURCE_TYPE and NAME come from the resource declaration:
resource "resource_type" "resource_name" {
...
}
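Applied to the addresses in the question, that would look something like this (a sketch; quoting the indexed addresses keeps the shell from interpreting the brackets):
terraform destroy \
  -target 'module.test.google_compute_instance.cluster[1]' \
  -target 'module.test.google_compute_disk.ssd[1]' \
  -target 'module.test.google_dns_record_set.dns[1]'
Note that a destroy plan with -target also includes resources that depend on the targets, which is likely why you saw the extra instances and DNS records in your plans.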

Related

Reference multiple aws_instance in terraform for template output

We want to deploy services into several regions.
It looks like, because of the aws provider, we can't just use count or for_each, as the provider argument can't be interpolated. Thus I need to set this up manually:
resource "aws_instance" "app-us-west-1" {
provider = aws.us-west-1
#other stuff
}
resource "aws_instance" "app-us-east-1" {
provider = aws.us-east-1
#other stuff
}
When running this, I would like to create a file which contains all the IPs created (for an Ansible inventory).
I was looking at this answer:
https://stackoverflow.com/a/61788089/169252
and trying to adapt it for my case:
resource "local_file" "app-hosts" {
content = templatefile("${path.module}/templates/app_hosts.tpl",
{
hosts = aws_instance[*].public_ip
}
)
filename = "app-hosts.cfg"
}
And then setting up the template accordingly.
But this fails:
Error: Invalid reference
on app.tf line 144, in resource "local_file" "app-hosts":
122: hosts = aws_instance[*].public_ip
A reference to a resource type must be followed by at least one attribute
access, specifying the resource name
I suspect that I can't just reference all the aws_instance resources defined as above like this. Maybe I need a different syntax to refer to all the aws_instance resources in this file, or maybe I need to use a module somehow. Can someone confirm this?
Using Terraform v0.12.24.
EDIT: The provider definitions use alias, and it's all in the same app.tf, which I naively assumed I could apply in one go with terraform apply (did I mention I am a beginner with Terraform?):
provider "aws" {
alias = "us-east-1"
region = "us-east-1"
}
provider "aws" {
alias = "us-west-1"
region = "us-west-1"
}
My current workaround is to not do a join but simply to list them all individually:
{
  host1 = aws_instance.app-us-west-1.public_ip
  host2 = aws_instance.app-us-east-1.public_ip
  # more hosts
}
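A slightly tidier variant of that workaround (a sketch, assuming the two resources above; the local name app_instances is something I'm introducing) is to gather the individually named resources into a local list and splat over it:
locals {
  app_instances = [
    aws_instance.app-us-west-1,
    aws_instance.app-us-east-1,
    # more instances
  ]
}

resource "local_file" "app-hosts" {
  content = templatefile("${path.module}/templates/app_hosts.tpl", {
    # a full splat over the local list works, unlike the bare resource type
    hosts = local.app_instances[*].public_ip
  })
  filename = "app-hosts.cfg"
}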

Why does Terraform want to create the custom attribute again?

I am trying to use the vsphere_custom_attribute resource. The first run against vSphere works fine, but on the second run I get the error below.
Do you have any ideas how to solve this, or did I just use it wrong?
I am using these versions of Terraform and the vSphere provider:
Terraform v0.12.12
provider.template v2.1.2
provider.vsphere v1.13.0
These are the code parts where I create the custom attribute and where I use it.
resource "vsphere_custom_attribute" "hostname" {
name = "hypervisor.hostname"
managed_object_type = "VirtualMachine"
}
resource "vsphere_virtual_machine" "vm" {
...
custom_attributes = "${map(vsphere_custom_attribute.hostname.id, "${var.vsphere_name}${var.vsphere_dom}" )}"
...
}
Error:
Error: could not create custom attribute: ServerFaultCode: The name 'hypervisor.hostname' already exists.
on main.tf line 32, in resource "vsphere_custom_attribute" "hostname":
32: resource "vsphere_custom_attribute" "hostname" {
I do not understand why Terraform wants to create the custom attribute when it already exists. terraform plan:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # vsphere_custom_attribute.hostname will be created
  + resource "vsphere_custom_attribute" "hostname" {
      + id                  = (known after apply)
      + managed_object_type = "VirtualMachine"
      + name                = "hypervisor.hostname"
    }
This is because you're telling Terraform to create the resource instead of pulling the data from vSphere. I had this confusion as well until I understood the required 'vsphere_datastore' input.
Try something like this (I'm using Octopus Deploy for variable replacement, so ignore where I use the invalid #{; it should be ${, or just a plain string, for 0.12.x):
...
data "vsphere_custom_attribute" "consul_backend_path" {
  name = "consul.backend.path"
}
...
resource "vsphere_virtual_machine" "windows_virtual_machine" {
  ...
  custom_attributes = map(data.vsphere_custom_attribute.consul_backend_path.id, "custom_attribute_value")
  ...
}
Keep in mind, this requires the custom attribute to already exist in vCenter before this virtual_machine resource is created. Otherwise you will need conditional logic to check whether the attribute is required.
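Alternatively, if you want this configuration to own the attribute rather than just read it, you could import the existing attribute into state so Terraform stops trying to recreate it. A sketch (the vsphere provider documents importing custom attributes by name):
terraform import vsphere_custom_attribute.hostname hypervisor.hostname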

How to iterate multiple resources over the same list?

New to Terraform here. I'm trying to create multiple projects (in Google Cloud) using Terraform. The problem is that I have to create multiple resources to completely set up a project. I tried count, but how can I tie multiple resources together sequentially using count? These are the resources I need to create per project:
Create project using resource "google_project"
Enable API service using resource "google_project_service"
Attach the service project to a host project using resource "google_compute_shared_vpc_service_project" (I'm using shared VPC)
This works if I want to create a single project. But, if I pass a list of projects as input, how can I execute all the above resources for each project in that list sequentially?
E.g.
Input:
project_list = ["proj-1", "proj-2"]
Execute the following sequentially:
resource "google-project" for "proj-1"
resource "google_project_service" for "proj-1"
resource "google_compute_shared_vpc_service_project" for "proj-1"
resource "google-project" for "proj-2"
resource "google_project_service" for "proj-2"
resource "google_compute_shared_vpc_service_project" for "proj-2"
I'm using Terraform version 0.11, which does not support for loops.
In Terraform, you can accomplish this using count and the two interpolation functions, element() and length().
First, you'll give your module an input variable:
variable "project_list" {
type = "list"
}
Then, you'll have something like:
resource "google_project" {
count = "${length(var.project_list)}"
name = "${element(var.project_list, count.index)}"
}
resource "google_project_service" {
count = "${length(var.project_list)}"
name = "${element(var.project_list, count.index)}"
}
resource "google_compute_shared_vpc_service_project" {
count = "${length(var.project_list)}"
name = "${element(var.project_list, count.index)}"
}
And of course you'll have your other configuration in those resource declarations as well.
Note that this pattern is described in Terraform Up and Running, Chapter 5, and there are other examples of using count.index in the docs here.
A small update to this question/answer (Terraform 0.13 and above): using count with length() is no longer advisable, due to the way Terraform tracks list elements. Imagine the next scenario:
Suppose you have an array with 3 elements, project_list = ["proj-1", "proj-2", "proj-3"], and you have applied it. If you then delete the "proj-2" item from your array and run a plan, Terraform will modify your second element to "proj-3" instead of removing it from the list (more info in this good post). The way to get the proper behavior is to use the for_each meta-argument as follows:
variable "project_list" {
type = list(string)
}
resource "google_project" {
for_each = toset(var.project_list)
name = each.value
}
resource "google_project_service" {
for_each = toset(var.project_list)
name = each.value
}
resource "google_compute_shared_vpc_service_project" {
for_each = toset(var.project_list)
name = each.value
}
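To get the per-project ordering the question asks about, you can also make the later resources reference the earlier ones, so Terraform infers the dependency chain. A sketch of how the last two resources above could be rewritten (host_project_id and the compute.googleapis.com service are illustrative assumptions):
resource "google_project_service" "service" {
  for_each = toset(var.project_list)
  # referencing google_project makes Terraform create each project first
  project  = google_project.project[each.key].project_id
  service  = "compute.googleapis.com" # illustrative
}
resource "google_compute_shared_vpc_service_project" "attach" {
  for_each     = toset(var.project_list)
  host_project = var.host_project_id # hypothetical variable
  # referencing the service resource chains the attachment after API enablement
  service_project = google_project_service.service[each.key].project
}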
Hope this helps! 👍

How to use dynamic resource names in Terraform?

I would like to use the same Terraform template for several dev and production environments.
My approach:
As I understand it, the resource name needs to be unique, and Terraform stores the state of the resource internally. I therefore tried to use variables in the resource names, but that does not seem to be supported. I get an error message:
$ terraform plan
var.env1
Enter a value: abc
Error asking for user input: Error parsing address 'aws_sqs_queue.SqsIntegrationOrderIn${var.env1}': invalid resource address "aws_sqs_queue.SqsIntegrationOrderIn${var.env1}"
My terraform template:
variable "env1" {}
provider "aws" {
region = "ap-southeast-2"
}
resource "aws_sqs_queue" "SqsIntegrationOrderIn${var.env1}" {
name = "Integration_Order_In__${var.env1}"
message_retention_seconds = 86400
receive_wait_time_seconds = 5
}
I think, either my approach is wrong, or the syntax. Any ideas?
You can't interpolate inside the resource name. Instead, as @BMW mentioned in the comments, you should make a Terraform module that contains the SqsIntegrationOrderIn resource and takes an env variable. Then you can use the module twice, and the two instances simply won't clash. You can also have a look at a similar question I answered.
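A sketch of that module pattern (the module path and the env variable name are illustrative assumptions):
module "integration_order_in_dev" {
  source = "./modules/sqs_queue" # hypothetical module containing the aws_sqs_queue resource
  env    = "dev"
}
module "integration_order_in_prod" {
  source = "./modules/sqs_queue"
  env    = "prod"
}
Inside the module, the queue's name attribute would include var.env, so the two instances create distinct queues without the resource labels ever clashing.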
I recommend using a different workspace for each environment. This allows you to specify your configuration like this:
variable "env1" {}
provider "aws" {
region = "ap-southeast-2"
}
resource "aws_sqs_queue" "SqsIntegrationOrderIn" {
name = "Integration_Order_In__${var.env1}"
message_retention_seconds = 86400
receive_wait_time_seconds = 5
}
Make sure the name of the aws_sqs_queue resource depends on the environment (e.g. by including it in the name, as above) to avoid name conflicts in AWS.
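If you go the workspace route, you can even drop the variable and derive the queue name from the built-in terraform.workspace value (a sketch; the queue settings are copied from the question):
# terraform workspace new dev   (creates and switches to the "dev" workspace)
resource "aws_sqs_queue" "SqsIntegrationOrderIn" {
  # terraform.workspace evaluates to the name of the current workspace
  name                      = "Integration_Order_In__${terraform.workspace}"
  message_retention_seconds = 86400
  receive_wait_time_seconds = 5
}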

How to create provider modules that support multiple AWS regions?

We are trying to create Terraform modules for the below activities in AWS, so that we can use them wherever required:
VPC creation
Subnets creation
Instance creation etc.
But while creating these modules we have to define the provider in all of the above modules. So we decided to create one more module for the provider, so that we can call that provider module in the other modules (VPC, Subnet, etc.).
The issue with this approach is that it does not take the provider value, and instead asks for user input for the region.
The Terraform configuration is as follows:
$HOME/modules/providers/main.tf
provider "aws" {
  region = "${var.region}"
}

$HOME/modules/providers/variables.tf
variable "region" {}

$HOME/modules/vpc/main.tf
module "provider" {
  source = "../../modules/providers"
  region = "${var.region}"
}
resource "aws_vpc" "vpc" {
  cidr_block = "${var.vpc_cidr}"
  tags = {
    "name" = "${var.environment}_McD_VPC"
  }
}

$HOME/modules/vpc/variables.tf
variable "vpc_cidr" {}
variable "environment" {}
variable "region" {}

$HOME/main.tf
module "dev_vpc" {
  source      = "modules/vpc"
  vpc_cidr    = "${var.vpc_cidr}"
  environment = "${var.environment}"
  region      = "${var.region}"
}

$HOME/variables.tf
variable "vpc_cidr" {
  default = "192.168.0.0/16"
}
variable "environment" {
  default = "dev"
}
variable "region" {
  default = "ap-south-1"
}
Then, when running the terraform plan command at the $HOME/ location, it does not take the provider value and instead asks for user input for the region.
I need help from the Terraform experts on what approach we should follow to address these concerns:
Wrap the provider in a Terraform module
Handle the multiple-region use case using a provider module, or in some other way
I knew a long time back that it wasn't possible to do this, because Terraform built a graph that required a provider for any resource before it included any dependencies, and it used not to be possible to force a dependency on a module.
However since Terraform 0.8 it is now possible to set a dependency on modules with the following syntax:
module "network" {
# ...
}
resource "aws_instance" "foo" {
# ...
depends_on = ["module.network"]
}
However, if I try that with your setup by changing modules/vpc/main.tf to look something like this:
module "aws_provider" {
source = "../../modules/providers"
region = "${var.region}"
}
resource "aws_vpc" "vpc" {
cidr_block = "${var.vpc_cidr}"
tags = {
"name" = "${var.environment}_McD_VPC"
}
depends_on = ["module.aws_provider"]
}
and run terraform graph | dot -Tpng > graph.png against it, it looks like the graph doesn't change at all from when the explicit dependency isn't there.
This seems like it might be a potential bug in the graph building stage in Terraform that should probably be raised as an issue but I don't know the core code base well enough to spot where the change needs to be.
For our usage, we use symlinks heavily in our Terraform code base; some of this is historic, from before Terraform supported other ways of doing things, but it could work for you here.
We simply define the provider in a single .tf file (such as environment.tf), along with any other generic config needed in every place you would ever run Terraform (i.e. not at a module level), and then symlink this into each location. That allows us to define the provider in a single place, with overridable variables if necessary.
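A sketch of that layout (the common/ directory and file names are illustrative assumptions):
# common/environment.tf - defined once, with an overridable region variable
provider "aws" {
  region = "${var.region}"
}
variable "region" {
  default = "ap-south-1"
}
# Then, from each directory where you run Terraform:
#   ln -s ../common/environment.tf environment.tf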
Step 1
Add region aliases to the main.tf file from which you are going to execute terraform plan:
provider "aws" {
region = "eu-west-1"
alias = "main"
}
provider "aws" {
region = "us-east-1"
alias = "useast1"
}
Step 2
Add a providers block inside your module definition block:
module "lambda_edge_rule" {
providers = {
aws = aws.useast1
}
source = "../../../terraform_modules/lambda"
tags = var.tags
}
Step 3
Define "aws" as providers inside your module. ( source = ../../../terraform_modules/lambda")
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 2.7.0"
    }
  }
}

resource "aws_lambda_function" "lambda" {
  function_name = "blablabla"
  # ...
}
Note: this is as of Terraform v1.0.5.
