Using "count" in aws_route53_record terrafrom resource - terraform

I'm starting to use (and learn) Terraform. For now, I need to create multiple DO droplets and attach them to an AWS Route53 zone. Here is what I'm trying to do:
My DO terraform file:
# Configure the DigitalOcean Provider
provider "digitalocean" {
token = var.do_token
}
# Create a new tag
resource "digitalocean_tag" "victor" {
name = "victor-fee1good22"
}
resource "digitalocean_droplet" "web" {
count = 2
image = var.do_config["image"]
name = "web-${count.index}"
region = var.do_config["region"]
size = var.do_config["size"]
ssh_keys = [var.public_ssh_key, var.pv_ssh_key]
tags = [digitalocean_tag.victor.name]
}
My route53 file:
provider "aws" {
version = "~> 2.0"
region = "us-east-1"
access_key = var.aws_a_key
secret_key = var.aws_s_key
}
data "aws_route53_zone" "selected" {
name = "devops.rebrain.srwx.net"
}
resource "aws_route53_record" "www" {
сount = length(digitalocean_droplet.web)
zone_id = data.aws_route53_zone.selected.zone_id
name = "web_${count.index}"
type = "A"
ttl = "300"
records = [digitalocean_droplet.web[count.index].ipv4_address]
}
But I always get the error The "count" object can be used only in "resource" and "data" blocks, and only
when the "count" argument is set. What did I do wrong?
Thanks!
UPDATE:

I've resolved this by setting count = 2 instead of count = length(digitalocean_droplet.web).
It works, but it would be better to have a dynamic value instead of a constant count. :)

You want to get the number of resources that have not been created yet; Terraform can't do that at plan time.
I think the simplest way is to use a common variable for the number of droplets.
resource "digitalocean_droplet" "test" {
count = var.number_of_vps
image = "ubuntu-18-04-x64"
name = "test-1"
region = data.digitalocean_regions.available.regions[0].slug
size = "s-1vcpu-1gb"
}
resource "aws_route53_record" "test" {
count = var.number_of_vps
zone_id = data.aws_route53_zone.primary.zone_id
name = "${local.login}-${count.index}.${data.aws_route53_zone.primary.name}"
type = "A"
ttl = "300"
records = [digitalocean_droplet.test[count.index].ipv4_address]
}
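For completeness, a minimal sketch of the shared variable both counts rely on; the name number_of_vps follows the snippet above, while the type and default shown here are just assumptions:
variable "number_of_vps" {
  description = "How many droplets (and matching DNS records) to create"
  type        = number
  default     = 2
}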

This trick helped - https://github.com/hashicorp/terraform/issues/12570#issuecomment-291517239
resource "aws_route53_record" "dns" {
count = "${length(var.ips) > 0 ? length(var.domains) : 0}"
// ...
}
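Applied to the original question, the same idea could look roughly like this, assuming the droplets' count is also driven by a plan-time variable such as var.number_of_vps (as above) rather than derived from the not-yet-created droplets:
resource "aws_route53_record" "www" {
  # The count is known at plan time, so Terraform never has to count
  # resources that don't exist yet.
  count   = var.number_of_vps > 0 ? var.number_of_vps : 0
  zone_id = data.aws_route53_zone.selected.zone_id
  name    = "web-${count.index}"
  type    = "A"
  ttl     = "300"
  records = [digitalocean_droplet.web[count.index].ipv4_address]
}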

Related

Retrieve IP address from instances using for_each

I have this script, which works great. It created 3 instances with the specified tags to identify them easily. But the issue is that I want to add a remote-exec provisioner (currently commented out) to the code to install some packages. If I were using count, I could have looped over it to run the remote-exec on all the instances. I could not use count because I had to use for_each to loop over a local list. Since count and for_each cannot be used together, how do I loop over the instances to retrieve their IP addresses for use in the remote-exec provisioner?
On DigitalOcean and AWS, I was able to get it to work using host = "${self.public_ip}",
but it does not work on Vultr and gives an Unsupported attribute error.
instance.tf
resource "vultr_ssh_key" "kubernetes" {
name = "kubernetes"
ssh_key = file("kubernetes.pub")
}
resource "vultr_instance" "kubernetes_instance" {
for_each = toset(local.expanded_names)
plan = "vc2-1c-2gb"
region = "sgp"
os_id = "387"
label = each.value
tag = each.value
hostname = each.value
enable_ipv6 = true
backups = "disabled"
ddos_protection = false
activation_email = false
ssh_key_ids = [vultr_ssh_key.kubernetes.id]
/* connection {
type = "ssh"
user = "root"
private_key = file("kubernetes")
timeout = "2m"
host = vultr_instance.kubernetes_instance[each.key].ipv4_address
}
provisioner "remote-exec" {
inline = "sudo hostnamectl set-hostname ${each.value}"
} */
}
locals {
expanded_names = flatten([
for name, count in var.host_name : [
for i in range(count) : format("%s-%02d", name, i + 1)
]
])
}
provider.tf
terraform {
required_providers {
vultr = {
source = "vultr/vultr"
version = "2.3.1"
}
}
}
provider "vultr" {
api_key = "***************************"
rate_limit = 700
retry_limit = 3
}
variables.tf
variable "host_name" {
type = map(number)
default = {
"Manager" = 1
"Worker" = 2
}
}
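With that default map, local.expanded_names evaluates to ["Manager-01", "Worker-01", "Worker-02"], so the for_each in vultr_instance.kubernetes_instance creates one instance per generated name.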
The property you are looking for is called main_ip, not ipv4_address or something like that. Specifically, it is accessible via self.main_ip in your connection block.
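A minimal sketch of how the commented-out block could look with that change (all other arguments as in the question; note that inline expects a list of strings):
resource "vultr_instance" "kubernetes_instance" {
  for_each         = toset(local.expanded_names)
  plan             = "vc2-1c-2gb"
  region           = "sgp"
  os_id            = "387"
  label            = each.value
  tag              = each.value
  hostname         = each.value
  enable_ipv6      = true
  backups          = "disabled"
  ddos_protection  = false
  activation_email = false
  ssh_key_ids      = [vultr_ssh_key.kubernetes.id]

  connection {
    type        = "ssh"
    user        = "root"
    private_key = file("kubernetes")
    timeout     = "2m"
    host        = self.main_ip # the attribute the vultr provider exposes for the primary IPv4 address
  }

  provisioner "remote-exec" {
    inline = ["sudo hostnamectl set-hostname ${each.value}"]
  }
}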

How to dynamically attach multiple volumes to multiple instances via terraform in openstack?

I have written a Terraform module (v0.14) that can be used to provision multiple instances in OpenStack with the correct availability_zone, network port, and flavor, and that can create 1 local disk or 1 blockstorage_volume based on boolean input variables, and so forth. Now I have received a feature request to dynamically add multiple blockstorage_volumes (network/shared storage volumes) to multiple instances.
Example of how everything dynamically works for 1 blockstorage_volume on multiple instances:
#Dynamically create this resource ONLY if boolean "volume_storage" = true
resource "openstack_blockstorage_volume_v3" "shared_storage" {
for_each = var.volume_storage ? var.nodes : {}
name = "${each.value}-${each.key}.${var.domain}"
size = var.volume_s
availability_zone = each.key
volume_type = var.volume_type
image_id = var.os_image
}
resource "openstack_compute_instance_v2" "instance" {
for_each = var.nodes
name = "${each.value}-${each.key}.${var.domain}"
flavor_name = var.flavor
availability_zone = each.key
key_pair = var.key_pair
image_id = (var.volume_storage == true ? "" : var.os_image)
config_drive = true
#Dynamically create this parameter ONLY if boolean "volume_storage" = true
dynamic "block_device" {
for_each = var.volume_storage ? var.volume : {}
content {
uuid = openstack_blockstorage_volume_v3.shared_storage[each.key].id
source_type = "volume"
destination_type = "volume"
delete_on_termination = var.volume_delete_termination
}
}
user_data = data.template_file.cloud-init[each.key].rendered
scheduler_hints {
group = openstack_compute_servergroup_v2.servergroup.id
}
network {
port = openstack_networking_port_v2.network-port[each.key].id
}
}
So let's say I now have 2 instances and I want to dynamically add 2 extra blockstorage_volumes to each of them. My first idea was to add 2 extra dynamic resources as a try-out:
#Dynamically create this resource if boolean "multiple_volume_storage" = true
resource "openstack_blockstorage_volume_v3" "multiple_shared_storage" {
for_each = var.multiple_volume_storage ? var.multiple_volumes : {}
name = each.value
size = var.volume_s
availability_zone = each.key
volume_type = var.volume_type
image_id = var.os_image
}
Example of 2 extra blockstorage_volumes defined in a .tf file:
variable "multiple_volumes" {
type = map(any)
default = {
dc1 = "/volume/mysql"
dc2 = "/volume/postgres"
}
}
Example of 2 instances defined in a .tf file:
nodes = {
dc1 = "app-stage"
dc2 = "app-stage"
}
Here I try to dynamically attach 2 extra blockstorage_volumes to each instance:
resource "openstack_compute_volume_attach_v2" "attach_multiple_shared_storage" {
for_each = var.multiple_volume_storage ? var.multiple_volumes : {}
instance_id = openstack_compute_instance_v2.instance[each.key].id
volume_id = openstack_blockstorage_volume_v3.multiple_shared_storage[each.key].id
}
The openstack_compute_instance_v2.instance[each.key] reference is obviously not correct, since it now only creates 1 extra blockstorage_volume per instance. Is there a clean/elegant way to solve this? Basically, I want to attach all the volumes given in the variable "multiple_volumes" to every single instance defined in var.nodes.
Kind regards,
Jonas
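One possible approach, sketched only (the local name node_volume_pairs is made up for illustration): build every node/volume combination with setproduct and use it as the for_each key space for both the volumes and the attachments.
locals {
  # Every combination of a node (dc1, dc2, ...) and an extra volume.
  node_volume_pairs = var.multiple_volume_storage ? {
    for pair in setproduct(keys(var.nodes), keys(var.multiple_volumes)) :
    "${pair[0]}-${pair[1]}" => {
      node   = pair[0]
      volume = pair[1]
    }
  } : {}
}
resource "openstack_blockstorage_volume_v3" "multiple_shared_storage" {
  for_each          = local.node_volume_pairs
  name              = "${var.nodes[each.value.node]}-${each.key}.${var.domain}" # naming is illustrative only
  size              = var.volume_s
  availability_zone = each.value.node # same AZ as the instance it attaches to
  volume_type       = var.volume_type
  # image_id omitted here on the assumption that extra data volumes are created blank
}
resource "openstack_compute_volume_attach_v2" "attach_multiple_shared_storage" {
  for_each    = local.node_volume_pairs
  instance_id = openstack_compute_instance_v2.instance[each.value.node].id
  volume_id   = openstack_blockstorage_volume_v3.multiple_shared_storage[each.key].id
}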

Default DNS records in every zone managed via Terraform (e.g. MX records)

I'm looking for a way to manage Cloudflare zones and records with Terraform and to create some default records (e.g. MX) in every zone that is managed via Terraform, something like this:
resource "cloudflare_zone" "example_net" {
type = "full"
zone = "example.net"
}
resource "cloudflare_zone" "example_com" {
type = "full"
zone = "example.com"
}
resource "cloudflare_record" "mxrecord"{
for_each=cloudflare_zone.*
name = "${each.value.zone}"
priority = "1"
proxied = "false"
ttl = "1"
type = "MX"
value = "mail.foo.bar"
zone_id = each.value.id
}
Does anyone have a clue for me how to achieve this (and if this is even possible...)?
Thanks a lot!
You could create a module responsible for the zone resource, e.g.:
# modules/cf_zone/main.tf
resource "cloudflare_zone" "cf_zone" {
type = "full"
zone = var.zone_name
}
resource "cloudflare_record" "mxrecord"{
name = "${cloudflare_zone.cf_zone.name}"
priority = "1"
proxied = "false"
ttl = "1"
type = "MX"
value = "mail.foo.bar"
zone_id = "${cloudflare_zone.cf_zone.id}"
}
# main.tf
module "example_net" {
source = "./modules/cf_zone"
zone_name = "example_net"
}
module "example_com" {
source = "./modules/cf_zone"
zone_name = "example_com"
}
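The module also needs a declaration for the zone_name variable it references, along these lines:
# modules/cf_zone/variables.tf
variable "zone_name" {
  description = "The zone (domain) name to manage in Cloudflare"
}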
This gives you an advantage when creating default resources and settings per zone (DNS entries, security settings, page rules, etc.). It is also a good way to keep all the default values in a single place for review.
You can read more about Terraform modules here.
This is easy to do if you use a module, as was correctly noted in the other answer, but you don't have to create one yourself; you can use this module.
Then your configuration will look like this:
terraform {
required_providers {
cloudflare = {
source = "cloudflare/cloudflare"
}
}
}
variable "cloudflare_api_token" {
type = string
sensitive = true
description = "The Cloudflare API token."
}
provider "cloudflare" {
api_token = var.cloudflare_api_token
}
locals {
domains = [
"example.com",
"example.net"
]
mx = "mail.foo.bar"
}
module "domains" {
source = "registry.terraform.io/alex-feel/zone/cloudflare"
version = "1.8.0"
for_each = toset(local.domains)
zone = each.value
records = [
{
record_name = "mx_1"
type = "MX"
value = local.mx
priority = 1
}
]
}
You can find an example of using this module that matches your question here.
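When applying, the token can be supplied without hard-coding it, for example through the TF_VAR_cloudflare_api_token environment variable or a -var argument to terraform plan/apply.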

How do I make Terraform skip that block while creating multiple resources in a loop from a CSV file?

Hi, I am trying to create a Terraform script that takes input from the user in the form of a CSV file and creates multiple Azure resources.
For example, if the user wants to create ResourceGroup > Vnet > Subnet in bulk, they will provide input in CSV format as below:
resourcegroup,RG_location,RG_tag,domainname,DNS_Zone_tag,virtualnetwork,VNET_location,addressspace
csvrg1,eastus2,Terraform RG,test.sd,Terraform RG,csvvnet1,eastus2,10.0.0.0/16,Terraform VNET,subnet1,10.0.0.0/24
csvrg2,westus,Terraform RG2,test2.sd,Terraform RG2,csvvnet2,westus,172.0.0.0/8,Terraform VNET2,subnet1,171.0.0.0/24
I have written the following working main.tf file:
# Configure the Microsoft Azure Provider
provider "azurerm" {
version = "=1.43.0"
subscription_id = var.subscription
tenant_id = var.tenant
client_id = var.client
client_secret = var.secret
}
#Decoding the csv file
locals {
vmcsv = csvdecode(file("${path.module}/computelanding.csv"))
}
# Create a resource group if it doesn’t exist
resource "azurerm_resource_group" "myterraformgroup" {
count = length(local.vmcsv)
name = local.vmcsv[count.index].resourcegroup
location = local.vmcsv[count.index].RG_location
tags = {
environment = local.vmcsv[count.index].RG_tag
}
}
# Create a DNS Zone
resource "azurerm_dns_zone" "dnsp-private" {
count = 1
name = local.vmcsv[count.index].domainname
resource_group_name = local.vmcsv[count.index].resourcegroup
depends_on = [azurerm_resource_group.myterraformgroup]
tags = {
environment = local.vmcsv[count.index].DNS_Zone_tag
}
}
To be continued....
The issue I am facing: in the second resource group, the user doesn't want a particular resource type. Suppose the user wants to skip the DNS zone in the resource group csvrg2. How do I make Terraform skip that block?
Edit: What I am trying to achieve is: based on some condition in the CSV file, do not create the azurerm_dns_zone resource for the resource group csvrg2.
I have provided an example below of how the CSV file might look; a sketch of one possible way to use the extra column follows it:
resourcegroup,RG_location,RG_tag,DNS_required,domainname,DNS_Zone_tag,virtualnetwork,VNET_location,addressspace
csvrg1,eastus2,Terraform RG,1,test.sd,Terraform RG,csvvnet1,eastus2,10.0.0.0/16,Terraform VNET,subnet1,10.0.0.0/24
csvrg2,westus,Terraform RG2,0,test2.sd,Terraform RG2,csvvnet2,westus,172.0.0.0/8,Terraform VNET2,subnet1,171.0.0.0/24
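For illustration, a sketch of how such a DNS_required column could drive the resource, building on the locals block from main.tf above (the local name dns_rows is made up; csvdecode returns strings, hence the comparison against "1"):
locals {
  # Only the rows that ask for a DNS zone.
  dns_rows = [for row in local.vmcsv : row if row.DNS_required == "1"]
}
resource "azurerm_dns_zone" "dnsp-private" {
  count               = length(local.dns_rows)
  name                = local.dns_rows[count.index].domainname
  resource_group_name = local.dns_rows[count.index].resourcegroup
  depends_on          = [azurerm_resource_group.myterraformgroup]
  tags = {
    environment = local.dns_rows[count.index].DNS_Zone_tag
  }
}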
You already had the right thought in mind with depends_on. However, you're using count inside, which, from my understanding, means that once the first resource[0] is created, Terraform sees the dependency as resolved and proceeds.
I found this post with a workaround which you might be able to try:
https://github.com/hashicorp/terraform/issues/15285#issuecomment-447971852
That basically tells us to create a null_resource like in that example:
variable "instance_count" {
default = 0
}
resource "null_resource" "a" {
count = var.instance_count
}
resource "null_resource" "b" {
depends_on = [null_resource.a]
}
In your example, it might look like this:
# Create a resource group if it doesn’t exist
resource "azurerm_resource_group" "myterraformgroup" {
count = length(local.vmcsv)
name = local.vmcsv[count.index].resourcegroup
location = local.vmcsv[count.index].RG_location
tags = {
environment = local.vmcsv[count.index].RG_tag
}
}
# Create a DNS Zone
resource "azurerm_dns_zone" "dnsp-private" {
count = 1
name = local.vmcsv[count.index].domainname
resource_group_name = local.vmcsv[count.index].resourcegroup
depends_on = [null_resource.example]
tags = {
environment = local.vmcsv[count.index].DNS_Zone_tag
}
}
resource "null_resource" "example" {
...
depends_on = [azurerm_resource_group.myterraformgroup[length(local.vmcsv) - 1]]
}
or depending on your Terraform version (0.12+ which you're using guessing your syntax)
# Create a resource group if it doesn’t exist
resource "azurerm_resource_group" "myterraformgroup" {
count = length(local.vmcsv)
name = local.vmcsv[count.index].resourcegroup
location = local.vmcsv[count.index].RG_location
tags = {
environment = local.vmcsv[count.index].RG_tag
}
}
# Create a DNS Zone
resource "azurerm_dns_zone" "dnsp-private" {
count = 1
name = local.vmcsv[count.index].domainname
resource_group_name = local.vmcsv[count.index].resourcegroup
depends_on = [azurerm_resource_group.myterraformgroup[length(local.vmcsv) - 1]]
tags = {
environment = local.vmcsv[count.index].DNS_Zone_tag
}
}
I hope that helps.
Greetings

Terraform resource recreation dynamic AWS RDS instance counts

I have a question relating to AWS RDS cluster and instance creation.
Environment
We recently experimented with:
Terraform v0.11.11
provider.aws v1.41.0
Background
We are creating some AWS RDS databases. Our goal was that in some environments (e.g. staging) we may run fewer instances than in others (e.g. production). With this in mind, and not wanting to have totally different Terraform files per environment, we decided to specify the database resources just once and use a variable for the number of instances, which is set in our staging.tf and production.tf files respectively.
Potentially one more "quirk" of our setup is that the VPC in which the subnets exist is not defined in Terraform; the VPC already existed via manual creation in the AWS console, so it is provided as a data source, and the subnets for the RDS are specified in Terraform. But again, this is dynamic in the sense that in some environments we might have 3 subnets (1 in each AZ), whereas in others perhaps we have only 2. To achieve this we used iteration as shown below:
Structure
|- /environments
   |- /staging
      - staging.tf
   |- /production
      - production.tf
|- /resources
   - database.tf
Example Environment Variables File
dev.tf
terraform {
backend "s3" {
bucket = "my-bucket-dev"
key = "terraform"
region = "eu-west-1"
encrypt = "true"
acl = "private"
dynamodb_table = "terraform-state-locking"
}
version = "~> 0.11.8"
}
provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
region = "${var.region}"
version = "~> 1.33"
allowed_account_ids = ["XXX"]
}
module "main" {
source = "../../resources"
vpc_name = "test"
test_db_name = "terraform-test-db-dev"
test_db_instance_count = 1
test_db_backup_retention_period = 7
test_db_backup_window = "00:57-01:27"
test_db_maintenance_window = "tue:04:40-tue:05:10"
test_db_subnet_count = 2
test_db_subnet_cidr_blocks = ["10.2.4.0/24", "10.2.5.0/24"]
}
We came to this module-based structure for environment isolation mainly due to these discussions:
https://github.com/hashicorp/terraform/issues/18632#issuecomment-412247266
https://github.com/hashicorp/terraform/issues/13700
https://www.terraform.io/docs/state/workspaces.html#when-to-use-multiple-workspaces
Our Issue
Initial resource creation works fine: our subnets are created and the database cluster starts up.
Our issues start when we subsequently run terraform plan or terraform apply (with no changes to the files), at which point we see interesting things like:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
Terraform will perform the following actions:
module.main.aws_rds_cluster.test_db (new resource required)
id: "terraform-test-db-dev" => (forces new resource)
availability_zones.#: "3" => "1" (forces new resource)
availability_zones.1924028850: "eu-west-1b" => "" (forces new resource)
availability_zones.3953592328: "eu-west-1a" => "eu-west-1a"
availability_zones.94988580: "eu-west-1c" => "" (forces new resource)
and
module.main.aws_rds_cluster_instance.test_db (new resource required)
id: "terraform-test-db-dev" => (forces new resource)
cluster_identifier: "terraform-test-db-dev" => "${aws_rds_cluster.test_db.id}" (forces new resource)
Something about the way we are approaching this appears to be causing terraform to believe that the resource has changed to such an extent that it must destroy the existing resource and create a brand new one.
Config
variable "aws_availability_zones" {
description = "Run the EC2 Instances in these Availability Zones"
type = "list"
default = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
}
variable "test_db_name" {
description = "Name of the RDS instance, must be unique per region and is provided by the module config"
}
variable "test_db_subnet_count" {
description = "Number of subnets to create, is provided by the module config"
}
resource "aws_security_group" "test_db_service" {
name = "${var.test_db_service_user_name}"
vpc_id = "${data.aws_vpc.vpc.id}"
}
resource "aws_security_group" "test_db" {
name = "${var.test_db_name}"
vpc_id = "${data.aws_vpc.vpc.id}"
}
resource "aws_security_group_rule" "test_db_ingress_app_server" {
security_group_id = "${aws_security_group.test_db.id}"
...
source_security_group_id = "${aws_security_group.test_db_service.id}"
}
variable "test_db_subnet_cidr_blocks" {
description = "Cidr block allocated to the subnets"
type = "list"
}
resource "aws_subnet" "test_db" {
count = "${var.test_db_subnet_count}"
vpc_id = "${data.aws_vpc.vpc.id}"
cidr_block = "${element(var.test_db_subnet_cidr_blocks, count.index)}"
availability_zone = "${element(var.aws_availability_zones, count.index)}"
}
resource "aws_db_subnet_group" "test_db" {
name = "${var.test_db_name}"
subnet_ids = ["${aws_subnet.test_db.*.id}"]
}
variable "test_db_backup_retention_period" {
description = "Number of days to keep the backup, is provided by the module config"
}
variable "test_db_backup_window" {
description = "Window during which the backup is done, is provided by the module config"
}
variable "test_db_maintenance_window" {
description = "Window during which the maintenance is done, is provided by the module config"
}
data "aws_secretsmanager_secret" "test_db_master_password" {
name = "terraform/db/test-db/root-password"
}
data "aws_secretsmanager_secret_version" "test_db_master_password" {
secret_id = "${data.aws_secretsmanager_secret.test_db_master_password.id}"
}
data "aws_iam_role" "rds-monitoring-role" {
name = "rds-monitoring-role"
}
resource "aws_rds_cluster" "test_db" {
cluster_identifier = "${var.test_db_name}"
engine = "aurora-mysql"
engine_version = "5.7.12"
# can only request to deploy in AZs where there is a subnet in the subnet group.
availability_zones = "${slice(var.aws_availability_zones, 0, var.test_db_instance_count)}"
database_name = "${var.test_db_schema_name}"
master_username = "root"
master_password = "${data.aws_secretsmanager_secret_version.test_db_master_password.secret_string}"
preferred_backup_window = "${var.test_db_backup_window}"
preferred_maintenance_window = "${var.test_db_maintenance_window}"
backup_retention_period = "${var.test_db_backup_retention_period}"
db_subnet_group_name = "${aws_db_subnet_group.test_db.name}"
storage_encrypted = true
kms_key_id = "${data.aws_kms_key.kms_rds_key.arn}"
deletion_protection = true
enabled_cloudwatch_logs_exports = ["audit", "error", "general", "slowquery"]
vpc_security_group_ids = ["${aws_security_group.test_db.id}"]
final_snapshot_identifier = "test-db-final-snapshot"
}
variable "test_db_instance_count" {
description = "Number of instances to create, is provided by the module config"
}
resource "aws_rds_cluster_instance" "test_db" {
count = "${var.test_db_instance_count}"
identifier = "${var.test_db_name}"
cluster_identifier = "${aws_rds_cluster.test_db.id}"
availability_zone = "${element(var.aws_availability_zones, count.index)}"
instance_class = "db.t2.small"
db_subnet_group_name = "${aws_db_subnet_group.test_db.name}"
monitoring_interval = 60
engine = "aurora-mysql"
engine_version = "5.7.12"
monitoring_role_arn = "${data.aws_iam_role.rds-monitoring-role.arn}"
tags {
Name = "test_db-${count.index}"
}
}
My question is: is there a way to achieve this so that Terraform does not try to recreate the resource (e.g. ensure that the availability zones of the cluster and the ID of the instance do not change each time we run Terraform)?
It turns out that simply removing the explicit availability zone definitions from the aws_rds_cluster and aws_rds_cluster_instance makes this issue go away, and everything so far appears to work as expected. See also https://github.com/terraform-providers/terraform-provider-aws/issues/7307#issuecomment-457441633
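For reference, a sketch of the two resources from the question with those lines removed (everything else unchanged):
resource "aws_rds_cluster" "test_db" {
  cluster_identifier              = "${var.test_db_name}"
  engine                          = "aurora-mysql"
  engine_version                  = "5.7.12"
  # availability_zones removed, per the fix described above
  database_name                   = "${var.test_db_schema_name}"
  master_username                 = "root"
  master_password                 = "${data.aws_secretsmanager_secret_version.test_db_master_password.secret_string}"
  preferred_backup_window         = "${var.test_db_backup_window}"
  preferred_maintenance_window    = "${var.test_db_maintenance_window}"
  backup_retention_period         = "${var.test_db_backup_retention_period}"
  db_subnet_group_name            = "${aws_db_subnet_group.test_db.name}"
  storage_encrypted               = true
  kms_key_id                      = "${data.aws_kms_key.kms_rds_key.arn}"
  deletion_protection             = true
  enabled_cloudwatch_logs_exports = ["audit", "error", "general", "slowquery"]
  vpc_security_group_ids          = ["${aws_security_group.test_db.id}"]
  final_snapshot_identifier       = "test-db-final-snapshot"
}
resource "aws_rds_cluster_instance" "test_db" {
  count                = "${var.test_db_instance_count}"
  identifier           = "${var.test_db_name}"
  cluster_identifier   = "${aws_rds_cluster.test_db.id}"
  # availability_zone removed, per the fix described above
  instance_class       = "db.t2.small"
  db_subnet_group_name = "${aws_db_subnet_group.test_db.name}"
  monitoring_interval  = 60
  engine               = "aurora-mysql"
  engine_version       = "5.7.12"
  monitoring_role_arn  = "${data.aws_iam_role.rds-monitoring-role.arn}"
  tags {
    Name = "test_db-${count.index}"
  }
}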
