How to create multiple resources in a for loop with Terraform?

I have looked at several pieces of documentation as well as a Udemy course on Terraform, and I still do not understand how to do what I want. I want to write a for loop that creates an S3 event notification, creates an SNS topic that listens to that notification, creates an SQS queue, and then subscribes the queue to the SNS topic. It seems like for loops in Terraform are not advanced enough to do this. Am I wrong? Is there any documentation, or are there examples, that explain how to use for loops for this use case?
Thanks in advance.

Here is an example that creates AWS VPC subnets and then assigns them to AWS EC2 instances.
resource "aws_subnet" "public" {
  count      = length(var.public_subnet_cidr_blocks)
  vpc_id     = var.vpc_id
  cidr_block = var.public_subnet_cidr_blocks[count.index]
}

resource "aws_instance" "public_ec2" {
  count         = length(var.public_subnet_ids)
  subnet_id     = var.public_subnet_ids[count.index]
  ami           = var.ami_id
  instance_type = "t2.micro"

  tags = {
    Name = "PublicEC2${count.index}"
  }

  provisioner "local-exec" {
    command = <<EOF
echo "Public EC2 ${count.index} ID is ${self.id}"
EOF
  }
}
There is no syntax like the following to create resources:

[ for name in var.names:
  aws_s3_bucket {...}
  aws_sns_topic {...}
]

A for expression produces values; it does not create resources.
for Expressions
A for expression creates a complex type value by transforming another complex type value.
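For example, a for expression can transform a list of names into a new list or map, but the result is only data, never a resource declaration:

```hcl
# A for expression transforms values; it cannot declare resources.
locals {
  upper_names = [for name in var.names : upper(name)]           # a list
  name_map    = { for name in var.names : name => upper(name) } # a map
}
```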
To create multiple resources, use for_each or count, as described in the documentation:
for_each: Multiple Resource Instances Defined By a Map, or Set of Strings
By default, a resource block configures one real infrastructure object. However, sometimes you want to manage several similar objects, such as a fixed pool of compute instances. Terraform has two ways to do this: count and for_each.

The for_each example looks like the following:
variable "names" {
  type        = list(string)
  description = "List of names"
}

resource "aws_foo" "bar" {
  for_each = toset(var.names)
  name     = each.key
}
This will create N resources (one per element of names), using each.key to specify the value assigned to name each time.
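Applied to the original question, the same pattern chains several resources off one for_each key set by indexing each resource with the same key. A rough sketch, assuming the S3 buckets already exist, and omitting the SNS topic policy that S3 needs in order to publish (the bucket names here are placeholders):

```hcl
variable "buckets" {
  type    = set(string)
  default = ["bucket-a", "bucket-b"]
}

resource "aws_sns_topic" "events" {
  for_each = var.buckets
  name     = "${each.key}-events"
}

resource "aws_sqs_queue" "events" {
  for_each = var.buckets
  name     = "${each.key}-events"
}

# Subscribe each queue to the matching topic by indexing with the same key.
resource "aws_sns_topic_subscription" "events" {
  for_each  = var.buckets
  topic_arn = aws_sns_topic.events[each.key].arn
  protocol  = "sqs"
  endpoint  = aws_sqs_queue.events[each.key].arn
}

resource "aws_s3_bucket_notification" "events" {
  for_each = var.buckets
  bucket   = each.key

  topic {
    topic_arn = aws_sns_topic.events[each.key].arn
    events    = ["s3:ObjectCreated:*"]
  }
}
```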

Related

How to select the value of a specific resource in tuple based on a property value?

Hello, I'm struggling a little to solve this challenge and hope someone can provide insights. I need to pick a specific resource out of a tuple of those resources based on a property of that resource. How would one go about achieving something like that?
Code looks something like:
resource "aws_network_interface" "network_interface" {
  for_each = var.counter_of_2
  // stuff
}

resource "aws_network_interface_attachment" "currently_used_eni" {
  instance_id          = var.instance_id
  network_interface_id = <the aws_network_interface with tag.Name = "thisone">
  device_index         = 0
}
Because a tag is not the ID of a resource, that lookup is impossible as written. Imagine you had two network interfaces with the same tag: which one should Terraform pick?
But your case might not be lost (or it might be; it depends on the case).
I assume that if you want to find a specific aws_network_interface, the tags are unique, and if they are unique then every network interface has a different tag. So you can simply use something like this:
aws_network_interface.network_interface["value_from_for_each"]
For simplicity (and assuming that one network interface = one tag) I'll make the for_each collection a set of tag names.
variable "interface_tags" {
  type    = set(string)
  default = ["tag1", "tag2", "thisone"]
}

resource "aws_network_interface" "network_interface" {
  for_each = var.interface_tags
  // stuff
}

resource "aws_network_interface_attachment" "currently_used_eni" {
  instance_id          = var.instance_id
  network_interface_id = aws_network_interface.network_interface["thisone"].id
  device_index         = 0
}
And this should work (and I hope it can be used in your case).

Terraform Import Resources and Looping Over Those Resources

I am new to Terraform and looking to use it to manage a Snowflake environment with the chanzuckerberg/snowflake provider. I am specifically looking to leverage it for managing an RBAC model for roles within Snowflake.
The scenario is that I have about 60 databases in Snowflake which would equate to a resource for each in Terraform. We will then create 3 roles (reader, writer, all privileges) for each database. We will expand our roles from there.
The first question is: can I leverage map or object variables to define all database names and their attributes and import them using a for_each within a single resource, or do I need to create a resource for each database and then import them individually?
The second question is, what would be the best approach for creating the 3 roles per database? Is there a way to iterate over all the resources of type snowflake_database and create the 3 roles? I was imagining the use of modules, variables, and resources based on the research I have done.
Any help in understanding how this can be accomplished would be super helpful. I understand the basics of Terraform but this is a bit of a complex situation for a newbie like myself to visualize enough to implement it. Thanks all!
Update:
This is what my project looks like and the error I am receiving is below it.
variables.tf:
variable "databases" {
  type = list(object(
    {
      name           = string
      comment        = string
      retention_days = number
    }))
}
databases.auto.tfvars:
databases = [
  {
    name           = "TEST_DB1"
    comment        = "Testing state."
    retention_days = 90
  },
  {
    name           = "TEST_DB2"
    comment        = ""
    retention_days = 1
  }
]
main.tf:
terraform {
  required_providers {
    snowflake = {
      source  = "chanzuckerberg/snowflake"
      version = "0.25.25"
    }
  }
}

provider "snowflake" {
  username = "user"
  account  = "my_account"
  region   = "my_region"
  password = "pwd"
  role     = "some_role"
}

resource "snowflake_database" "sf_database" {
  for_each                    = { for idx, db in var.databases : idx => db }
  name                        = each.value.name
  comment                     = each.value.comment
  data_retention_time_in_days = each.value.retention_days
}
To import the resource I run:

terraform import snowflake_database.sf_databases["TEST_DB1"] db_test_db1
I am left with this error:
Error: resource address "snowflake_database.sf_databases["TEST_DB1"]" does not exist in the configuration.

Before importing this resource, please create its configuration in the root module. For example:

resource "snowflake_database" "sf_databases" {
  # (resource arguments)
}
You should be able to define the databases using for_each and reference the actual resources with brackets in the import command. Something like:
terraform import snowflake_database.resource_id_using_for_each[foreachkey]
You could then create three snowflake_role and three snowflake_database_grant definitions using for_each over the same map of databases used for the database definitions.
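A sketch of that second part, deriving one map of database/role pairs and driving both the roles and the grants from it. The attribute names (snowflake_role.name, snowflake_database_grant.database_name/privilege/roles) are from the chanzuckerberg/snowflake provider of that era, so verify them against the provider docs for your version:

```hcl
locals {
  # One entry per (database, role) pair, e.g. "TEST_DB1_READER".
  db_roles = {
    for pair in setproduct([for db in var.databases : db.name], ["READER", "WRITER", "ALL"]) :
    "${pair[0]}_${pair[1]}" => { database = pair[0], role_type = pair[1] }
  }
}

resource "snowflake_role" "db_role" {
  for_each = local.db_roles
  name     = each.key
}

resource "snowflake_database_grant" "db_grant" {
  for_each      = local.db_roles
  database_name = each.value.database
  privilege     = "USAGE"
  roles         = [snowflake_role.db_role[each.key].name]
}
```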
I had this exact same problem, and in the end the solution was quite simple: you just need to wrap the resource address in single quotes.
So instead of

terraform import snowflake_database.sf_databases["TEST_DB1"] db_test_db1

do

terraform import 'snowflake_database.sf_databases["TEST_DB1"]' db_test_db1

This took way too long to figure out!

Reference a field within same module

Let's say I have a resource block in a module like this:
resource "aws_instance" "server" {
  ami           = "ami-a1b2c3d4"
  instance_type = "t2.micro"
  subnet_id     = var.subnet_ids

  tags = {
    Name = format("ami:%s", ami) # << How to do this?
  }
}
I want to use a field such as ami in this example as the value of another field. Is there a way I can do this?
The above is just an example; in reality I am working with a custom module where one value gets used multiple times, so I find it inefficient to write/change the same thing in multiple places. I also want to avoid creating a separate external variable.
Is there a way to achieve the above with some sort of internal referencing of fields within the same module?
TIA!
Use variables to reuse values in the same module. If you want to access other modules' resources, use data sources.
Example: start with using the module:

module "dev-server" {
  source        = "./modules/server"
  ami           = var.dev_ami_name
  instance_type = var.instance_type
}

Also add a module variables file declaring the variables you're passing. Then, inside your module:

resource "aws_instance" "server" {
  ami           = var.ami
  instance_type = var.instance_type

  tags = {
    Name = format("ami:%s", var.ami)
  }
}
A general answer to this is to factor out the common value into a separate named value that you can refer to elsewhere.
54m's answer shows one way to do that with input variables. That's a good choice if it's a value that should be chosen by the user of your module. If it's instead something that would make sense to have "hard-coded", but still used in multiple locations, then you can use a local value:
locals {
  ami_id = "ami-a1b2c3d4"
}

resource "aws_instance" "server" {
  ami           = local.ami_id
  instance_type = "t2.micro"
  subnet_id     = var.subnet_ids

  tags = {
    Name = format("ami:%s", local.ami_id)
  }
}
A local value is private to the module that defined it, so this doesn't change the public interface of the module at all. In particular, it would not be possible for the user of this module to change the AMI ID, just like before.
If you have experience with general-purpose programming languages then it might be helpful to think of input variables as being similar to function parameters, while local values are somewhat like local variables inside a function. In this analogy, the module itself corresponds with the function.

Terraform & OpenStack - Zero downtime flavor change

I’m using openstack_compute_instance_v2 to create instances in OpenStack. There is a lifecycle setting create_before_destroy = true present, and it works just fine when I, for example, change the volume size, where instances need to be replaced.
But when I do a flavor change, which can be done using OpenStack's resize instance option, Terraform does just that and doesn’t care about HA at all. All instances in the cluster are unavailable for 20-30 seconds before the resize finishes.
How can I change this behaviour?
Some setting like serial from Ansible, or some other options would come in handy. But I can’t find anything.
Just any solution that would allow me to say “at least half of the instances needs to be online at all times”.
Terraform version: 12.20.
TF plan: https://pastebin.com/ECfWYYX3
The Openstack Terraform provider knows that it can update the flavor by using a resize API call instead of having to destroy the instance and recreate it.
Unfortunately there's not currently a lifecycle option that forces mutable things to do a destroy/create or create/destroy when coupled with the create_before_destroy lifecycle customisation so you can't easily force this to replace the instance instead.
One option in these circumstances is to find a parameter that can't be modified in place (these are noted by the ForceNew flag on the schema in the underlying provider source code for the resource) and then have a change in the mutable parameter also cascade a change to the immutable parameter.
A common example here would be replacing an AWS autoscaling group when the launch template (which is mutable compared to the immutable launch configurations) changes so you can immediately roll out the changes instead of waiting for the ASG to slowly replace the instances over time. A simple example would look something like this:
variable "ami_id" {
  default = "ami-123456"
}

resource "random_pet" "ami_random_name" {
  keepers = {
    # Generate a new pet name each time we switch to a new AMI id
    ami_id = var.ami_id
  }
}

resource "aws_launch_template" "example" {
  name_prefix            = "example-"
  image_id               = var.ami_id
  instance_type          = "t2.small"
  vpc_security_group_ids = ["sg-123456"]
}

resource "aws_autoscaling_group" "example" {
  name                = "${aws_launch_template.example.name}-${random_pet.ami_random_name.id}"
  vpc_zone_identifier = ["subnet-123456"]
  min_size            = 1
  max_size            = 3

  launch_template {
    id      = aws_launch_template.example.id
    version = "$Latest"
  }

  lifecycle {
    create_before_destroy = true
  }
}
In the above example, a change to the AMI triggers a new random pet name, which changes the ASG name, which is an immutable field, so this triggers replacing the ASG. Because the ASG has the create_before_destroy lifecycle customisation, it will create a new ASG, wait for the minimum number of instances to pass EC2 health checks, and then destroy the old ASG.
For your case you can also use the name parameter on the openstack_compute_instance_v2 resource, as that is an immutable field as well. So a basic example might look like this:
variable "flavor_name" {
  default = "FLAVOR_1"
}

resource "random_pet" "flavor_random_name" {
  keepers = {
    # Generate a new pet name each time we switch to a new flavor
    flavor_name = var.flavor_name
  }
}

resource "openstack_compute_instance_v2" "example" {
  name            = "example-${random_pet.flavor_random_name.id}"
  image_id        = "ad091b52-742f-469e-8f3c-fd81cadf0743"
  flavor_name     = var.flavor_name
  key_pair        = "my_key_pair_name"
  security_groups = ["default"]

  metadata = {
    this = "that"
  }

  network {
    name = "my_network"
  }
}
So. At first I started digging into how, as ydaetskcoR proposed, to use a random instance name.
Name wasn't an option, both because in OpenStack it is a mutable parameter and because I have a fixed naming schema which I can't change.
I started to look for other parameters that I could modify to force the instance to be recreated instead of modified, and I found personality:
https://www.terraform.io/docs/providers/openstack/r/compute_instance_v2.html#instance-with-personality
But it didn't work either, mainly because personality seems to no longer be supported:
The use of personality files is deprecated starting with the 2.57 microversion. Use metadata and user_data to customize a server instance.
https://docs.openstack.org/api-ref/compute/
Not sure whether Terraform doesn't support it or there are other issues. But I went with user_data. I already used user_data in the compute instance module, so adding some flavor data there shouldn't be an issue.
So, within user_data I've added the following:
user_data = "runcmd:\n - echo ${var.host["flavor"]} > /tmp/tf_flavor"
No need for random pet names, no need to change instances names. Just change their "personality" by adding flavor name somewhere. This does force instance to be recreated when flavor changes.
So instead of simply:

  # module.instance.openstack_compute_instance_v2.server[0] will be updated in-place
  ~ resource "openstack_compute_instance_v2" "server" {

I now have:

-/+ destroy and then create replacement
+/- create replacement and then destroy

Terraform will perform the following actions:

  # module.instance.openstack_compute_instance_v2.server[0] must be replaced
+/- resource "openstack_compute_instance_v2" "server" {

Attempting to use list of stacks output as module output

I have a module that creates a variable number of CloudFormation Stacks. This works just fine but I am having problems attempting to use the Stack output as output in the module. The stack creates a subnet and I specify the created subnet id as an output of the stack. Then I want to return a list of all subnet ids as part of module output. This is what I think my output should look like:
output "subnets" {
  value = ["${aws_cloudformation_stack.subnets.*.outputs["Subnet"]}"]
}
I get an integer parse error when I do that. Terraform seems to be treating outputs as a list instead of a map. Any way to get this to work?
Edit: Here is where I declare the stacks:
resource "aws_cloudformation_stack" "subnets" {
  count         = "${local.num_zones}"
  name          = "Subnet-${element(local.availability_zones, count.index)}"
  on_failure    = "DELETE"
  template_body = "${file("${path.module}/templates/subnet.yaml")}"

  parameters {
    CIDR = "${cidrsubnet(var.cidr, ceil(log(local.num_zones * 2, 2)), count.index)}"
    AZ   = "${element(local.availability_zones, count.index)}"
    VPC  = "${aws_cloudformation_stack.vpc.outputs["VPCId"]}"
  }
}
Then there is a stack output in subnet.yaml that has the key Subnet and is the id of the subnet that was created.
The stacks are all created successfully, but I can't get the module to export all the created subnet ids. Not sure why Terraform is treating *.outputs as the list rather than keeping *.outputs["Subnet"] as the list. I'm guessing *.outputs gets converted to a list of maps, but I need a list of a specific key (Subnet) in the map.
I've got a non-list example working, for the VPC stack, using the stack output as Terraform module output:
resource "aws_cloudformation_stack" "vpc" {
  name          = "${var.name_prefix}-VPC"
  on_failure    = "DELETE"
  template_body = "${file("${path.module}/templates/vpc.yaml")}"

  parameters {
    CIDR = "${var.cidr}"
  }
}

output "vpc" {
  value = "${aws_cloudformation_stack.vpc.outputs["VPCId"]}"
}
I was able to work around the issue by declaring a data source to look up the subnets after creation. It's not ideal but gets me past being stuck. Let me know if anyone knows how to do what I was originally trying to do. Here is what I came up with:
data "aws_subnet_ids" "subnets" {
  depends_on = ["aws_cloudformation_stack.subnets"]
  vpc_id     = "${aws_cloudformation_stack.vpc.outputs["VPCId"]}"
}

output "subnets" {
  value = "${data.aws_subnet_ids.subnets.ids}"
}
