How to create AWS resources in multiple regions using for loop - terraform

I'm trying to launch AWS Config and its rules in every region in my account. Right now, in my root main.tf, I create an AWS provider in a single region and call my AWS Config module from my modules directory. This is fine for creating one module, but I was hoping to have a list of regions that I could iterate over to create AWS Config rules in each one.
I have tried creating individual module blocks with the region as a parameter, but I don't know if maintaining 10+ nearly identical modules is effective. It seems a for loop would be a better fit, but I can't find any examples of this behavior.
provider "aws" {
region = "${var.aws_region}"
}
module "config" {
source = "./modules/config"
...
}
My goal is to reuse my config module and create its resources in all regions: us-east-1, us-east-2, us-west-1, and so on.

I believe you're trying to dynamically pass in a list of regions to a module's provider in order to provision resources across regions in a single module. This is not possible at the moment.
Here is the ticket to upvote and follow: https://github.com/hashicorp/terraform/issues/24476
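In the meantime, the usual workaround is to declare one aliased provider per region and repeat the module block for each one, passing the alias through the module's providers argument. A minimal sketch, assuming Terraform 0.12 or later and the ./modules/config path from the question:

# One aliased provider per region
provider "aws" {
  alias  = "use1"
  region = "us-east-1"
}

provider "aws" {
  alias  = "use2"
  region = "us-east-2"
}

# One module call per region, each bound to its regional provider
module "config_use1" {
  source = "./modules/config"

  providers = {
    aws = aws.use1
  }
}

module "config_use2" {
  source = "./modules/config"

  providers = {
    aws = aws.use2
  }
}

Every region still needs its own provider and module block, which is exactly the duplication the linked issue asks Terraform to remove.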

Related

How to handle multi region and multi account aws infrastructure using terraform

My infrastructure is spread across multiple accounts and multiple regions. I want to implement IaC for that using Terraform. I have created modules for various core resources like VPCs, ECS clusters, etc. How do I handle multiple regions and accounts?
I suggest you start by looking at the provider alias option. Basically, you define a provider alias for each region or account. Then, when you create a resource or "call" a module, you specify the provider alias to use.
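A rough sketch of what that looks like across accounts and regions (the account IDs, role names, and module path below are placeholders):

# Aliased provider for the production account in us-east-1
provider "aws" {
  alias  = "prod_use1"
  region = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/terraform"
  }
}

# Aliased provider for the dev account in eu-west-1
provider "aws" {
  alias  = "dev_euw1"
  region = "eu-west-1"

  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/terraform"
  }
}

module "vpc_prod" {
  source = "./modules/vpc"

  providers = {
    aws = aws.prod_use1
  }
}

Each module call then provisions its resources with whichever aliased provider you hand it.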

How Do I Apply A Single Terraform Module?

I have a single main.tf at the root and different modules under it for different parts of my Azure cloud, e.g.
main.tf
- apim
- firewall
- ftp
- function
The main.tf passes variables down to the various modules, e.g. the resource group name or a map of tags.
During development I have been investigating certain functionality using the portal, and I don't have it in Terraform yet,
e.g. working out how best to add a mock service in the web module.
If I now want to update a different module (e.g. update firewall rules), Terraform will suggest destroying the service I added.
How can I do terraform plan/apply for just a single module?
You can target only the module by specifying the module namespace as the target argument in your plan and apply commands:
terraform plan -target=module.<declared module name>
For example, if your module declaration was:
module "the_firewall" {
source = "${path.root}/firewall"
}
then the command would be:
terraform plan -target=module.the_firewall
to only target that module declaration.
Note that -target should only be used in development/testing scenarios, which it seems you are already focused on, based on the question.

How to structure terraform templates for serverless applications

I have several Lambdas which are triggered either by messages from queues or through API Gateway, have different storage types, and so on.
Each of these components sits in its own repo, but together they are all part of the same architecture.
I am attempting to structure my Terraform templates, and one point of concern is that some of these Lambdas share resources, for example storage tables or S3 buckets. I was wondering if it would be a good idea to have a main.tf in each Lambda's repo that creates only the Lambda itself and none of its other dependencies; that way I could redeploy the Lambda through CI/CD without worrying about the other components. I would then place all the other, longer-lived parts of the architecture in a central repo and only apply them when necessary through that repo's dedicated CI/CD pipeline. I was also thinking of having a tfvars file that holds all the shared resource names.
Is this a valid approach? What are the downsides? What are the alternatives?
You are on the right track. You can have a module for the Lambdas that accepts those common resources as variables. You can either hardcode the resource names or terraform import them, then pass them to your Lambda modules:
In the modules/lambda folder:
resource "aws_lambda_function" "this" {
"""
My lambda stuff
"""
}
In your main folder's main.tf:
module "lambda1"{
s3_bucket = "${var.common_bucket}"
iam_role = "${var.common_role}"
subnets = "${var.private_subnets}"
....
}
module "lambda2"{
s3_bucket = "${var.common_bucket}"
iam_role = "${var.common_role}"
subnets = "${var.private_subnets}"
....
}
module "lambda3"{
s3_bucket = "${var.common_bucket}"
iam_role = "${var.common_role}"
subnets = "${var.private_subnets}"
....
}
If you want to import the common resources, e.g. the IAM policy for the role, just do something like this:
resource "aws_iam_policy" "my_policy" {
  # fill in the policy's attributes after importing it
}
Then run the command terraform import aws_iam_policy.my_policy <policy arn>
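For completeness, the modules/lambda folder would also need to declare variables for those shared values. A hedged sketch using the variable names from the module calls above (0.11-style syntax to match the rest of the example):

# modules/lambda/variables.tf (illustrative descriptions)
variable "s3_bucket" {
  description = "Name of the shared S3 bucket holding the deployment packages"
}

variable "iam_role" {
  description = "ARN of the shared execution role"
}

variable "subnets" {
  description = "Shared private subnet IDs"
  type        = "list"
}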

Layered deployments with Terraform

I am new to Terraform, so I'm not even sure something like this is possible. As an example, let's say I have a template that deploys an Azure resource group and a key vault in it. And then let's say I have another template that deploys a virtual machine into the same resource group. Is it possible to run a destroy with the virtual machine template without destroying the key vault and resource group? We are trying to compartmentalize the parts of a large solution without having to put it all in a single template, and we want to be able to manage each piece separately without affecting the other pieces.
On a related note, we are storing state files in an Azure storage account. If we break up our deployment into multiple compartmentalized deployments, should each deployment have its own state file, or should they all use the same state file?
For larger systems it is common to split infrastructure across multiple separate configurations and apply each of them separately. This is a separate idea from (and complementary to) using shared modules: modules allow a number of different configurations to have their own separate "copy" of a particular set of infrastructure, while the patterns described below allow an object managed by one configuration to be passed by reference to another.
If some configurations will depend on the results of other configurations, it's necessary to store these results in some data store that can be written to by its producer and read from by its consumer. In an environment where the Terraform state is stored remotely and readable broadly, the terraform_remote_state data source is a common way to get started:
data "terraform_remote_state" "resource_group" {
# The settings here should match the "backend" settings in the
# configuration that manages the network resources.
backend = "s3"
config {
bucket = "mycompany-terraform-states"
region = "us-east-1"
key = "azure-resource-group/terraform.tfstate"
}
}
resource "azurerm_virtual_machine" "example" {
resource_group_name = "${data.terraform_remote_state.resource_group.resource_group_name}"
# ... etc ...
}
The resource_group_name attribute exported by the terraform_remote_state data source in this example assumes that a value of that name was exposed by the configuration that manages the resource group using an output.
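For example, the configuration that manages the resource group would need to declare something along these lines (the resource name is assumed to match the later examples):

output "resource_group_name" {
  value = "${azurerm_resource_group.example.name}"
}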
This decouples the two configurations so that they have an entirely separate lifecycle. You first terraform apply in the configuration that creates the resource group, and then terraform apply in the configuration that contains the terraform_remote_state data resource shown above. You can then apply that latter configuration as many times as you like without risk to the shared resource group or key vault.
While the terraform_remote_state data source is quick to get started with for any organization already using remote state (which is recommended), some organizations prefer to decouple configurations further by introducing an intermediate data store like Consul, which then allows data to be passed between configurations more explicitly.
To do this, the "producing" configuration (the one that manages your resource group) publishes the necessary information about what it created into Consul at a well-known location, using the consul_key_prefix resource:
resource "consul_key_prefix" "example" {
path_prefix = "shared/resource_group/"
subkeys = {
name = "${azurerm_resource_group.example.name}"
id = "${azurerm_resource_group.example.id}"
}
resource "consul_key_prefix" "example" {
path_prefix = "shared/key_vault/"
subkeys = {
name = "${azurerm_key_vault.example.name}"
id = "${azurerm_key_vault.example.id}"
uri = "${azurerm_key_vault.example.uri}"
}
}
The separate configuration(s) that use the centrally-managed resource group and key vault would then read it using the consul_keys data source:
data "consul_keys" "example" {
key {
name = "resource_group_name"
path = "shared/resource_group/name"
}
key {
name = "key_vault_name"
path = "shared/key_vault/name"
}
key {
name = "key_vault_uri"
path = "shared/key_vault/uri"
}
}
resource "azurerm_virtual_machine" "example" {
resource_group_name = "${data.consul_keys.example.var.resource_group_name}"
# ... etc ...
}
In return for the additional complexity of running another service to store these intermediate values, the two configurations now know nothing about each other apart from the agreed-upon naming scheme for keys within Consul, which gives flexibility if, for example, in future you decide to refactor these Terraform configurations so that the key vault has its own separate configuration too. Using a generic data store like Consul also potentially makes this data available to the applications themselves, e.g. via consul-template.
Consul is just one example of a data store that happens to already be well-supported in Terraform. It's also possible to achieve similar results using any other data store that Terraform can both read and write. For example, you could even store values in TXT records in a DNS zone and use the DNS provider to read, as an "outside the box" solution that avoids running an additional service.
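For instance, a consuming configuration could read such a value with the dns provider's TXT record data source (the record name here is made up):

# Read the resource group name published as a TXT record
data "dns_txt_record_set" "resource_group" {
  host = "resource-group.terraform.example.com"
}

resource "azurerm_virtual_machine" "example" {
  resource_group_name = "${data.dns_txt_record_set.resource_group.record}"

  # ... etc ...
}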
As usual, there is a tradeoff to be made here between simplicity (with "everything in one configuration" being the simplest possible) and flexibility (with a separate configuration store), so you'll need to evaluate which of these approaches is the best fit for your situation.
As some additional context: I've documented a pattern I used successfully for a moderate-complexity system. In that case we used a mixture of Consul and DNS to create an "environment" abstraction that allowed us to deploy the same applications separately for a staging environment, production, etc. The exact technologies used are less important than the pattern, though. That approach won't apply exactly to all other situations, but hopefully there are some ideas in there to help others think about how to best make use of Terraform in their environment.
You can destroy specific resources using terraform destroy -target=path.to.resource; see the -target documentation for details.
Different parts of a large solution can be split up into modules; these modules do not even have to be part of the same codebase and can be referenced remotely. Depending on your solution, you may want to break up your deployments into modules and reference them from a "master" state file that contains everything.

Managing multiple configurations which depend on a singleton shared resource

I have multiple different services which each have their own terraform configuration to create resources (in this particular case, a BigQuery table for each service).
Each of these services depends on the existence of a single instance of a resource (in this case, a BigQuery dataset).
I would like to somehow configure Terraform so that this shared resource is created exactly once if it does not exist.
My first thought was to use modules; however, this leads to each root service attempting to create its own instance of the shared resource, due to module namespacing.
Ideally I would like to mark one directory of terraform configuration as dependent on another directory of terraform configuration, without importing that latter directory as a module. Is this possible?
It is. You need to put the shared resource in its own configuration (or module) and save its remote state somewhere; you can configure backends in Terraform to handle this for you. Once you have that, other configurations can reference that state using the terraform_remote_state data source. Any outputs you have configured will be available to reference from the remote state.
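A minimal sketch of that pattern for the BigQuery case, with the backend settings, bucket name, and output name as assumptions (0.11-style syntax to match the other answers here). The shared configuration creates the dataset exactly once and publishes its ID:

resource "google_bigquery_dataset" "shared" {
  dataset_id = "shared_dataset"
}

output "dataset_id" {
  value = "${google_bigquery_dataset.shared.dataset_id}"
}

Each service configuration then reads that state instead of creating its own dataset:

data "terraform_remote_state" "shared" {
  backend = "gcs"

  config {
    bucket = "mycompany-terraform-states"
    prefix = "shared-bigquery"
  }
}

resource "google_bigquery_table" "service" {
  dataset_id = "${data.terraform_remote_state.shared.dataset_id}"
  table_id   = "service_table"
}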
