Terraform multiple resources with the same monitoring settings

Multiple resources (S3, Lambda, etc.) in AWS are created by different teams via Terraform scripts.
I've developed Terraform scripts for CloudWatch monitoring; they require an AWS ARN as input.
Is it possible for each team to use my monitoring Terraform scripts without copying them into their repos?

Yes, you can let each team make use of your Terraform scripts by packaging them as a module. The teams can then load the module into their own configurations using a module block. If you do not use git, or you keep all your Terraform code in one repository, there are other ways to load a module.
This is how we build and use a large number of modules to keep a consistent setup across all teams. We have a git repository for each module (e.g. module_cloudwatch_monitoring). There is nothing special about defining a module; you only need to have its variables and outputs defined.
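As a rough sketch, the module repository itself only needs an input variable and whatever it publishes back; the file names and the monitored_arn output below are illustrative, not taken from the question:
# variables.tf -- the module's input
variable "resource_arn" {
  description = "ARN of the AWS resource to monitor"
}

# main.tf -- the actual monitoring resources live here,
# e.g. aws_cloudwatch_metric_alarm blocks driven by var.resource_arn

# outputs.tf -- anything consumers may want back from the module
output "monitored_arn" {
  value = "${var.resource_arn}"
}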
When a team wants to use that module, they can use the module syntax like:
module "cloudwatch_monitoring" {
source = "git::http://your-git-repository-url.git?ref=latest"
resource_arn = "${aws_s3_bucket.my_bucket.id}"
}
If the module was in a local path in the same repository, you could do something like:
module "cloudwatch_monitoring" {
source = "../../modules/cloudwatch_monitoring"
resource_arn = "${aws_s3_bucket.my_bucket.id}"
}


How to re-use a module several times in the same `main.tf`?

I created a module for a re-usable piece of infrastructure. The module represents a project, so each time we want to create a new project and its related infrastructure items, we can use this module:
module "project1" {
source = ".modules/project_module"
project_id = "project1"
...
}
module "project2" {
source = ".modules/project_module"
project_id = "project2"
...
}
The module uses the google provider to create resources on GCP.
Unfortunately, this didn't work as hoped. First, each new project requires invoking terraform init, and second, it is impossible to remove a project: when a module is removed from the main.tf file, Terraform complains that without the Google provider it cannot destroy the resources. For example:
module.project1.google_storage_bucket_iam_member.some-bucket:
configuration for module.project1.provider.google is not present; a provider configuration block is required for all operations
Is there a way to use the same module several times in the same main.tf? I realize that ideally I should write a provider configuration, but I would like to avoid that for now.
It turned out there was something inconsistent in the state. In the end, after re-creating the project from scratch while keeping the Google provider configuration outside of the module (in the root configuration), it worked.
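A minimal sketch of that working layout; the provider settings below are placeholders, not values from the question:
# main.tf -- the Google provider is configured once, in the root configuration
provider "google" {
  project = "my-seed-project"    # placeholder
  region  = "europe-west1"       # placeholder
}

module "project1" {
  source     = ".modules/project_module"
  project_id = "project1"
}

module "project2" {
  source     = ".modules/project_module"
  project_id = "project2"
}
The modules inherit the root provider configuration, so removing a module block no longer leaves resources without a provider to destroy them. terraform init still needs to run after adding a new module block, but only to install the module.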

Use variable in Terraform remote backend

# Using a single workspace:
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "company"

    workspaces {
      name = "my-app-prod"
    }
  }
}
For the Terraform remote backend, is there a way to use a variable to specify the organization / workspace name instead of the hardcoded values there? The Terraform documentation didn't seem to mention anything related either.
The backend configuration documentation goes into this in some detail. The main point to note is this:
Only one backend may be specified and the configuration may not contain interpolations. Terraform will validate this.
If you want to make this easily configurable, you can use partial configuration for the static parts (e.g. the type of backend, such as S3) and then provide the rest of the config at run time: interactively, via environment variables, or via command line flags.
I personally wrap Terraform actions in a small shell script that runs terraform init with command line flags, using an appropriate S3 bucket (e.g. a different one for each project and AWS account) and making sure the state file location matches the path of the directory I am working in.
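A minimal sketch of that pattern, with a placeholder bucket name and environment variable; deriving the state key from the current directory name is a simplification:
# backend.tf -- only the static part of the backend is declared
terraform {
  backend "s3" {}
}

#!/bin/sh
# wrapper.sh -- supply the dynamic backend settings at init time
set -e
terraform init \
  -backend-config="bucket=terraform-state-${ACCOUNT_ALIAS}" \
  -backend-config="key=${PWD##*/}/terraform.tfstate" \
  -backend-config="region=eu-west-1"
terraform "$@"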
I had the same problems and was very disappointed by the need for additional init/wrapper scripts. Some time ago I started to use Terragrunt.
It's worth taking a look at Terragrunt because it closes the gap left by Terraform's inability to use variables in certain places, e.g. the remote backend configuration:
https://terragrunt.gruntwork.io/docs/getting-started/quick-start/#keep-your-backend-configuration-dry
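For illustration, the pattern from the linked quick-start keeps the backend configuration in a single root-level terragrunt.hcl; the bucket name below is a placeholder:
# terragrunt.hcl in the repository root
remote_state {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"
    key    = "${path_relative_to_include()}/terraform.tfstate"
    region = "us-east-1"
  }
}

# terragrunt.hcl in each child module directory
include {
  path = find_in_parent_folders()
}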

How to manage terraform for multiple repos

I have 2 repos for my project: a static website and a server. I want the website to be hosted by CloudFront and S3, and the server on Elastic Beanstalk. I know these resources will need to know about at least a Route53 resource so they can sit under the same domain name for CORS to work, among other things such as VPCs.
So my question is: how do I manage Terraform with multiple repos?
I'm thinking I could have a separate infrastructure repo that builds for all repos.
I could also keep them separate and pass in the ARNs/names/IDs as variables (annoying).
You can use terraform_remote_state for this. It lets you read the output variables from another Terraform state file.
Let's assume you save your state files remotely on S3 and you have your website.tfstate and server.tfstate files. You could output the hosted zone ID of your Route53 zone as hosted_zone_id in your website.tfstate and then reference that output variable directly in the Terraform code of your server state.
data "terraform_remote_state" "website" {
backend = "s3"
config {
bucket = "<website_state_bucket>"
region = "<website_bucket_region>"
key = "website.tfstate"
}
}
resource "aws_route53_record" "www" {
zone_id = "${data.terraform_remote_state.website.hosted_zone_id}"
name = "www.example.com"
type = "A"
ttl = "300"
records = ["${aws_eip.lb.public_ip}"]
}
Note, that you can only read output variables from remote states. You cannot access resources directly, as terraform treats other states/modules as black boxes.
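For this to work, the website configuration has to publish the value explicitly as an output; the zone resource name main below is a placeholder:
# In the website configuration (the producing state)
output "hosted_zone_id" {
  value = "${aws_route53_zone.main.zone_id}"
}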
Update
As mentioned in the comments, terraform_remote_state is a simple way to share explicitly published variables across multiple states. However, it comes with two issues:
Tight coupling between code components, i.e., the producer of the variable cannot change easily.
It can only be used by Terraform, i.e., you cannot easily share those variables across different layers. Configuration tools such as Ansible cannot use .tfstate natively without some additional custom plugin/wrapper.
The recommended HashiCorp way is to use a central config store such as Consul. It comes with more benefits:
Consumer is decoupled from the variable producer.
Explicit publishing of variables (like in terraform_remote_state).
Can be used by other tools.
A more detailed explanation can be found here.
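A rough sketch of that approach, assuming the Consul provider's consul_keys resource and data source; the key paths and names are made up for illustration:
# Producer side: publish the value to Consul
resource "consul_keys" "website" {
  key {
    path  = "apps/website/hosted_zone_id"
    value = "${aws_route53_zone.main.zone_id}"
  }
}

# Consumer side: read it back; other tools can query the same
# path directly via the Consul API or CLI
data "consul_keys" "website" {
  key {
    name = "hosted_zone_id"
    path = "apps/website/hosted_zone_id"
  }
}

# Referenced as "${data.consul_keys.website.var.hosted_zone_id}"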
An approach I've used in the past is to have a single repo for all of the infrastructure.
An alternative is to have 2 separate Terraform configurations, each using remote state. Config 1 can use output variables to store any ARNs/IDs as necessary.
Config 2 can then have a terraform_remote_state data source to query for the relevant ARNs/IDs.
E.g.
# Declare remote state
data "terraform_remote_state" "network" {
  backend = "s3"

  config {
    bucket = "my-terraform-state"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}
You can then use output values using standard interpolation syntax
${data.terraform_remote_state.network.some_id}

How to use multiple Terraform Providers sequentially

How can I get Terraform 0.10.1 to support two different providers without having to run 'terraform init' every time for each provider?
I am trying to use Terraform to
1) Provision an API server with the 'DigitalOcean' provider
2) Subsequently use the 'Docker' provider to spin up my containers
Any suggestions? Do I need to write an orchestrating script that wraps Terraform?
Terraform's current design struggles with creating "multi-layer" architectures in a single configuration, due to the need to pass dynamic settings from one provider to another:
resource "digitalocean_droplet" "example" {
# (settings for a machine running docker)
}
provider "docker" {
host = "tcp://${digitalocean_droplet.example.ipv4_address_private}:2376/"
}
As you saw in the documentation, passing dynamic values into provider configuration doesn't fully work. It does actually partially work if you use it with care, so one way to get this done is to use a config like the above and then solve the "chicken-and-egg" problem by forcing Terraform to create the droplet first:
$ terraform plan -out=tfplan -target=digitalocean_droplet.example
The above will create a plan that only deals with the droplet and any of its dependencies, ignoring the Docker resources. Once the droplet is up and running, you can re-run Terraform as normal to complete the setup, which should then work as expected because the droplet's ipv4_address_private attribute will be known. As long as the droplet is never replaced, Terraform can be used as normal after this.
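Spelled out as a full bootstrap sequence, using the resource name from the example above:
# Phase 1: create only the droplet and its dependencies
$ terraform plan -out=tfplan -target=digitalocean_droplet.example
$ terraform apply tfplan

# Phase 2: the droplet's IP is now known, so plan and apply everything else
$ terraform plan -out=tfplan
$ terraform apply tfplan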
Using -target is fiddly, and so the current recommendation is to split such systems up into multiple configurations, with one for each conceptual "layer". This does, however, require initializing two separate working directories, which you indicated in your question that you didn't want to do. This -target trick allows you to get it done within a single configuration, at the expense of an unconventional workflow to get it initially bootstrapped.
Maybe you can use provider instances within your resources/modules to set up various resources with various providers.
https://www.terraform.io/docs/configuration/providers.html#multiple-provider-instances
The doc talks about multiple instances of the same provider, but I believe the same should be doable with distinct providers as well.
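For reference, the aliasing mechanism from that page looks roughly like this (the host addresses and image are placeholders); whether it helps here still depends on the host values being known when the providers are configured:
provider "docker" {
  alias = "host_a"
  host  = "tcp://10.0.0.10:2376/"
}

provider "docker" {
  alias = "host_b"
  host  = "tcp://10.0.0.11:2376/"
}

resource "docker_image" "nginx" {
  provider = "docker.host_a"
  name     = "nginx:latest"
}

resource "docker_container" "www_a" {
  provider = "docker.host_a"
  name     = "www"
  image    = "${docker_image.nginx.latest}"
}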
A little bit late...
Well, I had the same problem. My workaround is to create modules.
First you need a module for your Docker provider with an ip variable:
# File: ./docker/main.tf
variable "ip" {}

provider "docker" {
  host = "tcp://${var.ip}:2375/"
}

resource "docker_container" "www" {
  provider = "docker"
  name     = "www"
  image    = "nginx:latest"   # image is required; typically referenced from a docker_image resource
}
Next, load that module in your root configuration:
# File: ./main.tf
module "docker01" {
  source = "./docker"
  ip     = "192.169.10.12"
}

module "docker02" {
  source = "./docker"
  ip     = "192.169.10.13"
}
True, this creates the same container on every node, but in my case that's what I wanted. I haven't found a way yet to give each host an individual configuration; nested modules might work, but my first tries didn't (one possible workaround is sketched below).
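If per-host differences are needed, one possible workaround, not from the original answer, is to add more variables to the module and pass different values per instance; the container_name variable here is hypothetical:
# File: ./docker/main.tf (extended; the provider block stays as above)
variable "ip" {}

variable "container_name" {
  default = "www"
}

resource "docker_container" "www" {
  provider = "docker"
  name     = "${var.container_name}"
  image    = "nginx:latest"
}

# File: ./main.tf
module "docker01" {
  source         = "./docker"
  ip             = "192.169.10.12"
  container_name = "www-node01"
}

module "docker02" {
  source         = "./docker"
  ip             = "192.169.10.13"
  container_name = "www-node02"
}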
