Using terraform remote state in s3 with multiple folders - terraform

I am currently using the default workspace and my folder structure is like this -
dev
├── app
│   └── main.tf
├── mysql
│   └── main.tf
└── vpc
    └── main.tf
I have an S3 backend created and it works fine for a single folder:
terraform {
  backend "s3" {
    bucket         = "mybucket"
    key            = "global/s3/mykey/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-wellness-nonprod"
    encrypt        = true
  }
}
I am struggling with how to include this backend config in all the folders. I want to use the same backend S3 bucket and DynamoDB table in app, mysql, and vpc (with different state keys), but while this works in one folder, in the second folder Terraform wants to delete both the S3 bucket and the DynamoDB table.
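One common approach (a sketch, assuming each folder holds an independent root configuration) is to keep the same bucket and DynamoDB table but give every folder its own key, so each folder tracks its own state:

```hcl
# dev/app/main.tf — hypothetical per-folder backend; only the key differs
terraform {
  backend "s3" {
    bucket         = "mybucket"
    key            = "dev/app/terraform.tfstate"   # unique per folder
    region         = "us-east-1"
    dynamodb_table = "terraform-state-wellness-nonprod"
    encrypt        = true
  }
}
```

Note that the S3 bucket and DynamoDB table themselves should be created once, in a separate bootstrap configuration, rather than declared as resources in each folder; having the same bucket and table managed in several states is what makes Terraform in the second folder try to delete them.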

I recommend you use a module structure in your Terraform code, like:
dev
├── modules
│   ├── app
│   │   └── app.tf
│   ├── mysql
│   │   └── mysql.tf
│   └── vpc
│       └── vpc.tf
└── main.tf
main.tf:
module "app" {
  source = "./modules/app"
  ...
}
module "mysql" {
  source = "./modules/mysql"
  ...
}
module "vpc" {
  source = "./modules/vpc"
  ...
}
terraform {
  backend "s3" {
    ...
  }
}
If you want to apply/destroy each module:
terraform apply -target module.app
terraform destroy -target module.app
See this repository using a module structure.
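If you keep separate folders instead of modules, outputs from one folder's state can be consumed in another via the terraform_remote_state data source. A minimal sketch, assuming the vpc folder exports a hypothetical vpc_id output:

```hcl
# dev/app/main.tf — read the vpc folder's state (names are illustrative)
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "mybucket"
    key    = "dev/vpc/terraform.tfstate"
    region = "us-east-1"
  }
}

# Reference an output exported by dev/vpc/outputs.tf
locals {
  vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id
}
```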

Related

Terragrunt is possible to overwrite a generate block from child file?

I'm quite new to Terragrunt, and I can't find a way to overwrite a generate block (if that's even possible).
What I'm trying to do is the following.
I have the following folder structure:
└── my_project
    └── root
        ├── child-1
        │   └── terragrunt.hcl
        ├── child-2
        │   └── terragrunt.hcl
        ├── child-3
        │   └── terragrunt.hcl
        └── terragrunt.hcl
In my root/terragrunt.hcl I have the following terragrunt generate block:
generate "backend" {
  path      = "backend.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
terraform {
  backend "azurerm" {
    resource_group_name  = "My_RG"
    storage_account_name = "My_storage"
    container_name       = "My_tf-state"
    key                  = "foo_key"
  }
}
EOF
}
And in all my child modules I have:
include "root" {
  path = find_in_parent_folders()
}
//other terragrunt stuff
What I'd like to do is overwrite, merge, or replace the key = "foo_key" in every child file.
Something like this:
In child-1/terragrunt.hcl I'll have:
include "root" {
  path = find_in_parent_folders()
  key  = "another_key1" //I know the include doesn't support that, is just for example
}
//other terragrunt stuff
In child-2/terragrunt.hcl I'll have:
include "root" {
  path = find_in_parent_folders()
  key  = "custom_key2"
}
//other terragrunt stuff
and so on.
Is there a way to do so?
Define a common generate backend block in a root file and replace what I need in the child files?
Thanks.
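One way this is commonly handled (a sketch, not tested against this exact setup): rather than overriding the key per child, let the root generate block derive the key from each child's directory using Terragrunt's path_relative_to_include() function, which is evaluated before the contents are written:

```hcl
# root/terragrunt.hcl — key becomes "child-1", "child-2", … automatically
generate "backend" {
  path      = "backend.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
terraform {
  backend "azurerm" {
    resource_group_name  = "My_RG"
    storage_account_name = "My_storage"
    container_name       = "My_tf-state"
    key                  = "${path_relative_to_include()}.tfstate"
  }
}
EOF
}
```

This gives every child a distinct state key without any per-child backend configuration; the child files keep their plain include "root" block.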

Terraform aws_iam_access_key secret in another division (using remote_state)

I have a Terraform infrastructure that is divided into "parts" that looks something like this.
.
├── network
│   ├── locals.tf
│   ├── main.tf
│   ├── outputs.tf
│   └── variables.tf
├── ecs
│   ├── locals.tf
│   ├── main.tf
│   ├── outputs.tf
│   └── variables.tf
└── sqs
    ├── locals.tf
    ├── main.tf
    ├── output.tf
    └── variables.tf
In SQS, I'm creating a programmatic user with aws_iam_user and aws_iam_access_key.
resource "aws_iam_user" "sqs_write" {
  name = "sqs-queue-name-read"
  path = "/system/"
}

resource "aws_iam_access_key" "sqs_write" {
  user    = aws_iam_user.sqs_write.name
  pgp_key = local.settings.gpg_public_key
}
Now I need to be able to use aws_iam_access_key.sqs_write.secret in my ECS division.
I tried sending the secret to an "output" and using it with data.terraform_remote_state in my ECS division, but Terraform says the output does not exist (most likely because it is marked as sensitive = true).
I tried to save the aws_iam_access_key.sqs_write.secret to a SSM parameter with:
resource "aws_ssm_parameter" "write_secret" {
  name        = "sqs-queue-name-write-secret-access-key"
  description = "SQS write secret access key"
  key_id      = "aws/secretsmanager"
  type        = "String"
  value       = aws_iam_access_key.sqs_write.secret
  overwrite   = true
}
But I get this error:
╷
│ Error: Missing required argument
│
│ with aws_ssm_parameter.write_secret,
│ on main.tf line 109, in resource "aws_ssm_parameter" "write_secret":
│ 109: value = aws_iam_access_key.sqs_write.secret
│
│ The argument "value" is required, but no definition was found.
╵
So I can't seem to find a way to use the "secret" value outside of my SQS division. I could use the "encrypted_secret" attribute, which works fine, but I don't know how I could decrypt it directly from Terraform, so I guess it is not an option.
Any thoughts?
My version is:
Terraform v1.0.2 on linux_amd64
provider registry.terraform.io/hashicorp/aws v3.52.0
provider registry.terraform.io/hashicorp/http v2.1.0
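A likely explanation (worth verifying against the AWS provider docs for your version): when pgp_key is set on aws_iam_access_key, the provider only exports encrypted_secret and the plain secret attribute is not available, which would produce both symptoms above. A sketch of one alternative, assuming you are willing to drop PGP encryption and rely on state-level protection instead:

```hcl
# sqs/main.tf — no pgp_key, so the plain `secret` attribute is populated
resource "aws_iam_access_key" "sqs_write" {
  user = aws_iam_user.sqs_write.name
}

# sqs/output.tf — sensitive outputs are still readable via remote state
output "sqs_write_secret" {
  value     = aws_iam_access_key.sqs_write.secret
  sensitive = true
}

# ecs/main.tf — bucket/key names are illustrative
data "terraform_remote_state" "sqs" {
  backend = "s3"
  config = {
    bucket = "my-state-bucket"
    key    = "sqs/terraform.tfstate"
    region = "us-east-1"
  }
}

locals {
  sqs_secret = data.terraform_remote_state.sqs.outputs.sqs_write_secret
}
```

The trade-off is that the secret then lives in plain text in both state files, so the state bucket must be tightly access-controlled.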

how to get terragrunt to read tfvars files into dependent modules

Does anyone know how to get Terragrunt to read tfvars files into dependent modules? If I declare all my tfvars as inputs in my root terragrunt.hcl, everything works fine, but then of course I can't customize them by environment. I tried adding the extra_arguments block, but the variables aren't declared in the root module; they're declared in the dependent module, and I don't want to have to declare them in both places.
Here’s my setup:
// terraform/terragrunt.hcl
terraform {
  extra_arguments "common_vars" {
    commands  = ["plan", "apply"]
    arguments = [
      "-var-file=${find_in_parent_folders("account.tfvars")}",
      "-var-file=./terraform.tfvars"
    ]
  }
}

locals {
  environment_vars = read_terragrunt_config(find_in_parent_folders("account.hcl"))
  bucket           = local.environment_vars.locals.bucket
}

remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
  config = {
    key    = "${path_relative_to_include()}/terraform.tfstate"
    region = "us-east-1"
    bucket = local.bucket
  }
}

dependencies {
  paths = ["../../../shared/services", "../../../shared/core"]
}

// terraform/accounts/dev/account.tfvars
aws_region = "us-east-1"

// terraform/accounts/dev/william/terraform.tfvars
aws_vpc_cidr = "10.1.0.0/16"

// terraform/accounts/dev/william/terragrunt.hcl
include {
  path = find_in_parent_folders()
}
This doesn't work because the variable values don't actually get passed to the dependent modules. I got this back when I tried to run a terragrunt plan:
$ terragrunt plan
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.
Warning: Value for undeclared variable
The root module does not declare a variable named
"aws_region" but a value was found in file
"/Users/williamjeffries/code/parachute/infrastructure/terraform/accounts/dev/account.tfvars".
To use this value, add a "variable" block to the configuration.
Using a variables file to set an undeclared variable is deprecated and will
become an error in a future release. If you wish to provide certain "global"
settings to all configurations in your organization, use TF_VAR_...
environment variables to set these instead.
Actually there were 26 such warnings; I've only pasted one here, but you get the idea. It seems like there should be some way to solve this with a terragrunt generate block, but I'm not sure how. Any ideas?
I have been following the documentation here, which suggests a directory structure like:
live
├── prod
│   ├── app
│   │   └── terragrunt.hcl
│   ├── mysql
│   │   └── terragrunt.hcl
│   └── vpc
│       └── terragrunt.hcl
├── qa
│   ├── app
│   │   └── terragrunt.hcl
etc...
and
# content of qa/app/terragrunt.hcl
terraform {
  # Deploy version v0.0.3 in qa
  source = "git::git@github.com:foo/modules.git//app?ref=v0.0.3"
}

inputs = {
  # tfvars for qa
  instance_count = 3
  instance_type  = "t2.micro"
}
and
# content of prod/app/terragrunt.hcl
terraform {
  # Deploy version v0.0.3 in prod
  source = "git::git@github.com:foo/modules.git//app?ref=v0.0.3"
}

inputs = {
  # tfvars for prod
  instance_count = 20
  instance_type  = "t2.2xlarge"
}
The source could then be within the same git repo, i.e. just the app directory. You can then customize the app module per environment (and even pin different versions in different environments).
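To get per-environment values into dependent modules without declaring variables twice, the usual Terragrunt pattern (a sketch with illustrative file names) is to keep the values in an environment-level HCL file and merge them into inputs in the root, instead of passing tfvars via extra_arguments:

```hcl
# terraform/accounts/dev/account.hcl — illustrative env-level values
locals {
  aws_region = "us-east-1"
}

# terraform/terragrunt.hcl — merge env locals into every child's inputs
locals {
  account_vars = read_terragrunt_config(find_in_parent_folders("account.hcl"))
}

inputs = merge(
  local.account_vars.locals,
)
```

Terragrunt passes inputs to Terraform as TF_VAR_ environment variables, which only take effect for variables the module actually declares, so the "undeclared variable" warnings go away.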

Why is my terraform not working with minikube?

I am trying to launch pods using Terraform in minikube. While running terraform apply I am getting an error, "zip: not a valid zip file".
provider "kubernetes" {
  config_context_cluster = "minikube"
}

resource "kubernetes_pod" "echo" {
  metadata {
    name = "echo-example"
    labels {
      App = "echo"
    }
  }
  spec {
    container {
      image = "hashicorp/http-echo:0.2.1"
      name  = "example2"
      args  = ["-listen=:80", "-text='Hello World'"]
      port {
        container_port = 80
      }
    }
  }
}
There are a lot of similar cases; see for example this issue.
You need to move your individual .tf files into their own directories, and then you can point Terraform at a directory.
The plan command only accepts directories, and the apply command will only take an entire directory, or a plan output file (use -out on plan). I think this limitation is due to the fact that Terraform keeps a state file per configuration. Here is how I've set up my Terraform project; note that secrets.tfvars and terraform.tfvars are shared between both plans.
$ tree
.
├── 1-base
│   ├── provider.tf
│   ├── backend.tf
│   └── core.tf
├── 2-k8s
│   ├── 1-k8s.tf
│   ├── 2-helm.tf
│   ├── apps
│   ├── provider.tf
│   ├── backend.tf
│   ├── chart-builds
│   └── charts
├── secrets.tfvars
├── terraform.tfvars
└── todo.md
# From here you can run:
$ terraform init ./1-base
$ terraform plan -var-file=secrets.tfvars ./1-base

Using output in another module or resource

I've seen a good amount of posts that talk about passing a module's output into another module. For some reason I can't get this to work.
I can get the output of the module without any issues
$ terraform output
this_sg_id = sg-xxxxxxxxxxxxxxxxx
However, when I call the module in the resource or into another module, it asks me for the Security group ID.
$ terraform plan
var.vpc_security_group_ids
Security Group ID
Enter a value:
Here's my file structure:
├── dev
│   └── service
│       └── dev_instance
│           ├── main.tf
│           ├── outputs.tf
│           └── variables.tf
├── modules
│   ├── ec2
│   │   ├── build_ec2.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   └── sg
│       ├── build_sg.tf
│       ├── outputs.tf
│       └── variables.tf
Not sure if this is correct but in dev/service/dev_instance/main.tf:
module "build_sg" {
  source         = "../../../modules/sg/"
  vpc_id         = var.vpc_id
  sg_name        = var.sg_name
  sg_description = var.sg_description
  sg_tag         = var.sg_tag
  sg_tcp_ports   = var.sg_tcp_ports
  sg_tcp_cidrs   = var.sg_tcp_cidrs
  sg_udp_ports   = var.sg_udp_ports
  sg_udp_cidrs   = var.sg_udp_cidrs
  sg_all_ports   = var.sg_all_ports
  sg_all_cidrs   = var.sg_all_cidrs
}

module "build_ec2" {
  source                 = "../../../modules/ec2/"
  vpc_security_group_ids = ["${module.build_sg.this_sg_id}"]
}
In dev/service/dev_instance/output.tf:
output "this_sg_id" {
  description = "The security group ID"
  value       = "${module.build_sg.this_sg_id}"
}
My ec2 module build_ec2.tf file has the following:
resource "aws_instance" "ec2" {
  vpc_security_group_ids = ["${module.build_sg.this_sg_id}"]
}
You have a var "vpc_security_group_ids" defined somewhere, I assume in one of your variables.tf files. Terraform doesn't automatically know to fill that in with the output from a module. You need to remove the var definition and just use the module output reference in your template.
Variables are used to pass in values from the command line. They are not tied to module outputs in any way. If you expect values to come from a module you are using then you should not be also defining that value as a variable.
I also think you need to remove the var definition from your variables .tf file and use only the module output reference.
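Putting the answers together, a minimal sketch of the intended wiring (resource arguments besides the security groups are omitted): the ec2 module should consume a declared variable, and only the calling configuration should reference the sg module's output. A child module cannot reference a sibling module of its caller, so module.build_sg must not appear inside build_ec2.tf:

```hcl
# modules/ec2/variables.tf — the module declares what it needs
variable "vpc_security_group_ids" {
  description = "Security group IDs to attach"
  type        = list(string)
}

# modules/ec2/build_ec2.tf — use the variable, not module.build_sg
resource "aws_instance" "ec2" {
  # ami / instance_type etc. omitted in this sketch
  vpc_security_group_ids = var.vpc_security_group_ids
}

# dev/service/dev_instance/main.tf — wire the output to the input
module "build_ec2" {
  source                 = "../../../modules/ec2/"
  vpc_security_group_ids = [module.build_sg.this_sg_id]
}
```

Then remove any vpc_security_group_ids variable from dev_instance/variables.tf so Terraform stops prompting for a value.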
