I have this skeleton for two Terraform modules I'm building: api-gateway and lambda. This is the file structure:
.
├── modules
│   ├── api-gateway
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   └── lambda
│       ├── main.tf
│       ├── outputs.tf
│       ├── policies
│       │   └── lambda-role.json
│       └── variables.tf
├── main.tf
├── provider.tf
├── sandbox-environment.tfvars
└── variables.tf
The (excerpt) content of modules/api-gateway/main.tf is:
resource "aws_api_gateway_integration" "lambda_root" {
  ...
  uri = "${aws_lambda_function.fn_name.invoke_arn}"
}

resource "aws_api_gateway_integration" "lambda" {
  ...
  uri = "${aws_lambda_function.fn_name.invoke_arn}"
}

module "lambda" {
  source = "../lambda"
}
The (excerpt) content of modules/lambda/main.tf is:
resource "aws_lambda_function" "fn_name" {
  filename = "${data.archive_file.fn_name.output_path}"
  ...
  runtime = "java8"
}
The problem is I can't read the value ${aws_lambda_function.fn_name.invoke_arn} in modules/api-gateway/main.tf:
$ terraform init
Initializing modules...
- module.pipeline
Error: resource 'aws_api_gateway_integration.lambda_root' config: unknown resource 'aws_lambda_function.fn_name' referenced in variable aws_lambda_function.fn_name.invoke_arn
Error: resource 'aws_api_gateway_integration.lambda' config: unknown resource 'aws_lambda_function.fn_name' referenced in variable aws_lambda_function.fn_name.invoke_arn
Is there a way to "export" that value from the lambda module so it can be read in modules/api-gateway/main.tf?
You would need to add an output variable in modules/lambda/outputs.tf.
output "lambda_invoke_arn" {
  value = "${aws_lambda_function.fn_name.invoke_arn}"
}
Then, in modules/api-gateway/main.tf, you can reference the output of the lambda module:
resource "aws_api_gateway_integration" "lambda" {
  ...
  uri = "${module.lambda.lambda_invoke_arn}"
}
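An alternative wiring, if you prefer to instantiate both modules from the root module rather than nesting lambda inside api-gateway, is to pass the ARN down as an input variable. A minimal sketch (the variable name and module labels here are assumptions, not from the original code):

```hcl
# modules/api-gateway/variables.tf
variable "lambda_invoke_arn" {
  type = string
}

# modules/api-gateway/main.tf
resource "aws_api_gateway_integration" "lambda" {
  # ...
  uri = var.lambda_invoke_arn
}

# root main.tf: wire the lambda module's output into api-gateway
module "lambda" {
  source = "./modules/lambda"
}

module "api_gateway" {
  source            = "./modules/api-gateway"
  lambda_invoke_arn = module.lambda.lambda_invoke_arn
}
```

This keeps each module self-contained: api-gateway only knows it needs an invoke ARN, not where it comes from.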
Related
I have a Terraform infrastructure that is divided into "parts" that looks something like this.
.
├── network
│   ├── locals.tf
│   ├── main.tf
│   ├── outputs.tf
│   └── variables.tf
├── ecs
│   ├── locals.tf
│   ├── main.tf
│   ├── outputs.tf
│   └── variables.tf
└── sqs
    ├── locals.tf
    ├── main.tf
    ├── output.tf
    └── variables.tf
In SQS, I'm creating a programmatic user with aws_iam_user and aws_iam_access_key.
resource "aws_iam_user" "sqs_write" {
  name = "sqs-queue-name-read"
  path = "/system/"
}

resource "aws_iam_access_key" "sqs_write" {
  user    = aws_iam_user.sqs_write.name
  pgp_key = local.settings.gpg_public_key
}
Now I need to be able to use aws_iam_access_key.sqs_write.secret in my ECS division.
I tried sending the secret to an "output" and using it with data.terraform_remote_state in my ECS division, but Terraform says the output does not exist (most likely because it is marked as sensitive = true).
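The remote-state lookup on the ECS side looked roughly like this (bucket and key names are placeholders, not the real ones):

```hcl
data "terraform_remote_state" "sqs" {
  backend = "s3"

  config = {
    bucket = "my-state-bucket"       # hypothetical bucket name
    key    = "sqs/terraform.tfstate" # hypothetical state key
    region = "us-east-1"
  }
}

# The secret would then be referenced as:
# data.terraform_remote_state.sqs.outputs.sqs_write_secret
```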
I tried to save the aws_iam_access_key.sqs_write.secret to a SSM parameter with:
resource "aws_ssm_parameter" "write_secret" {
  name        = "sqs-queue-name-write-secret-access-key"
  description = "SQS write secret access key"
  key_id      = "aws/secretsmanager"
  type        = "String"
  value       = aws_iam_access_key.sqs_write.secret
  overwrite   = true
}
But I get this error:
╷
│ Error: Missing required argument
│
│ with aws_ssm_parameter.write_secret,
│ on main.tf line 109, in resource "aws_ssm_parameter" "write_secret":
│ 109: value = aws_iam_access_key.sqs_write.secret
│
│ The argument "value" is required, but no definition was found.
╵
So I can't seem to find a way to use the "secret" value outside of my SQS division. I could use the "encrypted_secret" attribute instead, which works fine, but I don't know how I could decrypt it directly from Terraform, so I guess it is not an option.
Any thoughts?
My version is:
Terraform v1.0.2 on linux_amd64
provider registry.terraform.io/hashicorp/aws v3.52.0
provider registry.terraform.io/hashicorp/http v2.1.0
I am trying to launch pods using Terraform in minikube. While running terraform apply, I am getting the error "zip: not a valid zip file".
provider "kubernetes" {
  config_context_cluster = "minikube"
}

resource "kubernetes_pod" "echo" {
  metadata {
    name = "echo-example"

    labels {
      App = "echo"
    }
  }

  spec {
    container {
      image = "hashicorp/http-echo:0.2.1"
      name  = "example2"
      args  = ["-listen=:80", "-text='Hello World'"]

      port {
        container_port = 80
      }
    }
  }
}
There are a lot of similar cases; see, for example, this issue.
You need to move your individual .tf files into their own directories, and then point Terraform at a directory.
The plan command only accepts directories, and the apply command will only take an entire directory or a plan output file (use -out on plan). I think this limitation exists because Terraform requires a state file for each plan. Here is how I've set up my Terraform project; note that secrets.tfvars and terraform.tfvars are shared between both plans.
$ tree
.
├── 1-base
│   ├── provider.tf
│   ├── backend.tf
│   └── core.tf
├── 2-k8s
│   ├── 1-k8s.tf
│   ├── 2-helm.tf
│   ├── apps
│   ├── provider.tf
│   ├── backend.tf
│   ├── chart-builds
│   └── charts
├── secrets.tfvars
├── terraform.tfvars
└── todo.md
From here you can run:
$ terraform init -var-file=secrets.tfvars ./1-base
$ terraform plan -var-file=secrets.tfvars ./1-base
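The plan-file workflow mentioned above (using -out) would look something like this from the project root; a sketch, with a hypothetical plan file name:

```shell
$ terraform plan -var-file=secrets.tfvars -out=base.tfplan ./1-base
$ terraform apply base.tfplan
```

Applying the saved plan file guarantees that exactly the reviewed plan is executed, with no re-planning in between.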
I used to have (working) map variables in terraform, but after upgrading to terraform 0.12 I keep getting errors of the form:
Error: Invalid value for module argument
on main.tf line 84, in module "gke":
84: gke_label = "var.gke_label"
The given value is not suitable for child module variable "gke_label" defined
at gke/variables.tf:40,1-19: map of any single type required.
I don't understand how to upgrade these map variables. Documentation on this is not particularly clear (to me).
My set-up is as follows:
I have a terraform folder structure:
├── infrastructure
│   ├── backend
│   │   ├── subnet
│   │   │   ├── main.tf
│   │   │   ├── outputs.tf
│   │   │   └── variables.tf
│   │   └── vpc
│   │       ├── main.tf
│   │       └── outputs.tf
│   ├── backend.tf
│   ├── backend.tfvars
│   ├── gke
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   ├── main.tf
│   ├── outputs.tf
│   ├── variables.tf
│   └── versions.tf
within main.tf I had / have (among others):
module "gke" {
  source                = "./gke"
  region                = "var.region"
  min_master_version    = "var.min_master_version"
  node_version          = "var.node_version"
  gke_num_nodes         = "var.gke_num_nodes" # [MAP VARIABLE]
  vpc_name              = "module.vpc.vpc_name"
  subnet_name           = "module.subnet.subnet_name"
  gke_master_user       = "var.gke_master_user"
  gke_master_pass       = "var.gke_master_pass"
  gke_node_machine_type = "var.gke_node_machine_type"
  gke_label             = "var.gke_label" # [MAP VARIABLE]
}
and in variables.tf (among others)
variable "gke_label" {
  default = {
    prod = "prod"
    dev  = "dev"
  }
}

variable "gke_num_nodes" {
  default = {
    prod = 2
    dev  = 1
  }
  description = "Number of nodes in each GKE cluster zone"
}
within gke/variables.tf I had:
variable "gke_num_nodes" {
  type        = map
  description = "Number of nodes in each GKE cluster zone"
}

variable "gke_label" {
  type        = map
  description = "label"
}
This used to work fine, but with the upgrade to terraform 0.12 this results in:
Error: Invalid value for module argument
on main.tf line 78, in module "gke":
78: gke_num_nodes = "var.gke_num_nodes"
The given value is not suitable for child module variable "gke_num_nodes"
defined at gke/variables.tf:15,1-25: map of any single type required.
Error: Invalid value for module argument
on main.tf line 84, in module "gke":
84: gke_label = "var.gke_label"
The given value is not suitable for child module variable "gke_label" defined
at gke/variables.tf:40,1-19: map of any single type required.
I changed this in gke/variables.tf (and the same for gke_num_nodes):
variable "gke_label" {
  type        = map(any)
  description = "label"
}
but the error remains
Error: Invalid value for module argument
on main.tf line 84, in module "gke":
84: gke_label = "var.gke_label"
The given value is not suitable for child module variable "gke_label" defined
at gke/variables.tf:40,1-19: map of any single type required.
How do I update these map variables to terraform 0.12?
This Terraform 0.12 code will assign the value as expected (not a literal string):
gke_num_nodes = var.gke_num_nodes
In either Terraform 0.11.x or Terraform 0.12, if you use quotes around your variable assignments without interpolation, they will be treated as strings.
gke_num_nodes = "var.gke_num_nodes"
The code above will assign the literal string "var.gke_num_nodes" to gke_num_nodes in the module, instead of assigning the value of var.gke_num_nodes as you intend. Since a string is not assignable to map(any), Terraform outputs the type error you presented:
Error: Invalid value for module argument
on main.tf line 78, in module "gke":
78: gke_num_nodes = "var.gke_num_nodes"
In Terraform 0.11.x and earlier, you would use string interpolation with ${} to get the value of a variable:
gke_num_nodes = "${var.gke_num_nodes}"
That kind of expression is deprecated in Terraform 0.12, but will still work in most cases. Do not use string interpolation in Terraform 0.12 unless you are building a string from multiple variables.
You leapt halfway to Terraform 0.12 by removing the ${}. Close the remaining gap by removing the quotes, so your variable assignments work as expected:
gke_num_nodes = var.gke_num_nodes
Here is the entire module block, corrected to remove the quotes:
module "gke" {
  source                = "./gke"
  region                = var.region
  min_master_version    = var.min_master_version
  node_version          = var.node_version
  gke_num_nodes         = var.gke_num_nodes # [MAP VARIABLE]
  vpc_name              = module.vpc.vpc_name
  subnet_name           = module.subnet.subnet_name
  gke_master_user       = var.gke_master_user
  gke_master_pass       = var.gke_master_pass
  gke_node_machine_type = var.gke_node_machine_type
  gke_label             = var.gke_label # [MAP VARIABLE]
}
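If you also want the child module to reject maps with mixed element types early, you can tighten map(any) to a concrete element type. A small sketch using the variable names from the question (the specific element types are an assumption based on the defaults shown):

```hcl
variable "gke_label" {
  type        = map(string)
  description = "label"
}

variable "gke_num_nodes" {
  type        = map(number)
  description = "Number of nodes in each GKE cluster zone"
}
```

With map(any), Terraform only requires that all elements share a single type; map(string) or map(number) pins down which type that must be.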
I've seen a good number of posts that talk about passing a module's output into another module. For some reason I can't get this to work.
I can get the output of the module without any issues
$ terraform output
this_sg_id = sg-xxxxxxxxxxxxxxxxx
However, when I reference the module's output in a resource or in another module, Terraform prompts me for the security group ID.
$ terraform plan
var.vpc_security_group_ids
Security Group ID
Enter a value:
Here's my file structure:
.
├── dev
│   └── service
│       └── dev_instance
│           ├── main.tf
│           ├── outputs.tf
│           └── variables.tf
├── modules
│   ├── ec2
│   │   ├── build_ec2.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   └── sg
│       ├── build_sg.tf
│       ├── outputs.tf
│       └── variables.tf
Not sure if this is correct, but in dev/service/dev_instance/main.tf:
module "build_sg" {
  source         = "../../../modules/sg/"
  vpc_id         = var.vpc_id
  sg_name        = var.sg_name
  sg_description = var.sg_description
  sg_tag         = var.sg_tag
  sg_tcp_ports   = var.sg_tcp_ports
  sg_tcp_cidrs   = var.sg_tcp_cidrs
  sg_udp_ports   = var.sg_udp_ports
  sg_udp_cidrs   = var.sg_udp_cidrs
  sg_all_ports   = var.sg_all_ports
  sg_all_cidrs   = var.sg_all_cidrs
}

module "build_ec2" {
  source                 = "../../../modules/ec2/"
  vpc_security_group_ids = ["${module.build_sg.this_sg_id}"]
}
In dev/service/dev_instance/output.tf:
output "this_sg_id" {
  description = "The security group ID"
  value       = "${module.build_sg.this_sg_id}"
}
My ec2 module build_ec2.tf file has the following:
resource "aws_instance" "ec2" {
  vpc_security_group_ids = ["${module.build_sg.this_sg_id}"]
}
You have a var "vpc_security_group_ids" defined somewhere, presumably in one of your variables.tf files. Terraform doesn't automatically know to fill that in with the output from a module. You need to remove the var definition and just use the module output reference in your template.
Variables are used to pass in values from the command line; they are not tied to module outputs in any way. If you expect a value to come from a module you are using, then you should not also be defining that value as a variable.
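Note also that build_ec2.tf lives inside the ec2 module, so it cannot reference module.build_sg directly; the ID has to come in through the ec2 module's own input variable. A sketch of that wiring (the list(string) type is an assumption):

```hcl
# modules/ec2/variables.tf
variable "vpc_security_group_ids" {
  type = list(string)
}

# modules/ec2/build_ec2.tf: use the module's own input, not module.build_sg
resource "aws_instance" "ec2" {
  # ...
  vpc_security_group_ids = var.vpc_security_group_ids
}

# dev/service/dev_instance/main.tf: pass the sg module's output down
module "build_ec2" {
  source                 = "../../../modules/ec2/"
  vpc_security_group_ids = [module.build_sg.this_sg_id]
}
```

With this shape, the variable that prompted for input at plan time is no longer declared in the root configuration, so Terraform stops asking for it.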
I also think you need to remove the var definition from your variables.tf file and use only the module output reference.
I am currently using the default workspace, and my folder structure is like this:
dev
├── app
│   └── main.tf
├── mysql
│   └── main.tf
└── vpc
    └── main.tf
I have an S3 backend created, and it works fine for a single folder:
terraform {
  backend "s3" {
    bucket         = "mybucket"
    key            = "global/s3/mykey/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-wellness-nonprod"
    encrypt        = true
  }
}
I am struggling with how to include this backend config in all the folders. I want to use the same backend S3 bucket in app, mysql, and vpc (with different state keys for each), but while this works in one folder, in the second folder Terraform wants to delete both the S3 bucket and the DynamoDB table.
I recommend you use a module structure in your Terraform code, like this:
dev
├── modules
│   ├── app
│   │   └── app.tf
│   ├── mysql
│   │   └── mysql.tf
│   └── vpc
│       └── vpc.tf
└── main.tf
main.tf:
module "app" {
  source = "./modules/app"
  ...
}

module "mysql" {
  source = "./modules/mysql"
  ...
}

module "vpc" {
  source = "./modules/vpc"
  ...
}

terraform {
  backend "s3" {
    ...
  }
}
If you want to apply/destroy each module:
terraform apply -target module.app
terraform destroy -target module.app
Here's a repository using module structure.