I created a Lambda layer using this Terraform configuration:
resource "aws_lambda_layer_version" "lambda_common_layer" {
layer_name = "lambda_common_layer"
s3_bucket = "${aws_s3_bucket_object.object_lambda_common_layer.bucket}"
s3_key = "${aws_s3_bucket_object.object_lambda_common_layer.key}"
s3_object_version = "${aws_s3_bucket_object.object_lambda_common_layer.version_id}"
source_code_hash = "${data.archive_file.layer_zip_lambda_common_layer.output_base64sha256}"
description = "Common layer providing logging"
compatible_runtimes = ["python3.6"]
}
I also have a Lambda definition in which I want to use the layer. To do that I need to pass a list of ARNs, but I don't know how to get the ARN of an existing Lambda layer. These are different projects with separate Terraform scripts.
How can I do that?
Here is my Lambda declaration. I tried declaring the layer with only its name as a resource, but it gets highlighted as an error:
resource "aws_lambda_layer_version" "lambda_common_layer" {
layer_name = "lambda_common_layer"
}
...
layers = ["${aws_lambda_layer_version.lambda_common_layer.layer_arn}"]
You will need to pull the remote state from the project where the Lambda layer resource was declared, via a data source.
The easiest way to do this, in my mind, is to use a remote state backend to store the Terraform state of each project so that you can reference it from other projects. In my experience the easiest remote state to set up is the S3 remote state.
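For example, a minimal S3 backend block in the layer project might look like this (the bucket name and key are placeholders; the state bucket itself must already exist):

terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket" # hypothetical state bucket
    key    = "lambda-layer/terraform.tfstate"
    region = "us-east-1"
  }
}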
With that done, you can then include an outputs.tf file in your lambda layer project with something like:
output "lambda_layer_arn" {
value = "${aws_lambda_layer_version.lambda_common_layer.arn}"
}
output "lambda_layer_version_arn" {
value = "${aws_lambda_layer_version.lambda_common_layer.layer_arn}"
}
By outputting those values, they will be available in the remote state for other terraform modules to use.
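In the consuming project you can then read those outputs through a terraform_remote_state data source; a sketch, using Terraform 0.12+ syntax and the placeholder bucket/key from above (note that the layers argument of aws_lambda_function expects the versioned ARN):

data "terraform_remote_state" "layer" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state-bucket" # hypothetical; the bucket the layer project writes its state to
    key    = "lambda-layer/terraform.tfstate"
    region = "us-east-1"
  }
}

resource "aws_lambda_function" "example" {
  # ...
  layers = [data.terraform_remote_state.layer.outputs.lambda_layer_version_arn]
}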
I am using the terraform_remote_state data source below to import shared state from S3. Terraform is giving me the error "No stored state was found for the given workspace in the given backend". I expected Terraform to pick up the workspace "dev-use1", as I have set the workspace using terraform workspace select dev-use1.
data "terraform_remote_state" "shared_jobs_state" {
backend = "s3"
config = {
bucket = "cicd-backend"
key = "analyticsjobs.tfstate"
workspace_key_prefix = "pipeline/v2/db"
region = "us-east-1"
}
}
Version: Terraform v1.1.9 on darwin_arm64
After enabling debug logging in Terraform by setting TF_LOG="DEBUG", I can see that the S3 API call returns a 404 error, and from the request XML I can see that the prefix is wrong.
As a workaround I made the changes below to the data source.
I am not sure this is the recommended way of doing it, but it works. The docs are not very clear on this: https://www.terraform.io/language/state/remote-state-data
data "terraform_remote_state" "shared_jobs_state" {
backend = "s3"
config = {
bucket = "cicd-backend"
key = "pipeline/v2/db/${terraform.workspace}/analyticsjobs.tfstate"
region = "us-east-1"
}
}
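Note that the terraform_remote_state data source also accepts a top-level workspace argument (outside of config). Setting it, rather than baking the prefix into key, may be the cleaner fix; a sketch based on that documented argument:

data "terraform_remote_state" "shared_jobs_state" {
  backend   = "s3"
  workspace = terraform.workspace # follow the caller's current workspace, e.g. "dev-use1"

  config = {
    bucket               = "cicd-backend"
    key                  = "analyticsjobs.tfstate"
    workspace_key_prefix = "pipeline/v2/db"
    region               = "us-east-1"
  }
}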
I have the following code:
terraform {
  backend "s3" {
    bucket         = "my-sandbox-terraform-state"
    key            = "dev/iac/terraform.tfstate"
    region         = "us-east-1"
    profile        = "sandbox"
    dynamodb_table = "sandbox-dev-terraform-locks"
    encrypt        = "true"
  }
}
I want to be able to use the value "my-sandbox-terraform-state" from the bucket attribute, like:
locals {
  my_bucket = terraform.s3.bucket
}
Is there a way to access the values defined in the terraform backend block as read-only variables?
A backend block cannot refer to named values (like input variables, locals, or data source attributes), and its values are not exposed back to the configuration either. You can use a partial configuration and then pass the values in using the -backend-config CLI argument:
https://developer.hashicorp.com/terraform/language/settings/backends/configuration#partial-configuration
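For example, a sketch of a partial configuration, assuming you keep the shared values in a separate file named backend.hcl (a hypothetical name):

# backend.tf -- only the values that never change stay in the block
terraform {
  backend "s3" {
    key = "dev/iac/terraform.tfstate"
  }
}

# backend.hcl -- the rest is supplied at init time
bucket         = "my-sandbox-terraform-state"
region         = "us-east-1"
profile        = "sandbox"
dynamodb_table = "sandbox-dev-terraform-locks"
encrypt        = true

Then initialize with terraform init -backend-config=backend.hcl. Since the bucket name now lives outside the configuration, you can feed the same value into an input variable (e.g. via a tfvars file) if other resources need it.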
(directory structure omitted)
I am using S3 as the remote state backend and a DynamoDB table for locking.
Both platform1 and platform2 use shared infrastructure from the shared platform.
If I try to create platform1 first, it fails because the dependencies in shared haven't been created yet; the same goes for platform2. But if I create the shared platform first and then platform1 and platform2, all the infrastructure builds without issues.
Is this correct?
How can I build the shared environment first when trying to build one of the platform environments?
I have tried creating the shared environment first.
The root terragrunt.hcl file, i.e. the one under the tst1 folder:
# Configure Terragrunt to automatically store tfstate files in an S3 bucket
remote_state {
  backend = "s3"
  config = {
    encrypt        = true
    bucket         = "automation-terraform-state"
    key            = "tst1/${path_relative_to_include()}/terraform.tfstate"
    region         = "ap-southeast-2"
    dynamodb_table = "tst-terraform-locks"
  }
}

# Configure root level variables that all resources can inherit. This is especially helpful
# with multi-account configs where terraform_remote_state data sources are placed directly
# into the modules.
inputs = {
  aws_region = "ap-southeast-2"
  ami_id     = "ami-0aa5848a455c3ec32"
  vpc_id     = "vpc-7e49e81a"
}
The terragrunt.hcl inside platform1:
terraform {
  source = "git::git@github.com:acme/infrastructure-modules.git//application_lb"
}

# Include all settings from the root terragrunt.hcl file
include {
  path = find_in_parent_folders()
}

inputs = {
  ...
}
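One way to express this ordering, assuming a Terragrunt version that supports them, is a dependencies block in each platform's terragrunt.hcl, so Terragrunt knows to apply shared first:

# terragrunt.hcl inside platform1 -- the relative path is hypothetical and depends on your layout
dependencies {
  paths = ["../shared"]
}

With that in place, running terragrunt run-all apply (apply-all on older Terragrunt versions) from the tst1 folder applies shared before platform1 and platform2.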
I have App1, App2, App3, etc. To reuse code, I want to create them using a Terraform module.
The common infrastructure called by the modules is:
root\Common_infra\main.tf:
resource "aws_lambda_function" "app" {
# count = “${var.should_launch}”
function_name = "app"
…
}
resource "aws_cloudwatch_event_rule" "app" {
name = " ${var.app_name } "
schedule_expression = "${var.app_schedule}"
}
resource "aws_cloudwatch_event_target" "app_target" {
rule = "${aws_cloudwatch_event_rule.app.name}"
arn = "${aws_lambda_function.app.arn}"
input = <<EOF
{
"app_name": "${var.app_name}"
}
EOF
}
resource "aws_lambda_permission" "allow_cloudwatch_to_call_lambda" {
action = "lambda:InvokeFunction"
function_name = "${aws_lambda_function.app.function_name}"
principal = "events.amazonaws.com"
source_arn = "${aws_cloudwatch_event_rule.app.arn}"
}
# Other resources each app need to create.
The module for app1 is as follows:
root\app1\main.tf:
module "app1" {
# should_launch = 1
source = "../common_infra"
app_name = "app1"
schedule = "cron(01 01 ? * * *)"
……
}
Using the module, I have successfully launched a CloudWatch event rule which triggers the Lambda on schedule, and I have successfully launched the Lambda called "app". The Lambda gets app_name = app1 as input and then works on app1.
When I create another app2 as follows,
root\app2\main.tf:
module "app2" {
# should_launch = 0
source = "../common_infra"
app_name = "app2"
schedule = "cron(01 01 ? * * *)"
……
}
it tries to create another Lambda, but fails because the Lambda has already been created by the app1 module. In fact I do not want to create a new Lambda, because it is unnecessary to create multiple Lambdas for app1, app2, and so on; I can use the app_name input to control what the Lambda should do.
I tried to use should_launch (see the commented lines above) to have the Lambda created only when app1 is created, but it does not work. When deploying app1, the Lambda is created; when creating app2, Terraform complains:
aws_cloudwatch_event_target.app_target
cannot find
arn = "${aws_lambda_function.app.arn}"
My question is: how do I organize the layout/structure of my code so that the Lambda resource is declared only once for multiple modules? For example, maybe I should create a new folder root/resource_called_by_all_module/lambda.tf and deploy this new folder in advance?
Yeah, so as you have pointed out, in the current setup all your modules will try to create all the resources in that file individually. Because the name of your Lambda is hard-coded, Terraform will rightly complain.
If you have a resource that other things depend on like that, you can rearrange your Terraform so that resource is created in a separate terraform apply.
So you could take that Lambda resource out of those files and place it in a separate folder. Then you simply run terraform apply for the shared resources first (the 'app' Lambda) before the resources that depend on them ('app1', 'app2').
Once the shared resources are created, you can retrieve the details you need from them by using Terraform Data Sources (usually to get names or ARNs).
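For example, a sketch of that lookup from inside the app1/app2 configurations, assuming the shared Lambda keeps the hard-coded name "app":

# Look up the Lambda that was created by the separate, shared apply
data "aws_lambda_function" "app" {
  function_name = "app"
}

resource "aws_cloudwatch_event_target" "app_target" {
  rule = "${aws_cloudwatch_event_rule.app.name}"
  arn  = "${data.aws_lambda_function.app.arn}" # use qualified_arn instead if you need the versioned ARN
  # ...
}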
Is there any way I can use a Terraform template's output as an input to another Terraform template?
For example: I have a Terraform template which creates an ELB, and I have another Terraform template which is going to create an Auto Scaling group that needs the ELB information as an input variable.
I know I can use a shell script to grep and feed in the ELB information, but I'm looking for a Terraform way of doing this.
Have you tried using remote state to populate your second template?
Declare it like this:
resource "terraform_remote_state" "your_state" {
backend = "s3"
config {
bucket = "${var.your_bucket}"
region = "${var.your_region}"
key = "${var.your_state_file}"
}
}
And then you should be able to pull out your resource directly like this:
your_elb = "${terraform_remote_state.your_state.output.your_output_resource}"
If this doesn't work for you, have you tried implementing your ELB in a module and then just using the output?
https://github.com/terraform-community-modules/tf_aws_elb is a good example of how to structure the module.
Looks like in newer versions of Terraform you'd access the output variable like this:
your_elb = "${data.terraform_remote_state.your_state.your_output_resource}"
All the rest is the same, just how you reference it changes.
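And on Terraform 0.12 and later, remote state outputs moved one level deeper, under an outputs attribute:

your_elb = data.terraform_remote_state.your_state.outputs.your_output_resource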
The question is about an ELB, but I will give an example with S3, since it is less to write.
If you don't know how to store terraform state on AWS, read the article.
Let's suppose you have two independent projects: project-1 and project-2. They are located in two different directories (two different repositories)!
Terraform file /tmp/project-1/main.tf:
// Create an S3 bucket
resource "aws_s3_bucket" "main_bucket" {
  bucket = "my-epic-test-b1"
  acl    = "private"
}

// Output. It will be available in s3://multi-terraform-project-state-bucket/p1.tfstate
output "bucket_name_p1" {
  value = aws_s3_bucket.main_bucket.bucket
}

// Store terraform state on AWS. The S3 bucket and DynamoDB table should be created before running terraform
terraform {
  backend "s3" {
    bucket         = "multi-terraform-project-state-bucket"
    key            = "p1.tfstate"
    dynamodb_table = "multi-terraform-project-state-table"
    region         = "eu-central-1" // AWS region of state resources
  }
}

provider "aws" {
  profile = "my-cli-profile" // User profile defined in ~/.aws/credentials
  region  = "eu-central-1"   // AWS region
}
You run terraform init, and terraform apply.
After that you move to the terraform file /tmp/project-2/main.tf:
// Create an S3 bucket
resource "aws_s3_bucket" "main_bucket" {
  bucket = "my-epic-test-b2"
  acl    = "private"

  tags = {
    // Get the S3 bucket name from another terraform state file. In this case it is s3://multi-terraform-project-state-bucket/p1.tfstate
    p1-bucket = data.terraform_remote_state.state1.outputs.bucket_name_p1
  }
}

// Get data from another state file
data "terraform_remote_state" "state1" {
  backend = "s3"
  config = {
    bucket = "multi-terraform-project-state-bucket"
    key    = "p1.tfstate"
    region = "eu-central-1"
  }
}

// Store terraform state on AWS. The S3 bucket and DynamoDB table should be created before running terraform
terraform {
  backend "s3" {
    bucket         = "multi-terraform-project-state-bucket"
    key            = "p2.tfstate"
    dynamodb_table = "multi-terraform-project-state-table"
    region         = "eu-central-1" // AWS region of state resources
  }
}

provider "aws" {
  profile = "my-cli-profile" // User profile defined in ~/.aws/credentials
  region  = "eu-central-1"   // AWS region
}
You run terraform init, and terraform apply.
Now check the tags on my-epic-test-b2. You will find the name of the bucket from project-1 there.
When you are integrating Terraform with Jenkins, you can simply define a variable in the Jenkinsfile you are creating. Suppose you want to launch an EC2 instance using Terraform and a Jenkinsfile, and you need the public IP address of the instance. You can use this command inside your Jenkinsfile:
script {
    // Strip the surrounding quotes from the Terraform output and keep only the IP
    def public_ip = sh(script: 'terraform output public_ip | cut -d \'"\' -f2', returnStdout: true).trim()
}
This gives you proper formatting and saves only the IP address in the public_ip variable. For this to work, you have to define an output block in the Terraform script that outputs the public IP.
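For example, a minimal output block, assuming the instance resource is named aws_instance.example (a hypothetical name):

# Expose the instance's public IP so `terraform output public_ip` can read it
output "public_ip" {
  value = aws_instance.example.public_ip
}

On Terraform 0.15 and later you can also use terraform output -raw public_ip, which prints the bare value and makes the cut unnecessary.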