I have Terraform that looks like:
terraform {
  backend "s3" {
    region         = "ap-southeast-1"
    key            = "01-service-quota-state.json"
    bucket         = "foobar-dev-infra-tf-state"
    dynamodb_table = "foobar-dev-infra-tf-state-lock"
  }
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}
We use a Makefile to initialise the backend, so across a large Terraform repository I want to reduce and refactor the above to:
terraform {
  backend "s3" {}
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}
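With the block emptied out like this, the Makefile's init target has to supply the removed settings, e.g. via -backend-config key/value pairs (values taken from the original block; the exact Makefile wiring is assumed):

terraform init \
  -backend-config="region=ap-southeast-1" \
  -backend-config="key=01-service-quota-state.json" \
  -backend-config="bucket=foobar-dev-infra-tf-state" \
  -backend-config="dynamodb_table=foobar-dev-infra-tf-state-lock"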
Initially I was planning to do this with sed, though it was hinted that I could do some sort of .tf -> JSON -> jq -> .tf transformation. Is that right?
Maybe you can write a small Node script using hcl2-parser with its parseToString method to transform the HCL to JSON.
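A minimal sketch of that idea, assuming the hcl2-parser npm package and its parseToString function (which, as I recall, returns the parsed configuration as a JSON string). Note this only covers the HCL -> JSON half; you would still need a separate JSON -> HCL step after jq:

// hcl-to-json.js -- sketch only
const fs = require("fs");
const hcl = require("hcl2-parser"); // assumption: the npm package hcl2-parser

const src = fs.readFileSync(process.argv[2], "utf8");
// parseToString parses the HCL source and returns it as a JSON string,
// which can then be piped through jq
console.log(hcl.parseToString(src));

Usage would be something like: node hcl-to-json.js main.tf | jq '.terraform'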
That turned out to be a wrong answer; to solve this problem I ended up using a VS Code search-and-replace regex instead: searching for backend "s3" \{([\n\s\S]*?)\} and replacing with backend "s3" {}.
https://github.com/BurntSushi/ripgrep/discussions/2334#discussioncomment-3923925
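For the record, a roughly equivalent command-line version of that replacement (assuming one backend block per file and no nested braces inside it, which holds for the snippets above) would be:

perl -0777 -pi -e 's/backend "s3" \{[\s\S]*?\}/backend "s3" {}/' *.tf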
I am using the below Terraform data source to import shared state from S3. Terraform is giving me the error "No stored state was found for the given workspace in the given backend". I am expecting Terraform to pick up the workspace "dev-use1", as I have set the workspace using terraform workspace select dev-use1.
data "terraform_remote_state" "shared_jobs_state" {
backend = "s3"
config = {
bucket = "cicd-backend"
key = "analyticsjobs.tfstate"
workspace_key_prefix = "pipeline/v2/db"
region = "us-east-1"
}
}
Version: Terraform v1.1.9 on darwin_arm64
After enabling debug logging in Terraform by setting TF_LOG="DEBUG", I can see that the S3 API call is returning a 404 error, and from the request XML I can see that the prefix is wrong.
As a workaround I have made the changes below to the data source. I am not sure this is the recommended way of doing it, but it works; the docs are not very clear on this point: https://www.terraform.io/language/state/remote-state-data
data "terraform_remote_state" "shared_jobs_state" {
backend = "s3"
config = {
bucket = "cicd-backend"
key = "pipeline/v2/db/${terraform.workspace}/analyticsjobs.tfstate"
region = "us-east-1"
}
}
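An alternative that may avoid hard-coding the prefix into the key is the data source's top-level workspace argument, which terraform_remote_state supports (untested against this exact setup):

data "terraform_remote_state" "shared_jobs_state" {
  backend   = "s3"
  workspace = "dev-use1" # select the workspace explicitly rather than relying on the CLI-selected one
  config = {
    bucket               = "cicd-backend"
    key                  = "analyticsjobs.tfstate"
    workspace_key_prefix = "pipeline/v2/db"
    region               = "us-east-1"
  }
}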
I have the following code:
terraform {
  backend "s3" {
    bucket         = "my-sandbox-terraform-state"
    key            = "dev/iac/terraform.tfstate"
    region         = "us-east-1"
    profile        = "sandbox"
    dynamodb_table = "sandbox-dev-terraform-locks"
    encrypt        = "true"
  }
}
I want to be able to use the value "my-sandbox-terraform-state" from the bucket attribute, like:
locals {
  my_bucket = terraform.s3.bucket
}
Is there a way to access the values defined in the terraform backend block as read-only variables?
No. A backend block cannot refer to named values (like input variables, locals, or data source attributes), and its settings are not exposed to the rest of the configuration either. What you can do is use a partial configuration and then pass the values in using the -backend-config CLI argument:
https://developer.hashicorp.com/terraform/language/settings/backends/configuration#partial-configuration
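For example, with an empty backend "s3" {} block you can keep the settings in a separate file (file name illustrative, values from the block above) and pass it at init time:

# backend.hcl
bucket         = "my-sandbox-terraform-state"
key            = "dev/iac/terraform.tfstate"
region         = "us-east-1"
profile        = "sandbox"
dynamodb_table = "sandbox-dev-terraform-locks"
encrypt        = true

terraform init -backend-config=backend.hcl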
We upgraded to using the integrations/github provider source, and ever since we have started getting a 404 when attempting to create a github_repository_webhook with Terraform. I believe we have all of the necessary pieces required based on the docs, but the API URI in the logs is missing the org. NOTE: real org and repo names have been redacted.
main.tf
resource "aws_codepipeline_webhook" "codepipeline_webhook" {
name = "test-github-webhook"
authentication = "GITHUB_HMAC"
target_action = "CC"
target_pipeline = aws_codepipeline.pipeline.name
authentication_configuration {
secret_token = data.aws_secretsmanager_secret_version.github_token.secret_string
}
filter {
json_path = "$.ref"
match_equals = "refs/heads/{Branch}"
}
tags = merge(var.tags, {
Name = "test-github-webhook"
})
}
# Wire the CodePipeline webhook into a GitHub repository.
resource "github_repository_webhook" "github_webhook" {
repository = "my_repo"
configuration {
url = aws_codepipeline_webhook.codepipeline_webhook.url
content_type = "json"
insecure_ssl = true
secret = data.aws_secretsmanager_secret_version.github_token.secret_string
}
events = ["push"]
}
backend.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.65.0"
    }
    github = {
      source  = "integrations/github"
      version = "~> 4.0"
    }
  }
}

provider "github" {
  token    = data.aws_secretsmanager_secret_version.github_token.secret_string
  owner    = "my_org"
  base_url = "https://github.com/my_org/" # we have GitHub Enterprise
}
Error on create:
Error: POST https://api.github.com/repos//my_repo/hooks: 404 Not Found []
Note that the org is missing completely from the URL. I've also tried including the org name in the github_repository_webhook resource, but the URL still comes out with a double slash and a 404:
Error: POST https://api.github.com/repos//my_org/my_repo/hooks: 404 Not Found []
When I remove the provider source and version completely, Terraform falls back to the hashicorp/github source and the webhook creates without any issues. Has anyone else run into this problem?
You've probably solved this by now, but just in case there are any others who run into this problem.
This solution assumes you're using Terraform version >=0.13 and the new Terraform GitHub Integration.
Check that all modules that use the github provider define a github required_providers block. If this block is not defined within the module, then it seems the (deprecated) "hashicorp/github" provider is used, even if you've configured this block on the root module where you've defined the github provider. (This is what was causing the error for me.) That is, make sure you've defined the following in each of your modules:
terraform {
  required_providers {
    github = {
      source  = "integrations/github"
      version = "~> 4.0"
    }
  }
}
Assuming you've done the above, then try double checking that you've set up the GitHub integration properly. Some notable gotchas are:
When upgrading from hashicorp/github to integrations/github, use terraform state replace-provider. Otherwise, Terraform will still require the old provider to interact with the state file.
The organization argument has been deprecated in favor of the owner argument (they should behave the same in the meantime, but just in case).
What is the output of terraform providers?
I suspect you have both versions of the github provider in your state.
If you do, then running

terraform state replace-provider hashicorp/github integrations/github

should fix you up.
I ran into this issue a few months ago. Your url is missing the organization. It should be of the format
https://api.github.com/repos/my_organization/my_repo/hooks
What specific syntax needs to be used in the example below in order for Terraform to source the AWS provider from a given path in the local file system instead of requesting a cloud copy from the remote Terraform Registry?
provider "aws" {
region = var._region
access_key = var.access_key
secret_key = var.secret_access_key
}
Something like src=C:\path\to\terraform\aws\provider\binary
I recall Mitchell Hashimoto explaining that this is a new feature during HashiConf, but I cannot seem to find the documentation.
You should be able to set it in the CLI configuration file (~/.terraformrc on Unix-like systems, terraform.rc in %APPDATA% on Windows), as described in the documentation:
provider_installation {
  filesystem_mirror {
    path    = "/usr/share/terraform/providers"
    include = ["example.com/*/*"]
  }
  direct {
    exclude = ["example.com/*/*"]
  }
}
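With that in place, Terraform installs any provider matching the include patterns only from the mirror directory, which must use the registry-style layout. For example (hostname, namespace, and version here are illustrative):

/usr/share/terraform/providers/example.com/examplecorp/aws/1.0.0/linux_amd64/terraform-provider-aws_v1.0.0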
Is there any way I can use one Terraform template's output as another Terraform template's input?
Ex: I have a Terraform template which creates an ELB, and I have another Terraform template which is going to create an auto scaling group that needs the ELB information as an input variable.
I know I can use a shell script to grep and feed in the ELB information, but I'm looking for a Terraform way of doing this.
Have you tried using remote state to populate your second template?
Declare it like this:
resource "terraform_remote_state" "your_state" {
backend = "s3"
config {
bucket = "${var.your_bucket}"
region = "${var.your_region}"
key = "${var.your_state_file}"
}
}
And then you should be able to pull out your resource directly like this:
your_elb = "${terraform_remote_state.your_state.output.your_output_resource}"
If this doesn't work for you, have you tried implementing your ELB in a module and then just using the output?
https://github.com/terraform-community-modules/tf_aws_elb is a good example of how to structure the module.
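A rough sketch of the module approach, in the Terraform syntax of that era (the module inputs and the elb_name output are illustrative; check the module's documentation for the real names):

module "elb" {
  source = "github.com/terraform-community-modules/tf_aws_elb"
  # ... ELB inputs such as name, subnets, listeners ...
}

# Elsewhere, e.g. in the auto scaling group:
# load_balancers = ["${module.elb.elb_name}"]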
Looks like in newer versions of Terraform you'd access the output var through the terraform_remote_state data source instead; since Terraform 0.12 the outputs live under an outputs attribute:
your_elb = data.terraform_remote_state.your_state.outputs.your_output_resource
All the rest is the same; it's just how you reference it that changed.
The question is about an ELB, but I will give an example with S3, as it is less to write.
If you don't know how to store Terraform state on AWS, read the article.
Let's suppose you have two independent projects: project-1 and project-2. They are located in two different directories (two different repositories)!
Terraform file /tmp/project-1/main.tf:
// Create an S3 bucket
resource "aws_s3_bucket" "main_bucket" {
  bucket = "my-epic-test-b1"
  acl    = "private"
}

// Output. It will be available at s3://multi-terraform-project-state-bucket/p1.tfstate
output "bucket_name_p1" {
  value = aws_s3_bucket.main_bucket.bucket
}

// Store terraform state on AWS. The S3 bucket and DynamoDB table should be created before running terraform
terraform {
  backend "s3" {
    bucket         = "multi-terraform-project-state-bucket"
    key            = "p1.tfstate"
    dynamodb_table = "multi-terraform-project-state-table"
    region         = "eu-central-1" // AWS region of state resources
  }
}

provider "aws" {
  profile = "my-cli-profile" // User profile defined in ~/.aws/credentials
  region  = "eu-central-1"  // AWS region
}
You run terraform init and terraform apply.
After that, you move to the Terraform file /tmp/project-2/main.tf:
// Create an S3 bucket
resource "aws_s3_bucket" "main_bucket" {
  bucket = "my-epic-test-b2"
  acl    = "private"
  tags = {
    // Get the S3 bucket name from another terraform state file. In this case it is s3://multi-terraform-project-state-bucket/p1.tfstate
    p1-bucket = data.terraform_remote_state.state1.outputs.bucket_name_p1
  }
}

// Get data from another state file
data "terraform_remote_state" "state1" {
  backend = "s3"
  config = {
    bucket = "multi-terraform-project-state-bucket"
    key    = "p1.tfstate"
    region = "eu-central-1"
  }
}

// Store terraform state on AWS. The S3 bucket and DynamoDB table should be created before running terraform
terraform {
  backend "s3" {
    bucket         = "multi-terraform-project-state-bucket"
    key            = "p2.tfstate"
    dynamodb_table = "multi-terraform-project-state-table"
    region         = "eu-central-1" // AWS region of state resources
  }
}

provider "aws" {
  profile = "my-cli-profile" // User profile defined in ~/.aws/credentials
  region  = "eu-central-1"  // AWS region
}
You run terraform init and terraform apply.
Now check the tags on my-epic-test-b2. There you will find the name of the bucket from project-1.
When you are integrating Terraform with Jenkins, you can simply define a variable in the Jenkinsfile you are creating. Suppose you want to launch an EC2 instance using Terraform and a Jenkinsfile. When you need to get the public IP address of the instance, you can use this command inside your Jenkinsfile.
script {
    // Groovy double quotes so the shell command receives cut -d '"' -f2 intact
    def public_ip = sh(script: "terraform output public_ip | cut -d '\"' -f2", returnStdout: true).trim()
}
This strips the surrounding quotes and saves only the IP address in the public_ip variable (on newer Terraform versions you could instead use terraform output -raw public_ip). For it to work, you have to define an output block in the Terraform script that outputs the public IP.
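A minimal sketch of that output block (the aws_instance.example resource name is an assumption):

output "public_ip" {
  # assumes an aws_instance resource named "example" defined elsewhere in the configuration
  value = aws_instance.example.public_ip
}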