I configure my Terraform using a GCS backend with a workspace. My CI environment has access to exactly the state file it requires for the workspace.
terraform {
  required_version = ">= 0.14"

  backend "gcs" {
    prefix      = "<my prefix>"
    bucket      = "<my bucket>"
    credentials = "credentials.json"
  }
}
I define the output of my terraform module inside output.tf:
output "base_api_url" {
  description = "Base url for the deployed cloud run service"
  value       = google_cloud_run_service.api.status[0].url
}
My CI server runs terraform apply -auto-approve -lock-timeout 15m. It succeeds and shows me the output in the console logs:
Outputs:
base_api_url = "https://<my project url>.run.app"
When I call terraform output base_api_url, it gives me the following error:
│ Warning: No outputs found
│
│ The state file either has no outputs defined, or all the defined outputs
│ are empty. Please define an output in your configuration with the `output`
│ keyword and run `terraform refresh` for it to become available. If you are
│ using interpolation, please verify the interpolated value is not empty. You
│ can use the `terraform console` command to assist.
I try calling terraform refresh as the warning suggests, and it tells me:
╷
│ Warning: Empty or non-existent state
│
│ There are currently no remote objects tracked in the state, so there is
│ nothing to refresh.
╵
I'm not sure what to do. I'm calling terraform output RIGHT after I call apply, but it's still giving me no outputs. What am I doing wrong?
I had the exact same issue; it was happening because I was running Terraform commands against a different path than the one my shell was in:
terraform -chdir="another/path" apply
Running the output command afterwards fails with that error, unless you cd to that path before running it:
cd "another/path"
terraform output
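Alternatively, you can keep using -chdir for every command, so that apply and output both resolve the same working directory and therefore the same state. A sketch (the path is a placeholder):

```shell
# Point every command at the same working directory,
# so that apply and output read the same state file.
terraform -chdir="another/path" apply -auto-approve
terraform -chdir="another/path" output base_api_url
```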
I'm using Terraform and Terragrunt to create some repositories in Bitbucket. Since Bitbucket does not have an official provider, I'm using the one from DrFaust92/bitbucket. Everything worked on my computer and I could apply it all. Now I'm moving the workflow to CircleCI, and when I run it there I always get this error:
bitbucket_repository.repository: Creating...
╷
│ Error: Empty Summary: This is always a bug in the provider and should be reported
to the provider developers.
│
│ with bitbucket_repository.repository,
│ on main.tf line 5, in resource "bitbucket_repository" "repository":
│ 5: resource "bitbucket_repository" "repository" {
The resource does not have anything special:
resource "bitbucket_repository" "repository" {
  name        = var.name
  description = var.description
  owner       = var.owner
  project_key = var.project_key
  language    = var.project_language
  fork_policy = var.fork_policy
  is_private  = true
}
I'm using Terraform 1.3.7 and Terragrunt 0.43.1 (my computer and CircleCI both run the same versions). It fails whenever it accesses any tfstate: if the tfstate already exists, it throws the error when planning; if it doesn't, the plan runs fine, but the apply fails with the same error.
Any help to fix this will be appreciated!
This is most likely an issue with the provider version. Locally, you may have a certain version downloaded and cached, while CircleCI may be fetching the latest available provider (which may have issues).

I would suggest you find the provider version currently in use locally, and then add a required_providers block so that CircleCI uses the same version. You can find the version presently in use from the terminal output generated by terraform init. Below is a sample block pinning a specific provider version (taken from: https://registry.terraform.io/providers/aeirola/bitbucket/latest/docs/resources/repository):
terraform {
  required_providers {
    bitbucket = {
      source  = "DrFaust92/bitbucket"
      version = "2.30.0"
    }
  }
}
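To find the version in use locally, a couple of options (a sketch; the provider address in the grep is an assumption based on the source above):

```shell
# List the providers this configuration requires
terraform providers

# The dependency lock file records the exact version selected by init
grep -A 2 'registry.terraform.io/drfaust92/bitbucket' .terraform.lock.hcl
```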
I am deploying Terraform code through a Bitbucket pipeline and am having a problem parsing a map-of-objects variable in the pipeline. Below is the variable:
variable "images" {
  type = map(object({
    port = number
  }))
}
Below is how the variable value is defined in the BitBucket Pipeline variables:
"{"image_one"={port=1000}"image_two"={port=2000}}"
When the pipeline runs, I am getting the following error:
Error: Extra characters after expression
│
│ on <value for var.images> line 1:
│ (source code not available)
│
│ An expression was successfully parsed, but extra characters were found
│ after it.
Below is the command within bitbucket-pipelines.yml showing how the variable is passed in the pipeline:
terraform apply -var images="$IMAGES" -auto-approve
Any advice on how to get the map of objects to execute through the pipeline would be helpful.
Thank you.
Assuming this is your actual value, and not a transcription mistake made while writing the question, you are missing a comma between the map entries. It should be:
"{"image_one"={port=1000}, "image_two"={port=2000}}"
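When passing the value through a shell, it also helps to single-quote the whole string so the inner double quotes and braces survive word splitting. A sketch (the terraform invocation is shown as a comment; the variable name matches the question):

```shell
# Note the comma between the two map entries.
IMAGES='{"image_one"={port=1000},"image_two"={port=2000}}'

# Pass it through unmodified, e.g.:
#   terraform apply -var "images=${IMAGES}" -auto-approve

echo "$IMAGES"
```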
I am trying to add the attribute "response_headers_policy_id" to the aws_cloudfront_distribution resource. I have 3 environments: prod, stage, and demo. Prod was created first, followed by stage and demo a few months later. When adding that attribute to the staging and demo environments, there are no issues. However, the plan fails with the following error when running for the prod environment:
Error: Unsupported argument
│
│ on ../../modules/<module>/cloudfront.tf line 47, in resource "aws_cloudfront_distribution" "this":
│ 47: response_headers_policy_id = "67f7725c-6f97-4210-82d7-5512b31e9d03" // SecurityHeadersPolicy ID
│
│ An argument named "response_headers_policy_id" is not expected here.
My assumption is that the state file expects an older version of the module for the production environment, but I am unsure how to resolve that issue, especially in Terraform Cloud.
My first thought is that there is a mismatch in what version of the AWS provider you're using for your different environments. That argument was only added to the AWS provider in v3.64.0, in #21620.
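If that is the case, pinning a minimum provider version in every environment should surface the mismatch at init time instead of at plan time. A sketch, assuming the standard hashicorp/aws source:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.64.0" # first release with response_headers_policy_id
    }
  }
}
```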
I'm using Terraform v1.0.0 and created a remote backend using AWS S3 and AWS DynamoDB as explained in Terraform Up & Running by Yevgeniy Brikman:
I wrote the code for the S3 bucket and the DynamoDB table and created the resources via terraform apply
I added terraform { backend "s3" {} } to my code
I created a backend.hcl file with all the relevant parameters
I moved my local state to S3 by invoking terraform init -backend-config=backend.hcl
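For reference, a backend.hcl along the lines of the book's example might look like this (a sketch; the bucket, key, region, and table names are placeholders):

```hcl
# backend.hcl -- partial configuration passed via
#   terraform init -backend-config=backend.hcl
bucket         = "my-state-bucket"
key            = "global/s3/terraform.tfstate"
region         = "us-east-2"
dynamodb_table = "my-locks"
encrypt        = true
```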
Now I want to convert the remote state back to local state so I can safely delete the remote backend. Brikman explains that to do this, one has to remove the backend configuration and invoke terraform init. When I try this, I see:
$ terraform init
Initializing modules...
Initializing the backend...
╷
│ Error: Backend configuration changed
│
│ A change in the backend configuration has been detected, which may require migrating existing state.
│
│ If you wish to attempt automatic migration of the state, use "terraform init -migrate-state".
│ If you wish to store the current configuration with no changes to the state, use "terraform init -reconfigure".
╵
I figured the correct approach is to use -reconfigure, which seems to work at first glance:
$ terraform init -reconfigure
Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v3.47.0
- Using previously-installed hashicorp/random v3.1.0
Terraform has been successfully initialized!
However, executing terraform plan reveals that the initialization did not succeed:
$ terraform plan
╷
│ Error: Backend initialization required, please run "terraform init"
│
│ Reason: Unsetting the previously set backend "s3"
│
│ The "backend" is the interface that Terraform uses to store state,
│ perform operations, etc. If this message is showing up, it means that the
│ Terraform configuration you're using is using a custom configuration for
│ the Terraform backend.
│
│ Changes to backend configurations require reinitialization. This allows
│ Terraform to set up the new configuration, copy existing state, etc. Please run
│ "terraform init" with either the "-reconfigure" or "-migrate-state" flags to
│ use the current configuration.
│
│ If the change reason above is incorrect, please verify your configuration
│ hasn't changed and try again. At this point, no changes to your existing
│ configuration or state have been made.
╵
The only way to unset the backend seems to be via terraform init -migrate-state:
$ terraform init -migrate-state
Initializing modules...
Initializing the backend...
Terraform has detected you're unconfiguring your previously set "s3" backend.
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "s3" backend to the
newly configured "local" backend. No existing state was found in the newly
configured "local" backend. Do you want to copy this state to the new "local"
backend? Enter "yes" to copy and "no" to start with an empty state.
Enter a value: yes
Successfully unset the backend "s3". Terraform will now operate locally.
Is it not possible to convert the state via terraform init -reconfigure despite Terraform explicitly telling me so? If so, what does terraform init -reconfigure do exactly?
The workaround below solved this problem for me. Add the following and run terraform init:
terraform {
  backend "local" {}
}
From the official docs, it seems -reconfigure is a bit destructive in the sense that it disregards the existing configuration. I would think that if you made changes to the backend and then ran the command, it would simply work from the assumption that this is a new config. I only recently read the docs myself, and I did not know this was the behavior.
So, back to your question: I would assume -migrate-state is the desired option when migrating state between different backends. I understand from your issue that this worked using terraform init -migrate-state?
As MindTooth said, init -migrate-state does exactly what you want to do: it migrates the state unchanged when a different backend is configured. init -reconfigure will initialise the new backend with a clean, empty state.
Another way to do it is by pulling the state from the S3 backend to a JSON file, then initialising an empty local backend using init -reconfigure, and pushing the state back in:
terraform state pull > state.json
terraform init -reconfigure
terraform state push state.json
I would like to use environment variables in my TF files. How can I reference them in those files?
I use Terraform Cloud and define the variables in the environment variables section, which means I don't use my CLI to run Terraform commands (no export TF_VAR, and no -var or -var-file parameter).
I didn't find any answer to this in forums or in the documentation.
Edit:
Maybe if I elaborate on what I've done it will be clearer.
So I have 2 environment variables named "username" and "password"
Those variables are defined in the environment variables section in Terraform Cloud.
In my main.tf file I create a mongo cluster which should be created with those username and password variables.
In the main variables.tf file I defined those variables as:
variable "username" {
  type = string
}

variable "password" {
  type = string
}
My main.tf file looks like:
module "eu-west-1-mongo-cluster" {
  ...
  ...
  username = var.username
  password = var.password
}
In the mongo submodule I defined them in the variables.tf file as type string, and in the submodule's mongo.tf file I reference them as var.username and var.password.
Thanks !
I don't think what you are trying to do is supported by Terraform Cloud. You are setting Environment Variables in the UI, but you need to set Terraform Variables instead.
For the Terraform Cloud backend you need to dynamically create *.auto.tfvars; none of the usual mechanisms, -var="myvar=123", TF_VAR_myvar=123, or terraform.tfvars, are currently supported from the remote backend. The error message below is produced by the CLI when running Terraform 1.0.1 with a -var value:
│ Error: Run variables are currently not supported
│
│ The "remote" backend does not support setting run variables at this time. Currently the
│ only to way to pass variables to the remote backend is by creating a '*.auto.tfvars'
│ variables file. This file will automatically be loaded by the "remote" backend when the
│ workspace is configured to use Terraform v0.10.0 or later.
│
│ Additionally you can also set variables on the workspace in the web UI:
│ https://app.terraform.io/app/<org>/<workspace>/variables
My use case is a CI/CD pipeline where the CLI uses a remote Terraform Cloud backend, so I created the *.auto.tfvars with, for example:
# Environment variables set by pipeline
TF_VAR_cloudfront_origin_path="/mypath"
# Dynamically create *.auto.tfvars from environment variables
cat >dev.auto.tfvars <<EOL
cloudfront_origin_path="$TF_VAR_cloudfront_origin_path"
EOL
# Plan the run
terraform plan
As per https://www.terraform.io/docs/cloud/workspaces/variables.html#environment-variables, Terraform will export all the provided variables.
So if you have defined an environment variable TF_VAR_name, then you should be able to use it as var.name in the Terraform code.
Hope this helps.
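As a sketch of that convention (the variable name here is just an example):

```hcl
# With the environment variable TF_VAR_name="demo" set on the workspace,
# Terraform populates this variable automatically:
variable "name" {
  type = string
}

# ...and it can then be referenced anywhere as var.name
output "greeting" {
  value = "hello ${var.name}"
}
```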
I managed to get around this in my DevOps pipeline by copying the terraform.tfvars file from my subdirectory to the working directory as file.auto.tfvars.
For example:
cp $(System.DefaultWorkingDirectory)/_demo/env/dev/variables.tfvars $(System.DefaultWorkingDirectory)/demo/variables.auto.tfvars