data source terraform_remote_state with workspaces - terraform

I'm running terraform v0.14.8 with a non-default terraform workspace.
I have an Atlantis server that handles my plans and applies.
When I clone my repo locally and run my plans, I get errors about my data source. I don't quite understand why, as I don't get these errors on my Atlantis server, which I believe performs the same operations. The Atlantis server also uses tf v0.14.8.
My terraform:
data "terraform_remote_state" "route53" {
  backend = "s3"
  config = {
    key      = "web/terraform.tfstate"
    region   = "us-west-2"
    bucket   = "prod-terraform"
    role_arn = "arn:aws:iam::xxxxxxxxxx:role/atlantis"
  }
}
Before I run my local plan, I switch the workspace:
terraform workspace select web
# In addition, I also tried:
export TF_WORKSPACE=web
My plan:
terraform plan
...
Error: Unable to find remote state
on provider.tf line 46, in data "terraform_remote_state" "route53":
46: data "terraform_remote_state" "route53" {
No stored state was found for the given workspace in the given backend.
I could easily prefix my "key" with env:/ and things would work, but I'm trying to figure out how to do this without making that adjustment, seeing that my Atlantis server just works.
data "terraform_remote_state" "route53" {
  backend = "s3"
  config = {
    key      = "env:/web/web/terraform.tfstate"
    region   = "us-west-2"
    bucket   = "prod-terraform"
    role_arn = "arn:aws:iam::xxxxxxxxxx:role/atlantis"
  }
}

Your question seems to imply some confusion over which backend the web workspace is selected for. Running terraform workspace select web selects the web workspace from the backend of the current configuration (the directory where you are running Terraform), but I suspect your intent is to select the web workspace from the backend you've configured in data "terraform_remote_state" instead.
If so, you can do that by setting the workspace argument in the data source configuration:
data "terraform_remote_state" "route53" {
  backend   = "s3"
  workspace = "web"
  config = {
    key      = "web/terraform.tfstate"
    region   = "us-west-2"
    bucket   = "prod-terraform"
    role_arn = "arn:aws:iam::xxxxxxxxxx:role/atlantis"
  }
}
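To make the behavior concrete: with workspace = "web", the S3 backend applies its usual env:/ workspace prefix when looking up the object, so this data source reads the same object the question's manually prefixed key pointed at (a sketch of the resulting paths, using the bucket and key from above):

```hcl
# Object read with workspace = "web" and key = "web/terraform.tfstate":
#   s3://prod-terraform/env:/web/web/terraform.tfstate
#
# Which is the same object the manually prefixed key referenced:
#   key = "env:/web/web/terraform.tfstate"
```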

Related

Unable to pull terraform state from AWS-S3

I'm trying to set things up so that a Terraform backend uploads the state to S3, so that my teammate can use my Terraform state to resume my work. This is my setup:
terraform {
  backend "s3" {
    bucket         = "username-terraform-state"
    key            = "billow/terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "username-terraform-state-test-locks"
    encrypt        = true
  }
}
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "username-terraform-state"
    key    = "billow/terraform.tfstate"
    region = var.region
  }
}
With this setup, I have two folders in the S3 bucket. One is billow/, with a terraform.tfstate file. The other is env:/remote_s3/billow/ (remote_s3 is the name of my terraform workspace), with another terraform.tfstate. Both are also updated when I execute a terraform import command.
What I want is that when I create a new workspace, I can pull the state file from the existing folder in the bucket and continue the project. The steps I took were placing the same .tf files in the directory and running terraform init, terraform refresh, and then terraform state pull to pull the state file. However, this only pulls an empty state file, and I would need to re-import all the resources again.
So here are my two questions:
Why are there two folders in the bucket? I thought with my backend setup there should be only one of them.
What should I do to make it so that when I set up a new terraform workspace, I would be able to import the whole state file from my previously saved terraform state?
Thanks!
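For what it's worth, the two folders follow the S3 backend's workspace layout: the default workspace lives at the configured key, and every other workspace lives under an env:/&lt;name&gt;/ prefix. A sketch of a data source that reads the remote_s3 workspace's copy (bucket, key, and workspace name taken from the question; the workspace argument is the supported way to select it):

```hcl
data "terraform_remote_state" "network" {
  backend   = "s3"
  workspace = "remote_s3" # reads env:/remote_s3/billow/terraform.tfstate
  config = {
    bucket = "username-terraform-state"
    key    = "billow/terraform.tfstate"
    region = "us-west-2"
  }
}
```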

Terraform remote state issue for Azureblob in 0.12.x

I am using Azure provider and storing the terraform state in Azure blob storage. Using the below code snippet for this.
data "terraform_remote_state" "xxxxxx" {
  backend = "azurerm"
  config = {
    container_name       = "terraform-state"
    resource_group_name  = "${var.remote_state_resource_group}"
    storage_account_name = "${var.remote_state_storage_account}"
    access_key           = "${var.remote_state_credentials}"
    key                  = "${var.cluster_name}-k8s-worker"
  }
  defaults = {}
}
If I run the above code with the latest 0.12.x version of Terraform, it fails with the error below, but the same code works as expected with 0.11.x.
Error message:
Error: Unable to find remote state
on example2.tf line 20, in data "terraform_remote_state" "xxxxxx":
20: data "terraform_remote_state" "xxxxxx" {
No stored state was found for the given workspace in the given backend.
Has anyone faced a similar issue in Terraform 0.12.x with Azure blob storage?
I think the possible reasons are:
using the wrong storage account
using the wrong container name
using the wrong key
Any of the above will cause the error you got. Otherwise, remote state works fine in Terraform 0.12.x.
I have encountered this issue when I have one terraform configuration that stores state in azurerm and then I want to use that state in another terraform configuration as a remote azurerm data source.
Specifically the issue appears when the first configuration uses terraform workspaces. The azurerm backend silently appends a suffix of the form env:${terraform.workspace} at the end of the blob key. You must explicitly correct for this in the data source.
If the backend of the first configuration looks like this:
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-myapp"
    storage_account_name = "myappterraform"
    container_name       = "tfstate"
    key                  = "myapp.tfstate"
  }
}
The data source of the second configuration must look like this:
data "terraform_remote_state" "myapp" {
  backend = "azurerm"
  config = {
    resource_group_name  = "rg-myapp"
    storage_account_name = "myappterraform"
    container_name       = "tfstate"
    key                  = "myapp.tfstateenv:${terraform.workspace}"
  }
}
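In other words, given the backend above, the blob names in the tfstate container end up as follows (a sketch assuming a workspace named dev):

```hcl
# Blob names produced by the azurerm backend:
#   default workspace: myapp.tfstate
#   "dev" workspace:   myapp.tfstateenv:dev   (key + "env:" + workspace name)
```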

state management in terraform

I'm building terraform scripts to orchestrate an Azure deployment. I use Azure blob storage to store a tfstate file. This file is shared across several IaC pipelines.
If, for instance, I create an Azure resource group with Terraform and then, once that is done, try to create a new custom role, terraform plan will mark the resource group for destruction.
This is the script for the role creation:
terraform {
  backend "azurerm" {
    storage_account_name = "saiac"
    container_name       = "tfstate"
    key                  = "dev.terraform.tfstate"
    resource_group_name  = "rg-devops"
  }
}
data "azurerm_subscription" "primary" {
}

resource "azurerm_role_definition" "roles" {
  count = length(var.roles)
  name  = "${var.role_prefix}${var.roles[count.index]["suffix_name"]}${var.role_suffix}"
  scope = "${data.azurerm_subscription.primary.id}"

  permissions {
    actions     = split(",", var.roles[count.index]["actions"])
    not_actions = split(",", var.roles[count.index]["not_actions"])
  }

  assignable_scopes = ["${data.azurerm_subscription.primary.id}"]
}
and this is script for resource group creation:
terraform {
  backend "azurerm" {
    storage_account_name = "saiac"
    container_name       = "tfstate"
    key                  = "dev.terraform.tfstate"
    resource_group_name  = "rg-devops"
  }
}
resource "azurerm_resource_group" "rg" {
  count    = "${length(var.rg_purposes)}"
  name     = "${var.rg_prefix}-${var.rg_postfix}-${var.rg_purposes[count.index]}"
  location = "${var.rg_location}"
  tags     = "${var.rg_tags}"
}
If I remove the backend block, everything works as expected, does that mean I need the backend block?
Terraform uses the .tfstate file to compare your code against the existing cloud infrastructure; it is the backbone of Terraform.
If your code and the existing infrastructure differ, Terraform will plan to destroy the existing resources and apply your code changes.
To overcome this, Terraform provides an import facility: you can import the existing resource, and Terraform will update its .tfstate file.
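As a hypothetical sketch of that import step (the resource address and Azure IDs below are placeholders, not values from the question):

```hcl
# 1. Declare the resource so Terraform has an address to import into:
resource "azurerm_resource_group" "rg" {
  name     = "rg-existing" # placeholder name
  location = "westeurope"  # placeholder location
}

# 2. Then import the existing resource into state:
#    terraform import azurerm_resource_group.rg \
#      /subscriptions/<subscription-id>/resourceGroups/rg-existing
```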
This .tfstate file must be specified in your backend configuration; best practice is to store your .tfstate file in cloud storage rather than in a local directory.
When you run the terraform init command, it will check for the .tfstate file.
Below is a sample backend.tf file (AWS S3 is used):
terraform {
  backend "s3" {
    bucket  = "backends.terraform.file"
    key     = "my-terraform.tfstate_key"
    region  = "my-region-1"
    encrypt = "false"
    acl     = "bucket-owner-full-control"
  }
}
A terraform backend is not required for terraform. If you do not use one, however, no one else will be able to pull your repo and run your Terraform: the state will ONLY be stored locally, in your working directory. This means that if you lose your local files, you're in trouble. It is recommended to use a backend that also supports state locking, which azurerm does. With a backend in place, the state gets pulled on terraform init after pulling the repo.

is it expected that data terraform_remote_state creates state when it does not exist?

I'm defining a remote state data source (GCS backend) in Terraform. When I plan, the state file for the remote state is created if it did not previously exist, even when I'm not referencing the state in other resources.
Terraform v0.11.14
So when I plan for env dev:
data "terraform_remote_state" "example" {
  backend   = "gcs"
  workspace = "dev-us-east1"
  config {
    bucket = "bucket"
    prefix = "global/projects/example-project"
  }
}
and the file in GCS at bucket/global/projects/example-project/dev-us-east1 does not exist, then it is created as an empty state.
I expected some kind of state-not-found error, but instead the remote state is created with empty content.

Terraform remote state for different environments

How do I manage remote state for different environments? I originally wanted to use variables in my remote state definitions but realized I cannot use variables like this:
provider "aws" {
  region = "ap-southeast-1"
}

terraform {
  backend "s3" {
    bucket = "${var.state_bucket}"
    key    = "${var.state_key}"
    region = "ap-southeast-1"
  }
}

data "terraform_remote_state" "s3_state" {
  backend = "s3"
  config {
    bucket = "${var.state_bucket}"
    key    = "${var.state_key}"
    region = "ap-southeast-1"
  }
}
I can hardcode the bucket name, but the bucket may not be the same across environments.
You will want to use what Terraform calls workspaces. Here is the documentation: https://www.terraform.io/docs/state/workspaces.html
So say you have a piece of state called MyStateKey.
When you use workspaces, Terraform namespaces the existing key by workspace. For the S3 backend, for example, if you created a workspace called "dev", its state would be stored under env:/dev/MyStateKey.
I would suggest using some conventions to make this easier, such as treating the "default" workspace as production, with additional workspaces named after your other environments. Then when you run terraform you can select the workspace, or use the TF_WORKSPACE environment variable to set it.
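A sketch of that workflow (workspace names are illustrative):

```shell
# Create and select a workspace per environment
terraform workspace new dev
terraform workspace select dev
terraform plan

# Or select the workspace via the environment variable
TF_WORKSPACE=dev terraform plan
```

Note that because backend blocks cannot interpolate variables, workspaces (or passing -backend-config values to terraform init) are the usual ways to vary state per environment.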
