I am using the Azure provider and storing the Terraform state in Azure blob storage, using the code snippet below.
data "terraform_remote_state" "xxxxxx" {
backend = "azurerm"
config = {
container_name = "terraform-state"
resource_group_name = "${var.remote_state_resource_group}"
storage_account_name = "${var.remote_state_storage_account}"
access_key = "${var.remote_state_credentials}"
key = "${var.cluster_name}-k8s-worker"
}
defaults = {}
}
If I run the above code with the latest Terraform 0.12.x, it fails with the error below. Running the same code with 0.11.x works as expected.
Error message:

Error: Unable to find remote state

  on example2.tf line 20, in data "terraform_remote_state" "xxxxxx":
  20: data "terraform_remote_state" "xxxxxx" {

No stored state was found for the given workspace in the given backend.
Has anyone faced a similar issue in Terraform 0.12.x with Azure blob storage?
I think the possible reasons are:

using the wrong storage account
using the wrong container name
using the wrong key

Any of these will cause the error you got; the remote state data source itself works fine in Terraform 0.12.x. You can verify the values with the Azure CLI, as shown below.
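To rule these out, you can list the blobs in the container and check that the expected state key is actually there (a sketch; substitute your own account name and access key):

az storage blob list \
  --container-name terraform-state \
  --account-name <storage-account-name> \
  --account-key <access-key> \
  --query "[].name" --output tsv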
I have encountered this issue when one Terraform configuration stores its state in azurerm and I then want to consume that state from another configuration as a remote azurerm data source.
Specifically, the issue appears when the first configuration uses Terraform workspaces. The azurerm backend silently appends a suffix of the form env:${terraform.workspace} to the blob key, and you must explicitly account for this in the data source.
If the backend of the first configuration looks like this:
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-myapp"
    storage_account_name = "myappterraform"
    container_name       = "tfstate"
    key                  = "myapp.tfstate"
  }
}
The data source of the second configuration must look like this:
data "terraform_remote_state" "myapp" {
backend = "azurerm"
config = {
resource_group_name = "rg-myapp"
storage_account_name = "myappterraform"
container_name = "tfstate"
key = "myapp.tfstateenv:${terraform.workspace}"
}
}
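Alternatively, instead of hand-building the suffixed key, you can set the data source's workspace argument and let the backend derive the blob name for you (a sketch, assuming both configurations use the same workspace names):

data "terraform_remote_state" "myapp" {
  backend   = "azurerm"
  workspace = terraform.workspace  # backend appends the env:<workspace> suffix itself

  config = {
    resource_group_name  = "rg-myapp"
    storage_account_name = "myappterraform"
    container_name       = "tfstate"
    key                  = "myapp.tfstate"
  }
}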
I am trying Terraform operations using Azure as the remote backend. So far, I have been able to store the state remotely in Azure. I am now unable to retrieve the state from remote, as I do not store the .tf or .tfstate files locally. How can I fetch them from remote? This is the code I have:
terraform init -backend-config=config.backend.tfbackend
terraform state pull # fails due to no configuration file existing locally
This is my config.backend.tfbackend:

backend "azurerm" {
  resource_group_name  = "rg098"
  storage_account_name = "tstate6565"
  container_name       = "tstate"
  key                  = "test5176"
}
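One thing worth checking here (an assumption about what's going wrong above): a file passed via -backend-config is a partial backend configuration and should contain only the backend's arguments as key/value pairs, with the backend block itself declared empty in a .tf file. Something like:

# config.backend.tfbackend
resource_group_name  = "rg098"
storage_account_name = "tstate6565"
container_name       = "tstate"
key                  = "test5176"

# backend.tf
terraform {
  backend "azurerm" {}
}

Once terraform init -backend-config=config.backend.tfbackend succeeds against this layout, terraform state pull should work even with no resources defined locally.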
I have Terraform code that deploys Azure resources and outputs a bunch of values I need to use for further configuration.
I have initialized the Terraform backend and the state is saved to an Azure storage account; I see the tfstate file with all the correct values.
FYI, I have added the configuration below, but still no luck; I am also running terraform init in the second location so the backend is initialized with the same state:
backend "azurerm" {
storage_account_name = "${var.STATE_STORAGE_ACCOUNT_NAME}"
container_name = "${var.STATE_CONTAINER_NAME}"
key = "${var.STATE_STORAGE_ACCOUNT_KEY}"
}
What I want to be able to do is pull this state in some way so I can do terraform output -raw some_output in a different location than where I deployed the resources.
I can't seem to find a way to do this. How could this be achieved? Thanks
It really depends on your use case. You can take two different approaches:
Import the resource with a "data" source
Data sources allow Terraform to use information defined outside of Terraform, defined by another separate Terraform configuration, or modified by functions.
Terraform docs
For AWS it would be something like this:
// Create an SSM parameter in Terraform code A
resource "aws_ssm_parameter" "secret" {
  name  = "/secret"
  type  = "String"
  value = "SecretValue"
}

// Import this resource in Terraform code B
data "aws_ssm_parameter" "imported_secret" {
  name = "/secret"
}

// So later you can reference it
locals {
  secretValue = data.aws_ssm_parameter.imported_secret.value
}
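Since this thread is Azure-focused, the azurerm equivalent would be something along these lines (a sketch; the Key Vault names and references are hypothetical):

// Code A: store a secret in an existing Key Vault
resource "azurerm_key_vault_secret" "secret" {
  name         = "shared-secret"
  value        = "SecretValue"
  key_vault_id = azurerm_key_vault.example.id  # assumes a vault managed in code A
}

// Code B: look up the vault, then read the secret back
data "azurerm_key_vault" "example" {
  name                = "myapp-kv"
  resource_group_name = "rg-myapp"
}

data "azurerm_key_vault_secret" "imported_secret" {
  name         = "shared-secret"
  key_vault_id = data.azurerm_key_vault.example.id
}

locals {
  secretValue = data.azurerm_key_vault_secret.imported_secret.value
}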
Create modules
Modules are containers for multiple resources that are used together. A module consists of a collection of .tf and/or .tf.json files kept together in a directory. Modules are the main way to package and reuse resource configurations with Terraform.
Terraform docs
Below is a basic example of Terraform modules. We create a vpc module whose source is the ../../modules/vpc directory, and we reference it as module.vpc in the rds module.
module "vpc" {
source = "../../modules/vpc"
env = var.env
azs = var.azs
cidr = var.cidr
db_subnets = var.db_subnets
private_subnets = var.private_subnets
public_subnets = var.public_subnets
}
module "rds" {
source = "../../modules/rds"
db_subnets_cidr_blocks = module.vpc.db_subnets_cidr_block
private_subnets_cidr_blocks = module.vpc.private_subnets_cidr_block
public_subnets_cidr_blocks = module.vpc.public_subnets_cidr_block
vpc_id = module.vpc.vpc_id
env = var.env
db_subnets_ids = module.vpc.db_subnets
}
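For module.vpc.vpc_id and friends to be referenceable like this, the vpc module has to declare them as outputs (a sketch; the internal resource names are assumptions):

# ../../modules/vpc/outputs.tf
output "vpc_id" {
  value = aws_vpc.this.id  # assumes the module names its VPC resource "this"
}

output "db_subnets_cidr_block" {
  value = aws_subnet.db[*].cidr_block  # assumes counted aws_subnet.db resources
}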
I didn't find a straightforward way to do this, so since the state file was being saved to Azure Blob Storage, the solution was:
Run an Azure CLI command to download the blob locally:

az storage blob download \
  --container-name tstate \
  --file $tf_state_file_name \
  --name $tf_state_file_name \
  --account-key $tf_state_key \
  --account-name $tf_state_storage_account_name

where $tf_state_file_name is the name of the local file.
Read the desired values using JQ:
jq '.outputs.storage_account_name.value' ./$tf_state_file_name -r
You can read the values raw thanks to the -r parameter. This is the same as doing:
terraform output -raw storage_account_name
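Putting the two steps together, a small helper script might look like this (a sketch; the variable values are assumptions matching the commands above):

#!/usr/bin/env bash
set -euo pipefail

# Blob name of the state file and the output to read (hypothetical values)
tf_state_file_name="dev.terraform.tfstate"
output_name="${1:?usage: $0 <output-name>}"

# Download the state blob from the storage account
az storage blob download \
  --container-name tstate \
  --name "$tf_state_file_name" \
  --file "$tf_state_file_name" \
  --account-name "$tf_state_storage_account_name" \
  --account-key "$tf_state_key"

# Equivalent of: terraform output -raw <output-name>
jq -r ".outputs.${output_name}.value" "./$tf_state_file_name"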
I'm running terraform v0.14.8 with a non-default terraform workspace.
I have an Atlantis server that handles my plans and applies.
When I clone my repo locally and run my plans I get errors about my datasource. I don't quite understand why as I don't get these errors on my Atlantis server which I believe performs the same operations. The Atlantis server also uses tf v0.14.8.
My terraform:
data "terraform_remote_state" "route53" {
backend = "s3"
config = {
key = "web/terraform.tfstate"
region = "us-west-2"
bucket = "prod-terraform"
role_arn = "arn:aws:iam::xxxxxxxxxx:role/atlantis"
}
Before I run my local plan, I switch the workspace:
terraform workspace select web
# in addition I also tried
export TF_WORKSPACE=web
My plan:
terraform plan
...
Error: Unable to find remote state
on provider.tf line 46, in data "terraform_remote_state" "route53":
46: data "terraform_remote_state" "route53" {
No stored state was found for the given workspace in the given backend.
I could easily edit my key to include the env: prefix and things would work, but I'm trying to figure out how to do this without making that adjustment, seeing that my Atlantis server just works:
data "terraform_remote_state" "route53" {
backend = "s3"
config = {
key = "env:/web/web/terraform.tfstate"
region = "us-west-2"
bucket = "prod-terraform"
role_arn = "arn:aws:iam::xxxxxxxxxx:role/atlantis"
}
Your question seems to imply some confusion over which backend the web workspace is selected for. Running terraform workspace select web selects the web workspace of the backend in the current configuration (the directory where you are running Terraform), but I suspect your intent is to select the web workspace of the backend you've configured in data "terraform_remote_state" instead.
If so, you can do that by setting the workspace argument in the data source configuration:
data "terraform_remote_state" "route53" {
backend = "s3"
workspace = "web"
config = {
key = "web/terraform.tfstate"
region = "us-west-2"
bucket = "prod-terraform"
role_arn = "arn:aws:iam::xxxxxxxxxx:role/atlantis"
}
}
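For context on why your manual env:/web/... key also worked: the s3 backend stores non-default workspace states under <prefix>/<workspace>/<key>, where the prefix defaults to env:. If a configuration ever overrides that prefix, the data source has to match it (a sketch, assuming the default):

data "terraform_remote_state" "route53" {
  backend   = "s3"
  workspace = "web"

  config = {
    # Defaults to "env:", so the web workspace resolves to
    # "env:/web/web/terraform.tfstate"
    workspace_key_prefix = "env:"
    key                  = "web/terraform.tfstate"
    region               = "us-west-2"
    bucket               = "prod-terraform"
    role_arn             = "arn:aws:iam::xxxxxxxxxx:role/atlantis"
  }
}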
I'm building Terraform scripts to orchestrate an Azure deployment. I use Azure blob storage to store the tfstate file, which is shared with several IaC pipelines.
If, for instance, I create an Azure resource group with Terraform and then, once that is done, try to create a new custom role, terraform plan will mark the resource group for destruction.
This is the script for the role creation:
terraform {
  backend "azurerm" {
    storage_account_name = "saiac"
    container_name       = "tfstate"
    key                  = "dev.terraform.tfstate"
    resource_group_name  = "rg-devops"
  }
}

data "azurerm_subscription" "primary" {
}

resource "azurerm_role_definition" "roles" {
  count = length(var.roles)
  name  = "${var.role_prefix}${var.roles[count.index]["suffix_name"]}${var.role_suffix}"
  scope = "${data.azurerm_subscription.primary.id}"

  permissions {
    actions     = split(",", var.roles[count.index]["actions"])
    not_actions = split(",", var.roles[count.index]["not_actions"])
  }

  assignable_scopes = ["${data.azurerm_subscription.primary.id}"]
}
And this is the script for the resource group creation:
terraform {
  backend "azurerm" {
    storage_account_name = "saiac"
    container_name       = "tfstate"
    key                  = "dev.terraform.tfstate"
    resource_group_name  = "rg-devops"
  }
}

resource "azurerm_resource_group" "rg" {
  count    = "${length(var.rg_purposes)}"
  name     = "${var.rg_prefix}-${var.rg_postfix}-${var.rg_purposes[count.index]}"
  location = "${var.rg_location}"
  tags     = "${var.rg_tags}"
}
If I remove the backend block, everything works as expected. Do I actually need the backend block?
Terraform uses the .tfstate file to compare your code with the existing cloud infrastructure; it is the backbone of Terraform.
If your code and the existing infrastructure differ, Terraform will destroy the existing resources and apply your code changes.
To overcome this, Terraform provides an import facility: you can import the existing resource, and Terraform will update its .tfstate file.
The .tfstate file location must be specified in your backend configuration; best practice is to store the .tfstate file in cloud storage, not in a local directory.
When you run the terraform init command, it will check for the .tfstate file.
Below is a sample backend.tf file (AWS S3 is used):

terraform {
  backend "s3" {
    bucket  = "backends.terraform.file"
    key     = "my-terraform.tfstate_key"
    region  = "my-region-1"
    encrypt = "false"
    acl     = "bucket-owner-full-control"
  }
}
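Following the import suggestion above, bringing an already-existing resource group under Terraform's control would look something like this (a sketch; the subscription ID and resource group name are placeholders):

terraform import 'azurerm_resource_group.rg[0]' \
  /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-existing-rg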
A Terraform backend is not required for Terraform. If you do not use one, however, no one else will be able to pull your code and run your Terraform: the state will ONLY be stored locally, in a terraform.tfstate file in your working directory. This means that if you lose your local files, you're in trouble. It is recommended to use a backend that also supports state locking, which azurerm does. With a backend in place, the state gets pulled on terraform init after pulling the repo.
We are using Terraform to codify all of our infrastructure on AWS. We're using Gitlab for SCM and Gitlab-Runner for CI/CD. We've also started using Atlantis so that we can run all of our Terraform automatically in pull requests.
The Terraform provider we have configured in code looks something like this:
provider "aws" {
region = "us-east-1"
assume_role {
role_arn = "arn:aws:iam::123456789012:role/atlantis"
session_name = "terraform"
}
}
The Gitlab-Runner instance on which Atlantis runs has permissions to assume the "atlantis" role that is referenced in the assume_role block. And that all works great.
However, there are times when I still need to run Terraform manually from the command line. The trouble with this is that when I do so, my account (which is configured as a federated/SAML login) isn't able to assume roles. It does have access to do everything relating to creating and destroying resources, though.
This means that I need to temporarily delete the assume_role block above on my local machine and then run my Terraform commands. This isn't the end of the world, but it is a little bit annoying. What I want to do is create a second "aws" provider, one which doesn't try to assume another role, like this:
provider "aws" {
region = "us-east-1"
alias = "local-cli"
}
And then I'd call something like terraform plan --provider=local-cli. But sadly there is no such --provider option; I just made that up now. According to the Terraform docs, it looks like I can configure a second provider on a per-resource basis, but really what I'm trying to do is to run Terraform with a second provider on a per-session basis. Are there any solutions for this?
This is what I do: I have created a small bash wrapper that generates the Terraform code that changes between environments, writing the provider.tf file for you:
cat << EOF > ./provider.tf
terraform {
  backend "s3" {
    bucket         = "${TF_VAR_state_bucket}"
    dynamodb_table = "${DYNAMODB_STATE_TABLE}"
    key            = "terraform/$STATE_PATH/terraform.tfstate"
    region         = "$REGION"
    encrypt        = "true"
  }
}

provider "aws" {
  region  = "$REGION"
  version = "1.51.0"
}

provider "archive" { version = "1.1.0" }
provider "external" { version = "1.0.0" }
provider "local" { version = "1.1.0" }
provider "null" { version = "1.0.0" }
provider "random" { version = "2.0.0" }
provider "template" { version = "1.0.0" }
provider "tls" { version = "1.2.0" }
EOF
This way the provider and setup can change completely across environments.
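For example, the wrapper might be driven like this before each run (the variable values and script name are hypothetical):

export TF_VAR_state_bucket="my-terraform-state"
export DYNAMODB_STATE_TABLE="my-terraform-locks"
export STATE_PATH="networking/prod"
export REGION="us-east-1"

./generate-provider.sh   # writes ./provider.tf as shown above
terraform init -reconfigure
terraform plan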