Get the .tf file from an Azure remote backend in Terraform

I am trying out Terraform operations using Azure as the remote backend. So far, I have been able to store the state remotely in Azure. I am now unable to retrieve the state from the remote backend, as I do not store the .tf or .tfstate files locally. How can I fetch them from the remote backend? This is the code I have:
terraform init -backend-config=config.backend.tfbackend
terraform state pull # fails due to no configuration file existing locally
This is my config.backend.tfbackend
backend "azurerm" {
resource_group_name = "rg098"
storage_account_name = "tstate6565"
container_name = "tstate"
key = "test5176"
}
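For reference, terraform init -backend-config=FILE performs a partial backend configuration: a (possibly empty) backend block must still exist in a local .tf file, and the .tfbackend file carries only the arguments, not the backend wrapper. A minimal sketch of that layout, reusing the values above:
# main.tf: minimal configuration so init knows which backend to configure
terraform {
  backend "azurerm" {}
}
# config.backend.tfbackend: arguments only, no backend wrapper
resource_group_name  = "rg098"
storage_account_name = "tstate6565"
container_name       = "tstate"
key                  = "test5176"
With those two files in place, the original commands should retrieve the remote state:
terraform init -backend-config=config.backend.tfbackend
terraform state pull > remote.tfstate   # writes the stored state to a local file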

Related

Unable to pull terraform state from AWS-S3

I'm trying to create a mechanism whereby I use a Terraform backend to upload the state to an S3 bucket, so that my teammate can use my Terraform state to resume my work. This is my setup:
terraform {
  backend "s3" {
    bucket         = "username-terraform-state"
    key            = "billow/terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "username-terraform-state-test-locks"
    encrypt        = true
  }
}
data "terraform_remote_state" "network" {
backend = "s3"
config = {
bucket = "username-terraform-state"
key = "billow/terraform.tfstate"
region = var.region
}
}
With this setup, I have two folders in the S3 bucket. One is billow/, with a terraform.tfstate file. There is another folder, env:/remote_s3/billow/ (remote_s3 is the name of my Terraform workspace), with another terraform.tfstate. Both of them are also updated when I execute a terraform import command.
What I want is that when I create a new workspace, I am able to pull the state file from the existing folder in the bucket and continue the project. The steps I took were placing the same .tf file in the directory and running terraform init, terraform refresh, and then terraform state pull to pull the state file. However, this only pulls an empty state file, and I would need to re-import all the resources again.
So here are my two questions:
Why are there two folders in the bucket? I thought with my backend setup there should be only one of them.
What should I do to make it so that when I set up a new terraform workspace, I would be able to import the whole state file from my previously saved terraform state?
Thanks!
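For context on the two folders: the S3 backend stores the default workspace's state at the configured key, and every other workspace under a prefix that defaults to env: (configurable via workspace_key_prefix). A sketch of the resulting layout, and of pulling state for a selected workspace:
# default workspace:    s3://username-terraform-state/billow/terraform.tfstate
# remote_s3 workspace:  s3://username-terraform-state/env:/remote_s3/billow/terraform.tfstate
terraform workspace select remote_s3      # state commands act on the selected workspace
terraform state pull > remote_s3.tfstate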

data source terraform_remote_state with workspaces

I'm running Terraform v0.14.8 with a non-default Terraform workspace.
I have an Atlantis server that handles my plans and applies.
When I clone my repo locally and run my plans, I get errors about my data source. I don't quite understand why, as I don't get these errors on my Atlantis server, which I believe performs the same operations. The Atlantis server also uses tf v0.14.8.
My terraform:
data "terraform_remote_state" "route53" {
backend = "s3"
config = {
key = "web/terraform.tfstate"
region = "us-west-2"
bucket = "prod-terraform"
role_arn = "arn:aws:iam::xxxxxxxxxx:role/atlantis"
}
Before I run my local plan, I switch the workspace:
terraform workspace select web
# in addition I also tried
export TF_WORKSPACE=web
My plan:
terraform plan
...
Error: Unable to find remote state
on provider.tf line 46, in data "terraform_remote_state" "route53":
46: data "terraform_remote_state" "route53" {
No stored state was found for the given workspace in the given backend.
I could easily prefix my "key" with env:/ and things would work, but I'm trying to figure out how to do this without making that adjustment, seeing that my Atlantis server just works.
data "terraform_remote_state" "route53" {
backend = "s3"
config = {
key = "env:/web/web/terraform.tfstate"
region = "us-west-2"
bucket = "prod-terraform"
role_arn = "arn:aws:iam::xxxxxxxxxx:role/atlantis"
}
Your question seems to imply some confusion over which of these backends the web workspace is selected for. Running terraform workspace select web selects the web workspace from the backend of the current configuration (the directory where you are running Terraform), but I suspect your intent is to select the web workspace from the backend you've configured in data "terraform_remote_state" instead.
If so, you can do that by setting the workspace argument in the data source configuration:
data "terraform_remote_state" "route53" {
backend = "s3"
workspace = "web"
config = {
key = "web/terraform.tfstate"
region = "us-west-2"
bucket = "prod-terraform"
role_arn = "arn:aws:iam::xxxxxxxxxx:role/atlantis"
}
}
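With workspace set explicitly, the data source reads env:/web/web/terraform.tfstate from the bucket regardless of which workspace is selected locally, which matches the env:/-prefixed key you arrived at by editing the key by hand.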

Removing Backend pools and load balancer rules before creating another

I have a Terraform script which creates backend address pools and load balancer rules for a load balancer in a resource group. These tasks are included in an Azure pipeline. The first time I run the pipeline, everything is created properly. If I run the pipeline a second time, it does not update the existing resources: it keeps the backend address pools and load balancer rules created by the previous release and adds extra ones for this release, which causes duplicates. Any suggestions on this, please?
resource "azurerm_lb_backend_address_pool" "example" {
resource_group_name = azurerm_resource_group.example.name
loadbalancer_id = azurerm_lb.example.id
name = "BackEndAddressPool"
}
resource "azurerm_lb_rule" "example" {
resource_group_name = azurerm_resource_group.example.name
loadbalancer_id = azurerm_lb.example.id
name = "LBRule"
protocol = "All"
frontend_port = 0
backend_port = 0
frontend_ip_configuration_name = "PublicIPAddress"
enable_floating_ip = true
backend_address_pool_id = azurerm_lb_backend_address_pool.example
}
This is likely happening because the Terraform state file is being lost between pipeline runs.
By default, Terraform stores state locally in a file named terraform.tfstate. When working with Terraform in a team, use of a local file makes Terraform usage complicated because each user must make sure they always have the latest state data before running Terraform and make sure that nobody else runs Terraform at the same time.
With remote state, Terraform writes the state data to a remote data store, which can then be shared between all members of a team. Terraform supports storing state in Terraform Cloud, HashiCorp Consul, Amazon S3, Alibaba Cloud OSS, and more.
Remote state is a feature of backends. Configuring and using remote backends is easy and you can get started with remote state quickly. If you then want to migrate back to using local state, backends make that easy as well.
You will want to configure Remote State storage to keep the state. Here is an example using Azure Blob Storage:
terraform {
  backend "azurerm" {
    resource_group_name  = "StorageAccount-ResourceGroup"
    storage_account_name = "abcd1234"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}
This stores the state as a Blob with the given Key within the Blob Container within the Blob Storage Account. This backend also supports state locking and consistency checking via native capabilities of Azure Blob Storage.
This is more completely described in the azurerm Terraform backend docs.
Microsoft also provides a Tutorial: Store Terraform state in Azure Storage, which goes through the setup step by step.
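In a pipeline, this means each run re-attaches to the shared state during terraform init, so a second release plans against what the first one created rather than against an empty state. A minimal sketch of such a pipeline step, assuming the agent has the Azure CLI available and uses the storage account from the example above:
export ARM_ACCESS_KEY=$(az storage account keys list \
  --resource-group StorageAccount-ResourceGroup \
  --account-name abcd1234 \
  --query '[0].value' -o tsv)
terraform init    # re-attaches to the shared state blob
terraform plan    # now diffs against the previous release's resources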

state management in terraform

I'm building Terraform scripts to orchestrate an Azure deployment. I use Azure Blob Storage to store a tfstate file, and this file is shared between several IaC pipelines.
If, for instance, I create an Azure resource group with Terraform and then, once that is done, try to create a new custom role, terraform plan will mark the resource group for destruction.
This is the script for the role creation:
terraform {
  backend "azurerm" {
    storage_account_name = "saiac"
    container_name       = "tfstate"
    key                  = "dev.terraform.tfstate"
    resource_group_name  = "rg-devops"
  }
}
data "azurerm_subscription" "primary" {
}
resource "azurerm_role_definition" "roles" {
count = length(var.roles)
name = "${var.role_prefix}${var.roles[count.index]["suffix_name"]}${var.role_suffix}"
scope = "${data.azurerm_subscription.primary.id}"
permissions {
actions = split(",", var.roles[count.index]["actions"])
not_actions = split(",", var.roles[count.index]["not_actions"])
}
assignable_scopes = ["${data.azurerm_subscription.primary.id}"]
}
and this is script for resource group creation:
terraform {
  backend "azurerm" {
    storage_account_name = "saiac"
    container_name       = "tfstate"
    key                  = "dev.terraform.tfstate"
    resource_group_name  = "rg-devops"
  }
}
resource "azurerm_resource_group" "rg" {
count = "${length(var.rg_purposes)}"
name = "${var.rg_prefix}-${var.rg_postfix}-${var.rg_purposes[count.index]}"
location = "${var.rg_location}"
tags = "${var.rg_tags}"
}
If I remove the backend block, everything works as expected, does that mean I need the backend block?
Terraform uses the .tfstate file to compare your code against the existing cloud infrastructure; it is the backbone of Terraform.
If your code and the existing infrastructure differ, Terraform will destroy the existing resources and apply your code changes.
To overcome this, Terraform provides an import facility: you can import the existing resource, and Terraform will update its .tfstate file.
The state location must be specified in your backend configuration; best practice is to store the .tfstate file in cloud storage, not in a local directory.
When you run the terraform init command, Terraform will check for the .tfstate file.
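For illustration, importing an existing resource group into the state might look like this (the resource address matches the count-based resource above; the subscription ID and group name are hypothetical placeholders):
# adopt the existing group so Terraform stops planning to destroy or recreate it
terraform import 'azurerm_resource_group.rg[0]' \
  /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-devops-dev-core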
Below is a sample backend.tf file (here using the AWS S3 backend):
terraform {
  backend "s3" {
    bucket  = "backends.terraform.file"
    key     = "my-terraform.tfstate_key"
    region  = "my-region-1"
    encrypt = "false"
    acl     = "bucket-owner-full-control"
  }
}
A Terraform backend is not required for Terraform. If you do not use one, however, no one else will be able to pull your code and run your Terraform: the state will only be stored locally, in a terraform.tfstate file in your working directory. This means that if you lose your local files, you're in trouble. It is recommended to use a backend that also supports state locking, which azurerm does. With a backend in place, the state gets pulled on terraform init after pulling the repo.
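If you started with local state and add a backend later, recent Terraform versions can copy the existing local state into the backend during initialization, for example:
terraform init -migrate-state   # offers to copy the local terraform.tfstate into the new backend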

Terraform remote state azure

I have worked with terraform before, where terraform can place the tfstate files in S3. Does terraform also support azure blob storage as a backend? What would be the commands to set the backend to be azure blob storage?
As of Terraform 0.7 (not yet released at the time of writing, but you can compile from source), support for Azure blob storage has been added.
The question asks for some commands, so I'm adding a little more detail in case anyone needs it. I'm using Terraform v0.12.24 and azurerm provider v2.6.0. You need two things:
Create a storage account (general purpose v2) and a container for storing your states.
Configure your environment and your main.tf
As for the second point, your terraform block in main.tf should contain an "azurerm" backend:
terraform {
  required_version = "=0.12.24"
  backend "azurerm" {
    storage_account_name = "abcd1234"
    container_name       = "tfstatecontainer"
    key                  = "example.prod.terraform.tfstate"
  }
}
provider "azurerm" {
  version         = "=2.6.0"
  features {}
  subscription_id = var.subscription_id
}
Before calling plan or apply, set the ARM_ACCESS_KEY environment variable with a bash export:
export ARM_ACCESS_KEY=<storage access key>
Finally, run the init command:
terraform init
Now, if you run terraform plan, you will see the tfstate created in the container. Azure Blob Storage has a locking feature built in, in case anyone tries to update the state file at the same time.
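To confirm that Terraform is really reading from the blob rather than a local file, you can print the stored state directly:
terraform state pull   # prints the JSON state held in the Azure blob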
