Terraform: Cannot find modules moved to new state

I am moving S3 buckets between states. Specifically, I am moving 2 buckets from state_a to state_b. After copying the Terraform code into state_b's root folder, I am running the following commands:
terraform state mv -state=terraform.tfstate -state-out=../state_b/terraform.tfstate module.bucket_a module.bucket_a
terraform state mv -state=terraform.tfstate -state-out=../state_b/terraform.tfstate module.bucket_b module.bucket_b
I am running these commands under the state_a folder. The output of both is
Move "module.bucket_a" to "module.bucket_a"
Successfully moved 1 object(s).
and the same for bucket_b.
Then I remove the code for these buckets from state_a and run a terraform plan under state_a. Everything looks good:
Plan: 0 to add, 0 to change, 0 to destroy.
Then I run terraform plan under state_b. I would expect to see the same result: 0 to add, 0 to change, 0 to destroy. Unfortunately, I am seeing the following:
Plan: 2 to add, 0 to change, 0 to destroy.
# module.bucket_a.aws_s3_bucket.retention_bucket[0] will be created
+ resource "aws_s3_bucket" "retention_bucket" {
....
# module.bucket_b.aws_s3_bucket.retention_bucket[0] will be created
+ resource "aws_s3_bucket" "retention_bucket" {
....
So it seems the move was not successful. I tried terraform state list under state_b to list all the modules, in case I had moved them to a different address, but I cannot find anything under the bucket_a or bucket_b names.
What did I do wrong?
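(For reference, the destination state file can also be inspected directly with the -state flag, without switching directories; a minimal sketch, assuming the layout above and a Terraform version whose state list subcommand still accepts -state:)

terraform state list -state=../state_b/terraform.tfstate
# Should print module.bucket_a... and module.bucket_b... entries
# if the move actually landed in that file.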

Related

How does terraform state and its refresh work

I'm new to terraform and want to understand how the terraform state and the refresh work together. As I understand it, you write the desired resources in a .tf file and run 'terraform apply', and the resources are created if they did not exist. Also, the terraform.tfstate file is created, containing all the metadata about the resource from terraform's viewpoint. This way, terraform can compare the desired state (in the .tf file) to the current state (in the terraform.tfstate file). So far so good.
But what happens if there is a change to the 'real world' resource?
Shouldn't terraform recognize this? Isn't that what terraform's 'refresh' is for?
However, I've experimented a bit and created this simple resource:
resource "local_file" "mypet" {
filename = "/tmp/mypet.txt"
content = "I love pets"
file_permission = "0755"
}
Running 'terraform init && terraform apply' created the file /tmp/mypet.txt with the content "I love pets". Cool.
Then I changed the file permissions:
chmod 0444 /tmp/mypet.txt
ls -l /tmp/mypet.txt
-r--r--r--. 1 devel devel 11 Nov 25 09:28 /tmp/mypet.txt
And another 'terraform apply':
local_file.mypet: Refreshing state... [id=4246687863ee17bf4daf5fc376ab01222e989aca]
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
So terraform does not recognize this real-world change, even though the real resource has drifted from the state known to terraform!
Another interesting example: I opened the file with vi /tmp/mypet.txt but did not change the content at all; I left vi with :wq. This alone changes the checksum of the file (presumably because vi writes a trailing newline on save):
sha256sum /tmp/mypet.txt
595e30750adb4244cfcfc31de9b37d1807c20e7ac2aef049755b6c0e5f580170 /tmp/mypet.txt
[devel@fedora test]$ !vi
vi /tmp/mypet.txt
[devel@fedora test]$ sha256sum /tmp/mypet.txt
4fcc76cae3c8950e9633ca8220e36f4313a97d0f436ab050fd1d586b40d682f0 /tmp/mypet.txt
This is recognized by terraform and thus it wants to recreate the file:
terraform apply
local_file.mypet: Refreshing state... [id=4246687863ee17bf4daf5fc376ab01222e989aca]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
+ create
Terraform will perform the following actions:
# local_file.mypet will be created
+ resource "local_file" "mypet" {
+ content = "I love pets"
+ directory_permission = "0777"
+ file_permission = "0755"
+ filename = "/tmp/mypet.txt"
+ id = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
This means terraform did recognize the real-world change this time.
When I call 'terraform apply' without refreshing, it does not see the change that happened in the real world, as expected:
terraform apply --refresh=false
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
It seems that only some attributes of the local provider's file resource type can detect drift from the terraform state.
Is that true? Is it documented somewhere whether an attribute of a resource type is "drift-aware" or not?
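(One way to see which attributes a provider actually records in state, and therefore which ones a refresh can compare against the real world, is to inspect the state entry; a minimal sketch using terraform state show:)

terraform state show local_file.mypet
# Lists the attributes terraform tracks for this resource (id, content,
# file_permission, ...). Whether drift in an attribute is detected depends
# on whether the provider's refresh logic actually re-reads it from disk.

Newer Terraform versions also offer terraform plan -refresh-only, which surfaces differences between the state and the real infrastructure without proposing configuration changes.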

Terragrunt provider.tf generation not generating files

The expectation is that when running terragrunt run-all apply from the root directory, a provider.tf file will be created in the subdirectories. I've verified my backend is able to talk to my Azure storage account, and it will create a terraform.tfstate file there. I expect a provider.tf file to appear under each "service#" folder. I'm extremely new to terragrunt; this is my first exercise with it. I am not actually trying to deploy any terraform resources, just to have the provider.tf file created in my subdirectories. TF version is 1.1.5. Terragrunt version is 0.36.1.
my folder structure
tfpractice
├── terragrunt.hcl
├── environment_vars.yaml
├── dev
│   ├── service1
│   │   └── terragrunt.hcl
│   └── service2
│       └── terragrunt.hcl
└── prod
    ├── service1
    │   └── terragrunt.hcl
    └── service2
        └── terragrunt.hcl
root terragrunt.hcl config
# Generate provider configuration for all child directories
generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite"
  contents  = <<EOF
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "2.94.0"
    }
  }
  backend "azurerm" {}
}

provider "azurerm" {
  features {}
}
EOF
}

# Remote backend settings for all child directories
remote_state {
  backend = "azurerm"
  config = {
    resource_group_name  = local.env_vars.resource_group_name
    storage_account_name = local.env_vars.storage_account_name
    container_name       = local.env_vars.container_name
    key                  = "${path_relative_to_include()}/terraform.tfstate"
  }
}

# Collect values from environment_vars.yaml and set as local variables
locals {
  env_vars = yamldecode(file("environment_vars.yaml"))
}
environment_vars.yaml config
resource_group_name: "my-tf-test"
storage_account_name: "mystorage"
container_name: "tfstate"
terragrunt.hcl config in the service# folders
# Collect values from parent environment_vars.yaml file and set as local variables
locals {
  env_vars = yamldecode(file(find_in_parent_folders("environment_vars.yaml")))
}

# Include all settings from the root terragrunt.hcl file
include {
  path = find_in_parent_folders()
}
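(When a generate block doesn't seem to produce anything, it can help to check where Terragrunt actually runs Terraform for a given folder, since generated files are written into that working directory; a minimal sketch, assuming a Unix shell:)

cd dev/service1
terragrunt terragrunt-info
# Prints a JSON blob including "WorkingDir" and "DownloadDir"; if Terragrunt
# uses a .terragrunt-cache download directory, generated files land there
# rather than next to terragrunt.hcl.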
When I run terragrunt run-all apply, this is the output:
Are you sure you want to run 'terragrunt apply' in each folder of the stack described above? (y/n) y
Initializing the backend...
Initializing provider plugins...
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration
and found no differences, so no changes are needed.
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
(The same init, "no changes", and "apply complete" output is printed four times, once per service folder.)
It looks successful; however, NO provider.tf files show up in ANY directory, not even root. It just creates a terraform.tfstate file under the service# directories.
But... if I run terragrunt init from the root directory, it will create the provider.tf file as expected in the root directory. This does NOT work in the service# directories, although the terragrunt init there is successful.
What am I missing? This is the most basic terragrunt use case, but the examples lead me to believe this should just work.
I got it to work, but not with the terragrunt run-all apply command, which doesn't work at all. Instead I have to run terragrunt apply at the root; if you don't run it at the root, all subfolders get grouped together rather than under their dev/prod subfolder. Then I have to go into every subfolder and run it again. It's the only way I've gotten it to work.
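(One quick way to confirm whether the files were generated somewhere unexpected is to search the whole tree, including any Terragrunt cache directories; a minimal sketch, assuming a Unix shell:)

find . -name provider.tf -not -path '*/.terraform/*'
# If matches appear only under .terragrunt-cache directories, the generate
# block is working but writing into Terragrunt's working copy.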

Terraform long term lock

Using Terraform 0.12 with the remote state in an S3 bucket with DynamoDB locking.
It seems that a common pattern for Terraform automation goes more or less like this:
1. terraform plan -out=plan
2. [review plan]
3. terraform apply plan
But then, maybe I'm overlooking something obvious, there's no guarantee other terraform apply invocations haven't updated the infrastructure between steps 1 and 3 above.
I know locking will prevent a concurrent run of terraform apply while another one is running (and locking is enabled) but can I programmatically grab a "long term locking" so the effective workflow looks like this?
1. [something to the effect of...] "terraform lock"
2. terraform plan -out=plan
3. [review plan]
4. terraform apply plan
5. [something to the effect of...] "terraform release lock"
Are there any other means to "protect" infrastructure from concurrent/interdependent updates that I'm overlooking?
You don't need this as long as you are only worrying about the state file changing.
If you provide a plan output file to apply and the state has changed since the plan was created, Terraform will error before making any changes, complaining that the saved plan is stale.
As an example:
$ cat <<'EOF' >> main.tf
> resource "random_pet" "bucket_suffix" {}
>
> resource "aws_s3_bucket" "example" {
>   bucket = "example-${random_pet.bucket_suffix.id}"
>   acl    = "private"
>
>   tags = {
>     ThingToChange = "foo"
>   }
> }
> EOF
$ terraform init
# ...
$ terraform apply
# ...
$ sed -i 's/foo/bar/' main.tf
$ terraform plan -out=plan
# ...
$ sed -i 's/bar/baz/' main.tf
$ terraform apply
# ...
$ terraform apply plan
Error: Saved plan is stale
The given plan file can no longer be applied because the state was changed by
another operation after the plan was created.
What it won't do is fail if something outside of Terraform has changed anything. So if, instead of applying Terraform again with baz as the tag for the bucket, I had changed the tag on the bucket via the AWS CLI or the AWS console, then Terraform would have happily changed it back to bar on the apply with the stale plan.
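(As a small footnote to the above: terraform plan also has a -detailed-exitcode flag, which makes drift scriptable. One pattern, sketched here under the assumption of a POSIX shell, is to re-plan immediately after applying the saved plan so that anything changed out-of-band shows up as a non-empty plan:)

terraform apply plan

# With -detailed-exitcode, plan's exit status is meaningful:
#   0 = no changes, 1 = error, 2 = changes present
terraform plan -detailed-exitcode
case $? in
  0) echo "in sync with real infrastructure" ;;
  2) echo "drift detected after apply" >&2 ;;
  *) echo "plan failed" >&2 ;;
esac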

terraform: How to destroy oldest instance when lowering aws_instance count

Given a pair of aws instances deployed with
provider "aws" {
region = "us-east-1"
}
resource "aws_instance" "example" {
count = 2
ami = "ami-2757f631"
instance_type = "t2.micro"
tags = {
Name = "Test${count.index}"
}
}
Lowering it to count = 1 will destroy the last instance deployed:
Terraform will perform the following actions:
- aws_instance.example[1]
Is it possible to get terraform to destroy the first instance instead? i.e.
Terraform will perform the following actions:
- aws_instance.example[0]
Terraform tracks which instance is which via its state. When you reduce the count on the aws_instance resource, Terraform will simply remove the later instances. This shouldn't really be much of an issue, because I would only recommend deploying groups of homogeneous instances that can handle the load being interrupted (and that would sit behind some form of load balancer), but if you really need to, you can edit the state file to reorder the instances before reducing the count.
The state file is serialised as JSON, so you can just edit it directly (making sure it's uploaded back to whatever you're using for remote state, if you are using remote state), or better yet you can use the first-class state-editing tools the Terraform CLI provides: terraform state mv.
As an example you can do this:
# Example from question has been applied already
# `count` is edited from 2 to 1
$ terraform plan
...
aws_instance.example[1]: Refreshing state... (ID: i-0c227dfbfc72fb0cd)
aws_instance.example: Refreshing state... (ID: i-095fd3fdf86ce8254)
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
- destroy
Terraform will perform the following actions:
- aws_instance.example[1]
Plan: 0 to add, 0 to change, 1 to destroy.
...
$ terraform state list
aws_instance.example[0]
aws_instance.example[1]
$ terraform state mv aws_instance.example[1] aws_instance.example[2]
Moved aws_instance.example[1] to aws_instance.example[2]
$ terraform state mv aws_instance.example[0] aws_instance.example[1]
Moved aws_instance.example[0] to aws_instance.example[1]
$ terraform state mv aws_instance.example[2] aws_instance.example[0]
Moved aws_instance.example[2] to aws_instance.example[0]
$ terraform plan
...
aws_instance.example[1]: Refreshing state... (ID: i-095fd3fdf86ce8254)
aws_instance.example: Refreshing state... (ID: i-0c227dfbfc72fb0cd)
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place
- destroy
Terraform will perform the following actions:
~ aws_instance.example
tags.Name: "Test1" => "Test0"
- aws_instance.example[1]
Plan: 0 to add, 1 to change, 1 to destroy.
...
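(As an aside, not part of the original answer: on Terraform 0.12.6 and later, using for_each with explicit keys instead of count avoids the reordering problem entirely, because instances are tracked by key rather than by position; a hedged sketch:)

resource "aws_instance" "example" {
  for_each      = toset(["test0", "test1"])
  ami           = "ami-2757f631"
  instance_type = "t2.micro"

  tags = {
    Name = each.key
  }
}

# Removing "test0" from the set destroys exactly
# aws_instance.example["test0"], regardless of creation order.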

terraform init creating empty tfstate file

When I run terraform init, TF does not pull down the tfstate (my tfstate file is in an S3 bucket). Also, I am not able to see the terraform backend-config file inside the .terraform folder.
I am using terraform version 0.10.4.
Outputs:
$ terraform --version
Terraform v0.10.4
$ terraform init \
    -lock="true" \
    -backend-config="bucket=$TF_STATE_BUCKET" \
    -backend-config="key=$TF_STATE_KEY" \
    -backend-config="dynamodb_table=$TF_LOCK_TABLE" \
    -backend-config="region=$AWS_REGION" \
    -backend-config="profile=$AWS_PROFILE" \
    -backend-config="encrypt=true" \
    .
Downloading modules...
Get: git::ssh://XXXXXXXXXXXXXXXXX/add/tf-vpc.git?ref=1.0.1
Get: git::ssh://XXXXXXXXXXXXXXXXX/add/tf-ec-redis.git?ref=1.1.3
Get: git::ssh://XXXXXXXXXXXXXXXXX/add/tf-rds-pg.git?ref=1.3.0
Initializing provider plugins...
Checking for available provider plugins on https://releases.hashicorp.com...
Downloading plugin for provider "aws" (0.1.4)...
The following providers do not have any version constraints in configuration, so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking changes, it is recommended to add version = "..." constraints to the corresponding provider blocks in configuration, with the constraint strings suggested below.
provider.aws: version = "~> 0.1"
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see any changes that are required for your infrastructure. All Terraform commands should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary.
$ ll -al .terraform/
total 8
drwxr-xr-x 1 XXXXX 1049089 0 Sep 12 18:10 modules/
drwxr-xr-x 1 XXXXX 1049089 0 Sep 12 18:10 plugins/
Previously, TF would keep a local backup of the S3 tfstate file in this location, but it's not there.
This is expected behaviour after 0.9.x. The local tfstate file at .terraform/terraform.tfstate is almost an empty file; the only thing that changes in it is the serial number, which keeps increasing:
"serial": 1,
If you don't run terraform apply, the remote tfstate file will not be updated, and if you never run terraform apply, the remote tfstate file will not exist at all.
So try making some changes, then check the remote tfstate file (in your case, s3://$TF_STATE_BUCKET/$TF_STATE_KEY); you should see the difference.
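(A quick way to confirm whether the remote state object exists yet; a hedged sketch using the AWS CLI with the same variables as above:)

aws s3 ls "s3://$TF_STATE_BUCKET/$TF_STATE_KEY" --profile "$AWS_PROFILE"
# No output and a non-zero exit code mean the state object
# doesn't exist in the bucket yet.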
