Terraform tfstate is not refreshing from remote S3 bucket

I deployed AWS infrastructure using the Terraform code on my local machine, which stored terraform.tfstate on my machine. Now I want other developers to use the same state file, so I copied the code to a GitHub repo, added state.tf, and copied the terraform.tfstate from my local machine to the S3 bucket prefix that my state file points to. I also made one change in the repo: instead of having one large .tf file, I divided it into three files - state.tf, vpc.tf and dynamodb.tf.
My state.tf file:
terraform {
  backend "s3" {
    bucket = "testing-d-tf-state"
    key    = "aws-xyz/terraform.tfstate"
    region = "us-west-2"
  }
}
However, when my developer runs the code on his machine, he gets:
Plan: 26 to add, 0 to change, 25 to destroy.
I can't figure out why the terraform.tfstate file isn't being refreshed and read correctly, so that he sees no "add" or "destroy", since no change has been made to the infrastructure.

You shouldn't manually copy the terraform.tfstate file to the remote location. After you've added the backend configuration, re-run terraform init and Terraform will take care of setting the state up correctly for you, both locally and in the remote bucket (a sketch of that init flow follows the checklist below).
After you've done this, there are a few things you should do to confirm it worked:
Log into the AWS console and confirm that there is now a terraform.tfstate file in the correct bucket and key.
Move the local terraform.tfstate file aside (don't delete it yet, just in case), then run something like terraform state list, which queries the state file. If it works, your remote state configuration is working.
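For reference, moving an existing local state into the S3 backend usually looks something like this (bucket and key come from the question's state.tf; the -migrate-state flag is available in recent Terraform versions and prompts before copying anything):
# run in the directory containing state.tf, with the old local terraform.tfstate still present
terraform init -migrate-state   # Terraform offers to copy the existing local state into the S3 bucket
terraform state list            # sanity check: should list the resources you already deployed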

Related

Terraform doesn't pick up one certain variable change

I have an ECS Fargate cluster set up that currently has 4 tasks of the same app running on it. The desired number of tasks is defined within a variable:
variable "desired_task_count" {
description = "Desired ECS tasks to run in service"
default = 4
}
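For context, the variable is presumably wired into the ECS service roughly like this (a hypothetical sketch; the resource and cluster names are assumptions, not taken from the question):
resource "aws_ecs_service" "app" {
  name            = "app"                           # assumed service name
  cluster         = aws_ecs_cluster.app.id          # assumed cluster reference
  task_definition = aws_ecs_task_definition.app.arn # assumed task definition reference
  desired_count   = var.desired_task_count          # the variable that is seemingly ignored
}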
When I change the default value to any given number, save it and run terraform plan or terraform apply, terraform doesn't see any changes. The tfstate file remains unchanged.
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration and found
no differences, so no changes are needed.
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
TFstate:
"desired_count": 4,
If I change any other variable in that exact same variables.tf file, terraform picks up the changes and applies them.
What I tried to do:
Create a new variable to pass the value - didn't work.
Rebuild the infrastructure with destroy and then apply - this did work, since it writes a new state file.
TF and provider versions:
terraform --version
Terraform v1.2.4
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v3.75.2
Could this be a provider issue? It seems like the problem only occurs with a variable that points to a specific setting in a specific resource.
What else can I check?
SOLVED:
There was a lifecycle block in the ECS resource that contained a list of ignored changes. It was there because of autoscaling, which had been temporarily removed from the project.
lifecycle {
  ignore_changes = [task_definition, desired_count]
}
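Since autoscaling is out of the picture for now, dropping desired_count from ignore_changes should let the variable flow through again on the next plan, e.g.:
lifecycle {
  ignore_changes = [task_definition]   # desired_count removed so plan picks up var.desired_task_count
}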

How to properly reset Terraform default tfstate?

Recently, I've started using a workspace per environment in my Terraform configuration. I ended up having three workspaces: dev, staging and production. But for historical reasons my default workspace still contains obsolete tfstate.
What is the proper way to "reset" it to the default state, i.e. having nothing in it?
One way to achieve this is to manually execute terraform state rm for each resource, but that way I would end up with hundreds of such calls. Is there some kind of terraform state reset analogue?
The easiest way I know of so far is to create a new state.
For local state...
Delete the local state files
.terraform
.terraform.lock.hcl
terraform.tfstate
terraform.tfstate.backup
and run terraform init to create a new state.
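In shell form, that local reset amounts to something like the following (destructive, so keep a backup copy of the old state first):
cp terraform.tfstate terraform.tfstate.manual-backup   # keep the old state around, just in case
rm -rf .terraform .terraform.lock.hcl terraform.tfstate terraform.tfstate.backup
terraform init                                         # re-initialize; an empty state will be used from here on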
For (AWS s3) remote state...
Change the backend storage "key" path.
For example...
terraform {
  backend "s3" {
    bucket = "terraform-storage"
    key    = "backends/stateX"   ###...changed to "backends/stateY"
    region = "us-west-1"
  }
}
...and then run terraform init -reconfigure to create the new state and attach the current project to that state. You can then clean up the old remote state file using whatever method is convenient. Old state files shouldn't interfere with new state files, but best practice is to clean them up anyway.
If you have AWS CLI installed, you can clean up the old state file using a one-liner...
aws s3api delete-object --bucket terraform-storage --key backends/stateX
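Alternatively, if you'd rather empty the existing default state than point at a new key, the per-resource removal from the question can at least be scripted instead of typed by hand. A rough sketch (be careful with shell quoting if any addresses contain index keys like ["foo"]):
terraform workspace select default
terraform state list | xargs -n1 terraform state rm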

could not build dsn for snowflake connection: no authentication method provided

I am following this terraforming Snowflake tutorial: https://quickstarts.snowflake.com/guide/terraforming_snowflake/index.html?index=..%2F..index#6
When I run the command terraform plan in my project folder, it says:
provider.snowflake.account
Enter a value:
and then
provider.snowflake.username
Enter a value: MYUSERNAME
Which value do I have to enter? I tried entering my Snowflake instance link as the account value:
dc70490.eu-central-1.snowflakecomputing.com
as well as dc70490 as the account,
and then my username MYUSERNAME as the username value.
However, it gives me an error that:
│ Error: could not build dsn for snowflake connection: no authentication method provided
│
│ with provider["registry.terraform.io/chanzuckerberg/snowflake"],
│ on <input-prompt> line 1:
│ (source code not available)
I also tried tf-snow as the username, since we exported this in a previous step of the tutorial
The account name should not include snowflakecomputing.com, and the username does not need to be in caps.
https://quickstarts.snowflake.com/guide/devops_dcm_terraform_github/index.html?index=..%2F..index#3
Edit:
This is what I have used in my Terraform configuration to connect to Snowflake successfully.
provider "snowflake"{
alias = "sys_admin"
role = "SYSADMIN"
region = "EU-CENTRAL-1"
account = "abcd123"
private_key_path = "<path to the key>"
username = "tf-snow"
}
I've been testing the Snowflake provider with version 0.25.10.
Moving from 0.25.10 to 0.25.11, the newer version was able to see resources the previous version (0.25.10) couldn't. The current version is 0.26.33.
I'm using Terraform 1.1.2. This is all important because, along the way, I've seen many strange errors depending on the combination.
If in doubt, try 0.25.10 first. I used:
provider "snowflake" {
account = "zx12345"
username = "A_SUITABLE_USER"
region = "eu-west-1"
private_key_path = "./my_private_key.p8"
}
I created a Snowflake user with key-pair authentication. Look at that private key path (not for production, kids). When I put the key in a suitable location:
~/.ssh/my_private_tf_key.p8
This was the error:
Terraform v1.1.2
on linux_amd64
Configuring remote state backend...
Initializing Terraform configuration...
Error: could not build dsn for snowflake connection: Private Key file could not be read: Could not read private key: open /home/terraform/keys/tf_london_admin_key.p8: no such file or directory

  with provider["registry.terraform.io/chanzuckerberg/snowflake"],
  on main.tf line 27, in provider "snowflake":
  27: provider "snowflake" {
Why highlight this? Because I have no idea how it decided to use that dir; there isn't even a /home/terraform/ dir on my system. Completely made up.
So let's just say I'm not sure this provider is ready for prime time!
Day wasted (YMMV).
I hope the Chan/Zuckerberg combo keep supporting this going forward; I'll open a few issues on GitHub. I'm sure that once all the issues are ironed out it'll be good, but as I said, probably not for production.
There is a mistake in the Snowflake tutorial: the path of the SSH key should not be
export SNOWFLAKE_PRIVATE_KEY_PATH="~/.ssh/snowflake_tf_snow_key"
but
export SNOWFLAKE_PRIVATE_KEY_PATH="~/.ssh/snowflake_tf_snow_key.p8"
Please note that you should run terraform plan, not sudo terraform plan, otherwise it will look for the SSH key in /root/.ssh/ instead of $HOME/.ssh/ and the whole process won't work.

Terraform backend empty state

I am experiencing weird behaviour with Terraform. I have been working on an infra and have a backend configured to store my state file in a storage account in Azure. Until yesterday everything was fine; this morning when I tried to update my infra, the output from terraform plan was weird, as it was trying to create all the resources as new. When I checked my local tfstate, it was empty.
I tried terraform pull and terraform refresh but nothing, still the same result. I checked my remote state and I still have all the resources declared there.
So I went for plan B: copy and paste my remote state into my local project and run Terraform once again. But nothing; it seems that Terraform is ignoring the state on my local machine and doesn't want to pull the remote one.
EDIT:
This is the structure of my Terraform backend:
terraform {
  backend "azurerm" {
    resource_group_name  = "<resource-group-name>"
    storage_account_name = "<storage-name>"
    container_name       = "<container-name>"
    key                  = "terraform.tfstate"
  }
}
The weird thing is that I just used Terraform to create 8 resources for another project, and it created everything and updated my backend state without any issue. The problem is only with the old resources.
Any help please?
If you run terraform workspace show, are you in the default workspace?
If you have the tfstate locally but you're not on the correct workspace, Terraform will ignore it: https://www.terraform.io/docs/language/state/workspaces.html#using-workspaces
Also, is it possible to see your backend file structure?
EDIT:
I don't know why it ignores your remote state, but I think your problem is that when you run terraform refresh it ignores your local file because you have a remote backend configured:
Usage: terraform refresh [options]
-state=path - Path to read and write the state file to. Defaults to "terraform.tfstate". Ignored when remote state is used.
-state-out=path - Path to write updated state file. By default, the -state path will be used. Ignored when remote state is used.
Is it possible to see the output of your terraform state pull?
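A quick way to gather both answers at once:
terraform workspace show                # should print "default" if you never switched workspaces
terraform state pull > pulled.tfstate   # dumps whatever state the configured azurerm backend actually holds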

How to create two different environments using terraform at the same time

Below is the problem I'm trying to solve:
We have a web application called 'deployment console' which we will use to manage our environments on AWS.
The deployment console will receive requests to create/maintain either staging or prod environments using Terraform.
The console can/will receive requests in parallel.
So my question is: how can I run Terraform in parallel to create/maintain environments based on the requests, without screwing up the state files of the respective environments?
My terraform folder structure is as follows:
terraform/
  create_env.tf
  variables.tf
  staging.tfvars
  prod.tfvars
  staging_backend_cnf.tfvars
  prod_backend_cnf.tfvars
ec2module/
  create_ec2.tf
  variables.tf
  output.tf
elbmodule/
  create_ec2.tf
  variables.tf
  output.tf
ec2secmodule/
  create_ec2.tf
  variables.tf
  output.tf
elbsecmodule/
  create_ec2.tf
  variables.tf
  output.tf
If you want to avoid risking state file corruption through parallel runs, then you should use state file locking.
Because you seem to be using AWS, you are probably already storing your state in S3, and from there it's just a case of adding a DynamoDB lock table:
terraform {
  backend "s3" {
    bucket         = "mybucket"
    key            = "path/to/my/key"
    region         = "us-east-1"
    dynamodb_table = "mylocktable"
  }
}
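Note that the lock table itself has to exist before the backend can use it, and the S3 backend expects a table whose partition key is a string attribute named LockID. A minimal sketch (table name matching the backend block above; the billing mode is an assumption), usually created in a separate bootstrap configuration:
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "mylocktable"
  billing_mode = "PAY_PER_REQUEST"   # assumption: on-demand billing is fine for a lock table
  hash_key     = "LockID"            # the S3 backend requires exactly this key name

  attribute {
    name = "LockID"
    type = "S"
  }
}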
