Terraform: restore an RDS backup

We use Terraform to create all our resources in AWS. That is convenient when everything goes as planned, but we also have to consider the times when things go wrong. One open question is our RDS instance, which is created and tracked by Terraform. In the case of a system crash we need to restore a backup. As far as I know, AWS automatically takes backups every day, so the backups themselves are not the worry; what I am not sure about is whether Terraform can handle the restore well. If we manually restore a backup, how would Terraform be able to track it? Is it even doable? Or should we let Terraform do the restore, and if so, what would the code look like?

I believe you can use restore_to_point_in_time (https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/rds_cluster#restore_to_point_in_time-argument-reference) for this, but I have not personally tried it:
resource "aws_rds_cluster" "example-clone" {
# ... other configuration ...
restore_to_point_in_time {
source_cluster_identifier = "example"
restore_type = "copy-on-write"
use_latest_restorable_time = true
}
}
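As an aside, if you want to restore from one of the automated snapshots rather than to a point in time, the aws_db_instance resource also accepts a snapshot_identifier argument. A minimal sketch, with placeholder names and a snapshot ID you would look up in your own account (again, untested by me):
resource "aws_db_instance" "restored" {
  # all names and values below are placeholders
  identifier          = "example-restored"
  snapshot_identifier = "rds:example-2024-06-01-00-00"
  instance_class      = "db.t3.micro"
  skip_final_snapshot = true
}
If you restore manually outside Terraform instead, the usual way to get the new instance tracked again is terraform import, and then adjusting the configuration until plan shows no changes.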

Related

Terraform doesn't pick up one certain variable change

I have an ECS Fargate cluster set up that currently has 4 tasks of the same app running on it. The desired number of tasks is defined within a variable:
variable "desired_task_count" {
description = "Desired ECS tasks to run in service"
default = 4
}
When I change the default value to any given number, save it and run terraform plan or terraform apply, terraform doesn't see any changes. The tfstate file remains unchanged.
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration and found
no differences, so no changes are needed.
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
TFstate:
"desired_count": 4,
If I change any other variable in that exact same variables.tf file, terraform picks up the changes and applies them.
What I tried to do:
Create a new variable to pass the value - didn't work.
Rebuild the infrastructure with destroy and then apply - this did work, since it writes a new state file.
TF and provider versions:
terraform --version
Terraform v1.2.4
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v3.75.2
Could this be a provider issue? It seems like the problem only occurs with a variable that points to a specific setting in a specific resource.
What else can I check?
SOLVED:
There was a lifecycle block in the ECS service resource that contained a list of ignored changes. It was there because of autoscaling, which had been temporarily removed from the project.
lifecycle {
  ignore_changes = [task_definition, desired_count]
}
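For illustration, the fix amounts to removing desired_count from ignore_changes (or dropping the lifecycle block entirely while autoscaling is out of the project). A rough sketch of the service resource after the fix, with placeholder names and references:
resource "aws_ecs_service" "app" {
  # names and references below are placeholders
  name            = "app"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = var.desired_task_count

  lifecycle {
    ignore_changes = [task_definition]   # desired_count is no longer ignored
  }
}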

How to properly reset Terraform default tfstate?

Recently, I've started using a workspace per environment in my Terraform configuration, so I ended up having three workspaces: dev, staging and production. But for historical reasons my default workspace still contains an obsolete tfstate.
What is the proper way to "reset" it to an empty state, i.e. having nothing in it?
One way to achieve this is to manually execute terraform state rm for each resource, but that way I would end up with hundreds of such calls. Is there some kind of terraform state reset analogue?
The easiest way I know of so far is to create a new state.
For local state...
Delete the local state files
.terraform
.terraform.lock.hcl
terraform.tfstate
terraform.tfstate.backup
and run terraform init to create a new state.
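In shell terms, that is roughly the following (assuming a POSIX shell and that you really do want to throw away everything locally):
rm -rf .terraform
rm -f .terraform.lock.hcl terraform.tfstate terraform.tfstate.backup
terraform init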
For (AWS s3) remote state...
Change the backend storage "key" path.
For example...
terraform {
  backend "s3" {
    bucket = "terraform-storage"
    key    = "backends/stateX"   # ...changed to "backends/stateY"
    region = "us-west-1"
  }
}
...and then run terraform init -reconfigure to create the new state and attach the current project to that state. You can then clean up the old remote state file using whatever method is convenient. Old state files shouldn't interfere with new state files, but best practice is to clean them up anyway.
If you have AWS CLI installed, you can clean up the old state file using a one-liner...
aws s3api delete-object --bucket terraform-storage --key backends/stateX
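As an aside, if you would rather empty the existing default state than abandon it, the per-resource removals mentioned in the question can be scripted. A minimal sketch, assuming a POSIX shell (resource addresses containing special characters may need extra quoting):
terraform workspace select default
terraform state list | xargs terraform state rm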

Terraform backend empty state

I am experiencing some weird behaviour with Terraform. I have been working on an infrastructure and have a backend configured to store my state file in a storage account in Azure. Until yesterday everything was fine; this morning when I tried to update my infra, the output from terraform plan was weird, as it was trying to create all the resources as new. When I checked my local tfstate, it was empty.
I tried terraform state pull and terraform refresh but nothing, still the same result. I checked my remote state and all the resources are still declared there.
So I went for plan B: copy and paste my remote state into my local project and run Terraform once again. But nothing, it seems that Terraform is ignoring the state on my local machine and doesn't want to pull the remote one.
EDIT:
this is the structure of my terraform backend:
terraform {
  backend "azurerm" {
    resource_group_name  = "<resource-group-name>"
    storage_account_name = "<storage-name>"
    container_name       = "<container-name>"
    key                  = "terraform.tfstate"
  }
}
The weird thing is that I just used Terraform to create 8 resources for another project, and it created everything and updated the backend state without any issue. The problem is only with the old resources.
Any help please?
If you run terraform workspace show, are you in the default workspace?
If you have the tfstate locally but you're not on the correct workspace, Terraform will ignore it: https://www.terraform.io/docs/language/state/workspaces.html#using-workspaces
Also, is it possible to see your backend file structure?
EDIT:
I don't know why it ignores your remote state, but I think your problem is that when you run terraform refresh it ignores your local file because you have a remote config:
Usage: terraform refresh [options]
-state=path - Path to read and write the state file to. Defaults to "terraform.tfstate". Ignored when remote state is used.
-state-out=path - Path to write updated state file. By default, the -state path will be used. Ignored when remote state is used.
Is it possible to see the output of your terraform state pull?
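For what it's worth, a few commands that may help to diagnose this, assuming the azurerm backend above is already initialised (terraform state push in particular should be used with care):
terraform workspace show                # confirm you are on "default"
terraform state pull > pulled.tfstate   # dump what the backend actually returns
terraform state list                    # should list the old resources if the remote state is being read
# only if the local copy really is the good one:
# terraform state push terraform.tfstate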

How do I ignore changes to everything for an ec2 instance except a certain attribute?

I'm creating a Terraform configuration for an already deployed EC2 instance. I only want to change the instance type for this instance. I want something like this:
resource "aws_instance" "ec2" {
ami = "ami-09a4a9ce71ff3f20b"
instance_type = "t2.micro"
lifecycle {
ignore_changes = [
<everything except instance_type>
]
}
}
How do I ignore changes to everything for an ec2 instance except a certain attribute?
Unfortunately I can't seem to find a way to do this while the resource's configuration does not match its existing state. However, I have tested that it is possible, but you need to do the operation in stages... starting with telling Terraform what the current state of that EC2 instance is and working from there.
Step 1: Create a Resource Block for the EC2 Instance as It Currently Exists
I would do this with a combination of manual entry (yes, tedious, I know) and utilising terraform import.
You can run terraform plan repeatedly until it reveals no changes on the resource, which indicates that your configuration now matches the current state of the resource.
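For example, importing the instance into the block above would look something like this (the instance ID is a placeholder):
terraform import aws_instance.ec2 i-0123456789abcdef0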
Step 2. Update the Block With the New Instance Type
Once they are equal, it would then be a matter of simply updating the aws_instance resource block to your desired instance_type.
Step 3. Apply the Changes to the EC2 Instance in a Targeted Way
To ensure that only changes to this instance are applied, you can lean on terraform apply -target to apply the plan for just this resource specifically. This avoids updating any other resources in your plan.
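For example, targeting just the resource block from the question:
terraform apply -target=aws_instance.ec2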
Step 4. Make Further Adjustments as Required
Once the resource matches the instance you want, go ahead and modify the rest of the resource block to reflect future state changes.

Terraform Throttling Route53

Has anyone experienced issues with Terraform being throttled when using it with AWS Route 53 records, and being VERY slow?
I have enabled DEBUG mode and I am getting this:
2018-11-30T14:35:08.467Z [DEBUG] plugin.terraform-provider-aws_v1.36.0_x4: 2018/11/30 14:35:08 [DEBUG] [aws-sdk-go] <?xml version="1.0"?>
2018-11-30T14:35:08.467Z [DEBUG] plugin.terraform-provider-aws_v1.36.0_x4: <ErrorResponse xmlns="https://route53.amazonaws.com/doc/2013-04-01/"><Error><Type>Sender</Type><Code>Throttling</Code><Message>Rate exceeded</Message></Error><RequestId>REQUEST_ID</RequestId></ErrorResponse>
2018-11-30T14:35:08.518Z [DEBUG] plugin.terraform-provider-aws_v1.36.0_x4: 2018/11/30 14:35:08 [DEBUG] [aws-sdk-go] DEBUG: Validate Response route53/ListResourceRecordSets failed, will retry, error Throttling: Rate exceeded
Terraform takes >1h just to do a simple plan, something which normally takes <5 minutes.
My infrastructure is organized like this:
alb.tf:
module "ALB"
{ source = "modules/alb" }
modules/alb/alb.tf:
resource "aws_alb" "ALB"
{ name = "alb"
subnets = var.subnets ...
}
modules/alb/dns.tf
resource "aws_route53_record" "r53" {
count = "${length(var.cnames_generic)}"
zone_id = "HOSTED_ZONE_ID"
name = "${element(var.cnames_generic_dns, count.index)}.${var.environment}.${var.domain}"
type = "A"
alias {
name = "dualstack.${aws_alb.ALB.dns_name}"
zone_id = "${aws_alb.ALB.zone_id}"
evaluate_target_health = false
}
}
modules/alb/variables.tf:
variable "cnames_generic_dns" {
type = "list"
default = [
"hostname1",
"hostname2",
"hostname3",
"hostname4",
"hostname5",
"hostname6",
"hostname7",
...
"hostname25"
]
}
So I am using modules to configure Terraform, and inside the modules there are resources (ALB, DNS, ...).
However, it looks like Terraform is describing every single DNS resource (CNAME and A records, of which I have ~1000) in the hosted zone, which is causing it to be throttled?
Terraform v0.10.7
Terraform AWS provider version = "~> 1.36.0"
That's a lot of DNS records! And that is partly the reason why the AWS API is throttling you.
First, I'd recommend upgrading your AWS provider. v1.36 is fairly old and there have been more than a few bug fixes since.
(Next, but not absolutely necessary, is to use TF v0.11.x if possible.)
In your AWS Provider block, increase max_retries to at least 10 and experiment with higher values.
Then, use Terraform's -parallelism flag to limit TF's concurrency. Try setting it to 5 for starters.
Last, enable Terraform's debug mode to see if it gives you any more useful info.
Hope this helps!
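For reference, a rough sketch of those two settings (the region is a placeholder and the numbers are only starting points to experiment with):
provider "aws" {
  region      = "eu-west-1"   # placeholder, use your own region
  max_retries = 10            # raise further if you still see Throttling errors
}
...and on the command line:
terraform plan -parallelism=5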
The problem was solved by performing the following actions:
Since we had restructured the DNS records into one resource with variables iterated over them, Terraform was probably querying all DNS records constantly.
We decided to let Terraform finish its refresh (it took ~4h with lots of throttling).
We manually deleted the DNS records from Route 53 for the workspace we were doing this in.
We commented out the Terraform DNS resources so they were also deleted from the state files.
We then uncommented the Terraform DNS resources and re-ran Terraform so it created them again.
After that, terraform plan went fine again.
It looks like the throttling with Terraform and AWS Route 53 was completely resolved after upgrading to a newer AWS provider. We updated the Terraform AWS provider to 1.54.0 like this in our init.tf:
version = "~> 1.54.0"
Here are more details about the issue and suggestions from Hashicorp engineers:
https://github.com/terraform-providers/terraform-provider-aws/issues/7056
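For context, with Terraform 0.10/0.11 and the 1.x AWS provider the version pin sits directly in the provider block, so init.tf would contain something along these lines (region is a placeholder):
provider "aws" {
  region  = "eu-west-1"   # placeholder, use your own region
  version = "~> 1.54.0"
}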
