I am using GitLab CI/CD to deploy to AWS with Terraform.
I would like to use the GitLab REST API to store and lock/unlock my state.
To add some security and prevent any loss of my state file, I also want to back up my state file to an S3 bucket.
My question is: how do I sync/update the state file in my S3 bucket when my pipeline runs a terraform apply and makes changes to my AWS resources?
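For illustration, a minimal sketch of one way to do that sync in a CI job (the backup bucket name is an assumption; CI_PROJECT_PATH is a standard GitLab CI variable used to namespace the backup key):

# Run after a successful terraform apply: pull the current state from the
# GitLab-managed (HTTP) backend and copy it to S3 as a backup.
terraform state pull > state-backup.json
aws s3 cp state-backup.json "s3://my-tf-state-backups/${CI_PROJECT_PATH}/terraform.tfstate"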
Locally I:
Created main.tf
Initialized with terraform init
Imported the GCP project and the Google Cloud Run service
Updated main.tf so that terraform plan no longer tried to change anything
Checked main.tf into GitHub
I set up GitHub Actions to:
Check out the repository
Set up gcloud
Initialize with terraform init
Plan with terraform plan
terraform plan is trying to recreate everything.
How do I make it detect the existing resources?
By default Terraform will initialise a local state. The problem with this is that the state is only available on your machine; if you execute a plan somewhere else, Terraform won't see it. To solve this, you need to set up a remote backend so the state file is stored in a centralised location.
If you are using Google Cloud, you can use a Cloud Storage bucket to store the state file. Terraform offers a gcs backend for this. You have to create a bucket and provide the bucket name in the gcs backend configuration:
terraform {
  backend "gcs" {
    bucket = "tf-state-prod"
    prefix = "terraform/state"
  }
}
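Since the resources were imported into the local state, the remaining step is to move that state into the bucket so the GitHub Actions runner sees it. A sketch, assuming a recent Terraform version (older releases prompt for the migration automatically):

# Re-initialise against the new gcs backend; this copies the existing
# local state, including the imported resources, into the bucket.
terraform init -migrate-state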
I have a backend on an AWS S3 bucket, where I keep all my *.tfstate files.
When I do
cd terraform/project.foo
terraform destroy
I would like it to also remove the foo.tfstate file from my backend S3 bucket, but it doesn't.
Is there any option to remove the corresponding tfstate file from the backend via Terraform?
Thank you!
This is totally possible if you are using Terraform workspaces.
I had two workspaces, default and prod.
I switched to the prod workspace and ran terraform destroy.
After terraform destroy, the S3 state file for that workspace is left with no resources in it.
Once destroyed, switch to the default workspace: terraform workspace select default.
From the default workspace, run terraform workspace delete prod.
Poof, your state file is completely cleared up.
Note: I'm using the fish shell with a Terraform plugin, so the current terraform workspace gets printed in the prompt.
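Putting the steps together as one sequence (the workspace name prod is from the example above):

# Destroy the resources tracked in the prod workspace, then delete the
# workspace itself, which removes its state object from the S3 backend.
terraform workspace select prod
terraform destroy
terraform workspace select default
terraform workspace delete prod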
Does Terraform support the AWS Backup feature of restoring an image from a vault (https://www.terraform.io/docs/providers/aws/r/backup_plan.html)?
Reading the documentation, I can see that it supports creating a backup plan, assigning resources and policies, and creating a vault, but it does not support restoring an image or EBS volume.
How do I add the restore block in my Terraform template?
Terraform's execution model is designed for translating declarative descriptions of an intended state into imperative actions to reach that state automatically, and so its model doesn't really support "exceptional" processes like restoring backups.
However, you can develop a process for restoring backups alongside Terraform whereby the main restore action is done using the AWS Console, AWS CLI, or API in your own automation, and then you inform Terraform after the fact that it should use the restored object via its state manipulation commands.
For example, if you have an EBS volume managed by Terraform using an aws_ebs_volume resource, you might also use Terraform to configure an AWS Backup plan for that volume, and then backups will be created automatically as per your plan.
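For illustration, a minimal sketch of that setup (all names, the schedule, and the IAM role are assumptions, not taken from the question):

resource "aws_ebs_volume" "example" {
  availability_zone = "us-west-2a"
  size              = 40
}

resource "aws_backup_vault" "example" {
  name = "example-vault"
}

resource "aws_backup_plan" "example" {
  name = "example-plan"

  rule {
    rule_name         = "daily"
    target_vault_name = aws_backup_vault.example.name
    schedule          = "cron(0 5 * * ? *)" # daily at 05:00 UTC
  }
}

# Service role that AWS Backup assumes to create the backups.
resource "aws_iam_role" "backup" {
  name = "example-backup-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "backup.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "backup" {
  role       = aws_iam_role.backup.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSBackupServiceRolePolicyForBackup"
}

# Assign the volume to the plan so it is backed up on the schedule.
resource "aws_backup_selection" "example" {
  iam_role_arn = aws_iam_role.backup.arn
  name         = "example-selection"
  plan_id      = aws_backup_plan.example.id
  resources    = [aws_ebs_volume.example.arn]
}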
In the exceptional situation where your existing volume is lost or corrupted and you want to restore the backup, the person responding to the incident can follow this process:
Create an AWS Backup restore job either using the AWS Console, the AWS CLI, or some software of your own design using the AWS Backup API.
Once the backup job is complete, consult the CreatedResourceArn to find the id of the new object that was created by restoring the backup. In the case of an EBS volume, this will be the final part of the ARN, after the :volume/ separator.
Tell Terraform to "forget" the existing EBS volume object that is now destroyed or damaged:
terraform state rm aws_ebs_volume.example
Tell Terraform to import the object created by restoring the backup as the new remote object associated with the Terraform resource:
terraform import aws_ebs_volume.example vol-049df61146c4d7901
If your old EBS volume is still present but corrupted or otherwise damaged, the final step would be to locate and manually destroy the remnant of it, because Terraform is no longer managing it and it would otherwise be left in place forever.
After this process is complete, Terraform will consider the new object to be the one managed by that resource, and you can use Terraform as normal with that resource moving forward. The same principle applies to any of the object types supported by AWS Backup, as long as they have a resource type in the AWS provider that supports terraform import.
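As a sketch of what the whole process can look like (the ARNs, ids, and the availabilityZone metadata key are placeholders; the exact restore metadata AWS Backup expects varies by resource type):

# 1. Start the restore job from a recovery point in the vault.
aws backup start-restore-job \
  --recovery-point-arn "arn:aws:ec2:us-west-2::snapshot/snap-0123456789abcdef0" \
  --iam-role-arn "arn:aws:iam::123456789012:role/example-backup-role" \
  --resource-type EBS \
  --metadata availabilityZone=us-west-2a

# 2. Poll until the job completes, then read CreatedResourceArn.
aws backup describe-restore-job --restore-job-id "<job-id>"

# 3. Forget the lost volume and adopt the restored one.
terraform state rm aws_ebs_volume.example
terraform import aws_ebs_volume.example vol-0fedcba9876543210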
Correct me if I'm wrong: when you run terraform init, you are asked for a storage account and container for the Terraform state.
Can these also be created automatically with Terraform?
Edit: I'm using Azure.
I usually split my Terraform configuration into two parts.
The first creates a storage account with a container, tagged in a specific way (tf=backend, for example). The second creates all other resources. I share a backend.tfvars between the two, and in the second one I look up the storage account key using the Azure CLI and the previously set tag (that way I don't have to fetch the key and pass it to my second script manually).
You could even migrate the state of the first configuration once deployed, if you don't want to rely on a local state.
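For illustration, a sketch of both parts (all names and the tag value are assumptions):

# First configuration: the storage account and container that will hold
# the state, tagged so the second script can look the account up.
resource "azurerm_resource_group" "tfstate" {
  name     = "rg-tfstate"
  location = "westeurope"
}

resource "azurerm_storage_account" "tfstate" {
  name                     = "sttfstate12345" # must be globally unique
  resource_group_name      = azurerm_resource_group.tfstate.name
  location                 = azurerm_resource_group.tfstate.location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  tags = {
    tf = "backend"
  }
}

resource "azurerm_storage_container" "tfstate" {
  name                  = "tfstate"
  storage_account_name  = azurerm_storage_account.tfstate.name
  container_access_type = "private"
}

# Second configuration: point its backend at that container.
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"
    storage_account_name = "sttfstate12345"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}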
Yes, absolutely. You would in general want an S3 bucket for each of your environments, although it's also possible to have a bucket shared across all environments and then set up access controls using bucket policies. Don't create this bucket as part of provisioning other resources, as their lifecycles will likely be different (you would want to retain the bucket for a long time and would be unlikely to want to destroy it).
What you do is define this bucket in Terraform using local state first. After it is created, you add a remote backend pointing to this bucket.
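A sketch of that first step, assuming a recent AWS provider (versioning and prevent_destroy are extra safeguards, not requirements):

resource "aws_s3_bucket" "state" {
  bucket = "my-state-bucket"

  # Protect the bucket that holds all the state from accidental destruction.
  lifecycle {
    prevent_destroy = true
  }
}

# Keep old state versions so a corrupted write can be rolled back.
resource "aws_s3_bucket_versioning" "state" {
  bucket = aws_s3_bucket.state.id

  versioning_configuration {
    status = "Enabled"
  }
}

The backend configuration pointing at that bucket then looks like this: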
terraform {
  required_version = ">= 0.11.7"

  backend "s3" {
    bucket  = "my-state-bucket"
    key     = "s3_state_bucket"
    region  = "us-west-2"
    encrypt = true
  }
}
After you run terraform init, Terraform will ask if you want to migrate the local state file to S3. Answer yes, and once this completes you can delete the local state file, as it's no longer used.
This approach lets you break out of the chicken-and-egg situation and still manage all of your infrastructure as code, rather than creating it manually through the web console or bash scripts.
When using Terraform to deploy our AWS infra, if we use the AWS console to redeploy an API Gateway deployment, it creates a new deployment ID. Afterwards, when running terraform apply again, Terraform lists aws_api_gateway_stage.deployment_id as an in-place update. Running terraform refresh does not resolve this; it continues to show the delta. Am I missing something?
Terraform computes the delta based on what it has saved in the backend state and what exists on AWS at the moment of the apply. You can use Terraform to apply changes to AWS, but not the other way around: changes made in the console are treated as drift from your configuration, so the plan will keep proposing to set deployment_id back to the deployment Terraform manages.
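If the console-driven redeployments are intentional, one option (a sketch, not from the answer above; resource names are assumptions) is to tell Terraform to ignore that attribute:

resource "aws_api_gateway_stage" "example" {
  rest_api_id   = aws_api_gateway_rest_api.example.id
  stage_name    = "prod"
  deployment_id = aws_api_gateway_deployment.example.id

  # Don't treat console-created deployments as drift to be reverted.
  lifecycle {
    ignore_changes = [deployment_id]
  }
}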