CloudFront distribution re-created on every terraform apply

My CloudFront distribution is changed every time I run Terraform, even though I don't change anything. In every plan I can see that web_acl_id is going to be removed. What should I do to stop that? I would like not to touch CloudFront on every run, because its deployment always takes a long time.
I have tried terraform apply, and it keeps re-creating the CloudFront distribution.
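One common cause is a WAF web ACL that was attached outside of Terraform while the configuration omits web_acl_id, so every plan proposes removing it. A minimal sketch of two ways to handle that, assuming a distribution named aws_cloudfront_distribution.site (the name is hypothetical):

resource "aws_cloudfront_distribution" "site" {
  # ... origins, default_cache_behavior, viewer_certificate, etc. ...

  # If the web ACL is managed in Terraform, declare it so plans stop
  # proposing its removal (for WAFv2 this is the ACL's ARN):
  # web_acl_id = aws_wafv2_web_acl.site.arn

  # If the ACL is attached outside Terraform, ignore drift on it instead:
  lifecycle {
    ignore_changes = [web_acl_id]
  }
}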

Related

Is there a terraform resource which can force an intermediate update of the remote statefile, before continuing?

I set up terraform to use a backend to remotely store the statefile. That works fine.
My project takes several minutes for a full terraform apply to complete. During development, sometimes one of the later stages hangs, seemingly forever. I need the outputs in order to manually connect to the servers and inspect what is broken. However, the state file does not get written until the terraform process completes, so there are no outputs available during the first terraform apply.
Is there a way to make terraform update the statefile intermediately, while it is still busy applying things?
I know I could solve this by separating the process into multiple modules, and apply each one after the other. But I am looking for a solution where I can still apply all at once.
When you run
terraform plan
you also see the planned outputs. What you can do is save the plan to a file before applying:
terraform plan -out tf.plan
Then you apply that saved plan.
You can inspect this file to find the changes.
Remember, you won't find output values that are only known after apply, since they refer to things that don't exist yet.
Best wishes.
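A short sketch of that flow; terraform show can render the saved plan in a readable or JSON form:

terraform plan -out tf.plan
terraform show tf.plan        # human-readable view of the saved plan
terraform show -json tf.plan  # machine-readable, e.g. for scripting
terraform apply tf.plan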

I host an Angular app on AWS S3 via a CloudFront distribution. I want to set up a staging distribution: how do I change my CodePipeline workflow?

I learned that AWS CloudFront now supports continuous deployment:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/continuous-deployment.html
I would like to use this before I release a major update to my app, in order to roll it out slowly and catch any unforeseen issues early.
My current deployment happens via AWS CodePipeline: after building my app, it is deployed directly to the AWS S3 bucket to which the Cloudfront distribution is tied.
I couldn't find any documentation on how to change my CodePipeline configuration to account for a staging Cloudfront distribution.
Ideally, I would like the following setup:
Normally, things work as before (as I don't often release major updates that need a gradual roll out), so the default behavior would be to automatically promote the staging distribution to production after every release.
In case I exceptionally want a gradual roll-out, it's okay to do things manually: I would go into CloudFront, change some settings, make my release, and then gradually increase the roll-out over a few days until I reach 100% and I'm happy with the release, and finally restore the settings so things behave as in the point above.
Can anyone suggest how to tackle this scenario?
Thanks!
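Leaving the CodePipeline part aside, the CloudFront side of such a staging setup can also be described in Terraform. A rough sketch, based on the AWS provider's continuous deployment policy resource as I understand it, with hypothetical resource names:

resource "aws_cloudfront_distribution" "staging" {
  # same origins/behaviors as production ...
  staging = true   # marks this distribution as the staging one
}

resource "aws_cloudfront_continuous_deployment_policy" "rollout" {
  enabled = true

  staging_distribution_dns_names {
    items    = [aws_cloudfront_distribution.staging.domain_name]
    quantity = 1
  }

  traffic_config {
    type = "SingleWeight"
    single_weight_config {
      weight = "0.05"   # send ~5% of traffic to staging; raise gradually
    }
  }
}

resource "aws_cloudfront_distribution" "production" {
  # ... existing production configuration ...
  continuous_deployment_policy_id = aws_cloudfront_continuous_deployment_policy.rollout.id
}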

Terraform Refresh after manual change

So here's what I'm trying to do
Given I changed a configuration in the load balancer
And I added that to my terraform declaration
When I run a plan, there are zero changes, which is expected.
Do I need to refresh at this point to match the real infrastructure state before applying?
Or when I run an apply, will this just update the state?
If you've changed the settings outside of Terraform and you've updated the Terraform configuration to match then indeed there's no extra step to run here: terraform plan should report that it detected the value changed outside of Terraform (assuming you're using Terraform v1.0.0 or later) but then report that it doesn't need to make any changes to match with the configuration.
Note also that in recent Terraform the terraform refresh command is still available but no longer recommended. Instead, you can use terraform apply -refresh-only to get a similar effect but with the opportunity to review the detected changes before creating a new state snapshot. In the situation you've described, a refresh-only apply like this will also allow you to commit the detected change as a new state snapshot so that future terraform plan won't re-report that it detected a change made outside of Terraform, which might avoid your coworkers being confused by this message when they make a later change.
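A quick sketch of the refresh-only flow described above:

terraform apply -refresh-only   # review the detected drift, then confirm to record it in state
terraform plan                  # should now report no changes and no drift notice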

Terraform plan: Saved plan is stale

How do I force Terraform to rebuild its plans and tfstate files from scratch?
I'm considering moving my IAC from GCP's Deployment Manager to Terraform, so I thought I'd run a test, since my TF is pretttty rusty. In my first pass, I successfully deployed a network, subnet, firewall rule, and Compute instance. But it was all in a single file and didn't scale well for multiple environments.
I decided to break it out into modules (network and compute), and I was done with the experiment for the day, so I tore everything down with a terraform destroy
So today I refactored everything into its modules, and accidentally copypasta-ed the network resource from the network module to the compute module. Ran a terraform plan, and then a terraform apply, and it complained about the network already existing.
And I thought that it was because I had somehow neglected to tear down the network I'd created the night before? So I popped over to the GCP console, and yeah, it was there, so...I deleted it. In the UI. Sigh. I'm my own chaos engineer.
Anyway, somewhere right around there, I discovered my duplicate resource and removed it, realizing that the aforementioned complaint about the "network resource already existing" was coming from the 2nd module to run.
And I ran a terraform plan again, and it didn't complain about anything, so I ran a terraform apply, and that's when I got the "stale plan" error. I've tried the only things I could think of - terraform destroy and terraform refresh - and then tried a plan and apply after that.
I could just start fresh from a new directory and new names on the tfstate/tfplan files, but it bothers me that I can't seem to reconcile this "stale plan" error. Three questions:
Uggh...what did I do wrong? Besides trying to write good code after a 2-hour meeting?
Right now this is just goofing around, so who cares if everything gets nuked? I'm happy to lose all created resources. What are my options in this case?
If I end up going to prod with this, obviously idempotence is a priority here, so what are my options then, if I need to perform some disaster recovery? (Ultimately, I would be using remote state to make sure we've got the tfstate file in a safe place.)
I'm on Terraform 0.14.1, if that matters.
"Saved plan is stale" means the plan is out of date: it no longer matches the current state of your infrastructure.
Either the infrastructure was changed outside of Terraform, or another Terraform operation updated the state after the plan file was written.
Way 1: To fix that, run terraform plan with the -out flag to save a fresh plan, and apply that file before anything else changes the state.
Way 2: More simply, run terraform refresh and then terraform apply.
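A sketch of Way 1, writing a fresh plan against the current state and applying it immediately so it cannot go stale:

terraform plan -out tfplan    # plan against the current state
terraform apply tfplan        # apply that exact plan before the state changes again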
I created the infrastructure via the gcloud CLI first for testing purposes. As soon as it was proven to work, I transferred the configuration to GitLab and encountered the same issue in one of my jobs. The issue disappeared after I changed the network's and cluster's names.

Backing up of Terraform statefile

I usually run all my Terraform scripts from a bastion server, and all my code including the tf statefile resides on the same server. There was an incident where my machine accidentally went down (hard reboot) and the root filesystem got corrupted. Now my state file is gone, but my resources still exist and are running. I don't want to run terraform apply again and recreate the whole environment with downtime. What's the best way to recover from this mess, and what can be done so that this doesn't happen again in the future?
I have already taken a look at terraform refresh and terraform import. But are there any better ways to do this?
and all my code including the tf statefile resides on the same server.
As you don't have a .backup file, I'm not sure you can recover the state file smoothly in a Terraform-native way; do let me know if you find a way :). However, you can take a few steps that will help you avoid a situation like this.
The best practice is to keep all your state files in remote storage such as S3 or Blob storage and configure your backend accordingly, so that each time you destroy or create a stack, Terraform always reads and writes the state file remotely.
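A minimal backend sketch for the S3 case, assuming a pre-existing bucket and DynamoDB lock table (the names here are hypothetical):

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"   # hypothetical bucket name
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"      # optional: state locking
    encrypt        = true
  }
}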
On top of that, you can take advantage of terraform workspace to avoid state-file confusion in multi-environment scenarios. Also consider saving plans for backtracking and versioning of previous deployments, for example:
terraform plan -var-file "" -out "" -target=module.<blue/green>
what can be done so that this doesn't get repeated in future.
Terraform blue-green deployment is the answer to your question. We implemented this model quite a while ago and it's running smoothly. The whole idea is modularity and reusability: the same templates work for five different components with different architectures, without any downtime (the core template stays the same and only the variable files differ).
We are taking advantage of Terraform modules. We have two modules, called blue and green, though you can name them anything. At any given point in time either blue or green is taking traffic. If we have changes to deploy, we bring up the alternative stack based on the state output (the targeted module based on the Terraform state), auto-validate it, then move the traffic to the new stack and destroy the old one.
Here is an article you can keep as a reference; it doesn't exactly reflect what we do, but it's a good place to start.
Please see this blog post, which, unfortunately, illustrates that import is the only solution.
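For reference, re-importing a lost resource into a fresh state looks roughly like this; the resource addresses and IDs below are hypothetical, and each resource first needs a matching block written in the configuration:

terraform import aws_instance.bastion i-0abc1234def567890   # EC2 instance by instance ID
terraform import aws_s3_bucket.assets my-assets-bucket      # S3 bucket by bucket name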
If you are still unable to recover the Terraform state, you can generate Terraform configuration as well as state for specific AWS resources using terraforming, but it requires some manual effort to edit the state so the resources are managed again. You can take that state file, run terraform plan, and compare its output with your infrastructure. It is good to have remote state, especially in an object store like AWS S3 or a key-value store like Consul; these support locking the state when multiple operations happen at the same time, and backing it up is also quite simple.
