I have a GitLab CI/CD pipeline that deploys to CloudFront. Each app has some variables defined as pipeline variables, like so:
APP_CONTEXT=/myapp
S3_BUCKET=abx-dev-myapp
EXTERNAL_ACCESS=true
Our CloudFront configuration is managed with Terraform, and the .tf files are stored in Git in a separate repo, say static-appstate. The configuration uses an S3 backend to store the .tfstate and DynamoDB for locking.
The pipeline has a Terraform configuration stage that reads these variables, clones the static-appstate repo, makes the appropriate changes to the .tf file to add Behaviors to CloudFront and create Origins as required, and performs a terraform apply.
All this works fine; however, I am wondering how to update the actual .tf file. Without that update, the configuration in the repository will never be current, and subsequent pipeline invocations will not see the current configuration.
If I execute a git commit to commit the changes, I run into contention and possible merge conflicts when pipelines for two different apps run in parallel.
The only solution at the moment is to update the TF files manually. Is manual management the only option? It would be nice if my pipeline could somehow auto-update the TF configuration based on the app metadata.
So ideally, when a new app is onboarded, the variables above would be the only thing required. The pipeline would check out the state repo, update the TF file to add the new config, check in the new TF file, and perform the Terraform apply, so that the TF code in the repo, the .tfstate in S3, and the CloudFront config all stay current.
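One way to handle the commit contention is to make the push step retry with a rebase when a parallel pipeline has pushed first; the DynamoDB lock still serializes the actual terraform apply. A minimal .gitlab-ci.yml sketch — the job name, repo URL, token variable, and generate-tf.sh edit step are all placeholders for whatever your pipeline already does:

```yaml
terraform-config:
  stage: deploy
  script:
    - git clone "https://oauth2:${STATE_REPO_TOKEN}@gitlab.example.com/infra/static-appstate.git"
    - cd static-appstate
    - ./generate-tf.sh "$APP_CONTEXT" "$S3_BUCKET" "$EXTERNAL_ACCESS"  # your existing TF-editing step
    - git commit -am "Onboard ${APP_CONTEXT}"
    # rebase-and-retry so two parallel app pipelines don't clobber each other
    - |
      for attempt in 1 2 3 4 5; do
        git push origin main && break
        git pull --rebase origin main || exit 1
        sleep $(( (RANDOM % 5) + 1 ))
      done
    - terraform init
    - terraform apply -auto-approve
```

Conflicts can still occur if two pipelines edit the same lines of the same .tf file; generating one file per app (e.g. app-myapp.tf) keeps the rebases trivially clean.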
Related
I have a couple of apps that use the same GCP project. There are dev, stage, and prod projects, but they're basically the same apart from project IDs and project numbers. I would like to have a repo in GitLab, like config, where I keep these IDs in a dev.tfvars, stage.tfvars, and prod.tfvars. Currently each app's repo has a config/{env}.tfvars directory, which is really repetitive.
Googling for importing or including terraform resources is just getting me results about terraform state, so hasn't been fruitful.
I've considered:
Using a group-level GitLab variable as a key=val env file, having my gitlab-ci YAML source the correct environment's file, and then passing what I need with -var="key=value" in my plan and apply commands.
Creating a Terraform module that uses either TF_WORKSPACE or an input variable to return the correct values. I think this may be possible, but I'm new to TF, so I'm not sure how to return data from a module, or whether this kind of "side-effects only" solution is an abusive workaround to something there's a better way to achieve.
Is there any way to include Terraform variables from another GitLab project?
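On the second idea above: a module returns data through output blocks, and keying a map on terraform.workspace is a common pattern for selecting per-environment values, so it isn't an abusive workaround. A minimal sketch with made-up project IDs; the module could live in its own GitLab repo and be pulled in with a git:: source:

```hcl
# --- modules/env-config/main.tf (hypothetical shared config repo) ---
locals {
  envs = {
    dev   = { project_id = "my-proj-dev",   project_number = "111111111111" }
    stage = { project_id = "my-proj-stage", project_number = "222222222222" }
    prod  = { project_id = "my-proj-prod",  project_number = "333333333333" }
  }
}

# modules expose data to callers via outputs
output "config" {
  value = local.envs[terraform.workspace]
}

# --- in each app repo ---
# module "env" {
#   source = "git::https://gitlab.example.com/infra/config.git//modules/env-config"
# }
# ...then reference module.env.config.project_id, module.env.config.project_number
```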
I have a project where I'm using Terraform in an Azure DevOps pipeline to create infrastructure, but I want to destroy the infrastructure from a PowerShell script running locally.
The PowerShell command that I want to run is this:
$TerraCMD = "terraform destroy -var-file C:/Users/Documents/Terraform/config.json"
Invoke-Expression -Command $TerraCMD
But I get the following output:
No changes. No objects need to be destroyed.

Either you have not created any objects yet or the existing objects were
already deleted outside of Terraform.

╷
│ Warning: Value for undeclared variable
│
│ The root module does not declare a variable named "config" but a value was
│ found in file
│ "C:/Users/mahera.erum.baloch/source/repos/PCFA-CloudMigration/On-Prem-Env/IaC/Terraform/config.json".
│ If you meant to use this value, add a "variable" block to the
│ configuration.
│
│ To silence these warnings, use TF_VAR_... environment variables to provide
│ certain "global" settings to all configurations in your organization. To
│ reduce the verbosity of these warnings, use the -compact-warnings option.
╵

Destroy complete! Resources: 0 destroyed.
I know this is probably because I created the resources through the pipeline and not from the local repository, but is there a way to do this?
Any help would be appreciated.
P.S. The State file is saved in the Azure Storage.
I'm going to assume that your code is kept in a repo that you have access to, since you mentioned that it's being deployed from Terraform running in an Azure DevOps Pipeline.
As others mentioned, the state file AND your Terraform code are your source of truth. Hence, both the PowerShell script and the pipeline need to refer to the same state file and code to achieve what you're trying to do.
For terraform destroy to run, it needs access to both your Terraform code and the state file so that it can determine what needs to be destroyed.
Unless your setup is very different from this, you could have your PowerShell script just git clone or git pull the repo, depending on your requirements, and then execute a terraform destroy on that version of the code. Your state file will then be updated accordingly.
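Concretely, the local script only needs the same code and the same backend configuration as the pipeline. The sketch below assumes an azurerm backend; the repo URL and backend values are placeholders for your setup:

```powershell
# clone (or pull) the same repo the pipeline deploys from
git clone https://dev.azure.com/myorg/myproject/_git/infra
Set-Location infra

# init against the same Azure Storage state the pipeline wrote;
# values here are placeholders for your backend settings
terraform init `
  -backend-config="storage_account_name=mystateaccount" `
  -backend-config="container_name=tfstate" `
  -backend-config="key=terraform.tfstate"

terraform destroy -var-file "C:/Users/Documents/Terraform/config.json"
```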
I've just run into the problem of keeping Terraform state across Azure Pipeline builds. Repeated runs of the pipeline fail because the resource group already exists, but the Terraform state is not kept by the build pipeline. And I can find no way to run terraform destroy in the pipeline even if I had the state.
One approach I found in chapter 2 of this book is storing terraform.tfstate in a remote backend. This looks like it will keep the .tfstate across multiple builds of the pipeline, and make it available from elsewhere too.
I don't know yet if it will allow a terraform destroy.
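For reference, a remote backend along those lines amounts to a backend block like the following (all names are placeholders; an S3 backend would look analogous with bucket, key, and dynamodb_table arguments):

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"
    storage_account_name = "mystateaccount"
    container_name       = "tfstate"
    key                  = "myapp.terraform.tfstate"
  }
}
```

terraform destroy does work against a remote backend: it pulls the state, plans the deletions, and updates the remote state when done.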
I started working with Terraform and realized that the state files were created and saved locally. After some searching I found that it is not recommended to commit Terraform state files to Git.
So I added a backend configuration using S3 as the backend. Then I ran the following command
terraform init -reconfigure
I realize now that this set the backend to S3 but didn't copy any state files.
Now when I run terraform plan, it plans to recreate the entire infrastructure that already exists.
I don't want to destroy and recreate the existing infrastructure. I just want terraform to recognize the local state files and copy them to S3.
Any suggestions on what I might do now?
State files are basically JSON files containing information about the current setup. You can manually copy files from the local to the remote (S3) backend and use them without issues. You can read more about state files here: https://learn.hashicorp.com/tutorials/terraform/state-cli
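Rather than copying the state file by hand, terraform init can also migrate it for you. After adding the backend block, run init with migration instead of -reconfigure (-reconfigure deliberately skips the copy); a sketch, assuming a reasonably recent Terraform version:

```shell
# with the backend "s3" block already added to the configuration:
terraform init -migrate-state
# Terraform detects the existing local terraform.tfstate and asks whether to
# copy it to the new "s3" backend; answer "yes".

# then confirm nothing is being recreated:
terraform plan
```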
I also manage a package to handle remote states in S3/Blob/GCS, if you want to try: https://github.com/tomarv2/tfremote
The scenario is:
I have one repository with many Terraform files, including IAM, instances, etc.
I have to split this repository in two (the IAM configs will migrate to another repository that stores its Terraform state in another bucket).
So I want to update the state for the new repository by adding the IAM state, and delete the IAM state from the older repository. But I don't want to apply changes to my infrastructure, because that would mean deleting all the configurations from the older repository and then creating them all again.
Is there any way to update state without apply changes?
The best way would be to create the new repositories that each hold new state files.
Let's say the resources created from your old repo are stored in a state called "repo1.tfstate". You then create a new repo that should hold some of the things split out of repo1. You can use terraform import to import those resources into repo2. Don't forget to remove the resources you just imported from repo1 with terraform state rm.
Another way would be to do terraform state pull > state-for-repo-2.tfstate, edit that file manually, put it into repo2, and do a terraform state push. Of course, you would have to edit the state for repo1 as well. But be aware that terraform state push will overwrite the original state file...
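The import-then-rm route, spelled out as commands (the resource address and ID are placeholders):

```shell
# in repo2 (already initialised against its own state bucket),
# adopt the IAM resources that now live in repo2's .tf files:
terraform import aws_iam_role.deployer my-existing-role-name

# in repo1, forget the same resources without destroying them:
terraform state rm aws_iam_role.deployer
```

Neither command touches the real infrastructure: import only writes to repo2's state, and state rm only removes the record from repo1's state. Run terraform plan in both repos afterwards and confirm each shows no changes.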
I am working on Terraform tasks and trying to understand how state files work. I have created a main.tf file which has
vpc, firewall, subnet, compute_instance
which have to be created in GCP. I applied this to the GCP environment, a terraform.tfstate file got created, and I backed that file up into a folder called 1st-run.
Now I have updated my main.tf with
2 vpcs, 2 firewalls, 2 subnets, compute_instance
as I need to add another NIC for my VM. I did terraform apply, the environment got created, and a new terraform.tfstate file was written. I backed that file up into a folder called 2nd-run.
I want to roll back the environment to the 1st-run. I have that state file in the 1st-run folder.
What is the command to roll back using the state file, without touching the code, so that my GCP environment automatically goes back to
vpc, firewall, subnet, compute_instance
as executed the first time?
There is no way to roll back to a previous state as described in a state file in Terraform today. Terraform always plans changes with the goal of moving from the prior state (the latest state snapshot) to the goal state represented by the configuration. Terraform also uses the configuration for information that is not tracked in the state, such as the provider configurations.
The usual way to represent "rolling back" in Terraform is to put your configuration in version control and commit before each change, and then you can use your version control system's features to revert to an older configuration if needed.
Not all changes can be rolled back purely by reverting a VCS change though. For example, if you added a new provider block and resources for that provider all in one commit and then applied the result, in order to roll back you'd need to change the configuration to still include the provider block but not include any of the resource blocks, so you'd need to adjust the configuration during the revert. Terraform will then use the remaining provider block to configure the provider to run the destroy actions, after which you can finally remove the provider block too.
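As a sketch of that intermediate revert step (provider and resource names are placeholders):

```hcl
# rollback commit: keep the provider block so Terraform can still
# talk to the API and run the destroy actions...
provider "google" {
  project = "my-proj-dev"
  region  = "us-central1"
}

# ...but delete the resource blocks that the original commit added:
# resource "google_compute_instance" "vm" { ... }   <- removed

# after `terraform apply` has destroyed them, a follow-up commit can
# remove the provider block as well.
```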
While there are commands to manipulate state, there is no command to rollback to the previous state, i.e. before the last terraform apply.
However, if you use a remote S3 backend with a DynamoDB lock table, it is possible to roll back if versioning is enabled on the S3 bucket. For example, you could copy the previous version such that it becomes the latest version. You then must also update the digest in the DynamoDB table, otherwise terraform init will give you a message like:
Error refreshing state: state data in S3 does not have the expected content.
This may be caused by unusually long delays in S3 processing a previous state
update. Please wait for a minute or two and try again. If this problem
persists, and neither S3 nor DynamoDB are experiencing an outage, you may need
to manually verify the remote state and update the Digest value stored in the
DynamoDB table to the following value: vvvvvvvvvvvvvv
You can just use this value to update the table, and the rollback is done. To revert, simply delete the latest state from the S3 bucket so the old version becomes "latest" again, and update the DynamoDB table back to the corresponding digest.
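With the AWS CLI, the procedure looks roughly like this. The bucket, key, table name, and version ID are placeholders, and the Digest value is the one printed in the error above; the S3 backend stores its checksum in an item whose LockID is typically "&lt;bucket&gt;/&lt;key&gt;-md5":

```shell
# find the version id of the state you want to restore
aws s3api list-object-versions --bucket my-tf-state --prefix env/terraform.tfstate

# copy that version on top, making it the latest
aws s3api copy-object \
  --bucket my-tf-state --key env/terraform.tfstate \
  --copy-source "my-tf-state/env/terraform.tfstate?versionId=OLD_VERSION_ID"

# update the digest Terraform checks on init
aws dynamodb update-item \
  --table-name terraform-locks \
  --key '{"LockID": {"S": "my-tf-state/env/terraform.tfstate-md5"}}' \
  --update-expression "SET Digest = :d" \
  --expression-attribute-values '{":d": {"S": "DIGEST_FROM_ERROR_MESSAGE"}}'
```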
Note that remote state is shared with your co-workers, so the above procedure should be used with caution and coordinated with them.
It's important to understand that changing the state files won't change the infrastructure by itself. That should be done by versioning the terraform code and doing terraform plan and terraform apply on the code that describes the desired infrastructure.
Make sure versioning is enabled on the AWS S3 bucket that holds your tfstate files.
With versioning enabled (the show/view versions toggle inside the bucket), I could find the tfstate file by name.
I deleted the latest version, which was causing the mismatch (in my case a Terraform version mismatch). Deleting it only adds a delete marker for that version, so the file is effectively backed up: you can restore the original simply by deleting that delete marker.
Then I looked through the old versions of the tfstate file to find one to restore, checking the deployment history, and downloaded the one I needed (after downloading you can inspect the details; for me that meant checking that the Terraform version matched).
Then I uploaded that old tfstate file to the same location from which I had deleted the conflicting one.
On resuming the deployment I was getting an error like the one below.
Error refreshing state: state data in S3 does not have the expected content.
This may be caused by unusually long delays in S3 processing a previous state
update. Please wait for a minute or two and try again. If this problem
persists, and neither S3 nor DynamoDB are experiencing an outage, you may need
to manually verify the remote state and update the Digest value stored in the
DynamoDB table to the following value: b55*****************************
This means there is a digest value already present for the previous tfstate lock file, which needs to be updated with this new value; it can be found under DynamoDB > table > view table details.
On resuming the deployment in Spinnaker, I was able to complete it. (Exceptional case: in my situation the latest pipeline included a change destroying an unused resource that had been created with a different provider, so I first had to revert the provider; after that, resuming deployed the changes successfully.)