the scenario is:
I have one repository with many Terraform files (IAM, instances, etc.).
I have to split this repository into two (the IAM configs will migrate to another repository that stores its Terraform state in another bucket).
So I want to update the state for the new repository by adding the IAM state, and delete the IAM state from the old repository, but I don't want to apply changes to my infrastructure, because then I would have to destroy all those resources in the old repository and create them all again.
Is there any way to update the state without applying changes?
The best way would be to create new repositories that each hold their own state file.
Let's say the resources created from your old repo are stored in a state called "repo1.tfstate". Then you create a new repo, repo2, for the resources that are split out of repo1. You can then use terraform import to bring those resources into repo2's state. Don't forget to remove the resources you just imported from repo1 with terraform state rm.
Another way would be to do a terraform state pull > state-for-repo-2.tfstate, edit that manually, put it into repo2, and do a terraform state push. Of course, you would have to edit the state for repo1 as well. But be aware that terraform state push will overwrite the original state file...
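The manual-edit route works because pulled state is plain JSON with a top-level resources list and a serial counter. A minimal sketch of moving entries between two pulled states, assuming hypothetical resource names (the address handling here is simplified: real state entries also carry mode and module fields):

```python
import json

def move_resources(src_state, dst_state, should_move):
    """Move matching resource entries from one pulled state dict to another.

    Both arguments are parsed `terraform state pull` output; `should_move`
    receives a simplified address like "aws_iam_role.deploy" and returns
    True for resources that belong in the new repo's state.
    """
    moved, kept = [], []
    for res in src_state["resources"]:
        address = "{}.{}".format(res["type"], res["name"])
        (moved if should_move(address) else kept).append(res)
    src_state["resources"] = kept
    dst_state["resources"].extend(moved)
    # terraform state push refuses a state whose serial is not newer than
    # the one the backend currently holds, so bump both before pushing.
    src_state["serial"] += 1
    dst_state["serial"] += 1
    return moved

# Hypothetical usage with two pulled state files:
# with open("repo1.tfstate") as f: repo1 = json.load(f)
# with open("state-for-repo-2.tfstate") as f: repo2 = json.load(f)
# move_resources(repo1, repo2, lambda addr: addr.startswith("aws_iam_"))
```

After editing, push each file back with terraform state push from inside the matching repo, and run terraform plan in both to confirm nothing wants to change.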
I have 2 projects (git repos). They each have their own Terraform files and state.
Now I want these 2 projects to depend on 1 database.
I would like to create a common repo with terraform files to create that database and make my 2 initial projects depend on it.
I know that with a monorepo and terragrunt I can do something like:
dependency "vpc" {
config_path = "../vpc"
}
but is there a way to do this with multiple git repos (no monorepo)?
I'm guessing it can't be done, and I suspect there would be a problem with the multiple states.
Yes, what you could do is use the state of the common Terraform project. Make sure you output the database ID (or whatever else you need) from the common Terraform so that you can reference it.
And then inside your child repos use terraform_remote_state data source.
More info here: https://www.terraform.io/docs/language/state/remote-state-data.html
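A sketch of what that looks like, assuming an S3 backend and hypothetical resource, bucket, and key names:

```hcl
# In the common repo: expose the database ID as an output.
output "database_id" {
  value = aws_db_instance.shared.id # hypothetical resource name
}

# In each child repo: read the common repo's state.
data "terraform_remote_state" "common" {
  backend = "s3"
  config = {
    bucket = "my-tf-states" # hypothetical bucket and key
    key    = "common/terraform.tfstate"
    region = "eu-west-1"
  }
}

# Then reference it, e.g.:
# db_id = data.terraform_remote_state.common.outputs.database_id
```

Note that only values declared as outputs in the common root module are visible to the child repos this way.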
I started working with Terraform and realized that the state files were created and saved locally. After some searching I found that it is not recommended that terraform state files be committed to git.
So I added a backend configuration using S3 as the backend. Then I ran the following command
terraform init -reconfigure
I realize now that this set the backend as S3 but didn't copy any files.
Now when I run terraform plan, it plans to recreate the entire infrastructure that already exists.
I don't want to destroy and recreate the existing infrastructure. I just want terraform to recognize the local state files and copy them to S3.
Any suggestions on what I might do now?
State files are basically JSON files containing information about the current setup. You can manually copy files from the local to the remote (S3) backend and use them without issues. You can read more about state files here: https://learn.hashicorp.com/tutorials/terraform/state-cli
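Terraform can also do the copy for you: with the backend block in place, terraform init -migrate-state (on Terraform 1.1+; older versions prompt on a plain terraform init) offers to copy the existing local state into the new backend. The -reconfigure flag is precisely what skips that copy. A sketch, with hypothetical bucket and key names:

```hcl
terraform {
  backend "s3" {
    bucket = "my-tf-states" # hypothetical bucket and key
    key    = "prod/terraform.tfstate"
    region = "eu-west-1"
  }
}
```

Run terraform init -migrate-state, answer "yes" when asked to copy the existing state, and a subsequent terraform plan should show no changes.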
I also manage a package to handle remote states in S3/Blob/GCS, if you want to try: https://github.com/tomarv2/tfremote
I have a Gitlab CI/CD Pipeline that deploys to CloudFront. Each app has some variables defined as pipeline variables like so:
APP_CONTEXT=/myapp
S3_BUCKET=abx-dev-myapp
EXTERNAL_ACCESS=true
We have our CloudFront Terraform-managed, and the TF files are stored in Git in a separate repo, say static-appstate. The configuration uses an S3 backend to store the .tfstate and uses DynamoDB for locking. The actual .tf definitions are stored in the repo mentioned.
The pipeline has a Terraform configuration stage that reads these variables, clones the static-appstate repo, makes appropriate changes to the .tf file to add behaviors to CloudFront and create origins as required, and performs a terraform apply.
All this works fine, however, I am wondering how to update the actual .tf file. Without the update the main configuration will never be updated in the repository and subsequent pipeline invocations will not have the current configuration.
If I execute a git commit to commit the changes I have the issues of contention and possible conflicts if two pipelines for different apps are simultaneously running in parallel.
The only solution at the moment is that the TF files are being manually updated. Is manual management the only solution? It would be nice if my Pipeline could somehow auto-update the TF configuration based on the app metadata.
So ideally, when a new app is onboarded, the above variables are the only thing required, the Pipeline would check out the state repo, update the TF file to add in the new config, check in the new TF file and perform the Terraform application so the TF state in the repo, the .tfstate in S3 and the CloudFront config, all will be current.
We use an Azure blob storage as our Terraform remote state, and I'm trying to move state info about specific existing resources to a different container in that Storage Account. The new container (terraforminfra-v2) already exists, and the existing Terraform code points to the old container (terraforminfra). I've tried the following steps:
Use "terraform state pull > migrate.tfstate" to create a local copy of the state data in terraforminfra. When I look at this file, it seems to have all the proper state info.
Update the Terraform code to now refer to container terraforminfra-v2.
Use "terraform init" which recognizes that the backend config has changed and asks to migrate all the workspaces. I enter 'no' because I only want specific resources to change, not everything from all workspaces.
Use the command "terraform state push migrate.tfstate".
The last command seems to run for a bit like it's doing something, but when it completes (with no hint of an error), there still is no state info in the new container.
Is it because I answered 'no' in step #3? Does that mean it doesn't actually switch which remote state it points to? Related to that, is there any way with the "terraform state" command to tell where your state is?
Am I missing a step here? Thanks in advance.
OK, I think I figured out how to do this (or at least, these steps seemed to work):
rename the current folder with the .tf files to something else (like folder.old)
use "terraform state pull" to get a local copy of the state for the current workspace (you need to repeat these steps for each workspace you want to migrate)
create a new folder with the original name and copy your code to it.
create a new workspace with the same name as the original.
modify the code for the remote backend to point to the new container (or whatever else you're changing about the name/location of the remote state).
run "terraform init" so it's pointing to the new remote backend.
use "terraform state push <local-state-file>" to push the exported state to the new backend.
I then used "terraform state list" and "terraform plan" in the new folder to sanity check that everything seemed to be there.
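For reference, the backend change in the "modify the code" step might look like this for an azurerm backend (all names here except the containers are hypothetical):

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"        # hypothetical
    storage_account_name = "mystorageaccount"  # hypothetical
    container_name       = "terraforminfra-v2" # was "terraforminfra"
    key                  = "prod.terraform.tfstate"
  }
}
```

Because terraform init was run fresh in a new folder rather than over the old backend config, there is no "migrate?" prompt to answer 'no' to, which is what made the push land in the right place.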
I am working on Terraform tasks and trying to understand how state files work. I have created a main.tf file which has
vpc, firewall, subnet, compute_instance
which have to be created in GCP. So I applied this to the GCP environment, a file named terraform.tfstate got created, and I backed it up into a folder called 1st-run.
Now I have updated my main.tf with
2 vpcs, 2 firewalls, 2 subnets, compute_instance
as I need to add another NIC for my VM. I did terraform apply, the environment got created, and a terraform.tfstate file got created. I backed this file up into a folder called 2nd-run.
I want to roll back the environment to what I executed in the 1st run. I have that state file in the 1st-run folder.
What is the command to roll back using the state file, instead of touching the code, so that my GCP environment will automatically have
vpc, firewall, subnet, compute_instance
which I executed the first time.
There is no way to roll back to a previous state as described in a state file in Terraform today. Terraform always plans changes with the goal of moving from the prior state (the latest state snapshot) to the goal state represented by the configuration. Terraform also uses the configuration for information that is not tracked in the state, such as the provider configurations.
The usual way to represent "rolling back" in Terraform is to put your configuration in version control and commit before each change, and then you can use your version control system's features to revert to an older configuration if needed.
Not all changes can be rolled back purely by reverting a VCS change though. For example, if you added a new provider block and resources for that provider all in one commit and then applied the result, in order to roll back you'd need to change the configuration to still include the provider block but not include any of the resource blocks, so you'd need to adjust the configuration during the revert. Terraform will then use the remaining provider block to configure the provider to run the destroy actions, after which you can finally remove the provider block too.
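As a sketch of that intermediate revert step, with hypothetical names, the configuration would keep the provider block but drop the resources it managed:

```hcl
# Keep the provider so Terraform can still plan the destroy actions...
provider "google" {
  project = "my-project" # hypothetical
  region  = "us-central1"
}

# ...but remove the resource blocks that were added in the same commit:
# resource "google_compute_instance" "vm" { ... }
```

Once the apply has destroyed those resources, the provider block can be removed in a follow-up commit.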
While there are commands to manipulate state, there is no command to rollback to the previous state, i.e. before the last terraform apply.
However, if you use a remote S3 backend with a dynamodb lock table, it is possible to roll back if versioning was enabled on the S3 bucket. For example, you could copy the previous version such that it becomes the latest version. You then must also update the digest in the dynamodb table, otherwise the terraform init will give you a message like:
Error refreshing state: state data in S3 does not have the expected content.
This may be caused by unusually long delays in S3 processing a previous state
update. Please wait for a minute or two and try again. If this problem
persists, and neither S3 nor DynamoDB are experiencing an outage, you may need
to manually verify the remote state and update the Digest value stored in the
DynamoDB table to the following value: vvvvvvvvvvvvvv
You can just use this value to update the table and the rollback is done. To revert it, simply delete the last state from the S3 bucket so it goes back to its old "latest" and update the dynamodb table back to the corresponding digest.
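If you'd rather compute the new Digest yourself instead of copying it out of the error message, it is (as far as I know) just the MD5 checksum of the state file that the S3 backend records in the lock table. A small sketch:

```python
import hashlib

def state_digest(state_bytes):
    """MD5 hex digest of the raw state file -- the value Terraform's S3
    backend expects in the DynamoDB lock item's Digest attribute."""
    return hashlib.md5(state_bytes).hexdigest()

# Hypothetical usage after restoring an old version of the state:
# with open("terraform.tfstate", "rb") as f:
#     print(state_digest(f.read()))
# Write that value into the DynamoDB item whose LockID ends in "-md5"
# (e.g. via the AWS console or aws dynamodb update-item).
```

Computing it from the exact bytes you restored avoids any copy-paste mistakes with the value Terraform prints.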
Note that remote state is shared with your co-workers, so the above procedure should be avoided.
It's important to understand that changing the state files won't change the infrastructure by itself. That should be done by versioning the terraform code and doing terraform plan and terraform apply on the code that describes the desired infrastructure.
Make sure versioning is enabled on the AWS bucket that maintains your tfstate files.
By enabling "Show versions" inside the bucket, I found the tfstate file by name.
I deleted the latest version, which was causing the mismatch (in my case a Terraform version conflict). Deleting adds a delete marker for that version, meaning the file is actually backed up after deletion; you can easily restore the original file by simply deleting this delete marker.
Then I looked through the old versions of the tfstate file to restore from, checked the deployment history, and downloaded the required one (after downloading you can see its details; for me that meant checking that the Terraform version matched).
Then I uploaded that old tfstate file to the same location from which I had deleted the conflicting one.
On resuming the deployment, I was getting an error like the one below.
Error refreshing state: state data in S3 does not have the expected content.
This may be caused by unusually long delays in S3 processing a previous state
update. Please wait for a minute or two and try again. If this problem
persists, and neither S3 nor DynamoDB are experiencing an outage, you may need
to manually verify the remote state and update the Digest value stored in the
DynamoDB table to the following value: b55*****************************
This means there is already a Digest value present for the previous tfstate lock file, which needs to be updated with this new value; it can be found under DynamoDB > table > view table details.
On resuming the deployment in Spinnaker, it was able to complete. (Exceptional case: in my situation the latest pipeline included changes that destroyed an unused resource which had been created using a different provider, so I first had to revert the provider change; after that, on resume, I was able to deploy the changes successfully.)