I created Azure DevOps (AzDo) repositories using Terraform. What is the best way to rename a repository without deleting it? I found the command terraform state mv for this purpose. Please let me know if there is a better way.
Currently, when I run the following terraform state mv command:
terraform state mv "module.azdo-module.azuredevops_git_repository[\"repo1\"]" "module.azdo-module.azuredevops_project.azuredevops_git_repository[\"repo2\"]"
I get the error below:
Error: Invalid address
│
│ on line 1:
│ (source code not available)
│
│ A resource name is required.
Maybe try:
terraform state mv "module.azdo-module.azuredevops_git_repository.repo1" "module.azdo-module.azuredevops_git_repository.repo2"
The "A resource name is required" error means the address needs the form module.<module name>.<resource type>.<resource name>. List the resources in the state to check the exact names:
terraform state list
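For illustration, the output might look something like this (the addresses here are hypothetical; copy the exact ones your state shows):
$ terraform state list
module.azdo-module.azuredevops_project.project
module.azdo-module.azuredevops_git_repository.repo1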
Terraform 1.1 introduced a new declarative way to refactor resources: the moved block syntax.
Steps:
1. Rename the resource to the new name and record the change with a moved block:
moved {
  from = module.azdo-module.azuredevops_git_repository.repo1
  to   = module.azdo-module.azuredevops_git_repository.repo2
}
2. Update references to the new resource name.
3. Run plan and apply.
4. Remove the moved block (given no one else is using your modules).
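For illustration, here is a minimal sketch of what steps 1 and 2 might look like when the rename is done inside the module itself; the resource arguments and variable are assumptions rather than something taken from the question, and note that a moved block declared inside a module uses module-relative addresses:
# Inside azdo-module: the resource block renamed from "repo1" to "repo2".
resource "azuredevops_git_repository" "repo2" {
  project_id = var.project_id   # hypothetical variable
  name       = "repo2"

  initialization {
    init_type = "Clean"
  }
}

# Tells Terraform the existing state object now lives at the new address,
# so the plan reports a move instead of a destroy and re-create.
moved {
  from = azuredevops_git_repository.repo1
  to   = azuredevops_git_repository.repo2
}
Once the apply has gone through, the moved block can be dropped as in the last step.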
More info:
https://developer.hashicorp.com/terraform/language/modules/develop/refactoring#moved-block-syntax
I'm using Terraform and Terragrunt to create some repositories in Bitbucket. Since Bitbucket does not have an official provider, I'm using DrFaust92/bitbucket. I have done everything on my computer and could apply it all. Now I'm moving the workflow to CircleCI, and when I run it there I always get this error:
bitbucket_repository.repository: Creating...
╷
│ Error: Empty Summary: This is always a bug in the provider and should be
│ reported to the provider developers.
│
│ with bitbucket_repository.repository,
│ on main.tf line 5, in resource "bitbucket_repository" "repository":
│ 5: resource "bitbucket_repository" "repository" {
The resource doesn't have anything special:
resource "bitbucket_repository" "repository" {
name = var.name
description = var.description
owner = var.owner
project_key = var.project_key
language = var.project_language
fork_policy = var.fork_policy
is_private = true
}
I'm using Terraform 1.3.7 and Terragrunt 0.43.1 (both my computer and CircleCI run the same versions). It fails whenever it accesses any tfstate: if the tfstate already exists, it throws the error when planning; if it doesn't, the plan runs fine, but the apply fails with the same error.
Any help to fix this will be appreciated!
This is most likely an issue with the provider version. Locally, you may have a certain version downloaded/cached, while CircleCI may be fetching the latest available provider (which may have issues). I would suggest finding the provider version currently in use locally and then adding a required_providers block so CircleCI uses the same version. You can find the version presently in use from the output generated by terraform init. Below is a sample block to pin a specific provider version (taken from: https://registry.terraform.io/providers/aeirola/bitbucket/latest/docs/resources/repository).
terraform {
  required_providers {
    bitbucket = {
      source  = "DrFaust92/bitbucket"
      version = "v2.30.0"
    }
  }
}
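If you are not sure which version is cached locally, here are two ways to check (assuming terraform init has already been run in that directory):
terraform version                            # also prints the installed provider versions
grep -i -A 2 bitbucket .terraform.lock.hcl   # shows the version recorded in the lock file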
I followed Google's instructions to export my GCloud project in Terraform format. I tried using gcloud alpha and gcloud beta and the result is the same: it creates a resource named google_logging_log_sink, for which I can't find documentation in Terraform's Google Cloud Platform provider.
The commands I executed are in the following order, with + to show the generated files and folders. They worked the same using gcloud alpha and gcloud beta, and I omit sensitive data:
$> gcloud alpha resource-config bulk-export --path=terraform-export --project=PROJECT_ID --resource-format=terraform
+ ./terraform-export/...
$> gcloud beta resource-config terraform generate-import terraform-export
+ ./gcloud-export-modules.tf
+ ./terraform_import_2022MMDD-HH-mm-ss.sh
$> terraform init
+ ./.terraform/…
+ ./.terraform.lock.hcl
$> zsh ./terraform_import_2022MMDD-HH-mm-ss.sh # <- the errors are thrown here
+ ./.terraform.tfstate.lock.info
+ ./.terraform.tfstate.backup
There are specifically two errors in that script; their commands and messages are the following.
unknown resource type: google_logging_log_sink:
$> terraform import module.terraform-export-PROJECTNUMBER-PROJECTNUMBER-Project-LoggingLogSink.google_logging_log_sink.a_required PROJECTNUMBER###_Required
module.terraform-export-PROJECTNUMBER-PROJECTNUMBER-Project-LoggingLogSink.google_logging_log_sink.a_required: Importing from ID "PROJECTNUMBER###_Required"...
╷
│ Error: unknown resource type: google_logging_log_sink
│
│
╵
(I also tried adding a space in PROJECTNUMBER###_Required -> PROJECT_NUMBER ###_Required and it fails with the same message.)
Cannot import non-existent remote object:
$> terraform import module.terraform-export-projects-PROJECTID-IAMServiceAccount.google_service_account.PROJECTID projects/PROJECTID/serviceAccounts/some_service_account#PROJECTID.iam.gserviceaccount.com
module.terraform-export-projects-PROJECTID-IAMServiceAccount.google_service_account.PROJECTID: Importing from ID "projects/PROJECTID/serviceAccounts/some_service_account#PROJECTID.iam.gserviceaccount.com"...
module.terraform-export-projects-PROJECTID-IAMServiceAccount.google_service_account.PROJECTID: Import prepared!
Prepared google_service_account for import
module.terraform-export-projects-PROJECTID-IAMServiceAccount.google_service_account.PROJECTID: Refreshing state... [id=projects/PROJECTID/serviceAccounts/some_service_account#PROJECTID.iam.gserviceaccount.com]
╷
│ Error: Cannot import non-existent remote object
│
│ While attempting to import an existing object to "module.terraform-export-projects-PROJECTID-IAMServiceAccount.google_service_account.PROJECTID", the provider detected that no object exists with the given id. Only
│ pre-existing objects can be imported; check that the id is correct and that it is associated with the provider's configured region or endpoint, or use "terraform apply" to create a new remote object for this resource.
╵
Calling terraform -v shows the following versions:
Terraform v1.2.1
on darwin_amd64
+ provider registry.terraform.io/hashicorp/google v4.22.0
How do I solve these errors?
Would fixing the google_logging_log_sink error also allow the second failing command to succeed?
I have looked for documentation for the google_logging_log_sink resource but have found none, so I don't know whether I need to change it to some other resource name. I also think my Terraform CLI and Google provider versions should be fine. I couldn't find the version of the format in which gcloud is exporting the project.
As of June 2022, there is no fix. The Config Connector that backs Google Cloud's Terraform bulk-export tool needs this fix; you can expect it to be resolved in a future version.
The simple workaround for now is to skip the export only for the google_logging_log_sink resource and remove it from the generated configuration.
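For illustration, a rough sketch of that workaround; the script name mirrors the placeholder from the question and the filtered file name is hypothetical:
# Skip the unsupported google_logging_log_sink import and run the remaining imports.
grep -v google_logging_log_sink terraform_import_2022MMDD-HH-mm-ss.sh > terraform_import_filtered.sh
zsh ./terraform_import_filtered.sh
# Also remove the corresponding LoggingLogSink module from gcloud-export-modules.tf
# (and its generated directory under terraform-export/) so terraform plan does not try to create it.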
I am attempting to create new resources on GCP with a remote backend.
After running terraform init and terraform plan -out=tfplan, and then terraform apply tfplan, I get the following error:
Error: Inconsistent dependency lock file
│
│ The following dependency selections recorded in the lock file are
│ inconsistent with the configuration in the saved plan:
│ Terraform 0.13 and earlier allowed provider version constraints inside the
│ provider configuration block, but that is now deprecated and will be
│ removed in a future version of Terraform. To silence this warning, move the
│ provider version constraint into the required_providers block.
│
│ (and 22 more similar warnings elsewhere)
╵
│ - provider registry.terraform.io/hashicorp/aws: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/external: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/google-beta: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/google: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/local: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/null: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/random: required by this configuration but no version is selected
│
│ A saved plan can be applied only to the same configuration it was created
│ from. Create a new plan from the updated configuration.
╵
╷
│ Error: Inconsistent dependency lock file
│
│ The given plan file was created with a different set of external dependency
│ selections than the current configuration. A saved plan can be applied only
│ to the same configuration it was created from.
│
│ Create a new plan from the updated configuration.
On the other hand, when I run terraform init, terraform plan, and terraform apply -auto-approve, it works with no issues.
Part of the work of terraform init is to install all of the required providers into a local cache directory, .terraform/providers, so that other Terraform commands can run them. For entirely new providers, it will also update the dependency lock file to record which versions it selected, so that future runs of terraform init can guarantee to make the same decisions.
If you are running terraform apply tfplan in a different directory than where you ran terraform plan -out=tfplan then that local cache of the needed provider plugins won't be available and thus the apply will fail.
Separately from that, it also seems that when you ran terraform init prior to creating the plan, Terraform had to install some new providers that were not previously recorded in the dependency lock file, and so it updated the dependency lock file. However, when you ran terraform apply tfplan later those changes to the lock file were not visible and so Terraform reported that the current locks are inconsistent with what the plan was created from.
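For context, each provider selection is recorded in .terraform.lock.hcl as a block roughly like the one below; the version, constraint, and hash shown here are purely illustrative:
provider "registry.terraform.io/hashicorp/google" {
  version     = "4.22.0"
  constraints = ">= 4.0.0"
  hashes = [
    # checksums recorded by terraform init
    "h1:EXAMPLEHASH...",
  ]
}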
The Running Terraform in Automation guide has a section Plan and Apply on Different Machines which discusses some of the special concerns that come into play when you're trying to apply somewhere other than where you created the plan. However, I'll try to summarize the parts which seem most relevant to your situation, based on this error message.
Firstly, an up-to-date dependency lock file should be recorded in your version control system so that your automation is only reinstalling previously-selected providers and never making entirely new provider selections. That will then ensure that all of your runs use the same provider versions, and upgrades will always happen under your control.
You can make your automation detect this situation by adding the -lockfile=readonly option to terraform init, which makes that command fail if it would need to change the dependency lock file in order to perform its work:
terraform init -lockfile=readonly
If you see that fail in your automation, then the appropriate fix would be to run terraform init without -lockfile=readonly inside your development environment, and then check the updated lock file into your version control system.
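In practice that means something like the following in the development environment (the commit message is arbitrary):
terraform init                   # updates .terraform.lock.hcl if new providers are selected
git add .terraform.lock.hcl
git commit -m "Record provider selections in the lock file"
git push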
If you cannot initialize the remote backend in your development environment, you can skip that step but still install the needed providers by adding -backend=false, like this:
terraform init -backend=false
Getting the same providers reinstalled again prior to the apply step is the other part of the problem here.
The guide I linked above suggests to achieve this by archiving up the entire working directory after planning as an artifact and then re-extracting it at the same path in the apply step. That is the most thorough solution, and in particular is what Terraform Cloud does in order to ensure that any other files created on disk during planning (such as using the archive_file data source from the hashicorp/archive provider) will survive into the apply phase.
However, if you know that your configuration itself doesn't modify the filesystem during planning (which is a best practice, where possible) then it can also be valid to just re-run terraform init -lockfile=readonly before running terraform apply tfplan, which will therefore reinstall the previously-selected providers, along with all of the other working directory initialization work that terraform init usually does.
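Putting those pieces together, a rough sketch of what the two automation stages might run, assuming the lock file is committed and both stages check out the same configuration:
# Plan stage
terraform init -lockfile=readonly   # fails fast if the committed lock file is stale
terraform plan -out=tfplan

# Apply stage
terraform init -lockfile=readonly   # reinstalls the previously-selected providers
terraform apply tfplan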
As a final note, tangential to the rest of this, it seems like Terraform was also printing a warning about a deprecated language feature and on your system the warning output became interleaved with the error output, making the first message confusing because it includes a paragraph from the warning inside of it.
I believe the intended error message text, without the errant extra content from the warning, is as follows:
Error: Inconsistent dependency lock file
│
│ The following dependency selections recorded in the lock file are
│ inconsistent with the configuration in the saved plan:
│ - provider registry.terraform.io/hashicorp/aws: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/external: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/google-beta: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/google: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/local: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/null: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/random: required by this configuration but no version is selected
│
│ A saved plan can be applied only to the same configuration it was created
│ from. Create a new plan from the updated configuration.
╵
There could be various reasons, but the main one is that you may have updated your template and added new providers or modules. In my case, I updated my template and added the random module. The solution to that problem is to upgrade the lock file with the command below:
terraform init -upgrade
It will install the new providers and modules and make the lock file consistent automatically.
When you run terraform init, it generates a file called .terraform.lock.hcl. Make sure you have this file in the directory where you run terraform apply. If you are using CI/CD to run Terraform, make sure the file is also available when running terraform apply.
For example, in a GitLab CI/CD pipeline, you can add these files to the cache:
cache:
  paths:
    - .terraform
    - .terraform.lock.hcl
I configure my Terraform using a GCS backend, with a workspace. My CI environment has access to exactly the state file it requires for the workspace.
terraform {
  required_version = ">= 0.14"

  backend "gcs" {
    prefix      = "<my prefix>"
    bucket      = "<my bucket>"
    credentials = "credentials.json"
  }
}
I define the output of my terraform module inside output.tf:
output "base_api_url" {
description = "Base url for the deployed cloud run service"
value = google_cloud_run_service.api.status[0].url
}
My CI server runs terraform apply -auto-approve -lock-timeout 15m. It succeeds and shows me the output in the console logs:
Outputs:
base_api_url = "https://<my project url>.run.app"
When I call terraform output base_api_url, it gives me the following error:
│ Warning: No outputs found
│
│ The state file either has no outputs defined, or all the defined outputs
│ are empty. Please define an output in your configuration with the `output`
│ keyword and run `terraform refresh` for it to become available. If you are
│ using interpolation, please verify the interpolated value is not empty. You
│ can use the `terraform console` command to assist.
I try calling terraform refresh like it mentions in the warning and it tells me:
╷
│ Warning: Empty or non-existent state
│
│ There are currently no remote objects tracked in the state, so there is
│ nothing to refresh.
╵
I'm not sure what to do. I'm calling terraform output RIGHT after I call apply, but it's still giving me no outputs. What am I doing wrong?
I had the exact same issue, and it was happening because I was running Terraform commands against a different path than the one I was in:
terraform -chdir="another/path" apply
Running the output command would then fail with that error, unless you cd to that path before running it:
cd "another/path"
terraform output
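Alternatively, you can pass the same -chdir to the output command instead of changing directories (using the output name from the question):
terraform -chdir="another/path" output base_api_url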
I'm using Terraform v1.0.0 and created a remote backend using AWS S3 and AWS DynamoDB, as explained in Terraform: Up & Running by Yevgeniy Brikman:
I wrote the code for the S3 bucket and the DynamoDB table and created the resources via terraform apply
I added terraform { backend "s3" {} } to my code
I created a backend.hcl file with all the relevant parameters
I moved my local state to S3 by invoking terraform init -backend-config=backend.hcl
Now I want to convert the remote state back to local state so I can safely delete the remote backend. Brikman explains that, to do this, one has to remove the backend configuration and invoke terraform init. When I try this, I see the following:
$ terraform init
Initializing modules...
Initializing the backend...
╷
│ Error: Backend configuration changed
│
│ A change in the backend configuration has been detected, which may require migrating existing state.
│
│ If you wish to attempt automatic migration of the state, use "terraform init -migrate-state".
│ If you wish to store the current configuration with no changes to the state, use "terraform init -reconfigure".
╵
I figured the correct approach is to use -reconfigure which seems to work at first glance:
$ terraform init -reconfigure
Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v3.47.0
- Using previously-installed hashicorp/random v3.1.0
Terraform has been successfully initialized!
However, executing terraform plan reveals that the initialization did not succeed:
$ terraform plan
╷
│ Error: Backend initialization required, please run "terraform init"
│
│ Reason: Unsetting the previously set backend "s3"
│
│ The "backend" is the interface that Terraform uses to store state,
│ perform operations, etc. If this message is showing up, it means that the
│ Terraform configuration you're using is using a custom configuration for
│ the Terraform backend.
│
│ Changes to backend configurations require reinitialization. This allows
│ Terraform to set up the new configuration, copy existing state, etc. Please run
│ "terraform init" with either the "-reconfigure" or "-migrate-state" flags to
│ use the current configuration.
│
│ If the change reason above is incorrect, please verify your configuration
│ hasn't changed and try again. At this point, no changes to your existing
│ configuration or state have been made.
╵
The only way to unset the backend seems to be via terraform init -migrate-state:
$ terraform init -migrate-state
Initializing modules...
Initializing the backend...
Terraform has detected you're unconfiguring your previously set "s3" backend.
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "s3" backend to the
newly configured "local" backend. No existing state was found in the newly
configured "local" backend. Do you want to copy this state to the new "local"
backend? Enter "yes" to copy and "no" to start with an empty state.
Enter a value: yes
Successfully unset the backend "s3". Terraform will now operate locally.
Is it not possible to convert the state via terraform init -reconfigure despite Terraform explicitly telling me so? If so, what does terraform init -reconfigure do exactly?
The workaround below solved this problem for me.
Add the following and run terraform init:
terraform {
  backend "local" {}
}
From the official docs, it seems -reconfigure is a bit destructive in the sense that it disregards the existing config. I would think that if you made changes to the backend and then ran the command, it would work only on the assumption that this is a new config. I only recently read the docs myself and did not know that this was the behavior.
So, back to your question, I would assume -migrate-state is the desired option when migrating state between different backends. I understand from your issue that this was the case using terraform init -migrate-state?
As MindTooth said, the command init -migrate-state does exactly what you want to do:
it migrates the state unchanged when a different backend is configured.
init -reconfigure will initialise the new backend with a clean, empty state.
Another way to do it is to pull the state from the S3 backend into a JSON file, initialise an empty local backend using init -reconfigure, and then push the state back in:
terraform state pull > state.json
terraform init -reconfigure
terraform state push state.json
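As a quick sanity check afterwards, you can confirm the resources are now tracked in the local state:
terraform state list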