I've created an Azure Storage Account to be used as the backend state store for Terraform, and I was able to write to it from an Azure DevOps pipeline running Terraform commands. I can see the container in the Storage Account and confirm that it holds the state content from the pipeline execution under that same key. However, when I run Terraform "manually" against the same backend store, I get an error that the container cannot be found:
$ terraform init -backend-config="storage_account_name=<redacted>" -backend-config="container_name=auto-api-tfstate" -backend-config="access_key=<redacted>" -backend-config="key=dev-internal2/dev-internal2.tfstate:us"
Initializing modules...
Initializing the backend...
Backend configuration changed!
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.
Error: Error inspecting states in the "azurerm" backend:
storage: service returned error: StatusCode=404, ErrorCode=ContainerNotFound, ErrorMessage=The specified container does not exist.
RequestId:89a9b361-a01e-00b1-0fb4-ba5d51000000
Time:2021-10-06T13:18:41.2460433Z, RequestInitiated=Wed, 06 Oct 2021 13:18:40 GMT, RequestId=89a9b361-a01e-00b1-0fb4-ba5d51000000, API Version=2016-05-31, QueryParameterName=, QueryParameterValue=
Prior to changing backends, Terraform inspects the source and destination
states to determine what kind of migration steps need to be taken, if any.
Terraform failed to load the states. The data in both the source and the
destination remain unmodified. Please resolve the above error and try again.
My main.tf file has simply:
terraform {
  backend "azurerm" {}
}
As mentioned, this same terraform init command worked when invoked from a Bash script in an ADO pipeline, so I'm not sure what the issue may be. Any suggestions for debugging this are appreciated.
Uncovered the issue: there was state information in the local .terraform folder that conflicted with the new backend. Once I cleared that out, the terraform init command worked as expected.
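For anyone hitting the same thing, a minimal sketch of the fix, assuming you are in the project root and the remote state itself is intact (so nothing is lost by discarding the local cache):
# Remove the stale local init cache, then re-initialize against the backend
$ rm -rf .terraform
$ terraform init -backend-config="storage_account_name=<redacted>" -backend-config="container_name=auto-api-tfstate" -backend-config="access_key=<redacted>" -backend-config="key=dev-internal2/dev-internal2.tfstate:us"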
Related
I'm using Terraform v0.12.14. Whenever I run terraform init, I don't see the Terraform provider in my folder (hidden files are set to visible). Also, the plan command always ends with "No changes. Infrastructure is up-to-date.", so I am unable to create the resource group in Azure. Kindly help.
I am trying to migrate a project's CLI workspaces to Terraform Cloud. I am using Terraform version 0.14.8 and following the official guide here.
$ terraform0.14.8 workspace list
default
* development
production
staging
Currently, the project uses the S3 remote state backend configuration:
terraform {
  backend "s3" {
    profile              = "..."
    key                  = "..."
    workspace_key_prefix = "environments"
    region               = "us-east-1"
    bucket               = "terraform-state-bucketA"
    dynamodb_table       = "terraform-state-bucketA"
    encrypt              = true
  }
}
I changed the backend configuration to:
backend "remote" {
hostname = "app.terraform.io"
organization = "orgA"
workspaces {
prefix = "happyproject-"
}
}
and executed terraform0.14.8 init to begin the state migration process. The expected behaviour would be to create 3 workspaces in Terraform Cloud:
happyproject-development
happyproject-staging
happyproject-production
However, I get the following error:
$ terraform0.14.8 init
Initializing modules...
Initializing the backend...
Backend configuration changed!
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.
Terraform detected that the backend type changed from "s3" to "remote".
Error: Error looking up workspace
Workspace read failed: invalid value for workspace
I also enabled TRACE-level logs, and just before it throws the error I can see this:
2021/03/23 10:08:03 [TRACE] backend/remote: looking up workspace for orgA/.
Notice the empty string after orgA/ and the omission of the prefix! I am guessing that TF tries to query Terraform Cloud for the default workspace, which is an empty string, and it fails to do so.
I have not been using the default workspace at all and it just appears when I am executing terraform0.14.8 init. The guide mentions:
Some backends, including the default local backend, allow a special default workspace that doesn't have a specific name. If you previously used a combination of named workspaces and the special default workspace, the prompt will next ask you to choose a new name for the default workspace, since Terraform Cloud doesn't support unnamed workspaces:
However, it never prompts me to choose a name for the default workspace. Any help would be much appreciated!
I had a similar issue, and what helped me was to create the empty workspace with the expected name in advance and then run terraform init.
I also copied the .tfstate file from the remote location to the root directory of the project before running init. Hope this helps you as well.
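If you need that local copy of the state, one way to get it (assuming the old backend is still initialized) is to pull it rather than download it by hand:
# Write the current remote state to a local file in the project root
$ terraform state pull > terraform.tfstate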
What I ended up doing was:
Created the empty workspaces in Terraform Cloud.
For every CLI workspace, I pointed the backend to the respective TFC workspace (see the sketch below) and executed terraform init. That way, the Terraform state was automatically migrated from the S3 backend to TFC.
Finally, after all CLI workspaces were migrated, I switched to the prefix argument of the workspaces block instead of the name argument to manage the different TFC workspaces.
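For example, the intermediate per-workspace backend block might have looked like this (a sketch based on the names above, not the exact configuration used):
backend "remote" {
  hostname     = "app.terraform.io"
  organization = "orgA"

  # Point at exactly one named TFC workspace, e.g. the one migrated from the
  # "development" CLI workspace; repeat per workspace before re-running init.
  workspaces {
    name = "happyproject-development"
  }
}
Once every workspace's state had been migrated this way, the workspaces block went back to prefix = "happyproject-".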
I'm in the middle of a Terraform code refactoring in which some resources are moved into a module A, and a module B becomes a submodule of A. I'm now getting this error in Terraform Enterprise:
Error: Provider configuration not present
To work with
module.account-baseline.module.iam-policy.aws_iam_role.ops_role
its original provider configuration at module.account-baseline.provider.aws is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.account-baseline.module.iam-policy.aws_iam_role.ops_role,
after which you can remove the provider configuration again.
In my playground account, using a local Terraform state, I've run terraform state mv commands to move the module into a sub-module, and it works, but I don't know how to apply this state change to Terraform Enterprise.
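For reference, the local move was along these lines (the addresses are illustrative, taken from the error message above):
# Move the resource's state entry from its old module address to the new submodule address
$ terraform state mv \
    module.account-baseline.aws_iam_role.ops_role \
    module.account-baseline.module.iam-policy.aws_iam_role.ops_role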
Any help would be more than welcome, thanks in advance!
I am running an application inside a pod in AKS that provisions an AWS service using Terraform. If that pod is deleted or stopped while provisioning is in progress, the Terraform state file ends up corrupted.
When I try provisioning again using that state file, the apply fails: some of the resources were provisioned but were never recorded in the state file. I get the following error:
Error: Error applying plan:
1 error(s) occurred:
* aws_s3_bucket.examplebucket: 1 error(s) occurred:
* aws_s3_bucket.examplebucket: Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
status code: 409
So how do I update the state file so I can use it again?
I'm not sure the error is related to Kubernetes resources and pods.
But if you need to refresh/recreate the bucket, you can taint it:
terraform taint aws_s3_bucket.examplebucket
terraform plan
terraform apply
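Be aware that tainting marks the bucket for destruction and recreation on the next apply, so its contents would be lost. On Terraform 0.15.2 and later, the same effect is available in one step via terraform apply -replace=aws_s3_bucket.examplebucket.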
Let me know if this is helpful or not.
If Terraform tries to create something that already exists, you will need to import the resource into Terraform.
Every kind of Terraform resource, in this case an aws_s3_bucket, lists at the bottom of its documentation page how to import it.
In this case, the following command should do the trick (note the resource address matches the one in the error above):
terraform import aws_s3_bucket.examplebucket BUCKETNAME
Replace BUCKETNAME with the name of your bucket.
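After a successful import, terraform plan should no longer try to create the bucket; it will only show whatever drift exists between the real bucket and your configuration.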
I have been using the below to successfully create a backend state file for Terraform in Azure Storage, but for some reason it's stopped working. I've recycled the storage access keys, tried both of them, and get the same error every time.
backend.tf
terraform {
  backend "azurerm" {
    storage_account_name = "terraformstorage"
    resource_group_name  = "automation"
    container_name       = "terraform"
    key                  = "testautomation.terraform.tfstate"
    access_key           = "<storage key>"
  }
}
Error returned
terraform init
Initializing the backend...
Successfully configured the backend "azurerm"! Terraform will automatically
use this backend unless the backend configuration changes.
Error refreshing state: storage: service returned error: StatusCode=403, ErrorCode=AuthenticationFailed, ErrorMessage=Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId:665e0067-b01e-007a-6084-97da67000000
Time:2018-12-19T10:18:18.7148241Z, RequestInitiated=Wed, 19 Dec 2018 10:18:18 GMT, RequestId=665e0067-b01e-007a-6084-97da67000000, API Version=, QueryParameterName=, QueryParameterValue=
Any ideas what I'm doing wrong?
What worked for me is to delete the local .terraform folder and try again.
Another possible cause is clock skew.
I experienced these problems as well and tried all of the above-mentioned steps, but nothing helped.
What happened on my system (Windows 10, WSL2) was that WSL lost its time sync and my clock was hours off. This behaviour is described in https://github.com/microsoft/WSL/issues/4245.
For me it helped to:
set the correct time in WSL (sudo hwclock -s), and
reboot WSL (see the commands below).
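A sketch of those two steps (hwclock needs root inside WSL; the shutdown command runs on the Windows side):
# Inside WSL: sync the system clock from the hardware clock
$ sudo hwclock -s
# From Windows (PowerShell or cmd): stop all WSL instances so they restart fresh
> wsl --shutdown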
Hope this will help others too.
Here are a few suggestions:
Run: terraform init -reconfigure.
Confirm your "terraform/backend" credentials.
In case your Terraform configuration contains azurerm_storage_account network_rules that allow only certain IP addresses, make sure you're connecting from one of them (e.g. that you're on the right VPN).
If the above doesn't work, run TF_LOG=TRACE terraform init to debug further.
Please ensure you've been authenticated properly to Azure Cloud.
If you're running Terraform externally, re-run: az login.
If you're running Terraform on an Azure instance, you can use a managed identity by defining the following environment variables:
ARM_USE_MSI=true
ARM_SUBSCRIPTION_ID=xxx-yyy-zzz
ARM_TENANT_ID=xxx-yyy-zzz
or just run az login --identity, then assign the right role (azurerm_role_assignment, e.g. "Contributor") and appropriate policies (azurerm_policy_definition).
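As a minimal sketch, the environment-variable route on such an instance looks like this (the IDs are placeholders):
# Tell the azurerm provider/backend to authenticate via the managed identity
$ export ARM_USE_MSI=true
$ export ARM_SUBSCRIPTION_ID="xxx-yyy-zzz"
$ export ARM_TENANT_ID="xxx-yyy-zzz"
$ terraform init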
See also:
Azure Active Directory Provider: Authenticating using Managed Service Identity.
Unable to programmatically get the keys for Azure Storage Account.
There should be a .terraform directory in the location you are running the terraform init command from.
Remove .terraform or move it to some other name (see the sketch below). The next time terraform init runs, it will recreate that directory with a fresh init.
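A sketch of the non-destructive variant, which keeps the old directory around in case you need to inspect it:
# Move the stale local directory aside, then re-initialize
$ mv .terraform .terraform.old
$ terraform init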