Unable to import AWS infrastructure configuration using terraformer - terraform

I am trying to import an existing AWS infrastructure configuration using Google's terraformer, but I am unsuccessful due to an AWS provider authentication problem. My AWS credentials are MFA-enabled, so I have to use a session token, and I could not find an option that lets terraformer use AWS session token parameters.
Here are the debug logs for the terraformer run. Could someone help me with this, please? The command below generates empty .tf files and states.
Master $ terraformer import aws --resources=vpc --regions=eu-central-1 -c -v
2020/06/02 23:17:53 aws importing region eu-central-1
2020/06/02 23:17:53 aws importing... vpc
2020-06-02T23:17:53.525+0530 [INFO] plugin: configuring client automatic mTLS
2020-06-02T23:17:53.593+0530 [DEBUG] plugin: starting plugin: path=.terraform/plugins/darwin_amd64/terraform-provider-aws_v2.64.0_x4 args=[.terraform/plugins/darwin_amd64/terraform-provider-aws_v2.64.0_x4]
2020-06-02T23:17:53.597+0530 [DEBUG] plugin: plugin started: path=.terraform/plugins/darwin_amd64/terraform-provider-aws_v2.64.0_x4 pid=47500
2020-06-02T23:17:53.597+0530 [DEBUG] plugin: waiting for RPC address: path=.terraform/plugins/darwin_amd64/terraform-provider-aws_v2.64.0_x4
2020-06-02T23:17:54.254+0530 [INFO] plugin.terraform-provider-aws_v2.64.0_x4: configuring server automatic mTLS: timestamp=2020-06-02T23:17:54.253+0530
2020-06-02T23:17:54.329+0530 [DEBUG] plugin: using plugin: version=5
2020-06-02T23:17:54.329+0530 [DEBUG] plugin.terraform-provider-aws_v2.64.0_x4: plugin address: network=unix address=/var/folders/jj/2w6phyrs1fj68ks7ry714z000000gn/T/plugin871781403 timestamp=2020-06-02T23:17:54.328+0530
2020-06-02T23:17:54.586+0530 [DEBUG] plugin.terraform-provider-aws_v2.64.0_x4: 2020/06/02 23:17:54 [INFO] No assume_role block read from configuration
2020-06-02T23:17:54.586+0530 [DEBUG] plugin.terraform-provider-aws_v2.64.0_x4: 2020/06/02 23:17:54 [INFO] Building AWS auth structure
2020-06-02T23:17:54.586+0530 [DEBUG] plugin.terraform-provider-aws_v2.64.0_x4: 2020/06/02 23:17:54 [INFO] Setting AWS metadata API timeout to 100ms
2020-06-02T23:17:56.003+0530 [DEBUG] plugin.terraform-provider-aws_v2.64.0_x4: 2020/06/02 23:17:55 [INFO] Ignoring AWS metadata API endpoint at default location as it doesn't return any instance-id
2020-06-02T23:17:56.010+0530 [DEBUG] plugin.terraform-provider-aws_v2.64.0_x4: 2020/06/02 23:17:56 [INFO] AWS Auth provider used: "EnvProvider"
2020-06-02T23:17:56.013+0530 [DEBUG] plugin.terraform-provider-aws_v2.64.0_x4: 2020/06/02 23:17:56 [DEBUG] Trying to get account information via sts:GetCallerIdentity
2020-06-02T23:17:57.577+0530 [DEBUG] plugin.terraform-provider-aws_v2.64.0_x4: 2020/06/02 23:17:57 [DEBUG] Trying to get account information via sts:GetCallerIdentity
2020-06-02T23:17:59.652+0530 [DEBUG] plugin: plugin process exited: path=.terraform/plugins/darwin_amd64/terraform-provider-aws_v2.64.0_x4 pid=47500
2020-06-02T23:17:59.652+0530 [DEBUG] plugin: plugin exited
2020/06/02 23:17:59 aws Connecting....
2020/06/02 23:17:59 aws save vpc
2020/06/02 23:17:59 aws save tfstate for vpc

I managed to resolve the problem by explicitly setting the environment variable AWS_SHARED_CREDENTIALS_FILE=~/.aws/credential.
Without this additional environment variable my setup failed.
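For anyone else hitting this with MFA, a minimal sketch of the environment the provider's EnvProvider picks up (the variable names are the standard AWS SDK ones; the values are placeholders for temporary credentials from an aws sts get-session-token call):
# temporary credentials issued for the MFA session (placeholders)
export AWS_ACCESS_KEY_ID="ASIA...placeholder"
export AWS_SECRET_ACCESS_KEY="placeholder"
export AWS_SESSION_TOKEN="placeholder"
# explicit path to the shared credentials file, as in the fix above (default path shown)
export AWS_SHARED_CREDENTIALS_FILE=~/.aws/credentials
terraformer import aws --resources=vpc --regions=eu-central-1 -c -v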

Related

Import existing Databricks user in Terraform

I am trying to import an existing user using terraform import but am getting an import error. The detailed log is as follows.
terraform import databricks_user.user user#company.com
2022-01-23T12:28:18.894-0800 [INFO] Terraform version: 1.1.4
2022-01-23T12:28:18.894-0800 [INFO] Go runtime version: go1.17.6
2022-01-23T12:28:18.894-0800 [INFO] CLI args: []string{"terraform", "import", "databricks_user.user", "user#company.com"}
2022-01-23T12:28:18.895-0800 [INFO] CLI command args: []string{"import", "databricks_user.user", "user#company.com"}
2022-01-23T12:28:18.906-0800 [INFO] Attempting to use session-derived credentials
2022-01-23T12:28:19.590-0800 [INFO] Successfully derived credentials from session
2022-01-23T12:28:19.590-0800 [INFO] AWS Auth provider used: "SSOProvider"
2022-01-23T12:28:21.940-0800 [INFO] provider: configuring client automatic mTLS
2022-01-23T12:28:21.982-0800 [INFO] provider.terraform-provider-databricks_v0.4.5: configuring server automatic mTLS: timestamp=2022-01-23T12:28:21.982-0800
2022-01-23T12:28:22.075-0800 [ERROR] AttachSchemaTransformer: No provider config schema available for provider["terraform.io/builtin/terraform"]
2022-01-23T12:28:22.075-0800 [INFO] ReferenceTransformer: reference not found: "count.index"
2022-01-23T12:28:22.080-0800 [INFO] provider: configuring client automatic mTLS
2022-01-23T12:28:22.117-0800 [INFO] provider.terraform-provider-databricks_v0.4.5: configuring server automatic mTLS: timestamp=2022-01-23T12:28:22.117-0800
2022-01-23T12:28:22.209-0800 [WARN] ValidateProviderConfig from "provider[\"registry.terraform.io/databrickslabs/databricks\"]" changed the config value, but that value is unused
2022-01-23T12:28:22.210-0800 [INFO] provider.terraform-provider-databricks_v0.4.5: Explicit and implicit attributes: host, token: timestamp=2022-01-23T12:28:22.210-0800
databricks_user.user: Importing from ID "user#company.com"...
2022-01-23T12:28:22.212-0800 [INFO] provider.terraform-provider-databricks_v0.4.5: Using directly configured PAT authentication: timestamp=2022-01-23T12:28:22.212-0800
2022-01-23T12:28:22.213-0800 [INFO] provider.terraform-provider-databricks_v0.4.5: Configured pat auth: host=https://company.cloud.databricks.com, token=***REDACTED***: timestamp=2022-01-23T12:28:22.213-0800
2022-01-23T12:28:22.577-0800 [WARN] provider.terraform-provider-databricks_v0.4.5: /api/2.0/preview/scim/v2/Users/user#company.com:405 - Endpoint not supported.: timestamp=2022-01-23T12:28:22.577-0800
2022-01-23T12:28:22.578-0800 [WARN] provider.terraform-provider-databricks_v0.4.5: /api/2.0/preview/scim/v2/Users/user#company.com:405 - Endpoint not supported.: timestamp=2022-01-23T12:28:22.578-0800
databricks_user.user: Import prepared!
Prepared databricks_user for import
databricks_user.user: Refreshing state... [id=user#company.com]
2022-01-23T12:28:22.832-0800 [WARN] provider.terraform-provider-databricks_v0.4.5: /api/2.0/preview/scim/v2/Users/user#company.com:405 - Endpoint not supported.: timestamp=2022-01-23T12:28:22.832-0800
2022-01-23T12:28:22.833-0800 [WARN] provider.terraform-provider-databricks_v0.4.5: /api/2.0/preview/scim/v2/Users/user#company.com:405 - Endpoint not supported.: timestamp=2022-01-23T12:28:22.832-0800
2022-01-23T12:28:22.837-0800 [ERROR] vertex "import databricks_user.user result" error: cannot read user: Endpoint not supported.
2022-01-23T12:28:22.837-0800 [ERROR] vertex "databricks_user.user (import id \"user#company.com\")" error: cannot read user: Endpoint not supported.
╷
│ Error: cannot read user: Endpoint not supported.
│
│
╵
Any suggestions on what I am doing wrong?
Edit - Here is the corresponding terraform resource and provider block
resource "databricks_user" "user" {}
provider "databricks" {
host = "https://company.cloud.databricks.com"
token = "xxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
As per the Databricks provider documentation, the only required argument is the user_name [1]. So the block of code you are using to import the user:
resource "databricks_user" "user" {}
is not valid. Unfortunately, Terraform still does not generate the resource configuration for you when importing, so you have to provide it yourself. In your case that would be:
resource "databricks_user" "user" {
user_name = "user#company.com"
}
In the documentation, the command to import the user is:
terraform import databricks_user.me <user-id>
Make sure to check whether the <user-id> is the same thing as the user name or whether you need to provide the numeric ID instead. On their website [2], I can see this:
<user-id> with the Databricks workspace ID of the user, for example 2345678901234567. To get the user ID, call Get users.
[1] https://registry.terraform.io/providers/databrickslabs/databricks/latest/docs/resources/user
[2] https://docs.databricks.com/dev-tools/api/latest/scim/scim-users.html#get-users
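If the email form of the ID keeps returning the 405, a hedged example of importing by the numeric user ID instead (the ID below is just the placeholder from the Databricks docs; look up your own via the Get users call):
terraform import databricks_user.user 2345678901234567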

Local TFstate does not migrate to remote location (Azure Blob)

I am using Azure Blob storage for my state. I followed these steps (https://github.com/hashicorp/terraform-cdk/blob/main/docs/working-with-cdk-for-terraform/remote-backend.md#migrating-local-state-storage-to-remote); the only difference is that I am using the AzurermBackend. The problem is that when I run terraform init, it does not migrate the existing state to the blob; it just creates a new one containing no resources, so when I execute cdktf diff, Terraform says it needs to create every resource that already exists in the local state. I checked the file in the blob and it is empty. I also tried stack.addOverride, but that doesn't work either. Next, I ran TF_LOG=DEBUG terraform init and got the following logs:
2021-12-20T16:00:03.228+0100 [DEBUG] Adding temp file log sink: /tmp/terraform-log769761292
2021-12-20T16:00:03.228+0100 [INFO] Terraform version: 1.0.9
2021-12-20T16:00:03.228+0100 [INFO] Go runtime version: go1.16.4
2021-12-20T16:00:03.228+0100 [INFO] CLI args: []string{"/usr/bin/terraform", "init"}
2021-12-20T16:00:03.228+0100 [DEBUG] Attempting to open CLI config file: /home/shurbeski/.terraformrc
2021-12-20T16:00:03.228+0100 [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
2021-12-20T16:00:03.228+0100 [DEBUG] ignoring non-existing provider search directory terraform.d/plugins
2021-12-20T16:00:03.228+0100 [DEBUG] ignoring non-existing provider search directory /home/shurbeski/.terraform.d/plugins
2021-12-20T16:00:03.228+0100 [DEBUG] ignoring non-existing provider search directory /home/shurbeski/.local/share/terraform/plugins
2021-12-20T16:00:03.228+0100 [DEBUG] ignoring non-existing provider search directory /usr/share/ubuntu/terraform/plugins
2021-12-20T16:00:03.228+0100 [DEBUG] ignoring non-existing provider search directory /usr/local/share/terraform/plugins
2021-12-20T16:00:03.228+0100 [DEBUG] ignoring non-existing provider search directory /usr/share/terraform/plugins
2021-12-20T16:00:03.228+0100 [DEBUG] ignoring non-existing provider search directory /var/lib/snapd/desktop/terraform/plugins
2021-12-20T16:00:03.228+0100 [INFO] CLI command args: []string{"init"}
Initializing the backend...
2021-12-20T16:00:03.229+0100 [DEBUG] New state was assigned lineage "2abdb28d-45b7-02a5-d5b1-851b3c446ef3"
2021-12-20T16:00:03.229+0100 [DEBUG] checking for provisioner in "."
2021-12-20T16:00:03.233+0100 [DEBUG] checking for provisioner in "/usr/bin"
2021-12-20T16:00:03.233+0100 [INFO] Failed to read plugin lock file .terraform/plugins/linux_amd64/lock.json: open .terraform/plugins/linux_amd64/lock.json: no such file or directory
2021-12-20T16:00:03.233+0100 [DEBUG] New state was assigned lineage "ea01857e-a1b7-080a-dda5-a5081c10f48b"
Actually it just creates a new state, so I tried TF_LOG=DEBUG terraform init -migrate-state and got the following logs:
2021-12-20T16:08:07.541+0100 [DEBUG] Adding temp file log sink: /tmp/terraform-log411077971
2021-12-20T16:08:07.541+0100 [INFO] Terraform version: 1.0.9
2021-12-20T16:08:07.541+0100 [INFO] Go runtime version: go1.16.4
2021-12-20T16:08:07.541+0100 [INFO] CLI args: []string{"/usr/bin/terraform", "init", "-migrate-state"}
2021-12-20T16:08:07.541+0100 [DEBUG] Attempting to open CLI config file: /home/shurbeski/.terraformrc
2021-12-20T16:08:07.541+0100 [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
2021-12-20T16:08:07.541+0100 [DEBUG] ignoring non-existing provider search directory terraform.d/plugins
2021-12-20T16:08:07.541+0100 [DEBUG] ignoring non-existing provider search directory /home/shurbeski/.terraform.d/plugins
2021-12-20T16:08:07.541+0100 [DEBUG] ignoring non-existing provider search directory /home/shurbeski/.local/share/terraform/plugins
2021-12-20T16:08:07.541+0100 [DEBUG] ignoring non-existing provider search directory /usr/share/ubuntu/terraform/plugins
2021-12-20T16:08:07.541+0100 [DEBUG] ignoring non-existing provider search directory /usr/local/share/terraform/plugins
2021-12-20T16:08:07.542+0100 [DEBUG] ignoring non-existing provider search directory /usr/share/terraform/plugins
2021-12-20T16:08:07.542+0100 [DEBUG] ignoring non-existing provider search directory /var/lib/snapd/desktop/terraform/plugins
2021-12-20T16:08:07.542+0100 [INFO] CLI command args: []string{"init", "-migrate-state"}
Initializing the backend...
2021-12-20T16:08:07.543+0100 [DEBUG] New state was assigned lineage "4af0afde-830e-1836-4bb8-4013609be0ad"
2021-12-20T16:08:07.970+0100 [DEBUG] checking for provisioner in "."
2021-12-20T16:08:07.974+0100 [DEBUG] checking for provisioner in "/usr/bin"
2021-12-20T16:08:07.974+0100 [INFO] Failed to read plugin lock file .terraform/plugins/linux_amd64/lock.json: open .terraform/plugins/linux_amd64/lock.json: no such file or directory
2021-12-20T16:08:07.975+0100 [DEBUG] New state was assigned lineage "472594f8-73dc-abe6-3691-5c7bddfb715e"
Even this didn't work.
The only thing that works is manually copying the tfstate file into the blob, but I don't like that approach.
Any ideas on how to get Terraform to ask me whether I want to migrate my pre-existing tfstate?
This is my code in the cdktf stack:
// new AzurermBackend(mystack, {
//   storageAccountName: "cdkremotebackendtest",
//   containerName: "test1",
//   subscriptionId: "",
//   key: "terraform.tfcdk-demo.tfstate",
//   accessKey: "",
// });
You also need to specify a backend block in the main Terraform configuration. If you don't specify one, Terraform assumes the local backend, so no migration happens. Something like this:
terraform {
  required_providers {
    --------------------
  }
  backend "azurerm" {
    resource_group_name  = "cloud"
    storage_account_name = "cdkremotebackendtest"
    container_name       = "test1"
    key                  = "terraform.tfcdk-demo.tfstate"
  }
}
More info on backends: https://www.terraform.io/language/settings/backends/configuration
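Once the backend block ends up in the synthesized configuration, a sketch of the migration step (assuming cdktf synthesizes into the usual cdktf.out/stacks/<stack-name> directory; adjust the path to your setup):
# run from the directory that contains the synthesized configuration and the local terraform.tfstate
cd cdktf.out/stacks/<stack-name>
# answer "yes" when prompted to copy the existing state to the new "azurerm" backend
terraform init -migrate-state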

Azure Pipelines: Terraform Apply fails when given a tfplan

I'm trying to use Terraform with Azure Pipelines. I use the 0.12.24 version of Terraform.
The steps are the basics:
Install Terraform 0.12.24,
Terraform 'init -reconfigure',
Terraform 'plan -out=$(Agent.TempDirectory)/my.tfplan',
Terraform 'apply'
Everything goes smoothly until step 4. If I specify the tfplan file ($(Agent.TempDirectory)/my.tfplan), this step fails. If I don't, deployment ends successfully.
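For context, the equivalent CLI sequence the tasks run is roughly the following (a sketch; the working directory is an assumption to adapt to your pipeline, and a saved plan has to be applied from the same directory, with the same .terraform directory, in which it was created):
# all steps run in the same Terraform working directory
cd <terraform working directory>
terraform init -reconfigure
terraform plan -out=$(Agent.TempDirectory)/my.tfplan
# no extra -var/-var-file flags are accepted once a saved plan is given
terraform apply -auto-approve $(Agent.TempDirectory)/my.tfplan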
Here is the execution trace (generated by TF_LOG=TRACE):
##[section]Starting: terraform apply
==============================================================================
Task : Terraform CLI
Description : Execute terraform cli commands
Version : 0.5.2
Author : Charles Zipp
Help :
==============================================================================
[command]C:\hostedtoolcache\windows\terraform\0.12.24\x64\terraform.exe version
2020/04/27 16:56:39 [INFO] Terraform version: 0.12.24
2020/04/27 16:56:39 [INFO] Go runtime version: go1.12.13
2020/04/27 16:56:39 [INFO] CLI args: []string{"C:\\hostedtoolcache\\windows\\terraform\\0.12.24\\x64\\terraform.exe", "version"}
2020/04/27 16:56:39 [DEBUG] Attempting to open CLI config file: C:\Users\VssAdministrator\AppData\Roaming\terraform.rc
2020/04/27 16:56:39 [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
2020/04/27 16:56:39 [INFO] CLI command args: []string{"version"}
Terraform v0.12.24
2020/04/27 16:56:39 [DEBUG] checking for provider in "."
2020/04/27 16:56:39 [DEBUG] checking for provider in "C:\\hostedtoolcache\\windows\\terraform\\0.12.24\\x64"
2020/04/27 16:56:39 [DEBUG] checking for provider in ".terraform\\plugins\\windows_amd64"
2020/04/27 16:56:39 [DEBUG] found provider "terraform-provider-azurerm_v2.4.0_x5.exe"
2020/04/27 16:56:39 [DEBUG] found valid plugin: "azurerm", "2.4.0", "D:\\a\\r1\\a\\Build\\drop\\terraform\\.terraform\\plugins\\windows_amd64\\terraform-provider-azurerm_v2.4.0_x5.exe"
+ provider.azurerm v2.4.0
[command]C:\hostedtoolcache\windows\terraform\0.12.24\x64\terraform.exe apply -auto-approve D:\a\_temp/my.tfplan
2020/04/27 16:56:40 [INFO] Terraform version: 0.12.24
2020/04/27 16:56:40 [INFO] Go runtime version: go1.12.13
2020/04/27 16:56:40 [INFO] CLI args: []string{"C:\\hostedtoolcache\\windows\\terraform\\0.12.24\\x64\\terraform.exe", "apply", "-auto-approve", "D:\\a\\_temp/my.tfplan"}
2020/04/27 16:56:40 [DEBUG] Attempting to open CLI config file: C:\Users\VssAdministrator\AppData\Roaming\terraform.rc
2020/04/27 16:56:40 [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
2020/04/27 16:56:40 [INFO] CLI command args: []string{"apply", "-auto-approve", "D:\\a\\_temp/my.tfplan"}
##[error]Terraform command 'apply' failed with exit code '1'.
##[section]Finishing: terraform apply
I've tried this with the two plugins available (the one from MSFT and the other from Charles Zipp).
Any question, input or suggestion is very much welcome.
Thank you for your time :)

Azure-backed Terraform: Error building account

I suddenly and unexpectedly got the following error when executing terraform plan.
Error: Error building account: Error getting authenticated object ID: Error parsing json result from the Azure CLI: Error retrieving running Azure CLI: Unable to encode the output with ANSI_X3.4-1968 encoding. Unsupported characters are discarded.
on main.tf line 4, in provider "azurerm":
4: provider "azurerm" {
The log near the error looks like this:
2020-04-14T10:22:53.257Z [DEBUG] plugin.terraform-provider-azurerm_v2.5.0_x5: Testing if Service Principal / Client Certificate is applicable for Authentication..
2020-04-14T10:22:53.257Z [DEBUG] plugin.terraform-provider-azurerm_v2.5.0_x5: Testing if Multi Tenant Service Principal / Client Secret is applicable for Authentication..
2020-04-14T10:22:53.257Z [DEBUG] plugin.terraform-provider-azurerm_v2.5.0_x5: Testing if Service Principal / Client Secret is applicable for Authentication..
2020-04-14T10:22:53.257Z [DEBUG] plugin.terraform-provider-azurerm_v2.5.0_x5: Testing if Managed Service Identity is applicable for Authentication..
2020-04-14T10:22:53.257Z [DEBUG] plugin.terraform-provider-azurerm_v2.5.0_x5: Testing if Obtaining a token from the Azure CLI is applicable for Authentication..
2020-04-14T10:22:53.257Z [DEBUG] plugin.terraform-provider-azurerm_v2.5.0_x5: Using Obtaining a token from the Azure CLI for Authentication
2020-04-14T10:22:53.258Z [DEBUG] plugin.terraform-provider-azurerm_v2.5.0_x5: [DEBUG] Resource "https://management.core.windows.net/" isn't for the correct Tenant
2020/04/14 10:22:54 [ERROR] <root>: eval: *terraform.EvalConfigProvider, err: Error building account: Error getting authenticated object ID: Error parsing json result from the Azure CLI: Error retrieving running
Azure CLI: Unable to encode the output with ANSI_X3.4-1968 encoding. Unsupported characters are discarded.
2020/04/14 10:22:54 [ERROR] <root>: eval: *terraform.EvalSequence, err: Error building account: Error getting authenticated object ID: Error parsing json result from the Azure CLI: Error retrieving running Azure
CLI: Unable to encode the output with ANSI_X3.4-1968 encoding. Unsupported characters are discarded.
2020/04/14 10:22:54 [ERROR] <root>: eval: *terraform.EvalOpFilter, err: Error building account: Error getting authenticated object ID: Error parsing json result from the Azure CLI: Error retrieving running Azure
CLI: Unable to encode the output with ANSI_X3.4-1968 encoding. Unsupported characters are discarded.
2020/04/14 10:22:54 [ERROR] <root>: eval: *terraform.EvalSequence, err: Error building account: Error getting authenticated object ID: Error parsing json result from the Azure CLI: Error retrieving running Azure
CLI: Unable to encode the output with ANSI_X3.4-1968 encoding. Unsupported characters are discarded.
2020/04/14 10:22:54 [TRACE] [walkRefresh] Exiting eval tree: provider.azurerm
2020/04/14 10:22:54 [TRACE] vertex "provider.azurerm": visit complete
2020/04/14 10:22:54 [TRACE] dag/walk: upstream of "azurerm_cosmosdb_mongo_database.cupi" errored, so skipping
2020/04/14 10:22:54 [TRACE] dag/walk: upstream of "azurerm_log_analytics_workspace.law-cupi" errored, so skipping
2020/04/14 10:22:54 [TRACE] dag/walk: upstream of "azurerm_cosmosdb_account.cosmodb_account" errored, so skipping
2020/04/14 10:22:54 [TRACE] dag/walk: upstream of "azurerm_cosmosdb_mongo_collection.customer" errored, so skipping
2020/04/14 10:22:54 [TRACE] dag/walk: upstream of "azurerm_resource_group.rg-cupi" errored, so skipping
2020/04/14 10:22:54 [TRACE] dag/walk: upstream of "azurerm_log_analytics_solution.las-cupi" errored, so skipping
2020/04/14 10:22:54 [TRACE] dag/walk: upstream of "azurerm_kubernetes_cluster.aks-cupi" errored, so skipping
2020/04/14 10:22:54 [TRACE] dag/walk: upstream of "azurerm_cosmosdb_mongo_collection.deactivationRequest" errored, so skipping
2020/04/14 10:22:54 [TRACE] dag/walk: upstream of "azurerm_cosmosdb_mongo_collection.customerHash" errored, so skipping
2020/04/14 10:22:54 [TRACE] dag/walk: upstream of "azurerm_cosmosdb_mongo_collection.apiAuth" errored, so skipping
2020/04/14 10:22:54 [TRACE] dag/walk: upstream of "provider.azurerm (close)" errored, so skipping
2020/04/14 10:22:54 [TRACE] dag/walk: upstream of "root" errored, so skipping
And here is my Terraform version output:
$ terraform version
2020/04/14 10:24:24 [INFO] Terraform version: 0.12.24
2020/04/14 10:24:24 [INFO] Go runtime version: go1.12.13
2020/04/14 10:24:24 [INFO] CLI args: []string{"/usr/bin/terraform", "version"}
2020/04/14 10:24:24 [DEBUG] Attempting to open CLI config file: /root/.terraformrc
2020/04/14 10:24:24 [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
2020/04/14 10:24:24 [INFO] CLI command args: []string{"version"}
Terraform v0.12.24
2020/04/14 10:24:24 [DEBUG] checking for provider in "."
2020/04/14 10:24:24 [DEBUG] checking for provider in "/usr/bin"
2020/04/14 10:24:24 [DEBUG] checking for provider in ".terraform/plugins/linux_amd64"
2020/04/14 10:24:24 [DEBUG] found provider "terraform-provider-azuread_v0.8.0_x4"
2020/04/14 10:24:24 [DEBUG] found provider "terraform-provider-azurerm_v2.5.0_x5"
2020/04/14 10:24:24 [DEBUG] found provider "terraform-provider-random_v2.2.1_x4"
2020/04/14 10:24:24 [DEBUG] found valid plugin: "azurerm", "2.5.0", "/cupi/operations/terraform/.terraform/plugins/linux_amd64/terraform-provider-azurerm_v2.5.0_x5"
2020/04/14 10:24:24 [DEBUG] found valid plugin: "random", "2.2.1", "/cupi/operations/terraform/.terraform/plugins/linux_amd64/terraform-provider-random_v2.2.1_x4"
2020/04/14 10:24:24 [DEBUG] found valid plugin: "azuread", "0.8.0", "/cupi/operations/terraform/.terraform/plugins/linux_amd64/terraform-provider-azuread_v0.8.0_x4"
+ provider.azuread v0.8.0
+ provider.azurerm v2.5.0
+ provider.random v2.2.1
And finally my az CLI version:
$ az --version
azure-cli 2.3.1
command-modules-nspkg 2.0.3
core 2.3.1
nspkg 3.0.4
telemetry 1.0.4
Python location '/opt/az/bin/python3'
Extensions directory '/root/.azure/cliextensions'
Python (Linux) 3.6.5 (default, Apr 1 2020, 07:19:45)
[GCC 7.5.0]
Legal docs and information: aka.ms/AzureCliLegal
My main.tf file:
provider "azuread" {
version = "~>0.8"
}
provider "azurerm" {
version = "~>2"
subscription_id = "..."
features {}
}
terraform {
backend "azurerm" {}
}
I have also read the threads below, none of which helped or resolved my issue. The same config that doesn't work today worked with no modification a couple of days ago (the only thing that could have changed on the client side are the plugin versions - I tried up/downgrades, but with no success).
https://github.com/terraform-providers/terraform-provider-azurerm/issues/3686
https://github.com/terraform-providers/terraform-provider-azurerm/issues/4906
Terraform with azure CLI - error building account
As mentioned in the comments, the issue was not providing the service principal in the provider block. The correct syntax is:
# Configure the Azure Provider
# https://www.terraform.io/docs/providers/azurerm/index.html
provider "azurerm" {
  subscription_id = var.SUBSCRIPTION_ID
  client_id       = var.SP_CLIENT_ID
  client_secret   = var.SP_CLIENT_SECRET
  tenant_id       = var.SP_TENANT_ID
  version         = "=2.0.0" # Can be overridden as you wish
  features {}
}
What is a Service Principal?
An Azure service principal is an identity created for use with
applications, hosted services, and automated tools to access Azure
resources. This access is restricted by the roles assigned to the
service principal, giving you control over which resources can be
accessed and at which level. For security reasons, it's always
recommended to use service principals with automated tools rather than
allowing them to log in with a user identity.
More info here.
With that being said, why should we use a Service Principal with Terraform?
When using a Service Principal you can give limited permissions to specific resources.
A Service Principal is not attached to any user, so multiple users can share it.
You can assign permissions to the app identity that are different from your own permissions.
Azure Provider: Authenticating using a Service Principal with a Client Secret.
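As an aside, rather than hard-coding the credentials in the provider block, the azurerm provider can also pick the service principal up from environment variables, which keeps secrets out of the configuration. A minimal sketch with placeholder values:
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="placeholder-secret"
# the provider reads the ARM_* variables when the arguments are not set in the provider block
terraform plan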
About the AZ CLI login issue:
To be honest, I don't have an answer that I feel confident to share, but my guess is that there is an issue with AZ CLI version 2.3.1.
As you can see, about two weeks ago, when the new version was released, the Azure team fixed an issue related to az login, so I guess this is why things are acting differently now.
If you want to check that, you can downgrade to 2.3.0 and see whether this still happens.
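A hedged sketch of how that downgrade might look on Ubuntu with apt (the exact package suffix depends on your release):
# list the azure-cli versions available from the configured repository
apt-cache madison azure-cli
# example pin for Ubuntu 18.04; adjust the version suffix to your release
sudo apt-get install azure-cli=2.3.0-1~bionic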
As noted in the official documentation for Terraform on how to authenticate using the Azure CLI, it is recommended to authenticate using personal credentials (through the az cli) when running locally.
We recommend using either a Service Principal or Managed Service Identity when running Terraform non-interactively (such as when running Terraform in a CI server) - and authenticating using the Azure CLI when running Terraform locally.
This becomes a little problematic when you would like to run Terraform locally in a Docker container, especially since it seems that the output generated by the az cli has changed (intentionally or not), so that Terraform can no longer use it.
As Amit already noted in the accepted answer, this seems to be due to a change, but I would argue that it occurred earlier, since I have to roll all the way back to 2.2.0 (2.2.0-1~bionic on ubuntu) to have it working again.
I had the same issue running terraform in a docker container through an ssh client. I managed to fix it with:
export LC_ALL=en_US.UTF-8
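To make that stick for every ssh session (or bake it into the container image), one option is to append it to the shell profile, for example:
echo 'export LC_ALL=en_US.UTF-8' >> ~/.bashrc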

Terraform Destroy Error when connected to TFE

I have created a workspace in Terraform Enterprise by running terraform init && terraform plan locally with Terraform Enterprise set up as my backend:
# Using a single workspace:
terraform {
  backend "remote" {
    hostname     = "dep.app.example.io"
    organization = "nnnn"

    workspaces {
      name = "create-workspace"
    }
  }
}
terraform apply works, and I can launch an EC2 instance via Terraform Enterprise with this code:
provider "aws" {
region = "${var.region}"
}
resource "aws_instance" "feature" {
count = 1
ami = "${var.ami}"
availability_zone = "${var.availability_zone}"
instance_type = "${var.instance_type}"
tags = {
Name = "${var.name_tag}"
}
}
Now when I run a terraform destroy, I get this error:
Error: error creating run: Invalid Attribute Infrastructure is
not destroyable
The configured "remote" backend encountered an unexpected
error. Sometimes this is caused by network connection problems,
in which case you could retry the command. If the issue
persists please open a support ticket to get help resolving the
problem.
What am I doing wrong here? I want to be able to run a terraform destroy that destroys the infrastructure my new Terraform Enterprise workspace spins up.
EDIT: LOGS:
2019/04/03 09:11:54 [INFO] Terraform version: 0.11.11 ac4fff416318bf0915a0ab80e062a99ef3724334
2019/04/03 09:11:54 [INFO] Go runtime version: go1.11.1
2019/04/03 09:11:54 [INFO] CLI args: []string{"/usr/local/bin/terraform", "destroy"}
2019/04/03 09:11:54 [DEBUG] Attempting to open CLI config file: /Users/nlegorrec/.terraformrc
2019/04/03 09:11:54 Loading CLI configuration from /Users/nlegorrec/.terraformrc
2019/04/03 09:11:54 [INFO] CLI command args: []string{"destroy"}
2019/04/03 09:11:54 [TRACE] Preserving existing state lineage "f7abdc54-236c-c906-e701-049f3e2cc00c"
2019/04/03 09:11:54 [TRACE] Preserving existing state lineage "f7abdc54-236c-c906-e701-049f3e2cc00c"
2019/04/03 09:11:54 [DEBUG] Service discovery for dep.app.redbull.com at https://dep.app.redbull.com/.well-known/terraform.json
2019/04/03 09:11:56 [DEBUG] Retrieve version constraints for service tfe.v2 and product terraform
2019/04/03 09:11:57 [INFO] command: backend initialized: *remote.Remote
2019/04/03 09:11:57 [DEBUG] checking for provider in "."
2019/04/03 09:11:57 [DEBUG] checking for provider in "/usr/local/bin"
2019/04/03 09:11:57 [DEBUG] checking for provider in ".terraform/plugins/darwin_amd64"
2019/04/03 09:11:57 [DEBUG] found provider "terraform-provider-aws_v2.4.0_x4"
2019/04/03 09:11:57 [DEBUG] found valid plugin: "aws", "2.4.0", "/Users/nlegorrec/dev/Software Engineering/emp-kpi-tracker_web/dep/.terraform/plugins/darwin_amd64/terraform-provider-aws_v2.4.0_x4"
2019/04/03 09:11:57 [DEBUG] checking for provisioner in "."
2019/04/03 09:11:57 [DEBUG] checking for provisioner in "/usr/local/bin"
2019/04/03 09:11:57 [DEBUG] checking for provisioner in ".terraform/plugins/darwin_amd64"
2019/04/03 09:11:57 [INFO] backend/remote: starting Apply operation
2019/04/03 09:12:00 [DEBUG] plugin: waiting for all plugin processes to complete...
Error: error creating run: Invalid Attribute Infrastructure is not destroyable
The configured "remote" backend encountered an unexpected error. Sometimes
this is caused by network connection problems, in which case you could retry
the command. If the issue persists please open a support ticket to get help
resolving the problem.
Even though it's a bit late, hopefully this answer can help others in the future.
When using Terraform Enterprise or Terraform Cloud, you need to ensure that you are following their guidance on destruction and deletion from within the workspace.
Documentation for this is located here
To queue the destruction of infrastructure that is managed by a workspace, you need to ensure that within the workspace's Variables you have assigned a variable named CONFIRM_DESTROY with a value of 1.
Importantly, any changes to the workspace require admin privileges.
Once you have completed that, you should be able to use the CLI workflow as you would locally with Terraform.
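For example (a sketch, assuming CONFIRM_DESTROY=1 has been added to the workspace and you have admin access to it):
# with the "remote" backend this queues a destroy run in the TFE workspace for confirmation
terraform destroy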
