I created this simple example:
terraform {
  required_version = "~> 1.0"

  required_providers {
    tfe = {
      source  = "hashicorp/tfe"
      version = "~> 0.40"
    }
  }

  cloud {
    organization = "myorg"

    workspaces {
      name = "test-management"
    }
  }
}
resource "tfe_workspace" "test" {
organization = "myorg"
name = "test"
}
When I run terraform init, it creates the test-management workspace as expected. The problem is that when I run terraform apply, I get this error:
│ Error: Error creating workspace test for organization myorg: resource not found
│
│ with tfe_workspace.test,
│ on main.tf line 17, in resource "tfe_workspace" "test":
│ 17: resource "tfe_workspace" "test" {
Is it not possible to have Terraform Cloud manage its own resources?
You must define the API token. At first, I thought Terraform Cloud would provide it automatically, but it doesn't.
You can set the token argument in the provider "tfe" {} block or set the TFE_TOKEN environment variable. I recommend the environment variable, or at least marking the token variable as sensitive, so you don't leak the API token.
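A minimal sketch of both options (the variable name tfe_token is just for illustration):

variable "tfe_token" {
  type      = string
  sensitive = true
}

provider "tfe" {
  # Omit token entirely and export TFE_TOKEN in the environment instead, if you prefer.
  token = var.tfe_token
}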
(I'm assuming that the problem is credentials-related due to the other existing answer.)
The hashicorp/tfe provider is (like all providers) a separate plugin program from Terraform itself, but it supports mostly the same methods of searching for credentials as Terraform CLI does.
If you use one of the following options then the credentials will work across both Terraform CLI and the hashicorp/tfe provider at the same time:
Run terraform login to generate a credentials file for the host app.terraform.io (Terraform Cloud's hostname) in your home directory.
Set the environment variable TF_TOKEN_app_terraform_io to an existing Terraform Cloud API token. Terraform CLI and the hashicorp/tfe provider both treat this environment variable as equivalent to a credentials configuration like the one terraform login would create.
(You only need to do one of these things; they both achieve a similar result.)
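For reference, both options are equivalent to a credentials block like this in the CLI configuration (the token value is a placeholder):

credentials "app.terraform.io" {
  token = "xxxxxx.atlasv1.zzzzzzzzzzzzz"
}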
Related
I want to create two different workspaces on Terraform Cloud: one for the DEV environment, the other for the PROD environment.
I am trying to create them just using a single configuration file. The infrastructure will be the same, just in two different Azure subscriptions with different credentials.
Here is the code I am trying:
terraform {
  required_version = ">= 1.1.0"

  required_providers {
    # https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.40.0"
    }
  }

  cloud {
    organization = "mycompany"

    workspaces {
      tags = ["dev", "prod"]
    }
  }
}
I am reading the documentation. It seems that inside the cloud -> workspaces block I can only use either the name or the tags attribute, and at least one of them is required in my configuration.
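If it helps, the two mutually exclusive forms look like this (the workspace name and tag are illustrative):

workspaces {
  name = "mycompany-infrastructure-dev" # select exactly one workspace
}

# or, alternatively:

workspaces {
  tags = ["dev"] # select all workspaces carrying these tags
}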
Now in my Terraform Cloud account, I have two workspaces: one with the tag prod and one with the tag dev.
I set the environment variable:
$Env:TF_WORKSPACE="mycompany-infrastructure-dev"
And I try to initialize Terraform Cloud:
terraform init
But I get this error:
Error: Invalid workspace selection
Terraform failed to find workspace "mycompany-infrastructure-dev" with the tags specified in your configuration: [dev, prod]
How can I create one configuration that I can use with different environments/workspaces?
Thank you
First, I ran code similar to yours in my environment and received an error prompting me to use terraform login to generate a token for accessing the organization on Terraform Cloud.
The login was successful, and the browser generated an API token, which I entered to log in to Terraform Cloud.
Then, in Terraform Cloud -> Organizations, I created a new organization.
Script for creating different workspaces from a single configuration file:
cloud {
  organization = "mycompanyone"

  workspaces {
    tags = ["dev", "prod"]
  }
}
I took your script and made a few changes as seen below.
Terraform will prompt for a few basic inputs while initializing, as shown here.
Now run terraform init or terraform init -upgrade.
Terraform initialized successfully.
I have moved a Terraform configuration from one Git repo to another.
Then I ran terraform init and it completed successfully.
When I run terraform plan, I hit the issue below.
terraform plan
╷
│ Error: Provider configuration not present
│
│ To work with data.aws_acm_certificate.cloudfront_wildcard_product_env_certificate its original provider
│ configuration at provider["registry.terraform.io/hashicorp/aws"].cloudfront-acm-us-east-1 is required, but it
│ has been removed. This occurs when a provider configuration is removed while objects created by that provider
│ still exist in the state. Re-add the provider configuration to destroy
│ data.aws_acm_certificate.cloudfront_wildcard_product_env_certificate, after which you can remove the provider
│ configuration again.
The data source looks like this:
data "aws_acm_certificate" "cloudfront_wildcard_product_env_certificate" {
provider = aws.cloudfront-acm-us-east-1
domain = "*.${var.product}.${var.environment}.xyz.com"
statuses = ["ISSUED"]
}
After further research I found that removing the line below makes it work as expected.
provider = aws.cloudfront-acm-us-east-1
I am not sure of the reason.
It appears that you were using a multi-provider configuration in the former repo, i.e. you were probably using one provider block like
provider "aws" {
region = "some-region"
access_key = "..."
secret_key = "..."
}
and a second like
provider "aws" {
alias = "cloudfront-acm-us-east-1"
region = "us-east-1"
access_key = "..."
secret_key = "..."
}
Such a setup can be used if you need to create or access resources in multiple regions or multiple accounts.
By default, Terraform uses the unaliased provider configuration to create resources (or to look them up, in the case of data sources) if no provider is specified in the resource block or data source block.
With the provider argument in
data "aws_acm_certificate" "cloudfront_wildcard_product_env_certificate" {
provider = aws.cloudfront-acm-us-east-1
domain = "*.${var.product}.${var.environment}.xyz.com"
statuses = ["ISSUED"]
}
you tell Terraform to use a specific provider.
I assume you did not move the second provider config to the new repo, but the data source still tells Terraform to use a specific provider that is no longer there. After you remove the provider argument, Terraform uses the default aws provider.
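That is, after removing the argument the data source simply reads:

data "aws_acm_certificate" "cloudfront_wildcard_product_env_certificate" {
  domain   = "*.${var.product}.${var.environment}.xyz.com"
  statuses = ["ISSUED"]
}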
Further possible reason for this error message
Just for completeness:
The same error message can also appear in a slightly different setting, where you have a multi-provider config with resources created via the second (aliased) provider. If you remove the resource config of these resources from the Terraform config and at the same time remove the specific provider config, then Terraform will not be able to destroy the resources via the specific provider and will show the same error message as in your post.
Taken literally, the error message describes this second setting, but it does not fit your problem description exactly.
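In that second setting, the fix the error message itself suggests is to temporarily re-add the aliased provider configuration, for example:

provider "aws" {
  alias  = "cloudfront-acm-us-east-1"
  region = "us-east-1"
}

then run terraform apply so Terraform can destroy the orphaned objects, and remove the provider configuration again afterwards.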
I'm building a CI/CD pipeline using GitHub Actions and Terraform. I have a main.tf file like below, which I'm calling from a GitHub Action for multiple environments. I'm using https://github.com/hashicorp/setup-terraform to interact with Terraform in GitHub Actions. I have a MyService component and I'm deploying to DEV, UAT and PROD environments. I would like to reuse main.tf for all of the environments and dynamically set the workspace name like so: MyService-DEV, MyService-UAT, MyService-PROD. Usage of variables is not allowed in the terraform/cloud block. I'm using Terraform Cloud to store state.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.0"
    }
  }

  cloud {
    organization = "tf-organization"

    workspaces {
      name = "MyService-${env.envname}" # <== not allowed to use variables
    }
  }
}
Update
I finally managed to get this up and running thanks to the helpful comments. Here are my findings:
TF_WORKSPACE needs to be defined upfront, e.g. TF_WORKSPACE=service-dev
I didn't get tags to work the way I want when running in automation. If I define a tag in cloud.workspaces.tags as 'service', then there is no way to set a second tag like 'dev' dynamically. Both of the tags ['service', 'dev'] are needed during init in order for Terraform to select the workspace service-dev automatically.
I ended up using the tfe provider in order to set up workspaces (with tags) automatically, as sketched below. In the end I still needed to set TF_WORKSPACE=service-dev.
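A minimal sketch of that approach (the organization, workspace name, and tags follow the example above):

resource "tfe_workspace" "service_dev" {
  organization = "tf-organization"
  name         = "service-dev"
  tag_names    = ["service", "dev"]
}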
It doesn't make sense to refer to terraform.workspace as part of the workspaces block inside a cloud block, because that block defines which remote workspaces Terraform will use and therefore dictates the final value of terraform.workspace in the rest of your configuration.
To declare that your Terraform configuration belongs to more than one workspace in Terraform Cloud, you can assign each of those workspaces the tag "MyService" and then use the tags argument instead of the name argument:
cloud {
  organization = "tf-organization"

  workspaces {
    tags = ["MyService"]
  }
}
If you assign that tag to hypothetical MyService-dev and MyService-prod workspaces in Terraform Cloud and then initialize with the configuration above, Terraform will present those two workspaces for selection using the terraform workspace commands when working in this directory.
terraform.workspace will then appear as either MyService-dev or MyService-prod, depending on which one you have selected.
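You can then branch on that value elsewhere in the configuration, for example (a sketch; the local name and resource are illustrative):

locals {
  # Derive a short environment name from the selected workspace.
  environment = terraform.workspace == "MyService-prod" ? "prod" : "dev"
}

resource "azurerm_resource_group" "main" {
  name     = "myservice-${local.environment}"
  location = "westeurope"
}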
I am working on self-development to better understand how I can implement Infrastructure as Code (Terraform) for a Snowflake environment.
I have a GitHub repo with a GitHub Actions workflow configured that does the following:
Sets up Terraform Cloud
Sets up Terraform v1.1.2
Runs terraform fmt -check
Runs terraform validate
Runs terraform plan
Runs terraform apply
The public repo is https://github.com/waynetaylor/sfguide-terraform-sample/blob/main/.github/workflows/actions.yml, which pretty much follows the GitHub Actions for Terraform Cloud steps.
I have configured Terraform Cloud, and the terraform validate step fails with errors about the Snowflake environment variables, whether I run it locally or remotely via Actions. However, if I run terraform plan and apply and exclude terraform validate, it works.
Example error
Error: Missing required argument
│
│ on main.tf line 27, in provider "snowflake":
│ 27: provider "snowflake" {
│
│ The argument "account" is required, but no definition was found.
The snowflake provider documentation suggests that there are three required values: username, account, and region.
Where you configure the provider in your code, you'll need to provide those values.
e.g.
from
provider "snowflake" {
alias = "sys_admin"
role = "SYSADMIN"
}
to
provider "snowflake" {
// required
username = "..."
account = "..."
region = "..."
alias = "sys_admin"
role = "SYSADMIN"
}
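Rather than hardcoding the values, you can pass them in as variables (a sketch; the variable names are illustrative, and the provider documentation also describes SNOWFLAKE_* environment variables as an alternative):

variable "snowflake_username" {
  type      = string
  sensitive = true
}

variable "snowflake_account" {
  type = string
}

variable "snowflake_region" {
  type = string
}

provider "snowflake" {
  username = var.snowflake_username
  account  = var.snowflake_account
  region   = var.snowflake_region

  alias = "sys_admin"
  role  = "SYSADMIN"
}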
I have worked with terraform before, where terraform can place the tfstate files in S3. Does terraform also support azure blob storage as a backend? What would be the commands to set the backend to be azure blob storage?
As of Terraform 0.7 (not yet released at the time of writing, but you can compile it from source), support for Azure blob storage has been added.
The question asks for some commands, so I'm adding a little more detail in case anyone needs it. I'm using Terraform v0.12.24 and azurerm provider v2.6.0. You need two things:
Create a storage account (general purpose v2) and a container for storing your states (see the sketch after this list).
Configure your environment and your main.tf
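For the first step, the storage account and container can themselves be created with Terraform from a separate bootstrap configuration (a sketch; the resource names are illustrative, and the account and container names match the backend example below):

resource "azurerm_resource_group" "tfstate" {
  name     = "tfstate-rg"
  location = "westeurope"
}

resource "azurerm_storage_account" "tfstate" {
  name                     = "abcd1234"
  resource_group_name      = azurerm_resource_group.tfstate.name
  location                 = azurerm_resource_group.tfstate.location
  account_kind             = "StorageV2"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_container" "tfstate" {
  name                  = "tfstatecontainer"
  storage_account_name  = azurerm_storage_account.tfstate.name
  container_access_type = "private"
}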
As for the second point, your terraform block in main.tf should contain an "azurerm" backend:
terraform {
  required_version = "=0.12.24"

  backend "azurerm" {
    storage_account_name = "abcd1234"
    container_name       = "tfstatecontainer"
    key                  = "example.prod.terraform.tfstate"
  }
}

provider "azurerm" {
  version = "=2.6.0"
  features {}

  subscription_id = var.subscription_id
}
Before calling plan or apply, set the ARM_ACCESS_KEY environment variable with a bash export:
export ARM_ACCESS_KEY=<storage access key>
Finally, run the init command:
terraform init
Now, if you run terraform plan, you will see the tfstate created in the container. Azure has a file-locking feature built in, in case anyone tries to update the state file at the same time.