Hashicorp Vault Required Provider Configuration in Terraform - gitlab

My GitLab CI pipeline's Terraform configuration requires a couple of providers to be declared in a required_providers block: "hashicorp/azuread" and "hashicorp/vault". So in my provider.tf file, I have the declaration below:
terraform {
  required_providers {
    azuread = {
      source  = "hashicorp/azuread"
      version = "~> 2.0.0"
    }
    vault = {
      source  = "hashicorp/vault"
      version = "~> 3.0.0"
    }
  }
}
When my GitLab pipeline runs the terraform plan stage, however, it throws the following error:
Error: Invalid provider configuration
Provider "registry.terraform.io/hashicorp/vault" requires explicit configuraton.
Add a provider block to the root module and configure the providers required
arguments as described in the provider documentation.
I realise my required provider block for hashicorp/vault is incomplete/not properly configured, but despite all my efforts to find an example of how it should be configured, I have simply run into a brick wall.
Any help with a very basic example would be greatly appreciated.

It depends on the version of Terraform you are using. However, each provider's page on the Terraform Registry has a Use Provider button (in the top right corner) which explains how to add the required blocks of code to your files.
Each provider also accepts additional configuration arguments; some are optional and some are required.
So based on the error, I think you are missing the second part of the configuration:
provider "vault" {
# Configuration options
}
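For example, a minimal sketch of that block, assuming token-based authentication against a self-hosted Vault server (the address is a placeholder), could look like this:
provider "vault" {
  address = "https://vault.example.com:8200" # placeholder Vault server URL
  # The token can be supplied via the VAULT_TOKEN environment variable
  # instead of being hard-coded here, e.g. token = var.vault_token
}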
There is also an explanation of how to upgrade to version 3.0 of the provider. You might also want to take a look at the HashiCorp Learn examples and the GitHub repo with example code.

Related

Manage multiple workspaces from a single configuration file in terraform

I want to create two different workspaces on Terraform Cloud: One for DEV environment, the other for PROD environment.
I am trying to create them using just a single configuration file. The infrastructure will be the same, just in two different Azure subscriptions with different credentials.
Here the code I am trying:
terraform {
  required_version = ">= 1.1.0"
  required_providers {
    # https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.40.0"
    }
  }
  cloud {
    organization = "mycompany"
    workspaces {
      tags = ["dev", "prod"]
    }
  }
}
I am reading the documentation. It seems that inside the cloud -> workspaces block I can only use either the name or the tags attribute, and at least one of them is required in my configuration.
Now in my Terraform Cloud account, I have two workspaces: one with the tag prod and one with the tag dev.
I set the environment variable:
$Env:TF_WORKSPACE="mycompany-infrastructure-dev"
And I try to initialize Terraform Cloud:
terraform init
But I get this error:
Error: Invalid workspace selection
Terraform failed to find workspace "mycompany-infrastructure-dev" with the tags specified in your configuration: [dev, prod]
How can I create one configuration that I can use with different environments/workspaces?
Thank you
First, I ran code similar to yours in my environment and received an error prompting me to use terraform login to generate a token for accessing the organization on Terraform Cloud.
The login was successful and the browser generated an API token, which I entered to log into Terraform Cloud.
In Terraform Cloud -> Organizations, I created a new organization.
Script for creating different workspaces from a single configuration file:
cloud {
  organization = "mycompanyone"
  workspaces {
    tags = ["dev", "prod"]
  }
}
I took your script and made a few changes.
Now run terraform init or terraform init -upgrade. Terraform will prompt for a few confirmations while initializing.
Terraform initialized successfully.
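Putting it together, the flow was roughly the following (the workspace name is the one from your question, so treat it as an example; TF_WORKSPACE has to name an existing workspace in the organization that carries one of the configured tags):
terraform login                                    # generate an API token for app.terraform.io
$Env:TF_WORKSPACE="mycompany-infrastructure-dev"   # must name a workspace carrying one of the configured tags
terraform init                                     # or: terraform init -upgrade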

How can I use Terraform Cloud to manage itself?

I created this simple example:
terraform {
  required_version = "~> 1.0"
  required_providers {
    tfe = {
      source  = "hashicorp/tfe"
      version = "~> 0.40"
    }
  }
  cloud {
    organization = "myorg"
    workspaces {
      name = "test-management"
    }
  }
}

resource "tfe_workspace" "test" {
  organization = "myorg"
  name         = "test"
}
When I run the init command, it creates the test-management workspace as expected. The problem is that when I run the apply I get this error:
│ Error: Error creating workspace test for organization myorg: resource not found
│
│ with tfe_workspace.test,
│ on main.tf line 17, in resource "tfe_workspace" "test":
│ 17: resource "tfe_workspace" "test" {
Is it impossible to have Terraform Cloud manage its resources?
You must define the API token yourself. At first I thought Terraform Cloud would do this automatically, but it doesn't.
You can set the token argument in the provider "tfe" {} block or use the TFE_TOKEN environment variable. I recommend the environment variable, treated as sensitive, so you don't leak the API token.
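As a sketch, the two options could look like this (tfe_token is an assumed variable name):
# Option 1: pass the token through a sensitive variable
variable "tfe_token" {
  type      = string
  sensitive = true
}

provider "tfe" {
  token = var.tfe_token
}

# Option 2: leave the provider "tfe" {} block empty and export TFE_TOKEN in the environment instead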
(I'm assuming that the problem is credentials-related due to the other existing answer.)
The hashicorp/tfe provider is (like all providers) a separate plugin program from Terraform itself, but it supports mostly the same methods for searching for credentials that Terraform CLI does.
If you use one of the following options then the credentials will work across both Terraform CLI and the hashicorp/tfe provider at the same time:
Run terraform login to generate a credentials file for the host app.terraform.io (Terraform Cloud's hostname) in your home directory.
Set the environment variable TF_TOKEN_app_terraform_io to an existing Terraform Cloud API token. Terraform CLI and the hashicorp/tfe provider both treat this environment variable as equivalent to a credentials configuration like the one terraform login would create.
(You only need to do one of these things; both achieve a similar result.)
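For example (the token value is a placeholder and the commands assume a Unix-like shell):
# Either log in interactively and let Terraform store the credentials file...
terraform login

# ...or export an existing Terraform Cloud API token:
export TF_TOKEN_app_terraform_io="xxxxxx.atlasv1.xxxxxxxxxxxx"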

How to fix Provider configuration not present error in Terraform plan

I have moved a Terraform configuration from one Git repo to another.
Then I ran terraform init and it completed successfully.
When I run terraform plan, I get the issue below.
terraform plan
╷
│ Error: Provider configuration not present
│
│ To work with data.aws_acm_certificate.cloudfront_wildcard_product_env_certificate its original provider
│ configuration at provider["registry.terraform.io/hashicorp/aws"].cloudfront-acm-us-east-1 is required, but it
│ has been removed. This occurs when a provider configuration is removed while objects created by that provider
│ still exist in the state. Re-add the provider configuration to destroy
│ data.aws_acm_certificate.cloudfront_wildcard_product_env_certificate, after which you can remove the provider
│ configuration again.
The data source looks like this:
data "aws_acm_certificate" "cloudfront_wildcard_product_env_certificate" {
provider = aws.cloudfront-acm-us-east-1
domain = "*.${var.product}.${var.environment}.xyz.com"
statuses = ["ISSUED"]
}
After further research I found that removing the line below makes it work as expected:
provider = aws.cloudfront-acm-us-east-1
I am not sure what the reason is.
It appears that you were using a multi-provider configuration in the former repo. I.e. you were probably using one provider block like
provider "aws" {
region = "some-region"
access_key = "..."
secret_key = "..."
}
and a second like
provider "aws" {
alias = "cloudfront-acm-us-east-1"
region = "us-east-1"
access_key = "..."
secret_key = "..."
}
Such a setup can be used if you need to create or access resources in multiple regions or multiple accounts.
If there is no provider specified in a resource or data source block, Terraform uses the default (non-aliased) provider configuration to create resources (or to look them up, in the case of data sources).
With the provider argument in
data "aws_acm_certificate" "cloudfront_wildcard_product_env_certificate" {
provider = aws.cloudfront-acm-us-east-1
domain = "*.${var.product}.${var.environment}.xyz.com"
statuses = ["ISSUED"]
}
you tell Terraform to use a specific provider.
I assume you did not move the second provider config to the new repo, but the data source still tells Terraform to use a specific provider configuration that is no longer there. By removing the provider argument, Terraform falls back to the default aws provider.
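If you want to keep the explicit provider reference instead, you would re-add the aliased configuration to the new repo, roughly like this (credentials omitted):
provider "aws" {
  alias  = "cloudfront-acm-us-east-1"
  region = "us-east-1"
  # credentials as in the original setup
}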
Another possible reason for this error message
Just for completeness:
The same error message can also appear in a slightly different setting, where you have a multi-provider config with resources created via the second (aliased) provider. If you remove the resource config for those resources and at the same time remove the specific provider config, Terraform will not be able to destroy the resources via that provider and will show an error message like the one in your post.
Strictly speaking, the error message describes this second setting, but it does not exactly fit your problem description.

Why does terraform solve optional attribute value drift in some cases and not others?

Take 2 cases.
The AWS terraform provider
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.14.0"
    }
  }
}

provider "aws" {
  region     = "us-east-1"
  access_key = "<insert here>"
  secret_key = "<insert here>"
}

resource "aws_instance" "my_ec2" {
  ami           = "ami-0022f774911c1d690"
  instance_type = "t2.micro"
}
After running terraform apply on the above, if one manually creates a custom security group and assigns it to the EC2 instance created above, a subsequent terraform apply will update the terraform.tfstate file to include the custom security group in its security group section. However, it will NOT put back the previous "default" security group.
This is what I would expect as well, since my tf code did not explicitly want to anchor down a certain security group.
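(For illustration, explicitly anchoring the security group would have meant something like the following, with a hypothetical aws_security_group.custom resource:)
resource "aws_instance" "my_ec2" {
  ami           = "ami-0022f774911c1d690"
  instance_type = "t2.micro"
  # Hypothetical: pinning the instance to a specific security group
  # would make Terraform revert out-of-band changes to it.
  vpc_security_group_ids = [aws_security_group.custom.id]
}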
The github terraform provider
terraform {
  required_providers {
    github = {
      source  = "integrations/github"
      version = "~> 4.0"
    }
  }
}

# Configure the GitHub Provider
provider "github" {
  token = var.github-token
}

resource "github_repository" "example" {
  name       = "tfstate-demo-1"
  //description = "My awesome codebase" <------ NOTE THAT WE DO NOT SPECIFY A DESCRIPTION
  auto_init  = true
  visibility = "public"
}
In this case, the repo is created without a description. Thereafter, if one updates the description via github.com and re-runs terraform apply on the above code, terraform will
a) put the new description into its tf state file during the refresh stage and
b) remove the description from the terraform.tfstate file as well as the repo in github.com.
A message on the terraform command line does allude to this confusing behavior:
Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following plan may include actions to undo or respond to these changes.
May include? Why the ambiguity?
And why does Terraform enforce the blank description in this case when I have not specified anything about the optional description field in my code? And why does this behavior vary across providers? Shouldn't optional arguments be left alone, the way the AWS provider does not undo a custom security group attached to an EC2 instance outside of Terraform? What is the design thinking behind this?
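(For reference, the ignore_changes mechanism that message mentions would presumably look something like this for the repository above:)
resource "github_repository" "example" {
  name       = "tfstate-demo-1"
  auto_init  = true
  visibility = "public"

  # Sketch: tell Terraform to leave the externally managed description alone
  lifecycle {
    ignore_changes = [description]
  }
}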

Using databricks workspace in the same configuration as the databricks provider

I'm having some trouble getting the azurerm & databricks providers to work together.
With the azurerm provider, I set up my workspace:
resource "azurerm_databricks_workspace" "ws" {
name = var.workspace_name
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
sku = "premium"
managed_resource_group_name = "${azurerm_resource_group.rg.name}-mng-rg"
custom_parameters {
virtual_network_id = data.azurerm_virtual_network.vnet.id
public_subnet_name = var.public_subnet
private_subnet_name = var.private_subnet
}
}
No matter how I structure this, I can't seem to get azurerm_databricks_workspace.ws.id to work in the provider statement for databricks in the same configuration. If it did work, the above workspace would be defined in the same configuration and I'd have a provider statement that looks like this:
provider "databricks" {
azure_workspace_resource_id = azurerm_databricks_workspace.ws.id
}
It fails with an error indicating that authentication is not configured for the provider.
I have my ARM_* environment variables set to identify as a Service Principal with Contributor on the subscription.
I've tried it in the same configuration and in a module, consuming outputs. The only way I can get it to work is by running one configuration for the workspace and a second configuration to consume the workspace.
This is super suboptimal in that I have a fair amount of repeating values across those configurations and it would be ideal just to have one.
Has anyone been able to do this?
Thank you :)
I've had the exact same issue with the databricks provider not working because I was working with modules. I separated the Databricks infra (Azure) from the Databricks application (databricks provider).
In my Databricks module I added the following code at the top; otherwise it would use my Azure setup:
terraform {
  required_providers {
    databricks = {
      source  = "databrickslabs/databricks"
      version = "0.3.1"
    }
  }
}
In my normal provider setup I have the following settings for databricks:
provider "databricks" {
azure_workspace_resource_id = module.databricks_infra.databricks_workspace_id
azure_client_id = var.ARM_CLIENT_ID
azure_client_secret = var.ARM_CLIENT_SECRET
azure_tenant_id = var.ARM_TENANT_ID
}
And of course I have the azurerm one as well. Let me know if it worked :)
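For this to work, the infra module needs to expose the workspace id as an output; a minimal sketch, assuming the azurerm_databricks_workspace.ws resource from the question lives inside that module:
# Inside the Databricks infra module (sketch)
output "databricks_workspace_id" {
  value = azurerm_databricks_workspace.ws.id
}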
If you experience technical difficulties with rolling out resources in this example, please make sure that environment variables don't conflict with other provider block attributes. When in doubt, please run TF_LOG=DEBUG terraform apply to enable debug mode through the TF_LOG environment variable. Look specifically for Explicit and implicit attributes lines, which should indicate the authentication attributes used. The other common reason for technical difficulties is a missing alias attribute in provider "databricks" {} blocks or a missing provider attribute in resource "databricks_..." {} blocks. Please make sure to read the alias: Multiple Provider Configurations documentation article.
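As a sketch of that alias/provider pairing (the alias and resource names here are illustrative, not from the question):
# Aliased provider configuration
provider "databricks" {
  alias                       = "workspace"
  azure_workspace_resource_id = azurerm_databricks_workspace.ws.id
}

# Resource that explicitly selects the aliased provider
resource "databricks_token" "pat" {
  provider = databricks.workspace
  comment  = "Terraform-managed token"
}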
From the error message, it looks like authentication is not configured for the provider. Could you please configure it through one of the options mentioned above?
For more details, refer to Databricks provider - Authentication.
For passing the custom_parameters, you may check out the SO thread addressing a similar issue.
In case you need more help on this issue, I would suggest opening an issue here: https://github.com/terraform-providers/terraform-provider-azurerm/issues
