I am working on self-development to better understand how I can implement Infrastructure as Code (Terraform) for a Snowflake environment.
I have a GitHub repo with a GitHub Actions workflow configured that does the following:
Sets up Terraform Cloud alongside the following
Sets up Terraform v1.1.2
Runs terraform fmt -check
terraform validate
terraform plan
terraform apply
Public repo: https://github.com/waynetaylor/sfguide-terraform-sample/blob/main/.github/workflows/actions.yml which pretty much follows the GitHub Actions for Terraform Cloud steps.
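For reference, those steps boil down to roughly this sequence of Terraform CLI commands (a sketch only; the exact workflow is in the linked actions.yml):
# sketch of the commands the workflow runs
terraform init
terraform fmt -check
terraform validate
terraform plan
terraform apply -auto-approve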
I have configured Terraform Cloud, and the terraform validate step fails because of the Snowflake environment variables, whether I run it locally or remotely via Actions. However, if I run terraform plan and terraform apply and leave out terraform validate, it works.
Example error:
Error: Missing required argument
│
│ on main.tf line 27, in provider "snowflake":
│ 27: provider "snowflake" {
│
│ The argument "account" is required, but no definition was found.
The snowflake provider documentation suggests that there are three required values: username, account, and region.
Where you call your provider in your code you'll need to provide those values.
e.g.
from
provider "snowflake" {
alias = "sys_admin"
role = "SYSADMIN"
}
to
provider "snowflake" {
// required
username = "..."
account = "..."
region = "..."
alias = "sys_admin"
role = "SYSADMIN"
}
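Alternatively, rather than hard-coding them, the provider used in that quickstart can typically pick these values up from environment variables; note that for remote runs they have to be set as environment variables on the Terraform Cloud workspace, not just on your machine. A sketch, assuming the conventional variable names used by the Snowflake provider (all values are placeholders):
# environment variables read by the Snowflake provider (placeholder values)
export SNOWFLAKE_USER="tf-snow"
export SNOWFLAKE_ACCOUNT="xy12345"
export SNOWFLAKE_REGION="us-east-1"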
Related
I want to create two different workspaces on Terraform Cloud: one for the DEV environment, the other for the PROD environment.
I am trying to create them using just a single configuration file. The infrastructure will be the same, just in two different Azure subscriptions with different credentials.
Here is the code I am trying:
terraform {
  required_version = ">= 1.1.0"

  required_providers {
    # https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.40.0"
    }
  }

  cloud {
    organization = "mycompany"
    workspaces {
      tags = ["dev", "prod"]
    }
  }
}
I am reading the documentation. It seems that inside the cloud -> workspaces block I can only use either the name or the tags attribute, and at least one of them is required in my configuration.
Now in my Terraform Cloud account I have two workspaces: one with the tag prod and one with the tag dev.
I set the environment variable:
$Env:TF_WORKSPACE="mycompany-infrastructure-dev"
And I try to initialize Terraform Cloud:
terraform init
But I get this error:
Error: Invalid workspace selection
Terraform failed to find workspace "mycompany-infrastructure-dev" with the tags specified in your configuration:
[dev, prod]
How can I create one configuration that I can use with different environment/workspaces?
Thank you
First, I ran code similar to yours in my environment and received an error prompting me to use terraform login to generate a token for accessing the organization on Terraform Cloud.
The login was successful, and the browser generated an API token.
The token was received and entered, and I was logged into Terraform Cloud.
In Terraform Cloud -> Organizations, I created a new organization.
Script for creating different workspaces from a single configuration file:
cloud {
  organization = "mycompanyone"
  workspaces {
    tags = ["dev", "prod"]
  }
}
I took your script and made a few changes.
Terraform prompts for a few basic details while initializing.
Now run terraform init or terraform init -upgrade.
Terraform initialized successfully.
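As for the original error: with tags = ["dev", "prod"], Terraform Cloud only selects workspaces that carry all of the listed tags, so a workspace tagged only dev (or only prod) will not be found. One way around this is to give both workspaces a single shared tag and then pick the workspace with TF_WORKSPACE; a sketch (the shared tag name "app" is illustrative):
cloud {
  organization = "mycompanyone"
  workspaces {
    # a tag carried by both the dev and the prod workspace
    tags = ["app"]
  }
}
Then select the workspace before init:
$Env:TF_WORKSPACE="mycompany-infrastructure-dev"
terraform init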
I created this simple example:
terraform {
  required_version = "~> 1.0"

  required_providers {
    tfe = {
      source  = "hashicorp/tfe"
      version = "~> 0.40"
    }
  }

  cloud {
    organization = "myorg"
    workspaces {
      name = "test-management"
    }
  }
}

resource "tfe_workspace" "test" {
  organization = "myorg"
  name         = "test"
}
When I run the init command, it creates the test-management workspace as expected. The problem is that when I run the apply I get this error:
│ Error: Error creating workspace test for organization myorg: resource not found
│
│ with tfe_workspace.test,
│ on main.tf line 17, in resource "tfe_workspace" "test":
│ 17: resource "tfe_workspace" "test" {
Is it impossible to have Terraform Cloud manage its resources?
You must define the API token. At first I thought Terraform Cloud would do this automatically, but it doesn't.
You can set the token argument in the provider "tfe" {} block or use the TFE_TOKEN environment variable. I recommend the environment variable, marked as sensitive, so you don't leak the API token.
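For example, a minimal sketch with a sensitive variable (the variable name is illustrative):
variable "tfe_token" {
  type      = string
  sensitive = true
}

provider "tfe" {
  # alternatively, omit this argument and set the TFE_TOKEN environment variable
  token = var.tfe_token
}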
(I'm assuming that the problem is credentials-related due to the other existing answer.)
The hashicorp/tfe provider is (like all providers) a separate plugin program from Terraform itself, but it supports mostly the same methods for searching for credentials that Terraform CLI does.
If you use one of the following options then the credentials will work across both Terraform CLI and the hashicorp/tfe provider at the same time:
Run terraform login to generate a credentials file for the host app.terraform.io (Terraform Cloud's hostname) in your home directory.
Set the environment variable TF_TOKEN_app_terraform_io to an existing Terraform Cloud API token. Terraform CLI and the hashicorp/tfe provider both treat this environment variable as equivalent to a credentials configuration like the one terraform login would create.
(You only need to do one of these things; they both achieve a similar result.)
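For instance, either of these (the token value is a placeholder):
# option 1: interactive login; writes a credentials file for app.terraform.io
terraform login
# option 2: environment variable recognized by Terraform CLI and the tfe provider
export TF_TOKEN_app_terraform_io="xxxxxxxx.atlasv1.xxxxxxxx"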
I was following the tutorial in this video: https://www.youtube.com/watch?v=Ff0DoAmpv6w&t=5905s (Azure DevOps: Provision API Infrastructure using Terraform).
This is his GitHub code, and mine was very similar: https://github.com/binarythistle/S03E03---Azure-Devops-and-Terraform
The problem: when the resource group doesn't exist on Azure (say I deleted it manually), running the pipeline creates it as well as my container instance. But when I execute the pipeline again, for example after I commit a code change and push to GitHub, it shows
azurerm_resource_group.rg: Creating...
╷
│ Error: A resource with the ID "/subscriptions/xxxxx/resourceGroups/xxx" already exists
│
│ with azurerm_resource_group.rg,
│ on main.tf line 30, in resource "azurerm_resource_group" "rg":
│ 30: resource "azurerm_resource_group" "rg" {
Shouldn't it remember that a resource HAS been created before and skip this step - or perform some other action?
My observations
On the first run, the log shows there were some extra steps executed:
azurerm_resource_group.rg: Refreshing state... [id=/subscriptions/xxxxx/resourceGroups/xxx]
Note: Objects have changed outside of Terraform
Terraform detected the following changes made outside of Terraform since the
last "terraform apply" which may have affected this plan:
  # azurerm_resource_group.rg has been deleted
  - resource "azurerm_resource_group" "rg" {
        id       = "/subscriptions/xxxxx/resourceGroups/xxx"
      - location = "australiaeast" -> null
      - name     = "myTFResourceGroup" -> null
    }
Unless you have made equivalent changes to your configuration, or ignored the
relevant attributes using ignore_changes, the following plan may include
actions to undo or respond to these changes.
The second time the pipeline was run, the log shows:
Successfully configured the backend "azurerm"! Terraform will automatically
use this backend unless the backend configuration changes.
but then the error mentioned above was shown.
Try deleting the local .terraform folder to clear the cache, then run terraform init again and retry the pipeline.
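A rough sketch of those steps, assuming a Unix-like shell on the build agent:
# drop the locally cached providers and backend settings
rm -rf .terraform
# re-initialize against the azurerm backend
terraform init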
I have moved my Terraform configuration from one Git repo to another.
Then I ran terraform init and it completed successfully.
When I run terraform plan, I get the issue below.
terraform plan
╷
│ Error: Provider configuration not present
│
│ To work with data.aws_acm_certificate.cloudfront_wildcard_product_env_certificate its original provider
│ configuration at provider["registry.terraform.io/hashicorp/aws"].cloudfront-acm-us-east-1 is required, but it
│ has been removed. This occurs when a provider configuration is removed while objects created by that provider
│ still exist in the state. Re-add the provider configuration to destroy
│ data.aws_acm_certificate.cloudfront_wildcard_product_env_certificate, after which you can remove the provider
│ configuration again.
The data resource looks like this,
data "aws_acm_certificate" "cloudfront_wildcard_product_env_certificate" {
provider = aws.cloudfront-acm-us-east-1
domain = "*.${var.product}.${var.environment}.xyz.com"
statuses = ["ISSUED"]
}
After further research I found that removing the line below makes it work as expected.
provider = aws.cloudfront-acm-us-east-1
I'm not sure what the reason is.
It appears that you were using a multi-provider configuration in the former repo. I.e. you were probably using one provider block like
provider "aws" {
region = "some-region"
access_key = "..."
secret_key = "..."
}
and a second like
provider "aws" {
alias = "cloudfront-acm-us-east-1"
region = "us-east-1"
access_key = "..."
secret_key = "..."
}
Such a setup can be used if you need to create or access resources in multiple regions or multiple accounts.
Terraform will use the default (unaliased) provider configuration to create resources (or to read them, in the case of data sources) if no provider is specified in the resource block or data source block.
With the provider argument in
data "aws_acm_certificate" "cloudfront_wildcard_product_env_certificate" {
provider = aws.cloudfront-acm-us-east-1
domain = "*.${var.product}.${var.environment}.xyz.com"
statuses = ["ISSUED"]
}
you tell Terraform to use a specific provider.
I assume you did not move the second provider config to the new repo, but you still tell Terraform to use a specific provider which is not there. By removing the provider argument, Terraform will use the default provider for aws.
Further possible reason for this error message
Just for completeness:
The same error message can also appear in a slightly different setting: you have a multi-provider config with resources created via the second provider, and you remove both the configuration of those resources and the specific provider configuration at the same time. Terraform is then unable to destroy those resources via the specific provider and shows an error message like the one in your post.
Strictly speaking, the error message describes this second setting, but it does not quite fit your problem description.
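In that second setting the error's own suggestion applies: temporarily re-add the aliased provider configuration so Terraform can dispose of the objects still tracked in state, and remove it again afterwards. A minimal sketch, with the region inferred from the alias name:
provider "aws" {
  # re-added only so Terraform can clean up state entries that reference it
  alias  = "cloudfront-acm-us-east-1"
  region = "us-east-1"
}
Once an apply (or destroy) has removed those objects from state, this block can be deleted for good.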
Using Terraform, I'm creating a Kubernetes cluster, installing the nginx-ingress-controller Helm chart, and then adding a Route53 hosted zone for my domain (including a wildcard record pointing to the load balancer created by the ingress Helm chart).
To do this I use two separate Terraform files and my process should be as follows -
Use Terraform file 1 to apply a VPC, EKS cluster and node group.
Use the Helm CLI to install the nginx-ingress-controller chart (there is an additional requirement not related to this issue that means the Helm chart cannot be installed by Terraform).
Import the namespace to which the nginx-ingress-controller chart was deployed into the state for Terraform file 2 (a sketch of the import command follows this list)
Use Terraform file 2 to apply the Route53 hosted zone and record required for ingress.
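The namespace import in the third step would look roughly like this, assuming Terraform file 2 declares a kubernetes_namespace resource named ingress_nginx for the chart's namespace (both names are illustrative):
# kubernetes_namespace is imported by namespace name
terraform import kubernetes_namespace.ingress_nginx ingress-nginx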
I thought this was going to work, but the Terraform import command has a severe limitation -
The only limitation Terraform has when reading the configuration files is that the import provider configurations must not depend on non-variable inputs. For example, a provider configuration cannot depend on a data source.
Because I'm using a Kubernetes provider that relies on data sources, I'm falling foul of this limitation.
data "aws_eks_cluster" "cluster" {
name = var.cluster.name
}
data "aws_eks_cluster_auth" "cluster" {
name = var.cluster.name
}
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
}
Taking the following into account, is there a way of using Terraform that would work?
I need to output the value of the NS record for the Route53 hosted zone created by the apply of Terraform file 2. Because of this, I can't include those resources in Terraform file 1 as the apply would fail if an output exists for a module/resource that won't yet exist.
The namespace must be imported so that it is destroyed when destroy is run for Terraform file 2. If it is not, when destroy is run for Terraform file 1 it will fail because the VPC won't be able to delete due to the network interface and security group created by the nginx-ingress-controller Helm chart.
The token provided by the aws_eks_cluster_auth data source only lasts for 15 minutes (the aws-iam-authenticator cannot provide a longer token), so it is inappropriate to output the token from Terraform file 1 because it's likely to have expired by the time it is used by Terraform file 2.
Update
I've tried to use an exec-based credential plugin as that means no data source would be required, but this causes Terraform to fail right out of the gate. It seems that in this case Terraform tries to create a config file before module.kubernetes-cluster has been created, so the cluster doesn't exist.
This provider configuration -
provider "kubernetes" {
host = module.kubernetes-cluster.endpoint
insecure = true
exec {
api_version = "client.authentication.k8s.io/v1alpha1"
args = ["eks", "get-token", "--region", var.cluster.region, "--cluster-name", var.cluster.name]
command = "aws"
}
}
Produces this error -
╷
│ Error: Provider configuration: cannot load Kubernetes client config
│
│ with provider["registry.terraform.io/hashicorp/kubernetes"],
│ on main.tf line 73, in provider "kubernetes":
│ 73: provider "kubernetes" {
│
│ invalid configuration: default cluster has no server defined
╵
I got this error when updating my google_container_cluster in GKE with a new network configuration. To resolve it, I had to revert to the older network configuration, remove the provider "kubernetes" block, add the new network configuration, and then add the provider "kubernetes" block back.