Terraform reports error "Failed to query available provider packages"

I have created a main.tf file as below for the MongoDB Atlas Terraform module:
resource "mongodbatlas_teams" "test" {
org_id = null
name = "MVPAdmin_Team"
usernames = ["user1#email.com", "user2#email.com", "user3#email.com"]
}
resource "mongodbatlas_project" "test" {
name = "MVP_Project"
org_id = null
teams {
team_id = null
role_names = ["GROUP_CLUSTER_MANAGER"]
}
}
resource "mongodbatlas_project_ip_access_list" "test" {
project_id = null
ip_address = null
comment = "IP address for MVP Dev cluster testing"
}
resource "mongodbatlas_cluster" "test" {
name = "MVP_DevCluster"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
cluster_type = REPLICASET
state_name = var.state_name
replication specs {
num_shards= var.num_shards
region_config {
region_name = "AU-EA"
electable_nodes = var.electable_nodes
priority = var.priority
read_only_nodes = var.read_only_nodes
}
}
provider_backup_enabled = var.provider_backup_enabled
auto_scaling_disk_gb_enabled = var.auto_scaling_disk_gb_enabled
mongo_db_major_version = var.mongo_db_major_version
provider_name = "Azure"
provider_disk_type_name = var.provider_disk_type_name
provider_instance_size_name = var.provider_instance_size_name
mongodbatlas_database_user {
username = var.username
password = var.password
auth_database_name = var.auth_database_name
role_name = var.role_name
database_name = var.database_name
}
mongodbatlas_database_snapshot_backup_policy {
policy_item = var.policy_item
frequency_type = var.frequency_type
retention_value = var.retention_value
}
advanced_configuration {
minimum_enabled_tls_protocol = var.minimum_enabled_tls_protocol
no_table_scan = var.no_table_scan
connection_string = var.connection_string
}
}
However, terraform init reports the following:
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/mongodbatlas...
Error: Failed to query available provider packages
Could not retrieve the list of available versions for provider
hashicorp/mongodbatlas: provider registry registry.terraform.io does not have
a provider named registry.terraform.io/hashicorp/mongodbatlas
If you have just upgraded directly from Terraform v0.12 to Terraform v0.14
then please upgrade to Terraform v0.13 first and follow the upgrade guide for
that release, which might help you address this problem.
Did you intend to use mongodb/mongodbatlas? If so, you must specify that
source address in each module which requires that provider. To see which
modules are currently depending on hashicorp/mongodbatlas, run the following
command:
terraform providers
Any idea as to what is going wrong?

The error message explains the most likely reason for this error: you've upgraded directly from Terraform v0.12 to Terraform v0.14 without running through the Terraform v0.13 upgrade steps.
If you upgrade to Terraform v0.13 first and follow those instructions, the upgrade tool should be able to give more specific instructions on what to change here, and may even be able to upgrade your configuration automatically.
Alternatively, you can manually add the configuration block that the v0.13 upgrade tool would've inserted, declaring that you intend to use the mongodb/mongodbatlas provider as "mongodbatlas" in this module:
terraform {
  required_providers {
    mongodbatlas = {
      source = "mongodb/mongodbatlas"
    }
  }
}
There are some other considerations in the v0.13 upgrade guide that the above doesn't address, so you may still need to perform the steps described in that upgrade guide if you see different error messages after trying what I showed above.
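If you also want to control which release of the provider gets installed, here is a minimal sketch of the same block with a version constraint added; the constraint value is purely illustrative and not something from the original question:
terraform {
  required_providers {
    mongodbatlas = {
      source  = "mongodb/mongodbatlas"
      # Illustrative constraint only; choose the range you actually want.
      version = "~> 1.0"
    }
  }
}
After adding or changing this block, re-run terraform init so Terraform resolves the provider from the mongodb namespace instead of hashicorp.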

Related

3.13.0 New Relic Provider Crashing on Terraform

I am running into an issue with a Terraform provider: the New Relic plugin keeps crashing and I don't know why. I'm trying to build a simple alerting configuration in Terraform that creates an alert policy plus conditions in the New Relic UI. Here is the code I'm trying to run:
terraform {
  required_version = "~> 1.3.7"

  required_providers {
    newrelic = {
      source  = "newrelic/newrelic"
      version = "~> 3.13"
    }
  }
}

locals {
  splitList    = [for url in var.urlList : split(".", url)[1]]
  finishedList = [for split in local.splitList : join("-", [split, "Cert Check"])]
}

resource "newrelic_alert_policy" "certChecks" {
  name                = "SSL Cert Check Expirations"
  incident_preference = "PER_POLICY"
}

resource "newrelic_alert_channel" "SSL_Alert" {
  name = "SSL Expiration Alert"
  type = "email"

  config {
    recipients              = "foo.com"
    include_json_attachment = "true"
  }
}

resource "newrelic_synthetics_alert_condition" "foo" {
  policy_id  = newrelic_alert_policy.certChecks.id
  count      = length(var.urlList)
  name       = "SSL Expiration"
  monitor_id = local.finishedList[count.index]
}

resource "newrelic_synthetics_cert_check_monitor" "monitor" {
  count                  = length(var.urlList)
  name                   = local.finishedList[count.index]
  domain                 = var.urlList[count.index]
  locations_public       = ["US_EAST_1"]
  certificate_expiration = "350"
  period                 = "EVERY_DAY"
  status                 = "ENABLED"
}
It plans but won't apply; it errors out right before applying. Here is my error message:
Any help would be useful, thank you!
Honestly, not much has been tried. I looked for more information on the Terraform community forum, but that search pulled up no results. The only suggestion I found was changing the location the test runs from, but I was already in the location I needed.
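For reference, here is a small sketch of what those locals evaluate to, assuming a hypothetical var.urlList; note that finishedList ends up holding monitor name strings built from the second label of each domain:
# Hypothetical input, for illustration only.
variable "urlList" {
  type    = list(string)
  default = ["www.foo.com", "api.bar.org"]
}

locals {
  # split(".", "www.foo.com")[1]     => "foo"
  splitList = [for url in var.urlList : split(".", url)[1]]

  # join("-", ["foo", "Cert Check"]) => "foo-Cert Check"
  finishedList = [for s in local.splitList : join("-", [s, "Cert Check"])]
}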

Terraform does not remember several ElasticBeanstalk settings

I have an environment created with the resource aws_elastic_beanstalk_environment. Unfortunately, on every plan and apply Terraform shows me that several settings have to be added, including the VPCId.
I got the settings using the AWS CLI describe-configuration-settings and they match what I specified, but Terraform says the settings need to be re-added each time.
I have tried both this statement
setting {
  name      = "VPCId"
  namespace = "aws:ec2:vpc"
  value     = var.vpc_id
  resource  = "AWSEBSecurityGroup"
}
and this one.
setting {
  name      = "VPCId"
  namespace = "aws:ec2:vpc"
  value     = var.vpc_id
  resource  = ""
}
Unfortunately, without success. Does anyone have an idea?
I am using Terraform version 0.14.11 and the AWS provider version 3.74.3.
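For context, here is a minimal sketch of how such a setting block sits inside the aws_elastic_beanstalk_environment resource; the resource name, application, and solution stack below are placeholders, not values from the question:
resource "aws_elastic_beanstalk_environment" "example" {
  # Placeholder names and stack, for illustration only.
  name                = "example-env"
  application         = "example-app"
  solution_stack_name = "64bit Amazon Linux 2 v3.4.9 running Docker"

  setting {
    namespace = "aws:ec2:vpc"
    name      = "VPCId"
    value     = var.vpc_id
    # The question tried both resource = "AWSEBSecurityGroup" and resource = "".
    resource  = ""
  }
}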

Azure DNS - Terraform - Ignore TXT Value

I have some Terraform code which works, but I want to be able to ignore the DNS TXT record value, as this is updated externally by another tool (acme.sh). I have tried multiple different approaches in HCL to ignore the value; the Terraform HCL does not fail, it just sets the value back to the original.
Any help would be appreciated.
resource "azurerm_resource_group" "mydomain-co-uk-dns" {
name = "mydomain.co.uk-dns"
location = "North Europe"
}
resource "azurerm_dns_zone" "mydomaindns" {
name = "mydomain.co.uk"
resource_group_name = azurerm_resource_group.mydomain-co-uk.name
}
resource "azurerm_dns_txt_record" "_acme-challenge-api" {
name = "_acme-challenge.api"
zone_name = azurerm_dns_zone.mydomaindns.name
resource_group_name = azurerm_resource_group.mydomain-co-uk-dns.name
ttl = 300
record {
value = "randomkey-that-changes externally"
}
tags = {
Environment = "acmesh"
}
lifecycle {
ignore_changes = [
record
]
}
}
Thanks
I tried testing with the same code you provided and was able to deploy the resources successfully. I then manually changed the value of the record in the portal and applied the Terraform code again; it didn't make any changes, it just updated the previous record's value in the Terraform state file to the newer value from the portal.
Note: I used Terraform v1.0.5 on windows_amd64 + provider registry.terraform.io/hashicorp/azurerm v2.83.0.
As confirmed by @Lain, the issue was resolved after upgrading azurerm from 2.70.0 to the latest version.
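If you want the upgrade to stick, a minimal sketch of pinning the provider is shown below; the constraint is illustrative, using 2.83.0 only because that is the version the test above was run with:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      # Illustrative constraint; 2.83.0 is simply the version tested above.
      version = ">= 2.83.0"
    }
  }
}

provider "azurerm" {
  # Required (even if empty) for azurerm 2.x and later.
  features {}
}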

While creating Azure App service via terraform throwing an error An argument named "zone_redundant" is not expected here

I'm trying to create a zone-redundant Azure App Service for high availability, but terraform validate throws the error: An argument named "zone_redundant" is not expected here.
My configuration looks like the one below:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.46.0"
    }
  }
}

resource "azurerm_app_service_plan" "example" {
  name                = "app-demo"
  location            = "Australia East"
  resource_group_name = "rg-app-service"
  kind                = "Linux"
  reserved            = true
  zone_redundant      = true

  sku {
    tier     = "PremiumV2"
    size     = "P1v2"
    capacity = "3"
  }
}
I'm not sure what I'm missing here. Can anyone please advise me on this?
Reference
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/app_service_plan#zone_redundant
You are using Terraform azurerm provider version 2.46.0.
The zone_redundant option in the azurerm_app_service_plan resource was added in azurerm provider version 2.74.0, which is why you are getting the error "An argument named "zone_redundant" is not expected here."
Please update the Terraform azurerm provider version in your code:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.74.0"
    }
  }
}
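If Terraform has already recorded 2.46.0 in its dependency lock file, bumping the constraint alone may not be enough; re-initializing with the -upgrade flag tells Terraform to pick up the newer release, for example:
terraform init -upgrade
terraform validate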

Switch terraform 0.12.6 to 0.13.0 gives me provider["registry.terraform.io/-/null"] is required, but it has been removed

I manage state in remote terraform-cloud
I have downloaded and installed the latest terraform 0.13 CLI
Then I removed the .terraform directory.
Then I ran terraform init and got no error.
Then I ran:
➜ terraform apply -var-file env.auto.tfvars
Error: Provider configuration not present
To work with
module.kubernetes.module.eks-cluster.data.null_data_source.node_groups[0] its
original provider configuration at provider["registry.terraform.io/-/null"] is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.kubernetes.module.eks-cluster.data.null_data_source.node_groups[0],
after which you can remove the provider configuration again.
Releasing state lock. This may take a few moments...
This is the content of module/kubernetes/main.tf:
####################################################################################
# EKS CLUSTER                                                                      #
#                                                                                  #
# This module contains configuration for EKS cluster running various applications #
####################################################################################

module "eks_label" {
  source      = "git::https://github.com/cloudposse/terraform-null-label.git?ref=master"
  namespace   = var.project
  environment = var.environment
  attributes  = [var.component]
  name        = "eks"
}

#
# Local computed variables
#
locals {
  names = {
    secretmanage_policy = "secretmanager-${var.environment}-policy"
  }
}

data "aws_eks_cluster" "cluster" {
  name = module.eks-cluster.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks-cluster.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
  version                = "~> 1.9"
}

module "eks-cluster" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = module.eks_label.id
  cluster_version = var.cluster_version
  subnets         = var.subnets
  vpc_id          = var.vpc_id

  worker_groups = [
    {
      instance_type = var.cluster_node_type
      asg_max_size  = var.cluster_node_count
    }
  ]

  tags = var.tags
}

# Grant secretmanager access to all pods inside the kubernetes cluster
# TODO:
# Adjust implementation so that the policy is template based and we only allow
# kubernetes access to a single key based on the environment.
# We should export the key from modules/secrets and then grant only specific ARN access
# so that only the production cluster is able to read production secrets, but not dev or staging.
# https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_identity-based-policies.html#permissions_grant-get-secret-value-to-one-secret
resource "aws_iam_policy" "secretmanager-policy" {
  name        = local.names.secretmanage_policy
  description = "allow to read secretmanager secrets ${var.environment}"
  policy      = file("modules/kubernetes/policies/secretmanager.json")
}

#
# Attach the policy to the k8s worker role
#
resource "aws_iam_role_policy_attachment" "attach" {
  role       = module.eks-cluster.worker_iam_role_name
  policy_arn = aws_iam_policy.secretmanager-policy.arn
}

#
# Attach the S3 policy to workers
# so we can use aws commands inside pods easily if/when needed
#
resource "aws_iam_role_policy_attachment" "attach-s3" {
  role       = module.eks-cluster.worker_iam_role_name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
}
All credit for this fix goes to the person who mentioned it on the cloudposse Slack channel:
terraform state replace-provider -auto-approve -- -/null registry.terraform.io/hashicorp/null
This fixed my issue with this error; on to the next error. All of this just to upgrade a Terraform version.
For us, we updated all the provider URLs we were using in the code, like below:
terraform state replace-provider 'registry.terraform.io/-/null' \
'registry.terraform.io/hashicorp/null'
terraform state replace-provider 'registry.terraform.io/-/archive' \
'registry.terraform.io/hashicorp/archive'
terraform state replace-provider 'registry.terraform.io/-/aws' \
'registry.terraform.io/hashicorp/aws'
I wanted to be very specific with the replacement, so I used the broken URL when replacing it with the new one.
To be more specific, this only applies to Terraform 0.13.
https://www.terraform.io/docs/providers/index.html#providers-in-the-terraform-registry
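To double-check which modules and state entries reference which provider source addresses before and after the replacement, you can use the terraform providers command that the original error output also suggests:
terraform providers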
This error arises when there’s an object in the latest Terraform state that is no longer in the configuration but Terraform can’t destroy it (as would normally be expected) because the provider configuration for doing so also isn’t present.
Solution:
This should arise only if you've recently removed the object "data.null_data_source" along with the provider "null" block. To proceed, you'll need to temporarily restore that provider "null" block, run terraform apply to have Terraform destroy the object data "null_data_source", and then you can remove the provider "null" block because it'll no longer be needed.
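If you follow that route, a minimal sketch of the temporarily restored block is shown below; only the null provider entry matters here, and it can be deleted again once the orphaned object is gone from state:
terraform {
  required_providers {
    null = {
      source = "hashicorp/null"
    }
  }
}

# Empty provider configuration, restored only so Terraform can remove
# the orphaned data.null_data_source object from state.
provider "null" {}
Run terraform apply once with this in place, confirm the data.null_data_source object is removed from state, and then delete the block again.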
