Terraform does not remember several Elastic Beanstalk settings

I have an environment created with the resource aws_elastic_beanstalk_environment. Unfortunately, on every plan and apply Terraform reports that several settings have to be added, including the VPCId.
I retrieved the settings with the AWS CLI describe-configuration-settings command and they match what I specified, but Terraform still claims the settings need to be re-added each time.
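For reference, the describe-configuration-settings check mentioned above looks roughly like this; the application and environment names are placeholders rather than values from the question, and the --query filter simply narrows the output to the aws:ec2:vpc namespace:
aws elasticbeanstalk describe-configuration-settings \
  --application-name my-app \
  --environment-name my-env \
  --query "ConfigurationSettings[].OptionSettings[?Namespace=='aws:ec2:vpc']"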
I have tried both this statement
setting {
  name      = "VPCId"
  namespace = "aws:ec2:vpc"
  value     = var.vpc_id
  resource  = "AWSEBSecurityGroup"
}
and this one.
setting {
  name      = "VPCId"
  namespace = "aws:ec2:vpc"
  value     = var.vpc_id
  resource  = ""
}
Unfortunately, neither works. Does anyone have an idea?
I am using Terraform version 0.14.11 and the AWS provider version 3.74.3.

Related

A resource with the ID "/subscriptions/.../resourceGroups/rgaks/providers/Microsoft.Storage/storageAccounts/aksbackupstorage" already exists

I have created a storage account, and a container inside it, to store my AKS backup using Terraform. I created a child module for the storage account and the container, and I call it from the root module's main.tf. I created two module blocks, module "aks_backup_storage" and module "aks_backup_container". The modules are created successfully after running terraform apply, but at the end it raises the errors shown below in the console.
A resource with the ID "/subscriptions/...../resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_storage_account" for more information.
failed creating container: failed creating container: containers.Client#Create: Failure sending request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="ContainerAlreadyExists" Message="The specified container already exists.\nRequestId:f.........\nTime:2022-12-28T12:52:08.2075701Z"
Root module
module "aks_backup_storage" {
  source                          = "../modules/aks_pv_storage_container"
  rg_aks_backup_storage           = var.rg_aks_backup_storage
  aks_backup_storage_account     = var.aks_backup_storage_account
  aks_backup_container           = var.aks_backup_container
  rg_aks_backup_storage_location = var.rg_aks_backup_storage_location
  aks_backup_retention_days      = var.aks_backup_retention_days
}
Child module
resource "azurerm_resource_group" "rg_aksbackup" {
  name     = var.rg_aks_backup_storage
  location = var.rg_aks_backup_storage_location
}
resource "azurerm_storage_account" "aks_backup_storage" {
  name                            = var.aks_backup_storage_account
  resource_group_name             = var.rg_aks_backup_storage
  location                        = var.rg_aks_backup_storage_location
  account_kind                    = "StorageV2"
  account_tier                    = "Standard"
  account_replication_type       = "ZRS"
  access_tier                     = "Hot"
  enable_https_traffic_only       = true
  min_tls_version                 = "TLS1_2"
  #allow_blob_public_access       = false
  allow_nested_items_to_be_public = false
  is_hns_enabled                  = false
  blob_properties {
    container_delete_retention_policy {
      days = var.aks_backup_retention_days
    }
    delete_retention_policy {
      days = var.aks_backup_retention_days
    }
  }
}
# Different containers can be created for the different backup levels such as cluster, namespace, PV
resource "azurerm_storage_container" "aks_backup_container" {
  #name = "aks-backup-container"
  name                 = var.aks_backup_container
  #storage_account_name = azurerm_storage_account.aks_backup_storage.name
  storage_account_name = var.aks_backup_storage_account
}
I have also tried to import the resource using the command below:
terraform import ['azurerm_storage_account.aks_backup_storage /subscriptions/a3ae2713-0218-47a2-bb72-c6198f50c56f/resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage']
but zsh only reports "no matches found":
zsh: no matches found: [azurerm_storage_account.aks_backup_storage /subscriptions/a3ae2713-0218-47a2-bb72-c6198f50c56f/resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage/]
I had no issue when I was creating the resources with the same code without declaring any modules.
Now I have several modules called from main.tf in the root module.
Here is my project directory structure.
I really appreciate any suggestions, thanks in advance.
variable.tf
variable "rg_aks_backup_storage" {
  type        = string
  description = "resource group name for the backup storage"
  default     = "rg-aks-backup-storage"
}
variable "aks_backup_storage_account" {
  type        = string
  description = "storage account name for the backup"
  default     = "aksbackupstorage"
}
variable "aks_backup_container" {
  type        = string
  description = "storage container name"
  #default = "aks-storage-container"
  default     = "aksbackupstoragecontaine"
}
variable "rg_aks_backup_storage_location" {
  type    = string
  default = "westeurope"
}
variable "aks_backup_retention_days" {
  type    = number
  default = 90
}
The storage account name that you use must be unique within Azure (see naming restrictions). I checked, and the default storage account name that you are using is already taken. Have you tried changing the name to something you know is unique?
A way to do this consistently is to add a random suffix to the end of the name, e.g.:
resource "random_string" "random_suffix" {
length = 6
special = false
upper = false
}
resource "azurerm_storage_account" "aks_backup_storage" {
name = join("", tolist([var.aks_backup_storage_account, random_string.random_suffix.result]))
...
}
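One follow-up to the sketch above: with a generated suffix, anything that needs the final name (such as the container from the question) should reference the storage account resource rather than the raw variable, roughly like this:
resource "azurerm_storage_container" "aks_backup_container" {
  name                 = var.aks_backup_container
  # reference the resource so the generated suffix is included
  storage_account_name = azurerm_storage_account.aks_backup_storage.name
}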
I also received the same error when I tried to run terraform apply while creating a container registry.
It usually occurs when the local Terraform state file does not match the resources that actually exist in the portal.
A resource that was deployed previously can still exist in Azure even if it is missing from your current state file. If you run into this kind of issue, check the state file against what is in the portal; if the resource exists in Azure but not in the state, import it with the command below.
Note: make sure the state file and the real resources match, then run terraform init and terraform apply once you are done with the changes.
To resolve this error, use terraform import <resource_address> <resource_id>.
Here I tried to import the container registry (let's say) and it imported successfully.
terraform import azurerm_container_registry.acr "/subscriptions/<subscriptionID>/resourceGroups/<resourceGroup>/providers/Microsoft.ContainerRegistry/registries/xxxxcontainerRegistry1"
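For the storage account from the question, the same approach would look roughly like this. The square brackets from the original attempt are dropped (zsh treats them as a glob pattern, hence "no matches found"), the resource address and the ID are passed as two separate arguments, and since the resource is declared inside the aks_backup_storage module the address needs the module prefix:
terraform import 'module.aks_backup_storage.azurerm_storage_account.aks_backup_storage' \
  '/subscriptions/a3ae2713-0218-47a2-bb72-c6198f50c56f/resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage'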
After that I applied terraform apply and successfully deployed the resource without any errors.

Azure DNS - Terraform - Ignore TXT Value

I have some Terraform code which works, but I want to be able to ignore the DNS TXT record value, as it is updated externally by another tool (acme.sh). I have tried multiple different kinds of HCL to ignore the value; the Terraform HCL does not fail, it just sets the value back to the original one.
Any help would be appreciated.
resource "azurerm_resource_group" "mydomain-co-uk-dns" {
name = "mydomain.co.uk-dns"
location = "North Europe"
}
resource "azurerm_dns_zone" "mydomaindns" {
name = "mydomain.co.uk"
resource_group_name = azurerm_resource_group.mydomain-co-uk.name
}
resource "azurerm_dns_txt_record" "_acme-challenge-api" {
name = "_acme-challenge.api"
zone_name = azurerm_dns_zone.mydomaindns.name
resource_group_name = azurerm_resource_group.mydomain-co-uk-dns.name
ttl = 300
record {
value = "randomkey-that-changes externally"
}
tags = {
Environment = "acmesh"
}
lifecycle {
ignore_changes = [
record
]
}
}
Thanks
I tried testing with the same code you provided and was able to deploy the resources successfully. I then manually changed the value of the record in the portal and applied the Terraform code again; it made no changes, it just updated the previous record value in the Terraform state file to the newer value from the portal.
Note: I used Terraform v1.0.5 on windows_amd64 + provider registry.terraform.io/hashicorp/azurerm v2.83.0.
As confirmed by #Lain, the issue was resolved after upgrading azurerm from 2.70.0 to the latest version.
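For completeness, a minimal sketch of pinning azurerm to a version at or above the one the answer above was tested with; the exact constraint is an assumption, so adjust it to whatever version you upgrade to:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 2.83.0" # assumed constraint; any release after 2.70.0 that contains the fix works
    }
  }
}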

Terraform reports error "Failed to query available provider packages"

I have created a main.tf file, as below, for a MongoDB Atlas Terraform module.
resource "mongodbatlas_teams" "test" {
org_id = null
name = "MVPAdmin_Team"
usernames = ["user1#email.com", "user2#email.com", "user3#email.com"]
}
resource "mongodbatlas_project" "test" {
name = "MVP_Project"
org_id = null
teams {
team_id = null
role_names = ["GROUP_CLUSTER_MANAGER"]
}
}
resource "mongodbatlas_project_ip_access_list" "test" {
project_id = null
ip_address = null
comment = "IP address for MVP Dev cluster testing"
}
resource "mongodbatlas_cluster" "test" {
name = "MVP_DevCluster"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
cluster_type = REPLICASET
state_name = var.state_name
replication specs {
num_shards= var.num_shards
region_config {
region_name = "AU-EA"
electable_nodes = var.electable_nodes
priority = var.priority
read_only_nodes = var.read_only_nodes
}
}
provider_backup_enabled = var.provider_backup_enabled
auto_scaling_disk_gb_enabled = var.auto_scaling_disk_gb_enabled
mongo_db_major_version = var.mongo_db_major_version
provider_name = "Azure"
provider_disk_type_name = var.provider_disk_type_name
provider_instance_size_name = var.provider_instance_size_name
mongodbatlas_database_user {
username = var.username
password = var.password
auth_database_name = var.auth_database_name
role_name = var.role_name
database_name = var.database_name
}
mongodbatlas_database_snapshot_backup_policy {
policy_item = var.policy_item
frequency_type = var.frequency_type
retention_value = var.retention_value
}
advanced_configuration {
minimum_enabled_tls_protocol = var.minimum_enabled_tls_protocol
no_table_scan = var.no_table_scan
connection_string = var.connection_string
}
}
However, terraform init reports as below:
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/mongodbatlas...
Error: Failed to query available provider packages
Could not retrieve the list of available versions for provider
hashicorp/mongodbatlas: provider registry registry.terraform.io does not have
a provider named registry.terraform.io/hashicorp/mongodbatlas
If you have just upgraded directly from Terraform v0.12 to Terraform v0.14
then please upgrade to Terraform v0.13 first and follow the upgrade guide for
that release, which might help you address this problem.
Did you intend to use mongodb/mongodbatlas? If so, you must specify that
source address in each module which requires that provider. To see which
modules are currently depending on hashicorp/mongodbatlas, run the following
command:
terraform providers
Any idea as to what is going wrong?
The error message explains the most likely reason for seeing this error message: you've upgraded directly from Terraform v0.12 to Terraform v0.14 without running through the Terraform v0.13 upgrade steps.
If you upgrade to Terraform v0.13 first and follow those instructions then the upgrade tool should be able to give more specific instructions on what to change here, and may even be able to automatically upgrade your configuration for you.
However, if you wish then you can alternatively manually add the configuration block that the v0.13 upgrade tool would've inserted, to specify that you intend to use the mongodb/mongodbatlas provider as "mongodbatlas" in this module:
terraform {
  required_providers {
    mongodbatlas = {
      source = "mongodb/mongodbatlas"
    }
  }
}
There are some other considerations in the v0.13 upgrade guide that the above doesn't address, so you may still need to perform the steps described in that upgrade guide if you see different error messages after trying what I showed above.
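After adding that block, you would typically re-run initialization so Terraform resolves the provider from the mongodb namespace instead of hashicorp, and optionally run terraform providers (as suggested in the error message) to confirm which source address each module now requires:
terraform init
terraform providers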

Switch terraform 0.12.6 to 0.13.0 gives me provider["registry.terraform.io/-/null"] is required, but it has been removed

I manage state in remote Terraform Cloud.
I have downloaded and installed the latest Terraform 0.13 CLI.
Then I removed the .terraform directory.
Then I ran terraform init and got no error.
Then I ran:
➜ terraform apply -var-file env.auto.tfvars
Error: Provider configuration not present
To work with
module.kubernetes.module.eks-cluster.data.null_data_source.node_groups[0] its
original provider configuration at provider["registry.terraform.io/-/null"] is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.kubernetes.module.eks-cluster.data.null_data_source.node_groups[0],
after which you can remove the provider configuration again.
Releasing state lock. This may take a few moments...
This is the content of the module/kubernetes/main.tf
###################################################################################
# EKS CLUSTER                                                                     #
#                                                                                 #
# This module contains configuration for EKS cluster running various applications #
###################################################################################
module "eks_label" {
  source      = "git::https://github.com/cloudposse/terraform-null-label.git?ref=master"
  namespace   = var.project
  environment = var.environment
  attributes  = [var.component]
  name        = "eks"
}

#
# Local computed variables
#
locals {
  names = {
    secretmanage_policy = "secretmanager-${var.environment}-policy"
  }
}

data "aws_eks_cluster" "cluster" {
  name = module.eks-cluster.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks-cluster.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
  version                = "~> 1.9"
}

module "eks-cluster" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = module.eks_label.id
  cluster_version = var.cluster_version
  subnets         = var.subnets
  vpc_id          = var.vpc_id

  worker_groups = [
    {
      instance_type = var.cluster_node_type
      asg_max_size  = var.cluster_node_count
    }
  ]

  tags = var.tags
}

# Grant secretmanager access to all pods inside kubernetes cluster
# TODO:
# Adjust implementation so that the policy is template based and we only allow
# kubernetes access to a single key based on the environment.
# we should export key from modules/secrets and then grant only specific ARN access
# so that only production cluster is able to read production secrets but not dev or staging
# https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_identity-based-policies.html#permissions_grant-get-secret-value-to-one-secret
resource "aws_iam_policy" "secretmanager-policy" {
  name        = local.names.secretmanage_policy
  description = "allow to read secretmanager secrets ${var.environment}"
  policy      = file("modules/kubernetes/policies/secretmanager.json")
}

#
# Attach the policy to k8s worker role
#
resource "aws_iam_role_policy_attachment" "attach" {
  role       = module.eks-cluster.worker_iam_role_name
  policy_arn = aws_iam_policy.secretmanager-policy.arn
}

#
# Attach the S3 Policy to Workers
# So we can use aws commands inside pods easily if/when needed
#
resource "aws_iam_role_policy_attachment" "attach-s3" {
  role       = module.eks-cluster.worker_iam_role_name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
}
All credit for this fix goes to the person who mentioned it on the cloudposse Slack channel:
terraform state replace-provider -auto-approve -- -/null registry.terraform.io/hashicorp/null
This fixed this particular error for me; on to the next one. All of this just to upgrade a Terraform version.
In our case, we updated all the provider URLs we were using in the code, like below:
terraform state replace-provider 'registry.terraform.io/-/null' \
'registry.terraform.io/hashicorp/null'
terraform state replace-provider 'registry.terraform.io/-/archive' \
'registry.terraform.io/hashicorp/archive'
terraform state replace-provider 'registry.terraform.io/-/aws' \
'registry.terraform.io/hashicorp/aws'
I wanted to be very specific with the replacement, so I used the broken URL as the source and the new URL as the target.
To be clear, this only applies to Terraform 0.13:
https://www.terraform.io/docs/providers/index.html#providers-in-the-terraform-registry
This error arises when there’s an object in the latest Terraform state that is no longer in the configuration but Terraform can’t destroy it (as would normally be expected) because the provider configuration for doing so also isn’t present.
Solution:
This should arise only if you’ve recently removed the "data.null_data_source" object along with the provider "null" block. To proceed, you’ll need to temporarily restore that provider "null" block, run terraform apply to have Terraform destroy the "data.null_data_source" object, and then you can remove the provider "null" block again because it’ll no longer be needed.
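A minimal sketch of that temporary step, assuming the default (empty) provider configuration is sufficient for your setup:
# Temporarily re-add the empty provider block so Terraform can destroy
# the orphaned null_data_source object; remove the block again afterwards.
provider "null" {}
Run terraform apply with this block in place; once the object has been destroyed, the block can be deleted again.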

How to add resource dependencies in terraform

I have created a GCP Kubernetes cluster using Terraform and configured a few Kubernetes resources such as namespaces and Helm releases. I would like Terraform to automatically destroy/recreate all the Kubernetes cluster resources if the GCP cluster is destroyed/recreated, but I can't seem to figure out how to do it.
The behavior I am trying to recreate is similar to what you would get if you used triggers with null_resources. Is this possible with normal resources?
resource "google_container_cluster" "primary" {
name = "marcellus-wallace"
location = "us-central1-a"
initial_node_count = 3
resource "kubernetes_namespace" "example" {
metadata {
annotations = {
name = "example-annotation"
}
labels = {
mylabel = "label-value"
}
name = "terraform-example-namespace"
#Something like this, but this only works with null_resources
triggers {
cluster_id = "${google_container_cluster.primary.id}"
}
}
}
In your specific case, you don't need to specify any explicit dependencies. They will be set automatically, because you have cluster_id = "${google_container_cluster.primary.id}" in your second resource.
In cases where you do need to set a manual dependency, you can use the depends_on meta-argument, as shown in the sketch below.
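A minimal sketch of the explicit form, reusing the resources from the question:
resource "kubernetes_namespace" "example" {
  metadata {
    name = "terraform-example-namespace"
  }

  # Explicit ordering: the cluster is created before this namespace,
  # and the namespace is destroyed before the cluster on teardown.
  depends_on = [google_container_cluster.primary]
}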
