Terraform Azure Application Gateway unable to associate with certificate in key vault - azure

I'm trying to install a certificate into an Application Gateway.
Following the documentation I have used key_vault_secret_id in the ssl_certificate block.
Here is a simplified version of the code (the rest of the configuration works; it's just this one block that has issues, so this helps to highlight the problem):
resource "azurerm_application_gateway" "npfs_application_gateway" {
  name                = local.appgateway_name
  resource_group_name = data.azurerm_resource_group.rg_core.name
  location            = data.azurerm_resource_group.rg_core.location

  ### This is a standard V2
  sku {
    name     = var.gw_sku["name"]
    tier     = var.gw_sku["tier"]
    capacity = var.gw_sku["capacity"]
  }

  ssl_certificate {
    name                = var.pfx_certificate_name
    key_vault_secret_id = "[REDACTED]"
    password            = data.azurerm_key_vault_secret.cert-password.value
  }
}
When I run this as a terraform plan I get the following error:
The argument "data" is required, but no definition was found.
An argument named "key_vault_secret_id" is not expected here.
This is odd because the docs state that the data argument is optional when key_vault_secret_id is set, yet the provider doesn't recognise the argument at all.
I am using the following versions:
Terraform v0.12.26
provider.azuread v0.8.0
provider.azurerm v1.44.0
provider.null v2.1.2
provider.random v2.2.1
provider.template v2.1.2
Anybody come across this before? Is one of my versions wrong?

I was able to solve this problem by upgrading to the latest azurerm Terraform provider, but that wasn't the only thing I needed to do. In addition:
Go to the Subscription you are working in, to the Resource providers.
See if you have a Provider "Microsoft.DataProtection" with Status "NotRegistered".
Register it.
It seems the newer Terraform provider code leverages this additional resource provider within Azure.
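If you'd rather handle that registration in Terraform itself instead of through the portal, the 2.x azurerm provider offers a resource for it. A minimal sketch, assuming the subscription the provider is authenticated against is the one you are working in:

```hcl
# Registers the Microsoft.DataProtection resource provider on the
# subscription the azurerm provider is configured against.
resource "azurerm_resource_provider_registration" "dataprotection" {
  name = "Microsoft.DataProtection"
}
```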

I find when you get these types of issues, it's best to look in the source.
According to: https://github.com/terraform-providers/terraform-provider-azurerm/blob/master/azurerm/internal/services/network/application_gateway_resource.go
You can only have 'key_vault_secret_id' inside an 'ssl_certificate' block, which is what you have. But note that this is the latest version of the provider (v2.x). You are on 1.44.0, so we need to look at that source...
https://github.com/terraform-providers/terraform-provider-azurerm/blob/v1.44.0/azurerm/internal/services/network/resource_arm_application_gateway.go
And in this version the only mentions of 'key_vault_secret_id' are commented out.
I suggest you upgrade to the latest version of the provider.
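A minimal sketch of pinning the newer provider (the exact constraint is an assumption; use whichever 2.x release you validate against):

```hcl
provider "azurerm" {
  # key_vault_secret_id inside ssl_certificate is supported in the 2.x series.
  version = ">= 2.0"

  # The 2.x provider requires an (often empty) features block.
  features {}
}
```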

Related

Deploying and configuring Postgres server in Azure with Terraform

I'm deploying an Azure Postgres Flexible Server with Terraform as described in GitHub. The configuration works as expected, no issues there. The only deviation from that GitHub template is that I want to configure pgBouncer for Postgres, which is now supported natively. I don't see a way to create this configuration (i.e., enable this feature).
I've done some research and discovered the configuration feature is not quite available (at least according to the open ticket in GitHub here). At the same time, one of the published replies suggests using azurerm_postgresql_flexible_server_configuration, and this resource type is indeed available in Terraform. However, Microsoft documentation states that to enable and configure pgBouncer I need to introduce 7 parameters. I thought that to make the code tidier, I could use a map and a for_each loop, like this:
locals {
  pg_config = {
    "pgbouncer.default_pool_size" : "50",
    "pgbouncer.max_client_conn"   : "5000",
    "pgbouncer.pool_mode"         : "TRANSACTION"
    # etc...
  }
}

resource "azurerm_postgresql_flexible_server_configuration" "postgres_fs_config" {
  for_each  = local.pg_config
  name      = each.key
  value     = each.value
  server_id = azurerm_postgresql_flexible_server.postgres-fs.id
}
Is this the best way to configure Postgres (without involving CDK)? Did I miss something?
Ok, I verified this approach and it works like a charm. Will stick to it for now.
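For reference, a sketch of what the full map might look like. The parameter names follow Microsoft's pgBouncer documentation for Flexible Server (note they are all lowercase), but the values here are placeholder assumptions you should replace with your own:

```hcl
locals {
  pg_config = {
    # Placeholder values; tune these for your workload.
    "pgbouncer.enabled"            = "true"
    "pgbouncer.default_pool_size"  = "50"
    "pgbouncer.max_client_conn"    = "5000"
    "pgbouncer.pool_mode"          = "TRANSACTION"
    "pgbouncer.min_pool_size"      = "0"
    "pgbouncer.query_wait_timeout" = "120"
  }
}
```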

Error: Provider configuration not present

I'm trying to update Terraform from version 0.12 to 0.13. While updating, I ran into an issue during plan:
Error: Provider configuration not present
To work with
aws_sns_topic_subscription.sns_s3_raw_parquet_sqs_user_cleansing_monet_service_subscription
its original provider configuration at provider["registry.terraform.io/-/aws"]
is required, but it has been removed. This occurs when a provider
configuration is removed while objects created by that provider still exist in
the state. Re-add the provider configuration to destroy
aws_sns_topic_subscription.sns_s3_raw_parquet_sqs_user_cleansing_monet_service_subscription,
after which you can remove the provider configuration again.
Could someone please help?
Most likely you have not fully completed the migration to Terraform v0.13.
Make a backup of your current state with terraform state pull, then try to execute the following:
terraform state replace-provider 'registry.terraform.io/-/aws' 'registry.terraform.io/hashicorp/aws'
This should amend your state for the newer Terraform version.
Most of the time you'll have a global.tf file (or similar) in your directory that holds configuration that isn't itself a resource. That's where you'd normally have a block like this:
provider "aws" {
  region     = "REGION"
  access_key = "my-access-key"
  secret_key = "my-secret-key"
}
Looks like that block, whichever file it was in, got deleted. Add it back and try again. Note you'll need to change REGION to whatever region your resources are in. In lieu of access_key and secret_key, some people use a profile from ~/.aws/credentials.
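A sketch of that profile-based variant (the profile name here is an assumption; use whichever profile actually exists in your ~/.aws/credentials):

```hcl
provider "aws" {
  region  = "REGION"     # change to the region your resources are in
  profile = "default"    # assumed profile name from ~/.aws/credentials
}
```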

Terraform unable to use third party providers

Description:
I am trying to use an Elasticsearch provider for Terraform. Since there is no official one from Elastic or from Hashicorp I am trying to use a community one "https://registry.terraform.io/providers/phillbaker/elasticsearch/latest".
Terraform version: Terraform v0.14.4
Code:
I tried to put everything in 1 .tf file. I also tried to create a separate module for the resources like Hashicorp recommends. Both methods generate the same error message.
terraform {
  required_providers {
    elk = {
      source  = "phillbaker/elasticsearch"
      version = "1.5.1"
    }
  }
}

provider "elk" {
  url = "https://<my_elk_server>"
}

resource "elasticsearch_index" "index" {
  name = var.elasticsearch_index_name
}
Problem:
terraform init isn't able to find the appropriate provider in the Terraform Registry for some reason.
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/elasticsearch...
- Finding phillbaker/elasticsearch versions matching "1.5.1"...
- Installing phillbaker/elasticsearch v1.5.1...
- Installed phillbaker/elasticsearch v1.5.1 (self-signed, key ID 02AD42CD82B6A957)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/plugins/signing.html

Error: Failed to query available provider packages

Could not retrieve the list of available versions for provider
hashicorp/elasticsearch: provider registry registry.terraform.io does not have
a provider named registry.terraform.io/hashicorp/elasticsearch

If you have just upgraded directly from Terraform v0.12 to Terraform v0.14
then please upgrade to Terraform v0.13 first and follow the upgrade guide for
that release, which might help you address this problem.
No tfstate files are being generated.
How do I use third party providers from the Terraform Registry ?
In your required_providers block you've told Terraform that you intend to refer to this provider as "elk" within this module:
elk = {
  source  = "phillbaker/elasticsearch"
  version = "1.5.1"
}
Typically you'd set the local name of the provider to be the same as the "type" portion of the provider source address, like this:
elasticsearch = {
  source  = "phillbaker/elasticsearch"
  version = "1.5.1"
}
If you change the local name in this way, then the references to elasticsearch elsewhere in the module will refer to the community provider as you intended.
Note that means you'll also need to change the provider block so it has a matching local name:
provider "elasticsearch" {
  url = "https://<my_elk_server>"
}
A different approach would be to keep elk as the local name and change the rest of the configuration to refer to that non-default name explicitly. I don't recommend this: typically the local name should only differ from the type in the unusual case where your module depends on two providers with the same type name. I mention it only in the hope that it helps to understand how the Terraform language infers provider dependencies when they aren't given explicitly:
terraform {
  required_providers {
    elk = {
      source  = "phillbaker/elasticsearch"
      version = "1.5.1"
    }
  }
}

# "elk" here is matched with the local names in the
# required_providers block, so this will work.
provider "elk" {
  url = "https://<my_elk_server>"
}

# This "elasticsearch_" prefix causes Terraform to look
# for a provider with the local name "elasticsearch"
# by default...
resource "elasticsearch_index" "index" {
  # ...so if you've given the provider a different local
  # name then you need to associate the resource with
  # the provider configuration explicitly:
  provider = elk

  name = var.elasticsearch_index_name
}
I expect most Terraform users would find the above approach surprising, so in the interests of using familiar Terraform idiom I'd suggest instead following my first suggestion of renaming the local name to elasticsearch, which will then allow the automatic resource-to-provider association to work.
So, after testing, it seems putting the whole configuration in the same .tf file does the job:
terraform {
  required_providers {
    elasticsearch = {
      source  = "phillbaker/elasticsearch"
      version = "1.5.1"
    }
  }
}

provider "elasticsearch" {
  url = "http://127.0.0.1:9200"
}

resource "elasticsearch_index" "index" {
  name = var.index_name
}
If you want to create a separate module for it you can just source it from another module:
module "elastic" {
  source     = "./modules/elastic"
  index_name = var.index_name
}
Check Martin's answer for more information.
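For the module approach to work, the child module needs to declare the variable it receives. A minimal sketch of an assumed ./modules/elastic layout:

```hcl
# ./modules/elastic/variables.tf (assumed layout)
variable "index_name" {
  type = string
}

# ./modules/elastic/main.tf would then hold the required_providers,
# provider, and elasticsearch_index resource blocks.
```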

The argument "storage_connection_string" is required, but no definition was found

I'm currently trying to set up an Azure Function app using Terraform.
Using the documentation from HashiCorp found here.
However when running a terraform plan I'm getting the following error: The argument "storage_connection_string" is required, but no definition was found.
According to the documentation there is no such valid parameter and as such I've not included it. I've only found one entry on this while looking about and it was only a question, with no response. I'm not well versed in Azure so don't know if I need the storage_connection_string or if it's the API that is messing with me.
The resource snippet:
resource "azurerm_function_app" "this" {
  name                       = "function-name"
  resource_group_name        = "resource-group"
  location                   = "location"
  app_service_plan_id        = "id"
  storage_account_name       = "name"
  storage_account_access_key = "key"
}
Formatting and referencing of the values are set up properly; I don't have the code on this computer, so it made more sense to post it like this.
This most likely arises from using an outdated version of the azurerm provider: e.g., version 2.0.0 has a required storage_connection_string argument, which was removed in a later version.
Solution: upgrade the provider version you use. Somewhere you should have declared that you want to use the azurerm provider; at that place you should specify a version constraint as well, e.g.:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.40.0"
    }
  }
}
Alternatively, make sure you only look at the documentation matching your current provider and Terraform versions.

Terraform zone : [DEPRECATED] Use location instead, but location results in another error. Any solution?

When I try with zone, I get "Zone deprecated, use location instead", but, location is not recognized.
Is there any workaround?
provider "google" {
  credentials = file("gcp-terra-flask-CREDENTIALS_FILE.json")
  project     = "gcp-terra-flask"
  region      = "us-west1"
  zone        = "us-west1-a"
  version     = "~> 2.17.0"
}

provider "google" {
  credentials = file("gcp-terra-flask-CREDENTIALS_FILE.json")
  project     = "gcp-terra-flask"
  region      = "us-west1"
  location    = "us-west1-a"
  version     = "~> 2.17.0"
}
I tried the example below and ran brew upgrade terraform. I need to figure out what changes I need to make so this runs without any warnings or errors (assuming all GCE permissions are in line).
https://cloud.google.com/community/tutorials/getting-started-on-gcp-with-terraform
Hi, your problem is with version 3.x of the provider.
See https://www.terraform.io/docs/providers/google/guides/version_3_upgrade.html
That version is in beta, and all of us who only set a minimal version constraint are picking up the beta version of this provider.
For BigQuery I needed to pin the provider to the latest 2.x version :(
See as example:
https://github.com/forseti-security/terraform-google-forseti/issues/303
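A minimal sketch of pinning the google provider to the 2.x series, based on the first provider block above (the exact constraint is an assumption; adjust it to the 2.x release you validate against):

```hcl
provider "google" {
  credentials = file("gcp-terra-flask-CREDENTIALS_FILE.json")
  project     = "gcp-terra-flask"
  region      = "us-west1"
  zone        = "us-west1-a"   # zone is still valid on the 2.x provider
  version     = "~> 2.17"      # assumed 2.x pin; keeps the 3.x beta out
}
```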