Azure: Web Application Firewall Policy was not found - terraform

I have a simple data source block to retrieve a WAF policy resource, but it doesn't find the resource. If I comment out the data block there is no problem. The WAF policy created by Terraform can be applied and destroyed.
data "azurerm_web_application_firewall_policy" "example" {
name = "a205555-az-waf-policy"
resource_group_name = "eastus2-204161-platform-resources"
}
output "azurerm_web_application_firewall_policy" {
value = data.azurerm_web_application_firewall_policy.example.id
}
When I try to read the resource data I get this error:
│ Error: Error: Web Application Firewall Policy "a205555-az-waf-policy" was not found
│ with data.azurerm_web_application_firewall_policy.example,
│ on output.tf line 40, in data "azurerm_web_application_firewall_policy" "example":
│ 40: data "azurerm_web_application_firewall_policy" "example" {
│
I've tried removing the resource from tfstate, importing the resource, deleting and recreating it, and upgrading the provider in case there was a bug. Any ideas?
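One thing worth double-checking (my assumption, not something stated above) is which subscription the provider resolves at plan time, since the data source only searches the provider's current subscription. A minimal sketch that pins an aliased provider to the subscription holding the policy (the subscription ID is a placeholder):

provider "azurerm" {
  alias           = "waf"
  subscription_id = "00000000-0000-0000-0000-000000000000" # placeholder: the subscription that holds the policy
  features {}
}

data "azurerm_web_application_firewall_policy" "example" {
  provider            = azurerm.waf
  name                = "a205555-az-waf-policy"
  resource_group_name = "eastus2-204161-platform-resources"
}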

Related

Create multiple subscriptions with for_each in terraform returns context deadline exceeded

I have created a module that creates a subscription and other resources via Terraform.
As you can see, it iterates over this twice: once where env is "dev" and once where env is "prod".
The problem is: it creates the "-prod" subscription, but it will not create "-dev". The pipeline says "Error: context deadline exceeded" after 30 minutes.
terraform plan says that everything is fine and that it will add the subscription, but the apply never does.
module "wmind_subscription" {
source = "./subscription"
for_each = toset(["dev", "prod"])
env = each.key
name = "wmind-it"
management_group_id = module.wmind_management_group.management_group_id
repo_name = module.wmind_github.repo_name
}
The Terraform resource that creates the subscription:
resource "azurerm_subscription" "this" {
subscription_name = "${var.name}-${var.env}"
billing_scope_id = data.azurerm_billing_mca_account_scope.this.id
}
Full error message:
╷
│ Error: creating new Subscription (Alias "***"): subscription.AliasClient#Create: Failure sending request: StatusCode=0 -- Original Error: context deadline exceeded
│
│ with module.wmind_subscription["dev"].azurerm_subscription.this,
│ on subscription/subscription.tf line 10, in resource "azurerm_subscription" "this":
│ 10: resource "azurerm_subscription" "this" {
│
╵
I hope someone can help me, thank you.
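Not part of the original post, but worth noting: the 30-minute mark at which the error appears matches the default create timeout on the azurerm_subscription resource, which can be raised with a timeouts block. A minimal sketch of that change:

resource "azurerm_subscription" "this" {
  subscription_name = "${var.name}-${var.env}"
  billing_scope_id  = data.azurerm_billing_mca_account_scope.this.id

  # Raise the create timeout above the provider default of 30 minutes.
  timeouts {
    create = "60m"
  }
}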

Terraform plan throws Provider Plugin Error

I am doing a small POC with Terraform and I am unable to run terraform plan.
My code:
terraform {
  backend "azurerm" {
    storage_account_name = "appngqastorage"
    container_name       = "terraform"
    key                  = "qa.terraform.tfstate"
    access_key           = "my access key here"
  }

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 2.77"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "qa_resource_group" {
  location = "East US"
  name     = "namehere"
}
My execution:
terraform init = success
terraform validate = configuration is valid
terraform plan = throws exception
Error:
│ Error: Plugin error
│
│ with provider["registry.terraform.io/hashicorp/azurerm"],
│ on main.tf line 15, in provider "azurerm":
│ 15: provider"azurerm"{
│
│ The plugin returned an unexpected error from plugin.(*GRPCProvider).ConfigureProvider: rpc error: code = Internal desc = grpc: error while marshaling: string field contains invalid UTF-8
After digging a little deeper I was able to figure out what the issue was.
The project I am working on is used across multiple regions, so when testing I swap my region to check the data that displays in a specific region. This time, when running terraform apply, my Windows configuration was pointing to another region, not the US.
This thread helped me understand:
Upgrade the Azure CLI to the latest version using az upgrade.
Log into your Azure account using az login.
Re-run your Terraform commands!
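A related option (my assumption, not part of the answer above) is to pin the provider to an explicit subscription, tenant, and cloud environment, so a stale CLI context cannot change what terraform plan talks to:

provider "azurerm" {
  features {}

  # Placeholder IDs - replace with your own values.
  subscription_id = "00000000-0000-0000-0000-000000000000"
  tenant_id       = "00000000-0000-0000-0000-000000000000"
  environment     = "public" # e.g. "usgovernment" or "china" for sovereign clouds
}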

Terraform aks module - get cluster name and resource group name via remote state

Hi, I am trying to follow this official guide to manage AKS resources. There, terraform_remote_state is used to get the resource_group_name and kubernetes_cluster_name.
data "terraform_remote_state" "aks" {
backend = "local"
config = {
path = "/path/to/base/project/terraform.tfstate"
}
}
# Retrieve AKS cluster information
provider "azurerm" {
features {}
}
data "azurerm_kubernetes_cluster" "cluster" {
name = data.terraform_remote_state.aks.outputs.kubernetes_cluster_name
resource_group_name = data.terraform_remote_state.aks.outputs.resource_group_name
}
I have created the initial AKS cluster with the aks module. Looking at its outputs in the documentation, it doesn't export the resource group name or the cluster name.
Now I wonder how I can get that information. I have tried the below in the base project.
module "aks" {
...
}
output "resource_group_name" {
value = module.aks.resource_group_name
}
output "kubernetes_cluster_name" {
value = module.aks.cluster_name
}
But I get errors when running terraform plan:
Error: Unsupported attribute
│
│ on main.tf line 59, in output "resource_group_name":
│ 59: value = module.aks.resource_group_name
│ ├────────────────
│ │ module.aks is a object, known only after apply
│
│ This object does not have an attribute named "resource_group_name".
╵
╷
│ Error: Unsupported attribute
│
│ on main.tf line 63, in output "kubernetes_cluster_name":
│ 63: value = module.aks.cluster_name
│ ├────────────────
│ │ module.aks is a object, known only after apply
│
│ This object does not have an attribute named "cluster_name".
Those are listed under inputs for that module, though. Now I don't know how to get those values from terraform_remote_state.
As the module itself doesn't expose the name and the resource group as outputs, we have to declare those outputs there first and then reference them when deploying, or via the remote state as well.
So we have to add two outputs to output.tf of the aks module after doing terraform init.
output "kubernetes_cluster_name" {
value = azurerm_kubernetes_cluster.main.name
}
output "resource_group_name" {
value = azurerm_kubernetes_cluster.main.resource_group_name
}
Then reference those outputs in main.tf after defining the modules (i.e. network and aks); you can see your Kubernetes cluster name in the plan as well as after applying it.
output "kuberneteclustername" {
value = module.aks.kubernetes_cluster_name
}
output "resourcegroupname" {
value = module.aks.resource_group_name
}
Now let's test it from the remote state:
data "terraform_remote_state" "aks" {
backend = "local"
config = {
path = "path/to/terraform/aksmodule/terraform.tfstate"
}
}
# Retrieve AKS cluster information
provider "azurerm" {
features {}
}
data "azurerm_kubernetes_cluster" "cluster" {
name = data.terraform_remote_state.aks.outputs.kuberneteclustername
resource_group_name = data.terraform_remote_state.aks.outputs.resourcegroupname
}
output "aks" {
value = data.azurerm_kubernetes_cluster.cluster.name
}
output "rg" {
value = data.azurerm_kubernetes_cluster.cluster.resource_group_name
}
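For completeness, here is how those data source attributes are typically wired into a kubernetes provider, which is what the official guide does next (a sketch based on that guide, not part of the question):

provider "kubernetes" {
  host                   = data.azurerm_kubernetes_cluster.cluster.kube_config[0].host
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.cluster.kube_config[0].client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.cluster.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.cluster.kube_config[0].cluster_ca_certificate)
}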

using terraform with helm on minikube gives error

I have the following .tf file
provider "kubernetes" {
config_context_cluster = "minikube"
}
resource "kubernetes_namespace" "user-namespace" {
metadata {
name = "user-namespace"
}
}
provider "helm" {
kubernetes {
config_context_cluster = "minikube"
}
}
resource "helm_release" "local" {
name = "user-server-chart"
chart = "./user-server"
}
When I run terraform apply I get the following error:
kubernetes_namespace.brw-user-namespace: Creating...
helm_release.local: Creating...
Error code explanation: 501 = Server does not support this operation.\n") has prevented the request from succeeding (post namespaces)
│
│ with kubernetes_namespace.user-namespace,
│ on main.tf line 5, in resource "kubernetes_namespace" "user-namespace":
│ 5: resource "kubernetes_namespace" "user-namespace" {
│
╵
Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
│
│ with helm_release.local,
│ on main.tf line 17, in resource "helm_release" "local":
│ 17: resource "helm_release" "local" {
Step 1: Your kubeconfig is not pointing at the correct context. Either switch to the minikube context, or export the kubeconfig path (for example by adding the export line to your ~/.bashrc file):
kubectl config use-context minikube
or export KUBECONFIG=~/.kube/<kubeconfig_env>.yml
Step 2: A Helm release resource can be imported using its namespace and name,
e.g. terraform import helm_release.example default/example-name
Since the repository attribute is not persisted as metadata by Helm, it will not be set to any value by default. All other provider-specific attributes will be set to their default values, and they can be overridden after running apply using the resource definition configuration.
You may refer to the documents [1] and [2] for additional information.
[1] https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release
[2] https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs
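An alternative that often resolves both errors (my assumption, not stated in the answer above) is to give each provider an explicit kubeconfig path and context rather than only config_context_cluster:

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "minikube"
}

provider "helm" {
  kubernetes {
    config_path    = "~/.kube/config"
    config_context = "minikube"
  }
}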

Terraform Error: Failed to query available provider packages, pagerduty provider

I'm on TF version v1.0.0 (latest) and am trying to make use of the PagerDuty Terraform provider, and the error log says it could not retrieve the list of available versions. Below are the code snippet and the complete error log.
Code:
terraform {
  required_providers {
    pagerduty = {
      source  = "PagerDuty/pagerduty"
      version = "~> 1.9.8"
    }
  }
}

provider "pagerduty" {
  token = var.token
}

resource "pagerduty_service" "example" {
  name                    = "My Web App"
  auto_resolve_timeout    = 14400
  acknowledgement_timeout = 600
  escalation_policy       = var.policy
}

resource "pagerduty_service_integration" "apiv2" {
  name    = "API V2"
  type    = "events_api_v2_inbound_integration"
  service = pagerduty_service.example.id
}
Error:
- Finding latest version of hashicorp/pagerduty...
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider hashicorp/pagerduty: provider registry
│ registry.terraform.io does not have a provider named registry.terraform.io/hashicorp/pagerduty
│
│ Did you intend to use pagerduty/pagerduty? If so, you must specify that source address in each module which
│ requires that provider. To see which modules are currently depending on hashicorp/pagerduty, run the following
│ command:
│ terraform providers
Answering my own question: separating the terraform required_providers block into its own versions.tf file solved the issue.
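For reference, the separated file looks like this (the same block as in the question, just moved into versions.tf):

# versions.tf
terraform {
  required_providers {
    pagerduty = {
      source  = "PagerDuty/pagerduty"
      version = "~> 1.9.8"
    }
  }
}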
