I'm just starting with Terraform and don't yet grasp some of the basics, so I've run into a problem.
I have a main.tf file where I initialise Terraform, declare two providers, for Azure and Kubernetes, and define two modules.
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "~> 3.0.2"
}
}
required_version = ">= 1.1.0"
}
provider "kubernetes" {
host = module.cluster.host
client_certificate = module.cluster.client_certificate
client_key = module.cluster.client_key
cluster_ca_certificate = module.cluster.cluster_ca_certificate
}
provider "azurerm" {
features {}
}
module "cluster" {
source = "./modules/cluster"
location = var.location
kubernetes_version = var.kubernetes_version
ssh_key = var.ssh_key
}
module "network" {
source = "./modules/network"
}
Then, in modules/cluster/cluster.tf, I set up the resource group and the cluster.
When I define a new resource there, all the azurerm and kubernetes provider resources are available.
I'm trying to add two new resources, azurerm_public_ip and azurerm_lb, in modules/network/network.tf, but the azurerm resources are not available to select.
If I try to create those two resources in `modules/cluster/cluster.tf` they are available, and I can write them like so:
resource "azurerm_resource_group" "resource_group" {
name = var.resource_group_name
location = var.location
tags = {
Environment = "Production"
Team = "DevOps"
}
}
resource "azurerm_kubernetes_cluster" "server_cluster" {
...
}
resource "azurerm_public_ip" "public-ip" {
name = "PublicIPForLB"
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
allocation_method = "Static"
}
resource "azurerm_lb" "load-balancer" {
name = "TestLoadBalancer"
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
frontend_ip_configuration {
name = "PublicIPAddress"
public_ip_address_id = azurerm_public_ip.public-ip.id
}
}
Why aren't the providers available in network.tf?
Shouldn't they be available in the network module as they are in the cluster module?
From the docs, the network module should inherit the azurerm provider from main.tf:
"If not specified, the child module inherits all of the default (un-aliased) provider configurations from the calling module."
Many thanks for your help.
I tried to replicate the same scenario using the code below, and everything works as expected, so the issue may be caused by your module declaration.
The main.tf code is as follows:
provider "azurerm" {
features {}
}
provider "kubernetes" {
host = module.cluster.host
client_certificate = module.cluster.client_certificate
client_key = module.cluster.client_key
cluster_ca_certificate = module.cluster.cluster_ca_certificate
}
module "cluster" {
source = "./modules/cluster"
}
module "network" {
source = "./modules/network"
}
NOTE: No additional parameters are required in the provider .tf file; just declaring the modules is sufficient.
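If the azurerm resources are still not offered inside modules/network (editor autocompletion in particular relies on provider requirements being visible), there are two options worth trying; a minimal sketch, assuming the file layout from the question:
# modules/network/network.tf: declare which provider this module needs, so
# tooling and Terraform can resolve azurerm_* resources inside the module.
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
}
}
}
# Or, in main.tf, hand the default azurerm configuration to the module explicitly:
module "network" {
source = "./modules/network"
providers = {
azurerm = azurerm
}
}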
The provider .tf file is as follows:
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "~> 3.0.2"
}
}
required_version = ">= 1.1.0"
}
The cluster file is as follows:
resource "azurerm_kubernetes_cluster" "example" {
name = "swarnaexample-aks1"
location = "East Us"
resource_group_name = "*********"
dns_prefix = "exampleaks1"
default_node_pool {
name = "default"
node_count = 1
vm_size = "Standard_D2_v2"
}
identity {
type = "SystemAssigned"
}
tags = {
Environment = "Production"
}
}
output "client_certificate" {
value = azurerm_kubernetes_cluster.example.kube_config.0.client_certificate
sensitive = true
}
output "kube_config" {
value = azurerm_kubernetes_cluster.example.kube_config_raw
sensitive = true
}
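Note that the kubernetes provider block in main.tf also references module.cluster.host, module.cluster.client_key, and module.cluster.cluster_ca_certificate, so the cluster module needs matching outputs; a minimal sketch following the same kube_config pattern as above:
output "host" {
value = azurerm_kubernetes_cluster.example.kube_config.0.host
sensitive = true
}
output "client_key" {
value = azurerm_kubernetes_cluster.example.kube_config.0.client_key
sensitive = true
}
output "cluster_ca_certificate" {
value = azurerm_kubernetes_cluster.example.kube_config.0.cluster_ca_certificate
sensitive = true
}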
The network .tf file is as follows:
resource "azurerm_resource_group" "example" {
name = "***********"
location = "East Us"
}
resource "azurerm_public_ip" "example" {
name = "swarnapIPForLB"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
allocation_method = "Static"
}
resource "azurerm_lb" "example" {
name = "swarnaLB"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
frontend_ip_configuration {
name = "PublicIPadd"
public_ip_address_id = azurerm_public_ip.example.id
}
}
Upon running plan and apply on the code above, everything deploys as expected.
I'm trying to create a Terraform project to create everything I need in an Azure subscription: resource groups, VNETs, subnets, and VMs.
However, when I've run this once and try again, it states that it cannot delete a subnet that is in use, even though I haven't changed anything about the subnet or the VM connected to it.
Error: creating/updating Virtual Network: (Name "" / Resource Group ""): network.VirtualNetworksClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InUseSubnetCannotBeDeleted" Message="Subnet build-agent is in use by /subscriptions/mysub/resourceGroups/myrg/providers/Microsoft.Network/networkInterfaces/mynic/ipConfigurations/internal and cannot be deleted. In order to delete the subnet, delete all the resources within the subnet. See aka.ms/deletesubnet." Details=[]
terraform {
required_version = ">= 1.1.0"
backend "azurerm" {
}
required_providers {
azurerm = {
version = "=3.5.0"
source = "hashicorp/azurerm" # https://registry.terraform.io/providers/hashicorp/azurerm/latest
}
}
}
# Configure the Microsoft Azure Provider
provider "azurerm" {
features {}
}
locals {
name_suffix = "<mysuffix>"
}
resource "azurerm_resource_group" "rg-infra" {
name = "rg-${local.name_suffix}"
location = "UK South"
}
resource "azurerm_virtual_network" "vnet-mgmt" {
name = "vnet-${local.name_suffix}"
location = azurerm_resource_group.rg-infra.location
resource_group_name = azurerm_resource_group.rg-infra.name
address_space = ["<myiprange>"]
subnet {
name = "virtual-machines"
address_prefix = "<myiprange>"
}
subnet {
name = "databases"
address_prefix = "<myiprange>"
}
}
data "azurerm_virtual_network" "network" {
name = "vnet-${local.name_suffix}"
resource_group_name = azurerm_resource_group.rg-infra.name
}
resource "azurerm_subnet" "sb-ansible" {
name = "build-agent"
resource_group_name = azurerm_resource_group.rg-infra.name
virtual_network_name = data.azurerm_virtual_network.network.name
address_prefixes = ["<myiprange>"]
depends_on = [azurerm_virtual_network.vnet-mgmt]
}
data "azurerm_subnet" "prd-subnet" {
name = "build-agent"
virtual_network_name = data.azurerm_virtual_network.network.name
resource_group_name = azurerm_resource_group.rg-infra.name
depends_on = [azurerm_subnet.sb-ansible]
}
resource "azurerm_network_interface" "ni-ansible" {
name = "nic-ansible-${local.name_suffix}"
location = azurerm_resource_group.rg-infra.location
resource_group_name = azurerm_resource_group.rg-infra.name
ip_configuration {
name = "internal"
subnet_id = data.azurerm_subnet.prd-subnet.id
private_ip_address_allocation = "Dynamic"
}
lifecycle {
ignore_changes = ["ip_configuration"]
}
depends_on = [azurerm_subnet.sb-ansible]
}
resource "azurerm_linux_virtual_machine" "ansible-vm" {
name = "ansible-build-agent"
resource_group_name = azurerm_resource_group.rg-infra.name
location = azurerm_resource_group.rg-infra.location
size = "Standard_D2as_v4"
admin_username = "myadminuser"
network_interface_ids = [
azurerm_network_interface.ni-ansible.id,
]
admin_ssh_key {
username = "myadminuser"
public_key = ""
}
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "18.04-LTS"
version = "latest"
}
lifecycle {
ignore_changes = ["source_image_reference"]
}
depends_on = [azurerm_network_interface.ni-ansible]
}
Any help on why it's behaving like this, or a workaround, would be greatly appreciated!
Many thanks
Turns out you can't mix in-line subnet blocks inside the azurerm_virtual_network resource with an explicitly defined azurerm_subnet: on the next apply, Terraform tries to make the VNET match only its in-line subnets and attempts to delete the separately managed build-agent subnet, which fails because the NIC is still using it.
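A sketch of the fix, keeping the question's placeholder values: drop the in-line subnet blocks and manage every subnet as its own azurerm_subnet resource. The data sources then become unnecessary, since the NIC can reference azurerm_subnet.sb-ansible.id directly.
resource "azurerm_virtual_network" "vnet-mgmt" {
name = "vnet-${local.name_suffix}"
location = azurerm_resource_group.rg-infra.location
resource_group_name = azurerm_resource_group.rg-infra.name
address_space = ["<myiprange>"]
# No in-line subnet blocks here: they would conflict with the
# azurerm_subnet resources below.
}
resource "azurerm_subnet" "virtual-machines" {
name = "virtual-machines"
resource_group_name = azurerm_resource_group.rg-infra.name
virtual_network_name = azurerm_virtual_network.vnet-mgmt.name
address_prefixes = ["<myiprange>"]
}
resource "azurerm_subnet" "sb-ansible" {
name = "build-agent"
resource_group_name = azurerm_resource_group.rg-infra.name
virtual_network_name = azurerm_virtual_network.vnet-mgmt.name
address_prefixes = ["<myiprange>"]
}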
I am following this docs page to deploy an Azure Function with app settings: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/function_app
My Terraform file looks like:
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "3.10.0"
}
}
}
provider "azurerm" {
}
resource "azurerm_resource_group" "example" {
name = "azure-functions-test-rg"
location = "West Europe"
}
resource "azurerm_storage_account" "example" {
name = "funcdemo123shafiq"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_app_service_plan" "example" {
name = "azure-functions-test-service-plan"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
sku {
tier = "Standard"
size = "S1"
}
}
resource "azurerm_function_app" "example" {
name = "test-azure-shafiq123"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
app_service_plan_id = azurerm_app_service_plan.example.id
storage_account_name = azurerm_storage_account.example.name
storage_account_access_key = azurerm_storage_account.example.primary_access_key
os_type = "linux"
version = "~4"
app_settings {
FUNCTIONS_WORKER_RUNTIME = "python"
TESTING_KEY = "TESTING_VALUE"
}
site_config {
linux_fx_version = "python|3.9"
}
}
When I try to deploy this with the terraform apply command, I get this error:
│ Error: Unsupported block type
│
│ on main.tf line 46, in resource "azurerm_function_app" "example":
│ 46: app_settings {
│
│ Blocks of type "app_settings" are not expected here. Did you mean to define argument "app_settings"? If so, use the equals sign to assign it a value.
app_settings is an argument, not a block, so it must be assigned with an equals sign, as the error message suggests; it is supported that way on specific versions of the Terraform AzureRM provider, and a bug fix is available for those versions. I used provider version 3.3.0 and it works for me as expected. Note also that you can't configure linux_fx_version in site_config: its value will be decided automatically based on the result of applying this configuration, as you can check in the updated Terraform documentation.
main.tf
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "3.3.0"
}
}
}
provider "azurerm" {
features {}
}
data "azurerm_resource_group" "example" {
name = "v-rXXXXXree"
#location = "West Europe"
}
resource "azurerm_storage_account" "example" {
name = "funcdemo123shafiq4535"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_service_plan" "example" {
name = "azure-functions-test-service-plan1"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
os_type = "Linux"
sku_name = "Y1"
}
resource "azurerm_linux_function_app" "example" {
name = "test-azure-shafi4353"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
service_plan_id = azurerm_service_plan.example.id
storage_account_name = azurerm_storage_account.example.name
storage_account_access_key = azurerm_storage_account.example.primary_access_key
#os_type = "linux"
#version = "~3"
app_settings = {
FUNCTIONS_WORKER_RUNTIME = "python"
TESTING_KEY = "TESTING_VALUE"
}
site_config {
#linux_fx_version = "python|3.9"
}
}
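Note that this also moves from the deprecated azurerm_function_app and azurerm_app_service_plan resources to their 3.x replacements, azurerm_linux_function_app and azurerm_service_plan.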
I'm trying to build an AKS cluster in Azure using Terraform. However, I do not want AKS deployed into its own VNET and subnet; I already have a subnet within a VNET that I want it to use. When I just give it the subnet ID, I get an overlapping CIDR issue. My networking is:
VNET: 10.0.0.0/16
Subnets: 10.0.1.0/24, 10.0.2.0/24, and 10.0.3.0/24. I need AKS to use the 10.0.1.0/24 subnet within this VNET. However, my Terraform config is trying to use a CIDR of 10.0.0.0/16, which is an obvious conflict. I don't know how to fix this issue inside of Terraform; in the portal I can just choose the VNET/subnet for AKS. Below is my Terraform configuration, which generates the error:
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "=2.46.0"
}
}
}
# Configure the Microsoft Azure Provider
provider "azurerm" {
features {}
subscription_id = "####"
tenant_id = "####"
}
locals {
azure_location = "East US"
azure_location_short = "eastus"
}
resource "azurerm_resource_group" "primary_vnet_resource_group" {
name = "vnet-prod-002-eastus-001"
location = local.azure_location
}
resource "azurerm_virtual_network" "primary_vnet_virtual_network" {
name = "vnet_primary_eastus-001"
location = local.azure_location
resource_group_name = azurerm_resource_group.primary_vnet_resource_group.name
address_space = ["10.0.0.0/16"]
}
resource "azurerm_subnet" "aks-subnet" {
name = "snet-aks-prod-002-eastus-001"
# location = local.azure_location
virtual_network_name = azurerm_virtual_network.primary_vnet_virtual_network.name
resource_group_name = azurerm_resource_group.primary_vnet_resource_group.name
address_prefixes = ["10.0.1.0/24"]
}
output "aks_subnet_id" {
value = azurerm_subnet.aks-subnet.id
}
resource "azurerm_subnet" "application-subnet" {
name = "snet-app-prod-002-eastus-001"
# location = local.azure_location
virtual_network_name = azurerm_virtual_network.primary_vnet_virtual_network.name
resource_group_name = azurerm_resource_group.primary_vnet_resource_group.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_subnet" "postgres-subnet" {
name = "snet-postgres-prod-002-eastus-001"
# location = local.azure_location
virtual_network_name = azurerm_virtual_network.primary_vnet_virtual_network.name
resource_group_name = azurerm_resource_group.primary_vnet_resource_group.name
address_prefixes = ["10.0.3.0/24"]
}
output "postgres_subnet_id" {
value = azurerm_subnet.postgres-subnet.id
}
resource "azurerm_kubernetes_cluster" "aks-prod-002-eastus-001" {
name = "aks-prod-002-eastus-001"
location = local.azure_location
resource_group_name = azurerm_resource_group.primary_vnet_resource_group.name
dns_prefix = "aks-prod-002-eastus-001"
default_node_pool {
name = "default"
node_count = 1
vm_size = "Standard_DS2_v2"
vnet_subnet_id = azurerm_subnet.aks-subnet.id
}
network_profile {
network_plugin = "azure"
}
identity {
type = "SystemAssigned"
}
addon_profile {
aci_connector_linux {
enabled = false
}
azure_policy {
enabled = false
}
http_application_routing {
enabled = false
}
oms_agent {
enabled = false
}
}
}
I'm not a Terraform expert and really need a hand with this if anyone knows how to accomplish it. I've been up and down the documentation; I can find a way to specify the subnet ID, but that's about all I can do. If I don't specify the subnet ID then everything is built, but a new VNET is created, which is what I don't want.
Thanks in advance
All of the following properties need to be set under network_profile, as follows:
network_profile {
network_plugin = "azure"
network_policy = "azure"
service_cidr = "10.0.4.0/24"
dns_service_ip = "10.0.4.10"
docker_bridge_cidr = "172.17.0.1/16"
}
These were missed. Note that service_cidr must not overlap any subnet in the VNET (hence 10.0.4.0/24 here), and dns_service_ip must fall within service_cidr. I hope this helps anyone who is having problems similar to mine.
More info about this block can be found here: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster#network_plugin
The error message is:
There are some problems with the configuration, described below. The Terraform configuration must be valid before initialization so that Terraform can determine which modules and providers need to be installed.
Here's what I'm doing:
Written a terraform script in azvm.tf to create a VM.
Defined the variables resourcegroup, location, and pub_key in the variables.tf file, and called those variables in the azvm.tf file using string interpolation syntax.
Created a VM with the following features:
a) Ubuntu 18.04 server
b) VM name : any custom name
c) Admin_username : Any custom name
d) disable password authentication
e) size : Standard DS1_v2
f) Allow traffic for ssh, http, https
g) use public ssh key generated in the above step
Using the vi command, we created the variables.tf file as:
variable "resourcegroup"
{
default = "user-pbtwiiiuofyu"
}
variable "location"
{
default = ["East US"]
}
variable "pub_key"
{
default = ["ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDH7+uZHdf3SXSfz3Z80FCahr1Bt2t2u+RBlgE/QPN5Qy/CdrpcNH/8oncuDLKW5r+Ie+J5yNVwJz96gdLAQPNMmAYEPGjNct63kQp8hZPlWFdQIEyvHZwwP4YWf5vyRjozQ1PZN/0WyBjqTYmJh3z6haFxqnmIf6a9G8JldjtTPuF2oAnHaUDpqD4qBy/+OSN7CX1Lowny83NLNFoTgmm2npU+RIS2FDYpf1z30tejZSO0V3Z+iT3Xddlzh2NwunzOPv9avzwrLbSSvdbugcXTSPURMLgXwCapH9oHt4RCB/2mgsrRcLQb14FKG9UVBwzh07v2w8fDAHLnv2g4bpnhIyZMpX3zLO9mtQ4ix9DpyqPMqLV7b1uTSyj9t3UEcfprGWRPDJgkpGXEzP0iQgrGlfpctL/6/Hy009123uzb4ugXwndPbuQnPtj45PX2D4zEOFPrlvFBofazuGp65wxdzlzZpGsgqxiZQbdFCU4eQV2Y2kBPj/uv/3DOI2x27ML6i9D/shiVz/sSpryjKRHIl9bU7rU4vClgED/1NuhhTGLk5qPFOjsvvHpIORu5zFKNmeGFHW6e0TJnJMsckszseQuyjqUMkBoFV0NIOyS/mGLrxxzOTpzQyM9duat4T3kGLjjy+ozpHRaahXO79I91pDP66enitmj861kS6G5chQ== root#6badb6ae71d1"]
}
And using the vi command, we created the azvm.tf file as:
az vm create -n Myuyuuy5Vm -g var.resourcegroup --ssh-key-values var.pub_key
We have another similar task, but less easy in comparison, i.e.:
Write a terraform script in azmonitor.tf to create a storage account,
storage container, storage blob to monitor and send log reports
everyday.
Define the variables resourcegroup and location in the variables.tf
file, and call those variables in the azmonitor.tf file using string
interpolation syntax
variables.tf as:
variable "resourcegroup"
{
default = "user-pbtwiiiuofyu"
}
variable "location"
{
default = ["East US"]
}
We created azmonitor.tf as:
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = var.resourcegroup
location = var.location
}
resource "azurerm_storage_account" "example" {
name = "sa123321123"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.location
sku = "Standard_LRS"
}
resource "azurerm_storage_container" "example" {
name = "sc123321123"
resource_group_name = azurerm_resource_group.example.name
account_key = azurerm_storage_account.eample.name
public_access = "blob"
}
resource "azurerm_storage_blob" "example" {
name = "sb123321123"
resource_group_name = azurerm_resource_group.example.name
account_key = azurerm_storage_account.eample.name
source = azurerm_storage_container.name
}
So, as mentioned above, you're using a mix of Terraform and the Azure CLI here - this is not right; you should use one or the other.
It seems you've been tasked with creating a Linux VM using Terraform. You need a 'main' Terraform file for your main code/Terraform objects and a 'variables' Terraform file for your variables. Technically, Terraform flattens all of the Terraform files in a directory when you run terraform init/plan, which means you could put everything in a single .tf file.
However, for the purposes of the tutorial, it's good practice to split the two so they can be managed easily. This is the Terraform code that you would want to use - it will help you create a Linux VM.
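One likely reason for "The Terraform configuration must be valid before initialization" is that the variables.tf you posted is not valid HCL: the opening brace must be on the same line as the block header, and location and pub_key should be plain strings rather than lists. A corrected sketch (public key truncated here):
variable "resourcegroup" {
default = "user-pbtwiiiuofyu"
}
variable "location" {
# A plain string, usable directly as an Azure location.
default = "East US"
}
variable "pub_key" {
# The whole SSH public key as a single string.
default = "ssh-rsa AAAAB3NzaC1yc2E..."
}
With that in place, you could also set public_key = var.pub_key in the admin_ssh_key block of the code below.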
For simplicity, I'm putting a sample here that links back to the variables in your variables.tf:
terraform {
required_version = "= 0.14.10" //change this accordingly
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "=2.55.0"
}
}
}
resource "azurerm_resource_group" "example" {
name = var.resourcegroup
location = var.location
}
resource "azurerm_virtual_network" "example" {
name = "example-network"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
}
resource "azurerm_subnet" "example" {
name = "internal"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "example" {
name = "example-nic"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
ip_configuration {
name = "internal"
subnet_id = azurerm_subnet.example.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_linux_virtual_machine" "example" {
name = "example-machine"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
size = "Standard_DS1_v2"
admin_username = "adminuser" // must match the username in the admin_ssh_key block below
network_interface_ids = [
azurerm_network_interface.example.id,
]
admin_ssh_key {
username = "adminuser"
public_key = file("~/.ssh/id_rsa.pub") //put your public key here
}
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "18.04-LTS"
version = "latest"
}
}
Once you're done validating the code above, you'll need two more objects, sketched below:
The azurerm_network_security_group that will help you create the rules to allow inbound connectivity to the VM
The azurerm_subnet_network_security_group_association that will help you attach the subnet to the NSG.
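A minimal sketch of those two objects, reusing the resource names from the code above (the rule name, priority, and ports are illustrative):
resource "azurerm_network_security_group" "example" {
name = "example-nsg"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
# One inbound rule covering ssh, http, and https.
security_rule {
name = "allow-ssh-http-https"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_ranges = ["22", "80", "443"]
source_address_prefix = "*"
destination_address_prefix = "*"
}
}
# Attach the NSG to the subnet defined earlier.
resource "azurerm_subnet_network_security_group_association" "example" {
subnet_id = azurerm_subnet.example.id
network_security_group_id = azurerm_network_security_group.example.id
}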
Once you have the complete code, you can run terraform init to initialise the modules, followed by terraform plan to verify your plan, and finally terraform apply to deploy the VM.
For the blob part:
variables.tf
variable "resourcegroup" {
default = "user-pbtwiiiuofyu"
}
variable "location" {
default = "East US"
}
azmonitor.tf
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = var.resourcegroup
location = var.location
}
resource "azurerm_storage_account" "example" {
name = "examplestoracc"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_storage_container" "example" {
name = "content"
storage_account_name = azurerm_storage_account.example.name
container_access_type = "private"
}
resource "azurerm_storage_blob" "example" {
name = "my-awesome-content.zip"
storage_account_name = azurerm_storage_account.example.name
storage_container_name = azurerm_storage_container.example.name
type = "Block"
source = "./some-local-file.zip"
}
I need some help with setting up Advanced Networking for azurerm_kubernetes_cluster.
I've been using the code from this page as an example:
https://www.terraform.io/docs/providers/azurerm/r/kubernetes_cluster.html
The only difference is that I make a module out of it for my purposes. All the rest is pretty much the same.
The problem is with:
network_profile {
network_plugin = "azure"
}
After running terraform plan I received the error below:
Error: module.aks.azurerm_kubernetes_cluster.aks: : invalid or unknown key: network_profile
I will be glad for any help, thanks.
aks.tf
resource "azurerm_resource_group" "aks" {
name = “name-rg”
location = “East US”
}
resource azurerm_network_security_group "aks_nsg" {
name = “name-nsg"
location = "${azurerm_resource_group.aks.location}"
resource_group_name = "${azurerm_resource_group.aks.name}"
}
resource "azurerm_virtual_network" "aks_vnet" {
name = “name-vnet"
location = "${azurerm_resource_group.aks.location}"
resource_group_name = "${azurerm_resource_group.aks.name}"
address_space = ["10.2.0.0/16"]
}
resource "azurerm_subnet" "aks_subnet" {
name = “name-subnet"
resource_group_name = "${azurerm_resource_group.aks.name}"
network_security_group_id = "${azurerm_network_security_group.aks_nsg.id}"
address_prefix = "10.2.0.0/24"
virtual_network_name = "${azurerm_virtual_network.aks_vnet.name}"
}
resource "azurerm_kubernetes_cluster" "aks" {
name = "aks-name"
location = "${azurerm_resource_group.aks.location}"
resource_group_name = "${azurerm_resource_group.aks.name}"
dns_prefix = "dns-name"
linux_profile {
admin_username = "${var.aks_admin_username}"
ssh_key {
key_data = "${var.aks_ssh_public_key_path}"
}
}
agent_pool_profile {
name = "default"
count = "${var.aks_agent_count}"
vm_size = "${var.aks_vm_size}"
os_type = "${var.aks_os_type}"
os_disk_size_gb = "${var.aks_os_disk_size_gb}"
vnet_subnet_id = "${azurerm_subnet.aks_subnet.id}"
}
service_principal {
client_id = "${var.aks_client_id}"
client_secret = "${var.aks_client_secret}"
}
network_profile {
network_plugin = "azure"
}
}
Update:
Terraform v0.11.8
+ provider.azurerm v1.5.0 <---- Wrong version, should be v1.15.0
Check your main.tf; if you have pinned the provider version, you need to take it out (see below) and test again.
provider "azurerm" {
#version = "=1.5.0"
}
Currently it should be 1.20.0.
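If you do want to pin the provider, pin it to a release new enough to include network_profile; a sketch using the Terraform 0.11-era version attribute:
provider "azurerm" {
# Any sufficiently recent 1.x release works; 1.20.0 is current
# at the time of writing.
version = "~> 1.20"
}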