Attach an AKS Cluster to an existing VNET using Terraform - azure

I am new to the DevOps and Terraform domain, and I would like to ask the following. I have already created a VNET (using the portal) called "myVNET" in the resource group "Networks". I am trying to implement an AKS cluster using Terraform. My main.tf file is below:
provider "azurerm" {
subscription_id = var.subscription_id
client_id = var.client_id
client_secret = var.client_secret
tenant_id = var.tenant_id
features {}
}
resource "azurerm_resource_group" "MyPlatform" {
name = var.resourcename
location = var.location
}
resource "azurerm_kubernetes_cluster" "aks-cluster" {
name = var.clustername
location = azurerm_resource_group.MyPlatform.location
resource_group_name = azurerm_resource_group.MyPlatform.name
dns_prefix = var.dnspreffix
default_node_pool {
name = "default"
node_count = var.agentnode
vm_size = var.size
}
service_principal {
client_id = var.client_id
client_secret = var.client_secret
}
network_profile {
network_plugin = "azure"
load_balancer_sku = "standard"
network_policy = "calico"
}
}
My question is the following: how can I attach my cluster to my VNET?

You do that by assigning the subnet ID to the node pool's vnet_subnet_id:
data "azurerm_subnet" "subnet" {
name = "<name of the subnet to run in>"
virtual_network_name = "MyVNET"
resource_group_name = "Networks"
}
...
resource "azurerm_kubernetes_cluster" "aks-cluster" {
...
default_node_pool {
name = "default"
...
vnet_subnet_id = data.azurerm_subnet.subnet.id
}
...
You can reference this existing module to build your own, or use it directly.
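One related point, hedged as an assumption about your environment rather than something the snippet above requires: when the cluster uses a service principal and is placed into a pre-existing subnet, that principal usually also needs network permissions on the subnet. A minimal sketch, assuming the service principal's object ID is exposed as a variable:

resource "azurerm_role_assignment" "aks_subnet" {
  # var.aks_sp_object_id is a hypothetical variable: the object ID (not the client ID)
  # of the service principal used by the cluster
  principal_id         = var.aks_sp_object_id
  role_definition_name = "Network Contributor"
  scope                = data.azurerm_subnet.subnet.id
}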

Related

Deploying a VM with managed identity using Terraform on Azure fails

I am currently working on deploying a VM on Azure using Terraform. The VM deployed correctly when using client_id, subscription_id, client_secret and tenant_id in the AzureRM provider block. However, I want to make use of managed identities so I don't have to expose the client_secret.
Things I tried:
For this, I followed this guide.
I included the azuread provider block and set use_msi = true to indicate that managed identities should be used. I also included the azurerm_subscription and azurerm_client_config data blocks, as well as a resource definition, and then added the role assignment for the VM.
Code:
terraform {
required_providers {
azuread = {
source = "hashicorp/azuread"
}
}
}
provider "azurerm" {
features {}
//client_id = "XXXXXXXXXXXXXX"
//client_secret = "XXXXXXXXXXXXXX"
//subscription_id = "XXXXXXXXXXXXXX"
tenant_id = "TENANT_ID"
//use_msi = true
}
provider "azuread" {
use_msi = true
tenant_id = "TENANT_ID"
}
#Resource group definition
resource "azurerm_resource_group" "myVMachineRG" {
name = "testnew-resources"
location = "westus2"
}
resource "azurerm_virtual_network" "myVNet" {
name = "testnew-network"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.myVMachineRG.location
resource_group_name = azurerm_resource_group.myVMachineRG.name
}
resource "azurerm_subnet" "mySubnet" {
name = "testnew-internal-subnet"
resource_group_name = azurerm_resource_group.myVMachineRG.name
virtual_network_name = azurerm_virtual_network.myVNet.name
#256 total IPs
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "myNIC" {
name = "testnew-nic"
location = azurerm_resource_group.myVMachineRG.location
resource_group_name = azurerm_resource_group.myVMachineRG.name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.mySubnet.id
private_ip_address_allocation = "Dynamic"
}
}
#ADDED HERE:
data "azurerm_subscription" "current" {}
data "azurerm_client_config" "example" {
}
resource "azurerm_virtual_machine" "example" {
name = "testnew-vm"
location = azurerm_resource_group.myVMachineRG.location
resource_group_name = azurerm_resource_group.myVMachineRG.name
network_interface_ids = ["${azurerm_network_interface.myNIC.id}"]
vm_size = "Standard_F2"
#Option to delete disks when Terraform destroy is performed.
#This is to ensure that we don't keep wasting balance
delete_os_disk_on_termination = true
delete_data_disks_on_termination = true
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
storage_os_disk {
name = "OSDISK"
caching = "Readwrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
#Just for testing purposes, would be better to use a KeyVault reference here instead.
os_profile {
computer_name = "XXXXXXXXXXXXXX"
admin_username = "XXXXXXXXXXXXXX"
admin_password = "XXXXXXXXXXXXXX"
}
#Force password to authenticate
os_profile_linux_config {
disable_password_authentication = false
}
identity {
type = "SystemAssigned"
}
}
data "azurerm_role_definition" "contributor" {
name = "Contributor"
}
resource "azurerm_role_assignment" "example" {
//name = azurerm_virtual_machine.example.name
scope = data.azurerm_subscription.current.id
role_definition_name = "Contributor"
//role_definition_id = "${data.azurerm_subscription.current.id}${data.azurerm_role_definition.contributor.id}"
//principal_id = azurerm_virtual_machine.example.identity[0].principal_id
principal_id = data.azurerm_client_config.example.object_id
}
Error:
Error: building AzureRM Client: obtain subscription() from Azure CLI: parsing json result from the Azure CLI: waiting for the Azure CLI: exit status 1: ERROR: Please run 'az login' to setup account.
│
│ with provider["registry.terraform.io/hashicorp/azurerm"],
│ on main.tf line 9, in provider "azurerm":
│ 9: provider "azurerm" {
I don't understand why it is still asking me to run az login when I am trying to use a managed identity to log in.
I have redacted the tenant ID for security purposes.
Any help would be greatly appreciated :)
I tried to reproduce the same requirement in my environment and was able to deploy it successfully.
You need to provide the name of the managed identity if you are authenticating via managed identities in Terraform.
Add msi_name under the azuread provider.
Note: as you mentioned, make sure the managed identity has enough permissions (for example, the Contributor role) to authenticate and create resources; otherwise the deployment will fail.
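For reference, a minimal sketch of how that Contributor assignment could be expressed in Terraform, assuming the identity already exists and reusing the data.azurerm_subscription.current block from the configuration below (in practice this step is usually done out of band by an identity that already has sufficient rights):

data "azurerm_user_assigned_identity" "msi" {
  name                = "jahnaviidentity" # assumed: the managed identity used for authentication
  resource_group_name = "identity-rg"     # assumed resource group name
}

resource "azurerm_role_assignment" "msi_contributor" {
  scope                = data.azurerm_subscription.current.id
  role_definition_name = "Contributor"
  principal_id         = data.azurerm_user_assigned_identity.msi.principal_id
}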
main.tf
data "azurerm_subscription" "current" {}
variable "subscription_id" {
default = "xxxxxxxxxxxx"
}
provider "azurerm"{
features{}
subscription_id = var.subscription_id
}
provider "azuread"{
features{}
use_msi = true
msi-name = "jahnaviidentity" //Give Name of the Managed identity
}
resource "azurerm_resource_group" "example" {
name = "example-resources"
location = "West Europe"
}
resource "azurerm_virtual_network" "main" {
name = "main-network"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
}
resource "azurerm_subnet" "internal" {
name = "internal"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.main.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "main" {
name = "main-nic"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
ip_configuration {
name = "<configurationname>"
subnet_id = azurerm_subnet.internal.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_virtual_machine" "main" {
name = "main-vm"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
network_interface_ids = [azurerm_network_interface.main.id]
vm_size = "Standard_DS1_v2"
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
storage_os_disk {
name = "osdisk"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "<computername>"
admin_username = "<admin/username>"
admin_password = "xxxxxx"
}
os_profile_linux_config {
disable_password_authentication = false
}
identity {
type = "SystemAssigned"
}
}
Output: terraform init, terraform plan, and terraform apply all complete successfully, and the VM shows as deployed in the Azure portal.
Your provider block has use_msi commented out for azurerm (the one that's failing). Is that just a mistake from transferring the code into this question? I would have put this in the comments, but my reputation is not high enough.
It looks like azurerm might also need the subscription ID (unlike azuread):
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/managed_service_identity
The use_msi property should be in azurerm as well. From the above link:
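Based on the linked managed_service_identity guide, the azurerm provider block for MSI authentication looks roughly like this (the IDs are placeholders):

provider "azurerm" {
  features {}
  use_msi         = true
  subscription_id = "SUBSCRIPTION_ID"
  tenant_id       = "TENANT_ID"
}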
Also, just to be sure: you've already configured the managed identity to use for this purpose, right?

Private Endpoint between AKS and ACR

I want to create AKS and ACR resources in my Azure environment. The script is able to create the two resources, and I am able to connect to each of them. But the AKS node cannot pull images from the ACR. After some research, I found I need to create a Private Endpoint between the AKS and ACR.
The strange thing is that if I create the PE using Terraform the AKS and ACR still cannot communicate. If I create the PE manually, they can communicate. I compared the parameters of the two PEs on the UI and they look the same.
Could someone help me define the PE using the following script? Or let me know what I did wrong?
Thanks!
Full TF script without the Private Endpoint
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "=2.97.0"
}
}
required_version = ">= 1.1.7"
}
provider "azurerm" {
features {}
subscription_id = "xxx"
}
resource "azurerm_resource_group" "rg" {
name = "aks-rg"
location = "East US"
}
resource "azurerm_kubernetes_cluster" "aks" {
name = "my-aks"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
dns_prefix = "myaks"
default_node_pool {
name = "default"
node_count = 2
vm_size = "Standard_B2s"
}
identity {
type = "SystemAssigned"
}
}
resource "azurerm_container_registry" "acr" {
name = "my-aks-acr-123"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
sku = "Premium"
admin_enabled = true
network_rule_set {
default_action = "Deny"
}
}
resource "azurerm_role_assignment" "acrpull" {
principal_id = azurerm_kubernetes_cluster.aks.kubelet_identity[0].object_id
role_definition_name = "AcrPull"
scope = azurerm_container_registry.acr.id
skip_service_principal_aad_check = true
}
Then you need to create a VNET and a subnet (not part of this code), plus a private DNS zone (see the sketch after the DNS zone block below):
Private DNS zone:
resource "azurerm_private_dns_zone" "example" {
name = "mydomain.com"
resource_group_name = azurerm_resource_group.example.name
}
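For reference, a minimal sketch of the VNET, subnet, and private DNS zone link the endpoint relies on (the names and address ranges here are assumptions, not taken from your script):

resource "azurerm_virtual_network" "example" {
  name                = "aks-vnet"
  address_space       = ["10.1.0.0/16"]
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}

resource "azurerm_subnet" "endpoints" {
  name                 = "snet-endpoints"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.1.2.0/24"]
  # azurerm 2.x attribute: disables private endpoint network policies so an endpoint can be placed here
  enforce_private_link_endpoint_network_policies = true
}

# Link the private DNS zone to the VNET so cluster nodes resolve the registry's private IP
resource "azurerm_private_dns_zone_virtual_network_link" "acr" {
  name                  = "acr-dns-link"
  resource_group_name   = azurerm_resource_group.example.name
  private_dns_zone_name = azurerm_private_dns_zone.example.name
  virtual_network_id    = azurerm_virtual_network.example.id
}

The YOUR_SUBNET placeholder in the private endpoint below could then point at azurerm_subnet.endpoints.id.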
AKS Part:
resource "azurerm_kubernetes_cluster" "aks" {
name = "my-aks"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
dns_prefix = "myaks"
private_cluster_enabled = true
default_node_pool {
name = "default"
node_count = 2
vm_size = "Standard_B2s"
}
identity {
type = "SystemAssigned"
}
}
You need to create the ACR and a private endpoint for the ACR:
resource "azurerm_container_registry" "acr" {
name = "my-aks-acr-123"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
public_network_access_enabled = false
sku = "Premium"
admin_enabled = true
}
resource "azurerm_private_endpoint" "acr" {
name = "pvep-acr"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
subnet_id = YOUR_SUBNET
private_service_connection {
name = "example-acr"
private_connection_resource_id = azurerm_container_registry.acr.id
is_manual_connection = false
subresource_names = ["registry"]
}
private_dns_zone_group {
name = azurerm_private_dns_zone.example.name
private_dns_zone_ids = [azurerm_private_dns_zone.example.id]
}
}
resource "azurerm_role_assignment" "acrpull" {
principal_id = azurerm_kubernetes_cluster.aks.kubelet_identity[0].object_id
role_definition_name = "AcrPull"
scope = azurerm_container_registry.acr.id
skip_service_principal_aad_check = true
}

AKS via Terraform Error: Code="CustomRouteTableWithUnsupportedMSIType"

I am trying to create a private AKS cluster via Terraform using an existing VNET and subnet. I was able to create the cluster previously, but suddenly the error below appeared.
│ Error: creating Managed Kubernetes Cluster "demo-azwe-aks-cluster" (Resource Group "demo-azwe-aks-rg"): containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="CustomRouteTableWithUnsupportedMSIType" Message="Clusters using managed identity type SystemAssigned do not support bringing your own route table. Please see https://aka.ms/aks/customrt for more information"
│
│ with azurerm_kubernetes_cluster.aks_cluster,
│ on aks_cluster.tf line 30, in resource "azurerm_kubernetes_cluster" "aks_cluster":
│ 30: resource "azurerm_kubernetes_cluster" "aks_cluster" {
# Provision AKS Cluster
resource "azurerm_kubernetes_cluster" "aks_cluster" {
name = "${var.global-prefix}-${var.cluster-id}-${var.environment}-azwe-aks-cluster"
location = "${var.location}"
resource_group_name = azurerm_resource_group.aks_rg.name
dns_prefix = "${var.global-prefix}-${var.cluster-id}-${var.environment}-azwe-aks-cluster"
kubernetes_version = data.azurerm_kubernetes_service_versions.current.latest_version
node_resource_group = "${var.global-prefix}-${var.cluster-id}-${var.environment}-azwe-aks-nrg"
private_cluster_enabled = true
default_node_pool {
name = "dpool"
vm_size = "Standard_DS2_v2"
orchestrator_version = data.azurerm_kubernetes_service_versions.current.latest_version
availability_zones = [1, 2, 3]
enable_auto_scaling = true
max_count = 2
min_count = 1
os_disk_size_gb = 30
type = "VirtualMachineScaleSets"
vnet_subnet_id = data.azurerm_subnet.aks.id
node_labels = {
"nodepool-type" = "system"
"environment" = "${var.environment}"
"nodepoolos" = "${var.nodepool-os}"
"app" = "system-apps"
}
tags = {
"nodepool-type" = "system"
"environment" = "dev"
"nodepoolos" = "linux"
"app" = "system-apps"
}
}
# Identity (System Assigned or Service Principal)
identity {
type = "SystemAssigned"
}
# Add On Profiles
addon_profile {
azure_policy {enabled = true}
oms_agent {
enabled = true
log_analytics_workspace_id = azurerm_log_analytics_workspace.insights.id
}
}
# Create Azure AD Group in Active Directory for AKS Admins
resource "azuread_group" "aks_administrators" {
name = "${azurerm_resource_group.aks_rg.name}-cluster-administrators"
description = "Azure AKS Kubernetes administrators for the ${azurerm_resource_group.aks_rg.name}-cluster."
}
# RBAC and Azure AD Integration Block
role_based_access_control {
enabled = true
azure_active_directory {
managed = true
admin_group_object_ids = [azuread_group.aks_administrators.id]
}
}
# Linux Profile
linux_profile {
admin_username = "ubuntu"
ssh_key {
key_data = file(var.ssh_public_key)
}
}
# Network Profile
network_profile {
network_plugin = "kubenet"
load_balancer_sku = "Standard"
}
tags = {
Environment = "prod"
}
}
You are trying to create a private AKS cluster with an existing VNET and existing subnets for both AKS and the firewall. As per the error "CustomRouteTableWithUnsupportedMSIType", you need a user-assigned managed identity and a role assignment on it, i.e. Network Contributor, so the cluster can use your own route table.
The network profile will be azure instead of kubenet, as you are using an Azure VNET and its subnet.
You can use add-ons as per your requirements, but please ensure you use a data block for an existing Log Analytics workspace; otherwise you can give the resource ID directly. So, instead of
log_analytics_workspace_id = azurerm_log_analytics_workspace.insights.id
you can use
log_analytics_workspace_id = "/subscriptions/SubscriptionID/resourcegroups/resourcegroupname/providers/microsoft.operationalinsights/workspaces/workspacename"
Example to create a private cluster with an existing VNET and subnets (I haven't added the add-ons):
provider "azurerm" {
features {}
}
#resource group as this will be referred to in managed identity creation
data "azurerm_resource_group" "base" {
name = "resourcegroupname"
}
#existing vnet
data "azurerm_virtual_network" "base" {
name = "ansuman-vnet"
resource_group_name = data.azurerm_resource_group.base.name
}
#existing subnets
data "azurerm_subnet" "aks" {
name = "akssubnet"
resource_group_name = data.azurerm_resource_group.base.name
virtual_network_name = data.azurerm_virtual_network.base.name
}
data "azurerm_subnet" "firewall" {
name = "AzureFirewallSubnet"
resource_group_name = data.azurerm_resource_group.base.name
virtual_network_name = data.azurerm_virtual_network.base.name
}
#user assigned identity required to create route table
resource "azurerm_user_assigned_identity" "base" {
resource_group_name = data.azurerm_resource_group.base.name
location = data.azurerm_resource_group.base.location
name = "mi-name"
}
#role assignment required to create route table
resource "azurerm_role_assignment" "base" {
scope = data.azurerm_resource_group.base.id
role_definition_name = "Network Contributor"
principal_id = azurerm_user_assigned_identity.base.principal_id
}
#route table
resource "azurerm_route_table" "base" {
name = "rt-aksroutetable"
location = data.azurerm_resource_group.base.location
resource_group_name = data.azurerm_resource_group.base.name
}
#route
resource "azurerm_route" "base" {
name = "dg-aksroute"
resource_group_name = data.azurerm_resource_group.base.name
route_table_name = azurerm_route_table.base.name
address_prefix = "0.0.0.0/0"
next_hop_type = "VirtualAppliance"
next_hop_in_ip_address = azurerm_firewall.base.ip_configuration.0.private_ip_address
}
#route table association
resource "azurerm_subnet_route_table_association" "base" {
subnet_id = data.azurerm_subnet.aks.id
route_table_id = azurerm_route_table.base.id
}
#firewall
resource "azurerm_public_ip" "base" {
name = "pip-firewall"
location = data.azurerm_resource_group.base.location
resource_group_name = data.azurerm_resource_group.base.name
allocation_method = "Static"
sku = "Standard"
}
resource "azurerm_firewall" "base" {
name = "fw-akscluster"
location = data.azurerm_resource_group.base.location
resource_group_name = data.azurerm_resource_group.base.name
ip_configuration {
name = "ip-firewallakscluster"
subnet_id = data.azurerm_subnet.firewall.id
public_ip_address_id = azurerm_public_ip.base.id
}
}
#kubernetes_cluster
resource "azurerm_kubernetes_cluster" "base" {
name = "testakscluster"
location = data.azurerm_resource_group.base.location
resource_group_name = data.azurerm_resource_group.base.name
dns_prefix = "dns-testakscluster"
private_cluster_enabled = true
network_profile {
network_plugin = "azure"
outbound_type = "userDefinedRouting"
}
default_node_pool {
name = "default"
node_count = 1
vm_size = "Standard_D2_v2"
vnet_subnet_id = data.azurerm_subnet.aks.id
}
identity {
type = "UserAssigned"
user_assigned_identity_id = azurerm_user_assigned_identity.base.id
}
depends_on = [
azurerm_route.base,
azurerm_role_assignment.base
]
}
Output: terraform plan and terraform apply complete successfully, and the cluster shows up in the Azure portal.
Note: by default, Azure requires the subnet used by the firewall to be named AzureFirewallSubnet. If you use a subnet with any other name for the firewall, creation will error out, so please make sure the existing subnet to be used by the firewall is named AzureFirewallSubnet.

Failed to create aks using existing vnet

I'm trying to create AKS using Terraform; the catch is that I already have a VNET and subnet created, and I need the cluster to be created in that network.
When executing this code I'm getting an error:
locals {
environment = "prod"
resource_group = "hnk_rg_poc"
vnet_subnet_cidr = ["10.3.1.0/24"]
}
#Existing vnet with address space "10.3.1.0/24"
data "azurerm_virtual_network" "existing-vnet" {
name = "${var.vnet}"
resource_group_name = local.resource_group
}
#subnets
resource "azurerm_subnet" "vnet_subnet_id" {
name = "${var.vnet_subnet_id}"
resource_group_name = local.resource_group
address_prefixes = local.vnet_subnet_cidr
virtual_network_name = data.azurerm_virtual_network.existing-vnet.name
}
vnet_subnet_id = data.azurerm_subnet.vnet_subnet_id.id
As you already have an existing VNET and subnet to be used by the AKS cluster, you have to use a data block instead of a resource block for the subnet.
You can use the below to create a basic AKS cluster using your existing VNET and subnet:
provider "azurerm" {
features {}
}
#local vars
locals {
environment = "test"
resource_group = "resource_group_name"
name_prefix = "name-aks"
}
#Existing vnet with address space
data "azurerm_virtual_network" "base" {
name = "existing-vnet"
resource_group_name = local.resource_group
}
#existing subnet to be used by aks
data "azurerm_subnet" "aks" {
name = "existing-subnet"
resource_group_name = local.resource_group
virtual_network_name = data.azurerm_virtual_network.base.name
}
#kubernetes_cluster
resource "azurerm_kubernetes_cluster" "base" {
name = "${local.name_prefix}-${local.environment}"
location = data.azurerm_virtual_network.base.location
resource_group_name = data.azurerm_virtual_network.base.resource_group_name
dns_prefix = "dns-${local.name_prefix}-${local.environment}"
network_profile {
network_plugin = "azure"
}
default_node_pool {
name = "default"
node_count = 1
vm_size = "Standard_D2_v2"
vnet_subnet_id = data.azurerm_subnet.aks.id
}
identity {
type = "SystemAssigned"
}
}
Output: the terraform plan completes successfully.

Unable to create Azure AKS Cluster using existing VNET and Subnets

I'm trying to build an AKS cluster in Azure using Terraform. However, I do not want AKS deployed into its own VNET and subnet; I have already built a subnet within a VNET that I want it to use. When I try to just give it the subnet ID, I get an overlapping CIDR issue. My networking is:
VNET: 10.0.0.0/16
Subnets: 10.0.1.0/24, 10.0.2.0/24, and 10.0.3.0/24. I need AKS to use the 10.0.1.0/24 subnet within this VNET. However, my Terraform config is trying to use a CIDR of 10.0.0.0/16, which is an obvious conflict. I don't know how to fix this issue inside of Terraform; with the portal I can just choose the VNET/subnet for AKS. Below is my Terraform configuration, which generates the error:
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "=2.46.0"
}
}
}
# Configure the Microsoft Azure Provider
provider "azurerm" {
features {}
subscription_id = "####"
tenant_id = "####"
}
locals {
azure_location = "East US"
azure_location_short = "eastus"
}
resource "azurerm_resource_group" "primary_vnet_resource_group" {
name = "vnet-prod-002-eastus-001"
location = local.azure_location
}
resource "azurerm_virtual_network" "primary_vnet_virtual_network" {
name = "vnet_primary_eastus-001"
location = local.azure_location
resource_group_name = azurerm_resource_group.primary_vnet_resource_group.name
address_space = ["10.0.0.0/16"]
}
resource "azurerm_subnet" "aks-subnet" {
name = "snet-aks-prod-002-eastus-001"
# location = local.azure_location
virtual_network_name = azurerm_virtual_network.primary_vnet_virtual_network.name
resource_group_name = azurerm_resource_group.primary_vnet_resource_group.name
address_prefixes = ["10.0.1.0/24"]
}
output "aks_subnet_id" {
value = azurerm_subnet.aks-subnet.id
}
resource "azurerm_subnet" "application-subnet" {
name = "snet-app-prod-002-eastus-001"
# location = local.azure_location
virtual_network_name = azurerm_virtual_network.primary_vnet_virtual_network.name
resource_group_name = azurerm_resource_group.primary_vnet_resource_group.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_subnet" "postgres-subnet" {
name = "snet-postgres-prod-002-eastus-001"
# location = local.azure_location
virtual_network_name = azurerm_virtual_network.primary_vnet_virtual_network.name
resource_group_name = azurerm_resource_group.primary_vnet_resource_group.name
address_prefixes = ["10.0.3.0/24"]
}
output "postgres_subnet_id" {
value = azurerm_subnet.postgres-subnet.id
}
resource "azurerm_kubernetes_cluster" "aks-prod-002-eastus-001" {
name = "aks-prod-002-eastus-001"
location = local.azure_location
resource_group_name = azurerm_resource_group.primary_vnet_resource_group.name
dns_prefix = "aks-prod-002-eastus-001"
default_node_pool {
name = "default"
node_count = 1
vm_size = "Standard_DS2_v2"
vnet_subnet_id = azurerm_subnet.aks-subnet.id
}
network_profile {
network_plugin = "azure"
}
identity {
type = "SystemAssigned"
}
addon_profile {
aci_connector_linux {
enabled = false
}
azure_policy {
enabled = false
}
http_application_routing {
enabled = false
}
oms_agent {
enabled = false
}
}
}
I'm not a Terraform expert and really need a hand with this if anyone knows how to accomplish it. I've been up and down the documentation, and I can find a way to specify the subnet ID, but that's about all I can do. If I don't specify the subnet ID then everything is built, but a new VNET is created, which is what I don't want.
Thanks in advance
All of the following properties need to be set under network_profile, as shown below:
network_profile {
network_plugin = "azure"
network_policy = "azure"
service_cidr = "10.0.4.0/24"
dns_service_ip = "10.0.4.10"
docker_bridge_cidr = "172.17.0.1/16"
}
These were missing; I hope this helps anyone who is having similar problems.
More info about this block can be found here: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster#network_plugin
