Unable to use for_each and count in Terraform

I have a .tfvars file with the contents below as the input variables:
aks_configuration = {
  aks1 = {
    name                                  = "cnitest"
    location                              = "westeurope"
    kubernetes_version                    = "1.22.4"
    dns_prefix                            = "cnitest"
    default_nodepool_name                 = "general"
    default_nodepool_size                 = "Standard_B2s"
    default_nodepool_count                = 2
    default_node_pool_autoscale           = true
    default_node_pool_autoscale_min_count = 1
    default_node_pool_autoscale_max_count = 2
    aks_zones                             = null
    network_plugin                        = null
    network_policy                        = null
    vnet_name                             = null
    vnet_enabled                          = false
    subnet_name                           = null
    objectID                              = ["*********"]
    nodepool = [
      {
        name                = "dts"
        vm_size             = "Standard_B2s"
        enable_auto_scaling = true
        mode                = "user"
        node_count          = 1
        max_count           = 2
        min_count           = 1
      }
    ]
  }
}
Now I need to create an AKS cluster with a condition that chooses Azure CNI or kubenet as part of the network configuration.
If vnet_enabled is false, it should disable the data resource in Terraform and give the value null for the configuration below:
#get nodepool vnet subnet ID
data "azurerm_subnet" "example" {
  for_each             = local.aks_config.vnet_enabled
  name                 = each.value.subnet_name
  virtual_network_name = each.value.vnet_name
  resource_group_name  = var.rg-name
}
resource "azurerm_kubernetes_cluster" "example" {
for_each = local.aks_config
name = each.value.name
location = each.value.location
resource_group_name = var.rg-name
dns_prefix = each.value.dns_prefix
default_node_pool {
name = each.value.default_nodepool_name
node_count = each.value.default_nodepool_count
vm_size = each.value.default_nodepool_size
enable_auto_scaling = each.value.default_node_pool_autoscale
min_count = each.value.default_node_pool_autoscale_min_count
max_count = each.value.default_node_pool_autoscale_max_count
vnet_subnet_id = data.azurerm_subnet.example[each.key].id
zones = each.value.aks_zones
}
identity {
type = "SystemAssigned"
}
network_profile {
network_plugin = each.value.network_plugin
network_policy = each.value.network_policy
}
# azure_active_directory_role_based_access_control {
# managed = true
# admin_group_object_ids = [each.value.objectID]
# }
}

If VNet integration is needed, i.e. the parameter vnet_enabled is set to true, then the data block fetches the details of the subnet to be used by the AKS cluster, the cluster uses it, and the network plugin is set to azure. If it is false, the data block is not evaluated, the subnet is given a null value in the AKS cluster, and it uses kubenet. You don't need to add the network plugin and network policy to the locals; you can handle both in the conditional values.
To achieve this, you can use something like the following:
data "azurerm_subnet" "example" {
for_each = local.aks_configuration.vnet_enabled ? 1 : 0
name = each.value.subnet_name
virtual_network_name = each.value.vnet_name
resource_group_name = var.rg-name
}
resource "azurerm_kubernetes_cluster" "example" {
for_each = local.aks_configuration
name = each.value.name
location = each.value.location
resource_group_name = var.rg-name
dns_prefix = each.value.dns_prefix
default_node_pool {
name = each.value.default_nodepool_name
node_count = each.value.default_nodepool_count
vm_size = each.value.default_nodepool_size
enable_auto_scaling = each.value.default_node_pool_autoscale
min_count = each.value.default_node_pool_autoscale_min_count
max_count = each.value.default_node_pool_autoscale_max_count
vnet_subnet_id = each.value.vnet_enabled ? data.azurerm_subnet.example[0].id : null
zones = each.value.aks_zones
}
identity {
type = "SystemAssigned"
}
network_profile {
network_plugin = each.value.vnet_enabled ? "azure" : "kubenet"
network_policy = each.value.vnet_enabled ? "azure" : "calico"
}
# azure_active_directory_role_based_access_control {
# managed = true
# admin_group_object_ids = [each.value.objectID]
# }
}
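Note that the for expression filters local.aks_configuration down to only the entries with vnet_enabled = true; a count-style ? 1 : 0 expression is not valid for for_each, which needs a map or a set. Because the surviving keys match the cluster keys, the [each.key] lookup on the data source is guaranteed to exist whenever the condition is true.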

Related

Terraform - How to install AZURE CNI on AKS cluster and set POD IP Range?

I am trying to create an AKS cluster in Azure using Terraform. My requirements are as follows:
Create a site-to-site VPN connection where the gateway is in the subnet of range 172.30.0.0/16 - this is done.
Install an Azure AKS cluster with Azure CNI where the pods are in the range of the VPN CIDR (172.30.0.0/16).
Here's my Terraform code. I read that if you use azure as your network_policy and network_plugin, you can't set the pod_cidr - source.
Then how can I do this so my pods can reach the on-premise network through the site-to-site VPN?
resource "azurerm_kubernetes_cluster" "k8s_cluster" {
lifecycle {
ignore_changes = [
default_node_pool[0].node_count
]
prevent_destroy = false
}
name = var.cluster_name
location = var.location
resource_group_name = var.rg_name
dns_prefix = var.dns_prefix
kubernetes_version = var.kubernetes_version
# node_resource_group = var.resource_group_name
default_node_pool {
name = var.default_node_pool.name
node_count = var.default_node_pool.node_count
max_count = var.default_node_pool.max_count
min_count = var.default_node_pool.min_count
vm_size = var.default_node_pool.vm_size
os_disk_size_gb = var.default_node_pool.os_disk_size_gb
# vnet_subnet_id = var.vnet_subnet_id
max_pods = var.default_node_pool.max_pods
type = var.default_node_pool.agent_pool_type
enable_node_public_ip = var.default_node_pool.enable_node_public_ip
enable_auto_scaling = var.default_node_pool.enable_auto_scaling
tags = merge(var.common_tags)
}
identity {
type = var.identity
}
network_profile {
network_plugin = var.network_plugin #azure
network_policy = var.network_policy #"azure"
load_balancer_sku = var.load_balancer_sku #"standard"
# pod_cidr = var.pod_cidr | When network_plugin is set to azure - the vnet_subnet_id field in the default_node_pool block must be set and pod_cidr must not be set.
}
tags = merge(var.common_tags)
}
# AKS cluster related variables
cluster_name       = "test-cluster"
dns_prefix         = "testjana"
kubernetes_version = "1.22.15"
default_node_pool = {
  name                  = "masternp" # for system pods
  node_count            = 1
  vm_size               = "standard_e4bds_v5" # 4 vCPUs and 32 GB of memory
  enable_auto_scaling   = false
  enable_node_public_ip = false
  min_count             = null
  max_count             = null
  max_pods              = 100
  os_disk_size_gb       = 80
  agent_pool_type       = "VirtualMachineScaleSets"
}
admin_username    = "jananathadmin"
ssh_public_key    = "public_key"
identity          = "SystemAssigned"
network_plugin    = "azure"
network_policy    = "azure"
load_balancer_sku = "standard"
By default, all pods in AKS can communicate with each other; when we want to restrict the traffic, network policies can be used to allow or deny traffic between pods, as in the sketch below.
Here is the tutorial link.
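As a concrete illustration (a minimal sketch using the Terraform kubernetes provider, not part of the original answer; the namespace and labels are illustrative), a policy that allows ingress to backend pods only from frontend pods:
resource "kubernetes_network_policy" "allow_frontend" {
  metadata {
    name      = "allow-frontend"
    namespace = "default"
  }
  spec {
    # The policy applies to pods labelled app = backend.
    pod_selector {
      match_labels = {
        app = "backend"
      }
    }
    # Only traffic from pods labelled app = frontend is allowed in.
    ingress {
      from {
        pod_selector {
          match_labels = {
            app = "frontend"
          }
        }
      }
    }
    policy_types = ["Ingress"]
  }
}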
I reproduced the same via Terraform using the code snippet below to connect a cluster with Azure CNI and a VNet gateway that links our on-prem environment to Azure via a site-to-site VPN.
Step 1:
The main tf file is as follows:
resource "azurerm_resource_group" "example" {
name = "*****-****"
location = "East US"
}
resource "azurerm_role_assignment" "role_acrpull" {
scope = azurerm_container_registry.acr.id
role_definition_name = "AcrPull"
principal_id = azurerm_kubernetes_cluster.demo.kubelet_identity.0.object_id
}
resource "azurerm_container_registry" "acr" {
name = "acrswarna"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
sku = "Standard"
admin_enabled = false
}
resource "azurerm_virtual_network" "puvnet" {
name = "Publics_VNET"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
address_space = ["10.19.0.0/16"]
}
resource "azurerm_subnet" "example" {
name = "GatewaySubnet"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.puvnet.name
address_prefixes = ["10.19.3.0/24"]
}
resource "azurerm_subnet" "osubnet" {
name = "Outer_Subnet"
resource_group_name = azurerm_resource_group.example.name
address_prefixes = ["10.19.1.0/24"]
virtual_network_name = azurerm_virtual_network.puvnet.name
}
resource "azurerm_kubernetes_cluster" "demo" {
name = "demo-aksnew"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
dns_prefix = "demo-aks"
default_node_pool {
name = "default"
node_count = 2
vm_size = "standard_e4bds_v5"
type = "VirtualMachineScaleSets"
enable_auto_scaling = false
min_count = null
max_count = null
max_pods = 100
//vnet_subnet_id = azurerm_subnet.osubnet.id
}
identity {
type = "SystemAssigned"
}
network_profile {
network_plugin = "azure"
load_balancer_sku = "standard"
network_policy = "azure"
}
tags = {
Environment = "Development"
}
}
resource "azurerm_public_ip" "example" {
name = "pips-firewall"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
allocation_method = "Static"
sku = "Standard"
}
resource "azurerm_virtual_network_gateway" "example" {
name = "test"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
type = "Vpn"
vpn_type = "RouteBased"
active_active = false
enable_bgp = false
sku = "VpnGw1"
ip_configuration {
name = "vnetGatewayConfig"
public_ip_address_id = azurerm_public_ip.example.id
private_ip_address_allocation = "Dynamic"
subnet_id = azurerm_subnet.example.id
}
vpn_client_configuration {
address_space = ["172.30.0.0/16"]
root_certificate {
name = "******-****-ID-Root-CA"
public_cert_data = <<EOF
**Use certificate here**
EOF
}
revoked_certificate {
name = "*****-Global-Root-CA"
thumbprint = "****************"
}
}
}
NOTE: Update the root certificate configuration with your own in the above code.
The provider tf file is as follows:
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "=3.0.0"
}
}
}
provider "azurerm" {
features {}
skip_provider_registration = true
}
Upon running:
terraform plan
terraform apply -auto-approve
Screenshots: VNet and subnet configuration; virtual network gateway configuration; sample pods deployed on the cluster.

Azure Terraform: how to attach multiple data disks to multiple VMs

I'm following Neal Shah's instructions for deploying multiple VMs with multiple managed disks (https://www.nealshah.dev/posts/2020/05/terraform-for-azure-deploying-multiple-vms-with-multiple-managed-disks/#deploying-multiple-vms-with-multiple-datadisks).
Everything works fine except for the azurerm_virtual_machine_data_disk_attachment resource, which fails with the following error:
│ Error: Invalid index
│
│ on main.tf line 103, in resource "azurerm_virtual_machine_data_disk_attachment" "managed_disk_attach":
│ 103: virtual_machine_id = azurerm_linux_virtual_machine.vms[element(split("_", each.key), 1)].id
│ ├────────────────
│ │ azurerm_linux_virtual_machine.vms is tuple with 3 elements
│ │ each.key is "datadisk_dca0-apache-cassandra-node0_disk00"
│
│ The given key does not identify an element in this collection value: a number is required.
My code is below:
locals {
  vm_datadiskdisk_count_map = { for k in toset(var.nodes) : k => var.data_disk_count }
  luns                      = { for k in local.datadisk_lun_map : k.datadisk_name => k.lun }
  datadisk_lun_map = flatten([
    for vm_name, count in local.vm_datadiskdisk_count_map : [
      for i in range(count) : {
        datadisk_name = format("datadisk_%s_disk%02d", vm_name, i)
        lun           = i
      }
    ]
  ])
}
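To make the shape of these locals concrete: with var.nodes = ["dca0-apache-cassandra-node0"] (the VM name from the error above) and an illustrative data_disk_count of 2, datadisk_lun_map evaluates to:
[
  { datadisk_name = "datadisk_dca0-apache-cassandra-node0_disk00", lun = 0 },
  { datadisk_name = "datadisk_dca0-apache-cassandra-node0_disk01", lun = 1 },
]
So element(split("_", each.key), 1) yields the string "dca0-apache-cassandra-node0", which cannot index azurerm_linux_virtual_machine.vms because that resource uses count and is therefore a tuple accepting only numeric indices - this is exactly what the "Invalid index" error is saying.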
# create resource group
resource "azurerm_resource_group" "resource_group" {
  name     = format("%s-%s", var.dca, var.name)
  location = var.location
}
# create availability set
resource "azurerm_availability_set" "vm_availability_set" {
  name                = format("%s-%s-availability-set", var.dca, var.name)
  location            = azurerm_resource_group.resource_group.location
  resource_group_name = azurerm_resource_group.resource_group.name
}
# create Security Group to access linux
resource "azurerm_network_security_group" "linux_vm_nsg" {
  name                = format("%s-%s-linux-vm-nsg", var.dca, var.name)
  location            = azurerm_resource_group.resource_group.location
  resource_group_name = azurerm_resource_group.resource_group.name
  security_rule {
    name                       = "AllowSSH"
    description                = "Allow SSH"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}
# associate the linux NSG with the subnet
resource "azurerm_subnet_network_security_group_association" "linux_vm_nsg_association" {
  subnet_id                 = data.azurerm_subnet.subnet.id
  network_security_group_id = azurerm_network_security_group.linux_vm_nsg.id
}
# create NICs for apache cassandra hosts
resource "azurerm_network_interface" "vm_nics" {
  depends_on          = [azurerm_subnet_network_security_group_association.linux_vm_nsg_association]
  count               = length(var.nodes)
  name                = format("%s-%s-nic${count.index}", var.dca, var.name)
  location            = azurerm_resource_group.resource_group.location
  resource_group_name = azurerm_resource_group.resource_group.name
  ip_configuration {
    name                          = format("%s-%s-apache-cassandra-ip", var.dca, var.name)
    subnet_id                     = data.azurerm_subnet.subnet.id
    private_ip_address_allocation = "Dynamic"
  }
}
# create apache cassandra VMs
resource "azurerm_linux_virtual_machine" "vms" {
  count                           = length(var.nodes)
  name                            = element(var.nodes, count.index)
  location                        = azurerm_resource_group.resource_group.location
  resource_group_name             = azurerm_resource_group.resource_group.name
  network_interface_ids           = [element(azurerm_network_interface.vm_nics.*.id, count.index)]
  availability_set_id             = azurerm_availability_set.vm_availability_set.id
  size                            = var.vm_size
  admin_username                  = var.admin_username
  disable_password_authentication = true
  admin_ssh_key {
    username   = var.admin_username
    public_key = var.ssh_pub_key
  }
  source_image_id = var.source_image_id
  os_disk {
    caching              = "ReadWrite"
    storage_account_type = var.storage_account_type
    disk_size_gb         = var.os_disk_size_gb
  }
}
# create data disk(s) for VMs
resource "azurerm_managed_disk" "managed_disk" {
  for_each             = toset([for j in local.datadisk_lun_map : j.datadisk_name])
  name                 = each.key
  location             = azurerm_resource_group.resource_group.location
  resource_group_name  = azurerm_resource_group.resource_group.name
  storage_account_type = var.storage_account_type
  create_option        = "Empty"
  disk_size_gb         = var.disk_size_gb
}
resource "azurerm_virtual_machine_data_disk_attachment" "managed_disk_attach" {
  for_each           = toset([for j in local.datadisk_lun_map : j.datadisk_name])
  managed_disk_id    = azurerm_managed_disk.managed_disk[each.key].id
  virtual_machine_id = azurerm_linux_virtual_machine.vms[element(split("_", each.key), 1)].id
  lun                = lookup(local.luns, each.key)
  caching            = "ReadWrite"
}
Does anyone know how to accomplish this? Thanks!
I've tried several different approaches but have been unsuccessful so far; I was expecting it to work as described in Neal's post.
I was able to get this working. I have not tested adding/removing nodes/disks yet, but this works to create multiple VMs with multiple data disks attached to each VM.
I use a variable file that I source to substitute the variables in the *.tf files.
variables.tf
variable "azure_subscription_id" {
type = string
description = "Azure Subscription ID"
default = ""
}
variable "dca" {
type = string
description = "datacenter [dca0|dca2|dca4|dca6]."
default = ""
}
variable "location" {
type = string
description = "Location of the resource group."
default = ""
}
variable "resource_group" {
type = string
description = "resource group name."
default = ""
}
variable "subnet_name" {
type = string
description = "subnet name"
default = ""
}
variable "vnet_name" {
type = string
description = "vnet name"
default = ""
}
variable "vnet_rg" {
type = string
description = "vnet resource group"
default = ""
}
variable "vm_size" {
type = string
description = "vm size"
default = ""
}
variable "os_disk_size_gb" {
type = string
description = "vm os disk size gb"
default = ""
}
variable "data_disk_size_gb" {
type = string
description = "vm data disk size gb"
default = ""
}
variable "admin_username" {
type = string
description = "admin user name"
default = ""
}
variable "ssh_pub_key" {
type = string
description = "public key for admin user"
default = ""
}
variable "source_image_id" {
type = string
description = "image id"
default = ""
}
variable "os_disk_storage_account_type" {
type = string
description = ""
default = ""
}
variable "data_disk_storage_account_type" {
type = string
description = ""
default = ""
}
variable "vm_list" {
type = map(object({
hostname = string
}))
default = {
vm0 ={
hostname = "${dca}-${name}-node-0"
},
vm1 = {
hostname = "${dca}-${name}-node-1"
}
vm2 = {
hostname = "${dca}-${name}-node-2"
}
}
}
variable "disks_per_instance" {
type = string
description = ""
default = ""
}
terraform.tfvars
# subscription
azure_subscription_id = "${azure_subscription_id}"
# name and location
resource_group = "${dca}-${name}"
location       = "${location}"
dca            = "${dca}"
# Network
subnet_name = "${subnet_name}"
vnet_name   = "${dca}vnet"
vnet_rg     = "th-${dca}-vnet"
# VM
vm_size                      = "${vm_size}"
os_disk_size_gb              = "${os_disk_size_gb}"
os_disk_storage_account_type = "${os_disk_storage_account_type}"
source_image_id              = "${source_image_id}"
# User/key info
admin_username = "${admin_username}"
ssh_pub_key    = "${ssh_pub_key}"
# data disk info
data_disk_storage_account_type = "${data_disk_storage_account_type}"
data_disk_size_gb              = "${data_disk_size_gb}"
disks_per_instance             = "${disks_per_instance}"
main.tf
# set locals for multi data disks
locals {
  vm_datadiskdisk_count_map = { for k, query in var.vm_list : k => var.disks_per_instance }
  luns                      = { for k in local.datadisk_lun_map : k.datadisk_name => k.lun }
  datadisk_lun_map = flatten([
    for vm_name, count in local.vm_datadiskdisk_count_map : [
      for i in range(count) : {
        datadisk_name = format("datadisk_%s_disk%02d", vm_name, i)
        lun           = i
      }
    ]
  ])
}
# create resource group
resource "azurerm_resource_group" "resource_group" {
  name     = format("%s", var.resource_group)
  location = var.location
}
# create data disk(s)
resource "azurerm_managed_disk" "managed_disk" {
  for_each             = toset([for j in local.datadisk_lun_map : j.datadisk_name])
  name                 = each.key
  location             = azurerm_resource_group.resource_group.location
  resource_group_name  = azurerm_resource_group.resource_group.name
  storage_account_type = var.data_disk_storage_account_type
  create_option        = "Empty"
  disk_size_gb         = var.data_disk_size_gb
}
# create availability set
resource "azurerm_availability_set" "vm_availability_set" {
  name                = format("%s-availability-set", var.resource_group)
  location            = azurerm_resource_group.resource_group.location
  resource_group_name = azurerm_resource_group.resource_group.name
}
# create Security Group to access linux
resource "azurerm_network_security_group" "linux_vm_nsg" {
  name                = format("%s-linux-vm-nsg", var.resource_group)
  location            = azurerm_resource_group.resource_group.location
  resource_group_name = azurerm_resource_group.resource_group.name
  security_rule {
    name                       = "AllowSSH"
    description                = "Allow SSH"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}
# associate the linux NSG with the subnet
resource "azurerm_subnet_network_security_group_association" "linux_vm_nsg_association" {
  subnet_id                 = data.azurerm_subnet.subnet.id
  network_security_group_id = azurerm_network_security_group.linux_vm_nsg.id
}
# create NICs for vms
resource "azurerm_network_interface" "nics" {
  depends_on          = [azurerm_subnet_network_security_group_association.linux_vm_nsg_association]
  for_each            = var.vm_list
  name                = "${each.value.hostname}-nic"
  location            = azurerm_resource_group.resource_group.location
  resource_group_name = azurerm_resource_group.resource_group.name
  ip_configuration {
    name                          = format("%s-proxy-ip", var.resource_group)
    subnet_id                     = data.azurerm_subnet.subnet.id
    private_ip_address_allocation = "Dynamic"
  }
}
# create VMs
resource "azurerm_linux_virtual_machine" "vms" {
  for_each                        = var.vm_list
  name                            = each.value.hostname
  location                        = azurerm_resource_group.resource_group.location
  resource_group_name             = azurerm_resource_group.resource_group.name
  network_interface_ids           = [azurerm_network_interface.nics[each.key].id]
  availability_set_id             = azurerm_availability_set.vm_availability_set.id
  size                            = var.vm_size
  source_image_id                 = var.source_image_id
  custom_data                     = filebase64("cloud-init.sh")
  admin_username                  = var.admin_username
  disable_password_authentication = true
  admin_ssh_key {
    username   = var.admin_username
    public_key = var.ssh_pub_key
  }
  os_disk {
    caching              = "ReadWrite"
    storage_account_type = var.os_disk_storage_account_type
    disk_size_gb         = var.os_disk_size_gb
  }
}
# attach data disks to VMs
resource "azurerm_virtual_machine_data_disk_attachment" "managed_disk_attach" {
  for_each           = toset([for j in local.datadisk_lun_map : j.datadisk_name])
  managed_disk_id    = azurerm_managed_disk.managed_disk[each.key].id
  virtual_machine_id = azurerm_linux_virtual_machine.vms[element(split("_", each.key), 1)].id
  lun                = lookup(local.luns, each.key)
  caching            = "ReadWrite"
}
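The key change from the original: the VMs are now created with for_each = var.vm_list, so azurerm_linux_virtual_machine.vms is a map keyed by "vm0", "vm1", "vm2" instead of a count-based tuple, and the string key derived from the disk name is a valid index. Illustratively:
# each.key                         = "datadisk_vm0_disk00"
# split("_", each.key)             = ["datadisk", "vm0", "disk00"]
# element(split("_", each.key), 1) = "vm0"
# azurerm_linux_virtual_machine.vms["vm0"] is now a valid reference.
One caveat with this pattern (an observation, not from the original post): the vm_list keys must not themselves contain underscores, since split("_", ...) would then break the key derivation.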

How to implement Azure Cache for Redis with a private endpoint in Terraform?

I need help with Terraform. I need to deploy Azure Cache for Redis using a private endpoint. My code:
resource "azurerm_redis_cache" "redis_cache_example" {
name = "redis-cache-ex"
location = var.location
resource_group_name = var.resource_group_name
capacity = var.redis_plan_capacity
family = var.redis_plan_family
sku_name = var.redis_plan_sku_name
enable_non_ssl_port = false
minimum_tls_version = "1.2"
public_network_access_enabled = false
}
resource "azurerm_private_dns_zone" "private_dns_zone_example" {
name = "example.redis-ex.azure.com"
resource_group_name = var.resource_group_name
}
resource "azurerm_private_dns_zone_virtual_network_link" "virtual_network_link_example" {
name = "exampleVnet.com"
private_dns_zone_name = azurerm_private_dns_zone.private_dns_zone_example.name
virtual_network_id = var.vnet_id
resource_group_name = var.resource_group_name
}
resource "azurerm_private_endpoint" "redis_pe_example" {
name = "redis-private-endpoint-ex"
location = var.location
resource_group_name = var.resource_group_name
subnet_id = var.subnet_id
private_dns_zone_group {
name = "privatednsrediszonegroup"
private_dns_zone_ids = [azurerm_private_dns_zone.private_dns_zone_example.id]
}
private_service_connection {
name = "peconnection-example"
private_connection_resource_id = azurerm_redis_cache.redis_cache_example.id
is_manual_connection = false
subresource_names = ["redisCache"]
}
}
After deploying, my Redis doesn't ping within the VNet. What's wrong with my Terraform?
You can also add an azurerm_private_endpoint resource and link it to azurerm_redis_cache (or, I guess, other resources as well).
resource "azurerm_redis_cache" "default" {
...
}
resource "azurerm_private_endpoint" "default" {
count = 1
name = format("%s-redis%d", var.env, count.index + 1)
resource_group_name = data.azurerm_resource_group.default.name
location = data.azurerm_resource_group.default.location
subnet_id = data.azurerm_subnet.default.id
private_service_connection {
name = format("%s-redis%d-pe", var.env, count.index + 1)
private_connection_resource_id = azurerm_redis_cache.default[count.index].id
is_manual_connection = false
subresource_names = ["redisCache"]
}
}
You can find the list of other private-link resources in the Azure docs.
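Not from the answer above, but worth checking for the original symptom (Redis not reachable inside the VNet): with a private endpoint, the private DNS zone is conventionally named privatelink.redis.cache.windows.net so that the cache's default hostname resolves to the private IP; a custom zone name like example.redis-ex.azure.com will not be consulted for that hostname. A minimal sketch (resource names are illustrative):
resource "azurerm_private_dns_zone" "redis" {
  name                = "privatelink.redis.cache.windows.net"
  resource_group_name = var.resource_group_name
}
resource "azurerm_private_dns_zone_virtual_network_link" "redis" {
  name                  = "redis-dns-vnet-link"
  resource_group_name   = var.resource_group_name
  private_dns_zone_name = azurerm_private_dns_zone.redis.name
  virtual_network_id    = var.vnet_id
}
# Then reference azurerm_private_dns_zone.redis.id in the endpoint's
# private_dns_zone_group block.
Also note that Azure Cache for Redis does not answer ICMP ping; test connectivity against the Redis port instead (6380 for TLS, since enable_non_ssl_port is false here).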

Unable to create AKS cluster using "UserDefinedRouting" in Terraform

I'm setting up an AKS cluster using userDefinedRouting with an existing subnet and route table that are associated with a network security group. Here is my code snippet:
provider "azurerm" {
version = "~> 2.25"
features {}
}
data "azurerm_resource_group" "aks" {
name = var.resource_group
}
#fetch existing subnet
data "azurerm_subnet" "aks" {
name = var.subnetname
virtual_network_name = var.virtual_network_name
resource_group_name = var.vnet_resource_group
}
resource "azurerm_network_interface" "k8svmnic" {
name = "k8svmnic"
resource_group_name = data.azurerm_resource_group.aks.name
location = data.azurerm_resource_group.aks.location
ip_configuration {
name = "internal"
subnet_id = data.azurerm_subnet.aks.id
private_ip_address_allocation = "Static"
private_ip_address = var.k8svmip #"10.9.56.10"
}
}
resource "azurerm_availability_set" "k8svmavset" {
name = "k8svmavset"
location = data.azurerm_resource_group.aks.location
resource_group_name = data.azurerm_resource_group.aks.name
platform_fault_domain_count = 3
platform_update_domain_count = 3
managed = true
}
resource "azurerm_network_security_group" "k8svmnsg" {
name = "k8vm-nsg"
resource_group_name = data.azurerm_resource_group.aks.name
location = data.azurerm_resource_group.aks.location
security_rule {
name = "allow_kube_tls"
protocol = "Tcp"
priority = 100
direction = "Inbound"
access = "Allow"
source_address_prefix = "VirtualNetwork"
destination_address_prefix = "*"
source_port_range = "*"
#destination_port_range = "443"
destination_port_ranges = ["443"]
description = "Allow kube-apiserver (tls) traffic to master"
}
security_rule {
name = "allow_ssh"
protocol = "Tcp"
priority = 101
direction = "Inbound"
access = "Allow"
source_address_prefix = "*"
destination_address_prefix = "*"
source_port_range = "*"
#destination_port_range = "22"
destination_port_ranges = ["22"]
description = "Allow SSH traffic to master"
}
}
resource "azurerm_network_interface_security_group_association" "k8svmnicnsg" {
network_interface_id = azurerm_network_interface.k8svmnic.id
network_security_group_id = azurerm_network_security_group.k8svmnsg.id
}
resource "azurerm_linux_virtual_machine" "k8svm" {
name = "k8svm"
resource_group_name = data.azurerm_resource_group.aks.name
location = data.azurerm_resource_group.aks.location
size = "Standard_D3_v2"
admin_username = var.admin_username
disable_password_authentication = true
availability_set_id = azurerm_availability_set.k8svmavset.id
network_interface_ids = [
azurerm_network_interface.k8svmnic.id,
]
admin_ssh_key {
username = var.admin_username
public_key = var.ssh_key
}
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
disk_size_gb = 30
}
source_image_reference {
publisher = "microsoft-aks"
offer = "aks"
sku = "aks-engine-ubuntu-1804-202007"
version = "2020.07.24"
}
}
resource "azurerm_managed_disk" "k8svm-disk" {
name = "${azurerm_linux_virtual_machine.k8svm.name}-disk"
location = data.azurerm_resource_group.aks.location
resource_group_name = data.azurerm_resource_group.aks.name
storage_account_type = "Standard_LRS"
create_option = "Empty"
disk_size_gb = 512
}
resource "azurerm_virtual_machine_data_disk_attachment" "k8svm-disk-attachment" {
managed_disk_id = azurerm_managed_disk.k8svm-disk.id
virtual_machine_id = azurerm_linux_virtual_machine.k8svm.id
lun = 5
caching = "ReadWrite"
}
resource "azurerm_public_ip" "aks" {
name = "akspip"
resource_group_name = data.azurerm_resource_group.aks.name
location = data.azurerm_resource_group.aks.location
allocation_method = "Static"
sku = "Standard"
depends_on = [azurerm_virtual_machine_data_disk_attachment.k8svm-disk-attachment]
}
resource "azurerm_route_table" "aks"{
name = "aks" #var.subnetname
resource_group_name = data.azurerm_resource_group.aks.name
location = data.azurerm_resource_group.aks.location
disable_bgp_route_propagation = false
route {
name = "default_route"
address_prefix = "0.0.0.0/0"
next_hop_type = "VirtualAppliance"
next_hop_in_ip_address = var.k8svmip
}
route {
name = var.route_name
address_prefix = var.route_address_prefix
next_hop_type = var.route_next_hop_type
}
}
resource "azurerm_subnet_route_table_association" "aks" {
subnet_id = data.azurerm_subnet.aks.id
route_table_id = azurerm_route_table.aks.id
}
resource "azurerm_subnet_network_security_group_association" "aks" {
subnet_id = data.azurerm_subnet.aks.id
network_security_group_id = var.network_security_group
}
resource "null_resource" "previous" {}
resource "time_sleep" "wait_90_seconds" {
depends_on = [null_resource.previous]
create_duration = "90s"
}
# This resource will create (at least) 30 seconds after null_resource.previous
resource "null_resource" "next" {
depends_on = [time_sleep.wait_90_seconds]
}
resource "azurerm_kubernetes_cluster" "aks" {
name = data.azurerm_resource_group.aks.name
resource_group_name = data.azurerm_resource_group.aks.name
location = data.azurerm_resource_group.aks.location
dns_prefix = "akstfelk" #The dns_prefix must contain between 3 and 45 characters, and can contain only letters, numbers, and hyphens. It must start with a letter and must end with a letter or a number.
kubernetes_version = "1.18.8"
private_cluster_enabled = false
node_resource_group = var.node_resource_group
#api_server_authorized_ip_ranges = [] #var.api_server_authorized_ip_ranges
default_node_pool {
enable_node_public_ip = false
name = "agentpool"
node_count = var.node_count
orchestrator_version = "1.18.8"
vm_size = var.vm_size
os_disk_size_gb = var.os_disk_size_gb
vnet_subnet_id = data.azurerm_subnet.aks.id
type = "VirtualMachineScaleSets"
}
linux_profile {
admin_username = var.admin_username
ssh_key {
key_data = var.ssh_key
}
}
service_principal {
client_id = var.client_id
client_secret = var.client_secret
}
role_based_access_control {
enabled = true
}
network_profile {
network_plugin = "kubenet"
network_policy = "calico"
dns_service_ip = "172.16.1.10"
service_cidr = "172.16.0.0/16"
docker_bridge_cidr = "172.17.0.1/16"
pod_cidr = "172.40.0.0/16"
outbound_type = "userDefinedRouting"
load_balancer_sku = "Standard"
load_balancer_profile {
outbound_ip_address_ids = [ "${azurerm_public_ip.aks.id}" ]
}
# load_balancer_profile {
# managed_outbound_ip_count = 5
# #effective_outbound_ips = [ azurerm_public_ip.aks.id ]
# outbound_ip_address_ids = []
# outbound_ip_prefix_ids = []
# outbound_ports_allocated = 0
# }
}
addon_profile {
aci_connector_linux {
enabled = false
}
azure_policy {
enabled = false
}
http_application_routing {
enabled = false
}
kube_dashboard {
enabled = false
}
oms_agent {
enabled = false
}
}
depends_on = [azurerm_subnet_route_table_association.aks]
}
According to the Azure docs: "By default, one public IP will automatically be created in the same resource group as the AKS cluster, if NO public IP, public IP prefix, or number of IPs is specified."
But in my case the outbound connection is not happening, hence cluster provisioning fails. I've even created another public IP and tried it through the load balancer profile, but I get the error below:
Error: "network_profile.0.load_balancer_profile.0.managed_outbound_ip_count": conflicts with network_profile.0.load_balancer_profile.0.outbound_ip_address_ids
If I remove load_balancer_profile from the script, I get this error instead:
Error: creating Managed Kubernetes Cluster "aks-tf" (Resource Group "aks-tf"): containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidUserDefinedRoutingWithLoadBalancerProfile" Message="UserDefinedRouting and load balancer profile are mutually exclusive. Please refer to http://aka.ms/aks/outboundtype for more details" Target="networkProfile.loadBalancerProfile"
Kindly help me figure out what I'm missing.
Any help would be appreciated.
When you use UserDefinedRouting, you need to set the network_plugin to azure and put the AKS cluster inside the subnet with the user-defined route table; here is the description:
The AKS cluster must be deployed into an existing virtual network with a subnet that has been previously configured.
And if the network_plugin is set to azure, then the vnet_subnet_id field in the default_node_pool block must be set and pod_cidr must not be set. You can find this note in azurerm_kubernetes_cluster.
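Concretely, that means a network_profile along these lines (a minimal sketch based on the resource above; the CIDR values are carried over from the question and are illustrative), with no pod_cidr and no load_balancer_profile:
network_profile {
  network_plugin     = "azure"
  network_policy     = "calico"
  dns_service_ip     = "172.16.1.10"
  service_cidr       = "172.16.0.0/16"
  docker_bridge_cidr = "172.17.0.1/16"
  outbound_type      = "userDefinedRouting"
  load_balancer_sku  = "Standard"
  # No pod_cidr: with the azure plugin, pods draw IPs from the subnet
  # referenced by vnet_subnet_id in default_node_pool.
  # No load_balancer_profile: the API rejects it together with
  # outbound_type = "userDefinedRouting", as the error above shows.
}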
Update:
It's a little more complex than you think. Here is the network architecture and the steps to create it via the CLI. This architecture requires explicitly sending egress traffic to an appliance like a firewall, gateway, or proxy, or allowing Network Address Translation (NAT) to be done by a public IP assigned to the standard load balancer or appliance.
For the outbound traffic, instead of a public load balancer you can use an internal load balancer for internal traffic.
In addition, some steps cannot be achieved via Terraform, for example the Azure Firewall. Take a look at the steps and prepare those resources via the CLI.

Terraform - Azure - Create VM in availability set conditionally

Trying to create a VM in Terraform with and without an availability set. The idea is to use a template where, if the availability set name is not provided and defaults to empty, the VM will not be added to an availability set. I did try using count, as in count = var.avail_set != "" ? 1 : 0, and while the two sections did execute conditionally, it did not work exactly as I wanted: I need the name of the VM resource to be the same so I can add log analytics and backup later in the code. Please see my code below:
name = "${var.resource_group_name}"
}
data "azurerm_subnet" "sndata02" {
  name                 = "${var.subnet_name}"
  resource_group_name  = "${var.core_resource_group_name}"
  virtual_network_name = "${var.virtual_network_name}"
}
data "azurerm_availability_set" "availsetdata02" {
  name                = "${var.availability_set_name}"
  resource_group_name = "${var.resource_group_name}"
}
data "azurerm_backup_policy_vm" "bkpoldata02" {
  name                = "${var.backup_policy_name}"
  recovery_vault_name = "${var.recovery_services_vault_name}"
  resource_group_name = "${var.core_resource_group_name}"
}
data "azurerm_log_analytics_workspace" "law02" {
  name                = "${var.log_analytics_workspace_name}"
  resource_group_name = "${var.core_resource_group_name}"
}
#===================================================================
# Create NIC
#===================================================================
resource "azurerm_network_interface" "vmnic02" {
  name                = "nic${var.virtual_machine_name}"
  location            = "${data.azurerm_resource_group.rgdata02.location}"
  resource_group_name = "${var.resource_group_name}"
  ip_configuration {
    name                          = "ipcnfg${var.virtual_machine_name}"
    subnet_id                     = "${data.azurerm_subnet.sndata02.id}"
    private_ip_address_allocation = "Static"
    private_ip_address            = "${var.private_ip}"
  }
}
#===================================================================
# Create VM with Availability Set
#===================================================================
resource "azurerm_virtual_machine" "vm02" {
  count      = var.avail_set != "" ? 1 : 0
  depends_on = [azurerm_network_interface.vmnic02]
  name                  = "${var.virtual_machine_name}"
  location              = "${data.azurerm_resource_group.rgdata02.location}"
  resource_group_name   = "${var.resource_group_name}"
  network_interface_ids = [azurerm_network_interface.vmnic02.id]
  vm_size               = "${var.virtual_machine_size}"
  availability_set_id   = "${data.azurerm_availability_set.availsetdata02.id}"
  tags                  = var.tags
  # This means the OS Disk will be deleted when Terraform destroys the Virtual Machine
  # NOTE: This may not be optimal in all cases.
  delete_os_disk_on_termination = true
  os_profile {
    computer_name  = "${var.virtual_machine_name}"
    admin_username = "__VMUSER__"
    admin_password = "__VMPWD__"
  }
  os_profile_linux_config {
    disable_password_authentication = false
  }
  storage_image_reference {
    id = "${var.image_id}"
  }
  storage_os_disk {
    name              = "${var.virtual_machine_name}osdisk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Premium_LRS"
    os_type           = "Linux"
  }
  boot_diagnostics {
    enabled     = true
    storage_uri = "${var.boot_diagnostics_uri}"
  }
}
#===================================================================
# Create VM without Availability Set
#===================================================================
resource "azurerm_virtual_machine" "vm03" {
  count      = var.avail_set == "" ? 1 : 0
  depends_on = [azurerm_network_interface.vmnic02]
  name                  = "${var.virtual_machine_name}"
  location              = "${data.azurerm_resource_group.rgdata02.location}"
  resource_group_name   = "${var.resource_group_name}"
  network_interface_ids = [azurerm_network_interface.vmnic02.id]
  vm_size               = "${var.virtual_machine_size}"
  # availability_set_id = "${data.azurerm_availability_set.availsetdata02.id}"
  tags = var.tags
  # This means the OS Disk will be deleted when Terraform destroys the Virtual Machine
  # NOTE: This may not be optimal in all cases.
  delete_os_disk_on_termination = true
  os_profile {
    computer_name  = "${var.virtual_machine_name}"
    admin_username = "__VMUSER__"
    admin_password = "__VMPWD__"
  }
  os_profile_linux_config {
    disable_password_authentication = false
  }
  storage_image_reference {
    id = "${var.image_id}"
  }
  storage_os_disk {
    name              = "${var.virtual_machine_name}osdisk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Premium_LRS"
    os_type           = "Linux"
  }
  boot_diagnostics {
    enabled     = true
    storage_uri = "${var.boot_diagnostics_uri}"
  }
}
#===================================================================
# Set Monitoring and Log Analytics Workspace
#===================================================================
resource "azurerm_virtual_machine_extension" "oms_mma02" {
  count                      = var.bootstrap ? 1 : 0
  name                       = "${var.virtual_machine_name}-OMSExtension"
  virtual_machine_id         = "${azurerm_virtual_machine.vm02.id}"
  publisher                  = "Microsoft.EnterpriseCloud.Monitoring"
  type                       = "OmsAgentForLinux"
  type_handler_version       = "1.8"
  auto_upgrade_minor_version = true
  settings                   = <<SETTINGS
{
  "workspaceId" : "${data.azurerm_log_analytics_workspace.law02.workspace_id}"
}
SETTINGS
  protected_settings = <<PROTECTED_SETTINGS
{
  "workspaceKey" : "${data.azurerm_log_analytics_workspace.law02.primary_shared_key}"
}
PROTECTED_SETTINGS
}
#===================================================================
# Associate VM to Backup Policy
#===================================================================
resource "azurerm_backup_protected_vm" "vm02" {
  count               = var.bootstrap ? 1 : 0
  resource_group_name = "${var.core_resource_group_name}"
  recovery_vault_name = "${var.recovery_services_vault_name}"
  source_vm_id        = "${azurerm_virtual_machine.vm02.id}"
  backup_policy_id    = "${data.azurerm_backup_policy_vm.bkpoldata02.id}"
}
The count property only controls the number of resources; for you, that means creating the VM or not, and it won't change the configuration of the VM, so it's not the right way for your situation.
Instead, you can use a conditional expression for the VM's availability_set_id property, like this:
availability_set_id = var.avail_set != "" ? data.azurerm_availability_set.availsetdata02.id : null
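Applied to the template, the two VM resources collapse into one (a sketch of just the changed lines; null rather than the empty string omits the argument entirely):
resource "azurerm_virtual_machine" "vm02" {
  name                  = "${var.virtual_machine_name}"
  location              = "${data.azurerm_resource_group.rgdata02.location}"
  resource_group_name   = "${var.resource_group_name}"
  network_interface_ids = [azurerm_network_interface.vmnic02.id]
  vm_size               = "${var.virtual_machine_size}"
  # Joined to the availability set only when a name was provided.
  availability_set_id = var.avail_set != "" ? data.azurerm_availability_set.availsetdata02.id : null
  # ... os_profile, storage_os_disk, boot_diagnostics, etc. unchanged ...
}
This keeps a single resource address (azurerm_virtual_machine.vm02, with no count), so the later azurerm_virtual_machine_extension and azurerm_backup_protected_vm references work unchanged. Note that the data.azurerm_availability_set lookup itself still requires an existing set name; if none may exist at all, the data source would also need a count guard.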
