Private Endpoint between AKS and ACR - Azure

I want to create AKS and ACR resources in my Azure environment. The script is able to create the two resources, and I am able to connect to each of them. But the AKS node cannot pull images from the ACR. After some research, I found I need to create a Private Endpoint between the AKS and ACR.
The strange thing is that if I create the PE using Terraform, the AKS and ACR still cannot communicate, but if I create the PE manually, they can. I compared the parameters of the two PEs in the UI and they look the same.
Could someone help me define the PE using the following script? Or let me know what I did wrong?
Thanks!
Full TF script without the Private Endpoint
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "=2.97.0"
}
}
required_version = ">= 1.1.7"
}
provider "azurerm" {
features {}
subscription_id = "xxx"
}
resource "azurerm_resource_group" "rg" {
name = "aks-rg"
location = "East US"
}
resource "azurerm_kubernetes_cluster" "aks" {
name = "my-aks"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
dns_prefix = "myaks"
default_node_pool {
name = "default"
node_count = 2
vm_size = "Standard_B2s"
}
identity {
type = "SystemAssigned"
}
}
resource "azurerm_container_registry" "acr" {
name = "my-aks-acr-123"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
sku = "Premium"
admin_enabled = true
network_rule_set {
default_action = "Deny"
}
}
resource "azurerm_role_assignment" "acrpull" {
principal_id = azurerm_kubernetes_cluster.aks.kubelet_identity[0].object_id
role_definition_name = "AcrPull"
scope = azurerm_container_registry.acr.id
skip_service_principal_aad_check = true
}

Then you need to create a VNet and a subnet (not part of this code; a sketch follows below), plus a private DNS zone:
Private DNS zone:
resource "azurerm_private_dns_zone" "example" {
name = "mydomain.com"
resource_group_name = azurerm_resource_group.example.name
}
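For completeness, here is a minimal sketch of the VNet and subnet together with the virtual network link on the DNS zone (resource names and address ranges are assumptions). Without that link, nodes in the VNet cannot resolve the registry's private records, which is likely why a manually created endpoint (where the portal creates the zone and link for you) worked while the Terraform one did not.

resource "azurerm_virtual_network" "example" {
  name                = "aks-vnet"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  address_space       = ["10.10.0.0/16"]
}

resource "azurerm_subnet" "endpoints" {
  name                 = "endpoints-subnet"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.10.1.0/24"]
}

# Link the private DNS zone to the VNet so its records are resolvable inside it.
resource "azurerm_private_dns_zone_virtual_network_link" "example" {
  name                  = "acr-dns-link"
  resource_group_name   = azurerm_resource_group.example.name
  private_dns_zone_name = azurerm_private_dns_zone.example.name
  virtual_network_id    = azurerm_virtual_network.example.id
}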
AKS Part:
resource "azurerm_kubernetes_cluster" "aks" {
name = "my-aks"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
dns_prefix = "myaks"
private_cluster_enabled = true
default_node_pool {
name = "default"
node_count = 2
vm_size = "Standard_B2s"
}
identity {
type = "SystemAssigned"
}
}
You need to create the ACR and a private endpoint for the ACR:
resource "azurerm_container_registry" "acr" {
name = "my-aks-acr-123"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
public_network_access_enabled = false
sku = "Premium"
admin_enabled = true
}
resource "azurerm_private_endpoint" "acr" {
name = "pvep-acr"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
subnet_id = YOUR_SUBNET
private_service_connection {
name = "example-acr"
private_connection_resource_id = azurerm_container_registry.acr.id
is_manual_connection = false
subresource_names = ["registry"]
}
private_dns_zone_group {
name = azurerm_private_dns_zone.example.name # the zone is created as a resource above, so reference it directly rather than through a data source
private_dns_zone_ids = [azurerm_private_dns_zone.example.id]
}
}
resource "azurerm_role_assignment" "acrpull" {
principal_id = azurerm_kubernetes_cluster.aks.kubelet_identity[0].object_id
role_definition_name = "AcrPull"
scope = azurerm_container_registry.acr.id
skip_service_principal_aad_check = true
}
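To sanity-check the endpoint after terraform apply, you can surface the private IP it was allocated (a small sketch; the output name is arbitrary):

output "acr_private_endpoint_ip" {
  # should be an address from the subnet the endpoint was placed in
  value = azurerm_private_endpoint.acr.private_service_connection[0].private_ip_address
}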

Related

Terraform - How to install AZURE CNI on AKS cluster and set POD IP Range?

I am trying to create an AKS cluster in Azure using Terraform. My requirements are as follows:
Create a site-to-site VPN connection where the gateway is in a subnet of the range 172.30.0.0/16 - this is done
Install an Azure AKS cluster with Azure CNI, where pods should be in the range of the VPN CIDR (172.30.0.0/16).
Here's my Terraform code. I read that if you use azure as your network_policy and network_plugin, you can't set the pod_cidr - source
Then how can I do this so my pods can reach the on-premises network through the site-to-site VPN?
resource "azurerm_kubernetes_cluster" "k8s_cluster" {
lifecycle {
ignore_changes = [
default_node_pool[0].node_count
]
prevent_destroy = false
}
name = var.cluster_name
location = var.location
resource_group_name = var.rg_name
dns_prefix = var.dns_prefix
kubernetes_version = var.kubernetes_version
# node_resource_group = var.resource_group_name
default_node_pool {
name = var.default_node_pool.name
node_count = var.default_node_pool.node_count
max_count = var.default_node_pool.max_count
min_count = var.default_node_pool.min_count
vm_size = var.default_node_pool.vm_size
os_disk_size_gb = var.default_node_pool.os_disk_size_gb
# vnet_subnet_id = var.vnet_subnet_id
max_pods = var.default_node_pool.max_pods
type = var.default_node_pool.agent_pool_type
enable_node_public_ip = var.default_node_pool.enable_node_public_ip
enable_auto_scaling = var.default_node_pool.enable_auto_scaling
tags = merge(var.common_tags)
}
identity {
type = var.identity
}
network_profile {
network_plugin = var.network_plugin #azure
network_policy = var.network_policy #"azure"
load_balancer_sku = var.load_balancer_sku #"standard"
# pod_cidr = var.pod_cidr (when network_plugin is set to azure, the vnet_subnet_id field in the default_node_pool block must be set and pod_cidr must not be set)
}
tags = merge(var.common_tags)
}
# AKS cluster related variables
cluster_name = "test-cluster"
dns_prefix = "testjana"
kubernetes_version = "1.22.15"
default_node_pool = {
name = "masternp" # for system pods
node_count = 1
vm_size = "standard_e4bds_v5" # 4 vcpu and 32 Gb of memory
enable_auto_scaling = false
enable_node_public_ip = false
min_count = null
max_count = null
max_pods = 100
os_disk_size_gb = 80
agent_pool_type = "VirtualMachineScaleSets"
}
admin_username = "jananathadmin"
ssh_public_key = "public_key"
identity = "SystemAssigned"
network_plugin = "azure"
network_policy = "azure"
load_balancer_sku = "standard"
By default, all pods in AKS can communicate with each other; when we want to restrict the traffic, network policies can be used to allow or deny traffic between pods.
Here is the tutorial link
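Since this thread is Terraform-based, here is a minimal sketch of such a policy managed through the hashicorp/kubernetes provider rather than raw YAML (the namespace and labels are hypothetical):

resource "kubernetes_network_policy" "allow_frontend" {
  metadata {
    name      = "allow-frontend-to-backend"
    namespace = "default"
  }
  spec {
    # the policy applies to backend pods
    pod_selector {
      match_labels = {
        app = "backend"
      }
    }
    # only frontend pods may send them ingress traffic
    ingress {
      from {
        pod_selector {
          match_labels = {
            app = "frontend"
          }
        }
      }
    }
    policy_types = ["Ingress"]
  }
}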
Reproduced the same via Terraform using the below code snippet to connect a cluster with Azure CNI and a VNet gateway which links our on-prem environment to Azure via a site-to-site VPN.
Step 1:
The main tf file is as follows:
resource "azurerm_resource_group" "example" {
name = "*****-****"
location = "East US"
}
resource "azurerm_role_assignment" "role_acrpull" {
scope = azurerm_container_registry.acr.id
role_definition_name = "AcrPull"
principal_id = azurerm_kubernetes_cluster.demo.kubelet_identity.0.object_id
}
resource "azurerm_container_registry" "acr" {
name = "acrswarna"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
sku = "Standard"
admin_enabled = false
}
resource "azurerm_virtual_network" "puvnet" {
name = "Publics_VNET"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
address_space = ["10.19.0.0/16"]
}
resource "azurerm_subnet" "example" {
name = "GatewaySubnet"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.puvnet.name
address_prefixes = ["10.19.3.0/24"]
}
resource "azurerm_subnet" "osubnet" {
name = "Outer_Subnet"
resource_group_name = azurerm_resource_group.example.name
address_prefixes = ["10.19.1.0/24"]
virtual_network_name = azurerm_virtual_network.puvnet.name
}
resource "azurerm_kubernetes_cluster" "demo" {
name = "demo-aksnew"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
dns_prefix = "demo-aks"
default_node_pool {
name = "default"
node_count = 2
vm_size = "standard_e4bds_v5"
type = "VirtualMachineScaleSets"
enable_auto_scaling = false
min_count = null
max_count = null
max_pods = 100
//vnet_subnet_id = azurerm_subnet.osubnet.id
}
identity {
type = "SystemAssigned"
}
network_profile {
network_plugin = "azure"
load_balancer_sku = "standard"
network_policy = "azure"
}
tags = {
Environment = "Development"
}
}
resource "azurerm_public_ip" "example" {
name = "pips-firewall"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
allocation_method = "Static"
sku = "Standard"
}
resource "azurerm_virtual_network_gateway" "example" {
name = "test"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
type = "Vpn"
vpn_type = "RouteBased"
active_active = false
enable_bgp = false
sku = "VpnGw1"
ip_configuration {
name = "vnetGatewayConfig"
public_ip_address_id = azurerm_public_ip.example.id
private_ip_address_allocation = "Dynamic"
subnet_id = azurerm_subnet.example.id
}
vpn_client_configuration {
address_space = ["172.30.0.0/16"]
root_certificate {
name = "******-****-ID-Root-CA"
public_cert_data = <<EOF
**Use certificate here**
EOF
}
revoked_certificate {
name = "*****-Global-Root-CA"
thumbprint = "****************"
}
}
}
NOTE: Update the root certificate configuration with your own in the code above.
The provider tf file is as follows:
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "=3.0.0"
}
}
}
provider "azurerm" {
features {}
skip_provider_registration = true
}
Upon running
terraform plan
terraform apply -auto-approve
the VNet and subnet configurations and the virtual network gateway are created as defined above, and sample pods were deployed on the cluster.

How to setup private-link using Terraform to access storage-account?

I need to test my Azure private endpoint using the following scenario.
We have a virtual network with two subnets (vm_subnet and storage_account_subnet).
The virtual machine (VM) should be able to connect to the storage account using a private link.
So, the image below explains the scenario:
and then I need to test my endpoint using the manual test case below:
Connect to the Azure virtual machine using SSH and PuTTY, username: adminuser and password: P#$$w0rd1234!
In the terminal, ping formuleinsstorage.blob.core.windows.net (expect to see the IP of the storage account in the range of storage_account_subnet (10.0.2.0/24))
I deploy all the infrastructure using the below Terraform code:
provider "azurerm" {
features {
resource_group {
prevent_deletion_if_contains_resources = false
}
}
}
resource "azurerm_resource_group" "main_resource_group" {
name = "RG-Terraform-on-Azure"
location = "West Europe"
}
# Create Virtual-Network
resource "azurerm_virtual_network" "virtual_network" {
name = "Vnet"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
}
# Create subnet for virtual-machine
resource "azurerm_subnet" "virtual_network_subnet" {
name = "vm_subnet"
resource_group_name = azurerm_resource_group.main_resource_group.name
virtual_network_name = azurerm_virtual_network.virtual_network.name
address_prefixes = ["10.0.1.0/24"]
}
# Create subnet for storage account
resource "azurerm_subnet" "storage_account_subnet" {
name = "storage_account_subnet"
resource_group_name = azurerm_resource_group.main_resource_group.name
virtual_network_name = azurerm_virtual_network.virtual_network.name
address_prefixes = ["10.0.2.0/24"]
}
# Create Linux Virtual machine
resource "azurerm_linux_virtual_machine" "example" {
name = "example-machine"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
size = "Standard_F2"
admin_username = "adminuser"
admin_password = "14394Las?"
disable_password_authentication = false
network_interface_ids = [
azurerm_network_interface.virtual_machine_network_interface.id,
]
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
}
resource "azurerm_network_interface" "virtual_machine_network_interface" {
name = "vm-nic"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
ip_configuration {
name = "internal"
subnet_id = azurerm_subnet.virtual_network_subnet.id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.vm_public_ip.id
}
}
# Create network interface and public IP for the virtual machine
resource "azurerm_public_ip" "vm_public_ip" {
name = "vm-public-ip-for-rdp"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
allocation_method = "Static"
sku = "Standard"
}
resource "azurerm_network_interface" "virtual_network_nic" {
name = "vm_nic"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.virtual_network_subnet.id
private_ip_address_allocation = "Dynamic"
}
}
# Set up an inbound rule because we need to connect to the virtual machine using SSH (port 22)
resource "azurerm_network_security_group" "traffic_rules" {
name = "vm_traffic_rules"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
security_rule {
name = "virtual_network_permission"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "*"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
}
resource "azurerm_subnet_network_security_group_association" "private_nsg_asso" {
subnet_id = azurerm_subnet.virtual_network_subnet.id
network_security_group_id = azurerm_network_security_group.traffic_rules.id
}
# Setup storage_account and its container
resource "azurerm_storage_account" "storage_account" {
name = "storagaccountfortest"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
account_tier = "Standard"
account_replication_type = "LRS"
account_kind = "StorageV2"
is_hns_enabled = "true"
}
resource "azurerm_storage_data_lake_gen2_filesystem" "data_lake_storage" {
name = "rawdata"
storage_account_id = azurerm_storage_account.storage_account.id
lifecycle {
prevent_destroy = false
}
}
# Setup DNS zone
resource "azurerm_private_dns_zone" "dns_zone" {
name = "privatelink.blob.core.windows.net"
resource_group_name = azurerm_resource_group.main_resource_group.name
}
resource "azurerm_private_dns_zone_virtual_network_link" "network_link" {
name = "vnet_link"
resource_group_name = azurerm_resource_group.main_resource_group.name
private_dns_zone_name = azurerm_private_dns_zone.dns_zone.name
virtual_network_id = azurerm_virtual_network.virtual_network.id
}
# Setup private-link
resource "azurerm_private_endpoint" "endpoint" {
name = "storage-private-endpoint"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
subnet_id = azurerm_subnet.storage_account_subnet.id
private_service_connection {
name = "private-service-connection"
private_connection_resource_id = azurerm_storage_account.storage_account.id
is_manual_connection = false
subresource_names = ["blob"]
}
}
resource "azurerm_private_dns_a_record" "dns_a" {
name = "dns-record"
zone_name = azurerm_private_dns_zone.dns_zone.name
resource_group_name = azurerm_resource_group.main_resource_group.name
ttl = 10
records = [azurerm_private_endpoint.endpoint.private_service_connection.0.private_ip_address]
}
The problem is, my endpoint does not work! But if I add the service endpoint manually, then everything works like a charm. So I think my DNS zone is correct, and apparently the link to the storage account is also working. So I think there must be something wrong with my private link. Any idea?
Update:
Here are versions:
Terraform v1.2.5
on windows_386
+ provider registry.terraform.io/hashicorp/azurerm v3.30.0
I believe the issue lies in the name of the dns_a_record. This should be the name of the storage account you want to reach via the private link.
The following Terraform code is working for me:
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "=3.30.0"
}
}
}
provider "azurerm" {
features {
resource_group {
prevent_deletion_if_contains_resources = false
}
}
}
resource "azurerm_resource_group" "main_resource_group" {
name = "RG-Terraform-on-Azure"
location = "West Europe"
}
# Create Virtual-Network
resource "azurerm_virtual_network" "virtual_network" {
name = "Vnet"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
}
# Create subnet for virtual-machine
resource "azurerm_subnet" "virtual_network_subnet" {
name = "vm_subnet"
resource_group_name = azurerm_resource_group.main_resource_group.name
virtual_network_name = azurerm_virtual_network.virtual_network.name
address_prefixes = ["10.0.1.0/24"]
}
# Create subnet for storage account
resource "azurerm_subnet" "storage_account_subnet" {
name = "storage_account_subnet"
resource_group_name = azurerm_resource_group.main_resource_group.name
virtual_network_name = azurerm_virtual_network.virtual_network.name
address_prefixes = ["10.0.2.0/24"]
}
# Create Linux Virtual machine
resource "azurerm_linux_virtual_machine" "example" {
name = "example-machine"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
size = "Standard_F2"
admin_username = "adminuser"
admin_password = "14394Las?"
disable_password_authentication = false
network_interface_ids = [
azurerm_network_interface.virtual_machine_network_interface.id,
]
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
}
resource "azurerm_network_interface" "virtual_machine_network_interface" {
name = "vm-nic"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
ip_configuration {
name = "internal"
subnet_id = azurerm_subnet.virtual_network_subnet.id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.vm_public_ip.id
}
}
# Create network interface and public IP for the virtual machine
resource "azurerm_public_ip" "vm_public_ip" {
name = "vm-public-ip-for-rdp"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
allocation_method = "Static"
sku = "Standard"
}
resource "azurerm_network_interface" "virtual_network_nic" {
name = "storage-private-endpoint-nic"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
ip_configuration {
name = "storage-private-endpoint-ip-config"
subnet_id = azurerm_subnet.virtual_network_subnet.id
private_ip_address_allocation = "Dynamic"
}
}
# Set up an inbound rule because we need to connect to the virtual machine using SSH (port 22)
resource "azurerm_network_security_group" "traffic_rules" {
name = "vm_traffic_rules"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
security_rule {
name = "virtual_network_permission"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "*"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
}
resource "azurerm_subnet_network_security_group_association" "private_nsg_asso" {
subnet_id = azurerm_subnet.virtual_network_subnet.id
network_security_group_id = azurerm_network_security_group.traffic_rules.id
}
# Setup storage_account and its container
resource "azurerm_storage_account" "storage_account" {
name = <STORAGE_ACCOUNT_NAME>
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
account_tier = "Standard"
account_replication_type = "LRS"
account_kind = "StorageV2"
is_hns_enabled = "true"
}
resource "azurerm_storage_data_lake_gen2_filesystem" "data_lake_storage" {
name = "rawdata"
storage_account_id = azurerm_storage_account.storage_account.id
lifecycle {
prevent_destroy = false
}
}
# Setup DNS zone
resource "azurerm_private_dns_zone" "dns_zone" {
name = "privatelink.blob.core.windows.net"
resource_group_name = azurerm_resource_group.main_resource_group.name
}
resource "azurerm_private_dns_zone_virtual_network_link" "network_link" {
name = "vnet-link"
resource_group_name = azurerm_resource_group.main_resource_group.name
private_dns_zone_name = azurerm_private_dns_zone.dns_zone.name
virtual_network_id = azurerm_virtual_network.virtual_network.id
}
# Setup private-link
resource "azurerm_private_endpoint" "endpoint" {
name = "storage-private-endpoint"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
subnet_id = azurerm_subnet.storage_account_subnet.id
private_service_connection {
name = "storage-private-service-connection"
private_connection_resource_id = azurerm_storage_account.storage_account.id
is_manual_connection = false
subresource_names = ["blob"]
}
}
resource "azurerm_private_dns_a_record" "dns_a" {
name = azurerm_storage_account.storage_account.name
zone_name = azurerm_private_dns_zone.dns_zone.name
resource_group_name = azurerm_resource_group.main_resource_group.name
ttl = 10
records = [azurerm_private_endpoint.endpoint.private_service_connection.0.private_ip_address]
}
Additionally, I'm not sure whether it is possible to ping storage accounts. To test I ran nslookup <STORAGE_ACCOUNT_NAME>.blob.core.windows.net both from my local machine and from the Azure VM. In the former case, I got a public IP while in the latter I got a private IP in the range defined in the Terraform config, which seems to be the behaviour you are looking for.

AKS via Terraform Error: Code="CustomRouteTableWithUnsupportedMSIType"

I am trying to create a private AKS cluster via Terraform using an existing VNet and subnet. I was able to create the cluster before; then suddenly the error below came up.
│ Error: creating Managed Kubernetes Cluster "demo-azwe-aks-cluster" (Resource Group "demo-azwe-aks-rg"): containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="CustomRouteTableWithUnsupportedMSIType" Message="Clusters using managed identity type SystemAssigned do not support bringing your own route table. Please see https://aka.ms/aks/customrt for more information"
│
│ with azurerm_kubernetes_cluster.aks_cluster,
│ on aks_cluster.tf line 30, in resource "azurerm_kubernetes_cluster" "aks_cluster":
│ 30: resource "azurerm_kubernetes_cluster" "aks_cluster" {
# Provision AKS Cluster
resource "azurerm_kubernetes_cluster" "aks_cluster" {
name = "${var.global-prefix}-${var.cluster-id}-${var.environment}-azwe-aks-cluster"
location = "${var.location}"
resource_group_name = azurerm_resource_group.aks_rg.name
dns_prefix = "${var.global-prefix}-${var.cluster-id}-${var.environment}-azwe-aks-cluster"
kubernetes_version = data.azurerm_kubernetes_service_versions.current.latest_version
node_resource_group = "${var.global-prefix}-${var.cluster-id}-${var.environment}-azwe-aks-nrg"
private_cluster_enabled = true
default_node_pool {
name = "dpool"
vm_size = "Standard_DS2_v2"
orchestrator_version = data.azurerm_kubernetes_service_versions.current.latest_version
availability_zones = [1, 2, 3]
enable_auto_scaling = true
max_count = 2
min_count = 1
os_disk_size_gb = 30
type = "VirtualMachineScaleSets"
vnet_subnet_id = data.azurerm_subnet.aks.id
node_labels = {
"nodepool-type" = "system"
"environment" = "${var.environment}"
"nodepoolos" = "${var.nodepool-os}"
"app" = "system-apps"
}
tags = {
"nodepool-type" = "system"
"environment" = "dev"
"nodepoolos" = "linux"
"app" = "system-apps"
}
}
# Identity (System Assigned or Service Principal)
identity {
type = "SystemAssigned"
}
# Add On Profiles
addon_profile {
azure_policy {enabled = true}
oms_agent {
enabled = true
log_analytics_workspace_id = azurerm_log_analytics_workspace.insights.id
}
}
# RBAC and Azure AD Integration Block
role_based_access_control {
enabled = true
azure_active_directory {
managed = true
admin_group_object_ids = [azuread_group.aks_administrators.id]
}
}
# Linux Profile
linux_profile {
admin_username = "ubuntu"
ssh_key {
key_data = file(var.ssh_public_key)
}
}
# Network Profile
network_profile {
network_plugin = "kubenet"
load_balancer_sku = "Standard"
}
tags = {
Environment = "prod"
}
}
# Create Azure AD Group in Active Directory for AKS Admins
resource "azuread_group" "aks_administrators" {
name = "${azurerm_resource_group.aks_rg.name}-cluster-administrators"
description = "Azure AKS Kubernetes administrators for the ${azurerm_resource_group.aks_rg.name}-cluster."
}
You are trying to create a private AKS cluster with an existing VNet and existing subnets for both AKS and the firewall. As per the error "CustomRouteTableWithUnsupportedMSIType", to bring your own route table you need a user-assigned managed identity with a role assigned to it, i.e. Network Contributor.
The network plugin will be azure instead of kubenet, as you are using an Azure VNet and its subnet.
Add-ons you can use as per your requirement, but please ensure you have used a data block for the workspace; otherwise you can directly give the resource ID. So, instead of
log_analytics_workspace_id = azurerm_log_analytics_workspace.insights.id
you can use
log_analytics_workspace_id = "/subscriptions/SubscriptionID/resourcegroups/resourcegroupname/providers/microsoft.operationalinsights/workspaces/workspacename"
Example to create a private cluster with an existing VNet and subnets (I haven't added the add-ons):
provider "azurerm" {
features {}
}
#resource group as this will be referred to in managed identity creation
data "azurerm_resource_group" "base" {
name = "resourcegroupname"
}
#existing vnet
data "azurerm_virtual_network" "base" {
name = "ansuman-vnet"
resource_group_name = data.azurerm_resource_group.base.name
}
#existing subnets
data "azurerm_subnet" "aks" {
name = "akssubnet"
resource_group_name = data.azurerm_resource_group.base.name
virtual_network_name = data.azurerm_virtual_network.base.name
}
data "azurerm_subnet" "firewall" {
name = "AzureFirewallSubnet"
resource_group_name = data.azurerm_resource_group.base.name
virtual_network_name = data.azurerm_virtual_network.base.name
}
#user assigned identity required to create route table
resource "azurerm_user_assigned_identity" "base" {
resource_group_name = data.azurerm_resource_group.base.name
location = data.azurerm_resource_group.base.location
name = "mi-name"
}
#role assignment required to create route table
resource "azurerm_role_assignment" "base" {
scope = data.azurerm_resource_group.base.id
role_definition_name = "Network Contributor"
principal_id = azurerm_user_assigned_identity.base.principal_id
}
#route table
resource "azurerm_route_table" "base" {
name = "rt-aksroutetable"
location = data.azurerm_resource_group.base.location
resource_group_name = data.azurerm_resource_group.base.name
}
#route
resource "azurerm_route" "base" {
name = "dg-aksroute"
resource_group_name = data.azurerm_resource_group.base.name
route_table_name = azurerm_route_table.base.name
address_prefix = "0.0.0.0/0"
next_hop_type = "VirtualAppliance"
next_hop_in_ip_address = azurerm_firewall.base.ip_configuration.0.private_ip_address
}
#route table association
resource "azurerm_subnet_route_table_association" "base" {
subnet_id = data.azurerm_subnet.aks.id
route_table_id = azurerm_route_table.base.id
}
#firewall
resource "azurerm_public_ip" "base" {
name = "pip-firewall"
location = data.azurerm_resource_group.base.location
resource_group_name = data.azurerm_resource_group.base.name
allocation_method = "Static"
sku = "Standard"
}
resource "azurerm_firewall" "base" {
name = "fw-akscluster"
location = data.azurerm_resource_group.base.location
resource_group_name = data.azurerm_resource_group.base.name
ip_configuration {
name = "ip-firewallakscluster"
subnet_id = data.azurerm_subnet.firewall.id
public_ip_address_id = azurerm_public_ip.base.id
}
}
#kubernetes_cluster
resource "azurerm_kubernetes_cluster" "base" {
name = "testakscluster"
location = data.azurerm_resource_group.base.location
resource_group_name = data.azurerm_resource_group.base.name
dns_prefix = "dns-testakscluster"
private_cluster_enabled = true
network_profile {
network_plugin = "azure"
outbound_type = "userDefinedRouting"
}
default_node_pool {
name = "default"
node_count = 1
vm_size = "Standard_D2_v2"
vnet_subnet_id = data.azurerm_subnet.aks.id
}
identity {
type = "UserAssigned"
user_assigned_identity_id = azurerm_user_assigned_identity.base.id
}
depends_on = [
azurerm_route.base,
azurerm_role_assignment.base
]
}
Output: (screenshots of the terraform plan run, the terraform apply run, and the Azure portal)
Note: By default, Azure requires the subnet used by the firewall to be named AzureFirewallSubnet. If you use a subnet with any other name for the firewall, creation will error out, so please ensure the existing subnet used by the firewall is named AzureFirewallSubnet.
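If the firewall subnet does not exist yet, creating it with the mandated name would look like this (the address prefix is an assumption):

resource "azurerm_subnet" "firewall_subnet" {
  name                 = "AzureFirewallSubnet" # this exact name is required by Azure
  resource_group_name  = data.azurerm_resource_group.base.name
  virtual_network_name = data.azurerm_virtual_network.base.name
  address_prefixes     = ["10.0.100.0/26"]
}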

How do I create a Container Registry on Azure with a resource?

My Terraform script is giving an error like the one below:
Error: Error creating Container Registry "containerRegistry1" (Resource Group "aks-cluster"): containerregistry.RegistriesClient#Create: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="AlreadyInUse" Message="The registry DNS name containerregistry1.azurecr.io is already in use. You can check if the name is already claimed using following API: https://learn.microsoft.com/en-us/rest/api/containerregistry/registries/checknameavailability"
on terra.tf line 106, in resource "azurerm_container_registry" "acr":
106: resource "azurerm_container_registry" "acr" {
The whole script is below.
I'm a beginner at Terraform and have tried different combinations, but they didn't work. I'm not sure what the problem can be; is it possible to help?
variable "prefix" {
default = "tfvmex"
}
provider "azurerm" {
version = "=1.28.0"
}
resource "azurerm_resource_group" "rg" {
name = "aks-cluster"
location = "West Europe"
}
resource "azurerm_virtual_network" "network" {
name = "aks-vnet"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
address_space = ["10.1.0.0/16"]
}
resource "azurerm_subnet" "subnet" {
name = "aks-subnet"
resource_group_name = azurerm_resource_group.rg.name
address_prefix = "10.1.1.0/24"
virtual_network_name = azurerm_virtual_network.network.name
}
resource "azurerm_kubernetes_cluster" "cluster" {
name = "aks"
location = azurerm_resource_group.rg.location
dns_prefix = "aks"
resource_group_name = azurerm_resource_group.rg.name
kubernetes_version = "1.17.3"
agent_pool_profile {
name = "aks"
count = 1
vm_size = "Standard_D2s_v3"
os_type = "Linux"
vnet_subnet_id = azurerm_subnet.subnet.id
}
service_principal {
client_id = "dxxxx"
client_secret = "xxxx"
}
network_profile {
network_plugin = "azure"
}
}
resource "azurerm_network_interface" "rg" {
name = "${var.prefix}-nic"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.subnet.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_virtual_machine" "rg" {
name = "${var.prefix}-vm"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
network_interface_ids = [azurerm_network_interface.rg.id]
vm_size = "Standard_DS1_v2"
# Uncomment this line to delete the OS disk automatically when deleting the VM
# delete_os_disk_on_termination = true
# Uncomment this line to delete the data disks automatically when deleting the VM
# delete_data_disks_on_termination = true
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
storage_os_disk {
name = "myosdisk1"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "hostname"
admin_username = "testadmin"
admin_password = "testtest"
}
os_profile_linux_config {
disable_password_authentication = false
}
tags = {
environment = "staging"
}
}
resource "azurerm_container_registry" "acr" {
name = "containerRegistry1"
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
sku = "Premium"
admin_enabled = false
georeplication_locations = ["West Europe"]
}
resource "azurerm_network_security_group" "example" {
name = "acceptanceTestSecurityGroup1"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
security_rule {
name = "test123"
priority = 100
direction = "Outbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "*"
source_address_prefix = "*"
destination_address_prefix = "*"
}
tags = {
environment = "Test"
}
}
Thanks!
From your error message, it appears you are using a non-unique ACR name in the Terraform resource declaration below:
resource "azurerm_container_registry" "acr" {
name = "containerRegistry1"
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
sku = "Premium"
admin_enabled = false
georeplication_locations = ["West Europe"]
}
Azure CLI has az acr check-name to ensure that an ACR name is globally unique.
I think your issue is that your ACR name is already taken globally. Not within your subscription, but globally.
This is because an ACR needs a URL to be accessed, and the URL needs to be unique. That means some other Azure account has taken that name.
Error: Error creating Container Registry "containerRegistry1" (Resource Group "aks-cluster"): containerregistry.RegistriesClient#Create: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="AlreadyInUse" Message="The registry DNS name containerregistry1.azurecr.io is already in use. You can check if the name is already claimed using following API: https://learn.microsoft.com/en-us/rest/api/containerregistry/registries/checknameavailability"
containerregistry1.azurecr.io: this URL needs to be globally unique.
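One common way to avoid such collisions is to append a random suffix using the random provider; a sketch (the prefix is arbitrary, and ACR names must be 5-50 lowercase alphanumeric characters):

resource "random_string" "acr_suffix" {
  length  = 6
  upper   = false # ACR names must be lowercase alphanumeric
  special = false
}

resource "azurerm_container_registry" "acr" {
  name                = "containerregistry${random_string.acr_suffix.result}"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  sku                 = "Premium"
  admin_enabled       = false
}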

Create Virtual network gateway in an existing Vnet using terraform

I would like to create a virtual network gateway for an existing VNet using Terraform. Can someone help me with the code?
Try updating your azurerm Terraform provider:
resource "azurerm_subnet" "test" {
name = "GatewaySubnet"
resource_group_name = "${var.resource_group_name}"
virtual_network_name = "${var.virtual_network_name}"
address_prefix = "192.168.2.0/24"
}
resource "azurerm_public_ip" "test" {
name = "test"
location = "${var.location}"
resource_group_name = "${var.resource_group_name}"
public_ip_address_allocation = "Dynamic"
}
resource "azurerm_virtual_network_gateway" "test" {
name = "test"
location = "${var.location}"
resource_group_name = "${var.resource_group_name}"
type = "Vpn"
vpn_type = "RouteBased"
active_active = false
enable_bgp = false
sku = "Basic"
ip_configuration {
name = "vnetGatewayConfig"
public_ip_address_id = "${azurerm_public_ip.test.id}"
private_ip_address_allocation = "Dynamic"
subnet_id = "${azurerm_subnet.test.id}"
}
vpn_client_configuration {
address_space = [ "10.2.0.0/24" ]
root_certificate {
name = "DigiCert-Federated-ID-Root-CA"
public_cert_data = <<EOF
MIIDuzCCAqOgAwIBAgIQCHTZWCM+IlfFIRXIvyKSrjANBgkqhkiG9w0BAQsFADBn
MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3
d3cuZGlnaWNlcnQuY29tMSYwJAYDVQQDEx1EaWdpQ2VydCBGZWRlcmF0ZWQgSUQg
Um9vdCBDQTAeFw0xMzAxMTUxMjAwMDBaFw0zMzAxMTUxMjAwMDBaMGcxCzAJBgNV
BAYTAlVTMRUwEwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdp
Y2VydC5jb20xJjAkBgNVBAMTHURpZ2lDZXJ0IEZlZGVyYXRlZCBJRCBSb290IENB
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAvAEB4pcCqnNNOWE6Ur5j
QPUH+1y1F9KdHTRSza6k5iDlXq1kGS1qAkuKtw9JsiNRrjltmFnzMZRBbX8Tlfl8
zAhBmb6dDduDGED01kBsTkgywYPxXVTKec0WxYEEF0oMn4wSYNl0lt2eJAKHXjNf
GTwiibdP8CUR2ghSM2sUTI8Nt1Omfc4SMHhGhYD64uJMbX98THQ/4LMGuYegou+d
GTiahfHtjn7AboSEknwAMJHCh5RlYZZ6B1O4QbKJ+34Q0eKgnI3X6Vc9u0zf6DH8
Dk+4zQDYRRTqTnVO3VT8jzqDlCRuNtq6YvryOWN74/dq8LQhUnXHvFyrsdMaE1X2
DwIDAQABo2MwYTAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBhjAdBgNV
HQ4EFgQUGRdkFnbGt1EWjKwbUne+5OaZvRYwHwYDVR0jBBgwFoAUGRdkFnbGt1EW
jKwbUne+5OaZvRYwDQYJKoZIhvcNAQELBQADggEBAHcqsHkrjpESqfuVTRiptJfP
9JbdtWqRTmOf6uJi2c8YVqI6XlKXsD8C1dUUaaHKLUJzvKiazibVuBwMIT84AyqR
QELn3e0BtgEymEygMU569b01ZPxoFSnNXc7qDZBDef8WfqAV/sxkTi8L9BkmFYfL
uGLOhRJOFprPdoDIUBB+tmCl3oDcBy3vnUeOEioz8zAkprcb3GHwHAK+vHmmfgcn
WsfMLH4JCLa/tRYL+Rw/N3ybCkDp00s0WUZ+AoDywSl0Q/ZEnNY0MsFiw6LyIdbq
M/s/1JRtO3bDSzD9TazRVzn2oBqzSa8VgIo5C1nOnoAKJTlsClJKvIhnRlaLQqk=
EOF
}
revoked_certificate {
name = "Verizon-Global-Root-CA"
thumbprint = "912198EEF23DCAC40939312FEE97DD560BAE49B1"
}
}
}
Terraform v0.11.7
provider.azurerm v1.13.0
It works well on my side, and you can change the resources to yours. For more details about the virtual network gateway with Terraform, see azurerm_virtual_network_gateway.
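Since the question is about an existing VNet, you can also look the network up with a data source instead of passing raw names into the subnet resource; a sketch (the variable names are assumptions):

data "azurerm_virtual_network" "existing" {
  name                = var.virtual_network_name
  resource_group_name = var.resource_group_name
}

# the GatewaySubnet above must then be created inside this VNet:
# virtual_network_name = data.azurerm_virtual_network.existing.name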
