I would like to deploy an Azure landing zone using Terraform across multiple subscriptions. The hub network, with Azure Firewall, should live in subscription 1, and each spoke should have its own subscription; I need 4 spokes deployed in 4 separate subscriptions.
Can someone help me with the logic for how to write the Terraform?
For your requirements, here is the architecture you can follow. The hub and the spokes are connected via VNet peering. According to the documentation:
The virtual networks can be in the same, or different subscriptions. When you peer virtual networks in different subscriptions, both subscriptions can be associated to the same or different Azure Active Directory tenant.
So you can peer VNets in two different subscriptions. I assume you authenticate with the Azure CLI, and that your account is already logged in and has sufficient permissions in both subscriptions. Here is an example:
provider "azurerm" {
features {}
alias = "subscription1"
subscription_id = "xxxxxxx"
}
provider "azurerm" {
features {}
alias = "subscription2"
subscription_id = "xxxxxxx"
}
data "azurerm_virtual_network" "remote" {
provider = azurerm.subscription1
name = "remote_vnet_name"
resource_group_name = "remote_group_name"
}
data "azurerm_virtual_network" "vnet" {
provider = azurerm.subscription2
name = "vnet_name"
resource_group_name = "group_name"
}
resource "azurerm_virtual_network_peering" "peering" {
provider = azurerm.subscription2
name = "${data.azurerm_virtual_network.vnet.name}-to-${data.azurerm_virtual_network.remote.name}"
resource_group_name = "group_name"
virtual_network_name = data.azurerm_virtual_network.vnet.name
remote_virtual_network_id = data.azurerm_virtual_network.remote.id
allow_virtual_network_access = true
allow_forwarded_traffic = true
# `allow_gateway_transit` must be set to false for vnet Global Peering
allow_gateway_transit = false
}
resource "azurerm_virtual_network_peering" "peering1" {
provider = azurerm.subscription1
name = "${data.azurerm_virtual_network.remote.name}-to-${data.azurerm_virtual_network.vnet.name}"
resource_group_name = "remote_group_name"
virtual_network_name = data.azurerm_virtual_network.remote.name
remote_virtual_network_id = data.azurerm_virtual_network.vnet.id
allow_virtual_network_access = true
allow_forwarded_traffic = true
# `allow_gateway_transit` must be set to false for vnet Global Peering
allow_gateway_transit = false
}
VNet peering always comes in pairs, so you need to create a peering on each of the two VNets that take part in it, even when they sit in different subscriptions. This example only shows how to create a peering between two VNets in different subscriptions; from there you can complete the whole architecture in Terraform as you wish, for instance with the sketch below.
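To extend this to the full landing zone (a hub plus four spokes, one subscription each), one common pattern is a provider alias per subscription and a reusable spoke module that creates the spoke VNet and both directions of the peering. The module paths, subscription IDs, and output names below are assumptions for illustration, not part of the original answer:

provider "azurerm" {
  features {}
  alias           = "hub"
  subscription_id = "hub-subscription-id"
}

provider "azurerm" {
  features {}
  alias           = "spoke1"
  subscription_id = "spoke1-subscription-id"
}

# ...repeat the provider block for spoke2, spoke3 and spoke4...

# Hub VNet and Azure Firewall in subscription 1 (hypothetical module).
module "hub" {
  source    = "./modules/hub"
  providers = { azurerm = azurerm.hub }
}

# One spoke module call per spoke subscription (hypothetical module).
# The module must declare configuration_aliases = [azurerm.hub] in its
# required_providers block so it can create the hub side of the peering.
module "spoke1" {
  source = "./modules/spoke"
  providers = {
    azurerm     = azurerm.spoke1
    azurerm.hub = azurerm.hub
  }
  hub_vnet_id = module.hub.vnet_id # hypothetical module output
}

Provider configurations generally cannot be generated dynamically, so each subscription still needs its own alias, but the spoke logic itself stays in one module.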
I have set up an AKS cluster through Terraform, and in the same resource group I have created an Application Gateway, as such:
resource "azurerm_kubernetes_cluster" "cluster" {
name = var.aks_name
tags = var.tags
location = var.vnet_location
resource_group_name = local.aks_resource_group
dns_prefix = "k8s"
oidc_issuer_enabled = true
role_based_access_control_enabled = true
azure_policy_enabled = true
#workload_identity_enabled = true
default_node_pool {
name = var.node_pool_name
node_count = var.node_count
vm_size = var.node_type
os_disk_size_gb = var.os_disk_size_gb
vnet_subnet_id = local.aks_subnet_id
}
identity {
type = "SystemAssigned"
}
ingress_application_gateway {
gateway_id = var.application_gateway_id
}
...
During creation, Azure creates a MC_******* resource group for the managed cluster, and I use a service principal to deploy all Terraform resources.
The ingress controller configured by the `ingress_application_gateway` block automatically acquires a managed identity deployed inside the MC_**** resource group.
I need to access that managed identity in order to set some needed access policies on dependent resources.
The ingress controller needs access to both the Application Gateway as well as the original resource group, as such:
#----------------------------------------------------------------
# Ingress controller running on AKS
#----------------------------------------------------------------
data "azurerm_resource_group" "aks_resource_group" {
  name = local.aks_resource_group
}

# Gateway ingress controller should have Reader access on the resource group
resource "azurerm_role_assignment" "igc_reader_access" {
  scope                = data.azurerm_resource_group.aks_resource_group.id
  role_definition_name = "Reader"
  principal_id         = data.azurerm_user_assigned_identity.ingress_controller_identity.principal_id
}

# Gateway ingress controller should have Contributor access on the Application Gateway
resource "azurerm_role_assignment" "ag_contributor_access" {
  scope                = var.application_gateway_id
  role_definition_name = "Contributor"
  principal_id         = data.azurerm_user_assigned_identity.ingress_controller_identity.principal_id
}

data "azurerm_user_assigned_identity" "ingress_controller_identity" {
  name                = "ingressapplicationgateway-qa"
  resource_group_name = "MC_RG-Digital-Service-qa_westeurope"
}
The question: how can I set up my service principal (the one used to create the azurerm_kubernetes_cluster resource) to have Reader access on the MC_****** resource group created by Azure for AKS?
I don't want to give my service principal Contributor access on the whole subscription.
You could do something like this when you execute Terraform as the service principal (the data sources could be reduced if the Application Gateway were part of the same Terraform project):
# get the current client (your service principal)
data "azurerm_client_config" "current" {
}

# get the MC_ resource group
data "azurerm_resource_group" "aks_mc_rg" {
  depends_on = [azurerm_kubernetes_cluster.cluster]
  name       = azurerm_kubernetes_cluster.cluster.node_resource_group
}

# get the Application Gateway
data "azurerm_application_gateway" "aks_igc" {
  name                = "existing-app-gateway"
  resource_group_name = "existing-resources"
}

# read the managed identity created by the ingress add-on
data "azurerm_user_assigned_identity" "ingress_controller_identity" {
  name                = "ingressapplicationgateway-qa"
  resource_group_name = data.azurerm_resource_group.aks_mc_rg.name
}

# assign Reader on the MC_ resource group to the current client
resource "azurerm_role_assignment" "rg_aks_nodes_owners" {
  scope                = data.azurerm_resource_group.aks_mc_rg.id
  role_definition_name = "Reader"
  principal_id         = data.azurerm_client_config.current.object_id
}

# Gateway ingress controller should have Reader access on the resource group
resource "azurerm_role_assignment" "igc_reader_access" {
  scope                = data.azurerm_resource_group.aks_mc_rg.id
  role_definition_name = "Reader"
  principal_id         = data.azurerm_user_assigned_identity.ingress_controller_identity.principal_id
}

# Gateway ingress controller should have Contributor access on the Application Gateway
resource "azurerm_role_assignment" "ag_contributor_access" {
  scope                = data.azurerm_application_gateway.aks_igc.id
  role_definition_name = "Contributor"
  principal_id         = data.azurerm_user_assigned_identity.ingress_controller_identity.principal_id
}
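As a possible alternative to looking up the AGIC identity by its hard-coded name, recent azurerm provider versions also export that identity on the AKS resource itself. A minimal sketch, assuming that exported attribute layout:

# AGIC identity exported by the AKS resource (assumed attribute path;
# avoids the data lookup by name inside the MC_ resource group).
locals {
  agic_object_id = azurerm_kubernetes_cluster.cluster.ingress_application_gateway[0].ingress_application_gateway_identity[0].object_id
}

resource "azurerm_role_assignment" "igc_reader_access_alt" {
  scope                = data.azurerm_resource_group.aks_mc_rg.id
  role_definition_name = "Reader"
  principal_id         = local.agic_object_id
}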
I deployed VNet peerings with Terraform, but the peering was stuck in the Initiated status. When I tried manually with the same values there was no problem. How can I fix it?
resource "azurerm_virtual_network_peering" "spoke_aks_peering" {
virtual_network_name = azurerm_virtual_network.virtual_network_spoke.name
resource_group_name = azurerm_resource_group.resource-group_spoke.name
remote_virtual_network_id = azurerm_virtual_network.virtual_network_aks.id
name = "peerspoketoaks"
allow_virtual_network_access = true
allow_forwarded_traffic = true
}
I tried to reproduce the same in my environment to create a peering between two virtual networks.
Note: if the peering status is currently Initiated, enable the peering on both VNets to get the status Connected.
To resolve the issue, create the peering on both VNets, like below.
# Azure virtual network peering from staging to test
resource "azurerm_virtual_network_peering" "peeringconnection1" {
  name                      = "stagingtotest"
  resource_group_name       = local.resource_group_name
  virtual_network_name      = azurerm_virtual_network.network["staging"].name
  remote_virtual_network_id = azurerm_virtual_network.network["test"].id
}

# Azure virtual network peering from test to staging
resource "azurerm_virtual_network_peering" "peeringconnection2" {
  name                      = "testtostaging"
  resource_group_name       = local.resource_group_name
  virtual_network_name      = azurerm_virtual_network.network["test"].name
  remote_virtual_network_id = azurerm_virtual_network.network["staging"].id
}
After terraform apply, the peering is created on both VNets.
Refer to the documentation here for more details.
In our organization we have a team which manages central Azure services like the VPN gateway, firewall, Bastion, etc. It should also provision subscriptions for our software development teams, which involves managing users and groups, creating a VNet and peering it with the hub, and so on. The development teams manage all other relevant resources in their subscriptions.
I couldn't find an efficient way to build the IaC around the subscription management process. It seems to me that you have to run Terraform for each subscription separately, since you have to provide a subscription ID in the Terraform azurerm provider. This seems a bit complicated to me; I would rather define all subscriptions in a single file and let Terraform manage them in a single run, like this:
subscriptions = {
  "my-subscription-1" = {
    vnet_address_space    = ["10.0.4.0/27"],
    snet_address_prefixes = ["10.0.4.0/27"],
    users = [
      "abc@example.com",
      "def@example.com",
    ],
    groups = [
      "MyAD-Group",
    ]
  },
  "my-subscription-2" = {
    vnet_address_space    = ["10.0.4.32/27"],
    snet_address_prefixes = ["10.0.4.32/27"],
    users = [
      "efg@example.com",
      "hij@example.com",
    ],
    groups = [
      "AnotherAD-Group",
    ]
  }
}
I know that you can define multiple providers in terraform and assign alias names, but this only works until you have 5-6 subscriptions. In my case I need to manage 50 subscriptions.
Did I miss something? How do you manage your subscriptions?
You need to set up multiple provider configurations:
provider "azurerm" {
subscription_id = "SUBSCRIPTION 1 ID"
features {}
}
provider "azurerm" {
alias = "subscription2"
subscription_id = "SUBSCRIPTION 2 ID"
features {}
}
Then, when you run a resource or module, you set the alias on it, for example:
# Create a resource group in subscription 1
resource "azurerm_resource_group" "rg1" {
  name     = "RG-subscription-1"
  location = "eastus"
}

# Create a resource group in subscription 2
resource "azurerm_resource_group" "rg2" {
  provider = azurerm.subscription2 # this is the alias previously defined
  name     = "RG-subscription-2"
  location = "eastus"
}
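The same works for modules via the providers meta-argument. A minimal sketch, assuming a hypothetical ./modules/network module:

# Run a hypothetical module against subscription 2 by passing the aliased provider.
module "network_subscription2" {
  source = "./modules/network"
  providers = {
    azurerm = azurerm.subscription2
  }
}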
Hope this helps!
When I configure Azure monitoring using the OMS solution for VMs from this answer (Enable Azure Monitor for existing Virtual machines using terraform), I notice that this feature is being deprecated and Azure prefers that you move to the new monitoring solution (not using the Log Analytics agent).
Azure allows me to configure VM monitoring through the GUI, but I would like to do it using Terraform.
Is there a particular setup I have to use in Terraform to achieve this? (I am using a Linux VM, by the way.)
Yes, that is correct. The omsagent has been marked as legacy, and Azure now has a new monitoring agent called the Azure Monitor agent. The solution below is for Linux; please check the official Terraform docs for Windows machines.
We need three things to match the UI counterpart in Terraform (plus the AzureMonitorLinuxAgent VM extension, shown in the code below):
azurerm_log_analytics_workspace
azurerm_monitor_data_collection_rule
azurerm_monitor_data_collection_rule_association
Below is the example code:
data "azurerm_virtual_machine" "vm" {
name = var.vm_name
resource_group_name = var.az_resource_group_name
}
resource "azurerm_log_analytics_workspace" "workspace" {
name = "${var.project}-${var.env}-log-analytics"
location = var.az_location
resource_group_name = var.az_resource_group_name
sku = "PerGB2018"
retention_in_days = 30
}
resource "azurerm_virtual_machine_extension" "AzureMonitorLinuxAgent" {
name = "AzureMonitorLinuxAgent"
publisher = "Microsoft.Azure.Monitor"
type = "AzureMonitorLinuxAgent"
type_handler_version = 1.0
auto_upgrade_minor_version = "true"
virtual_machine_id = data.azurerm_virtual_machine.vm.id
}
resource "azurerm_monitor_data_collection_rule" "example" {
name = "example-rules"
resource_group_name = var.az_resource_group_name
location = var.az_location
destinations {
log_analytics {
workspace_resource_id = azurerm_log_analytics_workspace.workspace.id
name = "test-destination-log"
}
azure_monitor_metrics {
name = "test-destination-metrics"
}
}
data_flow {
streams = ["Microsoft-InsightsMetrics"]
destinations = ["test-destination-log"]
}
data_sources {
performance_counter {
streams = ["Microsoft-InsightsMetrics"]
sampling_frequency_in_seconds = 60
counter_specifiers = ["\\VmInsights\\DetailedMetrics"]
name = "VMInsightsPerfCounters"
}
}
}
# associate to a Data Collection Rule
resource "azurerm_monitor_data_collection_rule_association" "example1" {
name = "example1-dcra"
target_resource_id = data.azurerm_virtual_machine.vm.id
data_collection_rule_id = azurerm_monitor_data_collection_rule.example.id
description = "example"
}
Reference:
monitor_data_collection_rule
monitor_data_collection_rule_association
I was wondering if someone could help me with setting up Vnet Peerings across subscriptions in Azure using Terraform. Each subscription is within the same tenant, but they have different service principals. I keep getting errors suggesting that the service principal cannot see the resource group in the other subscription. This is despite giving that service principal contributor access to the other subscription.
This is an example of the code I have:
resource "azurerm_virtual_network_peering" "dev-to-test" {
name = "dev-to-test"
resource_group_name = "gl-dev-rg"
virtual_network_name = "gl-dev-vnet"
remote_virtual_network_id = "/subscriptions/subscriptionid/resourceGroups/gl-test-rg/providers/Microsoft.Network/virtualNetworks/gl-test-vnet"
allow_virtual_network_access = true
allow_forwarded_traffic = true
}
resource "azurerm_virtual_network_peering" "test-to-dev" {
name = "test-to-dev"
resource_group_name = "gl-test-rg"
virtual_network_name = "gl-test-vnet"
remote_virtual_network_id = "/subscriptions/subscriptionid/resourceGroups/gl-dev-rg/providers/Microsoft.Network/virtualNetworks/gl-dev-vnet"
allow_virtual_network_access = true
allow_forwarded_traffic = true
}
Any help would be really appreciated!
Further information can be found here:
https://github.com/terraform-providers/terraform-provider-azurerm/issues/1253
The question was asked and answered in that issue. The TL;DR is that Terraform has an alias parameter for providers, which allows two separate service principals to reference different resources in a single Terraform run.
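A minimal sketch of what that looks like here, assuming hypothetical variables for the two service principals' credentials and the resource names from the question:

# One provider per subscription, each authenticating with its own service principal.
provider "azurerm" {
  alias           = "dev"
  features {}
  subscription_id = "dev-subscription-id"
  client_id       = var.dev_client_id # hypothetical variables
  client_secret   = var.dev_client_secret
  tenant_id       = var.tenant_id
}

provider "azurerm" {
  alias           = "test"
  features {}
  subscription_id = "test-subscription-id"
  client_id       = var.test_client_id
  client_secret   = var.test_client_secret
  tenant_id       = var.tenant_id
}

# Each peering is created by the provider that owns the local VNet.
resource "azurerm_virtual_network_peering" "dev-to-test" {
  provider                  = azurerm.dev
  name                      = "dev-to-test"
  resource_group_name       = "gl-dev-rg"
  virtual_network_name      = "gl-dev-vnet"
  remote_virtual_network_id = "/subscriptions/subscriptionid/resourceGroups/gl-test-rg/providers/Microsoft.Network/virtualNetworks/gl-test-vnet"
}

resource "azurerm_virtual_network_peering" "test-to-dev" {
  provider                  = azurerm.test
  name                      = "test-to-dev"
  resource_group_name       = "gl-test-rg"
  virtual_network_name      = "gl-test-vnet"
  remote_virtual_network_id = "/subscriptions/subscriptionid/resourceGroups/gl-dev-rg/providers/Microsoft.Network/virtualNetworks/gl-dev-vnet"
}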