I'm trying to create a Service Fabric cluster in Azure with a Terraform script. The Azure provider for Terraform has released a "Service Fabric Cluster" (azurerm_service_fabric_cluster) resource. That resource only creates the Service Fabric management part, i.e. not the VM scale sets or networking resources.
How do I create a working SF cluster via Terraform?
The Terraform azurerm_service_fabric_cluster resource only provisions the management plane. To provision the nodes, deploy a VM scale set with the Service Fabric extension, which configures the SF nodes.
Refer to the example in the official provider GitHub repository for details:
https://github.com/terraform-providers/terraform-provider-azurerm/tree/master/examples/service-fabric/windows-vmss-self-signed-certs
The relevant part of that example is the ServiceFabricNode extension on the scale set:
extension {
  name                       = "${var.prefix}ServiceFabricNode"
  publisher                  = "Microsoft.Azure.ServiceFabric"
  type                       = "ServiceFabricNode"
  type_handler_version       = "1.1"
  auto_upgrade_minor_version = false

  settings = jsonencode({
    "clusterEndpoint"    = azurerm_service_fabric_cluster.example.cluster_endpoint
    "nodeTypeRef"        = azurerm_service_fabric_cluster.example.node_type[0].name
    "durabilityLevel"    = "bronze"
    "nicPrefixOverride"  = azurerm_subnet.example.address_prefixes[0]
    "enableParallelJobs" = true
    "certificate" = {
      "commonNames" = [
        "${var.prefix}servicefabric.${var.location}.cloudapp.azure.com",
      ]
      "x509StoreName" = "My"
    }
  })

  protected_settings = jsonencode({
    "StorageAccountKey1" = azurerm_storage_account.example.primary_access_key
    "StorageAccountKey2" = azurerm_storage_account.example.secondary_access_key
  })
}
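That extension block lives inside the VM scale set resource itself. Below is a minimal sketch of where it goes, assuming the newer azurerm_windows_virtual_machine_scale_set resource; the image, SKU, credentials and networking values are illustrative and should be adapted to your setup (the linked example has the full configuration).

resource "azurerm_windows_virtual_machine_scale_set" "example" {
  name                = "${var.prefix}vmss"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  sku                 = "Standard_D2_v2"   # illustrative size
  instances           = 3
  admin_username      = "adminuser"
  admin_password      = var.admin_password # illustrative variable
  overprovision       = false              # Service Fabric node types must not overprovision
  upgrade_mode        = "Automatic"

  source_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2019-Datacenter"
    version   = "latest"
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  network_interface {
    name    = "${var.prefix}nic"
    primary = true

    ip_configuration {
      name      = "internal"
      primary   = true
      subnet_id = azurerm_subnet.example.id
    }
  }

  extension {
    name                 = "${var.prefix}ServiceFabricNode"
    publisher            = "Microsoft.Azure.ServiceFabric"
    type                 = "ServiceFabricNode"
    type_handler_version = "1.1"
    # settings / protected_settings as shown in the block above
  }
}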
When I configure Azure monitoring using the OMS solution for VMs, following the answer "Enable Azure Monitor for existing Virtual machines using terraform", I notice that this feature is being deprecated and Azure recommends moving to the new monitoring solution (not using the Log Analytics agent).
Azure allows me to configure VM monitoring through the GUI, but I would like to do it with Terraform.
Is there a particular setup I have to use in Terraform to achieve this? (I am using a Linux VM, by the way.)
Yes, that is correct. The omsagent has been marked as legacy and Azure now has a new monitoring agent called the "Azure Monitor agent". The solution below is for Linux; please check the official Terraform docs for Windows machines.
We need three resources, plus the Azure Monitor agent VM extension, to match the UI counterpart in Terraform:
azurerm_log_analytics_workspace
azurerm_monitor_data_collection_rule
azurerm_monitor_data_collection_rule_association
Below is the example code:
data "azurerm_virtual_machine" "vm" {
name = var.vm_name
resource_group_name = var.az_resource_group_name
}
resource "azurerm_log_analytics_workspace" "workspace" {
name = "${var.project}-${var.env}-log-analytics"
location = var.az_location
resource_group_name = var.az_resource_group_name
sku = "PerGB2018"
retention_in_days = 30
}
resource "azurerm_virtual_machine_extension" "AzureMonitorLinuxAgent" {
name = "AzureMonitorLinuxAgent"
publisher = "Microsoft.Azure.Monitor"
type = "AzureMonitorLinuxAgent"
type_handler_version = 1.0
auto_upgrade_minor_version = "true"
virtual_machine_id = data.azurerm_virtual_machine.vm.id
}
resource "azurerm_monitor_data_collection_rule" "example" {
name = "example-rules"
resource_group_name = var.az_resource_group_name
location = var.az_location
destinations {
log_analytics {
workspace_resource_id = azurerm_log_analytics_workspace.workspace.id
name = "test-destination-log"
}
azure_monitor_metrics {
name = "test-destination-metrics"
}
}
data_flow {
streams = ["Microsoft-InsightsMetrics"]
destinations = ["test-destination-log"]
}
data_sources {
performance_counter {
streams = ["Microsoft-InsightsMetrics"]
sampling_frequency_in_seconds = 60
counter_specifiers = ["\\VmInsights\\DetailedMetrics"]
name = "VMInsightsPerfCounters"
}
}
}
# associate to a Data Collection Rule
resource "azurerm_monitor_data_collection_rule_association" "example1" {
name = "example1-dcra"
target_resource_id = data.azurerm_virtual_machine.vm.id
data_collection_rule_id = azurerm_monitor_data_collection_rule.example.id
description = "example"
}
Reference:
monitor_data_collection_rule
monitor_data_collection_rule_association
I can't find documentation for adding the health and repair extension.
How do I enable health and repair on a VMSS using Terraform? I already created the VMSS, but the health option is disabled. I would like to enable and configure it in my Terraform. Does anyone have an idea?
Do I define it under the VMSS resource block?
Adding the following block solved the issue:
resource "azurerm_linux_virtual_machine_scale_set" "consul_cluster" {
[...]
extension {
name = "ConsulHealthExtension"
publisher = "Microsoft.ManagedServices"
type = "ApplicationHealthLinux"
type_handler_version = "1.0"
auto_upgrade_minor_version = false
settings = jsonencode({
protocol = "http"
port = var.consul_health_port
requestPath = "health"
})
} ```
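For the "repair" part of the question, the same scale set resource also supports an automatic_instance_repair block, which relies on the application health extension above to decide when an instance needs repairing. A minimal sketch (the grace period value is illustrative):

resource "azurerm_linux_virtual_machine_scale_set" "consul_cluster" {
  [...]

  # Automatic repair requires the application health extension to be installed.
  automatic_instance_repair {
    enabled      = true
    grace_period = "PT30M"  # wait 30 minutes after instance changes before repairing
  }
}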
I want to deploy a Windows VM with the Azure Cloud Adoption Framework (CAF) using Terraform. In the example configuration.tfvars, all the configuration is done, but I cannot find the correct Terraform code to deploy this tfvars configuration.
The Windows VM module is here.
So far, I have written the code below:
module "caf_virtual_machine" {
source = "aztfmod/caf/azurerm//modules/compute/virtual_machine"
version = "5.0.0"
# belows are the 7 required variables
base_tags = var.tags
client_config =
global_settings = var.global_settings
location = var.location
resource_group_name = var.resource_group_name
settings =
vnets = var.vnets
}
The vnets, global_settings and resource_group_name variables already exist in configuration.tfvars. I have added the tags and location variables to configuration.tfvars.
But what should I enter for the settings and client_config variables?
The virtual machine module is a private submodule. You should use it by calling the base CAF module.
The Readme on the Terraform registry explains how to leverage the core CAF module: https://registry.terraform.io/modules/aztfmod/caf/azurerm/latest/submodules/virtual_machine
Source code of an example:
https://github.com/aztfmod/terraform-azurerm-caf/tree/master/examples/compute/virtual_machine/211-vm-bastion-winrm-agents/registry
There is also a library of example configuration files showing how to deploy virtual machines:
https://github.com/aztfmod/terraform-azurerm-caf/tree/master/examples/compute/virtual_machine
Calling the root module looks like this:
module "caf" {
source = "aztfmod/caf/azurerm"
version = "5.0.0"
global_settings = var.global_settings
tags = var.tags
resource_groups = var.resource_groups
storage_accounts = var.storage_accounts
keyvaults = var.keyvaults
managed_identities = var.managed_identities
role_mapping = var.role_mapping
diagnostics = {
# Get the diagnostics settings of services to create
diagnostic_log_analytics = var.diagnostic_log_analytics
diagnostic_storage_accounts = var.diagnostic_storage_accounts
}
compute = {
virtual_machines = var.virtual_machines
}
networking = {
vnets = var.vnets
network_security_group_definition = var.network_security_group_definition
public_ip_addresses = var.public_ip_addresses
}
security = {
dynamic_keyvault_secrets = var.dynamic_keyvault_secrets
}
}
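Each map passed to the module above also needs a matching variable declaration in your root module so the values from configuration.tfvars can flow in. A minimal sketch, showing only a few of the variables; the rest follow the same pattern:

variable "global_settings" {
  default = {}
}

variable "resource_groups" {
  default = {}
}

variable "virtual_machines" {
  default = {}
}

# ...declare the remaining maps referenced in the module block the same way.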
Note: it is recommended to use the VS Code devcontainer provided in the source repository to execute the Terraform deployment. The devcontainer includes the tooling required to deploy Azure solutions.
I have an Azure App Service that is provisioned by Terraform. The App Service runs a Docker image from Azure ACR, therefore it needs to access the ACR. I'm currently using the password and login name method in my Terraform configuration. How can I make the Azure App Service access the ACR via a service principal role assignment in Terraform?
resource "azurerm_app_service" "tf_app_service" {
name = var.application_name
location = azurerm_resource_group.tf_resource_group.location
resource_group_name = azurerm_resource_group.tf_resource_group.name
app_service_plan_id = azurerm_app_service_plan.tf_service_plan.id
site_config {
always_on = true
linux_fx_version = "DOCKER|${var.acr_name}.azurecr.io/${var.img_repo_name}:${var.tag}"
}
// How to use role assignment?
app_settings = {
DOCKER_REGISTRY_SERVER_URL = // need to avoid docker URL
WEBSITES_ENABLE_APP_SERVICE_STORAGE = "false"
DOCKER_REGISTRY_SERVER_USERNAME = // need to avoid user name
DOCKER_REGISTRY_SERVER_PASSWORD = // need to avoid PW
}
identity {
type = "SystemAssigned"
}
tags = {
environment = var.environment
DeployedBy = "terraform"
}
}
The steps to use a service principal to access ACR are outlined here.
To do the same in Terraform, first create a new service principal, then assign a password to it. Afterwards you can use those two values to fill the app settings of your App Service.
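A minimal sketch of that approach, assuming the azuread provider (2.x syntax) is configured alongside azurerm and that the registry is managed in the same configuration as azurerm_container_registry.tf_acr; all names here are illustrative:

# Service principal that the App Service will use to pull from ACR
resource "azuread_application" "acr_pull" {
  display_name = "${var.application_name}-acr-pull"
}

resource "azuread_service_principal" "acr_pull" {
  application_id = azuread_application.acr_pull.application_id
}

resource "azuread_service_principal_password" "acr_pull" {
  service_principal_id = azuread_service_principal.acr_pull.id
}

# Grant the service principal pull-only rights on the registry
resource "azurerm_role_assignment" "acr_pull" {
  scope                = azurerm_container_registry.tf_acr.id
  role_definition_name = "AcrPull"
  principal_id         = azuread_service_principal.acr_pull.object_id
}

The app settings can then reference the generated credentials instead of the admin user, e.g. DOCKER_REGISTRY_SERVER_USERNAME = azuread_application.acr_pull.application_id and DOCKER_REGISTRY_SERVER_PASSWORD = azuread_service_principal_password.acr_pull.value.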
I would like to deploy an Azure landing zone using Terraform across multiple subscriptions. The hub network should have an Azure Firewall in subscription1, and each spoke has a different subscription; I need 4 spokes, each deployed in a separate subscription.
Can someone help me with the logic for how to write the Terraform?
For your requirements, here is the architecture that you can follow. The hub and the spokes are connected via VNet peering. According to the documentation:
The virtual networks can be in the same, or different, subscriptions. When you peer virtual networks in different subscriptions, both subscriptions can be associated to the same or different Azure Active Directory tenant.
So you can peer VNets in two different subscriptions. I assume you authenticate with the Azure CLI and that your account is already logged in with sufficient permissions in both subscriptions. Here is an example:
provider "azurerm" {
features {}
alias = "subscription1"
subscription_id = "xxxxxxx"
}
provider "azurerm" {
features {}
alias = "subscription2"
subscription_id = "xxxxxxx"
}
data "azurerm_virtual_network" "remote" {
provider = azurerm.subscription1
name = "remote_vnet_name"
resource_group_name = "remote_group_name"
}
data "azurerm_virtual_network" "vnet" {
provider = azurerm.subscription2
name = "vnet_name"
resource_group_name = "group_name"
}
resource "azurerm_virtual_network_peering" "peering" {
provider = azurerm.subscription2
name = "${data.azurerm_virtual_network.vnet.name}-to-${data.azurerm_virtual_network.remote.name}"
resource_group_name = "group_name"
virtual_network_name = data.azurerm_virtual_network.vnet.name
remote_virtual_network_id = data.azurerm_virtual_network.remote.id
allow_virtual_network_access = true
allow_forwarded_traffic = true
# `allow_gateway_transit` must be set to false for vnet Global Peering
allow_gateway_transit = false
}
resource "azurerm_virtual_network_peering" "peering1" {
provider = azurerm.subscription1
name = "${data.azurerm_virtual_network.remote.name}-to-${data.azurerm_virtual_network.vnet.name}"
resource_group_name = "remote_group_name"
virtual_network_name = data.azurerm_virtual_network.remote.name
remote_virtual_network_id = data.azurerm_virtual_network.vnet.id
allow_virtual_network_access = true
allow_forwarded_traffic = true
# `allow_gateway_transit` must be set to false for vnet Global Peering
allow_gateway_transit = false
}
VNet peering always comes in pairs, so you need to create a peering resource on each side for every pair of VNets you want to connect across subscriptions. This example only shows how to peer two VNets in different subscriptions; you can then extend the pattern to complete the whole hub-and-spoke architecture in Terraform.
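For the four spokes in four subscriptions, one pattern is to declare one provider alias per spoke subscription and wrap the spoke resources (spoke VNet plus both peering directions) in a reusable local module, passing the aliased providers in with the providers argument. A hypothetical sketch; the ./modules/spoke module and all names are illustrative:

provider "azurerm" {
  features {}
  alias           = "spoke3"
  subscription_id = "xxxxxxx"
}

module "spoke3" {
  # hypothetical local module containing the spoke VNet and both peering directions
  source = "./modules/spoke"

  providers = {
    azurerm     = azurerm.spoke3        # spoke subscription
    azurerm.hub = azurerm.subscription1 # hub subscription, for the hub-side peering
  }

  # values the spoke module needs to peer back to the hub
  hub_vnet_id   = data.azurerm_virtual_network.remote.id
  hub_vnet_name = data.azurerm_virtual_network.remote.name
}

The module itself would declare the extra provider with configuration_aliases so it can create the hub-side peering in the hub subscription while everything else lands in the spoke subscription.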