Provisioning Azure Data Factory with managed private endpoints using Terraform

I am developing a Terraform script that provisions an Azure Data Factory which reads data from a storage account and updates an Azure SQL database. It works.
I have created private endpoints for both the storage account and the Azure SQL server (screenshots of the storage account and SQL server private endpoints omitted).
Now I want to update the Azure Data Factory to use these private endpoints. I understand that I need to set up an integration runtime and managed private endpoints in the Data Factory.
How do I achieve this using Terraform? Below is my script so far:
# Create a Data Factory
resource "azurerm_data_factory" "terraform-demo-factory" {
name = "tf-demo-factory"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
github_configuration {
account_name = "kavija"
branch_name = "main"
git_url = "https://github.com/kavija/azure-data-factory-etl-demo"
repository_name = "azure-data-factory-etl-demo"
root_folder = "/"
}
tags = {
creator = "Terraform"
project = "terraform-demo"
}
identity {
type = "UserAssigned"
identity_ids = [azurerm_user_assigned_identity.uai_adf.id]
}
}
# Assign role to ADF Service Principal
resource "azurerm_role_assignment" "uai_adf_storage_access_reader" {
scope = module.storage_account.storage_account_id
role_definition_name = "Storage Account Reader"
principal_id = azurerm_user_assigned_identity.uai_adf.principal_id
}
resource "azurerm_role_assignment" "uai_adf_storage_access_blob_contributor" {
scope = module.storage_account.storage_account_id
role_definition_name = "Storage Blob Data Contributor"
principal_id = azurerm_user_assigned_identity.uai_adf.principal_id
}
Also, is there a way to create a private endpoint for the Azure Data Factory itself, so that it sits within my VNet?

I was able to set up the self-hosted integration runtime using the Terraform code below:
resource "azurerm_virtual_machine" "selfhostvm" {
name = "${var.prefix}-VM"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
network_interface_ids = [azurerm_network_interface.example.id]
vm_size = "Standard_B1s"
storage_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2016-Datacenter"
version = "latest"
}
storage_os_disk {
name = "myosdisk1"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "${var.prefix}-VM"
admin_username = "testadmin"
admin_password = "Password1234!"
}
os_profile_windows_config {
timezone = "Pacific Standard Time"
provision_vm_agent = true
}
}
resource "azurerm_data_factory_integration_runtime_self_hosted" "host" {
name = "${var.prefix}IntegrationRuntimeHOST"
data_factory_id = azurerm_data_factory.terraform-demo-factory.id
}
resource "azurerm_virtual_machine_extension" "test" {
name = "${var.prefix}-EXT"
virtual_machine_id = azurerm_virtual_machine.selfhostvm.id
publisher = "Microsoft.Compute"
type = "CustomScriptExtension"
type_handler_version = "1.10"
settings = jsonencode({
"fileUris" = ["https://raw.githubusercontent.com/kvija85/install-df-agent/main/gatewayInstall.ps1"],
"commandToExecute" = "powershell -ExecutionPolicy Unrestricted -File gatewayInstall.ps1 ${azurerm_data_factory_integration_runtime_self_hosted.host.primary_authorization_key} && timeout /t 120"
})
}
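The managed-VNet side of the original question can also be expressed in Terraform. The following is a rough, untested sketch: it assumes you add managed_virtual_network_enabled = true to the azurerm_data_factory resource above, and the azurerm_mssql_server.example and azurerm_subnet.example references are placeholders for your own SQL server and VNet subnet.
# Azure integration runtime that runs inside the factory's managed virtual network
resource "azurerm_data_factory_integration_runtime_azure" "managed" {
  name                    = "tf-demo-managed-ir"
  data_factory_id         = azurerm_data_factory.terraform-demo-factory.id
  location                = azurerm_resource_group.example.location
  virtual_network_enabled = true
}
# Managed private endpoint from the managed VNet to the storage account (blob sub-resource)
resource "azurerm_data_factory_managed_private_endpoint" "storage" {
  name               = "mpe-storage"
  data_factory_id    = azurerm_data_factory.terraform-demo-factory.id
  target_resource_id = module.storage_account.storage_account_id
  subresource_name   = "blob"
}
# Managed private endpoint from the managed VNet to the SQL server
resource "azurerm_data_factory_managed_private_endpoint" "sql" {
  name               = "mpe-sql"
  data_factory_id    = azurerm_data_factory.terraform-demo-factory.id
  target_resource_id = azurerm_mssql_server.example.id # placeholder reference
  subresource_name   = "sqlServer"
}
# Private endpoint for the data factory itself, so that it is reachable from your own VNet
resource "azurerm_private_endpoint" "adf" {
  name                = "pe-adf"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  subnet_id           = azurerm_subnet.example.id # placeholder reference

  private_service_connection {
    name                           = "psc-adf"
    private_connection_resource_id = azurerm_data_factory.terraform-demo-factory.id
    is_manual_connection           = false
    subresource_names              = ["dataFactory"]
  }
}
Note that the managed private endpoint connections still have to be approved on the target resources (the storage account and SQL server), and your linked services have to be configured to use the managed-VNet integration runtime for traffic to actually flow through them.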

Related

Unable to build a MSSQL virtual machine from Terraform using azurerm_mssql_virtual_machine. Error code: CRPNotAllowedOperation

I'm running azurerm_mssql_virtual_machine to build a SQL Server virtual machine from a custom image (the image is configured as a SQL Server 2016 prepared image).
This is the code that I am running:
resource "azurerm_mssql_virtual_machine" "mssql_vm" {
provider = azurerm.spoke-subscription
virtual_machine_id = azurerm_windows_virtual_machine.sql_server.id
sql_license_type = "PAYG"
sql_connectivity_port = "49535"
sql_connectivity_update_username = var.sql_login
sql_connectivity_update_password = var.sql_password
sql_instance {
collation = "Latin1_General_CI_AS"
}
assessment {
enabled = true
run_immediately = true
}
storage_configuration {
disk_type = "${var.disk_type}"
storage_workload_type = "OLTP"
data_settings {
default_file_path = "F:\\DATA"
luns = [1]
}
log_settings {
default_file_path = "G:\\LOGS"
luns = [2]
}
temp_db_settings {
default_file_path = "K:\\TEMPDB"
luns = [3]
}
}
lifecycle {
ignore_changes = [
tags,
#assessment[0].schedule
]
}
tags = {
"application owner" = var.application_owner_tag
"environment" = var.environment_tag
"department" = var.department_tag
"technicalcontact" = var.technicalcontact_tag
"application" = var.application_tag
"service" = "SQL server"
}
}
I get this error:
performing CreateOrUpdate: sqlvirtualmachines.SqlVirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=0 --
Original Error: Code="CRPNotAllowedOperation" Message="Operation cannot be completed due to the following error: VM Extension with publisher 'Microsoft.SqlServer.Management' and type 'SqlIaaSAgent' does not support setting enableAutomaticUpgrade property to true on this subscription.
Steps I've taken to try to resolve this:
Re-registered SQL Server virtual machines with the Azure subscription
Turned off automatic upgrade on azurerm_windows_virtual_machine
I tried to reproduce the same in my environment:
Code:
resource "azurerm_mssql_virtual_machine" "example" {
virtual_machine_id = azurerm_windows_virtual_machine.example.id
sql_license_type = "PAYG"
r_services_enabled = true
sql_connectivity_port = 1433
sql_connectivity_type = "PRIVATE"
sql_connectivity_update_password = "xxx"
sql_connectivity_update_username = "sqllogin"
auto_patching {
day_of_week = "Sunday"
maintenance_window_duration_in_minutes = 60
maintenance_window_starting_hour = 2
}
}
resource "azurerm_virtual_network" "example" {
name = "kavexample-network"
address_space = ["10.0.0.0/16"]
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
}
resource "azurerm_subnet" "example" {
name = "internal"
resource_group_name = data.azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "example" {
name = "kavya-example-nic"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
ip_configuration {
name = "internal"
subnet_id = azurerm_subnet.example.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_windows_virtual_machine" "example" {
name = "kavyaexamplemc"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
size = "Standard_F2"
admin_username = "xxx"
admin_password = "xx"
enable_automatic_updates = true
patch_mode = "Manual"
hotpatching_enabled = true
network_interface_ids = [
azurerm_network_interface.example.id,
]
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2016-Datacenter"
version = "latest"
}
}
Received same error:
│ Error: waiting for creation of Sql Virtual Machine (Sql Virtual Machine Name "kavyaexamplemc" / Resource Group "v-sakavya-Mindtree"): Code="CRPNotAllowedOperation" Message="Operation cannot be completed due to the following error: VM Extension with publisher 'Microsoft.SqlServer.Management' and type 'SqlIaaSAgent' does not support setting enableAutomaticUpgrade property to true on this subscription."
I even tried changing the following, but was still receiving the same error:
enable_automatic_updates = false
patch_mode = "Manual"
hotpatching_enabled = false
Try deleting the VM resource completely and creating a new one with the changed settings.
Try the code below: I set enable_automatic_upgrades = false; azurerm_virtual_machine has this property, so make use of it.
Code:
resource "azurerm_virtual_network" "main" {
name = "kavyasarvnetwork"
address_space = ["10.0.0.0/16"]
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
}
resource "azurerm_subnet" "internal" {
name = "internal"
resource_group_name = data.azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.main.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "main" {
name = "kavyasarnic"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.internal.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_virtual_machine" "example" {
name = "kavyasarvm"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
network_interface_ids = [azurerm_network_interface.main.id]
vm_size = "Standard_DS1_v2"
storage_os_disk {
name = "kavyasar-OSDisk"
caching = "ReadOnly"
create_option = "FromImage"
managed_disk_type = "Premium_LRS"
os_type = "Windows"
}
storage_image_reference {
publisher = "MicrosoftSQLServer"
offer = "SQL2017-WS2016"
sku = "SQLDEV"
version = "latest"
}
os_profile {
computer_name = "hostname"
admin_username = "testadmin"
admin_password = "Password1234!"
}
os_profile_windows_config {
timezone = "Pacific Standard Time"
provision_vm_agent = true
enable_automatic_upgrades = false
}
tags = {
environment = "staging"
}
}
resource "azurerm_mssql_virtual_machine" "example" {
virtual_machine_id = azurerm_virtual_machine.example.id
sql_license_type = "PAYG"
r_services_enabled = true
sql_connectivity_port = 1433
sql_connectivity_type = "PRIVATE"
sql_connectivity_update_password = "Password1234!"
sql_connectivity_update_username = "sqllogin"
}
This seems to be caused by the extension's limitations: What is the SQL Server IaaS Agent extension? (Windows) - SQL Server on Azure VMs | Microsoft Learn
The SQL IaaS Agent extension only supports:
SQL Server VMs deployed through Azure Resource Manager; SQL Server VMs deployed through the classic model are not supported.
SQL Server VMs deployed to the public or Azure Government cloud; deployments to other private or government clouds are not supported.
Reference: azurerm_mssql_virtual_machine | Resources | hashicorp/azurerm | Terraform Registry

Deploying a VM with managed identity using Terraform on Azure fails

I am currently working on deploying a VM on Azure using Terraform. The VM deployed correctly when using client_id, subscription_id, client_secret and tenant_id in the AzureRM provider block. However, I want to make use of managed identities so I don't have to expose the client_secret.
Things I tried:
For this, I followed this guide.
I included the azuread provider block and used use_msi = true to indicate that managed identities should be used. I also included the azurerm_subscription and azurerm_client_config data blocks, as well as a resource definition, and then added the role assignment to the VM.
Code:
terraform {
required_providers {
azuread = {
source = "hashicorp/azuread"
}
}
}
provider "azurerm" {
features {}
//client_id = "XXXXXXXXXXXXXX"
//client_secret = "XXXXXXXXXXXXXX"
//subscription_id = "XXXXXXXXXXXXXX"
tenant_id = "TENANT_ID"
//use_msi = true
}
provider "azuread" {
use_msi = true
tenant_id = "TENANT_ID"
}
#Resource group definition
resource "azurerm_resource_group" "myVMachineRG" {
name = "testnew-resources"
location = "westus2"
}
resource "azurerm_virtual_network" "myVNet" {
name = "testnew-network"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.myVMachineRG.location
resource_group_name = azurerm_resource_group.myVMachineRG.name
}
resource "azurerm_subnet" "mySubnet" {
name = "testnew-internal-subnet"
resource_group_name = azurerm_resource_group.myVMachineRG.name
virtual_network_name = azurerm_virtual_network.myVNet.name
#256 total IPs
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "myNIC" {
name = "testnew-nic"
location = azurerm_resource_group.myVMachineRG.location
resource_group_name = azurerm_resource_group.myVMachineRG.name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.mySubnet.id
private_ip_address_allocation = "Dynamic"
}
}
#ADDED HERE:
data "azurerm_subscription" "current" {}
data "azurerm_client_config" "example" {
}
resource "azurerm_virtual_machine" "example" {
name = "testnew-vm"
location = azurerm_resource_group.myVMachineRG.location
resource_group_name = azurerm_resource_group.myVMachineRG.name
network_interface_ids = ["${azurerm_network_interface.myNIC.id}"]
vm_size = "Standard_F2"
#Option to delete disks when Terraform destroy is performed.
#This is to ensure that we don't keep wasting balance
delete_os_disk_on_termination = true
delete_data_disks_on_termination = true
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
storage_os_disk {
name = "OSDISK"
caching = "Readwrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
#Just for testing purposes, would be better to use a KeyVault reference here instead.
os_profile {
computer_name = "XXXXXXXXXXXXXX"
admin_username = "XXXXXXXXXXXXXX"
admin_password = "XXXXXXXXXXXXXX"
}
#Force password to authenticate
os_profile_linux_config {
disable_password_authentication = false
}
identity {
type = "SystemAssigned"
}
}
data "azurerm_role_definition" "contributor" {
name = "Contributor"
}
resource "azurerm_role_assignment" "example" {
//name = azurerm_virtual_machine.example.name
scope = data.azurerm_subscription.current.id
role_definition_name = "Contributor"
//role_definition_id = "${data.azurerm_subscription.current.id}${data.azurerm_role_definition.contributor.id}"
//principal_id = azurerm_virtual_machine.example.identity[0].principal_id
principal_id = data.azurerm_client_config.example.object_id
}
Error:
Error: building AzureRM Client: obtain subscription() from Azure CLI: parsing json result from the Azure CLI: waiting for the Azure CLI: exit status 1: ERROR: Please run 'az login' to setup account.
│
│ with provider["registry.terraform.io/hashicorp/azurerm"],
│ on main.tf line 9, in provider "azurerm":
│ 9: provider "azurerm" {
I don't understand why it's still asking for az login when I am trying to use a managed identity to log in.
I've redacted the tenant ID for security purposes.
Any help would be greatly appreciated :)
I tried to reproduce the same requirement in my environment and was able to deploy it successfully.
You need to provide the name of the managed identity when authenticating via managed identities in Terraform: add msi_name under the azuread provider.
Note: as you mentioned, make sure the managed identity has enough permissions (for example, the Contributor role) to authenticate and create resources, otherwise the deployment will fail.
main.tf
data "azurerm_subscription" "current" {}
variable "subscription_id" {
default = "xxxxxxxxxxxx"
}
provider "azurerm"{
features{}
subscription_id = var.subscription_id
}
provider "azuread"{
features{}
use_msi = true
msi-name = "jahnaviidentity" //Give Name of the Managed identity
}
resource "azurerm_resource_group" "example" {
name = "example-resources"
location = "West Europe"
}
resource "azurerm_virtual_network" "main" {
name = "main-network"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
}
resource "azurerm_subnet" "internal" {
name = "internal"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.main.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "main" {
name = "main-nic"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
ip_configuration {
name = "<configurationname>"
subnet_id = azurerm_subnet.internal.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_virtual_machine" "main" {
name = "main-vm"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
network_interface_ids = [azurerm_network_interface.main.id]
vm_size = "Standard_DS1_v2"
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
storage_os_disk {
name = "osdisk"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "<computername>"
admin_username = "<admin/username>"
admin_password = "xxxxxx"
}
os_profile_linux_config {
disable_password_authentication = false
}
identity {
type = "SystemAssigned"
}
}
Output: terraform init, terraform plan, and terraform apply all completed successfully, and the VM was deployed successfully in the Azure portal (screenshots omitted).
Your provider block has use_msi commented out for azurerm (the one that's failing). Is that just a copy/paste mistake in the question? I would have put this in the comments, but my reputation is not high enough.
It looks like azurerm might also need the subscription ID (unlike azuread).
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/managed_service_identity
The use_msi property should be set in the azurerm provider as well, as described in the guide above.
Also, just to be sure, you've already configured the managed identity to use for this purpose, right?
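For reference, the provider block from that guide looks roughly like this; treat it as a sketch, with the IDs below as placeholders and the optional client_id only needed for a user-assigned identity:
provider "azurerm" {
  features {}

  use_msi         = true
  subscription_id = "00000000-0000-0000-0000-000000000000" # placeholder
  tenant_id       = "00000000-0000-0000-0000-000000000000" # placeholder

  # Only required when authenticating with a user-assigned managed identity:
  # client_id = "<client id of the user-assigned identity>"
}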

Terraform enable VM Insights

Has anyone managed to enable VM Insights for a VM via Terraform?
I'm able to create a VM and enable logging, but not to enable Insights.
I've seen this question but didn't find a clear answer in it:
How to enable azure vm application insights monitoring agent using terraform
Here is the full Terraform script that I'm using for tests; I'm running it directly in the Azure Cloud Shell.
# Configure the Azure provider
provider "azurerm" {
# The "feature" block is required for AzureRM provider 2.x.
features {}
}
variable "prefix" {
default = "tfvmex"
}
resource "azurerm_resource_group" "main" {
name = "${var.prefix}-resources"
location = "West Europe"
}
resource "azurerm_virtual_network" "main" {
name = "${var.prefix}-network"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.main.location
resource_group_name = azurerm_resource_group.main.name
}
resource "azurerm_subnet" "internal" {
name = "internal"
resource_group_name = azurerm_resource_group.main.name
virtual_network_name = azurerm_virtual_network.main.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "main" {
name = "${var.prefix}-nic"
location = azurerm_resource_group.main.location
resource_group_name = azurerm_resource_group.main.name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.internal.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_virtual_machine" "main" {
name = "${var.prefix}-vm"
location = azurerm_resource_group.main.location
resource_group_name = azurerm_resource_group.main.name
network_interface_ids = [azurerm_network_interface.main.id]
vm_size = "Standard_DS1_v2"
# Uncomment this line to delete the OS disk automatically when deleting the VM
# delete_os_disk_on_termination = true
# Uncomment this line to delete the data disks automatically when deleting the VM
# delete_data_disks_on_termination = true
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
storage_os_disk {
name = "myosdisk1"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "hostname"
admin_username = "testadmin"
admin_password = "Password1234!"
}
os_profile_linux_config {
disable_password_authentication = false
}
tags = {
environment = "staging"
}
}
resource "azurerm_storage_account" "main" {
name = "omstesttest22"
resource_group_name = azurerm_resource_group.main.name
location = "westus"
account_tier = "Standard"
account_replication_type = "GRS"
tags = {
environment = "staging"
}
}
resource "azurerm_log_analytics_workspace" "law02" {
name = "${var.prefix}-logAnalytics"
location = azurerm_resource_group.main.location
resource_group_name = azurerm_resource_group.main.name
sku = "PerGB2018"
retention_in_days = 30
}
resource "azurerm_log_analytics_solution" "example" {
solution_name = "ContainerInsights"
location = azurerm_resource_group.main.location
resource_group_name = azurerm_resource_group.main.name
workspace_resource_id = azurerm_log_analytics_workspace.law02.id
workspace_name = azurerm_log_analytics_workspace.law02.name
plan {
publisher = "Microsoft"
product = "OMSGallery/ContainerInsights"
}
}
#===================================================================
# Set Monitoring and Log Analytics Workspace
#===================================================================
resource "azurerm_virtual_machine_extension" "oms_mma02" {
name = "test-OMSExtension"
virtual_machine_id = azurerm_virtual_machine.main.id
publisher = "Microsoft.EnterpriseCloud.Monitoring"
type = "OmsAgentForLinux"
type_handler_version = "1.12"
auto_upgrade_minor_version = true
settings = <<SETTINGS
{
"workspaceId" : "${azurerm_log_analytics_workspace.law02.workspace_id}"
}
SETTINGS
protected_settings = <<PROTECTED_SETTINGS
{
"workspaceKey" : "${azurerm_log_analytics_workspace.law02.primary_shared_key}"
}
PROTECTED_SETTINGS
}
Hope it was clear.
Thanks!
According to the documentation, VM Insights requires the following two agents to be installed on each virtual machine to be monitored:
Log Analytics agent. Collects events and performance data from the virtual machine or virtual machine scale set and delivers it to the Log Analytics workspace. Deployment methods for the Log Analytics agent on Azure resources use the VM extension for Windows and Linux.
Dependency agent. Collects discovered data about processes running on the virtual machine and external process dependencies, which are used by the Map feature in VM insights. The Dependency agent relies on the Log Analytics agent to deliver its data to Azure Monitor. Deployment methods for the Dependency agent on Azure resources use the VM extension for Windows and Linux.
After validating this on my side, you can add the Dependency Agent extension to your existing code:
resource "azurerm_virtual_machine_extension" "da" {
name = "DAExtension"
virtual_machine_id = azurerm_virtual_machine.main.id
publisher = "Microsoft.Azure.Monitoring.DependencyAgent"
type = "DependencyAgentLinux"
type_handler_version = "9.5"
auto_upgrade_minor_version = true
}
For more information, read Configure Log Analytics workspace for VM insights and Enable VM insights guest health (preview)
Please use the product "OMSGallery/VMInsights" (instead of "OMSGallery/ContainerInsights"):
resource "azurerm_log_analytics_solution" "..." {
solution_name = "..."
location = ...
resource_group_name = ...
workspace_resource_id = ...
workspace_name = ...
plan {
publisher = "Microsoft"
product = "OMSGallery/VMInsights"
}
}
To deploy it using Terraform:
Deploy a log analytics workspace and a VMInsights solution associated with the workspace.
resource "azurerm_log_analytics_workspace" "law" {
name = "LogAnalyticsWorkspace"
location = "Your location"
resource_group_name = "Your resource group"
sku = "PerGB2018"
retention_in_days = "your retention in days"
internet_ingestion_enabled= true
internet_query_enabled = false
tags = "Your tags"
}
resource "azurerm_log_analytics_solution" "vminsights" {
solution_name = "VMInsights"
location = "Your location"
resource_group_name = "Your resource group"
workspace_resource_id = azurerm_log_analytics_workspace.law.id
workspace_name = azurerm_log_analytics_workspace.law.name
tags = "Your tags"
plan {
publisher = "Microsoft"
product = "OMSGallery/VMInsights"
}
}
Deploy the VM as usual, with the OMSAgent and DependencyAgentWindows extensions:
resource "azurerm_windows_virtual_machine" "vm" {
......
......
}
OMS for Windows:
https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/oms-windows
resource "azurerm_virtual_machine_extension" "omsext" {
name = "OMSExtension"
virtual_machine_id = azurerm_windows_virtual_machine.vm.id
publisher = "Microsoft.EnterpriseCloud.Monitoring"
type = "MicrosoftMonitoringAgent"
type_handler_version = "1.0"
auto_upgrade_minor_version = true
settings = <<SETTINGS
{
"workspaceId": "${azurerm_log_analytics_workspace.law.id}"
}
SETTINGS
protected_settings = <<PROTECTED_SETTINGS
{
"workspaceKey": "${azurerm_log_analytics_workspace.law.primary_shared_key}"
}
PROTECTED_SETTINGS
tags = "Your tags"
}
DA Agent for Windows:
https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/agent-dependency-windows
resource "azurerm_virtual_machine_extension" "DAAgent" {
name = "DAAgentExtension"
virtual_machine_id = azurerm_windows_virtual_machine.vm.id
publisher = "Microsoft.Azure.Monitoring.DependencyAgent"
type = "DependencyAgentWindows"
type_handler_version = "9.10"
auto_upgrade_minor_version = true
tags = "Your tags"
}
Microsoft has changed the settings needed by the MicrosoftMonitoringAgent extension, and the Terraform given by @Bill above no longer works as of June 2022. The Terraform that worked for me was:
# Import the subscription and resource groups
data "azurerm_subscription" "current" {
}
data "azurerm_resource_group" "rg" {
name = "rg-name"
provider = azurerm
}
resource "random_password" "windowsvm-password" {
length = 24
special = false
}
# Define the VM itself
resource "azurerm_windows_virtual_machine" "windowsvm-c" {
name = "mywindowsvm"
computer_name = "mywindowsvm"
resource_group_name = data.azurerm_resource_group.rg.name
location = data.azurerm_resource_group.rg.location
size = "Standard_B2s"
admin_username = "adminlogin"
admin_password = random_password.windowsvm-password.result
identity { type = "SystemAssigned" }
network_interface_ids = [
azurerm_network_interface.windowsvm-c-nic.id,
]
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2022-datacenter-azure-edition-core"
version = "latest"
}
patch_mode = "AutomaticByPlatform"
hotpatching_enabled = true
}
# Add logging and monitoring
resource "azurerm_log_analytics_workspace" "law" {
name = "vmloganalytics"
resource_group_name = data.azurerm_resource_group.rg.name
location = data.azurerm_resource_group.rg.location
sku = "PerGB2018"
retention_in_days = 365
internet_ingestion_enabled= true
internet_query_enabled = false
}
resource "azurerm_log_analytics_solution" "vminsights" {
solution_name = "vminsights"
resource_group_name = data.azurerm_resource_group.rg.name
location = data.azurerm_resource_group.rg.location
workspace_resource_id = azurerm_log_analytics_workspace.law.id
workspace_name = azurerm_log_analytics_workspace.law.name
plan {
publisher = "Microsoft"
product = "VMInsights"
}
}
# This extension is needed for other extensions
resource "azurerm_virtual_machine_extension" "daa-agent" {
name = "DependencyAgentWindows"
virtual_machine_id = azurerm_windows_virtual_machine.windowsvm-c.id
publisher = "Microsoft.Azure.Monitoring.DependencyAgent"
type = "DependencyAgentWindows"
type_handler_version = "9.10"
automatic_upgrade_enabled = true
auto_upgrade_minor_version = true
}
# Add logging and monitoring extensions
resource "azurerm_virtual_machine_extension" "monitor-agent" {
depends_on = [ azurerm_virtual_machine_extension.daa-agent ]
name = "AzureMonitorWindowsAgent"
virtual_machine_id = azurerm_windows_virtual_machine.windowsvm-c.id
publisher = "Microsoft.Azure.Monitor"
type = "AzureMonitorWindowsAgent"
type_handler_version = "1.5"
automatic_upgrade_enabled = true
auto_upgrade_minor_version = true
}
resource "azurerm_virtual_machine_extension" "msmonitor-agent" {
depends_on = [ azurerm_virtual_machine_extension.daa-agent ]
name = "MicrosoftMonitoringAgent" # Must be called this
virtual_machine_id = azurerm_windows_virtual_machine.windowsvm-c.id
publisher = "Microsoft.EnterpriseCloud.Monitoring"
type = "MicrosoftMonitoringAgent"
type_handler_version = "1.0"
# Not yet supported
# automatic_upgrade_enabled = true
# auto_upgrade_minor_version = true
settings = <<SETTINGS
{
"workspaceId": "${azurerm_log_analytics_workspace.law.id}",
"azureResourceId": "${azurerm_windows_virtual_machine.windowsvm-c.id}",
"stopOnMultipleConnections": "false"
}
SETTINGS
protected_settings = <<PROTECTED_SETTINGS
{
"workspaceKey": "${azurerm_log_analytics_workspace.law.primary_shared_key}"
}
PROTECTED_SETTINGS
}
Note the extended settings under "msmonitor-agent".
Here are a few articles on this topic that you may find useful:
Azure Monitor for application monitoring with Terraform
Azure Insights: Terraform; Log Analytics Workspaces; Custom scripts with Arc-enabled servers; Virtual WAN resources

Azure The 'resourceTargetId' property of endpoint 'vm1-TF' is invalid or missing

I want to implement Traffic Manager in Terraform between two VMs in different locations (West Europe and North Europe). I've attached my code, but I don't know how to configure "target_resource_id" for each VM, because the VMs (and the networks) were created in a for_each loop. Traffic Manager should switch to the secondary VM in case the first VM fails. Any ideas?
My code:
variable "subscription_id" {}
variable "tenant_id" {}
variable "environment" {}
variable "azurerm_resource_group_name" {}
variable "locations" {
type = map(string)
default = {
vm1 = "North Europe"
vm2 = "West Europe"
}
}
# Configure the Azure Provider
provider "azurerm" {
subscription_id = var.subscription_id
tenant_id = var.tenant_id
version = "=2.10.0"
features {}
}
resource "azurerm_virtual_network" "main" {
for_each = var.locations
name = "${each.key}-network"
address_space = ["10.0.0.0/16"]
location = each.value
resource_group_name = var.azurerm_resource_group_name
}
resource "azurerm_subnet" "internal" {
for_each = var.locations
name = "${each.key}-subnet"
resource_group_name = var.azurerm_resource_group_name
virtual_network_name = azurerm_virtual_network.main[each.key].name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_public_ip" "example" {
for_each = var.locations
name = "${each.key}-pip"
location = each.value
resource_group_name = var.azurerm_resource_group_name
allocation_method = "Static"
idle_timeout_in_minutes = 30
tags = {
environment = "dev01"
}
}
resource "azurerm_network_interface" "main" {
for_each = var.locations
name = "${each.key}-nic"
location = each.value
resource_group_name = var.azurerm_resource_group_name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.internal[each.key].id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.example[each.key].id
}
}
resource "random_password" "password" {
length = 16
special = true
override_special = "_%#"
}
resource "azurerm_virtual_machine" "main" {
for_each = var.locations
name = "${each.key}t-vm"
location = each.value
resource_group_name = var.azurerm_resource_group_name
network_interface_ids = [azurerm_network_interface.main[each.key].id]
vm_size = "Standard_D2s_v3"
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "18.04-LTS"
version = "latest"
}
storage_os_disk {
name = "${each.key}-myosdisk1"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "${each.key}-hostname"
admin_username = "testadmin"
admin_password = random_password.password.result
}
os_profile_linux_config {
disable_password_authentication = false
}
tags = {
environment = "dev01"
}
}
resource "random_id" "server" {
keepers = {
azi_id = 1
}
byte_length = 8
}
resource "azurerm_traffic_manager_profile" "example" {
name = random_id.server.hex
resource_group_name = var.azurerm_resource_group_name
traffic_routing_method = "Priority"
dns_config {
relative_name = random_id.server.hex
ttl = 100
}
monitor_config {
protocol = "http"
port = 80
path = "/"
interval_in_seconds = 30
timeout_in_seconds = 9
tolerated_number_of_failures = 3
}
tags = {
environment = "dev01"
}
}
resource "azurerm_traffic_manager_endpoint" "first-vm" {
for_each = var.locations
name = "${each.key}-TF"
resource_group_name = var.azurerm_resource_group_name
profile_name = "${azurerm_traffic_manager_profile.example.name}"
target_resource_id = "[azurerm_network_interface.main[each.key].id]"
type = "azureEndpoints"
priority = "${[each.key] == "vm1" ? 1 : 2}"
}
My error:
Error: trafficmanager.EndpointsClient#CreateOrUpdate: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error.
Status=400 Code="BadRequest" Message="The 'resourceTargetId' property of endpoint 'vm1-TF' is invalid or missing.
The property must be specified only for the following endpoint types: AzureEndpoints, NestedEndpoints.
You must have read access to the resource to which it refers."
According to this documentation:
Azure endpoints are used for Azure-based services in Traffic Manager. The following Azure resource types are supported:
PaaS cloud services
Web Apps
Web App Slots
PublicIPAddress resources (which can be connected to VMs either directly or via an Azure Load Balancer). The publicIpAddress must have a DNS name assigned to be used in a Traffic Manager profile.
In your code, you need to change the target resource ID to the public IP address resource; you should not pass the network interface ID.
resource "azurerm_traffic_manager_endpoint" "first-vm" {
for_each = var.locations
name = "${each.key}-TF"
resource_group_name = var.azurerm_resource_group_name
profile_name = "${azurerm_traffic_manager_profile.example.name}"
target_resource_id = azurerm_public_ip.example[each.key].id
type = "azureEndpoints"
priority = "${[each.key] == "vm1" ? 1 : 2}"
}
Please note: the public IP address must have a DNS name assigned to be used in a Traffic Manager profile (see the sketch below).
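As a sketch, the DNS name can be added to your existing azurerm_public_ip resources with domain_name_label (the label value below is just an example):
resource "azurerm_public_ip" "example" {
  for_each                = var.locations
  name                    = "${each.key}-pip"
  location                = each.value
  resource_group_name     = var.azurerm_resource_group_name
  allocation_method       = "Static"
  idle_timeout_in_minutes = 30

  # Gives the public IP a DNS name of the form <label>.<region>.cloudapp.azure.com,
  # which Traffic Manager requires for azureEndpoints that target a public IP
  domain_name_label = "${each.key}-tm-demo"

  tags = {
    environment = "dev01"
  }
}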
You can also check this ARM template, which is equivalent to this Terraform template.
Hope this helps!

Terraform connect to aks cluster without running az login

My goal is to create an Ubuntu VM which will connect to AKS without running az login.
The main idea is that I want to let other people connect to that AKS cluster only, without being able to read or write any other resources in Azure. So far I've tried to achieve that by creating a new role and assigning it to the VM, but with no luck.
My question is: is it possible to run az aks get-credentials ... without running az login?
Terraform template:
# Create AKS Cluster
resource "azurerm_kubernetes_cluster" "akscluster" {
count = var.cluster_count
name = "${var.cluster_name}-${count.index}"
location = var.location
resource_group_name = azurerm_resource_group.aksrg.name
dns_prefix = var.dns
default_node_pool {
name = var.node_pool_name
node_count = var.node_count
vm_size = var.vm_size
type = "VirtualMachineScaleSets"
}
service_principal {
client_id = var.kubernetes_client_id
client_secret = var.kubernetes_client_secret
}
tags = {
Environment = var.tags
}
}
# Create virtual machine
resource "azurerm_virtual_machine" "myterraformvm" {
count = var.cluster_count
name = "aks-${count.index}"
location = var.location
resource_group_name = azurerm_resource_group.aksrg.name
network_interface_ids = [azurerm_network_interface.myterraformnic.id]
vm_size = "Standard_DS1_v2"
storage_os_disk {
name = "myOsDisk"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Premium_LRS"
}
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "18.04-LTS"
version = "latest"
}
os_profile {
computer_name = "hostname"
admin_username = "var.admin"
admin_password = "var.pass"
}
os_profile_linux_config {
disable_password_authentication = false
# ssh_keys {
# path = "/home/azureuser/.ssh/authorized_keys"
# key_data = "ssh-rsa AAAAB3Nz{snip}hwhqT9h"
# }
}
identity {
type = "SystemAssigned"
}
boot_diagnostics {
enabled = "true"
storage_uri = azurerm_storage_account.mystorageaccount.primary_blob_endpoint
}
tags = {
environment = var.tags
}
}
resource "azurerm_virtual_machine_extension" "example" {
count = var.cluster_count
name = "hostname"
virtual_machine_id = azurerm_virtual_machine.myterraformvm[count.index].id
publisher = "Microsoft.Azure.Extensions"
type = "CustomScript"
type_handler_version = "2.0"
settings = <<SETTINGS
{
"commandToExecute": "curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash"
}
SETTINGS
tags = {
environment = var.tags
}
}
You can create a remote file using the remote-exec provisioner, passing in the azurerm_kubernetes_cluster.aks.kube_config_raw attribute. I made an example for you here: https://github.com/ams0/terraform-templates/tree/master/aks-vm.
It creates a VNet with two subnets, an AKS cluster in one and an Ubuntu VM in the other, and writes the kubeconfig to /home/ubuntu/.kube/config on the VM. You just need to download kubectl and you're good to go.
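A minimal sketch of the same idea against the configuration in the question, assuming Terraform can reach the VM over SSH (the connection details and credentials below are placeholders):
resource "null_resource" "copy_kubeconfig" {
  count = var.cluster_count

  # Placeholder connection details: use a public IP, bastion or VPN as appropriate
  connection {
    type     = "ssh"
    host     = azurerm_network_interface.myterraformnic.private_ip_address
    user     = var.admin
    password = var.pass
  }

  # Create the .kube directory first...
  provisioner "remote-exec" {
    inline = ["mkdir -p /home/${var.admin}/.kube"]
  }

  # ...then write the raw kubeconfig generated by AKS onto the VM
  provisioner "file" {
    content     = azurerm_kubernetes_cluster.akscluster[count.index].kube_config_raw
    destination = "/home/${var.admin}/.kube/config"
  }

  depends_on = [azurerm_virtual_machine.myterraformvm]
}
With that in place, users on the VM only need kubectl; they never run az login, and their Azure access is limited to what the kubeconfig credentials allow inside the cluster.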
