Create SQL virtual machine using terraform throwing error - azure

Below is the complete code I am using to create the SQL virtual machine. While creating the resources I get the error mentioned below. I tried to debug by
pinning azurerm to a specific version, and by
increasing the subscription's quota limit for the location.
It was working well previously and has suddenly started throwing this error.
#Database Server 1
provider "azurerm" {
  version = "2.10"
  features {}
}

resource "azurerm_resource_group" "RG" {
  name     = "resource_db"
  location = var.location
}

resource "azurerm_virtual_network" "VN" {
  name                = "vnet_db"
  resource_group_name = azurerm_resource_group.RG.name
  location            = azurerm_resource_group.RG.location
  address_space       = ["10.10.0.0/16"]
}

resource "azurerm_subnet" "DBSN" {
  name                 = "snet_db"
  resource_group_name  = azurerm_resource_group.RG.name
  virtual_network_name = azurerm_virtual_network.VN.name
  address_prefixes     = ["10.10.2.0/24"]
}

resource "azurerm_public_ip" "DBAZPIP" {
  name                = "pip_db"
  resource_group_name = azurerm_resource_group.RG.name
  location            = azurerm_resource_group.RG.location
  allocation_method   = "Static"
}

resource "azurerm_network_security_group" "NSGDB" {
  name                = "nsg_db"
  location            = azurerm_resource_group.RG.location
  resource_group_name = azurerm_resource_group.RG.name

  # RDP
  security_rule {
    name                       = "RDP"
    priority                   = 300
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "3389"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }

  security_rule {
    name                       = "SQL"
    priority                   = 310
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "1433"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}

resource "azurerm_subnet_network_security_group_association" "mainDB" {
  subnet_id                 = azurerm_subnet.DBSN.id
  network_security_group_id = azurerm_network_security_group.NSGDB.id
}

resource "azurerm_network_interface" "vmnicprimary" {
  name                = "nic_db"
  location            = azurerm_resource_group.RG.location
  resource_group_name = azurerm_resource_group.RG.name

  ip_configuration {
    name                          = "ipConfig_db"
    subnet_id                     = azurerm_subnet.DBSN.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.DBAZPIP.id
  }
}

resource "azurerm_virtual_machine" "DatabaseServer" {
  name                  = "vm_db"
  location              = azurerm_resource_group.RG.location
  resource_group_name   = azurerm_resource_group.RG.name
  network_interface_ids = [azurerm_network_interface.vmnicprimary.id]
  vm_size               = "Standard_D4s_v3"

  storage_image_reference {
    publisher = "MicrosoftSQLServer"
    offer     = "SQL2017-WS2016"
    sku       = "Enterprise"
    version   = "latest"
  }

  storage_os_disk {
    name              = "osdisk_db"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Premium_LRS"
  }

  os_profile {
    computer_name  = "compdb"
    admin_username = "vmadmin"
    admin_password = "P#ssW0rd123456"
  }

  os_profile_windows_config {
    provision_vm_agent        = true
    enable_automatic_upgrades = true
  }
}

resource "azurerm_mssql_virtual_machine" "example" {
  virtual_machine_id    = azurerm_virtual_machine.DatabaseServer.id
  sql_license_type      = "PAYG"
  sql_connectivity_type = "PUBLIC"
}
Running the above code throws the following error:
Error: retrieving Sql Virtual Machine (Sql Virtual Machine Name "vm_m2m80" / Resource Group "resource_m2m80"): sqlvirtualmachine.SQLVirtualMachinesClient#Get: Failure responding to request: StatusCode=500 -- Original Error: autorest/azure: Service returned an error. Status=500 Code="InternalServerError" Message="An unexpected error occured while processing the request. Tracking ID: '9a1622b0-f7d1-4070-96c0-ca67d66a3522'"
on main.tf line 117, in resource "azurerm_mssql_virtual_machine" "example":
117: resource "azurerm_mssql_virtual_machine" "example" {

TLDR: It has been fixed!!
Update from Microsoft:
The fix has been released
"Hope this finds you well.
We have confirmed internally, there will be a fix for this issue soon. I will update you once it is deployed."
We are seeing the same thing, failing on every single build, using various Terraform and Azure API versions; this started happening two days ago for us. When trying to import into state it times out as well:
Error: reading Sql Virtual Machine (Sql Virtual Machine Name "sqlvmname" / Resource Group "resource group"): sqlvirtualmachine.SQLVirtualMachinesClient#Get: Failure sending request: StatusCode=500 -- Original Error: context deadline exceeded
I believe this is an API issue. We engaged Microsoft Support and they were able to reproduce the issue using this page (thank you :) ). They are checking internally and engaging more resources at Microsoft. In the meantime I don't think there is anything that can be done.
One possible workaround, seeing as this actually does create the resource in Azure, may be to create it using Terraform and then comment out your code; since it's not in state it won't delete it. Not pretty..
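A slightly tidier variant of the same workaround is to drop just that one resource from Terraform state so the rest of the configuration keeps working; a sketch, assuming the resource address `azurerm_mssql_virtual_machine.example` from the code above:

```shell
# Remove only the SQL VM resource from Terraform state.
# The Azure resource itself is left untouched.
terraform state rm 'azurerm_mssql_virtual_machine.example'

# Once the API issue is resolved, it can be re-imported
# (the resource ID below is illustrative):
# terraform import azurerm_mssql_virtual_machine.example \
#   /subscriptions/<subscription-id>/resourceGroups/resource_db/providers/Microsoft.SqlVirtualMachine/sqlVirtualMachines/vm_db
```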

Related

Receiving error from Azure API when attempting to use additional_unattend_content argument

Hoping someone might be able to assist. I am using Terraform on Azure and looking for a method to deploy Windows VMs with auto-login and WinRM configured. I've found that some use azurerm_windows_virtual_machine.<name_of_vm>.additional_unattend_content to set this up.
Example found in the provider GitHub repo: https://github.com/hashicorp/terraform-provider-azurerm/blob/b0c897055329438be6a3a[…]ned-to-active-directory/modules/active-directory-domain/main.tf
I'm getting some errors from the Azure backend and was hoping someone more knowledgeable than me has experience with this. I'm getting pushback from Azure support when I request their help. Appreciate any info anyone can provide!
Happy to provide logs or anything else that's needed!
resource "azurerm_windows_virtual_machine" "wks_win10" {
count = var.number_of_win10_wks
depends_on = [azurerm_network_interface.wks_nic_win10]
name = "wks-win10-${count.index}"
location = var.location
resource_group_name = var.rg_name
size = var.vm_size
provision_vm_agent = true
computer_name = "wks-win10-${count.index}"
admin_username = var.windows_username
admin_password = var.windows_password
network_interface_ids = ["${element(azurerm_network_interface.wks_nic_win10.*.id, count.index)}"]
os_disk {
caching = "ReadWrite"
name = "wks-win10-osdisk-${count.index}"
disk_size_gb = "250"
storage_account_type = "StandardSSD_LRS"
}
source_image_reference {
publisher = "MicrosoftWindowsDesktop"
offer = "Windows-10"
sku = "win10-21h2-ent"
version = "latest"
}
additional_unattend_content {
setting = "AutoLogon"
content = local.auto_logon_data
# content = "<AutoLogon><Password><Value>${var.windows_password}</Value></Password><Enabled>true</Enabled><LogonCount>3</LogonCount><Username>${var.windows_username}</Username></AutoLogon>"
}
winrm_listener {
protocol = "Http"
}
tags = merge(var.tags,
{
"kind"="workstation"
"os"="windows"
})
}
resource "azurerm_virtual_machine_extension" "wks_win10_vm_extension_network_watcher" {
count = var.number_of_win10_wks
depends_on = [azurerm_windows_virtual_machine.wks_win10]
name = "win10netwatch${count.index}"
virtual_machine_id = "${element(azurerm_windows_virtual_machine.wks_win10.*.id, count.index )}"
publisher = "Microsoft.Azure.NetworkWatcher"
type = "NetworkWatcherAgentWindows"
type_handler_version = "1.4"
auto_upgrade_minor_version = true
}
Errors:
module.compute.azurerm_network_interface.wks_nic_win10[0]: Creation complete after 1s [id=/subscriptions/<subscription-id>/resourceGroups/test-rg/providers/Microsoft.Network/networkInterfaces/wks-win10-nic-0]
module.compute.azurerm_windows_virtual_machine.wks_win10[0]: Creating...
2022-11-23T16:09:12.176-0500 [ERROR] provider.terraform-provider-azurerm_v3.30.0_x5: Response contains error diagnostic: #module=sdk.proto diagnostic_detail= diagnostic_severity=ERROR diagnostic_summary="creating Windows Virtual Machine: (Name "wks-win10-0" / Resource Group "test-rg"): compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidParameter" Message="The value of parameter windowsConfiguration.additionalUnattendContent.content is invalid." Target="windowsConfiguration.additionalUnattendContent.content"" #caller=github.com/hashicorp/terraform-plugin-go#v0.10.0/tfprotov5/internal/diag/diagnostics.go:56 tf_provider_addr=provider tf_req_id=6a628786-49b2-388d-85f9-07e4eeb8a618 tf_resource_type=azurerm_windows_virtual_machine tf_rpc=ApplyResourceChange tf_proto_version=5.2 timestamp=2022-11-23T16:09:12.176-0500
2022-11-23T16:09:12.181-0500 [ERROR] vertex "module.compute.azurerm_windows_virtual_machine.wks_win10[0]" error: creating Windows Virtual Machine: (Name "wks-win10-0" / Resource Group "test-rg"): compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidParameter" Message="The value of parameter windowsConfiguration.additionalUnattendContent.content is invalid." Target="windowsConfiguration.additionalUnattendContent.content"
I tried to reproduce the scenario in my environment:
Terraform code:
resource "azurerm_windows_virtual_machine" "example" {
name = "kaacctvm"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
network_interface_ids = [azurerm_network_interface.example.id]
size = "Standard_F2"
admin_username = "txxxin"
admin_password = "Pasxxx4!"
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
os_profile {
computer_name = "hostname"
admin_username = "txxxmin"
admin_password = "gfgxx4!"
}
source_image_reference {
publisher = "MicrosoftWindowsDesktop"
offer = "Windows-10"
sku = "win10-21h2-ent"
version = "latest"
}
additional_unattend_content {
setting = "AutoLogon"
content = "<AutoLogon><Password><Value>${var.windows_password}</Value></Password><Enabled>true</Enabled><LogonCount>3</LogonCount><Username>${var.windows_username}</Username></AutoLogon>"
}
winrm_listener {
protocol = "Http"
}
tags = {
environment = "staging"
}
}
resource "azurerm_virtual_machine_extension" "example" {
name = "kavyahostname"
virtual_machine_id = azurerm_windows_virtual_machine.example.id
publisher ="Microsoft.Azure.NetworkWatcher"
type = "NetworkWatcherAgentWindows"
type_handler_version = "1.4"
auto_upgrade_minor_version = true
settings = <<SETTINGS
{
"commandToExecute": "hostname && uptime"
}
SETTINGS
tags = {
environment = "Production"
}
}
Received similar error:
VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidParameter" Message="The value of parameter “” is invalid."
Make sure the contents of the username and password variables are in the correct format when building the AutoLogon content:
content = "<AutoLogon><Password><Value>${var.windows_password}</Value></Password><Enabled>true</Enabled><LogonCount>3</LogonCount><Username>${var.windows_username}</Username></AutoLogon>"
Please check azure-quickstart-templates | issues | GitHub.
The value must be valid XML: if the password or username contains characters such as <, > or &, the resulting content is invalid. It can also fail if the content is an array, which cannot be base64 encoded.
Check for colon mistakes or spelling mistakes.
I have variables:
variable "windows_username" {
type = string
default = "xxx"
}
variable "windows_password" {
type = string
default = "xxx"
}
Then the VM extension was created successfully.
Also check this Microsoft.Compute/virtualMachines - Bicep, ARM template & Terraform AzAPI reference | Microsoft Learn
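Since the AutoLogon content is XML, one way to catch a bad password early is a Terraform variable validation; a sketch (the variable name matches the question, the validation rule itself is my addition):

```hcl
variable "windows_password" {
  type      = string
  sensitive = true

  validation {
    # The password is interpolated into AutoLogon XML, so XML special
    # characters would make additionalUnattendContent.content invalid.
    condition     = !can(regex("[<>&\"']", var.windows_password))
    error_message = "windows_password must not contain XML special characters (< > & \" ')."
  }
}
```

With this in place, `terraform plan` fails with a clear message instead of the opaque InvalidParameter error from the Azure API.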

Terraform azurerm_storage_share_directory does not work with file share 'NFS'

We created an Azure storage account with the intention of creating an 'Azure File' to be mounted using NFS (default is SMB). Below is the Terraform code which creates a storage account, a file share and a private endpoint to the file share so that it can be mounted using NFS.
resource "azurerm_storage_account" "az_file_sa" {
name = "abcdxxxyyyzzz"
resource_group_name = local.resource_group_name
location = var.v_region
account_tier = "Premium"
account_kind = "FileStorage"
account_replication_type = "LRS"
enable_https_traffic_only = false
}
resource "azurerm_storage_share" "file_share" {
name = "fileshare"
storage_account_name = azurerm_storage_account.az_file_sa.name
quota = 100
enabled_protocol = "NFS"
depends_on = [ azurerm_storage_account.az_file_sa ]
}
resource "azurerm_private_endpoint" "fileshare-endpoint" {
name = "fileshare-endpoint"
location = var.v_region
resource_group_name = local.resource_group_name
subnet_id = azurerm_subnet.subnet2.id
private_service_connection {
name = "fileshare-endpoint-connection"
private_connection_resource_id = azurerm_storage_account.az_file_sa.id
is_manual_connection = false
subresource_names = [ "file" ]
}
depends_on = [ azurerm_storage_share.file_share ]
}
This works fine. Now, if we try to create a directory on this file share using below Terraform code
resource "azurerm_storage_share_directory" "xxx" {
name = "dev"
share_name = "fileshare"
storage_account_name = "abcdxxxyyyzzz"
}
the error we get is:
│ Error: checking for presence of existing Directory "dev" (File Share "fileshare" / Storage Account "abcdxxxyyyzzz" / Resource Group "RG_XXX_YO"): directories.Client#Get: Failure sending request: StatusCode=0 -- Original Error: Get "https://abcdxxxyyyzzz.file.core.windows.net/fileshare/dev?restype=directory": read tcp 192.168.1.3:61175->20.60.179.37:443: read: connection reset by peer
Clearly, this share is not accessible over the public HTTPS endpoint.
Is there a way to create a directory using 'azurerm_storage_share_directory' when file share is of type 'NFS'?
We were able to mount the NFS share on a Linux VM (in the same virtual network) using the below commands, where 10.10.2.4 is the private IP of the NFS file share endpoint.
sudo mkdir -p /mount/abcdxxxyyyzzz/fileshare
sudo mount -t nfs 10.10.2.4:/abcdxxxyyyzzz/fileshare /mount/abcdxxxyyyzzz/fileshare -o vers=4,minorversion=1,sec=sys
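To make that mount survive reboots, the same endpoint, path, and mount options can go into /etc/fstab; a sketch reusing the values from the commands above:

```shell
# Append an /etc/fstab entry so the NFS share is remounted at boot
# (same private endpoint IP, share path, and options as the mount command above)
echo "10.10.2.4:/abcdxxxyyyzzz/fileshare /mount/abcdxxxyyyzzz/fileshare nfs vers=4,minorversion=1,sec=sys 0 0" | sudo tee -a /etc/fstab
```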
regards, Yogesh
Full Terraform files:
vnet.tf
resource "azurerm_virtual_network" "vnet" {
name = "yogimogi-vnet"
address_space = ["10.10.0.0/16"]
location = local.region
resource_group_name = local.resource_group_name
depends_on = [ azurerm_resource_group.rg ]
}
resource "azurerm_subnet" "subnet1" {
name = "yogimogi-vnet-subnet1"
resource_group_name = local.resource_group_name
virtual_network_name = azurerm_virtual_network.vnet.name
address_prefixes = ["10.10.1.0/24"]
service_endpoints = ["Microsoft.Storage"]
}
resource "azurerm_subnet" "subnet2" {
name = "yogimogi-vnet-subnet2"
resource_group_name = local.resource_group_name
virtual_network_name = azurerm_virtual_network.vnet.name
address_prefixes = ["10.10.2.0/24"]
service_endpoints = ["Microsoft.Storage"]
}
main.tf
resource "azurerm_resource_group" "rg" {
name = local.resource_group_name
location = local.region
tags = {
description = "Resource group for some testing, Yogesh KETKAR"
createdBy = "AutomationEdge"
createDate = "UTC time: ${timestamp()}"
}
}
resource "azurerm_storage_account" "sa" {
name = local.storage_account_name
resource_group_name = local.resource_group_name
location = local.region
account_tier = "Premium"
account_kind = "FileStorage"
account_replication_type = "LRS"
enable_https_traffic_only = false
depends_on = [ azurerm_resource_group.rg ]
}
resource "azurerm_storage_share" "file_share" {
name = "fileshare"
storage_account_name = azurerm_storage_account.sa.name
quota = 100
enabled_protocol = "NFS"
depends_on = [ azurerm_storage_account.sa ]
}
resource "azurerm_storage_account_network_rules" "network_rule" {
storage_account_id = azurerm_storage_account.sa.id
default_action = "Allow"
ip_rules = ["127.0.0.1"]
virtual_network_subnet_ids = [azurerm_subnet.subnet2.id, azurerm_subnet.subnet1.id]
bypass = ["Metrics"]
}
resource "azurerm_private_endpoint" "fileshare-endpoint" {
name = "fileshare-endpoint"
location = local.region
resource_group_name = local.resource_group_name
subnet_id = azurerm_subnet.subnet2.id
private_service_connection {
name = "fileshare-endpoint-connection"
private_connection_resource_id = azurerm_storage_account.sa.id
is_manual_connection = false
subresource_names = [ "file" ]
}
depends_on = [ azurerm_storage_share.file_share ]
}
resource "azurerm_storage_share_directory" "d1" {
name = "d1"
share_name = azurerm_storage_share.file_share.name
storage_account_name = azurerm_storage_account.sa.name
depends_on = [ azurerm_storage_share.file_share, azurerm_private_endpoint.fileshare-endpoint ]
}
The error is:
╷
│ Error: checking for presence of existing Directory "d1" (File Share "fileshare" / Storage Account "22xdkkdkdkdkdkdkdx22" / Resource Group "RG_Central_US_YOGIMOGI"): directories.Client#Get: Failure sending request: StatusCode=0 -- Original Error: Get
"https://22xdkkdkdkdkdkdkdx22.file.core.windows.net/fileshare/d1?restype=directory": read tcp 10.41.7.110:54240->20.209.18.37:443: read: connection reset by peer
│
│ with azurerm_storage_share_directory.d1,
│ on main.tf line 60, in resource "azurerm_storage_share_directory" "d1":
│ 60: resource "azurerm_storage_share_directory" "d1" {
│
╵
I tried to reproduce the same setup with a private endpoint and NFS enabled,
and got errors because no network rule is created when NFS is enabled.
As the virtual network provides access control for NFS, after vnet creation you must configure a virtual network rule for the file share to be accessed.
resource "azurerm_virtual_network" "example" {
name = "ka-vnet"
address_space = ["10.0.0.0/16"]
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
// tags = local.common_tags
}
resource "azurerm_subnet" "storage" {
name = "ka-subnet"
resource_group_name = data.azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_storage_account" "az_file_sa" {
name = "kaabdx"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
account_tier = "Premium"
account_kind = "FileStorage"
account_replication_type = "LRS"
enable_https_traffic_only = false
//provide network rules
network_rules {
default_action = "Allow"
ip_rules = ["127.0.0.1/24"]
//23.45.1.0/24
virtual_network_subnet_ids = ["${azurerm_subnet.storage.id }"]
}
}
resource "azurerm_private_endpoint" "fileshare-endpoint" {
name = "fileshare-endpoint"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
subnet_id = azurerm_subnet.storage.id
private_service_connection {
name = "fileshare-endpoint-connection"
private_connection_resource_id = azurerm_storage_account.az_file_sa.id
is_manual_connection = false
subresource_names = [ "file" ]
}
depends_on = [ azurerm_storage_share.file_share ]
}
resource "azurerm_storage_share" "file_share" {
name = "fileshare"
storage_account_name = azurerm_storage_account.az_file_sa.name
quota = 100
enabled_protocol = "NFS"
depends_on = [ azurerm_storage_account.az_file_sa ]
}
resource "azurerm_storage_share_directory" "mynewfileshare" {
name = "kadev"
share_name = azurerm_storage_share.file_share.name
storage_account_name = azurerm_storage_account.az_file_sa.name
}
Regarding the error that you got:
Error: checking for presence of existing Directory ... directories.Client#Get: Failure sending request: StatusCode=0 -- Original Error: Get "https://abcdxxxyyyzzz.file.core.windows.net/fileshare/dev?restype=directory": read tcp 192.168.1.3:61175->20.60.179.37:443: read: connection reset by peer
Please note that VNet peering alone will not give access to the file share. Virtual network peering with the virtual network hosting the private endpoint gives NFS share access to clients in the peered virtual networks, but each virtual network or subnet must be individually added to the allowlist.
A "checking for presence of existing Directory" error can also occur if Terraform is not initialized. Run terraform init and then try terraform plan and terraform apply.
References:
Cannot create azurerm_storage_container in azurerm_storage_account that uses network_rules · GitHub
NFS Azure file share problems | learn.microsoft.com

Unable to connect to virtual machine using RDP (Remote Desktop)

I have created a virtual machine using the below terraform code:
Here is the VM code:
# demo instance
resource "azurerm_virtual_machine" "demo-instance" {
  name                  = "${var.prefix}-vm"
  location              = var.resource_group_location
  resource_group_name   = var.resource_group_name
  network_interface_ids = [azurerm_network_interface.demo-instance.id]
  vm_size               = "Standard_A1_v2"

  # this is a demo instance, so we can delete all data on termination
  delete_os_disk_on_termination    = true
  delete_data_disks_on_termination = true

  storage_image_reference {
    publisher = "RedHat"
    offer     = "RHEL"
    sku       = "7-RAW"
    version   = "7.5.2018042521"
  }

  storage_os_disk {
    name              = "RED-HAT-osdisk1"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name  = "MyOS"
    admin_username = "MyUsername"
    admin_password = "Password1234!"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }
}

resource "azurerm_network_interface" "demo-instance" {
  name                = "${var.prefix}-instance1"
  location            = var.resource_group_location
  resource_group_name = var.resource_group_name

  ip_configuration {
    name                          = "instance1"
    subnet_id                     = azurerm_subnet.demo-internal-1.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.demo-instance.id
  }
}

resource "azurerm_network_interface_security_group_association" "allow-ssh" {
  network_interface_id      = azurerm_network_interface.demo-instance.id
  network_security_group_id = azurerm_network_security_group.allow-ssh.id
}

resource "azurerm_public_ip" "demo-instance" {
  name                = "instance1-public-ip"
  location            = var.resource_group_location
  resource_group_name = var.resource_group_name
  allocation_method   = "Dynamic"
}
and here is the network config:
resource "azurerm_virtual_network" "demo" {
name = "${var.prefix}-network"
location = var.resource_group_location
resource_group_name = var.resource_group_name
address_space = ["10.0.0.0/16"]
}
resource "azurerm_subnet" "demo-internal-1" {
name = "${var.prefix}-internal-1"
resource_group_name = var.resource_group_name
virtual_network_name = azurerm_virtual_network.demo.name
address_prefixes = ["10.0.0.0/24"]
}
resource "azurerm_network_security_group" "allow-ssh" {
name = "${var.prefix}-allow-ssh"
location = var.resource_group_location
resource_group_name = var.resource_group_name
security_rule {
name = "SSH"
priority = 1001
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = var.ssh-source-address
destination_address_prefix = "*"
}
}
As a result, I am able to connect to the virtual machine using SSH. However, when I try to connect using RDP, I am faced with the below error:
What I have tried:
I read this document and added an inbound rule to my network security group.
However, I am still not able to connect with RDP.
So far I know that my VM is on the network and running, because I can connect using SSH. But I still don't know why RDP does not work.
Since this is a Linux VM, you can only connect via the SSH protocol, even though you have allowed both 3389 and 22 in the NSG.
I see from the screenshot that you have allowed RDP traffic on the VM you are creating now. But the VM you created is a RHEL server, so you won't be able to RDP into it; you can only SSH. Only Windows VMs can be logged into using RDP.
If you want to log in to the RHEL server from a particular Windows jump box, that is possible: deploy a Windows VM with the RDP port open, and add one rule for the RHEL server where the source IP is the Windows VM. Then you can log in to the Windows VM as a bastion and SSH to the RHEL server from there. Let me know if your query is cleared up.
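The jump-box rule described above can be sketched as a standalone NSG rule attached to the question's allow-ssh security group; the jump box IP (10.0.0.10) and the priority are my assumptions:

```hcl
# Allow SSH to the RHEL server only from the Windows jump box
resource "azurerm_network_security_rule" "ssh_from_jumpbox" {
  name                        = "SSH-from-jumpbox"
  priority                    = 1002
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "22"
  source_address_prefix       = "10.0.0.10" # hypothetical private IP of the Windows jump box
  destination_address_prefix  = "*"
  resource_group_name         = var.resource_group_name
  network_security_group_name = azurerm_network_security_group.allow-ssh.name
}
```

Using a separate azurerm_network_security_rule (rather than another inline security_rule block) keeps the jump-box rule independent of the existing NSG definition.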

Terraform/HCL in Azure issues

I am new to HCL and Terraform and am having issues associating a security group and a backend address pool with the network interfaces. I am creating 2 network interfaces in a single network interface block:
#Create network interface for 2 VMs
resource "azurerm_network_interface" "FrontNetworkInterface" {
  count               = 2
  name                = "niFront${count.index}"
  location            = azurerm_resource_group.PWSDevResourceGroup.location
  resource_group_name = azurerm_resource_group.PWSDevResourceGroup.name

  ip_configuration {
    name                          = "ipconfFrontVM"
    subnet_id                     = azurerm_subnet.PWSDevSubnet.id
    private_ip_address_allocation = "dynamic"
  }
}
I have tried associating in various ways that have produced different errors:
ATTEMPT 1:
#Connect security group to the network interface
resource "azurerm_network_interface_security_group_association" "PWSDevSecurityGroupAssoc" {
  network_interface_id      = azurerm_network_interface.FrontNetworkInterface.id
  network_security_group_id = azurerm_network_security_group.PWSDevSecurityGroup.id
}

#Connect 2 backend ips to the load balancer
resource "azurerm_network_interface_backend_address_pool_association" "BackendIPAssoc" {
  network_interface_id    = azurerm_network_interface.FrontNetworkInterface.id
  ip_configuration_name   = "bipa"
  backend_address_pool_id = azurerm_lb_backend_address_pool.BackendIpPool.id
}
ERRORS:
Error: Missing resource instance key
on front.tf line 85, in resource "azurerm_network_interface_security_group_association" "PWSDevSecurityGroupAssoc":
85: network_interface_id = azurerm_network_interface.FrontNetworkInterface.id
Because azurerm_network_interface.FrontNetworkInterface has "count" set, its
attributes must be accessed on specific instances.
For example, to correlate with indices of a referring resource, use:
azurerm_network_interface.FrontNetworkInterface[count.index]
Error: Missing resource instance key
on front.tf line 91, in resource "azurerm_network_interface_backend_address_pool_association" "BackendIPAssoc":
91: network_interface_id = azurerm_network_interface.FrontNetworkInterface.id
Because azurerm_network_interface.FrontNetworkInterface has "count" set, its
attributes must be accessed on specific instances.
For example, to correlate with indices of a referring resource, use:
azurerm_network_interface.FrontNetworkInterface[count.index]
ATTEMPT 2/3/4 (Using "[count.index]", "[count.index].id", or "[element(azurerm_network_interface.FrontNetworkInterface.*.id, count.index)]" as described in the previous error):
#Connect security group to the network interface
resource "azurerm_network_interface_security_group_association" "PWSDevSecurityGroupAssoc" {
  network_interface_id      = azurerm_network_interface.FrontNetworkInterface[count.index]
  network_security_group_id = azurerm_network_security_group.PWSDevSecurityGroup.id
}

#Connect 2 backend ips to the load balancer
resource "azurerm_network_interface_backend_address_pool_association" "BackendIPAssoc" {
  network_interface_id    = azurerm_network_interface.FrontNetworkInterface[count.index]
  ip_configuration_name   = "bipa"
  backend_address_pool_id = azurerm_lb_backend_address_pool.BackendIpPool.id
}
ERROR (Same result for [count.index].id and [element(azurerm_network_interface.FrontNetworkInterface.*.id, count.index)]):
Error: Reference to "count" in non-counted context
on front.tf line 85, in resource "azurerm_network_interface_security_group_association" "PWSDevSecurityGroupAssoc":
85: network_interface_id = azurerm_network_interface.FrontNetworkInterface[count.index]
The "count" object can only be used in "module", "resource", and "data"
blocks, and only when the "count" argument is set.
Error: Reference to "count" in non-counted context
front.tf line 91, in resource "azurerm_network_interface_backend_address_pool_association" "BackendIPAssoc":
network_interface_id = azurerm_network_interface.FrontNetworkInterface[count.index]
The "count" object can only be used in "module", "resource", and "data"
blocks, and only when the "count" argument is set.
Also, I am receiving this error on my azurerm_virtual_machine block:
line 162, in resource "azurerm_virtual_machine" "FrontEndVirtualMachines":
162: admin_ssh_key {
Blocks of type "admin_ssh_key" are not expected here.
I am following what is shown here:
https://learn.microsoft.com/en-us/azure/developer/terraform/create-linux-virtual-machine-with-infrastructure
As you can see, the admin_ssh_key block is provided. I tried using version 2.0 as used in the scripts; however, I experienced the same result.
Thanks for your help!! :)
When referencing a resource created with count you still need to add the .id. See the following example. For more information see this link.
provider "azurerm" {
version = "~>2.23.0"
features {}
}
resource "azurerm_resource_group" "example" {
name = "example-resources"
location = "East US"
}
resource "azurerm_virtual_network" "example" {
name = "vnet"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
address_space = ["10.0.0.0/16"]
dns_servers = ["10.0.0.4", "10.0.0.5"]
}
resource "azurerm_subnet" "example" {
name = "example"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.0.1.0/24"]
}
resource "azurerm_network_interface" "example" {
count = 2
name = format("int%s", count.index)
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
ip_configuration {
name = "ip"
subnet_id = azurerm_subnet.example.id
private_ip_address_allocation = "dynamic"
}
}
resource "azurerm_network_security_group" "example" {
name = "acceptanceTestSecurityGroup1"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
security_rule {
name = "test123"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "*"
source_address_prefix = "*"
destination_address_prefix = "*"
}
}
resource "azurerm_network_interface_security_group_association" "secgroup" {
count = length(azurerm_network_interface.example)
network_interface_id = azurerm_network_interface.example[count.index].id
network_security_group_id = azurerm_network_security_group.example.id
}
I will admit that I haven't read the whole story, but it looks like your attempt #2/3/4 was pretty close. Where you use [count.index], you need to specify a count, otherwise there is no count to index. So if you just add count = 2 to those two resource blocks (and keep the .id on the reference), it should work.
Better yet, either make the 2 a variable, or use
count = length(azurerm_network_interface.FrontNetworkInterface)
to ensure you don't end up with mismatched numbers when you change the 2 later on.
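Putting both suggestions together against the resource names from the question, a sketch of the corrected association blocks (note that ip_configuration_name must match the name inside the NIC's ip_configuration block, "ipconfFrontVM" here, not an arbitrary string like "bipa"):

```hcl
resource "azurerm_network_interface_security_group_association" "PWSDevSecurityGroupAssoc" {
  count                     = length(azurerm_network_interface.FrontNetworkInterface)
  network_interface_id      = azurerm_network_interface.FrontNetworkInterface[count.index].id
  network_security_group_id = azurerm_network_security_group.PWSDevSecurityGroup.id
}

resource "azurerm_network_interface_backend_address_pool_association" "BackendIPAssoc" {
  count                   = length(azurerm_network_interface.FrontNetworkInterface)
  network_interface_id    = azurerm_network_interface.FrontNetworkInterface[count.index].id
  ip_configuration_name   = "ipconfFrontVM" # must match the NIC's ip_configuration name
  backend_address_pool_id = azurerm_lb_backend_address_pool.BackendIpPool.id
}
```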

azure terraform attaching azure file share to windows machine

Problem statement
I am in the process of creating an Azure VM cluster running Windows. So far I can create an Azure file share and an Azure Windows cluster. I want to attach the file share to each VM in my cluster, but I am unable to find a reference for how to do this on a Windows VM.
Code for this:
resource "azurerm_storage_account" "main" {
name = "stor${var.environment}${var.cost_centre}${var.project}"
location = "${azurerm_resource_group.main.location}"
resource_group_name = "${azurerm_resource_group.main.name}"
account_tier = "${var.storage_account_tier}"
account_replication_type = "${var.storage_replication_type}"
}
resource "azurerm_storage_share" "main" {
name = "storageshare${var.environment}${var.cost_centre}${var.project}"
resource_group_name = "${azurerm_resource_group.main.name}"
storage_account_name = "${azurerm_storage_account.main.name}"
quota = "${var.storage_share_quota}"
}
resource "azurerm_virtual_machine" "vm" {
name = "vm-${var.location_id}-${var.environment}-${var.cost_centre}-${var.project}-${var.seq_id}-${count.index}"
location = "${azurerm_resource_group.main.location}"
resource_group_name = "${azurerm_resource_group.main.name}"
availability_set_id = "${azurerm_availability_set.main.id}"
vm_size = "${var.vm_size}"
network_interface_ids = ["${element(azurerm_network_interface.main.*.id, count.index)}"]
count = "${var.vm_count}"
storage_image_reference {
publisher = "${var.image_publisher}"
offer = "${var.image_offer}"
sku = "${var.image_sku}"
version = "${var.image_version}"
}
storage_os_disk {
name = "osdisk${count.index}"
create_option = "FromImage"
}
os_profile {
computer_name = "${var.vm_name}-${count.index}"
admin_username = "${var.admin_username}"
admin_password = "${var.admin_password}"
}
os_profile_windows_config {}
depends_on = ["azurerm_network_interface.main"]
}
Azure doesn't offer anything like that natively, so you cannot do it directly: you need to create a script and run that script on the VM, using the custom script extension or the DSC extension, or, since Terraform supports running extensions, with Terraform itself.
https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/custom-script-linux
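Since Terraform does support running extensions, a hedged sketch of mapping the share on each Windows VM via the custom script extension (assumes a 2.x+ azurerm provider where the extension takes virtual_machine_id; the drive letter, extension name, and the exact PowerShell command are my assumptions, while the storage account and share references come from the question's resources):

```hcl
resource "azurerm_virtual_machine_extension" "mount_share" {
  count                = var.vm_count
  name                 = "mount-fileshare-${count.index}"
  virtual_machine_id   = azurerm_virtual_machine.vm[count.index].id
  publisher            = "Microsoft.Compute"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.10"

  # protected_settings keeps the storage account key out of the
  # readable extension settings in the Azure portal
  protected_settings = jsonencode({
    commandToExecute = "powershell -ExecutionPolicy Unrestricted -Command \"net use Z: \\\\${azurerm_storage_account.main.name}.file.core.windows.net\\${azurerm_storage_share.main.name} /user:AZURE\\${azurerm_storage_account.main.name} ${azurerm_storage_account.main.primary_access_key} /persistent:yes\""
  })
}
```

Note that a drive mapped this way belongs to the user context the extension runs under; for a service that needs the share, mounting it in the service's own startup script may be more reliable.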
