Terraform Azure Configure VM Backup Policy Fails - azure

I am trying to create a backup policy and enable backup while provisioning the Azure VM using Terraform (Terraform Version - 1.1.13, Azure Provider - 2.90.0). Terraform fails to enable backup with the below error.
Error: waiting for the Azure Backup Protected VM "VM;iaasvmcontainerv2;Test-product-cloud-infra;arulazurebkup-vm" to be true (Resource Group "Test-Product-Cloud-Infra") to provision: context deadline exceeded
│
│ with azurerm_backup_protected_vm.backup,
│ on main.tf line 176, in resource "azurerm_backup_protected_vm" "backup":
│ 176: resource "azurerm_backup_protected_vm" "backup" {
│
Terraform Scripts
resource "azurerm_backup_policy_vm" "example" {
name = "Test-backup-policy"
resource_group_name = "Test-Product-Cloud-Infra"
recovery_vault_name = "backuptest"
backup {
frequency = "Daily"
time = "23:00"
}
retention_daily {
count = 7
}
}
resource "azurerm_backup_protected_vm" "backup" {
resource_group_name = "Test-Product-Cloud-Infra"
recovery_vault_name = "backuptest"
source_vm_id = azurerm_virtual_machine.example.id
backup_policy_id = azurerm_backup_policy_vm.example.id
depends_on = [azurerm_virtual_machine.example,
azurerm_virtual_machine_extension.example,
azurerm_backup_policy_vm.example]
}
When I check the error in the Azure portal for the backup job, I find the below entry.
On further troubleshooting, I get the below when enabling backup in the CLI.

You are getting the error because you are using a recovery vault which is not present in the same location as the VM.
I tested the same as below:
I created the VM in West US and the existing Recovery Services Vault was in East US. So, I got the below error:
To solve the issue, you have to use the same location for all the resources as the Recovery Services Vault, i.e. in my case the same as the resource group (East US):
resource "azurerm_virtual_machine" "main" {
name = "ansuman-vm"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
network_interface_ids = [azurerm_network_interface.example.id]
vm_size = "Standard_DS1_v2"
# Uncomment this line to delete the OS disk automatically when deleting the VM
# delete_os_disk_on_termination = true
# Uncomment this line to delete the data disks automatically when deleting the VM
# delete_data_disks_on_termination = true
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
storage_os_disk {
name = "myosdisk1"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "hostname"
admin_username = "testadmin"
admin_password = "Password1234!"
}
os_profile_linux_config {
disable_password_authentication = false
}
}
data "azurerm_recovery_services_vault" "example" {
name = "recoveryvaultansuman"
resource_group_name = data.azurerm_resource_group.example.name
}
resource "azurerm_backup_policy_vm" "example" {
name = "ansuman-recovery-vault-policy"
resource_group_name = data.azurerm_resource_group.example.name
recovery_vault_name = data.azurerm_recovery_services_vault.example.name
backup {
frequency = "Daily"
time = "23:00"
}
retention_daily {
count = 7
}
}
resource "azurerm_backup_protected_vm" "vm1" {
resource_group_name = data.azurerm_resource_group.example.name
recovery_vault_name = data.azurerm_recovery_services_vault.example.name
source_vm_id = azurerm_virtual_machine.main.id
backup_policy_id = azurerm_backup_policy_vm.example.id
}
Output:
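A guard for the region mismatch described in this answer, as an added example that is not part of the original answer: it reuses the answer's resource names and adds a `precondition` so that a vault/VM region mismatch fails at plan time instead of surfacing later as "context deadline exceeded". It assumes Terraform 1.2 or later (where `precondition` blocks exist) and that the vault data source exports `location`; the `locals` names are hypothetical.

```hcl
# Added example, not from the original answer. Requires Terraform >= 1.2.
locals {
  # Normalise "East US" and "eastus" to one form before comparing.
  vault_location = lower(replace(data.azurerm_recovery_services_vault.example.location, " ", ""))
  vm_location    = lower(replace(azurerm_virtual_machine.main.location, " ", ""))
}

resource "azurerm_backup_protected_vm" "vm1" {
  resource_group_name = data.azurerm_resource_group.example.name
  recovery_vault_name = data.azurerm_recovery_services_vault.example.name
  source_vm_id        = azurerm_virtual_machine.main.id
  backup_policy_id    = azurerm_backup_policy_vm.example.id

  lifecycle {
    # Evaluated during plan/apply before the backup item is created.
    precondition {
      condition     = local.vault_location == local.vm_location
      error_message = "The Recovery Services vault and the VM are in different Azure regions. Use a vault in the VM's region."
    }
  }
}
```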

Related

Terraform azurerm_storage_share_directory does not work with file share 'NFS'

We created an Azure storage account with the intention of creating an 'Azure File' to be mounted using NFS (default is SMB). Below is the Terraform code which creates a storage account, a file share and a private endpoint to the file share so that it can be mounted using NFS.
resource "azurerm_storage_account" "az_file_sa" {
name = "abcdxxxyyyzzz"
resource_group_name = local.resource_group_name
location = var.v_region
account_tier = "Premium"
account_kind = "FileStorage"
account_replication_type = "LRS"
enable_https_traffic_only = false
}
resource "azurerm_storage_share" "file_share" {
name = "fileshare"
storage_account_name = azurerm_storage_account.az_file_sa.name
quota = 100
enabled_protocol = "NFS"
depends_on = [ azurerm_storage_account.az_file_sa ]
}
resource "azurerm_private_endpoint" "fileshare-endpoint" {
name = "fileshare-endpoint"
location = var.v_region
resource_group_name = local.resource_group_name
subnet_id = azurerm_subnet.subnet2.id
private_service_connection {
name = "fileshare-endpoint-connection"
private_connection_resource_id = azurerm_storage_account.az_file_sa.id
is_manual_connection = false
subresource_names = [ "file" ]
}
depends_on = [ azurerm_storage_share.file_share ]
}
This works fine. Now, if we try to create a directory on this file share using below Terraform code
resource "azurerm_storage_share_directory" "xxx" {
name = "dev"
share_name = "fileshare"
storage_account_name = "abcdxxxyyyzzz"
}
error we get is,
│ Error: checking for presence of existing Directory "dev" (File Share "fileshare" / Storage Account "abcdxxxyyyzzz" / Resource Group "RG_XXX_YO"): directories.Client#Get: Failure sending request: StatusCode=0 -- Original Error: Get "https://abcdxxxyyyzzz.file.core.windows.net/fileshare/dev?restype=directory": read tcp 192.168.1.3:61175->20.60.179.37:443: read: connection reset by peer
Clearly, this share is not accessible over public https endpoint.
Is there a way to create a directory using 'azurerm_storage_share_directory' when file share is of type 'NFS'?
We were able to mount NFS on a Linux VM (in the same virtual network) using below code where 10.10.2.4 is private IP of the NFS fileshare endpoint.
sudo mkdir -p /mount/abcdxxxyyyzzz/fileshare
sudo mount -t nfs 10.10.2.4:/abcdxxxyyyzzz/fileshare /mount/abcdxxxyyyzzz/fileshare -o vers=4,minorversion=1,sec=sys
regards, Yogesh
full Terraform files
vnet.tf
resource "azurerm_virtual_network" "vnet" {
name = "yogimogi-vnet"
address_space = ["10.10.0.0/16"]
location = local.region
resource_group_name = local.resource_group_name
depends_on = [ azurerm_resource_group.rg ]
}
resource "azurerm_subnet" "subnet1" {
name = "yogimogi-vnet-subnet1"
resource_group_name = local.resource_group_name
virtual_network_name = azurerm_virtual_network.vnet.name
address_prefixes = ["10.10.1.0/24"]
service_endpoints = ["Microsoft.Storage"]
}
resource "azurerm_subnet" "subnet2" {
name = "yogimogi-vnet-subnet2"
resource_group_name = local.resource_group_name
virtual_network_name = azurerm_virtual_network.vnet.name
address_prefixes = ["10.10.2.0/24"]
service_endpoints = ["Microsoft.Storage"]
}
main.tf
resource "azurerm_resource_group" "rg" {
name = local.resource_group_name
location = local.region
tags = {
description = "Resource group for some testing, Yogesh KETKAR"
createdBy = "AutomationEdge"
createDate = "UTC time: ${timestamp()}"
}
}
resource "azurerm_storage_account" "sa" {
name = local.storage_account_name
resource_group_name = local.resource_group_name
location = local.region
account_tier = "Premium"
account_kind = "FileStorage"
account_replication_type = "LRS"
enable_https_traffic_only = false
depends_on = [ azurerm_resource_group.rg ]
}
resource "azurerm_storage_share" "file_share" {
name = "fileshare"
storage_account_name = azurerm_storage_account.sa.name
quota = 100
enabled_protocol = "NFS"
depends_on = [ azurerm_storage_account.sa ]
}
resource "azurerm_storage_account_network_rules" "network_rule" {
storage_account_id = azurerm_storage_account.sa.id
default_action = "Allow"
ip_rules = ["127.0.0.1"]
virtual_network_subnet_ids = [azurerm_subnet.subnet2.id, azurerm_subnet.subnet1.id]
bypass = ["Metrics"]
}
resource "azurerm_private_endpoint" "fileshare-endpoint" {
name = "fileshare-endpoint"
location = local.region
resource_group_name = local.resource_group_name
subnet_id = azurerm_subnet.subnet2.id
private_service_connection {
name = "fileshare-endpoint-connection"
private_connection_resource_id = azurerm_storage_account.sa.id
is_manual_connection = false
subresource_names = [ "file" ]
}
depends_on = [ azurerm_storage_share.file_share ]
}
resource "azurerm_storage_share_directory" "d1" {
name = "d1"
share_name = azurerm_storage_share.file_share.name
storage_account_name = azurerm_storage_account.sa.name
depends_on = [ azurerm_storage_share.file_share, azurerm_private_endpoint.fileshare-endpoint ]
}
error is
╷
│ Error: checking for presence of existing Directory "d1" (File Share "fileshare" / Storage Account "22xdkkdkdkdkdkdkdx22" / Resource Group "RG_Central_US_YOGIMOGI"): directories.Client#Get: Failure sending request: StatusCode=0 -- Original Error: Get
"https://22xdkkdkdkdkdkdkdx22.file.core.windows.net/fileshare/d1?restype=directory": read tcp 10.41.7.110:54240->20.209.18.37:443: read: connection reset by peer
│
│ with azurerm_storage_share_directory.d1,
│ on main.tf line 60, in resource "azurerm_storage_share_directory" "d1":
│ 60: resource "azurerm_storage_share_directory" "d1" {
│
╵
I tried to reproduce the same with a private endpoint and NFS enabled, and got errors as the network rule is not created when NFS is enabled.
As the virtual network provides access control for NFS, after VNet creation you must configure a virtual network rule for the file share to be accessed.
resource "azurerm_virtual_network" "example" {
name = "ka-vnet"
address_space = ["10.0.0.0/16"]
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
// tags = local.common_tags
}
resource "azurerm_subnet" "storage" {
name = "ka-subnet"
resource_group_name = data.azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_storage_account" "az_file_sa" {
name = "kaabdx"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
account_tier = "Premium"
account_kind = "FileStorage"
account_replication_type = "LRS"
enable_https_traffic_only = false
//provide network rules
network_rules {
default_action = "Allow"
ip_rules = ["127.0.0.1/24"]
//23.45.1.0/24
virtual_network_subnet_ids = ["${azurerm_subnet.storage.id }"]
}
}
resource "azurerm_private_endpoint" "fileshare-endpoint" {
name = "fileshare-endpoint"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
subnet_id = azurerm_subnet.storage.id
private_service_connection {
name = "fileshare-endpoint-connection"
private_connection_resource_id = azurerm_storage_account.az_file_sa.id
is_manual_connection = false
subresource_names = [ "file" ]
}
depends_on = [ azurerm_storage_share.file_share ]
}
resource "azurerm_storage_share" "file_share" {
name = "fileshare"
storage_account_name = azurerm_storage_account.az_file_sa.name
quota = 100
enabled_protocol = "NFS"
depends_on = [ azurerm_storage_account.az_file_sa ]
}
resource "azurerm_storage_share_directory" "mynewfileshare" {
name = "kadev"
share_name = azurerm_storage_share.file_share.name
storage_account_name = azurerm_storage_account.az_file_sa.name
}
Regarding the error that you got:
Error: checking for presence of existing Directory ... directories.Client#Get: Failure sending request: StatusCode=0 -- Original Error: Get "https://abcdxxxyyyzzz.file.core.windows.net/fileshare/dev?restype=directory": read tcp 192.168.1.3:61175->20.60.179.37:443: read: connection reset by peer
Please note that:
VNet peering will not be able to give access to file share. Virtual network peering with virtual networks hosted in the private endpoint give NFS share access to the clients in peered virtual networks. Each virtual network or subnet must be individually added to the allowlist.
A "checking for presence of existing Directory" error occurs if Terraform is not initiated. Run terraform init and then try terraform plan and terraform apply.
References:
Cannot create azurerm_storage_container in azurerm_storage_account that uses network_rules · GitHub
NFS Azure file share problems | learn.microsoft.com

How to provision Terraform Azure Linux VM - You have not accepted the legal terms on this subscription

I am using terraform with azure to provision an ubuntu virtual machine and I am getting the below error:
creating Linux Virtual Machine: (Name "test-bastion" / Resource Group "ssi-test"): compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="ResourcePurchaseValidationFailed" Message="User failed validation to purchase resources. Error message: 'You have not accepted the legal terms on this subscription: 'xxxxx-xxxxx-xxxxx-xxxx' for this plan.
I can spin up VMs through the Azure portal but not with Terraform.
Here's my terraform module
resource "azurerm_linux_virtual_machine" "linux_virtual_machine" {
name = join("-", [var.environment, "bastion"])
resource_group_name = var.resource_group_name
location = var.location
size = var.bastion_size
admin_username = var.bastion_admin_username
computer_name = join("-", [var.project, var.environment, "bastion"])
custom_data = filebase64(var.bastion_custom_data_path)
network_interface_ids = [
azurerm_network_interface.bastion_nic.id
]
admin_ssh_key {
username = var.bastion_admin_username
public_key = file(var.bastion_public_key_path)
}
source_image_reference {
publisher = var.bastion_publisher
offer = var.bastion_offer
sku = var.bastion_sku
version = var.bastion_version
}
plan {
name = var.bastion_sku
publisher = var.bastion_publisher
product = var.bastion_offer
}
os_disk {
name = join("-", [var.project, var.environment, "bastion-os-disk"])
storage_account_type = "Standard_LRS"
caching = "ReadWrite"
disk_size_gb = var.bastion_os_disk_size_gb
}
}
# Create network interface
resource "azurerm_network_interface" "bastion_nic" {
name = join("-", [var.project, var.environment, "bastion-nic"])
location = var.location
resource_group_name = var.resource_group_name
depends_on = [azurerm_public_ip.bastion_public_ip]
ip_configuration {
name = join("-", [var.project, var.environment, "bastion-nic-conf"])
subnet_id = var.bastion_subnet_id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.bastion_public_ip.id
}
tags = var.default_tags
}
and here are the variable values (some are removed)
bastion_admin_username = "ubuntu"
bastion_os_disk_size_gb = "60"
bastion_public_key_path = "./data/keys/bastion.pub"
bastion_size = "Standard_B2s"
bastion_publisher = "canonical"
bastion_offer = "0001-com-ubuntu-server-focal"
bastion_sku = "20_04-lts-gen2"
bastion_version = "latest"
bastion_custom_data_path = "./data/scripts/bastion.sh"
Can someone help me?
Plan block is mostly for BYOS images like RedHat, Arista & Palo Alto. Below flavor doesn't need any plan as this can be used without accepting marketplace terms first before using them via automation.
> az vm image list-skus -l westeurope -p canonical -f 0001-com-ubuntu-server-focal
{
"extendedLocation": null,
"id": "/Subscriptions/b500a058-6396-45db-a15d-3f31913e84a5/Providers/Microsoft.Compute/Locations/westeurope/Publishers/canonical/ArtifactTypes/VMImage/Offers/0001-com-ubuntu-server-focal/Skus/20_04-lts-gen2",
"location": "westeurope",
"name": "20_04-lts-gen2",
"properties": {
"automaticOSUpgradeProperties": {
"automaticOSUpgradeSupported": false
}
},
"tags": null
}
If you remove below plan block from azurerm_linux_virtual_machine resource, it should work for the image flavor you picked.
plan {
name = var.bastion_sku
publisher = var.bastion_publisher
product = var.bastion_offer
}
The reason it works via the portal is that the ARM template doesn't add a plan block there. You can download and verify the ARM template before creating the VM on the portal if you want.
Accept the agreement first, with this resource: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/marketplace_agreement

Create SQL virtual machine using terraform throwing error

Below is the complete code that I am using to create the SQL virtual machine. While creating the resources I get the below mentioned error. I tried to debug by
pinning the azurerm to a specific version,
increased the quota limit of the subscription for the location.
It was working well previously and is suddenly throwing the errors.
#Database Server 1
provider "azurerm" {
version = "2.10"
features {}
}
resource "azurerm_resource_group" "RG" {
name = "resource_db"
location = var.location
}
resource "azurerm_virtual_network" "VN" {
name = "vnet_db"
resource_group_name = azurerm_resource_group.RG.name
location = azurerm_resource_group.RG.location
address_space = ["10.10.0.0/16"]
}
resource "azurerm_subnet" "DBSN" {
name = "snet_db"
resource_group_name = azurerm_resource_group.RG.name
virtual_network_name = azurerm_virtual_network.VN.name
address_prefixes = ["10.10.2.0/24"]
}
resource "azurerm_public_ip" "DBAZPIP" {
name = "pip_db"
resource_group_name = azurerm_resource_group.RG.name
location = azurerm_resource_group.RG.location
allocation_method = "Static"
}
resource "azurerm_network_security_group" "NSGDB" {
name = "nsg_db"
location = azurerm_resource_group.RG.location
resource_group_name = azurerm_resource_group.RG.name
# RDP
security_rule {
name = "RDP"
priority = 300
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "3389"
source_address_prefix = "*"
destination_address_prefix = "*"
}
security_rule {
name = "SQL"
priority = 310
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "1433"
source_address_prefix = "*"
destination_address_prefix = "*"
}
}
resource "azurerm_subnet_network_security_group_association" "mainDB" {
subnet_id = azurerm_subnet.DBSN.id
network_security_group_id = azurerm_network_security_group.NSGDB.id
}
resource "azurerm_network_interface" "vmnicprimary" {
name = "nic_db"
location = azurerm_resource_group.RG.location
resource_group_name = azurerm_resource_group.RG.name
ip_configuration {
name = "ipConfig_db"
subnet_id = azurerm_subnet.DBSN.id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.DBAZPIP.id
}
}
resource "azurerm_virtual_machine" "DatabaseServer" {
name = "vm_db"
location = azurerm_resource_group.RG.location
resource_group_name = azurerm_resource_group.RG.name
network_interface_ids = [azurerm_network_interface.vmnicprimary.id,]
vm_size = "Standard_D4s_v3"
storage_image_reference {
publisher = "MicrosoftSQLServer"
offer = "SQL2017-WS2016"
sku = "Enterprise"
version = "latest"
}
storage_os_disk {
name = "osdisk_db"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Premium_LRS"
}
os_profile {
computer_name = "compdb"
admin_username = "vmadmin"
admin_password = "P#ssW0rd123456"
}
os_profile_windows_config {
provision_vm_agent = true
enable_automatic_upgrades = true
}
}
resource "azurerm_mssql_virtual_machine" "example" {
virtual_machine_id = azurerm_virtual_machine.DatabaseServer.id
sql_license_type = "PAYG"
sql_connectivity_type = "PUBLIC"
}
Running the above code throws the following error:
Error: retrieving Sql Virtual Machine (Sql Virtual Machine Name "vm_m2m80" / Resource Group "resource_m2m80"): sqlvirtualmachine.SQLVirtualMachinesClient#Get: Failure responding to request: StatusCode=500 -- Original Error: autorest/azure: Service returned an error. Status=500 Code="InternalServerError" Message="An unexpected error occured while processing the request. Tracking ID: '9a1622b0-f7d1-4070-96c0-ca67d66a3522'"
on main.tf line 117, in resource "azurerm_mssql_virtual_machine" "example":
117: resource "azurerm_mssql_virtual_machine" "example" {
TLDR: It has been fixed!!
Update from Microsoft:
The fix has been released
"Hope this finds you well.
We have confirmed internally, there will be a fix for this issue soon. I will update you once it is deployed."
We have the same thing, failing on every single build, using various Terraform and Azure API versions. This started happening two days ago for us. When trying to import to state it times out as well.
Error: reading Sql Virtual Machine (Sql Virtual Machine Name "sqlvmname" / Resource Group "resource group"): sqlvirtualmachine.SQLVirtualMachinesClient#Get: Failure sending request: StatusCode=500 -- Original Error: context deadline exceeded
I believe this is an API issue. We engaged Microsoft Support and they are able to reproduce the issue using this page (thank you :) ). They are checking internally and are engaging more resources at Microsoft to check it. In the meantime I don't think there is anything that can be done.
One possible workaround, seeing as this actually does create the resource in Azure, may be to create it using Terraform then comment out your code, and since it's not in state it won't delete it. Not pretty.

Terraform - Import Azure VMs to state file using modules

I'm creating VMs using the script below beginning with "# Script to create VM". The script is being called from a higher level directory so as to create the VMs using modules, the call looks something like in the code below starting with "#Template..". The problem is that we are missing the state for a few VMs that were created during a previous run. I've tried importing the VM itself but looking at the state file it does not appear anything similar to the ones already there created using the bottom script. Any help would be great.
#Template to call VM Script below
module "<virtual_machine_name>" {
source = "./vm"
virtual_machine_name = "<virtual_machine_name>"
resource_group_name = "<resource_group_name>"
availability_set_name = "<availability_set_name>"
virtual_machine_size = "<virtual_machine_size>"
subnet_name = "<subnet_name>"
private_ip = "<private_ip>"
# optional:
production = true # default is false
data_disk_name = ["<disk1>","<disk2>"]
data_disk_size = ["50","100"] # size is in GB
}
# Script to create VM
data azurerm_resource_group rgdata02 {
name = "${var.resource_group_name}"
}
data azurerm_subnet sndata02 {
name = "${var.subnet_name}"
resource_group_name = "${var.core_resource_group_name}"
virtual_network_name = "${var.virtual_network_name}"
}
data azurerm_availability_set availsetdata02 {
name = "${var.availability_set_name}"
resource_group_name = "${var.resource_group_name}"
}
data azurerm_backup_policy_vm bkpoldata02 {
name = "${var.backup_policy_name}"
recovery_vault_name = "${var.recovery_services_vault_name}"
resource_group_name = "${var.core_resource_group_name}"
}
data azurerm_log_analytics_workspace law02 {
name = "${var.log_analytics_workspace_name}"
resource_group_name = "${var.core_resource_group_name}"
}
#===================================================================
# Create NIC
#===================================================================
resource "azurerm_network_interface" "vmnic02" {
name = "nic${var.virtual_machine_name}"
location = "${data.azurerm_resource_group.rgdata02.location}"
resource_group_name = "${var.resource_group_name}"
ip_configuration {
name = "ipcnfg${var.virtual_machine_name}"
subnet_id = "${data.azurerm_subnet.sndata02.id}"
private_ip_address_allocation = "Static"
private_ip_address = "${var.private_ip}"
}
}
#===================================================================
# Create VM with Availability Set
#===================================================================
resource "azurerm_virtual_machine" "vm02" {
count = var.avail_set != "" ? 1 : 0
depends_on = [azurerm_network_interface.vmnic02]
name = "${var.virtual_machine_name}"
location = "${data.azurerm_resource_group.rgdata02.location}"
resource_group_name = "${var.resource_group_name}"
network_interface_ids = [azurerm_network_interface.vmnic02.id]
vm_size = "${var.virtual_machine_size}"
availability_set_id = "${data.azurerm_availability_set.availsetdata02.id}"
tags = var.tags
# This means the OS Disk will be deleted when Terraform destroys the Virtual Machine
# NOTE: This may not be optimal in all cases.
delete_os_disk_on_termination = true
os_profile {
computer_name = "${var.virtual_machine_name}"
admin_username = "__VMUSER__"
admin_password = "__VMPWD__"
}
os_profile_linux_config {
disable_password_authentication = false
}
storage_image_reference {
id = "${var.image_id}"
}
storage_os_disk {
name = "${var.virtual_machine_name}osdisk"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Premium_LRS"
os_type = "Linux"
}
boot_diagnostics {
enabled = true
storage_uri = "${var.boot_diagnostics_uri}"
}
}
#===================================================================
# Create VM without Availability Set
#===================================================================
resource "azurerm_virtual_machine" "vm03" {
count = var.avail_set == "" ? 1 : 0
depends_on = [azurerm_network_interface.vmnic02]
name = "${var.virtual_machine_name}"
location = "${data.azurerm_resource_group.rgdata02.location}"
resource_group_name = "${var.resource_group_name}"
network_interface_ids = [azurerm_network_interface.vmnic02.id]
vm_size = "${var.virtual_machine_size}"
# availability_set_id = "${data.azurerm_availability_set.availsetdata02.id}"
tags = var.tags
# This means the OS Disk will be deleted when Terraform destroys the Virtual Machine
# NOTE: This may not be optimal in all cases.
delete_os_disk_on_termination = true
os_profile {
computer_name = "${var.virtual_machine_name}"
admin_username = "__VMUSER__"
admin_password = "__VMPWD__"
}
os_profile_linux_config {
disable_password_authentication = false
}
storage_image_reference {
id = "${var.image_id}"
}
storage_os_disk {
name = "${var.virtual_machine_name}osdisk"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Premium_LRS"
os_type = "Linux"
}
boot_diagnostics {
enabled = true
storage_uri = "${var.boot_diagnostics_uri}"
}
}
#===================================================================
# Set Monitoring and Log Analytics Workspace
#===================================================================
resource "azurerm_virtual_machine_extension" "oms_mma02" {
count = var.bootstrap ? 1 : 0
name = "${var.virtual_machine_name}-OMSExtension"
virtual_machine_id = "${azurerm_virtual_machine.vm02.id}"
publisher = "Microsoft.EnterpriseCloud.Monitoring"
type = "OmsAgentForLinux"
type_handler_version = "1.8"
auto_upgrade_minor_version = true
settings = <<SETTINGS
{
"workspaceId" : "${data.azurerm_log_analytics_workspace.law02.workspace_id}"
}
SETTINGS
protected_settings = <<PROTECTED_SETTINGS
{
"workspaceKey" : "${data.azurerm_log_analytics_workspace.law02.primary_shared_key}"
}
PROTECTED_SETTINGS
}
#===================================================================
# Associate VM to Backup Policy
#===================================================================
resource "azurerm_backup_protected_vm" "vm02" {
count = var.bootstrap ? 1 : 0
resource_group_name = "${var.core_resource_group_name}"
recovery_vault_name = "${var.recovery_services_vault_name}"
source_vm_id = "${azurerm_virtual_machine.vm02.id}"
backup_policy_id = "${data.azurerm_backup_policy_vm.bkpoldata02.id}"
}
As I understand it, you do not understand Terraform import clearly, so I will show you what it means.
When you want to import pre-existing resources, you first need to configure the resources in the Terraform files to match how the existing resources are configured. All the resources will then be imported into the state file.
Another caveat currently is that only a single resource can be imported into a state file at a time.
When you want to import the resources into a module, I assume a folder structure like this:
testingimportfolder
├── main.tf
├── terraform.tfstate
├── terraform.tfstate.backup
└── module
    └── main.tf
And the main.tf file in the folder testingimportfolder sets the module block like this:
module "importlab" {
source = "./module"
...
}
After you finish importing all the resources into the state file, you can see the output of the command terraform state list like this:
module.importlab.azurerm_network_security_group.nsg
module.importlab.azurerm_resource_group.rg
module.importlab.azurerm_virtual_network.vnet
All the resource names should look like module.module_name.azurerm_xxxx.resource_name. If you use a module inside the module, I assume a folder structure like this:
importmodules
├── main.tf
└── modules
    └── vm
        ├── main.tf
        └── module
            └── main.tf
And the file importmodules/modules/vm/main.tf looks like this:
module "azurevm" {
source = "./module"
...
}
Then, after you finish importing all the resources into the state file, you can see the output of the command terraform state list like this:
module.vm.module.azurevm.azurerm_network_interface.example
Yes, it is just like what you have got. The state file will store your existing resources as you quote the modules one by one. So you need to plan your code and modules carefully and clearly, or you will confuse yourself.
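The module-qualified addresses described in this answer, applied to the question's `./vm` module, as an added example that is not part of the original question or answer. It assumes Terraform 1.5 or later (where `import` blocks exist) and a module call named `appvm01` (hypothetical); every `<...>` value is a placeholder for your own subscription and resource group.

```hcl
# Added example. Place in the root module next to: module "appvm01" { source = "./vm" ... }
# The module call must already exist with the same arguments used for the VMs in state.
# Data sources in the module are read on every plan and are never imported.

# NIC: declared without count in the module, so the address has no index.
import {
  to = module.appvm01.azurerm_network_interface.vmnic02
  id = "/subscriptions/<subscription-id>/resourceGroups/<resource_group_name>/providers/Microsoft.Network/networkInterfaces/nicappvm01"
}

# VM: vm02 and vm03 are declared with count, so the single instance is [0].
# Target vm02 when the VM is in an availability set, vm03 when it is not.
import {
  to = module.appvm01.azurerm_virtual_machine.vm02[0]
  id = "/subscriptions/<subscription-id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/virtualMachines/appvm01"
}

# OMS extension: also count-based (count = var.bootstrap ? 1 : 0).
import {
  to = module.appvm01.azurerm_virtual_machine_extension.oms_mma02[0]
  id = "/subscriptions/<subscription-id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/virtualMachines/appvm01/extensions/appvm01-OMSExtension"
}
```

Run `terraform plan` and compare the imported addresses with `terraform state list` for a VM that is already tracked; the plan should show imports and no replacements before you apply.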

azure terraform attaching azure file share to windows machine

Problem statement
I am in the process of creating an Azure VM cluster running Windows. So far I can create an Azure file share and an Azure Windows cluster. I want to attach the created file share to each VM in my cluster, but I am unable to find a reference on how to add it on a Windows VM.
code for this
resource "azurerm_storage_account" "main" {
name = "stor${var.environment}${var.cost_centre}${var.project}"
location = "${azurerm_resource_group.main.location}"
resource_group_name = "${azurerm_resource_group.main.name}"
account_tier = "${var.storage_account_tier}"
account_replication_type = "${var.storage_replication_type}"
}
resource "azurerm_storage_share" "main" {
name = "storageshare${var.environment}${var.cost_centre}${var.project}"
resource_group_name = "${azurerm_resource_group.main.name}"
storage_account_name = "${azurerm_storage_account.main.name}"
quota = "${var.storage_share_quota}"
}
resource "azurerm_virtual_machine" "vm" {
name = "vm-${var.location_id}-${var.environment}-${var.cost_centre}-${var.project}-${var.seq_id}-${count.index}"
location = "${azurerm_resource_group.main.location}"
resource_group_name = "${azurerm_resource_group.main.name}"
availability_set_id = "${azurerm_availability_set.main.id}"
vm_size = "${var.vm_size}"
network_interface_ids = ["${element(azurerm_network_interface.main.*.id, count.index)}"]
count = "${var.vm_count}"
storage_image_reference {
publisher = "${var.image_publisher}"
offer = "${var.image_offer}"
sku = "${var.image_sku}"
version = "${var.image_version}"
}
storage_os_disk {
name = "osdisk${count.index}"
create_option = "FromImage"
}
os_profile {
computer_name = "${var.vm_name}-${count.index}"
admin_username = "${var.admin_username}"
admin_password = "${var.admin_password}"
}
os_profile_windows_config {}
depends_on = ["azurerm_network_interface.main"]
}
Azure doesn't offer anything like that, so you cannot do that natively. You need to create a script and run that script on the VM using the script extension/DSC extension, or, if Terraform supports that, with Terraform.
https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/custom-script-linux
