I am trying to create an Azure VM in the Germany West Central region, but I am getting the following error:
Error: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status= Code="SkuNotAvailable" Message="The requested size for resource '/subscriptions//resourceGroups/shared-rg/providers/Microsoft.Compute/virtualMachines/jumphost' is currently not available in location 'germanywestcentral' zones '' for subscription ''. Please try another size or deploy to a different location or zones. See https://aka.ms/azureskunotavailable for details."
│
│ with module.jump_host_vm.azurerm_virtual_machine.vm,
│ on modules/virtual-machine/main.tf line 1, in resource "azurerm_virtual_machine" "vm":
│ 1: resource "azurerm_virtual_machine" "vm" {
I am using the Standard_A1_v2 size and a SKU of 22.04-LTS. Please find my Terraform code below:
resource "azurerm_virtual_machine" "vm" {
name = var.vm_name
location = var.location
resource_group_name = var.rg_name
network_interface_ids = var.nic_id
vm_size = var.vm_size #"Standard_A1_v2"
delete_os_disk_on_termination = true
delete_data_disks_on_termination = true
storage_image_reference {
publisher = var.storage_image_reference.publisher #"Canonical"
offer = var.storage_image_reference.offer #"UbuntuServer"
sku = var.storage_image_reference.sku #"20.04-LTS"
version = var.storage_image_reference.version #"latest"
}
storage_os_disk {
name = var.storage_os_disk.name #"myosdisk1"
caching = var.storage_os_disk.caching #"ReadWrite"
create_option = var.storage_os_disk.create_option #"FromImage"
managed_disk_type = var.storage_os_disk.managed_disk_type #"Standard_LRS"
}
os_profile_linux_config {
disable_password_authentication = true
}
tags = merge(var.common_tags)
}
and the values for the above are as follows:
jump_host_vm_name = "jumphost"
jump_host_vm_size = "Standard_A1_v2"
jump_host_storage_image_reference = {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "22.04-LTS"
version = "latest"
}
jump_host_storage_os_disk = {
name = "myosdisk"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
Can someone help me understand why it is not working? According to the Azure site [1], this VM size is available in the Germany region.
[1] - https://azure.microsoft.com/en-us/explore/global-infrastructure/products-by-region/?regions=all&products=virtual-machines
It seems the Canonical:UbuntuServer:22.04-LTS:latest image is not available (it is still in preview); you can use the 16.04-LTS or 19_10-daily-gen2 SKU instead.
For the 16.04-LTS SKU, the Standard_A1_v2 VM size works.
For the 19_10-daily-gen2 SKU, a supported VM size is Standard_DS2_v2.
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS" //"19_10-daily-gen2"
version = "latest"
}
Here is the PowerShell command to get the supported SKU versions:
Get-AzVMImageSku -Location "Germany West Central" -PublisherName "Canonical" -Offer "UbuntuServer" | Select Skus
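If you prefer to stay in Terraform, the following is a minimal sketch (assuming the azurerm provider's azurerm_platform_image data source) that should fail at plan time if the publisher/offer/SKU combination is not available in that region:
# Sanity check: look up the latest platform image for this publisher/offer/sku in the region.
# The data source errors during plan if no matching image exists there.
data "azurerm_platform_image" "ubuntu" {
  location  = "Germany West Central"
  publisher = "Canonical"
  offer     = "UbuntuServer"
  sku       = "16.04-LTS"
}
output "ubuntu_image_id" {
  value = data.azurerm_platform_image.ubuntu.id
}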
Please find a sample code reference below.
main.tf is as follows:
data "azurerm_resource_group" "example" {
name = "***********"
}
data "azuread_client_config" "current" {}
resource "azurerm_virtual_network" "puvnet" {
name = "Public_VNET"
resource_group_name = data.azurerm_resource_group.example.name
location = "Germany West Central"
address_space = ["10.19.0.0/16"]
dns_servers = ["10.19.0.4", "10.19.0.5"]
}
resource "azurerm_subnet" "osubnet" {
name = "Outer_Subnet"
resource_group_name = data.azurerm_resource_group.example.name
address_prefixes = ["10.19.1.0/24"]
virtual_network_name = azurerm_virtual_network.puvnet.name
}
resource "azurerm_network_interface" "main" {
name = "testdemo"
location = "Germany West Central"
resource_group_name = data.azurerm_resource_group.example.name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.osubnet.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_virtual_machine" "main" {
name = "vmjumphost"
location = "Germany West Central"
resource_group_name = data.azurerm_resource_group.example.name
network_interface_ids = [azurerm_network_interface.main.id]
vm_size = "Standard_A1_v2"
storage_image_reference {
offer = "UbuntuServer"
publisher = "Canonical"
sku = "16.04-LTS" // use "19_10-daily-gen2" with a Standard_DS2_v2 VM size instead
version = "latest"
}
storage_os_disk {
name = "myosdisk1"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "hostname"
admin_username = "********"
admin_password = "*********"
}
os_profile_linux_config {
disable_password_authentication = false
}
tags = {
environment = "staging"
}
}
The provider file is as follows:
terraform {
required_version = "~>1.3.3"
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = ">=3.0.0"
}
}
}
provider "azurerm" {
features {}
skip_provider_registration = true
}
Run:
terraform plan
then apply:
terraform apply -auto-approve
Verify the deployment from the Azure portal UI.
Related
I'm running azurerm_mssql_virtual_machine to build a SQL Server virtual machine from a custom image (an image configured with a SQL Server 2016 prepared image).
This is the code that I am running:
resource "azurerm_mssql_virtual_machine" "mssql_vm" {
provider = azurerm.spoke-subscription
virtual_machine_id = azurerm_windows_virtual_machine.sql_server.id
sql_license_type = "PAYG"
sql_connectivity_port = "49535"
sql_connectivity_update_username = var.sql_login
sql_connectivity_update_password = var.sql_password
sql_instance {
collation = "Latin1_General_CI_AS"
}
assessment {
enabled = true
run_immediately = true
}
storage_configuration {
disk_type = "${var.disk_type}"
storage_workload_type = "OLTP"
data_settings {
default_file_path = "F:\\DATA"
luns = [1]
}
log_settings {
default_file_path = "G:\\LOGS"
luns = [2]
}
temp_db_settings {
default_file_path = "K:\\TEMPDB"
luns = [3]
}
}
lifecycle {
ignore_changes = [
tags,
#assessment[0].schedule
]
}
tags = {
"application owner" = var.application_owner_tag
"environment" = var.environment_tag
"department" = var.department_tag
"technicalcontact" = var.technicalcontact_tag
"application" = var.application_tag
"service" = "SQL server"
}
}
I get this error:
performing CreateOrUpdate: sqlvirtualmachines.SqlVirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=0 --
Original Error: Code="CRPNotAllowedOperation" Message="Operation cannot be completed due to the following error: VM Extension with publisher 'Microsoft.SqlServer.Management' and type 'SqlIaaSAgent' does not support setting enableAutomaticUpgrade property to true on this subscription.
Steps I've taken to try and resolve:
Re-register SQL Server virtual machines to the Azure subscription
Turned off automatic upgrade on azurerm_windows_virtual_machine
I tried to reproduce the same in my environment:
Code:
resource "azurerm_mssql_virtual_machine" "example" {
virtual_machine_id = azurerm_windows_virtual_machine.example.id
sql_license_type = "PAYG"
r_services_enabled = true
sql_connectivity_port = 1433
sql_connectivity_type = "PRIVATE"
sql_connectivity_update_password = "xxx"
sql_connectivity_update_username = "sqllogin"
auto_patching {
day_of_week = "Sunday"
maintenance_window_duration_in_minutes = 60
maintenance_window_starting_hour = 2
}
}
resource "azurerm_virtual_network" "example" {
name = "kavexample-network"
address_space = ["10.0.0.0/16"]
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
}
resource "azurerm_subnet" "example" {
name = "internal"
resource_group_name = data.azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "example" {
name = "kavya-example-nic"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
ip_configuration {
name = "internal"
subnet_id = azurerm_subnet.example.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_windows_virtual_machine" "example" {
name = "kavyaexamplemc"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
size = "Standard_F2"
admin_username = "xxx"
admin_password = "xx"
enable_automatic_updates = true
patch_mode = "Manual"
hotpatching_enabled = true
network_interface_ids = [
azurerm_network_interface.example.id,
]
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2016-Datacenter"
version = "latest"
}
}
Received same error:
│ Error: waiting for creation of Sql Virtual Machine (Sql Virtual Machine Name "kavyaexamplemc" / Resource Group "v-sakavya-Mindtree"): Code="CRPNotAllowedOperation" Message="Operation cannot be completed due to the following error: VM Extension with publisher 'Microsoft.SqlServer.Management' and type 'SqlIaaSAgent' does not support setting enableAutomaticUpgrade property to true on this subscription."
I even tried changing the settings below, but was still receiving the same error again and again.
enable_automatic_updates = false
patch_mode = "Manual"
hotpatching_enabled = false
Try deleting the VM resource completely and creating a new one with the changed settings.
Try using the code below: I set enable_automatic_upgrades = false; azurerm_virtual_machine has this property (under os_profile_windows_config), so make use of that.
Code:
resource "azurerm_virtual_network" "main" {
name = "kavyasarvnetwork"
address_space = ["10.0.0.0/16"]
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
}
resource "azurerm_subnet" "internal" {
name = "internal"
resource_group_name = data.azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.main.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "main" {
name = "kavyasarnic"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.internal.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_virtual_machine" "example" {
name = "kavyasarvm"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
network_interface_ids = [azurerm_network_interface.main.id]
vm_size = "Standard_DS1_v2"
storage_os_disk {
name = "kavyasar-OSDisk"
caching = "ReadOnly"
create_option = "FromImage"
managed_disk_type = "Premium_LRS"
os_type = "Windows"
}
storage_image_reference {
publisher = "MicrosoftSQLServer"
offer = "SQL2017-WS2016"
sku = "SQLDEV"
version = "latest"
}
os_profile {
computer_name = "hostname"
admin_username = "testadmin"
admin_password = "Password1234!"
}
os_profile_windows_config {
timezone = "Pacific Standard Time"
provision_vm_agent = true
enable_automatic_upgrades = false
}
tags = {
environment = "staging"
}
}
resource "azurerm_mssql_virtual_machine" "example" {
virtual_machine_id = azurerm_virtual_machine.example.id
sql_license_type = "PAYG"
r_services_enabled = true
sql_connectivity_port = 1433
sql_connectivity_type = "PRIVATE"
sql_connectivity_update_password = "Password1234!"
sql_connectivity_update_username = "sqllogin"
}
This seems to be caused by the limitations described in What is the SQL Server IaaS Agent extension? (Windows) - SQL Server on Azure VMs | Microsoft Learn:
The SQL IaaS Agent extension only supports:
SQL Server VMs deployed through the Azure Resource Manager. SQL Server VMs deployed through the classic model are not supported.
SQL Server VMs deployed to the public or Azure Government cloud. Deployments to other private or government clouds are not supported.
Reference : azurerm_mssql_virtual_machine | Resources | hashicorp/azurerm | Terraform Registry
When deploying a custom script extension for a VM in Azure, it times out after 15 minutes. The timeout block is set to 2hrs. I cannot figure out why it keeps timing out. Could anyone point me in the right direction please? Thanks.
Resource to deploy (https://i.stack.imgur.com/lIfKj.png)
Error (https://i.stack.imgur.com/GFYRL.png)
In Azure, each resource takes a particular amount of time to provision. For virtual network gateways/virtual machines, the timeout is up to 2 hours, as mentioned in the Terraform timeouts documentation.
Therefore, the timeout block we provide for any virtual machine has to be less than two hours (2h).
I tried creating a replica of the Azure VM extension resource using the Terraform code below, and it deployed successfully.
timeout block:
timeouts {
create = "1h30m"
delete = "20m"
}
Azure VM extension:
resource "azurerm_virtual_machine_extension" "xxxxx" {
name = "xxxxname"
virtual_machine_id = azurerm_virtual_machine.example.id
publisher = "Microsoft.Azure.Extensions"
type = "CustomScript"
type_handler_version = "2.0"
settings = <<SETTINGS
{
"commandToExecute": "hostname && uptime"
}
SETTINGS
tags = {
environment = "Production"
}
timeouts {
create = "1h30m"
delete = "20m"
}
}
Created a virtual machine by adding the required configuration under the resource group.
main.tf:
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "=3.0.0"
}
}
}
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "xxxxxRG" {
name = "xxxxx-RG"
location = "xxxxxx"
}
resource "azurerm_virtual_network" "example" {
name = "xxxxx"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
}
resource "azurerm_subnet" "example" {
name = "xxxxx"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "example" {
name = "xxxxxx"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
ip_configuration {
name = "xxxxconfiguration"
subnet_id = azurerm_subnet.example.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_storage_account" "example" {
name = "xxxxx"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
tags = {
environment = "staging"
}
}
resource "azurerm_storage_container" "example" {
name = "xxxxxx"
storage_account_name = azurerm_storage_account.example.name
container_access_type = "private"
}
resource "azurerm_virtual_machine" "example" {
name = "xxxxxxVM"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
network_interface_ids = [azurerm_network_interface.example.id]
vm_size = "Standard_F2"
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
storage_os_disk {
name = "xxxxx"
vhd_uri = "${azurerm_storage_account.example.primary_blob_endpoint}${azurerm_storage_container.example.name}/myosdisk1.vhd"
caching = "ReadWrite"
create_option = "FromImage"
}
os_profile {
computer_name = "xxxxxname"
admin_username = "xxxx"
admin_password = "xxxxxx"
}
os_profile_linux_config {
disable_password_authentication = false
}
tags = {
environment = "staging"
}
}
resource "azurerm_virtual_machine_extension" "example" {
name = "hostname"
virtual_machine_id = azurerm_virtual_machine.example.id
publisher = "Microsoft.Azure.Extensions"
type = "CustomScript"
type_handler_version = "2.0"
settings = <<SETTINGS
{
"commandToExecute": "hostname && uptime"
}
SETTINGS
tags = {
environment = "Production"
}
timeouts {
create = "1h30m"
delete = "20m"
}
}
Executed terraform init, terraform plan, and terraform apply.
The extension was added successfully after deployment.
You can upgrade the status if you want to use extensions.
I resolved the issue by changing the type_handler_version to 1.9.
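For context, a minimal sketch of what that change could look like, assuming the Windows Custom Script Extension (publisher Microsoft.Compute, type CustomScriptExtension; the VM reference and command are illustrative):
resource "azurerm_virtual_machine_extension" "custom_script" {
  name                 = "customscript"
  virtual_machine_id   = azurerm_virtual_machine.example.id # illustrative reference
  publisher            = "Microsoft.Compute"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.9" # the handler version that resolved the timeout above
  settings = <<SETTINGS
    {
      "commandToExecute": "hostname"
    }
SETTINGS
  timeouts {
    create = "1h30m"
    delete = "20m"
  }
}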
We have a very specific requirement where some of the vendors provide their images from the Azure Marketplace and some just provide a .vhd.
I need to build Terraform code where the user has the option either to create a VM based on an Azure Marketplace image, or to provide the source_uri of the VHD to create a VM.
For now I have the code ready to create a VM from a .vhd file:
resource "azurerm_virtual_machine" "this" {
name = var.name
location = var.location
resource_group_name = var.resource_group_name
vm_size = var.size
network_interface_ids = [azurerm_network_interface.this.id]
delete_os_disk_on_termination = true
delete_data_disks_on_termination = true
tags = var.tags
availability_set_id = var.availability_set_id == "" ? null : var.availability_set_id
resource "azurerm_managed_disk" "os" {
name = var.os_disk_name
location = "${var.location}"
resource_group_name = var.resource_group_name
os_type = "Linux"
storage_account_type = "Standard_LRS"
create_option = "Import"
storage_account_id = var.storage_account_id
source_uri = var.source_uri
disk_size_gb = var.disk_size_gb
}
# attach the managed disk, created from the imported vhd.
storage_os_disk {
name = join("", [var.name, "-", var.os_disk_name])
os_type = "Linux"
managed_disk_id = azurerm_managed_disk.os.id
managed_disk_type = "Standard_LRS"
caching = "ReadWrite"
create_option = "Attach"
}
os_profile_linux_config {
disable_password_authentication = false
}
}
The default option should be to spin up a VM from the Azure Marketplace image. Can this be achieved via variables?
You can check the list of VM images from a given publisher available in the Azure Marketplace:
az vm image list --output table --all --publisher center-for-internet-security-inc
I am taking the below image from the Azure Marketplace as a reference:
You can find your images based on offer, SKU, and publisher per your requirement. Refer to this MS document for more info.
You can use this terraform code to create Azure VM from Marketplace image:
main.tf
provider "azurerm" {
features{}
}
data "azurerm_resource_group" "main" {
name = "${var.resource_group_name}"
}
resource "azurerm_virtual_network" "main" {
name = "${var.prefix}-network"
address_space = ["10.0.0.0/16"]
location = data.azurerm_resource_group.main.location
resource_group_name = data.azurerm_resource_group.main.name
}
resource "azurerm_subnet" "internal" {
name = "internal"
resource_group_name = data.azurerm_resource_group.main.name
virtual_network_name = azurerm_virtual_network.main.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "main" {
name = "${var.prefix}-nic"
location = data.azurerm_resource_group.main.location
resource_group_name = data.azurerm_resource_group.main.name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.internal.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_virtual_machine" "main" {
name = "${var.prefix}-vm"
location = data.azurerm_resource_group.main.location
resource_group_name = data.azurerm_resource_group.main.name
network_interface_ids = [azurerm_network_interface.main.id]
vm_size = "Standard_DS1_v2"
# Uncomment this line to delete the OS disk automatically when deleting the VM
# delete_os_disk_on_termination = true
# Uncomment this line to delete the data disks automatically when deleting the VM
# delete_data_disks_on_termination = true
storage_image_reference {
publisher = "${var.publisher}"
offer = "${var.offer}"
sku = "${var.sku}"
version = "${var.version1}"
}
plan {
publisher = "${var.publisher}"
product = "${var.offer}"
name = "cis-ubuntu2004-l1""
}
storage_os_disk {
name = "myosdisk1"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "hostname"
admin_username = "testadmin"
admin_password = "Password1234!"
}
os_profile_linux_config {
disable_password_authentication = false
}
tags = {
environment = "staging"
}
}
variable.tf
variable "resource_group_name" {
default = "v-XXXXX-XXXXX"
}
variable "prefix" {
default = "tfvmex"
}
variable "publisher" {
default="center-for-internet-security-inc"
}
variable "offer" {
default = "cis-ubuntu-linux-2004-l1"
}
variable "sku" {
default = "cis-ubuntu2004-l1povw"
}
variable "version1" {
default="1.1.9"
}
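To address the original ask of switching between a Marketplace image and a vendor-supplied VHD via variables, below is a rough, untested sketch. It assumes a hypothetical boolean variable use_marketplace_image plus hypothetical storage_account_id and source_uri variables; dynamic blocks toggle the image reference and purchase plan, and the OS disk is either created FromImage or attached from a managed disk imported from the VHD:
variable "use_marketplace_image" {
  type    = bool
  default = true # default: spin up the VM from the Marketplace image
}
variable "storage_account_id" {
  default = ""
}
variable "source_uri" {
  default = ""
}
# OS disk imported from the vendor VHD; only created when the Marketplace image is not used.
resource "azurerm_managed_disk" "os" {
  count                = var.use_marketplace_image ? 0 : 1
  name                 = "${var.prefix}-osdisk"
  location             = data.azurerm_resource_group.main.location
  resource_group_name  = data.azurerm_resource_group.main.name
  os_type              = "Linux"
  storage_account_type = "Standard_LRS"
  create_option        = "Import"
  storage_account_id   = var.storage_account_id
  source_uri           = var.source_uri
}
resource "azurerm_virtual_machine" "conditional" {
  name                  = "${var.prefix}-vm"
  location              = data.azurerm_resource_group.main.location
  resource_group_name   = data.azurerm_resource_group.main.name
  network_interface_ids = [azurerm_network_interface.main.id]
  vm_size               = "Standard_DS1_v2"
  # Image reference and purchase plan only apply when building from the Marketplace image.
  dynamic "storage_image_reference" {
    for_each = var.use_marketplace_image ? [1] : []
    content {
      publisher = var.publisher
      offer     = var.offer
      sku       = var.sku
      version   = var.version1
    }
  }
  dynamic "plan" {
    for_each = var.use_marketplace_image ? [1] : []
    content {
      publisher = var.publisher
      product   = var.offer
      name      = var.sku
    }
  }
  storage_os_disk {
    name              = coalesce(one(azurerm_managed_disk.os[*].name), "myosdisk1")
    caching           = "ReadWrite"
    create_option     = var.use_marketplace_image ? "FromImage" : "Attach"
    managed_disk_id   = one(azurerm_managed_disk.os[*].id) # null when no imported disk exists
    managed_disk_type = var.use_marketplace_image ? "Standard_LRS" : null
    os_type           = var.use_marketplace_image ? null : "Linux"
  }
  # Credentials only make sense when provisioning from an image; an attached disk already has an OS.
  dynamic "os_profile" {
    for_each = var.use_marketplace_image ? [1] : []
    content {
      computer_name  = "hostname"
      admin_username = "testadmin"
      admin_password = "Password1234!"
    }
  }
  os_profile_linux_config {
    disable_password_authentication = false
  }
}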
Due to a policy applied in my subscription I am not able to test it, but you can test it in your environment.
You can refer to this document for the same requirement.
I'm trying to create a VM in Azure using the below config.
resource "azurerm_virtual_machine" "VM38" {
name = "VM38"
resource_group_name = data.azurerm_resource_group.myRG.name
location = data.azurerm_resource_group.myRG.location
vm_size = "Standard_F16s_v2"
delete_os_disk_on_termination = true
delete_data_disks_on_termination = true
os_profile {
computer_name = "vm38"
admin_username = "adminuser"
admin_password = "Password1234!"
custom_data = base64encode(data.cloudinit_config.hybrid_vm38_cloudinit_cfg.rendered)
}
os_profile_linux_config {
disable_password_authentication = false
}
storage_image_reference {
id = data.azurerm_image.my_image.id
}
depends_on = [aws_instance.vm12]
storage_os_disk {
name = "VMDisk"
create_option = "FromImage"
caching = "ReadWrite"
#disk_size_gb = 16
#os_type = "Linux"
#managed_disk_type = "Standard_LRS"
vhd_uri = var.vmVHDURI
}
network_interface_ids = [azurerm_network_interface.mgmtNwIntf.id, azurerm_network_interface.transportNwIntf.id]
}
When I execute terraform apply I'm getting the below error:
Error: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status= Code="PropertyChangeNotAllowed" Message="Changing property 'osDisk.name' is not allowed." Target="osDisk.name"
with azurerm_virtual_machine.VM38,
on az_virtual_machine.tf line 1, in resource "azurerm_virtual_machine" "VM38":
1: resource "azurerm_virtual_machine" "VM38" {
Please let me know how to resolve this issue.
Terraform and Azure provider version details are given below:
Terraform v1.0.8
on linux_amd64
provider registry.terraform.io/hashicorp/azurerm v2.79.1
Thanks & Regards,
-Ravi
**In terraform.tfvars**
resourceGroupName = "myResourceGroup"
deviceImageName = "myDeviceImageName"
**In cloudinit_config.tf**
data "cloudinit_config" "hybrid_vm38_cloudinit_cfg" {
gzip = false
base64_encode = false
depends_on = [aws_instance.hybrid_vm12]
part {
filename = "cloud-config"
content_type = "text/cloud-config"
content = file("cloudinit/vm38_cloud_config.yaml")
}
part {
filename = "config-C8K.txt"
content_type = "text/cloud-boothook"
content = file("cloudinit/vm38_cloud_boothook.cfg")
}
}
**In az_resource_group.tf**
data "azurerm_resource_group" "vm38RG" {
name = var.resourceGroupName
}
**In az_image.tf**
data "azurerm_image" "deviceImage" {
name = var.deviceImageName
resource_group_name = data.azurerm_resource_group.vm38RG.name
}
**In az_virtual_network.tf**
resource "azurerm_virtual_network" "vm38VirtualNw" {
name = "vm38VirtualNw"
address_space = ["30.0.0.0/16"]
location = "eastus"
resource_group_name = data.azurerm_resource_group.vm38RG.name
tags = {
environment = "My virtual network"
}
}
**In az_subnet.tf**
resource "azurerm_subnet" "vm38MgmtSubnet" {
name = "vm38MgmtSubnet"
resource_group_name = data.azurerm_resource_group.vm38RG.name
virtual_network_name = azurerm_virtual_network.vm38VirtualNw.name
address_prefixes = ["30.0.11.0/24"]
}
resource "azurerm_subnet" "vm38TransportSubnet" {
name = "vm38TransportSubnet"
resource_group_name = data.azurerm_resource_group.vm38RG.name
virtual_network_name = azurerm_virtual_network.vm38VirtualNw.name
address_prefixes = ["30.0.12.0/24"]
}
**In az_network_interface.tf**
resource "azurerm_network_interface" "vm38MgmtNwIntf" {
name = "vm38MgmtNwIntf"
location = data.azurerm_resource_group.vm38RG.location
resource_group_name = data.azurerm_resource_group.vm38RG.name
ip_configuration {
name = "vm38MgmtPvtIP"
subnet_id = azurerm_subnet.vm38MgmtSubnet.id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.vm38MgmtPublicIP.id
}
}
resource "azurerm_network_interface" "vm38TransportNwIntf" {
name = "vm38TransportNwIntf"
location = data.azurerm_resource_group.vm38RG.location
resource_group_name = data.azurerm_resource_group.vm38RG.name
ip_configuration {
name = "vm38TransportPvtIP"
subnet_id = azurerm_subnet.vm38TransportSubnet.id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.vm38TransportPublicIP.id
}
}
**In az_virtual_machine.tf**
resource "azurerm_virtual_machine" "VM38" {
name = "VM38"
resource_group_name = data.azurerm_resource_group.vm38RG.name
location = data.azurerm_resource_group.vm38RG.location
vm_size = "Standard_F16s_v2"
delete_os_disk_on_termination = true
#delete_data_disks_on_termination = true
os_profile {
computer_name = "vm38"
admin_username = "adminuser"
admin_password = "Password1234!"
custom_data = base64encode(data.cloudinit_config.hybrid_vm38_cloudinit_cfg.rendered)
}
os_profile_linux_config {
disable_password_authentication = false
}
storage_image_reference {
id = data.azurerm_image.deviceImage.id
}
depends_on = [aws_instance.hybrid_vm12]
storage_os_disk {
name = "osDisk"
create_option = "FromImage"
caching = "ReadWrite"
#disk_size_gb = 16
#os_type = "Linux"
managed_disk_type = "Standard_LRS"
}
/*
storage_data_disk {
name = "vm38SecondaryDisk"
caching = "ReadWrite"
create_option = "Empty"
disk_size_gb = 2048
lun = 0
managed_disk_type = "Premium_LRS"
}
*/
network_interface_ids = [
azurerm_network_interface.vm38MgmtNwIntf.id,
azurerm_network_interface.vm38TransportNwIntf.id
]
}
You can't change the os_disk name while creating the VM. It should be "osdisk" or something starting with that.
I tested using the below code:
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = "ansuman-resources"
location = "West US 2"
}
resource "azurerm_virtual_network" "example" {
name = "ansuman-network"
address_space = ["10.0.0.0/16"]
location = "${azurerm_resource_group.example.location}"
resource_group_name = "${azurerm_resource_group.example.name}"
}
resource "azurerm_subnet" "example" {
name = "internal"
resource_group_name = "${azurerm_resource_group.example.name}"
virtual_network_name = "${azurerm_virtual_network.example.name}"
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "example" {
name = "ansuman-nic"
location = "${azurerm_resource_group.example.location}"
resource_group_name = "${azurerm_resource_group.example.name}"
ip_configuration {
name = "testconfiguration1"
subnet_id = "${azurerm_subnet.example.id}"
private_ip_address_allocation = "Dynamic"
}
}
# we assume that this Custom Image already exists
data "azurerm_image" "custom" {
name = "ansumantestvm-image-20211007225625"
resource_group_name = "resourcegroup"
}
resource "azurerm_virtual_machine" "example" {
name = "ansuman-vm"
location = "${azurerm_resource_group.example.location}"
resource_group_name = "${azurerm_resource_group.example.name}"
network_interface_ids = ["${azurerm_network_interface.example.id}"]
vm_size = "Standard_F2"
# This means the OS Disk will be deleted when Terraform destroys the Virtual Machine
# NOTE: This may not be optimal in all cases.
delete_os_disk_on_termination = true
storage_image_reference {
id = "${data.azurerm_image.custom.id}"
}
storage_os_disk {
name = "osdisk"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "hostname"
admin_username = "testadmin"
admin_password = "Password1234!"
}
os_profile_windows_config {
}
}
Output:
Note: Please make sure that while creating the image from the original VM, you first generalize it. If it is not generalized, the VM created from the custom image will get stuck in the creating state and will not be able to boot up.
If you want to change the OS disk name to something of your choice, then as a solution try creating the managed OS disk first from the image (using create option "Copy" or "Import") and then attaching that disk while creating the VM, as creating a managed disk from a custom image is also not supported; it can only be done for a platform image or marketplace image. You can refer to this GitHub issue and this GitHub issue.
For reference Terraform code on a similar issue, giving a custom name to an OS disk created from a platform image/marketplace image, see what Charles Xu has done in this SO thread; a rough sketch of the pattern follows below.
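The general shape of that workaround, as a rough sketch with illustrative names (for a platform image, azurerm_managed_disk can use create_option "FromImage" with image_reference_id; "Copy"/"Import" apply to existing disks or VHDs):
# Build the OS disk yourself, with whatever name you want, from a platform image...
data "azurerm_platform_image" "ubuntu" {
  location  = "eastus"
  publisher = "Canonical"
  offer     = "UbuntuServer"
  sku       = "18.04-LTS"
}
resource "azurerm_managed_disk" "custom_named_os" {
  name                 = "my-custom-osdisk-name"
  location             = "eastus"
  resource_group_name  = data.azurerm_resource_group.vm38RG.name
  os_type              = "Linux"
  storage_account_type = "Standard_LRS"
  create_option        = "FromImage"
  image_reference_id   = data.azurerm_platform_image.ubuntu.id
}
# ...then attach it in the VM instead of creating the disk from the image:
# storage_os_disk {
#   name            = azurerm_managed_disk.custom_named_os.name
#   os_type         = "Linux"
#   managed_disk_id = azurerm_managed_disk.custom_named_os.id
#   caching         = "ReadWrite"
#   create_option   = "Attach"
# }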
I am trying to create multiple Azure VMs using for_each via Terraform. I am able to create the two NIC cards, but when defining the NIC id in the azurerm_windows_virtual_machine block, both VMs pick the same NIC card (the last one, index 1), so only one VM gets created and the other fails.
What would be the logic for network_interface_ids = azurerm_network_interface.az_nic[*].id so that the 1st VM picks the 1st NIC and the 2nd VM the 2nd?
#---------------creating Network Interface for Windows VM's---------------
resource "azurerm_network_interface" "az_nic" {
count = length(var.vm_names)
name = "${var.vm_names[count.index]}_nic"
location = var.location
resource_group_name = data.azurerm_resource_group.Resource_group.name
ip_configuration {
name = var.vm_names[count.index]
subnet_id = data.azurerm_subnet.subnet.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_windows_virtual_machine" "myvm" {
for_each = toset(var.vm_names)
name = each.value
resource_group_name = data.azurerm_resource_group.Resource_group.name
location = var.location
size = "Standard_D2s_v3"
admin_username = "abc"
admin_password = "uejehrikch123"
network_interface_ids = azurerm_network_interface.az_nic[*].id
source_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2016-Datacenter"
version = "latest"
}
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
}
You can add the count parameter in the resource "azurerm_windows_virtual_machine" instead of mixing the count and for_each.
Suppose you have
variable "vm_names" {
default = ["vm1", "vm2"]
}
then you can change the resource in the .tf file like this:
resource "azurerm_windows_virtual_machine" "myvm" {
count = length(var.vm_names)
name = element(var.vm_names,count.index)
resource_group_name = data.azurerm_resource_group.Resource_group.name
location = var.location
size = "Standard_D2s_v3"
admin_username = "abc"
admin_password = "uejehrikch123"
network_interface_ids = [element(azurerm_network_interface.az_nic.*.id, count.index)]
source_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2016-Datacenter"
version = "latest"
}
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
}
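Alternatively, if you want to keep for_each throughout, here is a sketch that keys both the NICs and the VMs by VM name, so each VM looks up its own NIC by key instead of by index (same variables as above assumed):
resource "azurerm_network_interface" "az_nic" {
  for_each            = toset(var.vm_names)
  name                = "${each.value}_nic"
  location            = var.location
  resource_group_name = data.azurerm_resource_group.Resource_group.name
  ip_configuration {
    name                          = each.value
    subnet_id                     = data.azurerm_subnet.subnet.id
    private_ip_address_allocation = "Dynamic"
  }
}
resource "azurerm_windows_virtual_machine" "myvm" {
  for_each            = toset(var.vm_names)
  name                = each.value
  resource_group_name = data.azurerm_resource_group.Resource_group.name
  location            = var.location
  size                = "Standard_D2s_v3"
  admin_username      = "abc"
  admin_password      = "uejehrikch123"
  # Each VM references only the NIC that shares its key (the VM name).
  network_interface_ids = [azurerm_network_interface.az_nic[each.value].id]
  source_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2016-Datacenter"
    version   = "latest"
  }
  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }
}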