I am trying to set up a Service Fabric cluster, and while doing so I am creating an Azure virtual machine scale set with LinuxDiagnostic as one of the extensions. The following is the code for the VM scale set:
resource "azurerm_virtual_machine_scale_set" "sf_scale_set" {
name = "sf-scale-set-${terraform.workspace}"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.fusion.name}"
# automatic rolling upgrade
automatic_os_upgrade = true
upgrade_policy_mode = "Automatic"
# required when using rolling upgrade policy
health_probe_id = "${azurerm_lb_probe.sf_lb_probe.id}"
sku {
name = "${var.sf_scale_set_vm_config["name"]}"
tier = "${var.sf_scale_set_vm_config["tier"]}"
capacity = "${var.sf_scale_set_vm_config["capacity"]}"
}
storage_profile_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04"
version = "6.0.12"
}
storage_profile_os_disk {
name = ""
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile_secrets {
source_vault_id = "${var.sf_vault_id}"
vault_certificates {
certificate_url = "${var.sf_vault_url}"
}
}
storage_profile_data_disk {
lun = 0
caching = "ReadWrite"
create_option = "Empty"
disk_size_gb = 40
}
os_profile {
computer_name_prefix = "sf-vm-${terraform.workspace}"
admin_username = "hachadmin"
}
os_profile_linux_config {
disable_password_authentication = true
ssh_keys {
path = "/home/admin/.ssh/authorized_keys"
key_data = "${file("sshkeys/admin.pub")}"
}
}
network_profile {
name = "sf-vm-net-profile-${terraform.workspace}"
primary = true
ip_configuration {
name = "sf-ip-config-${terraform.workspace}"
primary = true
subnet_id = "${azurerm_subnet.sf_vnet_subnet.id}"
load_balancer_backend_address_pool_ids = ["${azurerm_lb_backend_address_pool.sf_be_vm_set.id}"]
load_balancer_inbound_nat_rules_ids = ["${element(azurerm_lb_nat_pool.sf_nat_vm_set.*.id, count.index)}"]
}
}
extension {
name = "sf-scale-set-extension-${terraform.workspace}"
publisher = "Microsoft.Azure.ServiceFabric"
type = "ServiceFabricLinuxNode"
type_handler_version = "1.0"
settings = "{ \"certificate\": { \"thumbprint\": \"${var.cert_thumbprint}\", \"x509StoreName\": \"My\" } , \"clusterEndpoint\": \"${azurerm_service_fabric_cluster.sf_service.cluster_endpoint}\", \"nodeTypeRef\": \"${terraform.workspace}-sf-node-type\", \"durabilityLevel\": \"${var.sf_reliability}\",\"nicPrefixOverride\": \"${azurerm_subnet.sf_vnet_subnet.address_prefix}\",\"enableParallelJobs\": \"true\"}"
protected_settings = "{\"StorageAccountKey1\": \"${azurerm_storage_account.sf_storage.primary_access_key}\", \"StorageAccountKey2\": \"${azurerm_storage_account.sf_storage.secondary_access_key}\"}"
}
extension {
name = "sf-scale-set-linux-diag-extension-${terraform.workspace}" # This extension connects vms to the cluster.
publisher = "Microsoft.OSTCExtensions"
type = "LinuxDiagnostic"
type_handler_version = "2.3"
auto_upgrade_minor_version = true
protected_settings = "{\"storageAccountName\": \"${azurerm_storage_account.sf_storage_app_diag.primary_access_key}\", \"StorageAccountKey1\": \"${azurerm_storage_account.sf_storage_app_diag.primary_access_key}\", \"StorageAccountKey2\": \"${azurerm_storage_account.sf_storage_app_diag.secondary_access_key}\"}"
settings = "${data.template_file.settings.rendered}"
}
tags {
Region = "${var.location}"
Createdby = "${var.created_by_tag}"
Team = "${var.team_tag}"
Environment = "${terraform.workspace}"
ninetofive = "${var.ninetofivetag}"
}
}
data "template_file" "settings" {
template = "${file("${path.module}/diagnostics/settings2.3.json.tpl")}"
vars {
xml_cfg = "${base64encode(data.template_file.wadcfg.rendered)}"
diag_storage_name = "${azurerm_storage_account.sf_storage_app_diag.name}"
}
}
data "template_file" "wadcfg" {
template = "${file("${path.module}/diagnostics/wadcfg.xml.tpl")}"
vars {
virtual_machine_id = "${azurerm_virtual_machine_scale_set.sf_scale_set.id}"
}
}
The end of the wadcfg file looks as follows:
<WadCfg>
<PerformanceCounters scheduledTransferPeriod="PT1M">
.....
......
</PerformanceCounters>
<Metrics resourceId="${virtual_machine_id}">
<MetricAggregation scheduledTransferPeriod="PT1H"/>
<MetricAggregation scheduledTransferPeriod="PT1M"/>
</Metrics>
</DiagnosticMonitorConfiguration>
</WadCfg>
The settings2.3.json.tpl file is:
{
"xmlCfg": "${xml_cfg}",
"storageAccount": "${diag_storage_name}"
}
While trying to run the Terraform code, I get the following error:
[+] Found tfvars file ./profiles/eu-sprint/eu-sprint.tfvars
Error: Cycle: data.template_file.wadcfg, data.template_file.settings, azurerm_virtual_machine_scale_set.sf_scale_set
I am assuming that Terraform is trying to render the wadcfg.xml.tpl template before the Azure VM scale set exists. The following are some of my questions:
How can I force Terraform to wait until the Azure VM scale set is created before trying to render the wadcfg.xml.tpl file?
As part of rendering wadcfg.xml.tpl I am passing the VM ID. I know this will work if I am only creating one instance, but will the code above also work for a VM scale set without me explicitly looping through each VM? If I do have to loop through each of them, what would be the recommended approach?
I saw there is a https://www.terraform.io/docs/providers/azurerm/r/virtual_machine_extension.html resource for installing VM extensions; will this also work for a VM scale set? If not, is there a better way I could organize my settings and protected_settings parts so that they are reader-friendly?
I would appreciate some help here.
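For context on the error: the cycle exists because data.template_file.wadcfg references the scale set's ID, data.template_file.settings embeds the rendered wadcfg, and the scale set's diagnostics extension in turn references data.template_file.settings. A sketch of one way to break it, assuming the standard Azure resource ID layout (the azurerm_subscription data source and the locals below are illustrative additions, not part of the original configuration), is to build the scale set ID from its known parts so the template no longer depends on the resource itself:

data "azurerm_subscription" "current" {}

locals {
  # Illustrative: rebuild the scale set ID from its known parts so the
  # wadcfg template does not have to reference the scale set resource.
  sf_scale_set_name = "sf-scale-set-${terraform.workspace}"
  sf_scale_set_id   = "${data.azurerm_subscription.current.id}/resourceGroups/${azurerm_resource_group.fusion.name}/providers/Microsoft.Compute/virtualMachineScaleSets/${local.sf_scale_set_name}"
}

data "template_file" "wadcfg" {
  template = "${file("${path.module}/diagnostics/wadcfg.xml.tpl")}"
  vars {
    virtual_machine_id = "${local.sf_scale_set_id}"
  }
}

With the ID assembled this way, dependencies run only from the templates into the scale set and the cycle disappears. On the readability question, jsonencode() over a map (as in the CustomScript example later in this thread) is generally easier to maintain than hand-escaped JSON strings.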
Related
I'm trying to deploy a Virtual Machine Scale Set extension via Terraform, but there are a few issues here. The requirement was to implement it without a load balancer attached.
resource "azurerm_virtual_machine_scale_set" "example" {
name = "mytestscaleset-1"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
# automatic rolling upgrade
# automatic_os_upgrade = true
upgrade_policy_mode = "Rolling"
rolling_upgrade_policy {
max_batch_instance_percent = 20
max_unhealthy_instance_percent = 20
max_unhealthy_upgraded_instance_percent = 5
pause_time_between_batches = "PT0S"
}
sku {
name = "Standard_F2"
tier = "Standard"
capacity = 2
}
storage_profile_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
storage_profile_os_disk {
name = ""
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
storage_profile_data_disk {
lun = 0
caching = "ReadWrite"
create_option = "Empty"
disk_size_gb = 10
}
os_profile {
computer_name_prefix = "testvm"
admin_username = "myadmin"
}
os_profile_linux_config {
disable_password_authentication = true
ssh_keys {
path = "/home/myadmin/.ssh/authorized_keys"
key_data = file("C:/Users/User/Downloads/VmSS key/azkey")
}
}
network_profile {
name = "terraformnetworkprofile"
primary = true
ip_configuration {
name = "TestIPConfiguration"
primary = true
subnet_id = azurerm_subnet.example.id
public_ip_address_configuration {
name = "Avx192"
idle_timeout = 30
domain_name_label = "vjst23"
}
}
}
tags = {
environment = "staging"
}
}
Once deployed, it gives an error for the health probe:
│ Error: compute.VirtualMachineScaleSetsClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="BadRequest" Message="Rolling Upgrade mode is not supported for this Virtual Machine Scale Set because a health probe or health extension was not provided."
│
│ with azurerm_virtual_machine_scale_set.example,
│ on Se.tf line 81, in resource "azurerm_virtual_machine_scale_set" "example":
│ 81: resource "azurerm_virtual_machine_scale_set" "example" {
How can I provide a health probe directly if there is no load balancer attached to the deployment?
Since you are deploying without a load balancer, you need to make the following changes in your code:
Change upgrade_policy_mode = "Rolling" to upgrade_policy_mode = "Manual" or "Automatic".
Remove the block below:
rolling_upgrade_policy {
max_batch_instance_percent = 20
max_unhealthy_instance_percent = 20
max_unhealthy_upgraded_instance_percent = 5
pause_time_between_batches = "PT0S"
}
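For illustration, a minimal sketch of how the upgrade settings end up (everything else in the resource stays exactly as above; "Manual" is just one of the two allowed values here):

resource "azurerm_virtual_machine_scale_set" "example" {
  # ... all other arguments unchanged ...

  # Without a load balancer there is no health probe to drive a rolling
  # upgrade, so use Manual (or Automatic) and drop rolling_upgrade_policy.
  upgrade_policy_mode = "Manual"
}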
I'm trying to run a bash script on an Azure Linux VM scale set using the custom script extension; I have the script uploaded to an Azure storage account already. The bash script is meant to install nginx on the VM scale set. The script runs without any errors; however, if I log into any of the VM scale set instances to validate, I don't see nginx running.
Bash script here
#!/bin/bash
apt-get update
apt-get install -y nginx
Terraform file here
data "azurerm_subnet" "refdata" {
name = var.subnetName1
virtual_network_name = var.vnetName
resource_group_name = var.resourceGroupName
}
resource "azurerm_windows_virtual_machine_scale_set" "res-vmscaleset" {
name = var.vmScaleSetName
resource_group_name = azurerm_resource_group.DevRG.name
location = azurerm_resource_group.DevRG.location
sku = "Standard_F2"
instances = 1
admin_password = "xxxxxx"
admin_username = "adminuser"
source_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2016-Datacenter-Server-Core"
version = "latest"
}
os_disk {
storage_account_type = "Standard_LRS"
caching = "ReadWrite"
}
network_interface {
name = "vmscaleset-nic"
primary = true
ip_configuration {
name = "internal"
primary = true
subnet_id = data.azurerm_subnet.refdata.id
}
}
}
resource "azurerm_linux_virtual_machine_scale_set" "res-linuxscale" {
name = "linuxvmss"
resource_group_name = azurerm_resource_group.DevRG.name
location = azurerm_resource_group.DevRG.location
sku = "Standard_F2"
instances = 2
admin_password = "Password1234!"
disable_password_authentication = false
admin_username = "adminuser"
source_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
os_disk {
storage_account_type = "Standard_LRS"
caching = "ReadWrite"
}
network_interface {
name = "lvmscaleset-nic"
primary = true
ip_configuration {
name = "internal"
primary = true
subnet_id = data.azurerm_subnet.refdata.id
}
}
}
resource "azurerm_virtual_machine_scale_set_extension" "res-extension" {
name = "example"
virtual_machine_scale_set_id = azurerm_linux_virtual_machine_scale_set.res-linuxscale.id
publisher = "Microsoft.OSTCExtensions"
type = "CustomScriptForLinux"
type_handler_version = "1.0"
settings = <<SETTINGS
{
"fileUris": ["https://xxxxxxxxxxx.blob.core.windows.net/shellscript11/post-deploy.sh"],
"commandToExecute": "sh post-deploy.sh"
}
SETTINGS
}
Referring to this document, you can use this publisher and type for your custom script:
resource "azurerm_virtual_machine_scale_set_extension" "res-extension" {
name = "nnn-extension"
virtual_machine_scale_set_id = azurerm_linux_virtual_machine_scale_set.example.id
publisher = "Microsoft.Azure.Extensions"
type = "CustomScript"
type_handler_version = "2.0"
settings = jsonencode({
"fileUris" = ["https://xxxx.blob.core.windows.net/shscripts/aptupdate.sh"],
"commandToExecute" = "sh aptupdate.sh"
}
)
}
After applying the above configuration, upgrade each VMSS instance, and nginx will be running.
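If upgrading each instance by hand is not desirable, one option (a sketch, not part of the answer above) is to set upgrade_mode on the azurerm_linux_virtual_machine_scale_set so that changes to the scale set model roll out to instances automatically; only the added argument is shown:

resource "azurerm_linux_virtual_machine_scale_set" "res-linuxscale" {
  # ... existing arguments unchanged ...

  # With Automatic upgrade mode, instances are brought up to the latest
  # scale set model (including new extension settings) without a manual
  # per-instance upgrade step.
  upgrade_mode = "Automatic"
}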
I've managed to deploy my Service Fabric cluster but I'm struggling to get it to communicate with the virtual machine scale set. All the nodes have deployed, but they're not communicating with Service Fabric.
I've tried adding more parameters to my resources, but unfortunately I'm getting a very unhelpful error message which doesn't make sense.
resource "azurerm_service_fabric_cluster" "brcgs-ngd-dev" {
name = "BRCGS-NGD-${var.environment}-SF"
resource_group_name = var.resource_group_name
location = var.location
reliability_level = "Bronze"
upgrade_mode = "Automatic"
vm_image = "Windows"
management_endpoint = "https://example.com/Explorer"
node_type {
name = "sfNodes"
instance_count = 3
is_primary = true
client_endpoint_port = "19000"
http_endpoint_port = "19080"
}
fabric_settings {
name = "Security"
parameters = {
"ClusterProtectionLevel" = "EncryptAndSign"
}
}
certificate {
thumbprint = "example"
thumbprint_secondary = "example"
x509_store_name = "my"
}
}
resource "azurerm_virtual_machine_scale_set" "sf-nodes" {
name = "sfNodes"
location = var.location
resource_group_name = var.resource_group_name
upgrade_policy_mode = "automatic"
sku {
name = "Standard_D1_V2"
tier = "Standard"
capacity = 3
}
storage_profile_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServerSemiAnnual"
sku = "Datacenter-Core-1803-with-Containers-smalldisk"
version = "latest"
}
storage_profile_os_disk {
os_type = "Windows"
caching = "ReadOnly"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name_prefix = "sfNodes"
admin_username = "brcgsdev"
admin_password = var.adminpassword
}
os_profile_secrets = [
{
source_vault_id = "/subscriptions/exampleid/resourceGroups/rg-ngd-mig-inf-01/providers/Microsoft.KeyVault/vaults/kv-ngd-mig-infra"
vault_certificates = [
{
certificate_url = "https://example/certificates/cert/c5326f869a624079a0f1f48afe525331"
certificate_store = "My"
}
]
}
]
network_profile {
name = "NIC-brcgs-ngd-${var.environment}-sf-0"
primary = "true"
ip_configuration {
primary = "true"
name = "NIC-brcgs-ngd-${var.environment}-sf-0"
subnet_id = var.subnet_id
load_balancer_backend_address_pool_ids = [var.backendlb]
}
}
extension { # This extension connects vms to the cluster.
name = "ServiceFabricNodeVMscalesets"
publisher = "Microsoft.Azure.ServiceFabric"
type = "ServiceFabricNode"
type_handler_version = "1.0"
settings = "{ \"certificate\": { \"thumbprint\": \"example\", \"x509StoreName\": \"My\" } , \"clusterEndpoint\": \"example.uksouth.cloudapp.azure.com:19000\", \"nodeTypeRef\": \"sfNodes\", \"dataPath\": \"D:\\\\SvcFab\",\"durabilityLevel\": \"Bronze\",\"nicPrefixOverride\": \"******\"}"
}
}
The error message I get is
Error: Unsupported argument
on servicefabric\main.tf line 57, in resource "azurerm_virtual_machine_scale_set" "sf-nodes":
57: os_profile_secrets = [
An argument named "os_profile_secrets" is not expected here. Did you mean to
define a block of type "os_profile_secrets"?
As you can see the error message is not very helpful at all.
Can anyone help me on this?
Thanks
Terraform templates have a somewhat similar syntax to ARM templates. For the error message, define os_profile_secrets as a block by removing the "=" (and the surrounding list brackets). It looks like this:
os_profile_secrets {
source_vault_id = "/subscriptions/exampleid/resourceGroups/rg-ngd-mig-inf-01/providers/Microsoft.KeyVault/vaults/kv-ngd-mig-infra"
vault_certificates {
certificate_url = "https://example/certificates/cert/c5326f869a624079a0f1f48afe525331"
certificate_store = "My"
}
}
To deploy Service Fabric and instances with Terraform, here is an example for deploying Linux nodes for your reference.
I'm attempting to deploy a Function App on a Premium plan that serves the functions from a container. The HOWTO for this works well enough: https://learn.microsoft.com/en-us/azure/azure-functions/functions-create-function-linux-custom-image?tabs=nodejs#create-an-app-from-the-image
However, when I try to deploy it using Terraform, no luck. Everything looks right, but the function does not show up in the side menu (it does for the one deployed with the az CLI), nor can I hit it with Postman.
Via Resource Explorer I can see that the functions are not being populated. Here is the HCL that I am using:
resource "azurerm_app_service_plan" "plan" {
name = "${var.app_name}-Premium-ConsumptionPlan"
location = "WestUS"
resource_group_name = "${data.azurerm_resource_group.rg.name}"
kind = "Elastic"
reserved = true
sku {
tier = "ElasticPremium"
size = "EP1"
}
}
data "azurerm_container_registry" "registry" {
name = "${var.app_name}registry"
resource_group_name = "${data.azurerm_resource_group.rg.name}"
}
resource "azurerm_function_app" "funcApp" {
name = "${var.app_name}-userapi-${var.env_name}-funcapp"
location = "WestUS"
resource_group_name = "${data.azurerm_resource_group.rg.name}"
app_service_plan_id = "${azurerm_app_service_plan.plan.id}"
storage_connection_string = "${azurerm_storage_account.storage.primary_connection_string}"
version = "~2"
app_settings = {
FUNCTIONS_EXTENSION_VERSION = "~2"
FUNCTIONS_WORKER_RUNTIME = "dotnet"
DOCKER_REGISTRY_SERVER_URL = "${data.azurerm_container_registry.registry.login_server}"
DOCKER_REGISTRY_SERVER_USERNAME = "${data.azurerm_container_registry.registry.admin_username}"
DOCKER_REGISTRY_SERVER_PASSWORD = "${data.azurerm_container_registry.registry.admin_password}"
WEBSITE_CONTENTAZUREFILECONNECTIONSTRING = "${azurerm_storage_account.storage.primary_connection_string}"
DOCKER_CUSTOM_IMAGE_NAME = "${data.azurerm_container_registry.registry.login_server}/pingtrigger:test"
WEBSITE_CONTENTSHARE = "${azurerm_storage_account.storage.name}"
FUNCTION_APP_EDIT_MODE = "readOnly"
}
site_config {
always_on = true
linux_fx_version = "DOCKER|${data.azurerm_container_registry.registry.login_server}/pingtrigger:test"
}
}
----- Updated based on answer ----
The solution was to instruct the Function App NOT to use storage to discover metadata about the available functions; this involves setting WEBSITES_ENABLE_APP_SERVICE_STORAGE to false. Here is my updated script:
resource "azurerm_app_service_plan" "plan" {
name = "${var.app_name}-premiumPlan"
resource_group_name = "${data.azurerm_resource_group.rg.name}"
location = "${data.azurerm_resource_group.rg.location}"
kind = "Linux"
reserved = true
sku {
tier = "Premium"
size = "P1V2"
}
}
data "azurerm_container_registry" "registry" {
name = "${var.app_name}registry"
resource_group_name = "${data.azurerm_resource_group.rg.name}"
}
resource "azurerm_function_app" "funcApp" {
name = "userapi-${var.app_name}fa-${var.env_name}"
location = "${data.azurerm_resource_group.rg.location}"
resource_group_name = "${data.azurerm_resource_group.rg.name}"
app_service_plan_id = "${azurerm_app_service_plan.plan.id}"
storage_connection_string = "${azurerm_storage_account.storage.primary_connection_string}"
version = "~2"
app_settings = {
FUNCTION_APP_EDIT_MODE = "readOnly"
https_only = true
DOCKER_REGISTRY_SERVER_URL = "${data.azurerm_container_registry.registry.login_server}"
DOCKER_REGISTRY_SERVER_USERNAME = "${data.azurerm_container_registry.registry.admin_username}"
DOCKER_REGISTRY_SERVER_PASSWORD = "${data.azurerm_container_registry.registry.admin_password}"
WEBSITES_ENABLE_APP_SERVICE_STORAGE = false
}
site_config {
always_on = true
linux_fx_version = "DOCKER|${data.azurerm_container_registry.registry.login_server}/testimage:v1.0.1"
}
}
To create the Azure Function with your custom Docker image, I think your problem is that you set the environment variable FUNCTIONS_WORKER_RUNTIME; this means you use the built-in runtime, but you want to use your custom image. In my test, you only need to configure the function app like this:
resource "azurerm_function_app" "funcApp" {
name = "${var.app_name}-userapi-${var.env_name}-funcapp"
location = "${azurerm_resource_group.main.location}"
resource_group_name = "${azurerm_resource_group.main.name}"
app_service_plan_id = "${azurerm_app_service_plan.plan.id}"
storage_connection_string = "${azurerm_storage_account.storage.primary_connection_string}"
version = "~2"
app_settings = {
FUNCTIONS_EXTENSION_VERSION = "~2"
DOCKER_REGISTRY_SERVER_URL = "${data.azurerm_container_registry.registry.login_server}"
DOCKER_REGISTRY_SERVER_USERNAME = "${data.azurerm_container_registry.registry.admin_username}"
DOCKER_REGISTRY_SERVER_PASSWORD = "${data.azurerm_container_registry.registry.admin_password}"
WEBSITE_CONTENTAZUREFILECONNECTIONSTRING = "${azurerm_storage_account.storage.primary_connection_string}"
WEBSITE_CONTENTSHARE = "${azurerm_storage_account.storage.name}"
DOCKER_CUSTOM_IMAGE_NAME = "${data.azurerm_container_registry.registry.login_server}/pingtrigger:test"
}
site_config {
always_on = true
linux_fx_version = "DOCKER|${data.azurerm_container_registry.registry.login_server}/pingtrigger:test"
}
}
Then you only need to wait a while for the creation.
I am trying to encrypt the "storage_os_disk" on an Azure VM via Terraform.
I have set the managed disk type on the VM OS Disk, so it will be managed, since I know the disk must be managed to allow encryption.
I cannot seem to figure out how to encrypt the OS disk in Terraform.
Here is the code I am trying:
resource "azurerm_network_interface" "nic" {
name = "${var.project_ident}-${var.env_ident}-${var.admin_vm_name}-${var.region_suffix}-encrpytest"
location = "${data.azurerm_resource_group.core-rg.location}"
resource_group_name = "${data.azurerm_resource_group.core-rg.name}"
depends_on = ["azurerm_virtual_machine.dns-vm"]
ip_configuration {
name = "${var.project_ident}-${var.env_ident}-${var.admin_vm_name}-${var.region_suffix}-encrpytest"
subnet_id ="${data.terraform_remote_state.network.sn1_id}"
private_ip_address_allocation = "static"
private_ip_address = "${cidrhost(data.terraform_remote_state.network.sn1_address_prefix, 6 )}"
}
}
resource "azurerm_virtual_machine" "admin-vm-encrpytest" {
name = "${var.project_ident}-${var.env_ident}-${var.admin_vm_name}-encrpytest"
location = "${data.azurerm_resource_group.core-rg.location}"
resource_group_name = "${data.azurerm_resource_group.core-rg.name}"
network_interface_ids = ["${azurerm_network_interface.nic.id}"]
vm_size = "Standard_B2s"
depends_on = ["azurerm_virtual_machine.dns-vm"]
# Requires LRS Storage Account
boot_diagnostics {
enabled = "True"
storage_uri = "${data.terraform_remote_state.sa.sa_2_prim_blob_ep}"
#storage_uri = "${data.azurerm_storage_account.storage-account-2.primary_blob_endpoint}"
}
storage_os_disk {
name = "${var.project_ident}-${var.env_ident}-${var.admin_vm_name}-${var.region_suffix}-encrpytest"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
encryption_settings {
enabled = "True"
key_encryption_key {
key_url = "${data.terraform_remote_state.kv.vault_key_1_id}"
source_vault_id = "${data.terraform_remote_state.kv.vault_id}"
}
disk_encryption_key {
secret_url = "${data.terraform_remote_state.kv.vault_key_2_id}"
source_vault_id = "${data.terraform_remote_state.kv.vault_id}"
}
}
}
os_profile {
computer_name = "encrpytest"
admin_username = "cactusadmin"
admin_password = "${var.admin_vm_password}"
}
os_profile_windows_config {
provision_vm_agent = true
enable_automatic_upgrades = true
}
# Uncomment this line to delete the OS disk automatically when deleting the VM
delete_os_disk_on_termination = true
# Uncomment this line to delete the data disks automatically when deleting the VM
delete_data_disks_on_termination = true
storage_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2016-Datacenter"
version = "latest"
}
}
Thank you
Firstly, encryption_settings does not exist in the storage_os_disk block but in azurerm_managed_disk. So you could create an individual azurerm_managed_disk resource and then create the VM from a managed disk with the platform image, referring here.
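For reference, a minimal sketch of where encryption_settings lives on azurerm_managed_disk, reusing the Key Vault outputs from the question; the disk name, size, and create_option here are illustrative, and building an OS disk from a platform image follows the linked guide:

resource "azurerm_managed_disk" "encrypted-os-disk" {
  name                 = "encrpytest-osdisk" # illustrative name
  location             = "${data.azurerm_resource_group.core-rg.location}"
  resource_group_name  = "${data.azurerm_resource_group.core-rg.name}"
  storage_account_type = "Standard_LRS"
  create_option        = "Empty" # illustrative; an OS disk from a platform image needs a different create_option per the linked guide
  disk_size_gb         = 128

  encryption_settings {
    enabled = true

    disk_encryption_key {
      secret_url      = "${data.terraform_remote_state.kv.vault_key_2_id}"
      source_vault_id = "${data.terraform_remote_state.kv.vault_id}"
    }

    key_encryption_key {
      key_url         = "${data.terraform_remote_state.kv.vault_key_1_id}"
      source_vault_id = "${data.terraform_remote_state.kv.vault_id}"
    }
  }
}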
Alternatively, you could try to use azurerm_virtual_machine_extension for disk-encryption, refer to this.
resource "azurerm_virtual_machine_extension" "disk-encryption" {
name = "DiskEncryption"
location = "${local.location}"
resource_group_name = "${azurerm_resource_group.environment-rg.name}"
virtual_machine_name = "${azurerm_virtual_machine.server.name}"
publisher = "Microsoft.Azure.Security"
type = "AzureDiskEncryption"
type_handler_version = "2.2"
settings = <<SETTINGS
{
"EncryptionOperation": "EnableEncryption",
"KeyVaultURL": "https://${local.vaultname}.vault.azure.net",
"KeyVaultResourceId": "/subscriptions/${local.subscriptionid}/resourceGroups/${local.vaultresourcegroup}/providers/Microsoft.KeyVault/vaults/${local.vaultname}",
"KeyEncryptionKeyURL": "https://${local.vaultname}.vault.azure.net/keys/${local.keyname}/${local.keyversion}",
"KekVaultResourceId": "/subscriptions/${local.subscriptionid}/resourceGroups/${local.vaultresourcegroup}/providers/Microsoft.KeyVault/vaults/${local.vaultname}",
"KeyEncryptionAlgorithm": "RSA-OAEP",
"VolumeType": "All"
}
SETTINGS
}
I used the VM extension example, and it worked perfectly. The OS disk on my newly deployed Windows VM was instantly encrypted.