I've managed to deploy my Service Fabric cluster, but I'm struggling to get it to communicate with the Virtual Machine Scale Sets. All the nodes have deployed, but they're not communicating with Service Fabric.
I've tried adding more parameters to my resources, but unfortunately I'm getting an unhelpful error message that doesn't make sense.
resource "azurerm_service_fabric_cluster" "brcgs-ngd-dev" {
name = "BRCGS-NGD-${var.environment}-SF"
resource_group_name = var.resource_group_name
location = var.location
reliability_level = "Bronze"
upgrade_mode = "Automatic"
vm_image = "Windows"
management_endpoint = "https://example.com/Explorer"
node_type {
name = "sfNodes"
instance_count = 3
is_primary = true
client_endpoint_port = "19000"
http_endpoint_port = "19080"
}
fabric_settings {
name = "Security"
parameters = {
"ClusterProtectionLevel" = "EncryptAndSign"
}
}
certificate {
thumbprint = "example"
thumbprint_secondary = "example"
x509_store_name = "my"
}
}
resource "azurerm_virtual_machine_scale_set" "sf-nodes" {
name = "sfNodes"
location = var.location
resource_group_name = var.resource_group_name
upgrade_policy_mode = "automatic"
sku {
name = "Standard_D1_V2"
tier = "Standard"
capacity = 3
}
storage_profile_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServerSemiAnnual"
sku = "Datacenter-Core-1803-with-Containers-smalldisk"
version = "latest"
}
storage_profile_os_disk {
os_type = "Windows"
caching = "ReadOnly"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name_prefix = "sfNodes"
admin_username = "brcgsdev"
admin_password = var.adminpassword
}
os_profile_secrets = [
{
source_vault_id = "/subscriptions/exampleid/resourceGroups/rg-ngd-mig-inf-01/providers/Microsoft.KeyVault/vaults/kv-ngd-mig-infra"
vault_certificates = [
{
certificate_url = "https://example/certificates/cert/c5326f869a624079a0f1f48afe525331"
certificate_store = "My"
}
]
}
]
network_profile {
name = "NIC-brcgs-ngd-${var.environment}-sf-0"
primary = "true"
ip_configuration {
primary = "true"
name = "NIC-brcgs-ngd-${var.environment}-sf-0"
subnet_id = var.subnet_id
load_balancer_backend_address_pool_ids = [var.backendlb]
}
}
extension { # This extension connects vms to the cluster.
name = "ServiceFabricNodeVMscalesets"
publisher = "Microsoft.Azure.ServiceFabric"
type = "ServiceFabricNode"
type_handler_version = "1.0"
settings = "{ \"certificate\": { \"thumbprint\": \"example\", \"x509StoreName\": \"My\" } , \"clusterEndpoint\": \"example.uksouth.cloudapp.azure.com:19000\", \"nodeTypeRef\": \"sfNodes\", \"dataPath\": \"D:\\\\SvcFab\",\"durabilityLevel\": \"Bronze\",\"nicPrefixOverride\": \"******\"}"
}
}
The error message I get is:
Error: Unsupported argument
on servicefabric\main.tf line 57, in resource "azurerm_virtual_machine_scale_set" "sf-nodes":
57: os_profile_secrets = [
An argument named "os_profile_secrets" is not expected here. Did you mean to
define a block of type "os_profile_secrets"?
As you can see, the error message is not very helpful at all.
Can anyone help me on this?
Thanks
Terraform template syntax is somewhat similar to ARM template syntax. For this error message, you need to define os_profile_secrets as a block by removing the "=". It looks like this:
os_profile_secrets {
  source_vault_id = "/subscriptions/exampleid/resourceGroups/rg-ngd-mig-inf-01/providers/Microsoft.KeyVault/vaults/kv-ngd-mig-infra"

  vault_certificates {
    certificate_url   = "https://example/certificates/cert/c5326f869a624079a0f1f48afe525331"
    certificate_store = "My"
  }
}
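In Terraform 0.12 and later, name = value always defines an argument, while name { ... } defines a block. Both os_profile_secrets and vault_certificates are block types on this resource, which is why the "=" signs and the surrounding list brackets have to go.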
To deploy Service Fabric and its instances with Terraform, here is an example deploying Linux nodes for your reference.
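A minimal sketch of the cluster side of such a Linux deployment could look like the following (the name, endpoint, and node-type values are placeholders, not values from the question):

resource "azurerm_service_fabric_cluster" "linux" {
  name                = "example-sf-linux"
  resource_group_name = var.resource_group_name
  location            = var.location
  reliability_level   = "Bronze"
  upgrade_mode        = "Automatic"
  vm_image            = "Linux" # matches the VMSS image, like "Windows" above
  management_endpoint = "http://example.uksouth.cloudapp.azure.com:19080"

  node_type {
    name                 = "sfNodes"
    instance_count       = 3
    is_primary           = true
    client_endpoint_port = "19000"
    http_endpoint_port   = "19080"
  }
}

The scale set side would then use a Linux image plus the ServiceFabricLinuxNode extension instead of ServiceFabricNode.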
Related
I have created separate modules for vnet, NIC and VM. I am trying to create two VMs in the VM module and two NICs in the NIC module. I created an output in the NIC module to get the NIC IDs, and I refer to this output in the VM module, but only one VM gets created (with two NICs) and the second VM fails to create due to the unavailability of a NIC. Please find my code below; I need to be able to map each NIC in the NIC module to an individual VM in the VM module (see the sketch after the code).
main.tf
module "nic" {
source = "./Nic"
resource_group_name = module.vnet1mod.rgnameout
location = module.vnet1mod.rglocationout
subnet_id = module.vnet1mod.subnetout
}
module "vnet1mod" {
source = "./vnetmodule"
}
module "virtualmachine" {
source = "./VirtualMachine"
resource_group_name = module.vnet1mod.rgnameout
location = module.vnet1mod.rglocationout
network_interface_ids = module.nic.netinterfaceoutput # this is where its failing !!
}
..............
nic module
resource "azurerm_network_interface" "nic1" {
for_each = var.vmdetails
name = each.value.vmnic
location = var.location
resource_group_name = var.resource_group_name
ip_configuration {
name = "internal"
subnet_id = var.subnet_id
private_ip_address_allocation = "Dynamic"
}
}
output "netinterfaceoutput" {
value = tomap({ for k, s in azurerm_network_interface.nic1 : k => s.id })
}
variable "location" {`enter code here`
type = string
description = "(optional) describe your variable"
}
variable "resource_group_name" {
type = string
description = "(optional) describe your variable"
}
variable "subnet_id" {
type = string
description = "(optional) describe your variable"
}
...........
vm module
resource "azurerm_windows_virtual_machine" "vm1" {
for_each = var.vmdetails
name = each.value.vmname
resource_group_name = var.resource_group_name
location = var.location
size = var.vmsize
admin_username = var.adminusername
admin_password = var.adminpassword
network_interface_ids = var.network_interface_ids
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
publisher = var.publisher
offer = var.offer
sku = var.sku
version = var.Osversion
}
}
variable "vmdetails" {
type = map(any)
default = {
"vm1" = {
vmname = "vmA-1"
vmnic = "vmnicA-1"
}
"vm2" = {
vmname = "vmA-2"
vmnic = "vmnicA-2"
}
}
}
........
vnet module
resource "azurerm_virtual_network" "vnet1" {
name = var.vnet_name
location = var.location_name
resource_group_name = var.resourcegroup1_name
address_space = var.vnet_address
}
resource "azurerm_subnet" "subnet1" {
name = var.subnet_name
resource_group_name = var.resourcegroup1_name
virtual_network_name = azurerm_virtual_network.vnet1.name
address_prefixes = var.subnet_address
}
output "rgnameout" {
value = azurerm_virtual_network.vnet1.resource_group_name
}
output "rglocationout" {
value = azurerm_virtual_network.vnet1.location
}
output "subnetout" {
value = azurerm_subnet.subnet1.id
}
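One way to get a one-to-one mapping, sketched under the assumption that both modules iterate over the same vmdetails keys: pass vmdetails into the NIC module as well (declaring it there and at the root), keep the map output as it is, and index it by each.key in the VM resource.

# main.tf: hand the same vmdetails map to both modules
module "nic" {
  source              = "./Nic"
  vmdetails           = var.vmdetails # assumes vmdetails is also declared at the root
  resource_group_name = module.vnet1mod.rgnameout
  location            = module.vnet1mod.rglocationout
  subnet_id           = module.vnet1mod.subnetout
}

module "virtualmachine" {
  source                = "./VirtualMachine"
  vmdetails             = var.vmdetails
  resource_group_name   = module.vnet1mod.rgnameout
  location              = module.vnet1mod.rglocationout
  network_interface_ids = module.nic.netinterfaceoutput
}

# vm module: declare the incoming map and pick this VM's NIC by its key,
# so "vm1" gets vmnicA-1 and "vm2" gets vmnicA-2
variable "network_interface_ids" {
  type = map(string)
}

resource "azurerm_windows_virtual_machine" "vm1" {
  for_each              = var.vmdetails
  network_interface_ids = [var.network_interface_ids[each.key]]
  # ...the remaining arguments stay as in the original vm module...
}

Because both for_each loops share the keys "vm1" and "vm2", each VM looks up exactly its own NIC instead of receiving the whole list.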
I am new to Terraform and am using the template below to create an Azure App Service plan, App Service, and App Insights together.
# Configure the Azure provider
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.98"
    }
  }
  required_version = ">= 1.1.6"
}

provider "azurerm" {
  features {}
}
resource "azurerm_application_insights" "appService-app_insights" {
name ="${var.prefix}-${var.App_Insights}"
location = var.Location
resource_group_name = var.ResourceGroup
application_type = "web" # Node.JS ,java
}
resource "azurerm_app_service" "appservice" {
name ="${var.prefix}-${var.appservice_name}"
location = var.Location
resource_group_name = var.ResourceGroup
app_service_plan_id = azurerm_app_service_plan.appserviceplan.id
https_only = true
site_config {
linux_fx_version = "NODE|10.14"
}
app_settings = {
# "SOME_KEY" = "some-value"
"APPINSIGHTS_INSTRUMENTATIONKEY" = azurerm_application_insights.appService-app_insights.instrumentation_key
}
depends_on = [
azurerm_app_service_plan.appserviceplan,
azurerm_application_insights.appService-app_insights
]
}
# create the AppService Plan for the App Service hosting our website
resource "azurerm_app_service_plan" "appserviceplan" {
name ="${var.prefix}-${var.app_service_plan_name}"
location = var.Location
resource_group_name = var.ResourceGroup
kind ="linux"
reserved = true
sku {
tier = "Standard" #
size = "S1"
}
}
I am generating a variable.tf file at runtime, which is quite simple in this case:
variable "ResourceGroup" {
default = "TerraRG"
}
variable "Location" {
default = "westeurope"
}
variable "app_service_plan_name" {
default = "terra-asp"
}
variable "appservice_name" {
default = "terra-app"
}
variable "prefix" {
default = "pre"
}
variable "App_Insights" {
default = "terra-ai"
}
Everything is working well up to this point.
Now I am trying to extend my infra, and I want multiple App + App Service Plan + App Insights combinations, which might look like the JSON below:
{
  "_comment": "Web App Config",
  "webapps": [
    {
      "Appservice": "app1",
      "Appserviceplan": "asp1",
      "InstrumentationKey": "abc"
    },
    {
      "Appservice": "app2",
      "Appserviceplan": "asp2",
      "InstrumentationKey": "def"
    },
    {
      "Appservice": "app3",
      "Appserviceplan": "asp2",
      "InstrumentationKey": "def"
    }
  ]
}
How can I target such a resource creation?
Should I create the App Service Plans and App Insights first and then plan creating the apps? What would be a better approach for this scenario?
Since app1, app2, and app3 are not globally unique, I have tried with different names: the app service names testapprahuluni12345, testapp12346, and testapp12347.
main.tf
# Configure the Azure provider
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.98"
    }
  }
}

provider "azurerm" {
  features {}
}
resource "azurerm_application_insights" "appService-app_insights" {
name ="${var.prefix}-${var.App_Insights}"
location = var.Location
resource_group_name = var.ResourceGroup
application_type = "web" # Node.JS ,java
}
resource "azurerm_app_service_plan" "appserviceplan" {
count = length(var.app_service_plan_name)
name = var.app_service_plan_name[count.index]
location = var.Location
resource_group_name = var.ResourceGroup
kind ="linux"
reserved = true
sku {
tier = "Standard" #
size = "S1"
}
}
# create the AppService Plan for the App Service hosting our website
resource "azurerm_app_service" "appservice" {
count = length(var.app_names)
name = var.app_names[count.index]
location = var.Location
resource_group_name = var.ResourceGroup
app_service_plan_id = azurerm_app_service_plan.appserviceplan[count.index].id
https_only = true
site_config {
linux_fx_version = "NODE|10.14"
}
app_settings = {
# "SOME_KEY" = "some-value"
"APPINSIGHTS_INSTRUMENTATIONKEY" = azurerm_application_insights.appService-app_insights.instrumentation_key
}
depends_on = [
azurerm_app_service_plan.appserviceplan,
azurerm_application_insights.appService-app_insights
]
}
variable.tf
variable "ResourceGroup" {
default = "v-XXXXX--ree"
}
variable "Location" {
default = "West US 2"
}
/*variable "app_service_plan_name" {
default = "terra-asp"
}
variable "appservice_name" {
default = "terra-app"
}
*/
variable "prefix" {
default = "pre"
}
variable "App_Insights" {
default = "terra-ai"
}
variable "app_names" {
description = "App Service Names"
type = list(string)
default = ["testapprahuluni12345", "testapp12346", "testapp12347"]
}
variable "app_service_plan_name" {
description = "App Service Plan Name"
type = list(string)
default = ["asp1", "asp2", "asp2"]
}
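An alternative sketch that drives everything from the JSON shape in the question rather than from parallel lists, assuming the JSON sits next to the configuration as webapps.json: decode it, derive the distinct plan names, and let each app look up its plan by name.

locals {
  webapps = jsondecode(file("${path.module}/webapps.json")).webapps

  # distinct plan names, so asp2 is only created once
  plan_names = toset([for w in local.webapps : w.Appserviceplan])
}

resource "azurerm_app_service_plan" "appserviceplan" {
  for_each            = local.plan_names
  name                = each.value
  location            = var.Location
  resource_group_name = var.ResourceGroup
  kind                = "linux"
  reserved            = true

  sku {
    tier = "Standard"
    size = "S1"
  }
}

resource "azurerm_app_service" "appservice" {
  for_each            = { for w in local.webapps : w.Appservice => w }
  name                = each.value.Appservice
  location            = var.Location
  resource_group_name = var.ResourceGroup
  app_service_plan_id = azurerm_app_service_plan.appserviceplan[each.value.Appserviceplan].id
  https_only          = true

  site_config {
    linux_fx_version = "NODE|10.14"
  }
}

With for_each keyed on names instead of count, adding or removing an entry in the JSON no longer shifts the indexes of the other apps and plans.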
I'm trying to deploy a Virtual Machine Scale Set extension via Terraform, but there are a few issues here. The requirement was to implement it without a load balancer attached.
resource "azurerm_virtual_machine_scale_set" "example" {
name = "mytestscaleset-1"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
# automatic rolling upgrade
# automatic_os_upgrade = true
upgrade_policy_mode = "Rolling"
rolling_upgrade_policy {
max_batch_instance_percent = 20
max_unhealthy_instance_percent = 20
max_unhealthy_upgraded_instance_percent = 5
pause_time_between_batches = "PT0S"
}
sku {
name = "Standard_F2"
tier = "Standard"
capacity = 2
}
storage_profile_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
storage_profile_os_disk {
name = ""
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
storage_profile_data_disk {
lun = 0
caching = "ReadWrite"
create_option = "Empty"
disk_size_gb = 10
}
os_profile {
computer_name_prefix = "testvm"
admin_username = "myadmin"
}
os_profile_linux_config {
disable_password_authentication = true
ssh_keys {
path = "/home/myadmin/.ssh/authorized_keys"
key_data = file("C:/Users/User/Downloads/VmSS key/azkey")
}
}
network_profile {
name = "terraformnetworkprofile"
primary = true
ip_configuration {
name = "TestIPConfiguration"
primary = true
subnet_id = azurerm_subnet.example.id
public_ip_address_configuration {
name = "Avx192"
idle_timeout = 30
domain_name_label = "vjst23"
}
}
}
tags = {
environment = "staging"
}
}
Once deployed, it gives an error for the health probe:
│ Error: compute.VirtualMachineScaleSetsClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="BadRequest" Message="Rolling Upgrade mode is not supported for this Virtual Machine Scale Set because a health probe or health extension was not provided."
│
│ with azurerm_virtual_machine_scale_set.example,
│ on Se.tf line 81, in resource "azurerm_virtual_machine_scale_set" "example":
│ 81: resource "azurerm_virtual_machine_scale_set" "example" {
How can I provide a health probe directly if there is no load balancer attached to the deployment?
As you are deploying without a load balancer, you need to make the following changes in your code:
Change upgrade_policy_mode = "Rolling" to upgrade_policy_mode = "Manual" or "Automatic".
Remove the below block:
rolling_upgrade_policy {
  max_batch_instance_percent              = 20
  max_unhealthy_instance_percent          = 20
  max_unhealthy_upgraded_instance_percent = 5
  pause_time_between_batches              = "PT0S"
}
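Alternatively, if Rolling upgrades are a hard requirement even without a load balancer, the error message itself points at the other option: a health extension. A sketch of the Application Health extension added inside the scale set resource, assuming your instances expose an HTTP health endpoint (the port and path below are placeholders):

# goes inside the azurerm_virtual_machine_scale_set resource
extension {
  name                 = "HealthExtension"
  publisher            = "Microsoft.ManagedServices"
  type                 = "ApplicationHealthLinux" # ApplicationHealthWindows for Windows images
  type_handler_version = "1.0"

  # The scale set treats an instance as healthy when this endpoint returns 200.
  settings = jsonencode({
    protocol    = "http"
    port        = 8080
    requestPath = "/health"
  })
}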
I have the following azurerm_function_app Terraform section:
resource "azurerm_function_app" "main" {
name = "${var.storage_function_name}"
location = "${azurerm_resource_group.main.location}"
resource_group_name = "${azurerm_resource_group.main.name}"
app_service_plan_id = "${azurerm_app_service_plan.main.id}"
storage_connection_string = "${azurerm_storage_account.main.primary_connection_string}"
https_only = true
app_settings {
"APPINSIGHTS_INSTRUMENTATIONKEY" = "${azurerm_application_insights.main.instrumentation_key}"
}
}
How can I specify that the OS is Linux?
Since there is not much documentation, I used the following technique to construct the Terraform template.
1. Create the type of function app you want in the Azure portal.
2. Import the same resource using the terraform import command:
terraform import azurerm_function_app.functionapp1 /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.Web/sites/functionapp1
The following information will be retrieved:
id = /subscriptions/xxxx/resourceGroups/xxxxxx/providers/Microsoft.Web/sites/xxxx
app_service_plan_id = /subscriptions/xxx/resourceGroups/xxxx/providers/Microsoft.Web/serverfarms/xxxx
app_settings.% = 3
app_settings.FUNCTIONS_WORKER_RUNTIME = node
app_settings.MACHINEKEY_DecryptionKey = xxxxx
app_settings.WEBSITE_NODE_DEFAULT_VERSION = 10.14.1
client_affinity_enabled = false
connection_string.# = 0
default_hostname = xxxx.azurewebsites.net
enable_builtin_logging = false
enabled = true
https_only = false
identity.# = 0
kind = functionapp,linux,container
location = centralus
name = xxxxx
outbound_ip_addresses = xxxxxx
resource_group_name = xxxx
site_config.# = 1
site_config.0.always_on = true
site_config.0.linux_fx_version = DOCKER|microsoft/azure-functions-node8:2.0
site_config.0.use_32_bit_worker_process = true
site_config.0.websockets_enabled = false
site_credential.# = 1
site_credential.0.password =xxxxxx
site_credential.0.username = xxxxxx
storage_connection_string = xxxx
tags.% = 0
version = ~2
From this I built the following Terraform template:
provider "azurerm" {
}
resource "azurerm_resource_group" "linuxnodefunction" {
name = "azure-func-linux-node-rg"
location = "westus2"
}
resource "azurerm_storage_account" "linuxnodesa" {
name = "azurefunclinuxnodesa"
resource_group_name = "${azurerm_resource_group.linuxnodefunction.name}"
location = "${azurerm_resource_group.linuxnodefunction.location}"
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_app_service_plan" "linuxnodesp" {
name = "azure-func-linux-node-sp"
location = "${azurerm_resource_group.linuxnodefunction.location}"
resource_group_name = "${azurerm_resource_group.linuxnodefunction.name}"
kind = "Linux"
reserved = true
sku {
capacity = 1
size = "P1v2"
tier = "PremiunV2"
}
}
resource "azurerm_function_app" "linuxnodefuncapp" {
name = "azure-func-linux-node-function-app"
location = "${azurerm_resource_group.linuxnodefunction.location}"
resource_group_name = "${azurerm_resource_group.linuxnodefunction.name}"
app_service_plan_id = "${azurerm_app_service_plan.linuxnodesp.id}"
storage_connection_string = "${azurerm_storage_account.linuxnodesa.primary_connection_string}"
app_settings {
FUNCTIONS_WORKER_RUNTIME = "node"
WEBSITE_NODE_DEFAULT_VERSION = "10.14.1"
}
site_config {
always_on = true
linux_fx_version = "DOCKER|microsoft/azure-functions-node8:2.0"
use_32_bit_worker_process = true
websockets_enabled = false
}
}
Let us know your experience with this. I will try to test a few things with it.
I think you need to specify that in the app_service_plan block:
kind = "Linux"
kind - (Optional) The kind of the App Service Plan to create. Possible values are Windows (also available as App), Linux and FunctionApp (for a Consumption Plan). Defaults to Windows. Changing this forces a new resource to be created.
NOTE: When creating a Linux App Service Plan, the reserved field must be set to true.
Example from the Terraform docs:
resource "azurerm_resource_group" "test" {
name = "azure-functions-cptest-rg"
location = "westus2"
}
resource "azurerm_storage_account" "test" {
name = "functionsapptestsa"
resource_group_name = "${azurerm_resource_group.test.name}"
location = "${azurerm_resource_group.test.location}"
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_app_service_plan" "test" {
name = "azure-functions-test-service-plan"
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
kind = "Linux"
sku {
tier = "Dynamic"
size = "Y1"
}
properties {
reserved = true
}
}
resource "azurerm_function_app" "test" {
name = "test-azure-functions"
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
app_service_plan_id = "${azurerm_app_service_plan.test.id}"
storage_connection_string = "${azurerm_storage_account.test.primary_connection_string}"
}
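Note that this example comes from an older provider version; in more recent azurerm releases the properties block is gone and reserved sits at the top level, as the other snippets in this thread already show:

resource "azurerm_app_service_plan" "test" {
  name                = "azure-functions-test-service-plan"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  kind                = "Linux"
  reserved            = true # required for Linux plans

  sku {
    tier = "Dynamic"
    size = "Y1"
  }
}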
I am trying to set up a Service Fabric cluster, and while doing so I am creating an Azure virtual machine scale set with LinuxDiagnostic as one of the extensions. The following is the code for the VM scale set:
resource "azurerm_virtual_machine_scale_set" "sf_scale_set" {
name = "sf-scale-set-${terraform.workspace}"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.fusion.name}"
# automatic rolling upgrade
automatic_os_upgrade = true
upgrade_policy_mode = "Automatic"
# required when using rolling upgrade policy
health_probe_id = "${azurerm_lb_probe.sf_lb_probe.id}"
sku {
name = "${var.sf_scale_set_vm_config["name"]}"
tier = "${var.sf_scale_set_vm_config["tier"]}"
capacity = "${var.sf_scale_set_vm_config["capacity"]}"
}
storage_profile_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04"
version = "6.0.12"
}
storage_profile_os_disk {
name = ""
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile_secrets {
source_vault_id = "${var.sf_vault_id}"
vault_certificates {
certificate_url = "${var.sf_vault_url}"
}
}
storage_profile_data_disk {
lun = 0
caching = "ReadWrite"
create_option = "Empty"
disk_size_gb = 40
}
os_profile {
computer_name_prefix = "sf-vm-${terraform.workspace}"
admin_username = "hachadmin"
}
os_profile_linux_config {
disable_password_authentication = true
ssh_keys {
path = "/home/admin/.ssh/authorized_keys"
key_data = "${file("sshkeys/admin.pub")}"
}
}
network_profile {
name = "sf-vm-net-profile-${terraform.workspace}"
primary = true
ip_configuration {
name = "sf-ip-config-${terraform.workspace}"
primary = true
subnet_id = "${azurerm_subnet.sf_vnet_subnet.id}"
load_balancer_backend_address_pool_ids = ["${azurerm_lb_backend_address_pool.sf_be_vm_set.id}"]
load_balancer_inbound_nat_rules_ids = ["${element(azurerm_lb_nat_pool.sf_nat_vm_set.*.id, count.index)}"]
}
}
extension {
name = "sf-scale-set-extension-${terraform.workspace}"
publisher = "Microsoft.Azure.ServiceFabric"
type = "ServiceFabricLinuxNode"
type_handler_version = "1.0"
settings = "{ \"certificate\": { \"thumbprint\": \"${var.cert_thumbprint}\", \"x509StoreName\": \"My\" } , \"clusterEndpoint\": \"${azurerm_service_fabric_cluster.sf_service.cluster_endpoint}\", \"nodeTypeRef\": \"${terraform.workspace}-sf-node-type\", \"durabilityLevel\": \"${var.sf_reliability}\",\"nicPrefixOverride\": \"${azurerm_subnet.sf_vnet_subnet.address_prefix}\",\"enableParallelJobs\": \"true\"}"
protected_settings = "{\"StorageAccountKey1\": \"${azurerm_storage_account.sf_storage.primary_access_key}\", \"StorageAccountKey2\": \"${azurerm_storage_account.sf_storage.secondary_access_key}\"}"
}
extension {
name = "sf-scale-set-linux-diag-extension-${terraform.workspace}" # This extension connects vms to the cluster.
publisher = "Microsoft.OSTCExtensions"
type = "LinuxDiagnostic"
type_handler_version = "2.3"
auto_upgrade_minor_version = true
protected_settings = "{\"storageAccountName\": \"${azurerm_storage_account.sf_storage_app_diag.primary_access_key}\", \"StorageAccountKey1\": \"${azurerm_storage_account.sf_storage_app_diag.primary_access_key}\", \"StorageAccountKey2\": \"${azurerm_storage_account.sf_storage_app_diag.secondary_access_key}\"}"
settings = "${data.template_file.settings.rendered}"
}
tags {
Region = "${var.location}"
Createdby = "${var.created_by_tag}"
Team = "${var.team_tag}"
Environment = "${terraform.workspace}"
ninetofive = "${var.ninetofivetag}"
}
}
data "template_file" "settings" {
template = "${file("${path.module}/diagnostics/settings2.3.json.tpl")}"
vars {
xml_cfg = "${base64encode(data.template_file.wadcfg.rendered)}"
diag_storage_name = "${azurerm_storage_account.sf_storage_app_diag.name}"
}
}
data "template_file" "wadcfg" {
template = "${file("${path.module}/diagnostics/wadcfg.xml.tpl")}"
vars {
virtual_machine_id = "${azurerm_virtual_machine_scale_set.sf_scale_set.id}"
}
}
The end of the wadcfg file looks as follows:
<WadCfg>
  <PerformanceCounters scheduledTransferPeriod="PT1M">
  .....
  ......
  </PerformanceCounters>
  <Metrics resourceId="${virtual_machine_id}">
    <MetricAggregation scheduledTransferPeriod="PT1H"/>
    <MetricAggregation scheduledTransferPeriod="PT1M"/>
  </Metrics>
</DiagnosticMonitorConfiguration>
</WadCfg>
The settings2.3.json.tpl file is:
{
  "xmlCfg": "${xml_cfg}",
  "storageAccount": "${diag_storage_name}"
}
While trying to run the Terraform code I get the following error:
[+] Found tfvars file ./profiles/eu-sprint/eu-sprint.tfvars
Error: Cycle: data.template_file.wadcfg, data.template_file.settings, azurerm_virtual_machine_scale_set.sf_scale_set
I am assuming that Terraform is trying to render the wadcfg.xml.tpl template before the Azure VM scale set exists. My questions are:
1. How can I make Terraform wait until the Azure VM scale set is created before rendering the wadcfg.xml.tpl file?
2. As part of rendering wadcfg.xml.tpl I am passing the VM ID. I know this works when creating a single VM, but will the code above also work for a VM scale set without me explicitly looping through each VM? If I do have to loop through them, what would be the recommended approach?
3. I saw that there is a https://www.terraform.io/docs/providers/azurerm/r/virtual_machine_extension.html resource for installing VM extensions; will this also work for a VM scale set? If not, is there a better way to organize my settings and protected_settings parts so that they are reader friendly?
I would appreciate some help here.
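One common way to break such a cycle, sketched here on the assumption that the subscription ID is available from the azurerm_client_config data source: since the scale set's resource ID is predictable from its name and resource group, build the ID string by hand in the wadcfg template instead of referencing the resource.

data "azurerm_client_config" "current" {}

data "template_file" "wadcfg" {
  template = "${file("${path.module}/diagnostics/wadcfg.xml.tpl")}"

  vars {
    # Building the ID manually removes the dependency on
    # azurerm_virtual_machine_scale_set.sf_scale_set, which is the edge
    # that closed the cycle: the scale set can now be created after the
    # templates are rendered.
    virtual_machine_id = "/subscriptions/${data.azurerm_client_config.current.subscription_id}/resourceGroups/${azurerm_resource_group.fusion.name}/providers/Microsoft.Compute/virtualMachineScaleSets/sf-scale-set-${terraform.workspace}"
  }
}

This also touches the second question: the Metrics resourceId in a WAD config for a scale set is the ID of the scale set itself, not of the individual VMs, so no per-VM loop should be needed. As for the third question, the azurerm_virtual_machine_extension resource targets standalone VMs rather than scale sets, so the inline extension blocks used above are the right place for scale-set extensions; rendering the JSON from template files, as already done for the diagnostic settings, is probably the most readable way to organize settings and protected_settings.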