Error: waiting for creation of MsSql Database - Terraform failure

I have had to re-develop my pipeline for building out my infrastructure so that it uses local agent pools, and to convert the Ubuntu (Bash) code to Windows (PowerShell) code.
I am now at the point of building out my infrastructure, and it is failing on the most basic of tasks.
I have created my SQL Server and that seems to be OK. My logging infrastructure also builds fine, but I am really struggling to build a database on the SQL Server.
At its most basic, here is my code. The server builds OK:
resource "azurerm_mssql_server" "main" {
name = local.sqlServerName
resource_group_name = local.resourceGroupName
location = var.location
version = "12.0"
minimum_tls_version = "1.2"
administrator_login = var.sql_administrator_login
administrator_login_password = var.sql_administrator_login_password
tags = var.tags
}
resource "azurerm_sql_active_directory_administrator" "main" {
server_name = azurerm_mssql_server.main.name
resource_group_name = local.resourceGroupName
login = local.sql_ad_login
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = local.object_id
}
resource "azurerm_sql_firewall_rule" "main" {
name = var.sql_firewall_rule
resource_group_name = local.resourceGroupName
server_name = azurerm_mssql_server.main.name
start_ip_address = "0.0.0.0"
end_ip_address = "0.0.0.0"
}
resource "azurerm_mssql_database" "main" {
name = "${local.raptSqlDatabaseName}-${var.environment}"
server_id = azurerm_mssql_server.main.id
min_capacity = 0.5
max_size_gb = 100
zone_redundant = false
collation = "SQL_Latin1_General_CP1_CI_AS"
sku_name = "GP_S_Gen5_2"
auto_pause_delay_in_minutes = 60
create_mode = "Default"
}
I get an error saying:
Error: waiting for creation of MsSql Database "xxx-xxx-xxx-xxx-Prod" (MsSql Server Name "xxx-xxx-xxx-prod" / Resource Group "rg-xxx-xxx-xxx"): Code="InternalServerError" Message="An unexpected error occured while processing the request.
Before I had to redesign it all in PowerShell instead of Bash and use a local pool, I never had a problem and this worked fine.
I found the issue below, which reports the same error, but nothing else about it seems to match my situation. It is odd, because I can build the rest of my infra fine from the same main.tf file.
https://github.com/hashicorp/terraform-provider-azurerm/issues/13194
I am also noticing that the Terraform outputs are not working. Here is my output file:
output "sql_server_name" {
value = azurerm_mssql_server.main.fully_qualified_domain_name
}
output "sql_server_user" {
value = azurerm_mssql_server.main.administrator_login
}
output "sql_server_password" {
value = azurerm_mssql_server.main.administrator_login_password
sensitive = true
}
#output "cl_sql_database_name" {
# value = azurerm_mssql_database.cl.name
#}
#output "rapt_sql_database_name" {
# value = azurerm_mssql_database.rapt.name
#}
output "app_insights_instrumentation_key" {
value = azurerm_application_insights.main.instrumentation_key
}
Is there any chance this is linked?

Please use the latest Terraform version (i.e. 1.1.13) and azurerm provider (i.e. 2.92.0), as there were bugs in previous Azure API versions that resulted in a 500 error code; these have been fixed in recent versions, as mentioned in this GitHub issue. As for the outputs: they are only stored after a successful apply, so if the apply errors out you won't get any output.
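As a sketch, the version pins can be expressed in the configuration itself (the exact constraint values here are illustrative):

terraform {
  # Require a Terraform CLI version with the relevant fixes
  required_version = ">= 1.1.0"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      # Require a provider release that includes the 500-error fix
      version = ">= 2.92.0"
    }
  }
}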
I tested the same code with the latest versions on both PowerShell and Bash, as below:
provider "azurerm" {
features {}
}
data "azurerm_client_config" "current" {}
locals {
resourceGroupName="ansumantest"
sqlServerName="ansumantestsql"
sql_ad_login="sqladmin"
object_id= data.azurerm_client_config.current.object_id
raptSqlDatabaseName="ansserverdb"
}
variable "location" {
default="eastus"
}
variable "sql_administrator_login" {
default="4dm1n157r470r"
}
variable "sql_administrator_login_password" {
default="4-v3ry-53cr37-p455w0rd"
}
variable "sql_firewall_rule" {
default="ansumantestfirewall"
}
variable "environment" {
default="test"
}
resource "azurerm_mssql_server" "main" {
name = local.sqlServerName
resource_group_name = local.resourceGroupName
location = var.location
version = "12.0"
minimum_tls_version = "1.2"
administrator_login = var.sql_administrator_login
administrator_login_password = var.sql_administrator_login_password
}
resource "azurerm_sql_active_directory_administrator" "main" {
server_name = azurerm_mssql_server.main.name
resource_group_name = local.resourceGroupName
login = local.sql_ad_login
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = local.object_id
}
resource "azurerm_sql_firewall_rule" "main" {
name = var.sql_firewall_rule
resource_group_name = local.resourceGroupName
server_name = azurerm_mssql_server.main.name
start_ip_address = "0.0.0.0"
end_ip_address = "0.0.0.0"
}
resource "azurerm_mssql_database" "main" {
name = "${local.raptSqlDatabaseName}-${var.environment}"
server_id = azurerm_mssql_server.main.id
min_capacity = 0.5
max_size_gb = 100
zone_redundant = false
collation = "SQL_Latin1_General_CP1_CI_AS"
sku_name = "GP_S_Gen5_2"
auto_pause_delay_in_minutes = 60
create_mode = "Default"
}
Output.tf
output "sql_server_name" {
value = azurerm_mssql_server.main.fully_qualified_domain_name
}
output "sql_server_user" {
value = azurerm_mssql_server.main.administrator_login
}
output "sql_server_password" {
value = azurerm_mssql_server.main.administrator_login_password
sensitive = true
}
output "cl_sql_database_name" {
value = azurerm_mssql_database.main.name
}
Output: (screenshot of the successful apply output omitted)

Updating Terraform to the latest version didn't really help...
I turned on logging with TF_LOG (Terraform's logging environment variable).
Watching the activity logs in the resource group, I noticed it was building the DB server, then there was an error with AAD, and then the DB server build failed. So I removed the AAD work from my main.tf and re-ran the pipeline, and it worked fine. Phew...
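For what it's worth, newer azurerm provider versions can set the AAD administrator inline on the server resource instead of via the separate azurerm_sql_active_directory_administrator resource (which was later deprecated). A sketch, reusing the locals from the question:

resource "azurerm_mssql_server" "main" {
  name                         = local.sqlServerName
  resource_group_name          = local.resourceGroupName
  location                     = var.location
  version                      = "12.0"
  minimum_tls_version          = "1.2"
  administrator_login          = var.sql_administrator_login
  administrator_login_password = var.sql_administrator_login_password

  # Inline AAD admin; avoids a separate resource racing the DB creation
  azuread_administrator {
    login_username = local.sql_ad_login
    object_id      = local.object_id
    tenant_id      = data.azurerm_client_config.current.tenant_id
  }
}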

Related

Terraform - ADF to DB connectivity issue when tenant_id is provided in LS configuration - azurerm_data_factory_linked_service_azure_sql_database

Terraform Version
1.2.3
AzureRM Provider Version
v3.13.0
Affected Resource(s)/Data Source(s)
Azure data factory, SQL Database
Terraform Configuration Files
resource "azurerm_data_factory_linked_service_azure_sql_database" "sqldatabase_linked_service_10102022" {
count = (var.subResourcesInfo.sqlDatabaseName != "") ? 1 : 0
depends_on = [azurerm_data_factory_integration_runtime_azure.autoresolve_integration_runtime,
azurerm_data_factory_managed_private_endpoint.sqlserver_managed_endpoint]
name = "AzureSqlDatabase10102022"
data_factory_id = azurerm_data_factory.datafactory.id
integration_runtime_name = "AutoResolveIntegrationRuntime"
use_managed_identity = true
connection_string = format("Integrated Security=False;Data Source=%s.database.windows.net;Initial Catalog=%s;",
var.subResourcesInfo.sqlServerName,
var.subResourcesInfo.sqlDatabaseName)
}
Expected Behaviour
The issue is ADF-to-DB connectivity; the error is:
Operation on target DWH_DF_aaa failed: {'StatusCode':'DFExecutorUserError','Message':'Job failed due to reason: com.microsoft.dataflow.broker.InvalidOperationException: Only one valid authentication should be used for AzureSqlDatabase. ServicePrincipalAuthentication is invalid. One or two of servicePrincipalId/key/tenant is missing.','Details':''}
When we create this linked service using Terraform, we get tenant="" in the ADF linked service JSON file, which we suspect is causing the error above.
When we create the same linked service directly in the ADF UI, there is no tenant="" field in its JSON file, and if we use that linked service in a dataflow/pipeline, the ADF-to-DB communication works.
The expected behavior is: if we don't provide the tenant_id parameter in the Terraform code, the JSON should not contain tenant="" either, which would then allow connectivity to work.
I tried to reproduce the scenario in my environment. With the below code, I could create a linked service (connection) between Azure SQL Database and Azure Data Factory.
Code:
resource "azurerm_data_factory" "example" {
name = "kaADFexample"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
managed_virtual_network_enabled = true
}
resource "azurerm_storage_account" "example" {
name = "kaaaexample"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
account_kind = "BlobStorage"
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_data_factory_managed_private_endpoint" "example" {
name = "example"
data_factory_id = azurerm_data_factory.example.id
target_resource_id = azurerm_storage_account.example.id
subresource_name = "blob"
}
resource "azurerm_user_assigned_identity" "main" {
depends_on = [data.azurerm_resource_group.example]
name = "kasupports01-mid"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
}
resource "azurerm_data_factory_integration_runtime_azure" "test" {
name = "AutoResolveIntegrationRuntime"
data_factory_id = azurerm_data_factory.example.id
location = "AutoResolve"
virtual_network_enabled = true
}
resource "azurerm_data_factory_linked_service_azure_sql_database" "linked_service_azure_sql_database" {
name = "kaexampleLS"
data_factory_id = azurerm_data_factory.example.id
connection_string = "data source=serverhostname;initial catalog=master;user id=testUser;Password=test;integrated security=False;encrypt=True;connection timeout=30"
use_managed_identity = true
integration_runtime_name = azurerm_data_factory_integration_runtime_azure.test.name
depends_on = [azurerm_data_factory_integration_runtime_azure.test,
azurerm_data_factory_managed_private_endpoint.example]
}
output "id" {
value = azurerm_data_factory_linked_service_azure_sql_database.linked_service_azure_sql_database.id
}
Executed: terraform apply
Output:
id = "/subscriptions/xxxxxxxxx/resourceGroups/xxxxxx/providers/Microsoft.DataFactory/factories/kaADFexample/linkedservices/kaexampleLS"
If the error persists in your case, try removing the tenant attribute from the linked service JSON in the data factory just after the Terraform deployment is done.
Please check this known issue, mentioned by #chgenzel in terraform-provider-azurerm issues | GitHub.
ADF: Managed Identity
Linked service: Azure SQL
Reference: data_factory_linked_service_azure_sql_database | Terraform Registry

Receiving error from Azure API when attempting to use additional_unattend_content argument

Hoping someone might be able to assist. Using Terraform on Azure, I'm looking for a method to deploy Windows VMs and set up auto-login + WinRM. I've found that some people use azurerm_windows_virtual_machine.<name_of_vm>.additional_unattend_content to set this up.
Example found in provider github repo: https://github.com/hashicorp/terraform-provider-azurerm/blob/b0c897055329438be6a3a[…]ned-to-active-directory/modules/active-directory-domain/main.tf
I'm getting some errors from the Azure backend and was hoping someone more knowledgeable than me might have experience with this. I'm getting pushback from Azure support when I request their help. I appreciate any info anyone can provide!
Happy to provide logs or anything else that's needed!
resource "azurerm_windows_virtual_machine" "wks_win10" {
count = var.number_of_win10_wks
depends_on = [azurerm_network_interface.wks_nic_win10]
name = "wks-win10-${count.index}"
location = var.location
resource_group_name = var.rg_name
size = var.vm_size
provision_vm_agent = true
computer_name = "wks-win10-${count.index}"
admin_username = var.windows_username
admin_password = var.windows_password
network_interface_ids = ["${element(azurerm_network_interface.wks_nic_win10.*.id, count.index)}"]
os_disk {
caching = "ReadWrite"
name = "wks-win10-osdisk-${count.index}"
disk_size_gb = "250"
storage_account_type = "StandardSSD_LRS"
}
source_image_reference {
publisher = "MicrosoftWindowsDesktop"
offer = "Windows-10"
sku = "win10-21h2-ent"
version = "latest"
}
additional_unattend_content {
setting = "AutoLogon"
content = local.auto_logon_data
# content = "<AutoLogon><Password><Value>${var.windows_password}</Value></Password><Enabled>true</Enabled><LogonCount>3</LogonCount><Username>${var.windows_username}</Username></AutoLogon>"
}
winrm_listener {
protocol = "Http"
}
tags = merge(var.tags,
{
"kind"="workstation"
"os"="windows"
})
}
resource "azurerm_virtual_machine_extension" "wks_win10_vm_extension_network_watcher" {
count = var.number_of_win10_wks
depends_on = [azurerm_windows_virtual_machine.wks_win10]
name = "win10netwatch${count.index}"
virtual_machine_id = "${element(azurerm_windows_virtual_machine.wks_win10.*.id, count.index )}"
publisher = "Microsoft.Azure.NetworkWatcher"
type = "NetworkWatcherAgentWindows"
type_handler_version = "1.4"
auto_upgrade_minor_version = true
}
Errors:
module.compute.azurerm_network_interface.wks_nic_win10[0]: Creation complete after 1s [id=/subscriptions/<subscription-id>/resourceGroups/test-rg/providers/Microsoft.Network/networkInterfaces/wks-win10-nic-0]
module.compute.azurerm_windows_virtual_machine.wks_win10[0]: Creating...
2022-11-23T16:09:12.176-0500 [ERROR] provider.terraform-provider-azurerm_v3.30.0_x5: Response contains error diagnostic: #module=sdk.proto diagnostic_detail= diagnostic_severity=ERROR diagnostic_summary="creating Windows Virtual Machine: (Name "wks-win10-0" / Resource Group "test-rg"): compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidParameter" Message="The value of parameter windowsConfiguration.additionalUnattendContent.content is invalid." Target="windowsConfiguration.additionalUnattendContent.content"" #caller=github.com/hashicorp/terraform-plugin-go#v0.10.0/tfprotov5/internal/diag/diagnostics.go:56 tf_provider_addr=provider tf_req_id=6a628786-49b2-388d-85f9-07e4eeb8a618 tf_resource_type=azurerm_windows_virtual_machine tf_rpc=ApplyResourceChange tf_proto_version=5.2 timestamp=2022-11-23T16:09:12.176-0500
2022-11-23T16:09:12.181-0500 [ERROR] vertex "module.compute.azurerm_windows_virtual_machine.wks_win10[0]" error: creating Windows Virtual Machine: (Name "wks-win10-0" / Resource Group "test-rg"): compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidParameter" Message="The value of parameter windowsConfiguration.additionalUnattendContent.content is invalid." Target="windowsConfiguration.additionalUnattendContent.content"
I tried to reproduce the scenario in my environment:
Terraform code:
resource "azurerm_windows_virtual_machine" "example" {
name = "kaacctvm"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
network_interface_ids = [azurerm_network_interface.example.id]
size = "Standard_F2"
admin_username = "txxxin"
admin_password = "Pasxxx4!"
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
os_profile {
computer_name = "hostname"
admin_username = "txxxmin"
admin_password = "gfgxx4!"
}
source_image_reference {
publisher = "MicrosoftWindowsDesktop"
offer = "Windows-10"
sku = "win10-21h2-ent"
version = "latest"
}
additional_unattend_content {
setting = "AutoLogon"
content = "<AutoLogon><Password><Value>${var.windows_password}</Value></Password><Enabled>true</Enabled><LogonCount>3</LogonCount><Username>${var.windows_username}</Username></AutoLogon>"
}
winrm_listener {
protocol = "Http"
}
tags = {
environment = "staging"
}
}
resource "azurerm_virtual_machine_extension" "example" {
name = "kavyahostname"
virtual_machine_id = azurerm_windows_virtual_machine.example.id
publisher ="Microsoft.Azure.NetworkWatcher"
type = "NetworkWatcherAgentWindows"
type_handler_version = "1.4"
auto_upgrade_minor_version = true
settings = <<SETTINGS
{
"commandToExecute": "hostname && uptime"
}
SETTINGS
tags = {
environment = "Production"
}
}
Received a similar error:
VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidParameter" Message="The value of parameter “” is invalid."
Make sure the contents of the username and password variables are in the correct format when building the AutoLogon content:
content = "<AutoLogon><Password><Value>${var.windows_password}</Value></Password><Enabled>true</Enabled><LogonCount>3</LogonCount><Username>${var.windows_username}</Username></AutoLogon>"
Please check azure-quickstart-templates | issues | GitHub:
the content cannot be an array, since an array cannot be base64-encoded
check for colon mistakes or spelling mistakes
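For reference, the local.auto_logon_data value referenced in the question (but not defined there) could be declared along these lines; a sketch, with the variable names taken from the question:

locals {
  # XML fragment injected into the unattend.xml AutoLogon section;
  # the provider base64-encodes this string before sending it to Azure,
  # so it must be a single well-formed XML string
  auto_logon_data = "<AutoLogon><Password><Value>${var.windows_password}</Value></Password><Enabled>true</Enabled><LogonCount>3</LogonCount><Username>${var.windows_username}</Username></AutoLogon>"
}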
I have variables:
variable "windows_username" {
type = string
default = "xxx"
}
variable "windows_password" {
type = string
default = "xxx"
}
Then the VM extension was created successfully.
Also check this Microsoft.Compute/virtualMachines - Bicep, ARM template & Terraform AzAPI reference | Microsoft Learn

Create SQL firewall rules with the sql server using targeted apply

I have an azurerm_sql_server and two azurerm_sql_firewall_rules for this server.
If I do a targeted terraform apply to create a resource depending on the SQL server, the SQL server is created but the firewall rules are not.
Can I require the firewall rules to always be deployed with the SQL server?
"Bonus": The SQL server is in a module and the database using the server is in another module :(
Example code:
infrastructure/main.tf
resource "azurerm_sql_server" "test" {
count = var.enable_dbs ? 1 : 0
location = azurerm_resource_group.test.location
resource_group_name = azurerm_resource_group.test.name
name = local.dbs_name
version = "12.0"
administrator_login = var.dbs_admin_login
administrator_login_password = var.dbs_admin_password
}
resource "azurerm_sql_firewall_rule" "allow_azure_services" {
count = var.enable_dbs ? 1 : 0
resource_group_name = azurerm_resource_group.test.name
name = "AllowAccessToAzureServices"
server_name = azurerm_sql_server.test[0].name
start_ip_address = "0.0.0.0"
end_ip_address = "0.0.0.0"
}
webapp/main.tf
resource "azurerm_sql_database" "test" {
count = var.enable_db ? 1 : 0
location = var.location
resource_group_name = var.resource_group_name
server_name = var.dbs_name
name = var.project_name
requested_service_objective_name = var.db_sku_size
}
main.tf
module "infrastructure" {
source = "./infrastructure"
project_name = "infra"
enable_dbs = true
dbs_admin_login = "someusername"
dbs_admin_password = "somepassword"
}
module "my_webapp" {
source = "./webapp"
location = module.infrastructure.location
resource_group_name = module.infrastructure.resource_group_name
project_name = local.project_name
enable_db = true
dbs_name = module.infrastructure.dbs_name
dbs_admin_login = module.infrastructure.dbs_admin_login
dbs_admin_password = module.infrastructure.dbs_admin_password
}
If the whole script is applied using terraform apply, everything is fine.
But if only module.my_webapp is applied using terraform apply -target module.my_webapp, the firewall rule is missing, because it is not the target and the target doesn't directly require it.
The rule is necessary nonetheless and should be applied every time the database server itself is applied.
Possible "Solution":
Add the firewall rules as an output of the infrastructure module:
output "dbs_firewall_rules" {
value = concat(
azurerm_sql_firewall_rule.allow_azure_services,
azurerm_sql_firewall_rule.allow_office_ip
)
}
Then add this output as input to the webapp module:
variable "dbs_firewall_rules" {
description = "DB firewall rules required (used for the database in depends_on)"
type = list
}
And connect it in the main script:
module "my_webapp" {
...
dbs_firewall_rules = module.infrastructure.dbs_firewall_rules
...
}
Drawback: only one type of resource can be put in the list. This is why I renamed it from dbs_dependencies to dbs_firewall_rules.
If you have the firewall rules defined in the same .tf file as the SQL Server, they should be deployed whenever the server is, because the reference to the server builds up the dependency graph correctly. However, I have run into issues with this not working, specifically with SQL Server firewall rules. What I eventually did was leverage the depends_on meta-argument to make sure those are always created. That looks like this:
resource "azurerm_sql_firewall_rule" "test" {
name = "FirewallRule1"
resource_group_name = "${azurerm_resource_group.test.name}"
server_name = "${azurerm_sql_server.test.name}"
start_ip_address = "10.0.17.62"
end_ip_address = "10.0.17.62"
depends_on = [azurerm_sql_server.test]
}
resource "azurerm_sql_server" "test" {
name = "mysqlserver"
resource_group_name = "${azurerm_resource_group.test.name}"
location = "${azurerm_resource_group.test.location}"
version = "12.0"
administrator_login = "mradministrator"
administrator_login_password = "thisIsDog11"
tags = {
environment = "production"
}
}
Then just add each rule that you know you want to that list. If you are defining your rules outside your module, you should be able to make this an argument that you can pass into the module to force the linking.
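Another option, assuming Terraform 0.13 or later (which allows depends_on on module blocks), is to make the whole webapp module depend on the infrastructure module, so that a targeted apply of module.my_webapp also pulls in the firewall rules. A sketch:

module "my_webapp" {
  source = "./webapp"
  # ... inputs as in the original example ...

  # Forces every resource in this module to wait for everything in
  # module.infrastructure, including the firewall rules, even under -target
  depends_on = [module.infrastructure]
}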

Terraform: How to fix azurerm_cosmosdb_account creation times out

Creating a Cosmos DB via Terraform with "Replicate data globally" enabled times out after one hour with a status code:
StatusCode=202 -- Original Error: context deadline exceeded
Are there any solutions so that Terraform can complete successfully?
We tried adding the timeout operation to the Terraform code; however, it is not supported.
Terraform code that is timing out:
resource "azurerm_resource_group" "resource_group" {
name = "${local.name}"
location = "${var.azure_location}"
tags = "${var.tags}"
}
resource "azurerm_cosmosdb_account" "db" {
name = "${local.name}"
location = "${var.azure_location}"
resource_group_name = "${azurerm_resource_group.resource_group.name}"
offer_type = "Standard"
kind = "GlobalDocumentDB"
tags = "${var.tags}"
enable_automatic_failover = false
consistency_policy {
consistency_level = "Session"
}
geo_location {
location = "${var.failover_azure_location}"
failover_priority = 1
}
geo_location {
location = "${azurerm_resource_group.resource_group.location}"
failover_priority = 0
}
}
I'd expect Terraform to complete successfully, since the Cosmos DB is in fact created without error after Terraform has timed out.
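For what it's worth, more recent azurerm provider releases do document a timeouts block on azurerm_cosmosdb_account, so on current versions something along these lines should extend the creation deadline (a sketch; the two-hour value is illustrative):

resource "azurerm_cosmosdb_account" "db" {
  # ... configuration as above ...

  # Extends the default create deadline so multi-region
  # provisioning has time to finish
  timeouts {
    create = "2h"
  }
}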

How can I use Terraform's file provisioner to copy from my local machine onto a VM?

I'm new to Terraform and have so far managed to get a basic VM (plus Resource Manager trimmings) up and running on Azure. The next task I have in mind is to have Terraform copy a file from my local machine into the newly created instance. Ideally I'm after a solution where the file will be copied each time the apply command is run.
I feel like I'm pretty close but so far I just get endless "Still creating..." statements once I apply (the file is 0kb so after a couple of mins it feels safe to give up).
So far, this is what I've got (based on this code): https://stackoverflow.com/a/37866044/4941009
Network
resource "azurerm_public_ip" "pub-ip" {
name = "PublicIp"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
public_ip_address_allocation = "Dynamic"
domain_name_label = "${var.hostname}"
}
VM
resource "azurerm_virtual_machine" "vm" {
name = "${var.hostname}"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
vm_size = "${var.vm_size}"
network_interface_ids = ["${azurerm_network_interface.nic.id}"]
storage_image_reference {
publisher = "${var.image_publisher}"
offer = "${var.image_offer}"
sku = "${var.image_sku}"
version = "${var.image_version}"
}
storage_os_disk {
name = "${var.hostname}osdisk1"
vhd_uri = "${azurerm_storage_account.stor.primary_blob_endpoint}${azurerm_storage_container.cont.name}/${var.hostname}osdisk.vhd"
os_type = "${var.os_type}"
caching = "ReadWrite"
create_option = "FromImage"
}
os_profile {
computer_name = "${var.hostname}"
admin_username = "${var.admin_username}"
admin_password = "${var.admin_password}"
}
os_profile_windows_config {
provision_vm_agent = true
}
boot_diagnostics {
enabled = true
storage_uri = "${azurerm_storage_account.stor.primary_blob_endpoint}"
}
tags {
environment = "${var.environment}"
}
}
File Provisioner
resource "null_resource" "copy-test-file" {
connection {
type = "ssh"
host = "${azurerm_virtual_machine.vm.ip_address}"
user = "${var.admin_username}"
password = "${var.admin_password}"
}
provisioner "file" {
source = "test.txt"
destination = "/tmp/test.txt"
}
}
As an aside, if I pass incorrect login details to the provisioner (i.e. rerun this after the VM has already been created and supply a different password to the provisioner), the behaviour is the same. Can anyone suggest where I'm going wrong?
I eventually got this working. Honestly, I kind of forgot about this question, so I can't remember what my exact issue was; the example below, though, seems to work on my instance:
resource "null_resource" remoteExecProvisionerWFolder {
provisioner "file" {
source = "test.txt"
destination = "/tmp/test.txt"
}
connection {
host = "${azurerm_virtual_machine.vm.ip_address}"
type = "ssh"
user = "${var.admin_username}"
password = "${var.admin_password}"
agent = "false"
}
}
So it looks like the only difference here is the addition of agent = "false". This makes some sense, as there's only one SSH authentication agent on Windows, and it's probable I hadn't explicitly specified that agent before. However, it could well be that I ultimately changed something elsewhere in the configuration. Sorry, future people, for not being much help on this one.
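On the original requirement that the file be copied on each apply: a sketch, assuming Terraform 0.12+ (for the filemd5 function), using null_resource triggers so the provisioner re-runs whenever the file contents change:

resource "null_resource" "copy-test-file" {
  # Re-run the provisioner when test.txt changes; swap in
  # triggers = { always = timestamp() } to re-copy on every apply
  triggers = {
    file_hash = filemd5("test.txt")
  }

  connection {
    type     = "ssh"
    host     = azurerm_virtual_machine.vm.ip_address
    user     = var.admin_username
    password = var.admin_password
  }
  provisioner "file" {
    source      = "test.txt"
    destination = "/tmp/test.txt"
  }
}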
FYI, for Windows instances you can connect via type winrm:
resource "null_resource" "provision_web" {
connection {
host = "${azurerm_virtual_machine.vm.ip_address}"
type = "winrm"
user = "alex"
password = "alexiscool1!"
}
provisioner "file" {
source = "path/to/folder"
destination = "C:/path/to/destination"
}
}
