Terraform: SSH authentication failed (user@:22): ssh: handshake failed - azure

I wrote some Terraform code to create a new VM and want to execute a command on it via remote-exec, but it throws an SSH connection error:
Error: timeout - last error: SSH authentication failed (admin@:22): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain.
My Terraform code:
# Create a resource group if it doesn't exist
resource "azurerm_resource_group" "rg" {
  name = "${var.deployment}-mp-rg"
  location = "${var.azure_environment}"
  tags = {
    environment = "${var.deployment}"
  }
}

# Create virtual network
resource "azurerm_virtual_network" "vnet" {
  name = "${var.deployment}-mp-vnet"
  address_space = ["10.0.0.0/16"]
  location = "${var.azure_environment}"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  tags = {
    environment = "${var.deployment}"
  }
}

# Create subnet
resource "azurerm_subnet" "subnet" {
  name = "${var.deployment}-mp-subnet"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  virtual_network_name = "${azurerm_virtual_network.vnet.name}"
  address_prefix = "10.0.1.0/24"
}

# Create public IPs
resource "azurerm_public_ip" "publicip" {
  name = "${var.deployment}-mp-publicip"
  location = "${var.azure_environment}"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  allocation_method = "Dynamic"
  tags = {
    environment = "${var.deployment}"
  }
}

# Create Network Security Group and rule
resource "azurerm_network_security_group" "nsg" {
  name = "${var.deployment}-mp-nsg"
  location = "${var.azure_environment}"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  security_rule {
    name = "SSH"
    priority = 1001
    direction = "Inbound"
    access = "Allow"
    protocol = "Tcp"
    source_port_range = "*"
    destination_port_range = "22"
    source_address_prefix = "*"
    destination_address_prefix = "*"
  }
  tags = {
    environment = "${var.deployment}"
  }
}

# Create network interface
resource "azurerm_network_interface" "nic" {
  name = "${var.deployment}-mp-nic"
  location = "${var.azure_environment}"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  network_security_group_id = "${azurerm_network_security_group.nsg.id}"
  ip_configuration {
    name = "${var.deployment}-mp-nicconfiguration"
    subnet_id = "${azurerm_subnet.subnet.id}"
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id = "${azurerm_public_ip.publicip.id}"
  }
  tags = {
    environment = "${var.deployment}"
  }
}

# Generate random text for a unique storage account name
resource "random_id" "randomId" {
  keepers = {
    # Generate a new ID only when a new resource group is defined
    resource_group = "${azurerm_resource_group.rg.name}"
  }
  byte_length = 8
}

# Create storage account for boot diagnostics
resource "azurerm_storage_account" "storageaccount" {
  name = "diag${random_id.randomId.hex}"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  location = "${var.azure_environment}"
  account_tier = "Standard"
  account_replication_type = "LRS"
  tags = {
    environment = "${var.deployment}"
  }
}

# Create virtual machine
resource "azurerm_virtual_machine" "vm" {
  name = "${var.deployment}-mp-vm"
  location = "${var.azure_environment}"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  network_interface_ids = ["${azurerm_network_interface.nic.id}"]
  vm_size = "Standard_DS1_v2"
  storage_os_disk {
    name = "${var.deployment}-mp-disk"
    caching = "ReadWrite"
    create_option = "FromImage"
    managed_disk_type = "Premium_LRS"
  }
  storage_image_reference {
    publisher = "Canonical"
    offer = "UbuntuServer"
    sku = "16.04-LTS"
    version = "latest"
  }
  os_profile {
    computer_name = "${var.deployment}-mp-ansible"
    admin_username = "${var.ansible_user}"
  }
  os_profile_linux_config {
    disable_password_authentication = true
    ssh_keys {
      path = "/home/${var.ansible_user}/.ssh/authorized_keys"
      key_data = "${var.public_key}"
    }
  }
  boot_diagnostics {
    enabled = "true"
    storage_uri = "${azurerm_storage_account.storageaccount.primary_blob_endpoint}"
  }
  tags = {
    environment = "${var.deployment}"
  }
}

resource "null_resource" "ssh_connection" {
  connection {
    host = "${azurerm_public_ip.publicip.ip_address}"
    type = "ssh"
    private_key = "${file(var.private_key)}"
    port = 22
    user = "${var.ansible_user}"
    agent = false
    timeout = "1m"
  }
  provisioner "remote-exec" {
    inline = ["sudo apt-get -qq install python"]
  }
}
I have tried to SSH into the new VM manually with ssh admin@xx.xx.xx.xx and it works. Looking at the error message, I then output the parameter ${azurerm_public_ip.publicip.ip_address}, but it is null, so I think that is why the SSH authentication failed, though I don't know the underlying cause. If I want to SSH into the server via the Terraform script, how can I modify the code?

Your issue is that Terraform has built a dependency graph telling it that the only dependency of the null_resource.ssh_connection is the azurerm_public_ip.publicip resource, so it starts trying to connect before the instance has been created.
This in itself isn't a massive issue, as the provisioner would normally retry in case SSH isn't yet available, but the connection details are determined as soon as the null resource starts. And with the azurerm_public_ip set to an allocation_method of Dynamic, it won't get its IP address until after it has been attached to a resource:
Note Dynamic Public IP Addresses aren't allocated until they're assigned to a resource (such as a Virtual Machine or a Load Balancer) by design within Azure - more information is available below.
There are a few different ways you can solve this. You could make the null_resource depend on the azurerm_virtual_machine.vm resource, either via interpolation or via depends_on:
resource "null_resource" "ssh_connection" {
  connection {
    host = "${azurerm_public_ip.publicip.ip_address}"
    type = "ssh"
    private_key = "${file(var.private_key)}"
    port = 22
    user = "${var.ansible_user}"
    agent = false
    timeout = "1m"
  }
  provisioner "remote-exec" {
    inline = [
      "echo ${azurerm_virtual_machine.vm.id}",
      "sudo apt-get -qq install python",
    ]
  }
}
or
resource "null_resource" "ssh_connection" {
  depends_on = ["azurerm_virtual_machine.vm"]
  connection {
    host = "${azurerm_public_ip.publicip.ip_address}"
    type = "ssh"
    private_key = "${file(var.private_key)}"
    port = 22
    user = "${var.ansible_user}"
    agent = false
    timeout = "1m"
  }
  provisioner "remote-exec" {
    inline = ["sudo apt-get -qq install python"]
  }
}
A better approach here would be to run the provisioner as part of the azurerm_virtual_machine.vm resource instead of a null_resource. The usual reasons to launch a provisioner from a null_resource are that you need to wait until something else has happened to a resource (such as attaching a disk), or that there's no appropriate resource to attach it to, but neither applies here. So instead of your existing null_resource you'd move the provisioner into the azurerm_virtual_machine.vm resource:
resource "azurerm_virtual_machine" "vm" {
  # ...
  provisioner "remote-exec" {
    connection {
      host = "${azurerm_public_ip.publicip.ip_address}"
      type = "ssh"
      private_key = "${file(var.private_key)}"
      port = 22
      user = "${var.ansible_user}"
      agent = false
      timeout = "1m"
    }
    inline = ["sudo apt-get -qq install python"]
  }
}
For many resources this also allows you to refer to the outputs of the resource you are provisioning by using the self keyword. Unfortunately the azurerm_virtual_machine resource doesn't seem to easily expose the IP address of the VM due to this being set by the network_interface_ids.
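One possible workaround (a sketch, not part of the original answer, reusing the question's resource names): look the address up with a data source once the VM exists, so the read happens after the dynamic IP has been allocated and attached:

```hcl
# Sketch: read the dynamically allocated IP only after the VM exists.
data "azurerm_public_ip" "vm_ip" {
  name                = "${azurerm_public_ip.publicip.name}"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  depends_on          = ["azurerm_virtual_machine.vm"]
}

output "vm_public_ip" {
  value = "${data.azurerm_public_ip.vm_ip.ip_address}"
}
```

The connection block's host could then reference data.azurerm_public_ip.vm_ip.ip_address instead of the resource's (still-empty) attribute.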


Error: host for provisioner cannot be empty

I am working on the main.tf file for creating a virtual machine in Azure with remote execution, and I would also like to create and download the SSH key .pem file in this file to access the Linux VM.
My main.tf file:
# Configure the Microsoft Azure Provider
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      version = "~>2.0"
    }
  }
}

provider "azurerm" {
  features {}
  subscription_id = var.subscription_id
  client_id = var.client_id
  client_secret = var.client_secret
  tenant_id = var.tenant_id
}

# Create a resource group if it doesn't exist
resource "azurerm_resource_group" "myterraformgroup" {
  name = var.resource_group
  location = var.resource_group_location
  tags = {
    environment = "Terraform Demo"
  }
}

# Create virtual network
resource "azurerm_virtual_network" "myterraformnetwork" {
  name = "myVnet"
  address_space = ["10.0.0.0/16"]
  location = "eastus"
  resource_group_name = azurerm_resource_group.myterraformgroup.name
  tags = {
    environment = "Terraform Demo"
  }
}

# Create subnet
resource "azurerm_subnet" "myterraformsubnet" {
  name = "mySubnet"
  resource_group_name = azurerm_resource_group.myterraformgroup.name
  virtual_network_name = azurerm_virtual_network.myterraformnetwork.name
  address_prefixes = ["10.0.1.0/24"]
}

# Create public IPs
resource "azurerm_public_ip" "myterraformpublicip" {
  name = "myPublicIP"
  location = "eastus"
  resource_group_name = azurerm_resource_group.myterraformgroup.name
  allocation_method = "Dynamic"
  tags = {
    environment = "Terraform Demo"
  }
}

# Create Network Security Group and rule
resource "azurerm_network_security_group" "myterraformnsg" {
  name = "myNetworkSecurityGroup"
  location = "eastus"
  resource_group_name = azurerm_resource_group.myterraformgroup.name
  security_rule {
    name = "SSH"
    priority = 1001
    direction = "Inbound"
    access = "Allow"
    protocol = "Tcp"
    source_port_range = "*"
    destination_port_range = "22"
    source_address_prefix = "*"
    destination_address_prefix = "*"
  }
  tags = {
    environment = "Terraform Demo"
  }
}

# Create network interface
resource "azurerm_network_interface" "myterraformnic" {
  name = "myNIC"
  location = "eastus"
  resource_group_name = azurerm_resource_group.myterraformgroup.name
  ip_configuration {
    name = "myNicConfiguration"
    subnet_id = azurerm_subnet.myterraformsubnet.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id = azurerm_public_ip.myterraformpublicip.id
  }
  tags = {
    environment = "Terraform Demo"
  }
}

# Connect the security group to the network interface
resource "azurerm_network_interface_security_group_association" "example" {
  network_interface_id = azurerm_network_interface.myterraformnic.id
  network_security_group_id = azurerm_network_security_group.myterraformnsg.id
}

# Generate random text for a unique storage account name
resource "random_id" "randomId" {
  keepers = {
    # Generate a new ID only when a new resource group is defined
    resource_group = azurerm_resource_group.myterraformgroup.name
  }
  byte_length = 8
}

# Create storage account for boot diagnostics
resource "azurerm_storage_account" "mystorageaccount" {
  name = "diag${random_id.randomId.hex}"
  resource_group_name = azurerm_resource_group.myterraformgroup.name
  location = "eastus"
  account_tier = "Standard"
  account_replication_type = "LRS"
  tags = {
    environment = "Terraform Demo"
  }
}

# Create (and display) an SSH key
resource "tls_private_key" "example_ssh" {
  algorithm = "RSA"
  rsa_bits = 2048
}

output "tls_private_key" {
  value = tls_private_key.example_ssh.private_key_pem
  sensitive = true
}

# Create virtual machine
resource "azurerm_linux_virtual_machine" "myterraformvm" {
  name = "myVM"
  location = "eastus"
  resource_group_name = azurerm_resource_group.myterraformgroup.name
  network_interface_ids = [azurerm_network_interface.myterraformnic.id]
  size = "Standard_DS1_v2"
  os_disk {
    name = "myOsDisk"
    caching = "ReadWrite"
    storage_account_type = "Premium_LRS"
  }
  source_image_reference {
    publisher = "Canonical"
    offer = "UbuntuServer"
    sku = "18.04-LTS"
    version = "latest"
  }
  computer_name = "myvm"
  admin_username = "azureuser"
  disable_password_authentication = true
  admin_ssh_key {
    username = "azureuser"
    public_key = tls_private_key.example_ssh.public_key_openssh
  }
  boot_diagnostics {
    storage_account_uri = azurerm_storage_account.mystorageaccount.primary_blob_endpoint
  }
  tags = {
    environment = "Terraform Demo"
  }
}

resource "null_resource" "execute" {
  connection {
    type = "ssh"
    agent = false
    user = "azureuser"
    host = azurerm_public_ip.myterraformpublicip.ip_address
    private_key = tls_private_key.example_ssh.private_key_pem
  }
  provisioner "file" {
    source = "./config"
    destination = "~/"
  }
  provisioner "remote-exec" {
    inline = [
      "chmod 755 ~/scripts/*",
      "sudo sh ~/scripts/foreman_prerequisite_config.sh",
    ]
  }
  depends_on = [azurerm_linux_virtual_machine.myterraformvm]
}
I'm facing the error below when running terraform apply:
null_resource.execute: Provisioning with 'file'...
╷
│ Error: file provisioner error
│
│   with null_resource.execute,
│   on main.tf line 184, in resource "null_resource" "execute":
│  184:   provisioner "file" {
│
│ host for provisioner cannot be empty
╵
Please help me to resolve this issue. Thanks in advance!
According to the Azure provider documentation [1], when the public IP allocation type is Dynamic, you should use the data source to get the IP address:
data "azurerm_public_ip" "myterraformpublicip" {
  name = azurerm_public_ip.myterraformpublicip.name
  resource_group_name = azurerm_linux_virtual_machine.myterraformvm.resource_group_name
}
Then, in the host argument of the null_resource you should set the following:
host = data.azurerm_public_ip.myterraformpublicip.ip_address
However, this might not fix the issue you have, as there seems to be a problem with this version of the Azure provider for Linux VMs [2]:
In this release there's a known issue where the public_ip_address and public_ip_addresses fields may not be fully populated for Dynamic Public IP's.
The second part of the question was related to generating an SSH key which can be used later on to access a VM. In your question you have this code:
resource "tls_private_key" "example_ssh" {
  algorithm = "RSA"
  rsa_bits = 4096
}

output "tls_private_key" {
  value = tls_private_key.example_ssh.private_key_pem
  sensitive = true
}
Based on the answer you linked in the comments [3], the output is not needed. A local_file resource can instead write the private key into the same directory:
resource "tls_private_key" "example_ssh" {
  algorithm = "RSA"
  rsa_bits = 4096
}

resource "local_file" "private_key_file" {
  content = tls_private_key.example_ssh.private_key_pem
  filename = "${path.root}/private-key.pem"
}
Then, in the null_resource, you should add the following (note that the connection's private_key argument takes the key contents, not a path, hence the file() call, and depends_on belongs inside the resource block):
resource "null_resource" "execute" {
  connection {
    type = "ssh"
    agent = false
    user = "azureuser"
    host = data.azurerm_public_ip.myterraformpublicip.ip_address
    private_key = file("${path.root}/private-key.pem")
  }
  provisioner "file" {
    source = "./config"
    destination = "~/"
  }
  provisioner "remote-exec" {
    inline = [
      "chmod 755 ~/scripts/*",
      "sudo sh ~/scripts/foreman_prerequisite_config.sh",
    ]
  }
  depends_on = [azurerm_linux_virtual_machine.myterraformvm]
}
Note that you probably should not use the tls_private_key resource for production environments [4]:
The private key generated by this resource will be stored unencrypted in your Terraform state file. Use of this resource for production deployments is not recommended. Instead, generate a private key file outside of Terraform and distribute it securely to the system where Terraform will be run.
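A minimal sketch of that recommendation, assuming a key pair generated beforehand outside Terraform (for example with ssh-keygen) at the hypothetical path ./vm-key; only the public half enters the configuration, so the private key never lands in state:

```hcl
# Assumes a pre-existing key pair created outside Terraform, e.g.:
#   ssh-keygen -t rsa -b 4096 -f ./vm-key -N ""
resource "azurerm_linux_virtual_machine" "example" {
  # ... other arguments as in the question ...
  admin_ssh_key {
    username   = "azureuser"
    public_key = file("./vm-key.pub")  # hypothetical path
  }
}
```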
[1] https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/public_ip#example-usage-retrieve-the-dynamic-public-ip-of-a-new-vm
[2] https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/linux_virtual_machine#:~:text=In%20this%20release%20there%27s%20a%20known%20issue%20where%20the%20public_ip_address%20and%20public_ip_addresses%20fields%20may%20not%20be%20fully%20populated%20for%20Dynamic%20Public%20IP%27s.
[3] https://stackoverflow.com/a/67379867/8343484
[4] https://registry.terraform.io/providers/hashicorp/tls/latest/docs/resources/private_key

Trouble sshing into Azure Linux Virtual Machine

I followed the following guide to set up a Linux Virtual Machine using Terraform:
https://learn.microsoft.com/en-us/azure/developer/terraform/create-linux-virtual-machine-with-infrastructure
Everything was successfully created in Azure. I am having trouble with the last step of being able to SSH into the virtual machine. I use the following command in Windows PowerShell:
ssh azureuser@public_ip_here
It gives me the following error:
azureuser@52.186.144.190: Permission denied (publickey).
Things I've tried:
Using the normal ssh command as above
Putting the private key in a .pem file, assigning it restricted permissions, and passing it in with ssh -i; this doesn't work either
Using the RDP file downloaded from the Azure portal and importing it into RDP, which also fails with an error
Running the test connection feature for the virtual machine in the Azure portal, which shows the connection as successful, yet I'm still not able to access the VM
I'm wondering if I have to somehow configure the Azure portal to allow myself to be able to ssh in the VM.
My main.tf code is:
provider "azurerm" {
  # The "features" block is required for AzureRM provider 2.x.
  # If you're using version 1.x, the "features" block is not allowed.
  version = "~>2.0"
  features {}
}
resource "azurerm_resource_group" "myterraformgroup" {
  name = "myResourceGroup"
  location = "eastus"
  tags = {
    environment = "Terraform Demo"
  }
}

resource "azurerm_virtual_network" "myterraformnetwork" {
  name = "myVnet"
  address_space = ["10.0.0.0/16"]
  location = "eastus"
  resource_group_name = azurerm_resource_group.myterraformgroup.name
  tags = {
    environment = "Terraform Demo"
  }
}

resource "azurerm_subnet" "myterraformsubnet" {
  name = "mySubnet"
  resource_group_name = azurerm_resource_group.myterraformgroup.name
  virtual_network_name = azurerm_virtual_network.myterraformnetwork.name
  address_prefixes = ["10.0.1.0/24"]
}

resource "azurerm_public_ip" "myterraformpublicip" {
  name = "myPublicIP"
  location = "eastus"
  resource_group_name = azurerm_resource_group.myterraformgroup.name
  allocation_method = "Dynamic"
  tags = {
    environment = "Terraform Demo"
  }
}

resource "azurerm_network_security_group" "myterraformnsg" {
  name = "myNetworkSecurityGroup"
  location = "eastus"
  resource_group_name = azurerm_resource_group.myterraformgroup.name
  security_rule {
    name = "SSH"
    priority = 1001
    direction = "Inbound"
    access = "Allow"
    protocol = "Tcp"
    source_port_range = "*"
    destination_port_range = "22"
    source_address_prefix = "*"
    destination_address_prefix = "*"
  }
  tags = {
    environment = "Terraform Demo"
  }
}

resource "azurerm_network_interface" "myterraformnic" {
  name = "myNIC"
  location = "eastus"
  resource_group_name = azurerm_resource_group.myterraformgroup.name
  ip_configuration {
    name = "myNicConfiguration"
    subnet_id = azurerm_subnet.myterraformsubnet.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id = azurerm_public_ip.myterraformpublicip.id
  }
  tags = {
    environment = "Terraform Demo"
  }
}

resource "azurerm_network_interface_security_group_association" "example" {
  network_interface_id = azurerm_network_interface.myterraformnic.id
  network_security_group_id = azurerm_network_security_group.myterraformnsg.id
}

resource "random_id" "randomId" {
  keepers = {
    # Generate a new ID only when a new resource group is defined
    resource_group = azurerm_resource_group.myterraformgroup.name
  }
  byte_length = 8
}

resource "azurerm_storage_account" "mystorageaccount" {
  name = "diag${random_id.randomId.hex}"
  resource_group_name = azurerm_resource_group.myterraformgroup.name
  location = "eastus"
  account_tier = "Standard"
  account_replication_type = "LRS"
  tags = {
    environment = "Terraform Demo"
  }
}

resource "tls_private_key" "example_ssh" {
  algorithm = "RSA"
  rsa_bits = 4096
}

output "tls_private_key" {
  value = tls_private_key.example_ssh.private_key_pem
}

resource "azurerm_linux_virtual_machine" "myterraformvm" {
  name = "myVM"
  location = "eastus"
  resource_group_name = azurerm_resource_group.myterraformgroup.name
  network_interface_ids = [azurerm_network_interface.myterraformnic.id]
  size = "Standard_DS1_v2"
  os_disk {
    name = "myOsDisk"
    caching = "ReadWrite"
    storage_account_type = "Premium_LRS"
  }
  source_image_reference {
    publisher = "Canonical"
    offer = "UbuntuServer"
    sku = "18.04-LTS"
    version = "latest"
  }
  computer_name = "myvm"
  admin_username = "azureuser"
  disable_password_authentication = true
  admin_ssh_key {
    username = "azureuser"
    public_key = tls_private_key.example_ssh.public_key_openssh
  }
  boot_diagnostics {
    storage_account_uri = azurerm_storage_account.mystorageaccount.primary_blob_endpoint
  }
  tags = {
    environment = "Terraform Demo"
  }
}
Any help/pointers would be greatly appreciated!
After my validation, you could save the output private PEM key to a file named key.pem in the home directory, for example C:\Users\username\ on Windows 10 or /home/username/ on Linux.
Then you can access the Azure VM via this command in the shell:
ssh -i "C:\Users\username\key.pem" azureuser@23.x.x.x
In addition, the private key generated by tls_private_key will be stored unencrypted in your Terraform state file. It's recommended to generate a private key file outside of Terraform and distribute it securely to the system where Terraform will be run.
You can use ssh-keygen in PowerShell on Windows 10 to create the key pair on the client machine. The key pair is saved into the directory C:\Users\username\.ssh.
For example, you can then send the public key to the Azure VM with the Terraform file function:
admin_ssh_key {
  username = "azureuser"
  public_key = file("C:\\Users\\someusername\\.ssh\\id_rsa.pub")
  # tls_private_key.example_ssh.public_key_openssh
}
First, create the key:
ssh-keygen -t rsa -b 2048 -C email@example.com
Second, add the path of the key:
admin_ssh_key {
  username = "azureuser"
  public_key = file("C:\\Users\\someusername\\.ssh\\id_rsa.pub")
}
Finally, log in:
ssh -i "C:\Users\someusername\.ssh\id_rsa" azureuser@20.x.x.x

Azure The 'resourceTargetId' property of endpoint 'vm1-TF' is invalid or missing

I want to implement Traffic Manager in Terraform between two VMs in different locations (West Europe and North Europe). I attach my code below, but I don't know how to configure target_resource_id for each VM, because the VMs (and the networks) were created in a for loop. The traffic manager should switch to the secondary VM in case of failure of the first. Any ideas?
My code:
variable "subscription_id" {}
variable "tenant_id" {}
variable "environment" {}
variable "azurerm_resource_group_name" {}

variable "locations" {
  type = map(string)
  default = {
    vm1 = "North Europe"
    vm2 = "West Europe"
  }
}

# Configure the Azure Provider
provider "azurerm" {
  subscription_id = var.subscription_id
  tenant_id = var.tenant_id
  version = "=2.10.0"
  features {}
}

resource "azurerm_virtual_network" "main" {
  for_each = var.locations
  name = "${each.key}-network"
  address_space = ["10.0.0.0/16"]
  location = each.value
  resource_group_name = var.azurerm_resource_group_name
}

resource "azurerm_subnet" "internal" {
  for_each = var.locations
  name = "${each.key}-subnet"
  resource_group_name = var.azurerm_resource_group_name
  virtual_network_name = azurerm_virtual_network.main[each.key].name
  address_prefixes = ["10.0.2.0/24"]
}

resource "azurerm_public_ip" "example" {
  for_each = var.locations
  name = "${each.key}-pip"
  location = each.value
  resource_group_name = var.azurerm_resource_group_name
  allocation_method = "Static"
  idle_timeout_in_minutes = 30
  tags = {
    environment = "dev01"
  }
}

resource "azurerm_network_interface" "main" {
  for_each = var.locations
  name = "${each.key}-nic"
  location = each.value
  resource_group_name = var.azurerm_resource_group_name
  ip_configuration {
    name = "testconfiguration1"
    subnet_id = azurerm_subnet.internal[each.key].id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id = azurerm_public_ip.example[each.key].id
  }
}

resource "random_password" "password" {
  length = 16
  special = true
  override_special = "_%#"
}

resource "azurerm_virtual_machine" "main" {
  for_each = var.locations
  name = "${each.key}t-vm"
  location = each.value
  resource_group_name = var.azurerm_resource_group_name
  network_interface_ids = [azurerm_network_interface.main[each.key].id]
  vm_size = "Standard_D2s_v3"
  storage_image_reference {
    publisher = "Canonical"
    offer = "UbuntuServer"
    sku = "18.04-LTS"
    version = "latest"
  }
  storage_os_disk {
    name = "${each.key}-myosdisk1"
    caching = "ReadWrite"
    create_option = "FromImage"
    managed_disk_type = "Standard_LRS"
  }
  os_profile {
    computer_name = "${each.key}-hostname"
    admin_username = "testadmin"
    admin_password = random_password.password.result
  }
  os_profile_linux_config {
    disable_password_authentication = false
  }
  tags = {
    environment = "dev01"
  }
}

resource "random_id" "server" {
  keepers = {
    azi_id = 1
  }
  byte_length = 8
}

resource "azurerm_traffic_manager_profile" "example" {
  name = random_id.server.hex
  resource_group_name = var.azurerm_resource_group_name
  traffic_routing_method = "Priority"
  dns_config {
    relative_name = random_id.server.hex
    ttl = 100
  }
  monitor_config {
    protocol = "http"
    port = 80
    path = "/"
    interval_in_seconds = 30
    timeout_in_seconds = 9
    tolerated_number_of_failures = 3
  }
  tags = {
    environment = "dev01"
  }
}

resource "azurerm_traffic_manager_endpoint" "first-vm" {
  for_each = var.locations
  name = "${each.key}-TF"
  resource_group_name = var.azurerm_resource_group_name
  profile_name = "${azurerm_traffic_manager_profile.example.name}"
  target_resource_id = "[azurerm_network_interface.main[each.key].id]"
  type = "azureEndpoints"
  priority = "${[each.key] == "vm1" ? 1 : 2}"
}
My error:
Error: trafficmanager.EndpointsClient#CreateOrUpdate: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error.
Status=400 Code="BadRequest" Message="The 'resourceTargetId' property of endpoint 'vm1-TF' is invalid or missing.
The property must be specified only for the following endpoint types: AzureEndpoints, NestedEndpoints.
You must have read access to the resource to which it refers."
According to this documentation:
Azure endpoints are used for Azure-based services in Traffic Manager. The following Azure resource types are supported:
PaaS cloud services
Web Apps
Web App Slots
PublicIPAddress resources (which can be connected to VMs either directly or via an Azure Load Balancer). The publicIpAddress must have a DNS name assigned to be used in a Traffic Manager profile.
In your code, you need to change the target resource ID to the public IP address. You should not pass the network interface ID:
resource "azurerm_traffic_manager_endpoint" "first-vm" {
  for_each = var.locations
  name = "${each.key}-TF"
  resource_group_name = var.azurerm_resource_group_name
  profile_name = azurerm_traffic_manager_profile.example.name
  target_resource_id = azurerm_public_ip.example[each.key].id
  type = "azureEndpoints"
  priority = each.key == "vm1" ? 1 : 2
}
(Note that the priority expression also needed fixing: each.key is already a string, so wrapping it in brackets would compare a list against "vm1".)
Please Note: The publicIpAddress must have a DNS name assigned to be used in a Traffic Manager profile.
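A sketch of how that DNS-name requirement could be met in the question's code, adding a domain_name_label to the existing azurerm_public_ip resource (the label value here is a hypothetical example):

```hcl
resource "azurerm_public_ip" "example" {
  for_each = var.locations
  name = "${each.key}-pip"
  location = each.value
  resource_group_name = var.azurerm_resource_group_name
  allocation_method = "Static"
  # Traffic Manager requires the public IP to carry a DNS name;
  # domain_name_label yields <label>.<region>.cloudapp.azure.com.
  domain_name_label = "tm-demo-${each.key}" # hypothetical label
  idle_timeout_in_minutes = 30
  tags = {
    environment = "dev01"
  }
}
```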
Also you can check this ARM Template which is equivalent to this terraform template.
Hope this helps!

Unable to remote-exec in AzureVM using Terraform

I want to copy a file and run some shell commands after the VM is created in Azure. I use the file and remote-exec provisioners and created the VM using SSH keys. Everything works fine until the file provisioner, where I get the following error:
Error: timeout - last error: dial tcp :22: connect: connection refused
When I do ssh -i id_rsa <username>@<ip_address> it works fine. I get this IP address from the Azure portal.
Here is my tf file:
resource "azurerm_resource_group" "myterraformgroup" {
  name = "terrafromresources"
  location = "eastus"
}

resource "azurerm_virtual_network" "myterraformnetwork" {
  name = "terraformvnet"
  address_space = ["10.0.0.0/16"]
  location = "eastus"
  resource_group_name = "${azurerm_resource_group.myterraformgroup.name}"
}

resource "azurerm_network_security_group" "myterraformnsg" {
  name = "terraformNetworkSecurityGroup"
  location = "eastus"
  resource_group_name = "${azurerm_resource_group.myterraformgroup.name}"
  security_rule {
    name = "SSH"
    priority = 100
    direction = "Inbound"
    access = "Allow"
    protocol = "Tcp"
    source_port_range = "*"
    destination_port_range = "22"
    source_address_prefix = "*"
    destination_address_prefix = "*"
  }
}

resource "azurerm_public_ip" "myterraformpublicip" {
  name = "myPublicIP"
  location = "eastus"
  resource_group_name = "${azurerm_resource_group.myterraformgroup.name}"
  allocation_method = "Dynamic"
}

resource "azurerm_linux_virtual_machine" "myterraformvm" {
  name = "terraformVM"
  location = "eastus"
  resource_group_name = "${azurerm_resource_group.myterraformgroup.name}"
  network_interface_ids = ["${azurerm_network_interface.myterraformnic.id}"]
  size = "Standard_DS1_v2"
  computer_name = "terrafromvm"
  admin_username = "azureuser"
  disable_password_authentication = true
  admin_ssh_key {
    username = "azureuser"
    public_key = "${file("id_rsa.pub")}"
  }
  connection {
    type = "ssh"
    user = "azureuser"
    host = "${azurerm_public_ip.myterraformpublicip.fqdn}"
    private_key = "${file("id_rsa")}"
    timeout = "5m"
  }
  provisioner "file" {
    source = "example_file.txt"
    destination = "/tmp/example_file.yml"
  }
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
    ]
  }
}
id_rsa and id_rsa.pub are in the same folder as the .tf file.
I also tried higher timeouts of 10m and 15m.
Thanks
This GitHub issue addresses the same problem as yours, and a proper explanation is provided there.
The fix for this problem is updating the allocation_method to "Static".
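A sketch of that fix against the question's code; the connection's host is switched to ip_address as well, since the public IP's fqdn attribute is only populated when a domain_name_label is configured (an assumption worth verifying for your setup):

```hcl
resource "azurerm_public_ip" "myterraformpublicip" {
  name = "myPublicIP"
  location = "eastus"
  resource_group_name = "${azurerm_resource_group.myterraformgroup.name}"
  allocation_method = "Static" # allocated immediately, unlike Dynamic
}

# In the VM's connection block, use the address rather than the fqdn:
#   host = "${azurerm_public_ip.myterraformpublicip.ip_address}"
```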
Hope this helps!

Creating a Neo4j VM with Terraform: Message="Creating a virtual machine from Marketplace image requires Plan information in the request"

The script I am using to create a VM from the Marketplace gives this error:
Error: Code="VMMarketplaceInvalidInput" Message="Creating a virtual machine from Marketplace image requires Plan information in the request. VM: '/subscriptions/bc8afca8-32ba-48ac-b418-77de827c2bc1/resourceGroups/NexxeNeo4j-rg/providers/Microsoft.Compute/virtualMachines/NexxeNeo4j4'."
provider "azurerm" {
  subscription_id = "**************************************"
  features {}
}

# Use existing resource group
data "azurerm_resource_group" "gepgroup1" {
  name = "NexxeNeo4j-rg"
}

# Use existing virtual network
data "azurerm_virtual_network" "gepnetwork1" {
  name = "DEVRnD"
  resource_group_name = "RnDdev"
}

# Use existing subnet
data "azurerm_subnet" "gepsubnet" {
  name = "subnet"
  resource_group_name = "RnDdev"
  virtual_network_name = data.azurerm_virtual_network.gepnetwork1.name
}

# Create public IPs NexxeNeo4j
resource "azurerm_public_ip" "geppublicip2" {
  name = "NexxeNeo4jPublicIP"
  location = "eastus"
  resource_group_name = "NexxeNeo4j-rg"
  allocation_method = "Dynamic"
  tags = {
    environment = "Dev-Direct"
  }
}

# Create network interface NexxeNeo4j2
resource "azurerm_network_interface" "gepnic3" {
  name = "NexxeNeo4jNIC"
  location = "eastus"
  resource_group_name = "NexxeNeo4j-rg"
  ip_configuration {
    name = "NexxeNeo4jConfiguration"
    subnet_id = data.azurerm_subnet.gepsubnet.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id = azurerm_public_ip.geppublicip2.id
  }
  tags = {
    environment = "Dev-Direct"
  }
}

# Create virtual machine NexxeNeo4j
resource "azurerm_virtual_machine" "gepvm4" {
  name = "NexxeNeo4j"
  location = "eastus"
  resource_group_name = "NexxeNeo4j-rg"
  network_interface_ids = [azurerm_network_interface.gepnic3.id]
  vm_size = "Standard_DS3_v2"
  plan {
    name = "neo4j_3_5_13_apoc"
    publisher = "neo4j"
    product = "neo4j-enterprise-3_5"
  }
  storage_os_disk {
    name = "NexxeNeo4j_OsDisk"
    caching = "ReadWrite"
    create_option = "FromImage"
    managed_disk_type = "Premium_LRS"
  }
  storage_image_reference {
    publisher = "neo4j"
    offer = "neo4j-enterprise-3_5"
    sku = "neo4j_3_5_13_apoc"
    version = "3.5.13"
  }
  os_profile {
    computer_name = "NexxeNeo4j"
    admin_username = "gep"
    admin_password = "Nexxegep#07066"
  }
  os_profile_linux_config {
    disable_password_authentication = false
  }
  tags = {
    environment = "Dev-Direct"
  }
}
You need to add a plan block to your Terraform HCL script, where name matches the image SKU and product matches the offer. Something similar to:
resource "azurerm_virtual_machine" "gepvm4" {
  # ...
  plan {
    publisher = "neo4j"
    name = "neo4j_3_5_13_apoc"
    product = "neo4j-enterprise-3_5"
  }
  # ...
}
I tried your configuration file in the Azure Cloud Shell. It worked once I ran this PowerShell command to accept the legal terms before running terraform apply again:
Get-AzMarketplaceTerms -Publisher neo4j -Product neo4j-enterprise-3_5 -Name neo4j_3_5_13_apoc | Set-AzMarketplaceTerms -Accept -SubscriptionId <subscription-id>
I suggest removing the terraform.tfstate and terraform.tfstate.backup files and running terraform init, plan, and apply again.
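If you prefer the Azure CLI over PowerShell, the equivalent (to the best of my knowledge; it requires an authenticated az session, so it cannot be run without Azure credentials) is:

```shell
# Accept the Marketplace legal terms for the Neo4j image plan.
az vm image terms accept \
  --publisher neo4j \
  --offer neo4j-enterprise-3_5 \
  --plan neo4j_3_5_13_apoc \
  --subscription <subscription-id>
```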
