I have created a virtual machine using the Terraform code below. Here is the VM code:
# demo instance
resource "azurerm_virtual_machine" "demo-instance" {
  name                  = "${var.prefix}-vm"
  location              = var.resource_group_location
  resource_group_name   = var.resource_group_name
  network_interface_ids = [azurerm_network_interface.demo-instance.id]
  vm_size               = "Standard_A1_v2"

  # this is a demo instance, so we can delete all data on termination
  delete_os_disk_on_termination    = true
  delete_data_disks_on_termination = true

  storage_image_reference {
    publisher = "RedHat"
    offer     = "RHEL"
    sku       = "7-RAW"
    version   = "7.5.2018042521"
  }

  storage_os_disk {
    name              = "RED-HAT-osdisk1"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name  = "MyOS"
    admin_username = "MyUsername"
    admin_password = "Password1234!"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }
}
resource "azurerm_network_interface" "demo-instance" {
name = "${var.prefix}-instance1"
location = var.resource_group_location
resource_group_name = var.resource_group_name
ip_configuration {
name = "instance1"
subnet_id = azurerm_subnet.demo-internal-1.id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.demo-instance.id
}
}
resource "azurerm_network_interface_security_group_association" "allow-ssh" {
network_interface_id = azurerm_network_interface.demo-instance.id
network_security_group_id = azurerm_network_security_group.allow-ssh.id
}
resource "azurerm_public_ip" "demo-instance" {
name = "instance1-public-ip"
location = var.resource_group_location
resource_group_name = var.resource_group_name
allocation_method = "Dynamic"
}
And here is the network config:
resource "azurerm_virtual_network" "demo" {
name = "${var.prefix}-network"
location = var.resource_group_location
resource_group_name = var.resource_group_name
address_space = ["10.0.0.0/16"]
}
resource "azurerm_subnet" "demo-internal-1" {
name = "${var.prefix}-internal-1"
resource_group_name = var.resource_group_name
virtual_network_name = azurerm_virtual_network.demo.name
address_prefixes = ["10.0.0.0/24"]
}
resource "azurerm_network_security_group" "allow-ssh" {
name = "${var.prefix}-allow-ssh"
location = var.resource_group_location
resource_group_name = var.resource_group_name
security_rule {
name = "SSH"
priority = 1001
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = var.ssh-source-address
destination_address_prefix = "*"
}
}
As a result, I am able to connect to the virtual machine using SSH. However, when I try to connect using RDP, I get an error (see the screenshot).
What I have tried:
I read this document and added an inbound rule to my network.
However, I am still not able to connect with RDP.
So far, I know my VM is reachable on the network and is running, because I can connect to it using SSH. But I still don't know why RDP does not work.
Since this is a Linux VM, you can only connect via SSH, even though you have allowed both 3389 and 22 in the NSG.
I see from the screenshot that you have allowed RDP traffic on the VM you are creating now. But the VM you are creating is a RHEL server, so you won't be able to RDP into it; you can only SSH. Only a Windows VM can be logged into using RDP.
If you want to log in to the RHEL server from a particular Windows jump box, that is possible: deploy a Windows VM with the RDP port open, and add one rule for the RHEL server where the source IP is the Windows VM. Then you can log in to the Windows VM as a bastion and SSH to the RHEL server from there. Let me know if this resolves your query.
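As a rough sketch of that second rule, reusing the variables from the question, a standalone azurerm_network_security_rule could be added to the existing allow-ssh NSG so only the jump box can reach port 22. The 10.0.1.4 source address is a hypothetical private IP for the Windows jump box:

resource "azurerm_network_security_rule" "ssh-from-bastion" {
  name                        = "SSH-from-bastion"
  priority                    = 1002
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "22"
  # hypothetical private IP of the Windows jump box
  source_address_prefix       = "10.0.1.4"
  destination_address_prefix  = "*"
  resource_group_name         = var.resource_group_name
  network_security_group_name = azurerm_network_security_group.allow-ssh.name
}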
I have an Azure web app whose URL I want to restrict so that it can be accessed exclusively through my application gateway. One option is to use "Access Restriction", but I would like to achieve this with a network security group, as that gives me more freedom and customisation since I have a lot of app services.
Using Terraform, I configured the application gateway, the app gateway subnet, and the app service subnet as follows:
resource "azurerm_virtual_network" "VNET" {
address_space = ["VNET-CIDR"]
location = var.location
name = "hri-prd-VNET"
resource_group_name = azurerm_resource_group.rg-hri-prd-eur-app-gate.name
}
resource "azurerm_subnet" "app-gate" {
name = "app-gateway-subnet"
resource_group_name = azurerm_resource_group.app-gate.name
virtual_network_name = azurerm_virtual_network.VNET.name
address_prefixes = ["SUBNET-CIDR"]
}
resource "azurerm_subnet" "app-service" {
name = "app-service-subnet"
resource_group_name = azurerm_resource_group.app-gate.name
virtual_network_name = azurerm_virtual_network.hri-prd-VNET.name
address_prefixes = ["APP_CIDR"]
delegation {
name = "app-service-delegation"
service_delegation {
name = "Microsoft.Web/serverFarms"
actions = ["Microsoft.Network/virtualNetworks/subnets/action"]
}
}
}
In my security group, I configured the mapping as follows:
resource "azurerm_network_security_group" "app-service-sg" {
location = var.app-service-loc
name = "app-service-sg"
resource_group_name = azurerm_resource_group.app-service.name
security_rule {
access = "Allow"
direction = "Inbound"
name = "application_gateway_access"
priority = 100
protocol = "Tcp"
destination_port_range = "80"
source_port_range = "*"
source_address_prefixes = ["app-gate-CIDR"]
destination_address_prefixes = ["app-service-CIDR"]
}
}
resource "azurerm_subnet_network_security_group_association" "app-service-assoc" {
network_security_group_id = azurerm_network_security_group.app-service-sg.id
subnet_id = azurerm_subnet.app-service.id
}
The configuration applies without any issue in Terraform, but I am still able to access the web app when I hit its URL directly.
What am I doing wrong at this stage? I would like to be able to reach the web app URL only through my application gateway.
Thank you so much for any help.
You have just created networks and security groups. You need to use Application Gateway integration with service endpoints, plus some further configuration on top of that.
Here is a diagram of how your solution should look:
https://learn.microsoft.com/en-us/azure/app-service/networking/app-gateway-with-service-endpoints
Create the App Service using Terraform code and add IP restrictions:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/app_service#ip_restriction
resource "azurerm_app_service_plan" "example" {
name = "example-app-service-plan"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
sku {
tier = "Standard"
size = "S1"
}
}
resource "azurerm_app_service" "example" {
name = "example-app-service"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
app_service_plan_id = azurerm_app_service_plan.example.id
site_config {
ip_restriction {
ip_address = "0.0.0.0"
}
}
Link the App Service to your network:

resource "azurerm_app_service_virtual_network_swift_connection" "example" {
  app_service_id = azurerm_app_service.example.id
  subnet_id      = azurerm_subnet.app-service.id
}
Create the access restriction using service endpoints (see the sketch after the link below):
https://learn.microsoft.com/en-us/azure/app-service/app-service-ip-restrictions#set-a-service-endpoint-based-rule
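As a minimal sketch of what that could look like, extending the app-gate subnet from the question: enable the Microsoft.Web service endpoint on the Application Gateway subnet, then restrict the App Service to that subnet via the ip_restriction block's virtual_network_subnet_id attribute. Resource names and CIDR placeholders are taken from the question; verify the details against the provider docs:

# enable the service endpoint on the Application Gateway subnet
resource "azurerm_subnet" "app-gate" {
  name                 = "app-gateway-subnet"
  resource_group_name  = azurerm_resource_group.app-gate.name
  virtual_network_name = azurerm_virtual_network.VNET.name
  address_prefixes     = ["SUBNET-CIDR"]
  service_endpoints    = ["Microsoft.Web"]
}

# allow inbound traffic only from that subnet
resource "azurerm_app_service" "example" {
  name                = "example-app-service"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  app_service_plan_id = azurerm_app_service_plan.example.id

  site_config {
    ip_restriction {
      virtual_network_subnet_id = azurerm_subnet.app-gate.id
    }
  }
}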
Below is the complete code that I am using to create the SQL virtual machine. While creating the resources I get the error mentioned below. I tried to debug by:
- pinning azurerm to a specific version,
- increasing the subscription's quota limit for the location.
It was working well previously and suddenly started throwing these errors.
# Database Server 1
provider "azurerm" {
  version  = "2.10"
  features {}
}

resource "azurerm_resource_group" "RG" {
  name     = "resource_db"
  location = var.location
}

resource "azurerm_virtual_network" "VN" {
  name                = "vnet_db"
  resource_group_name = azurerm_resource_group.RG.name
  location            = azurerm_resource_group.RG.location
  address_space       = ["10.10.0.0/16"]
}

resource "azurerm_subnet" "DBSN" {
  name                 = "snet_db"
  resource_group_name  = azurerm_resource_group.RG.name
  virtual_network_name = azurerm_virtual_network.VN.name
  address_prefixes     = ["10.10.2.0/24"]
}

resource "azurerm_public_ip" "DBAZPIP" {
  name                = "pip_db"
  resource_group_name = azurerm_resource_group.RG.name
  location            = azurerm_resource_group.RG.location
  allocation_method   = "Static"
}

resource "azurerm_network_security_group" "NSGDB" {
  name                = "nsg_db"
  location            = azurerm_resource_group.RG.location
  resource_group_name = azurerm_resource_group.RG.name

  # RDP
  security_rule {
    name                       = "RDP"
    priority                   = 300
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "3389"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }

  security_rule {
    name                       = "SQL"
    priority                   = 310
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "1433"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}

resource "azurerm_subnet_network_security_group_association" "mainDB" {
  subnet_id                 = azurerm_subnet.DBSN.id
  network_security_group_id = azurerm_network_security_group.NSGDB.id
}

resource "azurerm_network_interface" "vmnicprimary" {
  name                = "nic_db"
  location            = azurerm_resource_group.RG.location
  resource_group_name = azurerm_resource_group.RG.name

  ip_configuration {
    name                          = "ipConfig_db"
    subnet_id                     = azurerm_subnet.DBSN.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.DBAZPIP.id
  }
}

resource "azurerm_virtual_machine" "DatabaseServer" {
  name                  = "vm_db"
  location              = azurerm_resource_group.RG.location
  resource_group_name   = azurerm_resource_group.RG.name
  network_interface_ids = [azurerm_network_interface.vmnicprimary.id]
  vm_size               = "Standard_D4s_v3"

  storage_image_reference {
    publisher = "MicrosoftSQLServer"
    offer     = "SQL2017-WS2016"
    sku       = "Enterprise"
    version   = "latest"
  }

  storage_os_disk {
    name              = "osdisk_db"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Premium_LRS"
  }

  os_profile {
    computer_name  = "compdb"
    admin_username = "vmadmin"
    admin_password = "P#ssW0rd123456"
  }

  os_profile_windows_config {
    provision_vm_agent        = true
    enable_automatic_upgrades = true
  }
}

resource "azurerm_mssql_virtual_machine" "example" {
  virtual_machine_id    = azurerm_virtual_machine.DatabaseServer.id
  sql_license_type      = "PAYG"
  sql_connectivity_type = "PUBLIC"
}
Running the above code throws the following error:
Error: retrieving Sql Virtual Machine (Sql Virtual Machine Name "vm_m2m80" / Resource Group "resource_m2m80"): sqlvirtualmachine.SQLVirtualMachinesClient#Get: Failure responding to request: StatusCode=500 -- Original Error: autorest/azure: Service returned an error. Status=500 Code="InternalServerError" Message="An unexpected error occured while processing the request. Tracking ID: '9a1622b0-f7d1-4070-96c0-ca67d66a3522'"
on main.tf line 117, in resource "azurerm_mssql_virtual_machine" "example":
117: resource "azurerm_mssql_virtual_machine" "example" {
TL;DR: It has been fixed!
Update from Microsoft: the fix has been released.
"Hope this finds you well. We have confirmed internally, there will be a fix for this issue soon. I will update you once it is deployed."
We have the same thing, failing on every single build, using various Terraform and Azure API versions; this started happening two days ago for us. When trying to import to state, it times out as well:
Error: reading Sql Virtual Machine (Sql Virtual Machine Name "sqlvmname" / Resource Group "resource group"): sqlvirtualmachine.SQLVirtualMachinesClient#Get: Failure sending request: StatusCode=500 -- Original Error: context deadline exceeded
I believe this is an API issue. We engaged Microsoft Support and they were able to reproduce the issue using this page (thank you :) ). They are checking internally and engaging more resources at Microsoft to look into it. In the meantime, I don't think there is anything that can be done.
One possible workaround, seeing as this actually does create the resource in Azure, may be to create it using Terraform and then comment out your code; since it's not in state, Terraform won't delete it. Not pretty.
Problem statement
I am in the process of creating an Azure VM cluster running Windows. So far I can create an Azure file share and the Azure Windows cluster. I want to attach the file share to each VM in my cluster, but I am unable to find a reference for how to do this on a Windows VM.
Code for this:
resource "azurerm_storage_account" "main" {
name = "stor${var.environment}${var.cost_centre}${var.project}"
location = "${azurerm_resource_group.main.location}"
resource_group_name = "${azurerm_resource_group.main.name}"
account_tier = "${var.storage_account_tier}"
account_replication_type = "${var.storage_replication_type}"
}
resource "azurerm_storage_share" "main" {
name = "storageshare${var.environment}${var.cost_centre}${var.project}"
resource_group_name = "${azurerm_resource_group.main.name}"
storage_account_name = "${azurerm_storage_account.main.name}"
quota = "${var.storage_share_quota}"
}
resource "azurerm_virtual_machine" "vm" {
name = "vm-${var.location_id}-${var.environment}-${var.cost_centre}-${var.project}-${var.seq_id}-${count.index}"
location = "${azurerm_resource_group.main.location}"
resource_group_name = "${azurerm_resource_group.main.name}"
availability_set_id = "${azurerm_availability_set.main.id}"
vm_size = "${var.vm_size}"
network_interface_ids = ["${element(azurerm_network_interface.main.*.id, count.index)}"]
count = "${var.vm_count}"
storage_image_reference {
publisher = "${var.image_publisher}"
offer = "${var.image_offer}"
sku = "${var.image_sku}"
version = "${var.image_version}"
}
storage_os_disk {
name = "osdisk${count.index}"
create_option = "FromImage"
}
os_profile {
computer_name = "${var.vm_name}-${count.index}"
admin_username = "${var.admin_username}"
admin_password = "${var.admin_password}"
}
os_profile_windows_config {}
depends_on = ["azurerm_network_interface.main"]
}
Azure doesn't offer anything like that, so you cannot do it natively. You need to create a script and run it on the VM using the Custom Script extension or the DSC extension, or, where supported, directly from Terraform; a rough sketch follows the link below.
https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/custom-script-linux
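For illustration, here is a minimal sketch of that approach, using the Custom Script extension to map the file share from the question with net use. It assumes the older azurerm provider syntax used in the question (newer provider versions take virtual_machine_id instead of virtual_machine_name/location/resource_group_name), and in real code the storage account key belongs in protected_settings rather than settings:

resource "azurerm_virtual_machine_extension" "mount_share" {
  count                = "${var.vm_count}"
  name                 = "mount-share-${count.index}"
  location             = "${azurerm_resource_group.main.location}"
  resource_group_name  = "${azurerm_resource_group.main.name}"
  virtual_machine_name = "${element(azurerm_virtual_machine.vm.*.name, count.index)}"
  publisher            = "Microsoft.Compute"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.10"

  # maps the Azure file share as drive Z: on each VM; move the account key
  # into protected_settings for anything beyond a quick test
  settings = <<SETTINGS
{
  "commandToExecute": "net use Z: \\\\${azurerm_storage_account.main.name}.file.core.windows.net\\${azurerm_storage_share.main.name} /u:AZURE\\${azurerm_storage_account.main.name} ${azurerm_storage_account.main.primary_access_key}"
}
SETTINGS
}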
I have been looking through the terraform.io docs and it's not really clear.
I know how to add a VM to an LB through the Azure portal; I am just trying to figure out how to do it with Terraform.
I do not see an option in azurerm_availability_set or azurerm_lb to add a VM.
Please let me know if anyone has any ideas.
I'd take a look at this example I created. After you've created the LB, when creating each NIC, make sure you add a back-reference to the LB:
load_balancer_backend_address_pool_ids = ["${azurerm_lb_backend_address_pool.webservers_lb_backend.id}"]
Terraform load balanced server
resource "azurerm_lb_backend_address_pool" "backend_pool" {
resource_group_name = "${azurerm_resource_group.rg.name}"
loadbalancer_id = "${azurerm_lb.lb.id}"
name = "BackendPool1"
}
resource "azurerm_lb_nat_rule" "tcp" {
resource_group_name = "${azurerm_resource_group.rg.name}"
loadbalancer_id = "${azurerm_lb.lb.id}"
name = "RDP-VM-${count.index}"
protocol = "tcp"
frontend_port = "5000${count.index + 1}"
backend_port = 3389
frontend_ip_configuration_name = "LoadBalancerFrontEnd"
count = 2
}
You can get the whole file at this link. I think the code above is the most important thing. For more details about the Load Balancer NAT rule, see azurerm_lb_nat_rule.
Might be late to answer this, but here goes. Once you have created the LB and the VM, you can use this snippet to associate the NIC with the LB backend pool:
resource "azurerm_network_interface_backend_address_pool_association" "vault" {
network_interface_id = "${azurerm_network_interface.nic.id}"
ip_configuration_name = "nic_ip_config"
backend_address_pool_id = "${azurerm_lb_backend_address_pool.nic.id}"
}
Ensure that the VMs sit in an availability set; otherwise you won't be able to register the VMs with the LB.
I think this is what you need. You then need to create the association between the network interface and the virtual machine through the subnet.
All of the answers here appear to be outdated. As of 2021-08-22, azurerm_network_interface_nat_rule_association should be used:
resource "azurerm_lb_nat_rule" "example" {
resource_group_name = azurerm_resource_group.example.name
loadbalancer_id = azurerm_lb.example.id
name = "RDPAccess"
protocol = "Tcp"
frontend_port = 3389
backend_port = 3389
frontend_ip_configuration_name = "primary"
}
resource "azurerm_network_interface" "example" {
name = "example-nic"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.example.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_network_interface_nat_rule_association" "example" {
network_interface_id = azurerm_network_interface.example.id
ip_configuration_name = "testconfiguration1"
nat_rule_id = azurerm_lb_nat_rule.example.id
}
I have successfully created a VM as part of a resource group on Azure using Terraform. The next step is to SSH into the new machine and run a few commands. For that, I have created a provisioner as part of the VM resource and set up an SSH connection:
resource "azurerm_virtual_machine" "helloterraformvm" {
name = "terraformvm"
location = "West US"
resource_group_name = "${azurerm_resource_group.helloterraform.name}"
network_interface_ids = ["${azurerm_network_interface.helloterraformnic.id}"]
vm_size = "Standard_A0"
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "14.04.2-LTS"
version = "latest"
}
os_profile {
computer_name = "hostname"
user = "some_user"
password = "some_password"
}
os_profile_linux_config {
disable_password_authentication = false
}
provisioner "remote-exec" {
inline = [
"sudo apt-get install docker.io -y"
]
connection {
type = "ssh"
user = "some_user"
password = "some_password"
}
}
}
If I run "terraform apply", it seems to get into an infinite loop trying to ssh unsuccessfully, repeating this log over and over:
azurerm_virtual_machine.helloterraformvm (remote-exec): Connecting to remote host via SSH...
azurerm_virtual_machine.helloterraformvm (remote-exec): Host:
azurerm_virtual_machine.helloterraformvm (remote-exec): User: testadmin
azurerm_virtual_machine.helloterraformvm (remote-exec): Password: true
azurerm_virtual_machine.helloterraformvm (remote-exec): Private key: false
azurerm_virtual_machine.helloterraformvm (remote-exec): SSH Agent: true
I'm sure I'm doing something wrong, but I don't know what it is :(
EDIT:
I have tried setting up this machine without the provisioner, and I can SSH into it with no problems using the given username/password. However, I need to look up the host name in the Azure portal, because I don't know how to retrieve it from Terraform. It's suspicious that the "Host:" line in the log is empty, so I wonder if it has anything to do with that?
UPDATE:
I've tried different things, like indicating the host name in the connection with
host = "${azurerm_public_ip.helloterraformip.id}"
and
host = "${azurerm_public_ip.helloterraformips.ip_address}"
as indicated in the docs, but with no success.
I've also tried using SSH keys instead of a password, but I get the same result: an infinite loop of connection attempts, with no clear error message as to why it's not connecting.
I have managed to make this work. I changed several things:
- Gave the name of the host to the connection.
- Configured the SSH keys properly; they need to be unencrypted.
- Took the connection element out of the provisioner element.
Here's the full working Terraform file, with data like the SSH keys replaced:
# Configure Azure provider
provider "azurerm" {
  subscription_id = "${var.azure_subscription_id}"
  client_id       = "${var.azure_client_id}"
  client_secret   = "${var.azure_client_secret}"
  tenant_id       = "${var.azure_tenant_id}"
}

# create a resource group if it doesn't exist
resource "azurerm_resource_group" "rg" {
  name     = "sometestrg"
  location = "ukwest"
}

# create virtual network
resource "azurerm_virtual_network" "vnet" {
  name                = "tfvnet"
  address_space       = ["10.0.0.0/16"]
  location            = "ukwest"
  resource_group_name = "${azurerm_resource_group.rg.name}"
}

# create subnet
resource "azurerm_subnet" "subnet" {
  name                 = "tfsub"
  resource_group_name  = "${azurerm_resource_group.rg.name}"
  virtual_network_name = "${azurerm_virtual_network.vnet.name}"
  address_prefix       = "10.0.2.0/24"
  #network_security_group_id = "${azurerm_network_security_group.nsg.id}"
}

# create public IP
resource "azurerm_public_ip" "ip" {
  name                         = "tfip"
  location                     = "ukwest"
  resource_group_name          = "${azurerm_resource_group.rg.name}"
  public_ip_address_allocation = "dynamic"
  domain_name_label            = "sometestdn"

  tags {
    environment = "staging"
  }
}

# create network interface
resource "azurerm_network_interface" "ni" {
  name                = "tfni"
  location            = "ukwest"
  resource_group_name = "${azurerm_resource_group.rg.name}"

  ip_configuration {
    name                          = "ipconfiguration"
    subnet_id                     = "${azurerm_subnet.subnet.id}"
    private_ip_address_allocation = "static"
    private_ip_address            = "10.0.2.5"
    public_ip_address_id          = "${azurerm_public_ip.ip.id}"
  }
}

# create storage account
resource "azurerm_storage_account" "storage" {
  name                = "someteststorage"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  location            = "ukwest"
  account_type        = "Standard_LRS"

  tags {
    environment = "staging"
  }
}

# create storage container
resource "azurerm_storage_container" "storagecont" {
  name                  = "vhd"
  resource_group_name   = "${azurerm_resource_group.rg.name}"
  storage_account_name  = "${azurerm_storage_account.storage.name}"
  container_access_type = "private"
  depends_on            = ["azurerm_storage_account.storage"]
}

# create virtual machine
resource "azurerm_virtual_machine" "vm" {
  name                  = "sometestvm"
  location              = "ukwest"
  resource_group_name   = "${azurerm_resource_group.rg.name}"
  network_interface_ids = ["${azurerm_network_interface.ni.id}"]
  vm_size               = "Standard_A0"

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04-LTS"
    version   = "latest"
  }

  storage_os_disk {
    name          = "myosdisk"
    vhd_uri       = "${azurerm_storage_account.storage.primary_blob_endpoint}${azurerm_storage_container.storagecont.name}/myosdisk.vhd"
    caching       = "ReadWrite"
    create_option = "FromImage"
  }

  os_profile {
    computer_name  = "testhost"
    admin_username = "testuser"
    admin_password = "Password123"
  }

  os_profile_linux_config {
    disable_password_authentication = false

    ssh_keys = [{
      path     = "/home/testuser/.ssh/authorized_keys"
      key_data = "ssh-rsa xxx email@something.com"
    }]
  }

  connection {
    host        = "sometestdn.ukwest.cloudapp.azure.com"
    user        = "testuser"
    type        = "ssh"
    private_key = "${file("~/.ssh/id_rsa_unencrypted")}"
    timeout     = "1m"
    agent       = true
  }

  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install docker.io -y",
      "git clone https://github.com/somepublicrepo.git",
      "cd Docker-sample",
      "sudo docker build -t mywebapp .",
      "sudo docker run -d -p 5000:5000 mywebapp"
    ]
  }

  tags {
    environment = "staging"
  }
}
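As a side note, instead of hardcoding the host name, the connection block could reference the fqdn attribute exported by the azurerm_public_ip resource; a minimal sketch based on the resources above:

connection {
  # resolves to sometestdn.ukwest.cloudapp.azure.com via the domain_name_label above
  host        = "${azurerm_public_ip.ip.fqdn}"
  user        = "testuser"
  type        = "ssh"
  private_key = "${file("~/.ssh/id_rsa_unencrypted")}"
  timeout     = "1m"
}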
According to your description, the Azure Custom Script Extension is an option for you.
The Custom Script Extension downloads and executes scripts on Azure virtual machines. This extension is useful for post-deployment configuration, software installation, or any other configuration / management task.
Remove the provisioner "remote-exec" block and use the following instead:
resource "azurerm_virtual_machine_extension" "helloterraformvm" {
name = "hostname"
location = "West US"
resource_group_name = "${azurerm_resource_group.helloterraformvm.name}"
virtual_machine_name = "${azurerm_virtual_machine.helloterraformvm.name}"
publisher = "Microsoft.OSTCExtensions"
type = "CustomScriptForLinux"
type_handler_version = "1.2"
settings = <<SETTINGS
{
"commandToExecute": "apt-get install docker.io -y"
}
SETTINGS
}
Note: the command is executed by the root user, so don't use sudo.
For more information, please refer to this link: azurerm_virtual_machine_extension.
For a list of possible extensions, you can use the Azure CLI command az vm extension image list -o table
Update: the above example only supports a single command. If you need multiple commands, for example to install Docker on your VM, you need:
apt-get update
apt-get install docker.io -y
Save them as a file named script.sh and upload it to an Azure Storage account or GitHub (the file should be public). Then modify the Terraform file like below:
settings = <<SETTINGS
{
  "fileUris": ["https://gist.githubusercontent.com/Walter-Shui/dedb53f71da126a179544c91d267cdce/raw/bb3e4d90e3291530570eca6f4ff7981fdcab695c/script.sh"],
  "commandToExecute": "sh script.sh"
}
SETTINGS
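Alternatively, for just a couple of commands, chaining them with && in a single commandToExecute should also work (a sketch, untested):

settings = <<SETTINGS
{
  "commandToExecute": "apt-get update && apt-get install docker.io -y"
}
SETTINGS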