I have an environment in Azure with a VNet configuration and resources.
I want to automate the deployment of an Azure Windows VM into this existing VNet configuration using Terraform.
First of all, if you declare the existing VNet as a new resource in your Terraform code, it will fail with the error "Resource already exists". To resolve this, run terraform import to import the resource into the Terraform state file so that Terraform manages it from then on.
The import command, per the Terraform Registry documentation:
terraform import azurerm_virtual_network.existing /subscriptions/<subscriptionID>/resourceGroups/<resourcegroup>/providers/Microsoft.Network/virtualNetworks/<vnet>
When deploying against existing resources in Terraform, you should include a data block. Check the complete script below; it deployed successfully.
data "azurerm_virtual_network" "existing" {
  name                = "jahnavivnet"
  resource_group_name = "example-resources"
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = "West Europe"
}

resource "azurerm_virtual_network" "existing" {
  name                = "jahnavivnet"
  address_space       = ["10.0.0.0/16"]
  resource_group_name = "example-resources"
  location            = "West Europe"
}

resource "azurerm_subnet" "existing" {
  name                 = "default"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.existing.name
  address_prefixes     = ["10.0.2.0/24"]
}

resource "azurerm_network_interface" "example" {
  name                = "NICxxx"
  location            = "West Europe"
  resource_group_name = azurerm_resource_group.example.name

  ip_configuration {
    name                          = "default"
    subnet_id                     = azurerm_subnet.existing.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_windows_virtual_machine" "example" {
  name                = "xxxxxnameofVM"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  size                = "Standard_F2"
  admin_username      = "user"
  admin_password      = "xxxx"
  network_interface_ids = [
    azurerm_network_interface.example.id,
  ]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2019-Datacenter"
    version   = "latest"
  }
}
Executed terraform init, terraform plan, and terraform apply --auto-approve.
The virtual machine deployed with the existing virtual network configuration.
Refer to the Terraform templates for more details.
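Alternatively, if the VNet should stay managed outside this configuration entirely, a data source alone can reference it; a minimal sketch, assuming the same VNet name and resource group as above:

```hcl
# Sketch: reference the existing VNet via data sources only, so
# Terraform never tries to create or own it
data "azurerm_virtual_network" "existing" {
  name                = "jahnavivnet"
  resource_group_name = "example-resources"
}

data "azurerm_subnet" "existing" {
  name                 = "default"
  virtual_network_name = data.azurerm_virtual_network.existing.name
  resource_group_name  = "example-resources"
}

resource "azurerm_network_interface" "example" {
  name                = "NICxxx"
  location            = data.azurerm_virtual_network.existing.location
  resource_group_name = "example-resources"

  ip_configuration {
    name                          = "default"
    subnet_id                     = data.azurerm_subnet.existing.id
    private_ip_address_allocation = "Dynamic"
  }
}
```

With this approach no terraform import is needed, because the VNet and subnet never enter the state as managed resources.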
I have an issue creating a VM on Azure using Terraform.
We have a policy restricting certain VM sizes for our subscription, but we created an exemption for a specific resource group.
I can create VM with the wanted size using my ServicePrincipal and with the following command:
$ az login --service-principal -u ... -p ... --tenant ...
$ az vm create --resource-group ... --name ... --image ... --admin-username ... --generate-ssh-keys --location ... --size ...
The VM is created successfully with the wanted size.
But, when I try to create the VM using Terraform, with the same VM size, I'm getting the following error:
level=error msg=Error: creating Linux Virtual Machine "..." (Resource Group "..."): compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status= Code="SkuNotAvailable" Message="The requested size for resource '/subscriptions/.../resourceGroups/.../providers/Microsoft.Compute/virtualMachines/...' is currently not available in location '...' zones '...' for subscription '...'. Please try another size or deploy to a different location or zones. See https://aka.ms/azureskunotavailable for details."
After running
az vm list-skus --location ... --size ... --all --output table
The output for the wanted size is:
restrictions
---
NotAvailableForSubscription, type: Zone, locations: ..., zones: 1,2,3
It looks like the size is unavailable, but using the CLI or the Azure portal, I'm able to create a VM with this size.
Terraform runs with the same service principal as the CLI command, in the same subscription, tenant, and resource group.
Do you have an idea what can cause this problem creating the VM using terraform?
Thanks
Here is the Terraform script to create a VM with the specified configuration:
location = "East US"
vm_size = "Standard_NC12s_v3"
Step 1: Copy the code below into the main.tf file.
provider "azurerm" {
  features {}
}

variable "prefix" {
  default = "rg_swarna"
}

resource "azurerm_resource_group" "example" {
  name     = "${var.prefix}-resources"
  location = "East US"
}

resource "azurerm_virtual_network" "main" {
  name                = "${var.prefix}-network"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}
resource "azurerm_subnet" "internal" {
  name                 = "internal"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.main.name
  address_prefixes     = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "main" {
  name                = "${var.prefix}-nic"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  ip_configuration {
    name                          = "testconfiguration1"
    subnet_id                     = azurerm_subnet.internal.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_virtual_machine" "main" {
  name                  = "${var.prefix}-vm"
  location              = azurerm_resource_group.example.location
  resource_group_name   = azurerm_resource_group.example.name
  network_interface_ids = [azurerm_network_interface.main.id]
  //vm_size             = "Standard_DS1_v2"
  vm_size               = "Standard_NC12s_v3"

  # Uncomment this line to delete the OS disk automatically when deleting the VM
  # delete_os_disk_on_termination = true

  # Uncomment this line to delete the data disks automatically when deleting the VM
  # delete_data_disks_on_termination = true

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04-LTS"
    version   = "latest"
  }

  storage_os_disk {
    name              = "myosdisk1"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name  = "hostname"
    admin_username = "testadmin"
    admin_password = "Password1234!"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  tags = {
    environment = "staging"
  }
}
Step 2: Run the commands below:
terraform plan
terraform apply -auto-approve
Step 3: Verify the results in the Azure portal.
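Given the NotAvailableForSubscription, type: Zone restriction shown by az vm list-skus, one possible cause is the VM being pinned to an availability zone, while az vm create deploys regionally by default. A hedged sketch (assuming the resource group and NIC from the script above) is to simply not set any zone on the VM:

```hcl
# Sketch: if the SKU restriction is zone-scoped, a regional deployment
# (no zone pinned) may succeed, matching the default `az vm create`
# behavior. Assumes azurerm_resource_group.example and
# azurerm_network_interface.main from the script above.
resource "azurerm_linux_virtual_machine" "example" {
  name                            = "example-vm"
  resource_group_name             = azurerm_resource_group.example.name
  location                        = azurerm_resource_group.example.location
  size                            = "Standard_NC12s_v3"
  admin_username                  = "testadmin"
  admin_password                  = "Password1234!"
  disable_password_authentication = false
  network_interface_ids           = [azurerm_network_interface.main.id]

  # No `zone` argument: the platform picks placement instead of
  # requiring the SKU to be available in a specific zone
  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04-LTS"
    version   = "latest"
  }
}
```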
I want to assign a user-assigned managed identity to the VMSS created in the MC resource group so that all the pods created in K8s have access to the associated Key Vault.
I have done it through a PowerShell script. Here's the script:
$aksNodeVmss = Get-AzVmss -ResourceGroupName "$aksMcRg"
Update-AzVmss -ResourceGroupName $aksMcRg -Name $aksNodeVmss.Name -IdentityType UserAssigned -IdentityID $id
But I want to do it in Terraform, and I'm unable to find a solution.
The VMSS identity is the kubelet identity of your node pool. AKS nowadays supports "bring your own" kubelet identity when creating the cluster, so there is no need to update the identities afterwards.
resource "azurerm_user_assigned_identity" "kubelet" {
  name                = "uai-kubelet"
  location            = <YOUR_LOCATION>
  resource_group_name = <YOUR_RG>
}

resource "azurerm_user_assigned_identity" "aks" {
  name                = "uai-aks"
  location            = <YOUR_LOCATION>
  resource_group_name = <YOUR_RG>
}

# This can also be a custom role with
# Microsoft.ManagedIdentity/userAssignedIdentities/assign/action allowed
resource "azurerm_role_assignment" "this" {
  scope                = <YOUR_RG> # must be the resource group's full resource ID
  role_definition_name = "Managed Identity Operator"
  principal_id         = azurerm_user_assigned_identity.aks.principal_id
}
Then assign the identity to the kubelet:
resource "azurerm_kubernetes_cluster" "aks" {
  ...

  identity {
    type         = "UserAssigned"
    identity_ids = [azurerm_user_assigned_identity.aks.id]
  }

  kubelet_identity {
    client_id                 = azurerm_user_assigned_identity.kubelet.client_id
    object_id                 = azurerm_user_assigned_identity.kubelet.principal_id
    user_assigned_identity_id = azurerm_user_assigned_identity.kubelet.id
  }
}
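To complete the original goal of giving the pods Key Vault access, the kubelet identity can then be granted a role on the vault. A sketch, assuming an existing, RBAC-enabled azurerm_key_vault.example (that resource name is hypothetical):

```hcl
# Sketch: grant the kubelet identity read access to secrets in an
# existing, RBAC-enabled Key Vault (azurerm_key_vault.example is an
# assumed, pre-existing resource)
resource "azurerm_role_assignment" "kubelet_kv" {
  scope                = azurerm_key_vault.example.id
  role_definition_name = "Key Vault Secrets User"
  principal_id         = azurerm_user_assigned_identity.kubelet.principal_id
}
```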
I would like to create ADF and a storage account using Terraform, which I know how to do. After this I want to give the ADF identity access to the storage account. I can do this using PowerShell, but that introduces idempotency issues. Is it possible to implement the access assignment with Terraform itself, without using PowerShell?
You should create an azurerm_role_assignment to grant ADF access to the Azure Storage account.
Check the example below; this snippet assigns the Storage Blob Data Reader role to the ADF managed identity.
resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = "West Europe"
}

resource "azurerm_data_factory" "example" {
  name                = "example524657"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  identity {
    type = "SystemAssigned"
  }
}

resource "azurerm_storage_account" "example" {
  name                     = "examplestr524657"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "RAGRS"
}

resource "azurerm_role_assignment" "example" {
  scope                = azurerm_storage_account.example.id
  role_definition_name = "Storage Blob Data Reader"
  principal_id         = azurerm_data_factory.example.identity[0].principal_id
}
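With the role in place, the factory can authenticate to the storage account with its managed identity rather than an account key, for example in a blob storage linked service. A sketch; the resource and argument names below are to the best of my knowledge from the azurerm provider, so verify them against your provider version:

```hcl
# Sketch: a blob storage linked service that authenticates with the
# factory's system-assigned identity instead of an account key
resource "azurerm_data_factory_linked_service_azure_blob_storage" "example" {
  name                 = "example-blob-ls"
  data_factory_id      = azurerm_data_factory.example.id
  use_managed_identity = true
  service_endpoint     = azurerm_storage_account.example.primary_blob_endpoint
}
```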
I am provisioning a Windows VM using Terraform v0.12.9 on Azure. On that VM I want to perform the tasks below using Terraform, basically to avoid having to RDP to the VM and execute scripts manually.
1. Enable PSRemoting
2. Create a new FirewallRule
3. Create a SelfSignedCertificate
I have a vm_provisioning.tf as follows:
resource "azurerm_virtual_machine" "vm" {
  count                 = var.env == "dev" ? 0 : 1
  name                  = var.vm_name
  location              = azurerm_resource_group.rg.location
  resource_group_name   = azurerm_resource_group.rg.name
  network_interface_ids = [azurerm_network_interface.network-interface[count.index].id]
  vm_size               = "Standard_D13_v2"

  storage_image_reference {
    publisher = "MicrosoftWindowsDesktop"
    offer     = "Windows-10"
    sku       = "rs4-pro"
    version   = "latest"
  }

  storage_os_disk {
    name              = "Primary-disk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
    disk_size_gb      = "127"
  }

  os_profile {
    computer_name  = var.vm_name
    admin_username = var.vm-username
    admin_password = random_password.vm_password.result
  }

  os_profile_windows_config {
  }

  provisioner "remote-exec" {
    connection {
      host     = element(azurerm_public_ip.PublicIP.*.ip_address, count.index)
      type     = "winrm"
      user     = var.vm-username
      password = random_password.vm_password.result
      agent    = "false"
      insecure = "true"
    }

    inline = [
      "powershell.exe Set-ExecutionPolicy Bypass -force",
      "powershell.exe $DNSName = $env:COMPUTERNAME",
      "powershell.exe Enable-PSRemoting -Force",
      "powershell.exe New-NetFirewallRule -Name 'WinRM HTTPS' -DisplayName 'WinRM HTTPS' -Enabled True -Profile 'Any' -Action 'Allow' -Direction 'Inbound' -LocalPort 5986 -Protocol 'TCP'",
      "powershell.exe $thumbprint = (New-SelfSignedCertificate -DnsName $DNSName -CertStoreLocation Cert:/LocalMachine/My).Thumbprint",
      "powershell.exe $cmd = 'winrm create winrm/config/Listener?Address=*+Transport=HTTPS @{Hostname=''$DNSName''; CertificateThumbprint=''$thumbprint''}'",
      "powershell.exe cmd.exe /C $cmd"
    ]
  }
}
I tried with azurerm_virtual_machine_extension as well.
resource "azurerm_virtual_machine_extension" "winrm" {
  name                 = var.name
  location             = azurerm_resource_group.rg.location
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_machine_name = var.vm_name
  publisher            = "Microsoft.Compute" # Windows CustomScriptExtension publisher
  type                 = "CustomScriptExtension"
  type_handler_version = "1.10"

  settings = <<SETTINGS
    {
      "commandToExecute": "hostname && uptime"
    }
SETTINGS
}
With azurerm_virtual_machine_extension I am getting below error.
##[error]Terraform command 'apply' failed with exit code '1'.: compute.VirtualMachineExtensionsClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="This operation cannot be performed when extension operations are disallowed. To allow, please ensure VM Agent is installed on the VM and the osProfile.allowExtensionOperations property is true."
According to the error message, you need to include an os_profile_windows_config block. It supports the following:
provision_vm_agent - (Optional) Should the Azure Virtual Machine Guest
Agent be installed on this Virtual Machine? Defaults to false.
os_profile_windows_config {
  provision_vm_agent = true
}
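For the remote-exec-over-WinRM path, the same block can also declare a WinRM listener. A minimal sketch; note that an HTTPS listener additionally needs a certificate_url pointing at a Key Vault certificate secret:

```hcl
# Sketch: os_profile_windows_config with the VM agent enabled and a
# WinRM listener declared, as used by azurerm_virtual_machine
os_profile_windows_config {
  provision_vm_agent = true

  winrm {
    # HTTP listener; for HTTPS, also set certificate_url to a
    # Key Vault certificate secret
    protocol = "HTTP"
  }
}
```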
Edit
This example provisions a Virtual Machine running Windows Server 2016 with a Public IP Address and runs a remote-exec provisioner via WinRM.
main.tf
locals {
  custom_data_params  = "Param($ComputerName = \"${local.virtual_machine_name}\")"
  custom_data_content = "${local.custom_data_params} ${file("./files/winrm.ps1")}"
}
winrm.ps1
I'm trying to automate some processes in our project, which include steps like creating the VM, connecting to the newly created VM, and running some commands remotely.
Previously, I used to run these commands in sequence manually.
1. Create a VM:
New-AzureRmResourceGroupDeployment -Name VmDeployment -ResourceGroupName XYZ `
  -TemplateFile "C:\Templates\template.json" `
  -TemplateParameterFile "C:\Templates\parameters.json"
2. Connect to the VM:
Set-Item WSMan:\localhost\Client\TrustedHosts -Value 100.9.4.12
$UserName = "100.9.4.12\admin"
$Password = ConvertTo-SecureString "admin#123" -AsPlainText -Force
$psCred = New-Object System.Management.Automation.PSCredential($UserName, $Password)
$s = New-PSSession -ComputerName 100.9.4.12 -Credential $psCred
Invoke-Command -Session $s -ScriptBlock {Get-Service 'ServiceName'}
In this flow, the IP address is used to add the VM to the trusted hosts on the client. I used to check the generated IP address in the Azure portal, replace that IP in the commands, and run them manually. But now, since I'm automating, there will be no manual intervention in the process.
So how should I retrieve the IP address of the newly created VM?
Not directly related to your question, but have you thought about using Terraform to automate the creation of your resources? http://terraform.io
Terraform is very similar to ARM (only much nicer). Here's an example of a VM creation and the public IP export:
resource "azurerm_resource_group" "main" {
  name     = "test-resources"
  location = "West US 2"
}

resource "azurerm_virtual_network" "main" {
  name                = "test-network"
  address_space       = ["10.0.0.0/16"]
  location            = "${azurerm_resource_group.main.location}"
  resource_group_name = "${azurerm_resource_group.main.name}"
}

resource "azurerm_subnet" "internal" {
  name                 = "internal"
  resource_group_name  = "${azurerm_resource_group.main.name}"
  virtual_network_name = "${azurerm_virtual_network.main.name}"
  address_prefix       = "10.0.2.0/24"
}

resource "azurerm_public_ip" "test" {
  name                         = "test-public-ip"
  location                     = "${azurerm_resource_group.main.location}"
  resource_group_name          = "${azurerm_resource_group.main.name}"
  public_ip_address_allocation = "dynamic"

  tags {
    environment = "production"
  }
}

resource "azurerm_network_interface" "main" {
  name                = "test-nic"
  location            = "${azurerm_resource_group.main.location}"
  resource_group_name = "${azurerm_resource_group.main.name}"

  ip_configuration {
    name                          = "testconfiguration1"
    subnet_id                     = "${azurerm_subnet.internal.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id          = "${azurerm_public_ip.test.id}"
  }
}

resource "azurerm_virtual_machine" "main" {
  name                  = "test-vm"
  location              = "${azurerm_resource_group.main.location}"
  resource_group_name   = "${azurerm_resource_group.main.name}"
  network_interface_ids = ["${azurerm_network_interface.main.id}"]
  vm_size               = "Standard_DS1_v2"

  # Uncomment this line to delete the OS disk automatically when deleting the VM
  # delete_os_disk_on_termination = true

  # Uncomment this line to delete the data disks automatically when deleting the VM
  # delete_data_disks_on_termination = true

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04-LTS"
    version   = "latest"
  }

  storage_os_disk {
    name              = "myosdisk1"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name  = "hostname"
    admin_username = "testadmin"
    admin_password = "Password1234!"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  tags {
    environment = "production"
  }
}

output "vm-ip" {
  value = "${azurerm_public_ip.test.ip_address}"
}
Why do you care about the IP address? Just use the DNS name; you always know that one (since you can define it when you create the VM). Another option is to output the IP address as part of the ARM template.
or query the ip address in powershell:
Get-AzureRmVM -ResourceGroupName 'HSG-ResourceGroup' -Name 'HSG-LinuxVM' | Get-AzureRmPublicIpAddress
You can use PowerShell to get all the private IPs of the VMs. The command looks like this:
Get-AzureRmNetworkInterface -ResourceGroupName resourceGroupName | Select-Object {$_.IpConfigurations.PrivateIpAddress}
Update
Also, you can capture the result in a variable like this:
$vm = Get-AzureRmNetworkInterface -ResourceGroupName charles | Select-Object -Property @{Name="Ip"; Expression = {$_.IpConfigurations.PrivateIpAddress}}
$vm[0].Ip
Then you will only get the IP address.
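The same private IP can also be exposed from Terraform itself, avoiding the PowerShell round-trip. A sketch, assuming the azurerm_network_interface.main resource from the Terraform example above:

```hcl
# Sketch: output the NIC's private IP directly from Terraform,
# so the automation can read it with `terraform output`
output "vm-private-ip" {
  value = "${azurerm_network_interface.main.private_ip_address}"
}
```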