Terraform with Azure Marketplace

I've started to get into Terraform and am loving it. For cost reasons I have my services spread across a number of infrastructure providers, and Terraform makes it easy to replicate full services across IaaS providers without issues.
I use some third-party services through the Azure marketplace, similar to Heroku's Add-Ons. I see a facility in Terraform for Heroku Add-On declarations, but not for Azure marketplace subscriptions. How can I do this?
Update:
How do I create an Azure marketplace order/subscription via Terraform?

If I understand your problem correctly, I think the key is to declare the VM with the following sections, with the placeholders replaced:
plan {
publisher = "${publisher}" // e.g. bitnami
product = "${offer}" // e.g. elk
name = "${sku}" // e.g. 46
}
storage_image_reference {
publisher = "${publisher}" // e.g. bitnami
offer = "${offer}" // e.g. elk
sku = "${sku}" // e.g. 46
version = "${version}" // e.g. latest
}
So a complete VM resource definition would look something like this:
resource "azurerm_virtual_machine" "virtual_machine" {
count = "${var.vm_count}"
name = "${element(module.template.vm_names, count.index)}"
location = "${var.location}"
resource_group_name = "${var.resource_group_name}"
network_interface_ids = ["${element(azurerm_network_interface.network_interface.*.id, count.index)}"]
vm_size = "${var.vm_size}"
delete_data_disks_on_termination = true
delete_os_disk_on_termination = true
plan {
publisher = "${var.publisher}"
product = "${var.offer}"
name = "${var.sku}"
}
boot_diagnostics {
enabled = true
storage_uri = "${var.boot_diagnostics_storage_url}"
}
storage_image_reference {
publisher = "${var.publisher}"
offer = "${var.offer}"
sku = "${var.sku}"
version = "${var.version}"
}
storage_os_disk {
name = "primarydisk"
vhd_uri = "${join("", list(var.disks_container_url, "/" , element(module.template.vm_names, count.index), ".vhd"))}"
caching = "ReadWrite"
create_option = "FromImage"
}
os_profile {
computer_name = "${element(module.template.vm_names, count.index)}"
admin_username = "${element(module.template.user_names, count.index)}"
}
os_profile_linux_config {
disable_password_authentication = true
ssh_keys = [{
path = "/home/${element(module.template.user_names, count.index)}/.ssh/authorized_keys"
key_data = "${replace(file("../vars/keys/vm.pub"),"\n","")}"
}]
}
tags {
environment = "${var.resource_group_name}"
}
}

You may also need to accept the marketplace terms for the offer before the VM can be created; this can be done with the azurerm_marketplace_agreement resource.
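A minimal sketch, reusing the placeholder variables from the example above (the example values are assumptions, not tied to any particular offer):
resource "azurerm_marketplace_agreement" "marketplace_agreement" {
  publisher = "${var.publisher}" // e.g. bitnami
  offer     = "${var.offer}"     // e.g. elk
  plan      = "${var.sku}"       // e.g. 46
}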

Related

How to create shared image based off existing VM in Azure?

I have an existing Virtual Machine running in Azure that has customised software installed. I want to use Terraform to create an image of this virtual machine and store it in an image gallery. The problem is, I don't understand how Terraform uniquely identifies the virtual machine in question.
Currently, I have the following:
// Get VM I want to create an image for (how can I use this as the image reference?)
data "azurerm_virtual_machine" "example" {
name = "example"
resource_group_name = "rg-example"
}
resource "azurerm_shared_image_gallery" "example" {
name = "example_image_gallery"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
description = "Shared images and things."
}
resource "azurerm_shared_image" "example" {
name = "my-image"
gallery_name = azurerm_shared_image_gallery.example.name
resource_group_name = "rg-example"
location = "australiacentral"
os_type = "Linux"
identifier {
publisher = "teradata"
offer = "vantage-teradata-viewpoint"
sku = "teradata-viewpoint-single-system-hourly-new"
}
specialized = true
}
As far as I can tell, Terraform can only create the image based on the identifier block. But this does not uniquely identify my virtual machine. Am I missing something obvious?
My goal is to perform, via Terraform, the "Capture" operation that is available in the Azure Portal. How do I specify my source VM?
Through additional research, I found I needed an azurerm_shared_image_version resource. Here, I was able to reference my existing Virtual Machine via managed_image_id:
// Get clienttools VM information
data "azurerm_virtual_machine" "example" {
name = "test-virtual-machine"
resource_group_name = "rg-example"
}
resource "azurerm_shared_image_gallery" "example" {
name = "myGallery"
resource_group_name = "rg-example"
location = "australiacentral"
description = "Shared images and things."
}
resource "azurerm_shared_image" "example" {
name = "my-image"
gallery_name = azurerm_shared_image_gallery.example.name
resource_group_name = "rg-example"
location = "australiacentral"
os_type = "Windows"
identifier {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2019-datacenter-gensecond"
}
// Set this as it defaults to V1
hyper_v_generation = "V2"
specialized = true
}
resource "azurerm_shared_image_version" "example" {
name = "0.0.1"
gallery_name = azurerm_shared_image_gallery.example.name
image_name = azurerm_shared_image.example.name
resource_group_name = "rg-example"
location = "australiacentral"
managed_image_id = data.azurerm_virtual_machine.example.id
target_region {
name = "australiacentral"
regional_replica_count = 1
storage_account_type = "Standard_LRS"
}
}

Terraform deployment for 'Workspace-based Application Insights' on Azure

I have been trying to figure out a way to prepare a Terraform template for my App Service / Azure Function where I can connect it to Application Insights while creating them through Terraform. Well, it worked, BUT Application Insights shows:
Migrate this resource to Workspace-based Application Insights to gain support for all of the capabilities of Log Analytics, including Customer-Managed Keys and Commitment Tiers. Click here to learn more and migrate in a few clicks.
How do I achieve it from Terraform? The Terraform documentation page makes no mention of such a setup. I appreciate your help on this.
Here is the Terraform code for the Azure Function:
resource "azurerm_linux_function_app" "t_funcapp" {
name = "t-function-app"
location = local.resource_location
resource_group_name = local.resource_group_name
service_plan_id = azurerm_service_plan.t_app_service_plan.id
storage_account_name = azurerm_storage_account.t_funcstorage.name
storage_account_access_key = azurerm_storage_account.t_funcstorage.primary_access_key
site_config {
application_stack {
java_version = "11"
}
remote_debugging_enabled = false
ftps_state = "AllAllowed"
}
app_settings = {
APPINSIGHTS_INSTRUMENTATIONKEY = "${azurerm_application_insights.t_appinsights.instrumentation_key}"
}
depends_on = [
azurerm_resource_group.t_rg,
azurerm_service_plan.t_app_service_plan,
azurerm_storage_account.t_funcstorage,
azurerm_application_insights.t_appinsights
]
}
Here is the Terraform code for Application Insights:
resource "azurerm_application_insights" "t_appinsights" {
name = "t-appinsights"
location = local.resource_location
resource_group_name = local.resource_group_name
application_type = "web"
depends_on = [
azurerm_log_analytics_workspace.t_workspace
]
}
output "instrumentation_key" {
value = azurerm_application_insights.t_appinsights.instrumentation_key
}
output "app_id" {
value = azurerm_application_insights.t_appinsights.app_id
}
You must create a Log Analytics Workspace and add it to your Application Insights.
For example
resource "azurerm_log_analytics_workspace" "example" {
name = "workspace-test"
location = local.resource_location
resource_group_name = local.resource_group_name
sku = "PerGB2018"
retention_in_days = 30
}
resource "azurerm_application_insights" "t_appinsights" {
name = "t-appinsights"
location = local.resource_location
resource_group_name = local.resource_group_name
workspace_id = azurerm_log_analytics_workspace.example.id
application_type = "web"
}
output "instrumentation_key" {
value = azurerm_application_insights.t_appinsights.instrumentation_key
}
output "app_id" {
value = azurerm_application_insights.t_appinsights.app_id
}
Hope this helps!

Is version mandatory while creating an Azure VM using Terraform?

So I have been working with Terraform for the last 3 weeks and have been trying to use it to create self-hosted GitHub Actions runners in our Azure account.
We have a shared Windows VM image in an Azure Compute Gallery that I'm planning to use as the base image for the GA runner. I have noticed that these shared Windows VM images do not generally have any versions attached to them; they just have a publisher, offer and SKU attached.
I also verified by creating a new image from a VM, to check whether somebody had missed attaching the version, but no, shared images do not really have a version attached.
Well, they do have versions, but the version is not attached the way it is for Microsoft platform images.
[Screenshot: example of a shared image]
Now I found that in Terraform, runners can be created using both the azurerm_windows_virtual_machine and azurerm_virtual_machine resources.
I used both of them to test the runner creation; below is the Terraform code used:
data "azurerm_shared_image" "win19_gold_image" {
provider = azurerm.gi
name = "Windows-2019_base"
gallery_name = data.azurerm_shared_image_gallery.cap_win_gold_image_gallery.name
resource_group_name = "gi-rg"
}
resource "azurerm_virtual_machine" "win_runners_gold_image_based" {
provider = azurerm.og
name = "ga-win-gold-1"
location = "East US"
count = "1" # if I need to increase the number of VMs.
resource_group_name = data.azurerm_resource_group.dts_rg.name
network_interface_ids = [azurerm_network_interface.azure_win_runner_gold_nic[count.index].id,]
vm_size = "Standard_D4ads_v5"
delete_os_disk_on_termination = true
delete_data_disks_on_termination = true
storage_image_reference {
publisher = data.azurerm_shared_image.win19_gold_image.identifier[0].publisher
offer = data.azurerm_shared_image.win19_gold_image.identifier[0].offer
sku = data.azurerm_shared_image.win19_gold_image.identifier[0].sku
# Here I get the error: Error: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidParameter" Message="The value of parameter imageReference.version is invalid." Target="imageReference.version"
}
storage_os_disk {
name = "ga-win-gold-os-disk-1"
caching = "None"
create_option = "FromImage"
managed_disk_type = "StandardSSD_LRS"
}
os_profile {
computer_name = "ga-win-gold-1"
admin_username = "svc"
admin_password = var.WINDOWS_ADMIN_PASS
}
os_profile_windows_config {
enable_automatic_upgrades = true
provision_vm_agent = true
}
storage_data_disk {
name = "ga-win-gold-data-disk-1"
caching = "None"
create_option = "Empty"
disk_size_gb = var.disk_size_gb
lun = 0
managed_disk_type = "StandardSSD_LRS"
}
}
OR
data "azurerm_shared_image" "win19_gold_image" {
provider = azurerm.gi
name = "Windows-2019_base"
gallery_name = data.azurerm_shared_image_gallery.cap_win_gold_image_gallery.name
resource_group_name = "gi-rg"
}
resource "azurerm_windows_virtual_machine" "azure_win_runner" {
provider = azurerm.og
name = "vm-github-actions-win-${count.index}"
resource_group_name = data.azurerm_resource_group.dts_rg.name
location = "East US"
size = var.windows-vm-size
count = "${var.number_of_win_az_instances}"
network_interface_ids = [
azurerm_network_interface.azure_win_runner_nic[count.index].id,
]
computer_name = "vm-ga-win-${count.index}"
admin_username = var.windows-admin-username
admin_password = var.WINDOWS_ADMIN_PASS
os_disk {
name = "vm-github-actions-win-${count.index}-os-disk"
caching = "None"
storage_account_type = "StandardSSD_LRS"
}
source_image_reference {
publisher = data.azurerm_shared_image.win19_gold_image.identifier[0].publisher
offer = data.azurerm_shared_image.win19_gold_image.identifier[0].offer
sku = data.azurerm_shared_image.win19_gold_image.identifier[0].sku
version = data.azurerm_shared_image.win19_gold_image.identifier[0].version # says this object does not have a version attached to it.
# or version = "latest" or any other correct version string will throw error at time of apply that such a version does not exist.
}
enable_automatic_updates = true
provision_vm_agent = true
}
If I'm using azurerm_virtual_machine and I omit the version in storage_image_reference, I receive the error:
Error: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidParameter" Message="The value of parameter imageReference.version is invalid." Target="imageReference.version"
And if I add the version, then I receive the error:
Error: Unsupported attribute.
This object does not have an attribute named "version".
When using azurerm_windows_virtual_machine, if I remove the version argument Terraform complains that version is required, and when I provide a string such as 1.0.0 or latest, it complains at apply time (terraform apply) that such a version does not exist.
And if I pull the version from data.azurerm_shared_image.cap_win19_gold_image, it complains that this object does not have a version attribute.
I am confused as to how to use shared images for VM creation using Terraform if version is mandatory yet not available for Azure shared images. Please advise on what I am missing.
Any help would be appreciated.
Thanks,
Sekhar
It seems that to get a version of the image you need to use another resource [1] and another data source [2]:
data "azurerm_image" "win19_gold_image" {
name = "Windows-2019_base"
resource_group_name = "gi-rg"
}
resource "azurerm_shared_image_version" "win19_gold_image" {
name = "0.0.1"
gallery_name = data.azurerm_shared_image.win19_gold_image.gallery_name
image_name = data.azurerm_shared_image.win19_gold_image.name
resource_group_name = data.azurerm_shared_image.win19_gold_image.resource_group_name
location = data.azurerm_shared_image.win19_gold_image.location
managed_image_id = data.azurerm_image.win19_gold_image.id
}
And then in the source_image_reference block in the azurerm_windows_virtual_machine resource:
source_image_reference {
publisher = data.azurerm_shared_image.win19_gold_image.identifier[0].publisher
offer = data.azurerm_shared_image.win19_gold_image.identifier[0].offer
sku = data.azurerm_shared_image.win19_gold_image.identifier[0].sku
version = azurerm_shared_image_version.win19_gold_image.name
}
It seems the name argument is actually the version of the image [3]:
name - (Required) The version number for this Image Version, such as 1.0.0. Changing this forces a new resource to be created.
[1] https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/shared_image_version
[2] https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/image
[3] https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/shared_image_version#name
Hi all who come across this question,
I found the solution to my issue. All I had to do was define an azurerm_shared_image_version data source and then use source_image_id in azurerm_windows_virtual_machine in place of the source_image_reference {} block.
Below is what I did:
data "azurerm_shared_image_gallery" "win_gold_image_gallery" {
provider = azurerm.gi
name = "golden_image_gallery"
resource_group_name = "gi-rg"
}
data "azurerm_shared_image" "win19_gold_image" {
provider = azurerm.gi
name = "Windows-2019_base"
gallery_name = data.azurerm_shared_image_gallery.win_gold_image_gallery.name
resource_group_name = data.azurerm_shared_image_gallery.win_gold_image_gallery.resource_group_name
}
data "azurerm_shared_image_version" "win19_gold_image_version" {
provider = azurerm.gi
name = "latest" # "recent" is also a tag to use the most recent image version
image_name = data.azurerm_shared_image.win19_gold_image.name
gallery_name = data.azurerm_shared_image.win19_gold_image.gallery_name
resource_group_name = data.azurerm_shared_image.win19_gold_image.resource_group_name
}
resource "azurerm_windows_virtual_machine" "azure_win_gi_runner" {
provider = azurerm.dep
name = "vm-github-actions-win-gi-${count.index}"
resource_group_name = data.azurerm_resource_group.dts_rg.name
location = "East US"
size = var.windows-vm-size
count = "${var.number_of_win_gi_az_instances}"
network_interface_ids = [
azurerm_network_interface.azure_win_gi_runner_nic[count.index].id,
]
computer_name = "ga-win-gi-${count.index}"
admin_username = var.windows-admin-username
admin_password = var.WINDOWS_ADMIN_PASS
os_disk {
name = "vm-github-actions-win-gi-${count.index}-os-disk"
caching = "None"
storage_account_type = "StandardSSD_LRS"
}
source_image_id = data.azurerm_shared_image_version.win19_gold_image_version.id
# This is the thing I was missing.
enable_automatic_updates = true
provision_vm_agent = true
tags = {
whichVM = var.gh_windows_runner
environment = var.environment
}
}

How to provision Terraform Azure Linux VM - You have not accepted the legal terms on this subscription

I am using Terraform with Azure to provision an Ubuntu virtual machine and I am getting the below error:
creating Linux Virtual Machine: (Name "test-bastion" / Resource Group "ssi-test"): compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="ResourcePurchaseValidationFailed" Message="User failed validation to purchase resources. Error message: 'You have not accepted the legal terms on this subscription: 'xxxxx-xxxxx-xxxxx-xxxx' for this plan.
I can spin up VMs through the Azure portal but not with Terraform.
Here's my Terraform module:
resource "azurerm_linux_virtual_machine" "linux_virtual_machine" {
name = join("-", [var.environment, "bastion"])
resource_group_name = var.resource_group_name
location = var.location
size = var.bastion_size
admin_username = var.bastion_admin_username
computer_name = join("-", [var.project, var.environment, "bastion"])
custom_data = filebase64(var.bastion_custom_data_path)
network_interface_ids = [
azurerm_network_interface.bastion_nic.id
]
admin_ssh_key {
username = var.bastion_admin_username
public_key = file(var.bastion_public_key_path)
}
source_image_reference {
publisher = var.bastion_publisher
offer = var.bastion_offer
sku = var.bastion_sku
version = var.bastion_version
}
plan {
name = var.bastion_sku
publisher = var.bastion_publisher
product = var.bastion_offer
}
os_disk {
name = join("-", [var.project, var.environment, "bastion-os-disk"])
storage_account_type = "Standard_LRS"
caching = "ReadWrite"
disk_size_gb = var.bastion_os_disk_size_gb
}
}
# Create network interface
resource "azurerm_network_interface" "bastion_nic" {
name = join("-", [var.project, var.environment, "bastion-nic"])
location = var.location
resource_group_name = var.resource_group_name
depends_on = [azurerm_public_ip.bastion_public_ip]
ip_configuration {
name = join("-", [var.project, var.environment, "bastion-nic-conf"])
subnet_id = var.bastion_subnet_id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.bastion_public_ip.id
}
tags = var.default_tags
}
And here are the variable values (some are removed):
bastion_admin_username = "ubuntu"
bastion_os_disk_size_gb = "60"
bastion_public_key_path = "./data/keys/bastion.pub"
bastion_size = "Standard_B2s"
bastion_publisher = "canonical"
bastion_offer = "0001-com-ubuntu-server-focal"
bastion_sku = "20_04-lts-gen2"
bastion_version = "latest"
bastion_custom_data_path = "./data/scripts/bastion.sh"
Can someone help me?
The plan block is mostly for BYOS images like Red Hat, Arista & Palo Alto. The flavor below doesn't need any plan, as it can be used via automation without accepting marketplace terms first.
> az vm image list-skus -l westeurope -p canonical -f 0001-com-ubuntu-server-focal
{
"extendedLocation": null,
"id": "/Subscriptions/b500a058-6396-45db-a15d-3f31913e84a5/Providers/Microsoft.Compute/Locations/westeurope/Publishers/canonical/ArtifactTypes/VMImage/Offers/0001-com-ubuntu-server-focal/Skus/20_04-lts-gen2",
"location": "westeurope",
"name": "20_04-lts-gen2",
"properties": {
"automaticOSUpgradeProperties": {
"automaticOSUpgradeSupported": false
}
},
"tags": null
}
If you remove the plan block below from the azurerm_linux_virtual_machine resource, it should work for the image flavor you picked.
plan {
name = var.bastion_sku
publisher = var.bastion_publisher
product = var.bastion_offer
}
The reason it works via the portal is that the ARM template doesn't add a plan block there. You can download and verify the ARM template before creating the VM in the portal if you want.
Accept the agreement first, with this resource: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/marketplace_agreement
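A minimal sketch of that agreement, using the variables from the question (the resource label "bastion" is illustrative, and this is only needed for offers that actually require a plan):
resource "azurerm_marketplace_agreement" "bastion" {
  publisher = var.bastion_publisher
  offer     = var.bastion_offer
  plan      = var.bastion_sku

  # Consider adding a depends_on from the VM to this resource (or applying it
  # in a separate run) so the terms are accepted before the VM is created.
}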

Forcing azurerm extension to wait until vm is deployed

I am running into an issue that is preventing my use of Terraform at the moment and wanted to see if anyone has seen the same behavior. I am using count to deploy multiple VMs along with a DSC extension for each VM.
Because I need the DSC extension to run on the first machine before running on the second machine, I attempted to use the depends_on property for the extension, but due to the way I am using interpolation for machine naming, it fails because interpolation is not supported in depends_on.
Does anyone know a way around this? I have also tested pushing the machine names into a data resource, but once again, I need the depends_on property to support interpolation.
resource "azurerm_virtual_machine" "Server" {
name = "${format("${var.customerProject}${var.environment}${var.machineAcronyms["Server"]}%02d", count.index + 1)}"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
network_interface_ids = ["${element(azurerm_network_interface.Server_NIC.*.id, count.index)}"]
vm_size = "${var.Server_Specs["ServerType"]}"
count = "${var.Server_Specs["Number_of_Machines"]}"
storage_image_reference {
publisher = "${var.Server_Specs["Image_Publisher"]}"
offer = "${var.Server_Specs["Image_Offer"]}"
sku = "${var.Server_Specs["Image_sku"]}"
version = "${var.Server_Specs["Image_Version"]}"
}
plan {
name = "${var.Server_Specs["Plan_Name"]}"
publisher = "${var.Server_Specs["Plan_Publisher"]}"
product = "${var.Server_Specs["Plan_Product"]}"
}
os_profile {
computer_name = "${format("${var.customerProject}${var.environment}${var.machineAcronyms["Server"]}%02d", count.index + 1)}"
admin_username = "${var.AdminCredentials["Username"]}"
admin_password = "${var.AdminCredentials["Password"]}"
}
os_profile_windows_config {
provision_vm_agent = "true"
}
}
resource "azurerm_virtual_machine_extension" "Server_DSC" {
name = "${format("${var.customerProject}${var.environment}${var.machineAcronyms["Server"]}%02d", count.index + 1)}-dsc"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
virtual_machine_name = "${format("${var.customerProject}${var.environment}${var.machineAcronyms["Server"]}%02d", count.index + 1)}"
publisher = "Microsoft.Powershell"
type = "DSC"
type_handler_version = "${var.dsc_extension}"
auto_upgrade_minor_version = true
depends_on = ["azurerm_storage_share.fileShare"]
count = "${var.Server_Specs["Number_of_Machines"]}"
settings = <<SETTINGS
{
"configuration": {
"url": "${var.resourceStore["fileShareUrl"]}${var.resourceStore["dscArchiveName"]}${var.azureCredentials["storageKey"]}",
"function": "contenthostingha",
"script": "contenthostingha.ps1"
},
"configurationArguments": {
"ExternalDNS": "${var.externalDNS}",
"NumberOfMachines": "${var.Server_Specs["Number_of_Machines"]}",
"AzureFileUrl": "${azurerm_storage_share.fileShare.url}",
"AzureFileShareKey": "${azurerm_storage_account.storageAccount.secondary_access_key}"
}
}
SETTINGS
protected_settings = <<PROTECTED_SETTINGS
{
"configurationArguments": {}
}
PROTECTED_SETTINGS
}
I haven't tried it, but you might duplicate the Server_DSC resource (e.g. Server_DSC_0 and Server_DSC_nth): the _0 resource uses a fixed "0" instead of count, and the number of instances for the other resource is lessened by 1, e.g. using a local variable that subtracts one from the original variable. A rough sketch of that split is below.
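A minimal sketch, written in the Terraform 0.11-style syntax the question uses; the resource names (Server_DSC_0 / Server_DSC_nth), the count arithmetic, and the omission of the settings blocks are illustrative assumptions, not a tested configuration:
resource "azurerm_virtual_machine_extension" "Server_DSC_0" {
  # DSC extension for the first VM only (overall index 0).
  name                       = "${format("${var.customerProject}${var.environment}${var.machineAcronyms["Server"]}%02d", 1)}-dsc"
  location                   = "${var.location}"
  resource_group_name        = "${azurerm_resource_group.rg.name}"
  virtual_machine_name       = "${format("${var.customerProject}${var.environment}${var.machineAcronyms["Server"]}%02d", 1)}"
  publisher                  = "Microsoft.Powershell"
  type                       = "DSC"
  type_handler_version       = "${var.dsc_extension}"
  auto_upgrade_minor_version = true
  depends_on                 = ["azurerm_storage_share.fileShare"]

  # settings and protected_settings same as in the original Server_DSC resource.
}

resource "azurerm_virtual_machine_extension" "Server_DSC_nth" {
  # One extension per remaining VM; the first VM is handled by Server_DSC_0.
  count                      = "${var.Server_Specs["Number_of_Machines"] - 1}"
  name                       = "${format("${var.customerProject}${var.environment}${var.machineAcronyms["Server"]}%02d", count.index + 2)}-dsc"
  location                   = "${var.location}"
  resource_group_name        = "${azurerm_resource_group.rg.name}"
  virtual_machine_name       = "${format("${var.customerProject}${var.environment}${var.machineAcronyms["Server"]}%02d", count.index + 2)}"
  publisher                  = "Microsoft.Powershell"
  type                       = "DSC"
  type_handler_version       = "${var.dsc_extension}"
  auto_upgrade_minor_version = true

  # depends_on only needs a static resource reference here, so no interpolation
  # is required: every "nth" extension waits for the first one to finish.
  depends_on = ["azurerm_virtual_machine_extension.Server_DSC_0", "azurerm_storage_share.fileShare"]

  # settings and protected_settings same as in the original Server_DSC resource.
}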
