Create compute instance in Azure Machine Learning workspace using Terraform - Azure

I am using the Terraform azurerm_machine_learning_workspace resource to create a Machine Learning workspace in Azure. The relevant code is as follows:
resource "azurerm_machine_learning_workspace" "example" {
name = "${var.ML_workspace_name}"
location = data.azurerm_resource_group.resource_group.location
resource_group_name = data.azurerm_resource_group.resource_group.name
application_insights_id = azurerm_application_insights.azurem_application_insight.id
key_vault_id = azurerm_key_vault.key_vault.id
storage_account_id = azurerm_storage_account.storage_account.id
identity {
type = "SystemAssigned"
}
}
This is working fine.
Now I want to create a compute instance within this workspace. Either Terraform does not support this, or I am not able to find it in the docs.
Are there other ways to automate its creation?
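A minimal sketch, assuming your azurerm provider version is recent enough to include the azurerm_machine_learning_compute_instance resource (the instance name and VM size below are illustrative placeholders):
resource "azurerm_machine_learning_compute_instance" "example" {
  # Placeholder instance name and VM size; adjust to your naming rules and quota.
  name                          = "example-ci"
  machine_learning_workspace_id = azurerm_machine_learning_workspace.example.id
  virtual_machine_size          = "STANDARD_DS2_V2"
}
Older provider versions may also require a location argument; if the resource is not available in your version at all, the Azure CLI (for example, az ml compute create) or an ARM/Bicep template can automate the same creation outside Terraform.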

Related

How to get the System Assigned Identity of Azure Synapse Workspace using Terraform?

I am trying to get the system-assigned managed identity for Azure Synapse.
I have the following Terraform code:
// Create Synapse Workspace
resource "azurerm_synapse_workspace" "synapsews" {
  name                                 = var.dat_synapse_ws
  resource_group_name                  = azurerm_resource_group.resource_group.name
  location                             = azurerm_resource_group.resource_group.location
  storage_data_lake_gen2_filesystem_id = azurerm_storage_data_lake_gen2_filesystem.datalake-config.id
  sql_identity_control_enabled         = true
  sql_administrator_login              = var.synapse_sql_administrator_login
  sql_administrator_login_password     = var.synapse_sql_administrator_login_password
  managed_virtual_network_enabled      = true

  aad_admin {
    login     = data.azuread_user.user.mail
    object_id = data.azuread_user.user.object_id
    tenant_id = data.azurerm_client_config.current.tenant_id
  }

  identity {
    type = "SystemAssigned"
  }

  depends_on = [azurerm_storage_account.datalake]
}
How to get the System Assigned Identity of Azure Synapse Workspace using Terraform?
After applying your Terraform code, you can reference the managed service identity of this Synapse workspace through its exported identity block, for example azurerm_synapse_workspace.synapsews.identity[0].principal_id.
More info:
Source: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/synapse_workspace#principal_id
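If you need to surface that value, a minimal sketch of an output block, assuming the identity is exported as a list block as in recent azurerm provider versions (the output name is arbitrary):
output "synapse_msi_principal_id" {
  # Principal ID of the Synapse workspace's system-assigned managed identity.
  value = azurerm_synapse_workspace.synapsews.identity[0].principal_id
}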

Use terraform to add a VM to the new Azure Monitoring (without OMS Agent)

When I configure Azure monitoring using the OMS solution for VMs, following this answer (Enable Azure Monitor for existing Virtual machines using terraform), I notice that this feature is being deprecated and that Azure prefers you move to the new monitoring solution (not using the Log Analytics agent).
Azure allows me to configure VM monitoring using this GUI, but I would like to do it using terraform.
Is there a particular setup I have to use in Terraform to achieve this? (I am using a Linux VM, by the way.)
Yes, that is correct. The omsagent has been marked as legacy, and Azure now has a new monitoring agent called "Azure Monitor agent". The solution given below is for Linux; please check the official Terraform docs for Windows machines.
We need three resources to replicate the UI setup in Terraform (plus the Azure Monitor agent VM extension on the machine):
azurerm_log_analytics_workspace
azurerm_monitor_data_collection_rule
azurerm_monitor_data_collection_rule_association
Below is the example code:
data "azurerm_virtual_machine" "vm" {
name = var.vm_name
resource_group_name = var.az_resource_group_name
}
resource "azurerm_log_analytics_workspace" "workspace" {
name = "${var.project}-${var.env}-log-analytics"
location = var.az_location
resource_group_name = var.az_resource_group_name
sku = "PerGB2018"
retention_in_days = 30
}
resource "azurerm_virtual_machine_extension" "AzureMonitorLinuxAgent" {
name = "AzureMonitorLinuxAgent"
publisher = "Microsoft.Azure.Monitor"
type = "AzureMonitorLinuxAgent"
type_handler_version = 1.0
auto_upgrade_minor_version = "true"
virtual_machine_id = data.azurerm_virtual_machine.vm.id
}
resource "azurerm_monitor_data_collection_rule" "example" {
name = "example-rules"
resource_group_name = var.az_resource_group_name
location = var.az_location
destinations {
log_analytics {
workspace_resource_id = azurerm_log_analytics_workspace.workspace.id
name = "test-destination-log"
}
azure_monitor_metrics {
name = "test-destination-metrics"
}
}
data_flow {
streams = ["Microsoft-InsightsMetrics"]
destinations = ["test-destination-log"]
}
data_sources {
performance_counter {
streams = ["Microsoft-InsightsMetrics"]
sampling_frequency_in_seconds = 60
counter_specifiers = ["\\VmInsights\\DetailedMetrics"]
name = "VMInsightsPerfCounters"
}
}
}
# Associate the VM with the data collection rule
resource "azurerm_monitor_data_collection_rule_association" "example1" {
  name                    = "example1-dcra"
  target_resource_id      = data.azurerm_virtual_machine.vm.id
  data_collection_rule_id = azurerm_monitor_data_collection_rule.example.id
  description             = "example"
}
Reference:
monitor_data_collection_rule
monitor_data_collection_rule_association

Terraform can't create free web app on Azure

Trying to set up my first web app on Azure with Terraform, using their free tier.
The resource group and App Service plan were created, but the web app creation gives an error that says:
creating Linux Web App: (Site Name "testazurermjay" / Resource Group "test-resources"): web.AppsClient#C. Status=<nil> <nil>
Here is the terraform main.tf file:
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "test" {
name = "test-resources"
location = "Switzerland North"
}
resource "azurerm_service_plan" "test" {
name = "test"
resource_group_name = azurerm_resource_group.test.name
location = "UK South" #azurerm_resource_group.test.location
os_type = "Linux"
sku_name = "F1"
}
resource "azurerm_linux_web_app" "test" {
name = "testazurermjay"
resource_group_name = azurerm_resource_group.test.name
location = azurerm_service_plan.test.location
service_plan_id = azurerm_service_plan.test.id
site_config {}
}
At first I thought the name was the issue for the azurerm_linux_web_app, so I changed it from test to testazurermjay, but that did not work.
I was able to get it to work, BUT I had to use a deprecated resource called azurerm_app_service instead of azurerm_linux_web_app. I ALSO had to make sure that my resource group and App Service plan were in the same location. When I originally tried to set both the resource group and the plan to Switzerland North, creating the App Service plan failed (that is why you see the plan set to UK South in the original question). HOWEVER, after I set BOTH the resource group and the App Service plan to UK South, they were created in the same location. Then I used azurerm_app_service to create a free-tier service, with use_32_bit_worker_process = true set in the site_config block.
Here is the full terraform file now:
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "test" {
name = "test-resources"
location = "UK South"
}
resource "azurerm_service_plan" "test" {
name = "test"
resource_group_name = azurerm_resource_group.test.name
location = azurerm_resource_group.test.location
os_type = "Linux"
sku_name = "F1"
}
resource "azurerm_app_service" "test" {
name = "sofcvlepsaipd"
location = azurerm_resource_group.test.location
resource_group_name = azurerm_resource_group.test.name
app_service_plan_id = azurerm_service_plan.test.id
site_config {
use_32_bit_worker_process = true
}
}
I MUST STRESS THAT THIS ISN'T BEST PRACTICE, AS azurerm_app_service IS GOING TO BE REMOVED IN THE NEXT VERSION. THIS SEEMS TO INDICATE THAT TERRAFORM WON'T BE ABLE TO CREATE FREE-TIER APP SERVICES IN THE NEXT UPDATE.
If someone knows how to do this with azurerm_linux_web_app or knows a better approach to this let me know.
I just encountered a similar issue: the always_on setting defaults to true, but that is not supported in the Free tier. As stated here, you must explicitly set it to false when using the Free tier:
resource "azurerm_linux_web_app" "test" {
name = "testazurermjay"
resource_group_name = azurerm_resource_group.test.name
location = azurerm_service_plan.test.location
service_plan_id = azurerm_service_plan.test.id
site_config {
always_on = false
}
}

Terraform tried creating an "implicit dependency" but the next stage of my code still fails to find the Azure resource group just created

I would be grateful for any assistance. I thought I had nailed this one when I stumbled across the following link ...
Creating a resource group with terraform in azure: Cannot find resource group directly after creating it
However, the next stage of my code is still failing...
Error: Code="ResourceGroupNotFound" Message="Resource group 'ShowTell' could not be found
# We strongly recommend using the required_providers block to set the
# Azure Provider source and version being used
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.64.0"
    }
  }
}

# Configure the Microsoft Azure Provider
provider "azurerm" {
  features {}
}

variable "resource_group_name" {
  type        = string
  default     = "ShowTell"
  description = ""
}

# Create your resource group
resource "azurerm_resource_group" "example" {
  name     = var.resource_group_name
  location = "UK South"
}

# Should be accessible from LukesContainer.uksouth.azurecontainer.io
resource "azurerm_container_group" "LukesContainer" {
  name                = "LukesContainer"
  location            = "UK South"
  resource_group_name = "${var.resource_group_name}"
  ip_address_type     = "public"
  dns_name_label      = "LukesContainer"
  os_type             = "Linux"

  container {
    name   = "hello-world"
    image  = "microsoft/aci-helloworld:latest"
    cpu    = "0.5"
    memory = "1.5"

    ports {
      port     = "443"
      protocol = "TCP"
    }
  }

  container {
    name   = "sidecar"
    image  = "microsoft/aci-tutorial-sidecar"
    cpu    = "0.5"
    memory = "1.5"
  }

  tags = {
    environment = "testing"
  }
}
In order to create an implicit dependency you must refer directly to the object that the dependency relates to. In your case, that means deriving the resource group name from the resource group object itself, rather than from the variable you'd used to configure that object:
resource "azurerm_container_group" "LukesContainer" {
name = "LukesContainer"
location = "UK South"
resource_group_name = azurerm_resource_group.example.name
# ...
}
With the configuration you included in your question, both the resource group and the container group depend on var.resource_group_name, but there was no dependency between azurerm_container_group.LukesContainer and azurerm_resource_group.example, so Terraform was free to create those two objects in either order.
By deriving the container group's resource group name from the resource group object you tell Terraform that the resource group must be processed first, and then its results used to populate the container group.
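If for some reason you cannot reference an attribute of the resource group object, an explicit depends_on meta-argument gives the same ordering guarantee, though referencing the object directly (as shown above) is generally preferred. A minimal sketch:
resource "azurerm_container_group" "LukesContainer" {
  name                = "LukesContainer"
  location            = "UK South"
  resource_group_name = var.resource_group_name

  # Explicit dependency: forces the resource group to be created first.
  depends_on = [azurerm_resource_group.example]

  # ...
}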

Azure Databricks workspace using terraform

Trying to create a Databricks workspace using Terraform, but I get unsupported-argument errors:
resource "azurerm_databricks_workspace" "workspace" {
name = "testdata"
resource_group_name = "cloud-terraform"
location = "east us"
sku = "premium"
virtual_network_id = azurerm_virtual_network.vnet.id
public_subnet_name = "databrickpublicsubnet"
public_subnet_cidr = "10.0.0.0/22"
private_subnet_name = "databrickprivatesubnet"
private_subnet_cidr = "10.0.0.0/22"
tags = {
Environment = "terraformtest"
}
}
Error: An argument named "virtual_network_id" is not expected here. An argument named "public_subnet_name" is not expected here. An argument named "public_subnet_cidr" is not expected here.
I haven't tried to set up Databricks via Terraform, but I believe (per the docs) you need to add those properties in a custom_parameters block:
resource "azurerm_databricks_workspace" "workspace" {
name = "testdata"
resource_group_name = "cloud-terraform"
location = "east us"
sku = "premium"
custom_parameters {
virtual_network_id = azurerm_virtual_network.vnet.id
public_subnet_name = "databrickpublicsubnet"
private_subnet_name = "databrickprivatesubnet"
}
tags = {
Environment = "terraformtest"
}
}
The two CIDR entries aren't part of the Terraform documentation.
True. You can add Terraform resources to create the subnets (assuming the vnet already exists, you can use a data azurerm_virtual_network block), create the two new subnets, and then reference the names of the new public/private subnets.
Then you run into what seems to be a chicken-and-egg issue, though.
You get: Error: you must define a value for 'public_subnet_network_security_group_association_id' if 'public_subnet_name' is set.
The problem is that the network security group is typically auto-generated when the Databricks workspace is created (with a name like databricksnsgrandomstring). That works when creating the workspace in the portal, but via Terraform I have to define it to create the workspace, yet it doesn't exist until the workspace is created. The fix is to not let the workspace generate its own NSG name, but to name it yourself with an NSG resource block.
Below is code I use (dbname means Databricks name!). Here I'm adding to an existing resource group 'qa' and an existing vnet, and only showing the public subnet and NSG association (you can easily add the private ones; a sketch of the private counterparts is included after the custom_parameters block below). Just copy/modify in your own .tf file(s), and you'll definitely need to change the address_prefixes to your own CIDR values that work within your vnet and don't stomp on existing subnets.
resource "azurerm_subnet" "public" {
name = "${var.dbname}-public-subnet"
resource_group_name = data.azurerm_resource_group.qa.name
virtual_network_name = data.azurerm_virtual_network.vnet.name
address_prefixes = ["1.2.3.4/24"]
delegation {
name = "databricks_public"
service_delegation {
name = "Microsoft.Databricks/workspaces"
}
}
}
resource "azurerm_network_security_group" "nsg" {
name = "${var.dbname}-qa-databricks-nsg"
resource_group_name = data.azurerm_resource_group.qa.name
location= data.azurerm_resource_group.qa.location
}
resource "azurerm_subnet_network_security_group_association" "nsga_public" {
network_security_group_id = azurerm_network_security_group.nsg.id
subnet_id = azurerm_subnet.public.id
}
Then, in your azurerm_databricks_workspace block, replace your custom_parameters with:
custom_parameters {
  public_subnet_name                                    = azurerm_subnet.public.name
  public_subnet_network_security_group_association_id  = azurerm_subnet_network_security_group_association.nsga_public.id
  private_subnet_name                                   = azurerm_subnet.private.name
  private_subnet_network_security_group_association_id = azurerm_subnet_network_security_group_association.nsga_private.id
  virtual_network_id                                    = data.azurerm_virtual_network.vnet.id
}
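For completeness, a sketch of the private-side counterparts referenced above (azurerm_subnet.private and nsga_private), mirroring the public subnet and association shown earlier; the address_prefixes value is a placeholder you must adapt to your vnet:
resource "azurerm_subnet" "private" {
  name                 = "${var.dbname}-private-subnet"
  resource_group_name  = data.azurerm_resource_group.qa.name
  virtual_network_name = data.azurerm_virtual_network.vnet.name
  # Placeholder CIDR; must fit your vnet and not overlap the public subnet.
  address_prefixes     = ["1.2.4.0/24"]

  delegation {
    name = "databricks_private"

    service_delegation {
      name = "Microsoft.Databricks/workspaces"
    }
  }
}

resource "azurerm_subnet_network_security_group_association" "nsga_private" {
  network_security_group_id = azurerm_network_security_group.nsg.id
  subnet_id                 = azurerm_subnet.private.id
}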
