Issue provisioning Databricks workspace resources using Terraform - terraform

I have defined a resource to provision Databricks workspaces on Azure using Terraform as follows; it consumes a list of inputs from a tfvars file (one entry per workspace) and provisions them.
resource "azurerm_databricks_workspace" "workspace" {
for_each = { for r in var.databricks_workspace_list : r.workspace_nm => r}
name = each.key
resource_group_name = each.value.resource_group_name
location = each.value.location
sku = "standard"
tags = {
Environment = "Dev"
}
}
I am trying to create an additional resource as below:
resource "databricks_instance_pool" "smallest_nodes" {
instance_pool_name = "Smallest Nodes"
min_idle_instances = 0
max_capacity = 300
node_type_id = data.databricks_node_type.smallest.id // data block is defined
idle_instance_autotermination_minutes = 10
}
To create the instance pool, I need to pass the workspace id in the databricks provider block as below:
provider "databricks" {
azure_client_id= *******
azure_client_secret= *******
azure_tenant_id= *******
azure_workspace_resource_id = azurerm_databricks_workspace.workspace.id
}
But when I run terraform plan, it fails with the below error:
Missing resource instance key
  azure_workspace_resource_id = azurerm_databricks_workspace.workspace.id
Because azurerm_databricks_workspace.workspace has "for_each" set, its attributes must be accessed on specific instances.
For example, to correlate with indices of a referring resource, use:
  azurerm_databricks_workspace.workspace[each.key]
I can't use for_each in the provider block, and I haven't been able to find a way to index the workspace id in the provider block.
I'd appreciate your inputs.
TF version: 0.13
AzureRM provider: 3.10.0
Databricks provider: 0.5.7

The problem is that you may create multiple workspaces when you use for_each in the azurerm_databricks_workspace resource, but your provider block refers to a "generic" resource instance, so Terraform complains.
The solution here would be either:
Remove for_each if you're creating just one workspace, or
Instead of azurerm_databricks_workspace.workspace.id, refer to azurerm_databricks_workspace.workspace[<name>].id, where <name> is the key of the specific workspace instance from your list (see the sketch below).
P.S. Your databricks_instance_pool resource doesn't have an explicit depends_on, so the operation will fail with an authentication error as described here.
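For example, a minimal sketch of the second option, assuming a workspace whose workspace_nm is "workspace-1" (the key and the credential variables are illustrative, not from the original question):

provider "databricks" {
  azure_client_id             = var.client_id     # illustrative variables standing in for the redacted credentials
  azure_client_secret         = var.client_secret
  azure_tenant_id             = var.tenant_id

  # Index the for_each resource with the workspace_nm key this provider should target
  azure_workspace_resource_id = azurerm_databricks_workspace.workspace["workspace-1"].id
}

resource "databricks_instance_pool" "smallest_nodes" {
  instance_pool_name                    = "Smallest Nodes"
  min_idle_instances                    = 0
  max_capacity                          = 300
  node_type_id                          = data.databricks_node_type.smallest.id
  idle_instance_autotermination_minutes = 10

  # Explicit dependency so the pool is not created before the workspace exists
  depends_on = [azurerm_databricks_workspace.workspace]
}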

Related

A resource with the ID "/subscriptions/.../resourceGroups/rgaks/providers/Microsoft.Storage/storageAccounts/aksbackupstorage" already exists

I have created a storage account and a container inside it to store my AKS backup using Terraform. I created a child module for the storage account and container, and I am calling it from the root module's "main.tf". I have created two modules, "module aks_backup_storage" and "module aks_backup_container". The modules are created successfully after running "terraform apply", but at the end it raises the following errors in the console.
A resource with the ID "/subscriptions/...../resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_storage_account" for more information.
failed creating container: failed creating container: containers.Client#Create: Failure sending request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="ContainerAlreadyExists" Message="The specified container already exists.\nRequestId:f.........\nTime:2022-12-28T12:52:08.2075701Z"
root module
module "aks_backup_storage" {
source = "../modules/aks_pv_storage_container"
rg_aks_backup_storage = var.rg_aks_backup_storage
aks_backup_storage_account = var.aks_backup_storage_account
aks_backup_container = var.aks_backup_container
rg_aks_backup_storage_location = var.rg_aks_backup_storage_location
aks_backup_retention_days = var.aks_backup_retention_days
}
Child module
resource "azurerm_resource_group" "rg_aksbackup" {
name = var.rg_aks_backup_storage
location = var.rg_aks_backup_storage_location
}
resource "azurerm_storage_account" "aks_backup_storage" {
name = var.aks_backup_storage_account
resource_group_name = var.rg_aks_backup_storage
location = var.rg_aks_backup_storage_location
account_kind = "StorageV2"
account_tier = "Standard"
account_replication_type = "ZRS"
access_tier = "Hot"
enable_https_traffic_only = true
min_tls_version = "TLS1_2"
#allow_blob_public_access = false
allow_nested_items_to_be_public = false
is_hns_enabled = false
blob_properties {
container_delete_retention_policy {
days = var.aks_backup_retention_days
}
delete_retention_policy {
days = var.aks_backup_retention_days
}
}
}
# Different container can be created for the different backup level such as cluster, Namespace, PV
resource "azurerm_storage_container" "aks_backup_container" {
  #name                 = "aks-backup-container"
  name                  = var.aks_backup_container
  #storage_account_name = azurerm_storage_account.aks_backup_storage.name
  storage_account_name  = var.aks_backup_storage_account
}
I have also tried to import the resource using the below command:
terraform import ['azurerm_storage_account.aks_backup_storage /subscriptions/a3ae2713-0218-47a2-bb72-c6198f50c56f/resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage']
But zsh fails with the following error:
zsh: no matches found: [azurerm_storage_account.aks_backup_storage /subscriptions/a3ae2713-0218-47a2-bb72-c6198f50c56f/resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage/]
I had no issues when I was creating the resources using the same code without declaring any modules. Now I have several modules called from main.tf in the root module.
I really appreciate any suggestions, thanks in advance.
variable.tf
variable "rg_aks_backup_storage" {
type = string
description = "storage account name for the backup"
default = "rg-aks-backup-storage"
}
variable "aks_backup_storage_account" {
type = string
description = "storage account name for the backup"
default = "aksbackupstorage"
}
variable "aks_backup_container" {
type = string
description = "storage container name "
#default = "aks-storage-container"
default = "aksbackupstoragecontaine"
}
variable "rg_aks_backup_storage_location" {
type = string
default = "westeurope"
}
variable "aks_backup_retention_days" {
type = number
default = 90
}
The storage account name that you use must be unique within Azure (see naming restrictions). I checked, and the default storage account name that you are using is already taken. Have you tried changing the name to something you know is unique?
A way to consistently do this would be to add a random suffix at the end of the name, e.g.:

resource "random_string" "random_suffix" {
  length  = 6
  special = false
  upper   = false
}

resource "azurerm_storage_account" "aks_backup_storage" {
  name = join("", tolist([var.aks_backup_storage_account, random_string.random_suffix.result]))
  ...
}
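If you go this route, make sure the container references the storage account resource rather than the raw variable (as in the commented-out line in your child module), so it picks up the suffixed name; a minimal sketch:

resource "azurerm_storage_container" "aks_backup_container" {
  name                 = var.aks_backup_container
  # Reference the resource so the container follows the randomized account name
  storage_account_name = azurerm_storage_account.aks_backup_storage.name
}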
I also received the same error when I tried to run terraform apply while creating a container registry.
It usually occurs when the local terraform state file does not match the resources that are actually deployed in the portal.
Even if a resource with the same name does not seem to exist in the portal or resource group, it may have been deployed previously and exist in Azure without being tracked in your terraform state. If you run into this type of issue, compare the tf state with the portal; if the resource exists in Azure but is not in the state, import it with terraform import.
Note: validate that the terraform state matches the deployed resources, then run terraform init and terraform apply once you are done with the changes.
As an example, I imported a container registry, and it imported successfully:
terraform import azurerm_container_registry.acr "/subscriptions/<subscriptionID>/resourceGroups/<resourceGroup>/providers/Microsoft.ContainerRegistry/registries/xxxxcontainerRegistry1"
After that I ran terraform apply again and the resource deployed successfully without any errors.
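For the storage account in the question, the import command would look something like the following; since the resource lives inside a module, the resource address needs the module prefix, and the square brackets from the original command must be dropped (they are what made zsh report "no matches found"):

terraform import 'module.aks_backup_storage.azurerm_storage_account.aks_backup_storage' '/subscriptions/a3ae2713-0218-47a2-bb72-c6198f50c56f/resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage'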

Terraform Error 'the number of path segments is not divisible by 2' creating Azure Configuration Keys

I'm trying to create an Azure App Configuration service and keys through Terraform, but when I run my Terraform through my pipeline I get an error running terraform plan. This is my tf script for creating the service and keys:
resource "azurerm_app_configuration" "appconf" {
name = var.name
location = var.location
resource_group_name = var.resource_group_name
tags = var.tags
sku = "standard"
}
resource "azurerm_app_configuration_key" "MainAPI" {
configuration_store_id = azurerm_app_configuration.appconf.id
key = "MainAPI"
value = var.picking_api_url
type = "kv"
label = var.environment_name
}
# other keys omitted
This is the error I see:
Error: while parsing resource ID: while parsing resource ID: the number of path segments is not divisible by 2 in "subscriptions/[SubscriptionId]/resourceGroups/[rgName]/providers/Microsoft.AppConfiguration/configurationStores/[AppConfigServiceName]/AppConfigurationKey/MainAPI/Label"
I get this error regardless of whether I explicitly include a label argument for the key in my TF script. I've bumped up my version of the Terraform ARM provider to 2.90 in case it was a bug in the provider, but still get the error.
resource "azurerm_app_configuration_key" depends on azurerm_role_assignment. You need to define a resource for that as well for assigning role to app_configuration.
The following code from the Hashicorp Terraform documentation for azurerm_app_configuration_key demonstrates how to do this:
resource "azurerm_role_assignment" "appconf_dataowner" {
scope = azurerm_app_configuration.appconf.id
role_definition_name = "App Configuration Data Owner"
principal_id = data.azurerm_client_config.current.object_id
}
resource "azurerm_app_configuration_key" "test" {
configuration_store_id = azurerm_app_configuration.appconf.id
key = "appConfKey1"
label = "somelabel"
value = "a test"
depends_on = [
azurerm_role_assignment.appconf_dataowner
]
}
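Note that this example references data.azurerm_client_config.current, which needs a matching data block somewhere in the configuration:

data "azurerm_client_config" "current" {}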

Azure Terraform Build Generic Components using Infrastructure as Code

I am new to Terraform and Azure. I am trying to build a Resource Group / Resources using Terraform. Below is the design for the same.
I have written Terraform code to build Log Analytics workspace and Automation account.
Now, below are my questions:
Cost Management / Azure Monitor / Network Watcher / Defender for Cloud: can I build all of these using Terraform code in this resource group, or do they need to be built manually from the Azure portal? When creating any resource, options like cost estimation / management are already available on the left-hand side. Does that mean they can simply be selected there when needed, with no need to build them from Terraform code?
How do we apply role entitlements / policy assignments from Terraform code?
Here is the code I have written to build the Automation account / Log Analytics workspace:
terraform {
  required_version = ">=0.12"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~>2.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "management" {
  # Mandatory resource attributes
  name     = "k8s-log-analytics-test"
  location = "eastus"
}

resource "random_id" "workspace" {
  keepers = {
    # Generate a new id each time we switch to a new resource group
    group_name = azurerm_resource_group.management.name
  }

  byte_length = 8
}

resource "azurerm_log_analytics_workspace" "management" {
  # Mandatory resource attributes
  name                = "k8s-workspace-${random_id.workspace.hex}"
  location            = azurerm_resource_group.management.location
  resource_group_name = azurerm_resource_group.management.name

  # Optional resource attributes
  retention_in_days = 30
  sku               = "PerGB2018"
}

resource "azurerm_log_analytics_solution" "management" {
  # Mandatory resource attributes
  solution_name         = "mgmyloganalytsolution"
  location              = azurerm_resource_group.management.location
  resource_group_name   = azurerm_resource_group.management.name
  workspace_resource_id = azurerm_log_analytics_workspace.management.id
  workspace_name        = azurerm_log_analytics_workspace.management.name

  plan {
    publisher = "Microsoft"
    product   = "OMSGallery/ContainerInsights"
  }
}

resource "azurerm_automation_account" "management" {
  # Mandatory resource attributes
  name                = "mgmtautomationaccount"
  location            = azurerm_resource_group.management.location
  resource_group_name = azurerm_resource_group.management.name
  sku_name            = "Basic"
}

resource "azurerm_log_analytics_linked_service" "management" {
  # Mandatory resource attributes
  resource_group_name = azurerm_resource_group.management.name
  workspace_id        = azurerm_log_analytics_workspace.management.id
  read_access_id      = azurerm_automation_account.management.id
}
Cost Management / Azure Monitor / Network Watcher / Defender for Cloud: can I build all of these using Terraform code in this resource group, or do they need to be built manually from the Azure portal? Does that mean they can simply be selected in the portal when needed, with no need to build them from Terraform code?
Yes, you can create Network Watcher, Azure Monitor and Cost Management resources using Terraform resource blocks such as azurerm_network_watcher, azurerm_network_watcher_flow_log, azurerm_monitor_metric_alert, azurerm_resource_group_cost_management_export, azurerm_consumption_budget_resource_group, etc. Defender for Cloud can't be built from Terraform. And yes, you are correct that cost management, monitoring, etc. are also available in the portal, but their resources (budget alerts, etc.) still need to be created; for simplification they have been added as blades in the portal.
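For example, a minimal sketch of a Network Watcher in the resource group from the question (the resource name is illustrative):

resource "azurerm_network_watcher" "management" {
  # Network Watcher in the same region and resource group as the management resources
  name                = "mgmt-network-watcher"
  location            = azurerm_resource_group.management.location
  resource_group_name = azurerm_resource_group.management.name
}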
How do we apply role entitlements / policy assignments from Terraform code?
You can use azurerm_role_assignment to assign built-in roles, and azurerm_role_definition to create a custom role and then assign it. For policy assignment you can use azurerm_resource_policy_assignment (or the resource-group / subscription scoped variants) and remediate using azurerm_policy_insights_remediation.
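A minimal sketch, assuming the management resource group from the code above; the built-in "Reader" role and "Allowed locations" policy are illustrative choices, not requirements:

data "azurerm_client_config" "current" {}

data "azurerm_policy_definition" "allowed_locations" {
  display_name = "Allowed locations"
}

resource "azurerm_role_assignment" "reader" {
  # Assign the built-in Reader role on the resource group to the current principal
  scope                = azurerm_resource_group.management.id
  role_definition_name = "Reader"
  principal_id         = data.azurerm_client_config.current.object_id
}

resource "azurerm_resource_group_policy_assignment" "allowed_locations" {
  name                 = "allowed-locations"
  resource_group_id    = azurerm_resource_group.management.id
  policy_definition_id = data.azurerm_policy_definition.allowed_locations.id

  parameters = jsonencode({
    listOfAllowedLocations = { value = ["eastus"] }
  })
}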
For all of the Azure resource blocks, you can refer to the official registry documentation for the Terraform AzureRM provider and the Terraform AzureAD provider.

How to add resource dependencies in terraform

I have created a GCP Kubernetes cluster using Terraform and configured a few Kubernetes resources such as namespaces and helm releases. I would like Terraform to automatically destroy/recreate all the Kubernetes cluster resources if the GCP cluster is destroyed/recreated, but I can't seem to figure out how to do it.
The behavior I am trying to recreate is similar to what you would get if you used triggers with null_resources. Is this possible with normal resources?
resource "google_container_cluster" "primary" {
name = "marcellus-wallace"
location = "us-central1-a"
initial_node_count = 3
resource "kubernetes_namespace" "example" {
metadata {
annotations = {
name = "example-annotation"
}
labels = {
mylabel = "label-value"
}
name = "terraform-example-namespace"
#Something like this, but this only works with null_resources
triggers {
cluster_id = "${google_container_cluster.primary.id}"
}
}
}
In your specific case, you don't need to specify any explicit dependencies. They are set automatically because you reference google_container_cluster.primary.id in your second resource.
In cases where you do need to set a dependency manually, you can use the depends_on meta-argument.
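For illustration, a minimal sketch of what the explicit form would look like on the namespace from the question (only the depends_on line is new):

resource "kubernetes_namespace" "example" {
  metadata {
    name = "terraform-example-namespace"
  }

  # Explicitly order this namespace after the cluster: created after it, destroyed before it
  depends_on = [google_container_cluster.primary]
}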

How to iterate multiple resources over the same list?

New to Terraform here. I'm trying to create multiple projects (in Google Cloud) using Terraform. The problem is that I have to create several resources to completely set up a project. I tried count, but how can I tie multiple resources together sequentially using count? Here are the resources I need to create per project:
Create project using resource "google_project"
Enable API service using resource "google_project_service"
Attach the service project to a host project using resource "google_compute_shared_vpc_service_project" (I'm using shared VPC)
This works if I want to create a single project. But, if I pass a list of projects as input, how can I execute all the above resources for each project in that list sequentially?
Eg.
Input
project_list=["proj-1","proj-2"]
Execute the following sequentially:
resource "google-project" for "proj-1"
resource "google_project_service" for "proj-1"
resource "google_compute_shared_vpc_service_project" for "proj-1"
resource "google-project" for "proj-2"
resource "google_project_service" for "proj-2"
resource "google_compute_shared_vpc_service_project" for "proj-2"
I'm using Terraform version 0.11 which does not support for loops
In Terraform, you can accomplish this using count and the two interpolation functions, element() and length().
First, you'll give your module an input variable:
variable "project_list" {
type = "list"
}
Then, you'll have something like:
resource "google_project" {
count = "${length(var.project_list)}"
name = "${element(var.project_list, count.index)}"
}
resource "google_project_service" {
count = "${length(var.project_list)}"
name = "${element(var.project_list, count.index)}"
}
resource "google_compute_shared_vpc_service_project" {
count = "${length(var.project_list)}"
name = "${element(var.project_list, count.index)}"
}
And of course you'll have your other configuration in those resource declarations as well.
Note that this pattern is described in Terraform Up and Running, Chapter 5, and there are other examples of using count.index in the docs here.
A small update to this question/answer (Terraform 0.13 and above): using count with length() is no longer advisable because of the way Terraform tracks list elements. Imagine the following scenario:
Suppose you have an array with 3 elements, project_list=["proj-1","proj-2","proj-3"], and you apply it. If you then delete the "proj-2" item from the array and run a plan, Terraform will modify your second element to "proj-3" instead of simply removing it from the list (more info in this good post). The way to get the proper behavior is to use for_each, as follows:
variable "project_list" {
type = list(string)
}
resource "google_project" {
for_each = toset(var.project_list)
name = each.value
}
resource "google_project_service" {
for_each = toset(var.project_list)
name = each.value
}
resource "google_compute_shared_vpc_service_project" {
for_each = toset(var.project_list)
name = each.value
}
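As in the answer above, the other required arguments are omitted. A slightly fuller sketch of how the per-project resources can reference each other by the same key might look like this (the org_id and host project variables and the enabled service are illustrative assumptions, not from the original question):

resource "google_project" "project" {
  for_each   = toset(var.project_list)
  name       = each.value
  project_id = each.value
  org_id     = var.org_id # assumed variable holding the organization id
}

resource "google_project_service" "project" {
  for_each = toset(var.project_list)
  # Reference the matching project instance using the same key
  project  = google_project.project[each.key].project_id
  service  = "compute.googleapis.com"
}

resource "google_compute_shared_vpc_service_project" "project" {
  for_each        = toset(var.project_list)
  host_project    = var.host_project_id # assumed variable for the shared VPC host project
  service_project = google_project.project[each.key].project_id
}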
Hope this helps! 👍
