I am creating a Databricks cluster using Terraform with the code below.
resource "azurerm_resource_group" "myresourcegroup" {
name = "${var.applicationName}-${var.environment}-rg"
location = var.location
tags = {
environment = var.environment
}
}
resource "azurerm_databricks_workspace" "dbworkspace" {
name = "${var.applicationName}-${var.environment}-workspace"
resource_group_name = "${var.applicationName}-${var.environment}-rg"
location = var.location
sku = var.databricks_sku
custom_parameters {
no_public_ip = "true"
virtual_network_id = azurerm_virtual_network.vnet.id
public_subnet_name = azurerm_subnet.public_subnet.name
private_subnet_name = azurerm_subnet.private_subnet.name
public_subnet_network_security_group_association_id = azurerm_subnet.public_subnet.id
private_subnet_network_security_group_association_id = azurerm_subnet.private_subnet.id
}
depends_on = [azurerm_resource_group.myresourcegroup, azurerm_network_security_group.nsg, azurerm_virtual_network.vnet, azurerm_subnet.public_subnet, azurerm_subnet.private_subnet, azurerm_subnet_network_security_group_association.public-sn-nsg-assoc, azurerm_subnet_network_security_group_association.private-sn-nsg-assoc]
}
# Databricks Cluster
resource "databricks_cluster" "dbcluster" {
cluster_name = "${var.applicationName}-${var.environment}-cluster"
spark_version = "10.4.x-scala2.12"
node_type_id = "Standard_DS3_v2"
autotermination_minutes = 10
enable_local_disk_encryption = true
is_pinned = "true"
autoscale {
min_workers = 1
max_workers = 8
}
# spark_conf = {
# "spark.databricks.delta.optimizeWrite.enabled": true,
# "spark.databricks.delta.autoCompact.enabled": true,
# "spark.databricks.delta.preview.enabled": true,
# }
depends_on = [azurerm_resource_group.myresourcegroup, azurerm_network_security_group.nsg, azurerm_virtual_network.vnet, azurerm_subnet.public_subnet, azurerm_subnet.private_subnet, azurerm_subnet_network_security_group_association.public-sn-nsg-assoc, azurerm_subnet_network_security_group_association.private-sn-nsg-assoc, azurerm_databricks_workspace.dbworkspace]
}
My resource group and Databricks workspace are created fine, but the Databricks cluster is not getting created, even though the plan and apply output show it being created. I don't know what I am missing.
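For context, the databricks_cluster resource is not managed by the azurerm provider but by the separate Databricks provider, which has to be pointed at the workspace created above. A minimal sketch of that wiring, assuming Azure CLI authentication (the provider source and authentication method are assumptions, not taken from the question):

terraform {
  required_providers {
    databricks = {
      source = "databricks/databricks"
    }
  }
}

# Point the Databricks provider at the workspace created by azurerm
provider "databricks" {
  host                        = azurerm_databricks_workspace.dbworkspace.workspace_url
  azure_workspace_resource_id = azurerm_databricks_workspace.dbworkspace.id
}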
I have created an AKS cluster using the following Terraform code
resource "azurerm_virtual_network" "test" {
name = var.virtual_network_name
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
address_space = [var.virtual_network_address_prefix]
subnet {
name = var.aks_subnet_name
address_prefix = var.aks_subnet_address_prefix
}
tags = var.tags
}
data "azurerm_subnet" "kubesubnet" {
name = var.aks_subnet_name
virtual_network_name = azurerm_virtual_network.test.name
resource_group_name = azurerm_resource_group.rg.name
depends_on = [azurerm_virtual_network.test]
}
# Create Log Analytics Workspace
module "log_analytics_workspace" {
source = "./modules/log_analytics_workspace"
count = var.enable_log_analytics_workspace == true ? 1 : 0
app_or_service_name = "log"
subscription_type = var.subscription_type
environment = var.environment
resource_group_name = azurerm_resource_group.rg.name
location = var.location
instance_number = var.instance_number
sku = var.log_analytics_workspace_sku
retention_in_days = var.log_analytics_workspace_retention_in_days
tags = var.tags
}
resource "azurerm_kubernetes_cluster" "k8s" {
name = var.aks_name
location = azurerm_resource_group.rg.location
dns_prefix = var.aks_dns_prefix
resource_group_name = azurerm_resource_group.rg.name
http_application_routing_enabled = false
linux_profile {
admin_username = var.vm_user_name
ssh_key {
key_data = file(var.public_ssh_key_path)
}
}
default_node_pool {
name = "agentpool"
node_count = var.aks_agent_count
vm_size = var.aks_agent_vm_size
os_disk_size_gb = var.aks_agent_os_disk_size
vnet_subnet_id = data.azurerm_subnet.kubesubnet.id
}
service_principal {
client_id = local.client_id
client_secret = local.client_secret
}
network_profile {
network_plugin = "azure"
dns_service_ip = var.aks_dns_service_ip
docker_bridge_cidr = var.aks_docker_bridge_cidr
service_cidr = var.aks_service_cidr
}
# Enabled the cluster configuration to the Azure kubernets with RBAC
azure_active_directory_role_based_access_control {
managed = var.azure_active_directory_role_based_access_control_managed
admin_group_object_ids = var.active_directory_role_based_access_control_admin_group_object_ids
azure_rbac_enabled = var.azure_rbac_enabled
}
oms_agent {
log_analytics_workspace_id = module.log_analytics_workspace[0].id
}
timeouts {
create = "20m"
delete = "20m"
}
depends_on = [data.azurerm_subnet.kubesubnet,module.log_analytics_workspace]
tags = var.tags
}
and I want to send the AKS Cluster, Node, Pod, and Container metrics to the Log Analytics workspace so that they are available in Azure Monitor.
I have configured the diagnostic setting as shown below:
resource "azurerm_monitor_diagnostic_setting" "aks_cluster" {
name = "${azurerm_kubernetes_cluster.k8s.name}-audit"
target_resource_id = azurerm_kubernetes_cluster.k8s.id
log_analytics_workspace_id = module.log_analytics_workspace[0].id
log {
category = "kube-apiserver"
enabled = true
retention_policy {
enabled = false
}
}
log {
category = "kube-controller-manager"
enabled = true
retention_policy {
enabled = false
}
}
log {
category = "cluster-autoscaler"
enabled = true
retention_policy {
enabled = false
}
}
log {
category = "kube-scheduler"
enabled = true
retention_policy {
enabled = false
}
}
log {
category = "kube-audit"
enabled = true
retention_policy {
enabled = false
}
}
metric {
category = "AllMetrics"
enabled = false
retention_policy {
enabled = false
}
}
}
Is that all that is needed? I came across an article where they were using azurerm_application_insights, and I don't understand why azurerm_application_insights would be needed to capture cluster-level metrics.
You do not need Application Insights; it depends on whether you also want application-level monitoring.
This is probably what you read:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/application_insights
"Manages an Application Insights component."
Application Insights provides complete monitoring of applications running on AKS and other environments.
https://learn.microsoft.com/en-us/azure/aks/monitor-aks#level-4--applications
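If you do decide you want that application-level telemetry, the component itself is small. A minimal sketch (the resource and instance names are made up; only application_type and the resource group reference, which matches the code above, matter):

resource "azurerm_application_insights" "aks_apps" {
  name                = "${var.aks_name}-appinsights"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  application_type    = "web"
}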
As a matter of good practice, you should also enable a few more categories (see the sketch after the links below):
guard should be enabled, assuming you use AAD.
Enable AllMetrics.
Consider kube-audit-admin for a reduced volume of audit events.
Consider csi-azuredisk-controller.
Consider cloud-controller-manager for the cloud-node-manager component.
See more here:
https://learn.microsoft.com/en-us/azure/aks/monitor-aks#configure-monitoring
https://learn.microsoft.com/en-us/azure/aks/monitor-aks-reference
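Applied to the diagnostic setting above, the additions could look roughly like this (a sketch; it keeps the log/retention_policy syntax used above, which newer azurerm 3.x releases replace with enabled_log, and it assumes these categories are available on your cluster):

resource "azurerm_monitor_diagnostic_setting" "aks_cluster" {
  name                       = "${azurerm_kubernetes_cluster.k8s.name}-audit"
  target_resource_id         = azurerm_kubernetes_cluster.k8s.id
  log_analytics_workspace_id = module.log_analytics_workspace[0].id

  # ... keep the existing kube-apiserver, kube-controller-manager,
  # cluster-autoscaler and kube-scheduler log blocks ...

  # AAD-integrated authentication/authorization (guard) events
  log {
    category = "guard"
    enabled  = true

    retention_policy {
      enabled = false
    }
  }

  # Leaner alternative to kube-audit: only write (create/update/delete) operations;
  # drop the kube-audit block above if you switch to this
  log {
    category = "kube-audit-admin"
    enabled  = true

    retention_policy {
      enabled = false
    }
  }

  # Platform metrics
  metric {
    category = "AllMetrics"
    enabled  = true

    retention_policy {
      enabled = false
    }
  }
}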
I want to deploy an AKS cluster in Azure using Terraform, but it is taking too long to create: more than an hour, and the Terraform job never finishes. In the portal the AKS cluster stays in the Creating status. I'm deploying the AKS cluster in the EastUS2 region.
The VM sizes I used are not too big, so I don't understand what the problem could be.
This is my main file:
main.tf
# Configure the Microsoft Azure Provider.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 2.26"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 1.24.6"
    }
    azapi = {
      source  = "azure/azapi"
      version = ">=1.1.0"
    }
  }

  required_version = ">= 0.14.9"
}

provider "azurerm" {
  features {}
}

provider "kubernetes" {
  host                   = module.aks.host
  username               = module.aks.username
  password               = module.aks.password
  client_certificate     = module.aks.client_certificate
  client_key             = base64decode(module.aks.client_key)
  cluster_ca_certificate = base64decode(module.aks.cluster_ca_certificate)
}

provider "azapi" {
  # subscription_id = data.azurerm_client_config.current.subscription_id
  # tenant_id       = data.azurerm_client_config.current.tenant_id
}

data "azurerm_client_config" "current" {}
module "ResourceGroup" {
source = "./ResourceGroup"
}
module "Networks" {
source = "./Networks"
resource_group_name = module.ResourceGroup.rg_name_out
location = module.ResourceGroup.rg_location_out
}
module "acr" {
source = "./ACR"
resource_group_name = module.ResourceGroup.rg_name_out
location = module.ResourceGroup.rg_location_out
name = var.acr_name
sku = var.acr_sku
admin_enabled = var.acr_admin_enabled
georeplication_locations = var.acr_georeplication_locations
soft_delete_policy_status = var.acr_soft_delete_policy_status
soft_delete_policy_days = var.acr_soft_delete_policy_days
identity_name = var.acr_identity_name
tags = var.tags
# depends_on = [module.StorageAccount]
}
module "aks" {
source = "./AKS"
location = module.ResourceGroup.rg_location_out
resource_group_name = module.ResourceGroup.rg_name_out
acr_id = module.acr.id
name = var.aks_cluster_name
kubernetes_version = var.aks_kubernetes_version
dns_prefix = lower(var.aks_cluster_name)
private_cluster_enabled = var.aks_private_cluster_enabled
automatic_channel_upgrade = var.aks_automatic_channel_upgrade
sku_tier = var.aks_sku_tier
identity_name = var.aks_identity_name
api_server_authorized_ip_ranges = [] #module.Networks.subnet_address_bastion
azure_policy_enabled = var.aks_azure_policy_enabled
http_application_routing_enabled = var.aks_http_application_routing_enabled
network_profile = var.aks_network_profile
aci_connector_linux = var.aks_aci_connector_linux
azure_ad_rbac_managed = var.aks_azure_ad_rbac_managed
tenant_id = data.azurerm_client_config.current.tenant_id
admin_group_object_ids = var.aks_admin_group_object_ids
azure_rbac_enabled = var.aks_azure_rbac_enabled
admin_username = var.aks_admin_username
ssh_public_key = var.aks_ssh_public_key
tags = var.tags
depends_on = [module.Networks, module.acr]
default_node_pool = {
name = "system"
vm_size = "Standard_D2s_v3"
node_count = 1
enable_auto_scaling = true
max_count = 1
min_count = 1
max_surge = "50%"
max_pods = 36
os_disk_size_gb = 50
os_disk_type = "Managed"
ultra_ssd_enabled = true
zones = ["1", "2","3"]
node_labels = { "workload" = "system" }
node_taints = [ "workload=system:NoSchedule" ]
vnet_subnet_id = module.Networks.subnet_id
orchestrator_version = var.aks_kubernetes_version
}
node_pools = [
{
name = "batch"
mode = "User"
vm_size = "Standard_D2s_v3"
node_count = 1
enable_auto_scaling = true
max_count = 1
min_count = 1
max_surge = "50%"
max_pods = 36
os_disk_size_gb = 50
os_disk_type = "Managed"
ultra_ssd_enabled = true
zones = ["1", "2","3"]
node_labels = { "workload" = "batch" }
node_taints = [ "workload=batch:NoSchedule" ]
vnet_subnet_id = module.Networks.subnet_id
orchestrator_version = var.aks_kubernetes_version
}
]
}
I have a .tfvars file with the below contents as the input variables:
aks_configuration = {
  aks1 = {
    name                                  = "cnitest"
    location                              = "westeurope"
    kubernetes_version                    = "1.22.4"
    dns_prefix                            = "cnitest"
    default_nodepool_name                 = "general"
    default_nodepool_size                 = "Standard_B2s"
    default_nodepool_count                = 2
    default_node_pool_autoscale           = true
    default_node_pool_autoscale_min_count = 1
    default_node_pool_autoscale_max_count = 2
    aks_zones                             = null
    network_plugin                        = null
    network_policy                        = null
    vnet_name                             = null
    vnet_enabled                          = false
    subnet_name                           = null
    objectID                              = ["*********"]

    nodepool = [
      {
        name                = "dts"
        vm_size             = "Standard_B2s"
        enable_auto_scaling = true
        mode                = "user"
        node_count          = 1
        max_count           = 2
        min_count           = 1
      }
    ]
  }
}
Now I need to create an AKS cluster with a condition that chooses Azure CNI or kubenet as part of the network configuration.
If vnet_enabled is false, the data resource in the Terraform should be disabled and null should be passed for the configuration below:
#get nodepool vnet subnet ID
data "azurerm_subnet" "example" {
for_each = local.aks_config.vnet_enabled
name = each.value.subnet_name
virtual_network_name = each.value.vnet_name
resource_group_name = var.rg-name
}
resource "azurerm_kubernetes_cluster" "example" {
for_each = local.aks_config
name = each.value.name
location = each.value.location
resource_group_name = var.rg-name
dns_prefix = each.value.dns_prefix
default_node_pool {
name = each.value.default_nodepool_name
node_count = each.value.default_nodepool_count
vm_size = each.value.default_nodepool_size
enable_auto_scaling = each.value.default_node_pool_autoscale
min_count = each.value.default_node_pool_autoscale_min_count
max_count = each.value.default_node_pool_autoscale_max_count
vnet_subnet_id = data.azurerm_subnet.example[each.key].id
zones = each.value.aks_zones
}
identity {
type = "SystemAssigned"
}
network_profile {
network_plugin = each.value.network_plugin
network_policy = each.value.network_policy
}
# azure_active_directory_role_based_access_control {
# managed = true
# admin_group_object_ids = [each.value.objectID]
# }
}
If VNet integration is needed, i.e. the vnet_enabled parameter is set to true, then the data block fetches the details of the subnet to be used by the AKS cluster, the cluster uses that subnet, and the network plugin is set to azure. If it is false, the data block is not evaluated, the subnet is passed as null to the AKS cluster, and it uses kubenet. You don't need to add the network plugin and network policy to the locals; you can derive them from the same condition.
To achieve this, you can use something like the following:
data "azurerm_subnet" "example" {
for_each = local.aks_configuration.vnet_enabled ? 1 : 0
name = each.value.subnet_name
virtual_network_name = each.value.vnet_name
resource_group_name = var.rg-name
}
resource "azurerm_kubernetes_cluster" "example" {
for_each = local.aks_configuration
name = each.value.name
location = each.value.location
resource_group_name = var.rg-name
dns_prefix = each.value.dns_prefix
default_node_pool {
name = each.value.default_nodepool_name
node_count = each.value.default_nodepool_count
vm_size = each.value.default_nodepool_size
enable_auto_scaling = each.value.default_node_pool_autoscale
min_count = each.value.default_node_pool_autoscale_min_count
max_count = each.value.default_node_pool_autoscale_max_count
vnet_subnet_id = each.value.vnet_enabled ? data.azurerm_subnet.example[0].id : null
zones = each.value.aks_zones
}
identity {
type = "SystemAssigned"
}
network_profile {
network_plugin = each.value.vnet_enabled ? "azure" : "kubenet"
network_policy = each.value.vnet_enabled ? "azure" : "calico"
}
# azure_active_directory_role_based_access_control {
# managed = true
# admin_group_object_ids = [each.value.objectID]
# }
}
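This assumes aks_configuration is surfaced as a local built straight from the input variable, roughly like this (the map(any) type is an assumption; you may want a fully specified object type instead):

variable "aks_configuration" {
  type = map(any)
}

locals {
  aks_configuration = var.aks_configuration
}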
I need a bit of help: I have a Terraform script and I want to add multiple VMs and name each network card like node_name-NIC (and do the same for the other resources), but it is failing and I can't find the proper way to do it.
Below is the Terraform script:
terraform {
  required_providers {
    azurerm = {
      // source = "hashicorp/azurerm"
      version = "=1.44"
    }
  }
}

locals {
  rsname      = "testing-new-terraform-modules"
  node_name   = ["server1", "server2"]
  clustersize = 2
  node_size   = "Standard_B4ms"
  av_set_name = "Windows-AV-Set"
  vnet_name   = "VNET_1"
  vnet_rg     = "RG_VNET_D"
  gw_subnet   = "SUB_GW_INT"
  vm_subnet   = "SUB_WIN"
  image_rg    = "RG__TEMPLATE"

  common_tags = {
    lbuildingblock = "GENERAL"
    customer       = "IND"
  }
}
module "resource_group" {
source = "../modules/resources/azure/data-resource-group"
rsname = local.rsname
}
data "azurerm_virtual_network" "virtual_network" {
name = local.vnet_name
resource_group_name = local.vnet_rg
}
# GatewayZone subnet, for the Load Balancer frontend IP address
module "gw_subnet" {
source = "../modules/resources/azure/data-subnet"
subnet-name = local.gw_subnet
vnet-name = data.azurerm_virtual_network.virtual_network.name
rs-name = data.azurerm_virtual_network.virtual_network.resource_group_name
}
module "windows_subnet" {
source = "../modules/resources/azure/data-subnet"
// We will use the SUB_LHIND_P_APP subnet, no need to create a new subnet just for two servers
subnet-name = local.vm_subnet
rs-name = local.vnet_rg
vnet-name = local.vnet_name
}
//data "azurerm_network_security_group" "app_nsg" {
//
// name = "SUB_LHIND_D_APP_NSG"
// resource_group_name = data.azurerm_virtual_network.virtual_network.resource_group_name
//}
module "nic" {
source = "../modules/resources/azure/network-interface"
location = module.resource_group.rs_group_location
name = "${local.node_name[0]}-NIC"
nic_count = local.clustersize
resource_group = module.resource_group.rs_group_name
subnet_id = module.windows_subnet.subnet_id
tags = local.common_tags
}
module "av_set" {
source = "../modules/resources/azure/availability-set"
av_name = local.av_set_name
resource_group = module.resource_group.rs_group_name
location = module.resource_group.rs_group_location
}
module "template_image" {
source = "../modules/resources/azure/data-templates"
template_name = "WindowsServer2019"
resource_group = local.image_rg
}
module "windows" {
source = "../modules/resources/azure/windows-server"
location = module.resource_group.rs_group_location
network_interface_ids = module.nic.nic_id
node_count = local.clustersize
node_name = local.node_name
node_size = local.node_size
av_set_id = module.av_set.availability_set_id
resource_group = module.resource_group.rs_group_name
template_id = module.template_image.template_id
username = var.username
password = var.password
domain_user = var.domain_user
domain_pass = var.domain_pass
}
It is failing with the error below:
Error: Invalid index
on ../modules/resources/azure/network-interface/main.tf line 10, in resource "azurerm_network_interface" "nic":
10: name = var.name[count.index]
|----------------
| count.index is 0
| var.name is "SW-AZLHIND-580-NIC"
This value does not have any indices.
and the network-interface resource is like below:
resource "azurerm_network_interface" "nic" {
count = var.nic_count
location = var.location
name = var.name[count.index]
resource_group_name = var.resource_group
tags = var.tags
// network_security_group_id = var.network_security_group_id
ip_configuration {
name = var.name[count.index]
private_ip_address_allocation = "dynamic"
subnet_id = var.subnet_id
}
}
You can use the following:
name = "{var.name}-${count.index}"
I have the following azurerm_function_app Terraform section:
resource "azurerm_function_app" "main" {
name = "${var.storage_function_name}"
location = "${azurerm_resource_group.main.location}"
resource_group_name = "${azurerm_resource_group.main.name}"
app_service_plan_id = "${azurerm_app_service_plan.main.id}"
storage_connection_string = "${azurerm_storage_account.main.primary_connection_string}"
https_only = true
app_settings {
"APPINSIGHTS_INSTRUMENTATIONKEY" = "${azurerm_application_insights.main.instrumentation_key}"
}
}
How can I specify that the OS is Linux?
Since there is not much documentation, I used the following technique to construct the Terraform template.
Create the type of function app you want in the Azure portal.
Import the same resource using the terraform import command:
terraform import azurerm_function_app.functionapp1
/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.Web/sites/functionapp1
The following information will be retrieved:
id = /subscriptions/xxxx/resourceGroups/xxxxxx/providers/Microsoft.Web/sites/xxxx
app_service_plan_id = /subscriptions/xxx/resourceGroups/xxxx/providers/Microsoft.Web/serverfarms/xxxx
app_settings.% = 3
app_settings.FUNCTIONS_WORKER_RUNTIME = node
app_settings.MACHINEKEY_DecryptionKey = xxxxx
app_settings.WEBSITE_NODE_DEFAULT_VERSION = 10.14.1
client_affinity_enabled = false
connection_string.# = 0
default_hostname = xxxx.azurewebsites.net
enable_builtin_logging = false
enabled = true
https_only = false
identity.# = 0
kind = functionapp,linux,container
location = centralus
name = xxxxx
outbound_ip_addresses = xxxxxx
resource_group_name = xxxx
site_config.# = 1
site_config.0.always_on = true
site_config.0.linux_fx_version = DOCKER|microsoft/azure-functions-node8:2.0
site_config.0.use_32_bit_worker_process = true
site_config.0.websockets_enabled = false
site_credential.# = 1
site_credential.0.password =xxxxxx
site_credential.0.username = xxxxxx
storage_connection_string = xxxx
tags.% = 0
version = ~2
From this I built the following Terraform template:
provider "azurerm" {
}
resource "azurerm_resource_group" "linuxnodefunction" {
name = "azure-func-linux-node-rg"
location = "westus2"
}
resource "azurerm_storage_account" "linuxnodesa" {
name = "azurefunclinuxnodesa"
resource_group_name = "${azurerm_resource_group.linuxnodefunction.name}"
location = "${azurerm_resource_group.linuxnodefunction.location}"
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_app_service_plan" "linuxnodesp" {
name = "azure-func-linux-node-sp"
location = "${azurerm_resource_group.linuxnodefunction.location}"
resource_group_name = "${azurerm_resource_group.linuxnodefunction.name}"
kind = "Linux"
reserved = true
sku {
capacity = 1
size = "P1v2"
tier = "PremiunV2"
}
}
resource "azurerm_function_app" "linuxnodefuncapp" {
name = "azure-func-linux-node-function-app"
location = "${azurerm_resource_group.linuxnodefunction.location}"
resource_group_name = "${azurerm_resource_group.linuxnodefunction.name}"
app_service_plan_id = "${azurerm_app_service_plan.linuxnodesp.id}"
storage_connection_string = "${azurerm_storage_account.linuxnodesa.primary_connection_string}"
app_settings {
FUNCTIONS_WORKER_RUNTIME = "node"
WEBSITE_NODE_DEFAULT_VERSION = "10.14.1"
}
site_config {
always_on = true
linux_fx_version = "DOCKER|microsoft/azure-functions-node8:2.0"
use_32_bit_worker_process = true
websockets_enabled = false
}
}
Let us know your experience with this; I will try to test a few things with it.
I think you need to specify that in the app_service_plan block:
kind = "Linux"
kind - (Optional) The kind of the App Service Plan to create. Possible values are Windows (also available as App), Linux and FunctionApp (for a Consumption Plan). Defaults to Windows. Changing this forces a new resource to be created.
NOTE: When creating a Linux App Service Plan, the reserved field must be set to true.
Example from the Terraform docs:
resource "azurerm_resource_group" "test" {
name = "azure-functions-cptest-rg"
location = "westus2"
}
resource "azurerm_storage_account" "test" {
name = "functionsapptestsa"
resource_group_name = "${azurerm_resource_group.test.name}"
location = "${azurerm_resource_group.test.location}"
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_app_service_plan" "test" {
name = "azure-functions-test-service-plan"
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
kind = "Linux"
sku {
tier = "Dynamic"
size = "Y1"
}
properties {
reserved = true
}
}
resource "azurerm_function_app" "test" {
name = "test-azure-functions"
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
app_service_plan_id = "${azurerm_app_service_plan.test.id}"
storage_connection_string = "${azurerm_storage_account.test.primary_connection_string}"
}