How to put different AKS deployments within the same resource group/cluster? - azure

Current state:
I have all services within one cluster and under a single resource group. My problem is that I have to push all the services on every deploy, and my deploys are getting slow.
What I want to do: I want to split every service into its own directory so I can deploy it separately. Each service now has its own backend, so it can have its own remote state and won't change other things when I deploy. However, can I still keep all the services within the same resource group? If yes, how can I achieve that? If I need to create a resource group for each service that I want to deploy separately, can I still use the same cluster?
main.tf
provider "azurerm" {
version = "2.23.0"
features {}
}
resource "azurerm_resource_group" "main" {
name = "${var.resource_group_name}-${var.environment}"
location = var.location
timeouts {
create = "20m"
delete = "20m"
}
}
resource "tls_private_key" "key" {
algorithm = "RSA"
}
resource "azurerm_kubernetes_cluster" "main" {
name = "${var.cluster_name}-${var.environment}"
location = azurerm_resource_group.main.location
resource_group_name = azurerm_resource_group.main.name
dns_prefix = "${var.dns_prefix}-${var.environment}"
node_resource_group = "${var.resource_group_name}-${var.environment}-worker"
kubernetes_version = "1.18.6"
linux_profile {
admin_username = var.admin_username
ssh_key {
key_data = "${trimspace(tls_private_key.key.public_key_openssh)} ${var.admin_username}#azure.com"
}
}
default_node_pool {
name = "default"
node_count = var.agent_count
vm_size = "Standard_B2s"
os_disk_size_gb = 30
}
role_based_access_control {
enabled = "false"
}
addon_profile {
kube_dashboard {
enabled = "true"
}
}
network_profile {
network_plugin = "kubenet"
load_balancer_sku = "Standard"
}
timeouts {
create = "40m"
delete = "40m"
}
service_principal {
client_id = var.client_id
client_secret = var.client_secret
}
tags = {
Environment = "Production"
}
}
provider "kubernetes" {
version = "1.12.0"
load_config_file = "false"
host = azurerm_kubernetes_cluster.main.kube_config[0].host
client_certificate = base64decode(
azurerm_kubernetes_cluster.main.kube_config[0].client_certificate,
)
client_key = base64decode(azurerm_kubernetes_cluster.main.kube_config[0].client_key)
cluster_ca_certificate = base64decode(
azurerm_kubernetes_cluster.main.kube_config[0].cluster_ca_certificate,
)
}
backend.tf (for main)
terraform {
  backend "azurerm" {}
}
client.tf (service that I want to deploy separately)
resource "kubernetes_deployment" "client" {
metadata {
name = "client"
labels = {
serviceName = "client"
}
}
timeouts {
create = "20m"
delete = "20m"
}
spec {
progress_deadline_seconds = 600
replicas = 1
selector {
match_labels = {
serviceName = "client"
}
}
template {
metadata {
labels = {
serviceName = "client"
}
}
}
}
}
}
resource "kubernetes_service" "client" {
metadata {
name = "client"
}
spec {
selector = {
serviceName = kubernetes_deployment.client.metadata[0].labels.serviceName
}
port {
port = 80
target_port = 80
}
}
}
backend.tf (for client)
terraform {
  backend "azurerm" {
    resource_group_name  = "test-storage"
    storage_account_name = "test"
    container_name       = "terraform"
    key                  = "test"
  }
}
deployment.sh
terraform -v

terraform init \
  -backend-config="resource_group_name=$TF_BACKEND_RES_GROUP" \
  -backend-config="storage_account_name=$TF_BACKEND_STORAGE_ACC" \
  -backend-config="container_name=$TF_BACKEND_CONTAINER"

terraform plan

terraform apply -target="azurerm_resource_group.main" -auto-approve \
  -var "environment=$ENVIRONMENT" \
  -var "tag_version=$TAG_VERSION"
PS: I can rebuild the test resource group from scratch if needed, so don't worry about its current state.
PS2: The state files are being saved in the right place; there is no issue with that.

If you want to deploy resources separately, you could take a look at terraform apply with this option:
-target=resource    Resource to target. Operation will be limited to this
                    resource and its dependencies. This flag can be used
                    multiple times.
For example, to deploy just a resource group and its dependencies:
terraform apply -target="azurerm_resource_group.main"
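To keep every service in its own Terraform state while still deploying into the same resource group and cluster, each service's configuration can look up the existing cluster with a data source instead of creating it. Here is a minimal sketch, assuming the names used in main.tf above (the folder layout and variable names are illustrative):

# client/providers.tf -- one folder and one backend key per service (illustrative layout)
data "azurerm_kubernetes_cluster" "main" {
  name                = "${var.cluster_name}-${var.environment}"
  resource_group_name = "${var.resource_group_name}-${var.environment}"
}

provider "kubernetes" {
  version          = "1.12.0"
  load_config_file = "false"
  host             = data.azurerm_kubernetes_cluster.main.kube_config[0].host

  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.main.kube_config[0].client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.main.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.main.kube_config[0].cluster_ca_certificate)
}

With this, every service keeps the same azurerm backend but a different key (for example -backend-config="key=client" during terraform init), so terraform apply in that folder only touches that service's Kubernetes objects; the resource group and cluster live only in the main state and are never recreated.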

Related

how to automatically deploy to aks resource created with terraform

I would like a guide on how to automatically deploy to a newly provisioned AKS cluster after provisioning it with Terraform. For more context, I am building a one-click, full infrastructure provisioning and deployment, all in one script. Below is my structure for better understanding.
main.tf
resource "azurerm_kubernetes_cluster" "aks" {
name = var.cluster_name
kubernetes_version = var.kubernetes_version
location = var.location
resource_group_name = var.resource_group_name
dns_prefix = var.cluster_name
default_node_pool {
name = "system"
node_count = var.system_node_count
vm_size = "Standard_DS2_v2"
type = "VirtualMachineScaleSets"
availability_zones = [1, 2, 3]
enable_auto_scaling = false
}
identity {
type = "SystemAssigned"
}
network_profile {
load_balancer_sku = "Standard"
network_plugin = "kubenet"
}
role_based_access_control {
enabled = true
}
}
output.tf
resource "local_file" "kubeconfig" {
depends_on = [azurerm_kubernetes_cluster.aks]
filename = "kubeconfig"
content = azurerm_kubernetes_cluster.aks.kube_config_raw
}
deployment.tf
resource "kubernetes_deployment" "sdc" {
metadata {
name = "sdc"
labels = {
app = "serviceName"
#version = "v1.0"
}
namespace = "default"
}
spec {
replicas = 1
selector {
match_labels = {
app = "serviceName"
}
}
template {
metadata {
labels = {
app = "serviceName"
# version = "v1.0"
}
}
spec {
container {
image = "myImage"
name = "serviceName"
port {
container_port = 80
}
}
}
}
}
depends_on = [
azurerm_kubernetes_cluster.aks
]
}
Everything works perfectly: my kubeconfig file is created and downloaded. My major headache is how to make the terraform apply process use the kubeconfig file it created and also run the deployment, making my Terraform script fully automated. I basically want to provision and deploy into the newly provisioned cluster all in one run.
Looking forward to good help.
Thanks guys
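One option, without the Terraform run ever reading the generated kubeconfig file, is to configure the kubernetes provider straight from the cluster resource's outputs so the deployment is applied in the same run. A minimal sketch based on the aks resource above (argument names as in recent kubernetes provider versions; adjust if you pin an older one):

provider "kubernetes" {
  host                   = azurerm_kubernetes_cluster.aks.kube_config[0].host
  client_certificate     = base64decode(azurerm_kubernetes_cluster.aks.kube_config[0].client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.aks.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config[0].cluster_ca_certificate)
}

With that provider block in place, the kubernetes_deployment above is created right after the cluster in a single terraform apply; the local kubeconfig file is then only needed for kubectl access, not for Terraform.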

How to handle json files for terraform deployment

I am new to Terraform and am using the below template to create an Azure App Service plan, App Service and App Insights together.
# Configure the Azure provider
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.98"
    }
  }
  required_version = ">= 1.1.6"
}

provider "azurerm" {
  features {}
}

resource "azurerm_application_insights" "appService-app_insights" {
  name                = "${var.prefix}-${var.App_Insights}"
  location            = var.Location
  resource_group_name = var.ResourceGroup
  application_type    = "web" # Node.JS, java
}

resource "azurerm_app_service" "appservice" {
  name                = "${var.prefix}-${var.appservice_name}"
  location            = var.Location
  resource_group_name = var.ResourceGroup
  app_service_plan_id = azurerm_app_service_plan.appserviceplan.id
  https_only          = true

  site_config {
    linux_fx_version = "NODE|10.14"
  }

  app_settings = {
    # "SOME_KEY" = "some-value"
    "APPINSIGHTS_INSTRUMENTATIONKEY" = azurerm_application_insights.appService-app_insights.instrumentation_key
  }

  depends_on = [
    azurerm_app_service_plan.appserviceplan,
    azurerm_application_insights.appService-app_insights
  ]
}

# create the AppService Plan for the App Service hosting our website
resource "azurerm_app_service_plan" "appserviceplan" {
  name                = "${var.prefix}-${var.app_service_plan_name}"
  location            = var.Location
  resource_group_name = var.ResourceGroup
  kind                = "linux"
  reserved            = true

  sku {
    tier = "Standard"
    size = "S1"
  }
}
I am generating a variable.tf file at runtime which is quite simple in this case
variable "ResourceGroup" {
default = "TerraRG"
}
variable "Location" {
default = "westeurope"
}
variable "app_service_plan_name" {
default = "terra-asp"
}
variable "appservice_name" {
default = "terra-app"
}
variable "prefix" {
default = "pre"
}
variable "App_Insights" {
default = "terra-ai"
}
Everything is working well up to this point.
Now I am trying to extend my infra, and I want to go with multiple App + App Service Plan + App Insights, which might look like the JSON below:
{
  "_comment": "Web App Config",
  "webapps": [
    {
      "Appservice": "app1",
      "Appserviceplan": "asp1",
      "InstrumentationKey": "abc"
    },
    {
      "Appservice": "app2",
      "Appserviceplan": "asp2",
      "InstrumentationKey": "def"
    },
    {
      "Appservice": "app3",
      "Appserviceplan": "asp2",
      "InstrumentationKey": "def"
    }
  ]
}
How can I drive such a resource creation?
Should I create the App Service Plans and App Insights first and then plan the creation of the Apps? What would be a better approach for this scenario?
Since app1, app2 and app3 are not globally unique, I have tried with different names: the App Service names testapprahuluni12345, testapp12346 and testapp12347.
main.tf
# Configure the Azure provider
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.98"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_application_insights" "appService-app_insights" {
  name                = "${var.prefix}-${var.App_Insights}"
  location            = var.Location
  resource_group_name = var.ResourceGroup
  application_type    = "web" # Node.JS, java
}

resource "azurerm_app_service_plan" "appserviceplan" {
  count               = length(var.app_service_plan_name)
  name                = var.app_service_plan_name[count.index]
  location            = var.Location
  resource_group_name = var.ResourceGroup
  kind                = "linux"
  reserved            = true

  sku {
    tier = "Standard"
    size = "S1"
  }
}

# create the App Service for each plan, hosting our website
resource "azurerm_app_service" "appservice" {
  count               = length(var.app_names)
  name                = var.app_names[count.index]
  location            = var.Location
  resource_group_name = var.ResourceGroup
  app_service_plan_id = azurerm_app_service_plan.appserviceplan[count.index].id
  https_only          = true

  site_config {
    linux_fx_version = "NODE|10.14"
  }

  app_settings = {
    # "SOME_KEY" = "some-value"
    "APPINSIGHTS_INSTRUMENTATIONKEY" = azurerm_application_insights.appService-app_insights.instrumentation_key
  }

  depends_on = [
    azurerm_app_service_plan.appserviceplan,
    azurerm_application_insights.appService-app_insights
  ]
}
variable.tf
variable "ResourceGroup" {
default = "v-XXXXX--ree"
}
variable "Location" {
default = "West US 2"
}
/*variable "app_service_plan_name" {
default = "terra-asp"
}
variable "appservice_name" {
default = "terra-app"
}
*/
variable "prefix" {
default = "pre"
}
variable "App_Insights" {
default = "terra-ai"
}
variable "app_names" {
description = "App Service Names"
type = list(string)
default = ["testapprahuluni12345", "testapp12346", "testapp12347"]
}
variable "app_service_plan_name" {
description = "App Service Plan Name"
type = list(string)
default = ["asp1", "asp2", "asp2"]
}
Output:
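If you would rather have the JSON file itself drive the resources instead of maintaining two parallel lists, for_each over jsondecode is an alternative worth considering. A rough sketch, assuming the JSON above is saved as webapps.json next to the configuration (the file name and local value names are illustrative):

locals {
  webapps = jsondecode(file("${path.module}/webapps.json")).webapps
}

resource "azurerm_app_service_plan" "appserviceplan" {
  for_each            = toset([for w in local.webapps : w.Appserviceplan])
  name                = each.value
  location            = var.Location
  resource_group_name = var.ResourceGroup
  kind                = "linux"
  reserved            = true

  sku {
    tier = "Standard"
    size = "S1"
  }
}

resource "azurerm_app_service" "appservice" {
  for_each            = { for w in local.webapps : w.Appservice => w }
  name                = each.value.Appservice
  location            = var.Location
  resource_group_name = var.ResourceGroup
  app_service_plan_id = azurerm_app_service_plan.appserviceplan[each.value.Appserviceplan].id
  https_only          = true

  site_config {
    linux_fx_version = "NODE|10.14"
  }
}

Duplicate plan names in the JSON (asp2 above) collapse into a single plan, and adding a fourth entry to the file creates the new plan and app on the next apply.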

Creating Azure Data Factory Linked Service with Terraform Creates Link in Live Mode

When I create a linked service in Azure Data Factory (ADF) for Databricks with terraform (using azurerm_data_factory_linked_service_azure_databricks) the linked service shows up only in live mode.
How can I make the linked service available in GIT mode where all the other ADF pipeline configurations are stored?
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.97.0"
    }
    databricks = {
      source = "databrickslabs/databricks"
    }
  }
}

provider "azurerm" {
  features {}
}

provider "databricks" {
  host = azurerm_databricks_workspace.this.workspace_url
}

data "azurerm_client_config" "this" {}

resource "azurerm_data_factory" "this" {
  name                = "myadf-9182371362"
  resource_group_name = "testrg"
  location            = "East US"

  identity {
    type = "SystemAssigned"
  }

  vsts_configuration {
    account_name    = "mydevopsorg"
    branch_name     = "main"
    project_name    = "adftest"
    repository_name = "adftest"
    root_folder     = "/adf/"
    tenant_id       = data.azurerm_client_config.this.tenant_id
  }
}

resource "azurerm_databricks_workspace" "this" {
  name                = "mydbworkspace"
  resource_group_name = "testrg"
  location            = "East US"
  sku                 = "standard"
}

data "databricks_node_type" "smallest" {
  local_disk = true

  depends_on = [
    azurerm_databricks_workspace.this
  ]
}

data "databricks_spark_version" "latest_lts" {
  long_term_support = true

  depends_on = [
    azurerm_databricks_workspace.this
  ]
}

resource "databricks_cluster" "this" {
  cluster_name            = "Single Node"
  spark_version           = data.databricks_spark_version.latest_lts.id
  node_type_id            = data.databricks_node_type.smallest.id
  autotermination_minutes = 20

  spark_conf = {
    "spark.databricks.cluster.profile" : "singleNode"
    "spark.master" : "local[*]"
  }

  depends_on = [
    azurerm_databricks_workspace.this
  ]

  custom_tags = {
    "ResourceClass" = "SingleNode"
  }
}

data "azurerm_resource_group" "this" {
  name = "testrg"
}

resource "azurerm_role_assignment" "example" {
  scope                = data.azurerm_resource_group.this.id
  role_definition_name = "Contributor"
  principal_id         = azurerm_data_factory.this.identity[0].principal_id
}

resource "azurerm_data_factory_linked_service_azure_databricks" "msi_linked" {
  name                       = "ADBLinkedServiceViaMSI"
  data_factory_id            = azurerm_data_factory.this.id
  resource_group_name        = "testrg"
  description                = "ADB Linked Service via MSI"
  adb_domain                 = "https://${azurerm_databricks_workspace.this.workspace_url}"
  existing_cluster_id        = databricks_cluster.this.id
  msi_work_space_resource_id = azurerm_databricks_workspace.this.id
}
result in git mode
result in live mode

Terraform AKS load balancer forwarding pod issue: Internal Server Error

I've created the solution below with Azure Kubernetes Service (AKS) and Terraform. For some reason the load balancer gives me an "Internal Server Error" on the HTTP page.
My current findings are:
The Load Balancer is connected to the Front-end.
The frontend pod is up and running
The backend pod is already up and running
What is going wrong here?
The screenshots below give you an overview of how the setup looks in the Azure Portal. I've also included the AKS/Terraform code to give you an idea of how it looks.
Pods
Service and Ingress
Infra.tf
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "infra" {
name = "${var.resourcegroup}"
location = "${var.location}"
}
resource "azurerm_kubernetes_cluster" "infra" {
name = "${var.prefix}-aks"
location = "${var.location}"
resource_group_name = "${var.resourcegroup}"
dns_prefix = "${var.prefix}-aks"
default_node_pool {
name = "default"
node_count = 1
vm_size = "Standard_DS2_v2"
}
identity {
type = "SystemAssigned"
}
depends_on = [
azurerm_resource_group.infra,
]
}
App.tf
provider "azurerm" {
features {}
}
data "azurerm_kubernetes_cluster" "aks" {
name = "${var.prefix}-aks"
resource_group_name = var.resourcegroup
}
provider "kubernetes" {
host = data.azurerm_kubernetes_cluster.aks.kube_config[0].host
client_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
client_key = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
}
resource "kubernetes_namespace" "azurevote" {
metadata {
annotations = {
name = "azurevote-annotation"
}
labels = {
mylabel = "azurevote-value"
}
name = var.namespace
}
}
resource "kubernetes_service" "aks-azurevote-front" {
metadata {
name = "azure-vote-front"
namespace = var.namespace
}
spec {
selector = {
app = kubernetes_pod.aks-azurevote-back.metadata.0.labels.app
}
session_affinity = "ClientIP"
port {
port = 80
target_port = 80
}
type = "LoadBalancer"
}
depends_on = [
kubernetes_namespace.azurevote,
]
}
resource "kubernetes_pod" "aks-azurevote-front" {
metadata {
name = "azure-vote-front"
namespace = var.namespace
labels = {
app = "azure-vote-front"
}
}
spec {
container {
image = "mcr.microsoft.com/azuredocs/azure-vote-front:v1"
name = "front"
env {
name = "ALLOW_EMPTY_PASSWORD"
value = "yes"
}
}
}
depends_on = [
kubernetes_namespace.azurevote,
]
}
resource "kubernetes_pod" "aks-azurevote-back" {
metadata {
name = "azure-vote-back"
namespace = var.namespace
labels = {
app = "azure-vote-back"
}
}
spec {
container {
image = "mcr.microsoft.com/oss/bitnami/redis:6.0.8"
name = "back"
env {
name = "ALLOW_EMPTY_PASSWORD"
value = "yes"
}
}
}
depends_on = [
kubernetes_namespace.azurevote,
]
}
resource "kubernetes_service" "aks-azurevote-back" {
metadata {
name = "azure-vote-back"
namespace = var.namespace
}
spec {
selector = {
app = kubernetes_pod.aks-azurevote-back.metadata.0.labels.app
}
session_affinity = "ClientIP"
port {
port = 6379
target_port = 6379
}
type = "ClusterIP"
}
depends_on = [
kubernetes_namespace.azurevote,
]
}
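One thing that stands out in the posted App.tf: the azure-vote-front service's selector references the labels of the back pod, so the public load balancer ends up pointing at the Redis pod instead of the front-end container. Assuming the intent is to expose the front pod, the front service would reference it instead, roughly like this:

resource "kubernetes_service" "aks-azurevote-front" {
  metadata {
    name      = "azure-vote-front"
    namespace = var.namespace
  }

  spec {
    # Select the front-end pod rather than the back-end pod.
    selector = {
      app = kubernetes_pod.aks-azurevote-front.metadata.0.labels.app
    }
    session_affinity = "ClientIP"

    port {
      port        = 80
      target_port = 80
    }

    type = "LoadBalancer"
  }

  depends_on = [
    kubernetes_namespace.azurevote,
  ]
}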

Terraform - AKS POD should be able to add and delete external DNS records

I want an AKS pod to add and delete DNS records whenever a Service is created. I have achieved this via the GUI, but I want to use Terraform to do the same.
Created AKS Cluster:
resource "azurerm_kubernetes_cluster" "aks_cluster" {
name = "${azurerm_resource_group.my-res-grp-in-tf.name}-cluster"
location = azurerm_resource_group.my-res-grp-in-tf.location
resource_group_name = azurerm_resource_group.my-res-grp-in-tf.name
dns_prefix = "${azurerm_resource_group.my-res-grp-in-tf.name}-cluster"
kubernetes_version = data.azurerm_kubernetes_service_versions.current.latest_version
node_resource_group = "${azurerm_resource_group.my-res-grp-in-tf.name}-nrg"
default_node_pool {
name = "systempool"
vm_size = "standard_d2s_v3"
orchestrator_version = data.azurerm_kubernetes_service_versions.current.latest_version
availability_zones = [1, 2, 3]
enable_auto_scaling = true
max_count = 1
min_count = 1
os_disk_size_gb = 30
type = "VirtualMachineScaleSets"
node_labels = {
"nodepool-type" = "system"
"environment" = var.env
"nodepoolos" = "linux"
"app" = "system-apps"
}
tags = {
"nodepool-type" = "system"
"environment" = var.env
"nodepoolos" = "linux"
"app" = "system-apps"
}
}
# Identity (one of either identity or service_principal blocks must be specified.)
identity {
type = "SystemAssigned"
}
# Add On Profiles
addon_profile {
azure_policy {
enabled = true
}
kube_dashboard {
enabled = false
}
http_application_routing {
enabled = false
}
oms_agent {
enabled = true
log_analytics_workspace_id = azurerm_log_analytics_workspace.insights.id
}
}
# RBAC and Azure AD Integration Block
role_based_access_control {
enabled = true
azure_active_directory {
managed = true
admin_group_object_ids = [azuread_group.aks_administrators.id]
}
}
# Windows Profile
windows_profile {
admin_username = var.windows_admin_username
admin_password = var.windows_admin_password
}
# Linux Profile
linux_profile {
admin_username = "ubuntu"
ssh_key {
key_data = file(var.ssh_public_key)
}
}
# Network Profile
network_profile {
network_plugin = "azure"
load_balancer_sku = "Standard"
}
tags = {
Environment = var.env
}
# login into cluster
provisioner "local-exec" {
command = "az aks get-credentials --name ${azurerm_kubernetes_cluster.aks_cluster.name} --resource-group ${azurerm_resource_group.my-res-grp-in-tf.name} --admin"
}
}
I have created a resource group named "dns-zone-rg" specifically for this task.
resource "azurerm_resource_group" "dns-zone-rg-tf" {
name = "dns-zone-rg"
location = var.location
}
Created a DNS zone in "dns-zone-rg" resource group
resource "azurerm_dns_zone" "public-domain-dns-zone" {
name = "mydomain.xyz"
resource_group_name = azurerm_resource_group.dns-zone-rg-tf.name
}
Created a managed identity "mi-for-dns-zone-rg" in the "dns-zone-rg" resource group:
resource "azurerm_user_assigned_identity" "manage-identity-tf" {
resource_group_name = azurerm_resource_group.dns-zone-rg-tf.name
location = var.location
name = "mi-for-dns-zone-rg"
}
Assigned "Contributor" role to manage identity "mi-for-dns-zone-rg" and given a scope to manage resources in resource group "dns-zone-rg".
resource "azurerm_role_assignment" "assign-reader-to-manage-identity" {
scope = azurerm_resource_group.dns-zone-rg-tf.id
role_definition_name = "Contributor"
principal_id = azurerm_user_assigned_identity.manage-identity-tf.principal_id
}
Now I want to associate this managed identity "mi-for-dns-zone-rg" with the system node pool created by AKS. I am not able to figure out how to do that, or how to fetch the node pool details created by AKS.
Currently, it's not possible with Terraform alone.
You have to use local-exec provisioners in Terraform with Azure CLI commands to achieve this.
resource "null_resource" "node-pool-name"{
depends_on = [azurerm_kubernetes_cluster.aks_cluster,azurerm_role_assignment.assign-reader-to-manage-identity]
provisioner "local-exec" {
command = "az vmss list -g ${azurerm_kubernetes_cluster.aks_cluster.node_resource_group} --query \"[?contains(name,'aks-systempool')].name\" --out tsv > ${path.module}/system-node-poolname.txt"
}
provisioner "local-exec" {
command = "az vmss identity assign -g ${azurerm_kubernetes_cluster.aks_cluster.node_resource_group} -n `cat ${path.module}/system-node-poolname.txt` --identities ${azurerm_user_assigned_identity.manage-identity-tf.id}"
}
}
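As an aside: if the end goal is simply to let something like external-dns running in the cluster manage records in the zone, it may be enough to grant the cluster's existing kubelet identity rights on the DNS zone rather than attaching a new identity to the VMSS. A hedged sketch, assuming an azurerm provider version that exports kubelet_identity on the cluster resource:

# Assumption: grant the AKS kubelet identity access to the DNS zone so that
# pods using it (e.g. external-dns) can create and delete records.
resource "azurerm_role_assignment" "kubelet_dns_contributor" {
  scope                = azurerm_dns_zone.public-domain-dns-zone.id
  role_definition_name = "DNS Zone Contributor"
  principal_id         = azurerm_kubernetes_cluster.aks_cluster.kubelet_identity[0].object_id
}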
