I am deploying my infrastructure with Terraform, but for AKS I use an ARM template because it has some features that are not in Terraform yet.
So in my Terraform configuration I have the following resource defined to deploy the ARM template:
resource "azurerm_template_deployment" "k8s" {
name = "${var.environment}-aks-deployment"
resource_group_name = "${azurerm_resource_group.kubernetes.name}"
parameters = {
workspaceResourceId = "${azurerm_log_analytics_workspace.k8s-law.id}"
aksClusterName = "fntm-k8s-${var.environment}"
subnetKubernetes = "${azurerm_subnet.kubernetes.id}"
servicePrincipal = "${azuread_service_principal.k8s_sp.application_id}"
clientSecret = "${random_string.sp_password.result}"
clientAppID = "${var.clientAppID}"
serverAppID = "${var.serverAppID}"
tenantID = "${var.tenant_id}"
serverAppSecret = "${var.serverAppSecret}"
}
template_body = "${file("kubernetes/azuredeploy.json")}"
deployment_mode = "Incremental"
}
The deployment of the cluster goes fine, but after that I need to get data from the AKS cluster, which will be used by a different module.
If I use the data source for AKS, it tries to read the cluster before it is deployed, so the part below doesn't work:
data "azurerm_kubernetes_cluster" "kubernetes" {
name = "fntm-k8s-${var.environment}"
resource_group_name = "${azurerm_resource_group.kubernetes.name}"
}
I thought maybe a depends_on, but that is not supported in data sources.
Does anybody have an idea how I can get the node_resource_group attribute from the AKS cluster as an output? Or any other thoughts/solutions?
output "k8s_resource_group" {
value = "${lookup(azurerm_template_deployment.k8s.outputs, "?????")}"
}
In your azuredeploy.json use this for the output:
"outputs": {
"aksClusterName": {
"type": "string",
"value": "[parameters('aksClusterName')]"
}
}
And in your tf file use:
output "aksClusterName" {
value = "${azurerm_template_deployment.k8s.outputs["aksClusterName"]}"
}
data "azurerm_kubernetes_cluster" "kubernetes" {
name = ""
resource_group_name = "${azurerm_resource_group.kubernetes.name}"
}
output "k8s_resource_group" {
value = "${data.azurerm_kubernetes_cluster.kubernetes.node_resource_group}"
}
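As an aside, on Terraform 0.13 and newer, depends_on is also allowed inside data blocks, so an alternative sketch (same resource names as above, untested here) would be to keep the original data source and make it wait for the ARM deployment explicitly:
data "azurerm_kubernetes_cluster" "kubernetes" {
  name                = "fntm-k8s-${var.environment}"
  resource_group_name = azurerm_resource_group.kubernetes.name

  # read the cluster only after the ARM template deployment has finished
  depends_on = [azurerm_template_deployment.k8s]
}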
I have been trying to figure out a way to prepare a Terraform template for my App Service / Azure Function where I can connect it to Application Insights while creating them through Terraform. Well, it worked, BUT Application Insights shows:
Migrate this resource to Workspace-based Application Insights to gain support for all of the capabilities of Log Analytics, including Customer-Managed Keys and Commitment Tiers. Click here to learn more and migrate in a few clicks.
How do I achieve this from Terraform? The Terraform documentation page makes no mention of such a setup. Appreciate your help on this.
Here is the Terraform code for the Azure Function:
resource "azurerm_linux_function_app" "t_funcapp" {
name = "t-function-app"
location = local.resource_location
resource_group_name = local.resource_group_name
service_plan_id = azurerm_service_plan.t_app_service_plan.id
storage_account_name = azurerm_storage_account.t_funcstorage.name
storage_account_access_key = azurerm_storage_account.t_funcstorage.primary_access_key
site_config {
application_stack {
java_version = "11"
}
remote_debugging_enabled = false
ftps_state = "AllAllowed"
}
app_settings = {
APPINSIGHTS_INSTRUMENTATIONKEY = "${azurerm_application_insights.t_appinsights.instrumentation_key}"
}
depends_on = [
azurerm_resource_group.t_rg,
azurerm_service_plan.t_app_service_plan,
azurerm_storage_account.t_funcstorage,
azurerm_application_insights.t_appinsights
]
}
Here is the Terraform code for Application Insights:
resource "azurerm_application_insights" "t_appinsights" {
name = "t-appinsights"
location = local.resource_location
resource_group_name = local.resource_group_name
application_type = "web"
depends_on = [
azurerm_log_analytics_workspace.t_workspace
]
}
output "instrumentation_key" {
value = azurerm_application_insights.t_appinsights.instrumentation_key
}
output "app_id" {
value = azurerm_application_insights.t_appinsights.app_id
}
You must create a Log Analytics Workspace and add it to your Application Insights.
For example:
resource "azurerm_log_analytics_workspace" "example" {
name = "workspace-test"
location = local.resource_location
resource_group_name = local.resource_group_name
sku = "PerGB2018"
retention_in_days = 30
}
resource "azurerm_application_insights" "t_appinsights" {
name = "t-appinsights"
location = local.resource_location
resource_group_name = local.resource_group_name
workspace_id = azurerm_log_analytics_workspace.example.id
application_type = "web"
}
output "instrumentation_key" {
value = azurerm_application_insights.t_appinsights.instrumentation_key
}
output "app_id" {
value = azurerm_application_insights.t_appinsights.app_id
}
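As a side note, for workspace-based Application Insights the connection string is generally preferred over the bare instrumentation key when wiring up the Function App; a minimal sketch, assuming the function app from the question:
# Sketch only: wiring the workspace-based Application Insights into the function app
# via the connection string rather than the bare instrumentation key.
app_settings = {
  APPLICATIONINSIGHTS_CONNECTION_STRING = azurerm_application_insights.t_appinsights.connection_string
}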
Hope this helps!
An error occurs when I try to create an AKS cluster with Terraform. The cluster is actually created, but the error still comes at the end, which is ugly.
│ Error: retrieving Access Profile for Cluster: (Managed Cluster Name
"aks-1" / Resource Group "pengine-aks-rg"):
containerservice.ManagedClustersClient#GetAccessProfile: Failure responding to request:
StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400
Code="BadRequest" Message="Getting static credential is not allowed because this cluster
is set to disable local accounts."
This is my terraform code:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.96.0"
    }
  }
}

resource "azurerm_resource_group" "aks-rg" {
  name     = "aks-rg"
  location = "West Europe"
}

resource "azurerm_kubernetes_cluster" "aks-1" {
  name                   = "aks-1"
  location               = azurerm_resource_group.aks-rg.location
  resource_group_name    = azurerm_resource_group.aks-rg.name
  dns_prefix             = "aks1"
  local_account_disabled = "true"

  default_node_pool {
    name       = "nodepool1"
    node_count = 3
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  tags = {
    Environment = "Test"
  }
}
Is this a Terraform bug? Can I avoid the error?
If you disable local accounts, you need to activate AKS-managed Azure Active Directory integration, as you have no local accounts left to authenticate against AKS.
This example enables RBAC, AKS-managed AAD integration, and Azure RBAC:
resource "azurerm_kubernetes_cluster" "aks-1" {
...
role_based_access_control {
enabled = true
azure_active_directory {
managed = true
tenant_id = data.azurerm_client_config.current.tenant_id
admin_group_object_ids = ["OBJECT_IDS_OF_ADMIN_GROUPS"]
azure_rbac_enabled = true
}
}
}
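For reference, on the 3.x azurerm provider the same settings are expressed with a differently named block; a rough, untested sketch based on the 3.x schema:
resource "azurerm_kubernetes_cluster" "aks-1" {
  # ...

  local_account_disabled            = true
  role_based_access_control_enabled = true

  azure_active_directory_role_based_access_control {
    managed                = true
    tenant_id              = data.azurerm_client_config.current.tenant_id
    admin_group_object_ids = ["OBJECT_IDS_OF_ADMIN_GROUPS"]
    azure_rbac_enabled     = true
  }
}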
If you don't want AAD integration, you need to set local_account_disabled = "false".
As the very first step of my release process I run the following Terraform code:
resource "azurerm_automation_account" "automation_account" {
for_each = data.terraform_remote_state.pod_bootstrap.outputs.ops_rg
name = "${local.automation_account_prefix}-${each.key}"
location = each.key
resource_group_name = each.value.name
sku_name = "Basic"
tags = {
environment = "development"
}
}
The automation accounts are created as expected and I can see them in the Azure portal.
I also have Terraform code that creates a couple of Windows VMs; each VM creation is accompanied by the following:
resource "azurerm_virtual_machine_extension" "dsc" {
name = "DevOpsDSC"
virtual_machine_id = var.vm_id
publisher = "Microsoft.Powershell"
type = "DSC"
type_handler_version = "2.83"
settings = <<SETTINGS_JSON
{
"configurationArguments": {
"RegistrationUrl": "${var.dsc_server_endpoint}",
"NodeConfigurationName": "${var.dsc_config}",
"ConfigurationMode": "${var.dsc_mode}",
"ConfigurationModeFrequencyMins": 15,
"RefreshFrequencyMins": 30,
"RebootNodeIfNeeded": false,
"ActionAfterReboot": "continueConfiguration",
"AllowModuleOverwrite": true
}
}
SETTINGS_JSON
protected_settings = <<PROTECTED_SETTINGS_JSON
{
"configurationArguments": {
"RegistrationKey": {
"UserName": "PLACEHOLDER_DONOTUSE",
"Password": "${var.dsc_primary_access_key}"
}
}
}
PROTECTED_SETTINGS_JSON
}
The result is that the VM extension is created for each VM and its status says that provisioning succeeded.
For the next step I run the following Terraform code:
resource "azurerm_automation_dsc_configuration" "iswebserver" {
for_each = data.terraform_remote_state.pod_bootstrap.outputs.ops_rg
name = "iswebserver"
resource_group_name = each.value.name
automation_account_name = data.terraform_remote_state.ops.outputs.automation_account[each.key].name
location = each.key
content_embedded = "configuration iswebserver {}"
}
resource "azurerm_automation_dsc_nodeconfiguration" "iswebserver" {
for_each = data.terraform_remote_state.pod_bootstrap.outputs.ops_rg
name = "iswebserver.localhost"
resource_group_name = each.value.name
automation_account_name = data.terraform_remote_state.ops.outputs.automation_account[each.key].name
depends_on = [azurerm_automation_dsc_configuration.iswebserver]
content_embedded = file("${path.cwd}/iswebserver.mof")
}
The MOF file content is the following:
/*
#TargetNode='IsWebServer'
#GeneratedBy=P120bd0
#GenerationDate=02/25/2021 17:33:16
#GenerationHost=D-MJ05UA54
*/
instance of MSFT_RoleResource as $MSFT_RoleResource1ref
{
ResourceID = "[WindowsFeature]IIS";
IncludeAllSubFeature = True;
Ensure = "Present";
SourceInfo = "D:\\DSC\\testconfig.ps1::5::9::WindowsFeature";
Name = "Web-Server";
ModuleName = "PsDesiredStateConfiguration";
ModuleVersion = "1.0";
ConfigurationName = "TestConfig";
};
instance of OMI_ConfigurationDocument
{
Version="2.0.0";
MinimumCompatibleVersion = "1.0.0";
CompatibleVersionAdditionalProperties= {"Omi_BaseResource:ConfigurationName"};
Author="P120bd0";
GenerationDate="02/25/2021 17:33:16";
GenerationHost="D-MJ05UA54";
Name="TestConfig";
};
After running the code I got the following result: the configuration is created as expected, and clicking on the configuration entry in the UI grid shows that the node configuration is created as well. My expectation was that for each VM I would see the node configured to run the configuration provided in the MOF file, but the Nodes UI shows no nodes.
So I tried to register a node manually to connect all the pieces together, and that fails.
So I am totally confused. On the one hand there is azurerm_virtual_machine_extension that allows you to create the extension and bind it to the automation account. In addition, there are azurerm_automation_dsc_configuration and azurerm_automation_dsc_nodeconfiguration that allow you to create the configuration and node configuration. But the bottom line is that you cannot connect all those dots to get a registered node.
Just to confirm that the configuration is valid, I created an additional VM without using azurerm_virtual_machine_extension and I was able to successfully add this VM to the created node configuration.
The problem was in the azurerm_virtual_machine_extension DSC configuration parameter (the NodeConfigurationName value, fed by var.dsc_config above): it needs to be the same as the name property of the azurerm_automation_dsc_nodeconfiguration resource.
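In terms of the code above, that means the NodeConfigurationName passed to the extension has to be exactly "iswebserver.localhost"; a minimal sketch (other extension arguments omitted, assuming the resources shown earlier):
resource "azurerm_virtual_machine_extension" "dsc" {
  # name, virtual_machine_id, publisher, type, type_handler_version as above

  # "iswebserver.localhost" must match azurerm_automation_dsc_nodeconfiguration.iswebserver.name
  settings = <<SETTINGS_JSON
{
  "configurationArguments": {
    "RegistrationUrl": "${var.dsc_server_endpoint}",
    "NodeConfigurationName": "iswebserver.localhost"
  }
}
SETTINGS_JSON
}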
I have the following Terraform module to set up app services under the same plan:
provider "azurerm" {
}
variable "env" {
type = string
description = "The SDLC environment (qa, dev, prod, etc...)"
}
variable "appsvc_names" {
type = list(string)
description = "The names of the app services to create under the same app service plan"
}
locals {
location = "eastus2"
resource_group_name = "app505-dfpg-${var.env}-web-${local.location}"
acr_name = "app505dfpgnedeploycr88836"
}
resource "azurerm_app_service_plan" "asp" {
name = "${local.resource_group_name}-asp"
location = local.location
resource_group_name = local.resource_group_name
kind = "Linux"
reserved = true
sku {
tier = "Basic"
size = "B1"
}
}
resource "azurerm_app_service" "appsvc" {
for_each = toset(var.appsvc_names)
name = "${local.resource_group_name}-${each.value}-appsvc"
location = local.location
resource_group_name = local.resource_group_name
app_service_plan_id = azurerm_app_service_plan.asp.id
site_config {
linux_fx_version = "DOCKER|${local.acr_name}/${each.value}:latest"
}
app_settings = {
DOCKER_REGISTRY_SERVER_URL = "https://${local.acr_name}.azurecr.io"
}
}
output "hostnames" {
value = {
for appsvc in azurerm_app_service.appsvc: appsvc.name => appsvc.default_site_hostname
}
}
I am invoking it through the following configuration:
terraform {
  backend "azurerm" {
  }
}

locals {
  appsvc_names = ["gateway"]
}

module "web" {
  source       = "../../modules/web"
  env          = "qa"
  appsvc_names = local.appsvc_names
}

output "hostnames" {
  description = "The hostnames of the created app services"
  value       = module.web.hostnames
}
The container registry has the images I need:
C:\> az acr login --name app505dfpgnedeploycr88836
Login Succeeded
C:\> az acr repository list --name app505dfpgnedeploycr88836
[
"gateway"
]
C:\> az acr repository show-tags --name app505dfpgnedeploycr88836 --repository gateway
[
"latest"
]
C:\>
When I apply the terraform configuration everything is created fine, but inspecting the created app service resource in Azure Portal reveals that its Container Settings show no docker image:
Now, I can manually switch to another ACR and then back to the one I want, only to get this:
Cannot perform credential operations for /subscriptions/0f1c414a-a389-47df-aab8-a351876ecd47/resourceGroups/app505-dfpg-ne-deploy-eastus2/providers/Microsoft.ContainerRegistry/registries/app505dfpgnedeploycr88836 as admin user is disabled. Kindly enable admin user as per docs: https://learn.microsoft.com/en-us/azure/container-registry/container-registry-authentication#admin-account
This is confusing me. According to https://learn.microsoft.com/en-us/azure/container-registry/container-registry-authentication#admin-account the admin user should not be used, and so my ACR does not have one. On the other hand, I understand that I need to somehow configure the app service to authenticate with the ACR.
What is the right way to do it then?
So this is now possible as of v2.71 of the AzureRM provider. A couple of things have to happen...
Assign a Managed Identity to the application (can also use User Assigned but a bit more work)
Set the site_config.acr_use_managed_identity_credentials property to true
Grant the application's identity the AcrPull role on the container registry.
Below is a modified version of the code above; not tested, but it should be OK:
provider "azurerm" {
}
variable "env" {
type = string
description = "The SDLC environment (qa, dev, prod, etc...)"
}
variable "appsvc_names" {
type = list(string)
description = "The names of the app services to create under the same app service plan"
}
locals {
location = "eastus2"
resource_group_name = "app505-dfpg-${var.env}-web-${local.location}"
acr_name = "app505dfpgnedeploycr88836"
}
resource "azurerm_app_service_plan" "asp" {
name = "${local.resource_group_name}-asp"
location = local.location
resource_group_name = local.resource_group_name
kind = "Linux"
reserved = true
sku {
tier = "Basic"
size = "B1"
}
}
resource "azurerm_app_service" "appsvc" {
for_each = toset(var.appsvc_names)
name = "${local.resource_group_name}-${each.value}-appsvc"
location = local.location
resource_group_name = local.resource_group_name
app_service_plan_id = azurerm_app_service_plan.asp.id
site_config {
linux_fx_version = "DOCKER|${local.acr_name}/${each.value}:latest"
acr_use_managed_identity_credentials = true
}
app_settings = {
DOCKER_REGISTRY_SERVER_URL = "https://${local.acr_name}.azurecr.io"
}
identity {
type = "SystemAssigned"
}
}
data "azurerm_container_registry" "this" {
name = local.acr_name
resource_group_name = local.resource_group_name
}
resource "azurerm_role_assignment" "acr" {
for_each = azurerm_app_service.appsvc
role_definition_name = "AcrPull"
scope = azurerm_container_registry.this.id
principal_id = each.value.identity[0].principal_id
}
output "hostnames" {
value = {
for appsvc in azurerm_app_service.appsvc: appsvc.name => appsvc.default_site_hostname
}
}
EDITED 21 Dec 2021
The MS documentation issue regarding the value being reset by Azure has now been resolved and you can also control Managed Identity via the portal.
Alternatively, you can use service principal auth with App Service, but you'd have to create a service principal, grant it the AcrPull role over the registry, and set the service principal login/password in the App Service app settings:
DOCKER_REGISTRY_SERVER_USERNAME
DOCKER_REGISTRY_SERVER_PASSWORD
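A rough sketch of that alternative, assuming the service principal credentials are provided via variables defined elsewhere (the variable names here are illustrative, not from the original post):
# grant the (hypothetical) service principal pull rights on the registry
resource "azurerm_role_assignment" "acr_sp_pull" {
  role_definition_name = "AcrPull"
  scope                = data.azurerm_container_registry.this.id
  principal_id         = var.acr_sp_object_id # hypothetical variable
}

# and in the azurerm_app_service resource:
#   app_settings = {
#     DOCKER_REGISTRY_SERVER_URL      = "https://${local.acr_name}.azurecr.io"
#     DOCKER_REGISTRY_SERVER_USERNAME = var.acr_sp_client_id     # hypothetical variable
#     DOCKER_REGISTRY_SERVER_PASSWORD = var.acr_sp_client_secret # hypothetical variable
#   }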
I am trying to deploy 2 different Azure Apps in the same resource group.
These Azure Apps are defined as docker images stored in an Azure Container Registry (where I previously pushed those docker images).
I am not able to deploy both of them at the same time. I think there is something wrong in the way I am defining them, as Terraform expects to find only one azurerm_app_service, but I am not sure how to work around this.
When I run this command: terraform plan -var-file test.tfvars, then I see this message in the output:
Error: azurerm_app_service.ci_rg: resource repeated multiple times
How do I define "2 different resources of the same type"?
This is the content of the main.tf file (where I inject the variables defined in variables.tf with the values defined in test.tfvars):
// the resource group definition
resource "azurerm_resource_group" "ci_rg" {
  name     = "${var.resource_group_name}"
  location = "${var.azure_location}"
}

// the app service plan definition
resource "azurerm_app_service_plan" "ci_rg" {
  name                = "${var.app_service_plan}"
  location            = "${azurerm_resource_group.ci_rg.location}"
  resource_group_name = "${azurerm_resource_group.ci_rg.name}"
  kind                = "Linux"

  sku {
    tier     = "Standard"
    size     = "S1"
    capacity = 2 // for both the docker containers
  }

  properties {
    reserved = true
  }
}

// the first azure app
resource "azurerm_app_service" "ci_rg" {
  name                = "${var.first_app_name}"
  location            = "${azurerm_resource_group.ci_rg.location}"
  resource_group_name = "${azurerm_resource_group.ci_rg.name}"
  app_service_plan_id = "${azurerm_app_service_plan.ci_rg.id}"

  site_config {
    linux_fx_version = "DOCKER|${var.first_app_docker_image_name}"
  }

  app_settings {
    "CONF_ENV"                        = "${var.conf_env}"
    "DOCKER_REGISTRY_SERVER_URL"      = "${var.docker_registry_url}"
    "DOCKER_REGISTRY_SERVER_USERNAME" = "${var.docker_registry_username}"
    "DOCKER_REGISTRY_SERVER_PASSWORD" = "${var.docker_registry_password}"
  }
}

// the second azure app
resource "azurerm_app_service" "ci_rg" {
  name                = "${var.second_app_name}"
  location            = "${azurerm_resource_group.ci_rg.location}"
  resource_group_name = "${azurerm_resource_group.ci_rg.name}"
  app_service_plan_id = "${azurerm_app_service_plan.ci_rg.id}"

  site_config {
    linux_fx_version = "DOCKER|${var.second_app_docker_image_name}"
  }

  app_settings {
    "CONF_ENV"                        = "${var.conf_env}"
    "DOCKER_REGISTRY_SERVER_URL"      = "${var.docker_registry_url}"
    "DOCKER_REGISTRY_SERVER_USERNAME" = "${var.docker_registry_username}"
    "DOCKER_REGISTRY_SERVER_PASSWORD" = "${var.docker_registry_password}"
  }
}
Edit:
I am not entirely sure how this works in Terraform, but I think the label azurerm_app_service is fixed by Terraform's syntax. See the docs here: https://www.terraform.io/docs/providers/azurerm/r/app_service.html
where the page title is azurerm_app_service. So I don't think I can change that part.
My guess would be that you need to rename the second one to something else, like this: resource "azurerm_app_service" "ci_rg_second". Terraform obviously doesn't like the fact that two resources of the same type have the same label.
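A minimal sketch of that rename, reusing the question's own code with only the Terraform label changed (the azurerm_app_service type stays the same; only the second label after it must be unique within the configuration):
// the second azure app, with its own Terraform label
resource "azurerm_app_service" "ci_rg_second" {
  name                = "${var.second_app_name}"
  location            = "${azurerm_resource_group.ci_rg.location}"
  resource_group_name = "${azurerm_resource_group.ci_rg.name}"
  app_service_plan_id = "${azurerm_app_service_plan.ci_rg.id}"

  site_config {
    linux_fx_version = "DOCKER|${var.second_app_docker_image_name}"
  }

  app_settings {
    "CONF_ENV"                        = "${var.conf_env}"
    "DOCKER_REGISTRY_SERVER_URL"      = "${var.docker_registry_url}"
    "DOCKER_REGISTRY_SERVER_USERNAME" = "${var.docker_registry_username}"
    "DOCKER_REGISTRY_SERVER_PASSWORD" = "${var.docker_registry_password}"
  }
}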