Authenticating on AKS for deploying a Helm release with Terraform

I am trying to deploy a Helm chart through Terraform code on AKS.
The TF code will create a resource in Datadog, grab an output value from it, and pass that value to my Helm release to be deployed on my cluster. It only has to create two resources, one of which is the Helm chart.
The problem I am having is with authentication against my Kubernetes cluster. I am using a data source to fetch the credentials from the cluster and then pass them to my kubernetes and helm providers.
My Terraform state for the AKS cluster is stored in a Blob in an Azure Storage account.
I have tried updating the Helm chart versions and different ways of accessing the data, such as wrapping my variables in ${}.
I also tried changing from username = data.azurerm_kubernetes_cluster.credentials.kube_config.0.username to the admin configuration, username = data.azurerm_kubernetes_cluster.credentials.kube_admin_config.0.username.
Terraform version: 1.1.7
A data source is set up in main.tf to fetch the credentials for the AKS cluster:
data "azurerm_kubernetes_cluster" "credentials" {
  name                = var.aks_cluster_name
  resource_group_name = var.aks_cluster_resource_group_name
}
This is versions.tf, which sets up the connections to AKS:
terraform {
  required_providers {
    datadog = {
      source = "DataDog/datadog"
    }
  }

  backend "azurerm" {
  }
}

provider "azurerm" {
  features {}
}

provider "helm" {
  debug = true

  kubernetes {
    username               = data.azurerm_kubernetes_cluster.credentials.kube_config.0.username
    password               = data.azurerm_kubernetes_cluster.credentials.kube_config.0.password
    host                   = data.azurerm_kubernetes_cluster.credentials.kube_config.0.host
    client_certificate     = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config.0.client_certificate)
    client_key             = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config.0.client_key)
    cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config.0.cluster_ca_certificate)
  }
}

provider "kubernetes" {
  username               = data.azurerm_kubernetes_cluster.credentials.kube_config.0.username
  password               = data.azurerm_kubernetes_cluster.credentials.kube_config.0.password
  host                   = data.azurerm_kubernetes_cluster.credentials.kube_config.0.host
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config.0.client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config.0.cluster_ca_certificate)
}
This is the error I see when running terraform apply; it reports that it can't find the elements in the collection for any of the attributes referenced in my providers:
╷
│ Error: Invalid index
│
│ on versions.tf line 26, in provider "helm":
│ 26: host = data.azurerm_kubernetes_cluster.credentials.kube_admin_config.0.host
│ ├────────────────
│ │ data.azurerm_kubernetes_cluster.credentials.kube_admin_config has a sensitive value
│
│ The given key does not identify an element in this collection value.
╵
[ ... ]
╷
│ Error: Invalid index
│
│ on versions.tf line 27, in provider "helm":
│ 27: username = data.azurerm_kubernetes_cluster.credentials.kube_admin_config.0.username
│ ├────────────────
│ │ data.azurerm_kubernetes_cluster.credentials.kube_admin_config has a sensitive value
│
│ The given key does not identify an element in this collection value.
I am unsure how to change my Terraform code so that this authentication works, given that the methods mentioned above have yielded no results. If needed, I can provide the TF code for the deployment of the resources.

I'm using kubelogin to identify myself:
data "azurerm_client_config" "current" {
}

provider "helm" {
  kubernetes {
    host                   = azurerm_kubernetes_cluster.aks.kube_config.0.host
    cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "kubelogin"
      args = [
        "get-token",
        "--environment", "AzurePublicCloud",
        "--server-id", "6dae42f8-4368-4678-94ff-3960e28e3630", # The AAD server app ID of AKS Managed AAD is always 6dae42f8-4368-4678-94ff-3960e28e3630 in any environment.
        "--client-id", "${yamldecode(azurerm_kubernetes_cluster.aks.kube_config_raw).users[0].user.auth-provider.config.client-id}",
        "--tenant-id", data.azurerm_client_config.current.tenant_id,
        "--login", "devicecode"
      ]
    }
  }
}
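If a kubernetes provider is configured in the same root module, the same exec-based authentication applies there too. A sketch mirroring the helm configuration above (same assumed resource names; kubelogin must be on the PATH wherever Terraform runs):

```hcl
provider "kubernetes" {
  host                   = azurerm_kubernetes_cluster.aks.kube_config.0.host
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)

  # Same kubelogin exec plugin as in the helm provider's kubernetes block.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "kubelogin"
    args = [
      "get-token",
      "--environment", "AzurePublicCloud",
      "--server-id", "6dae42f8-4368-4678-94ff-3960e28e3630", # AAD server app ID of AKS Managed AAD
      "--client-id", "${yamldecode(azurerm_kubernetes_cluster.aks.kube_config_raw).users[0].user.auth-provider.config.client-id}",
      "--tenant-id", data.azurerm_client_config.current.tenant_id,
      "--login", "devicecode"
    ]
  }
}
```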


How to configure multiple azurerm providers authenticated via system-assigned managed identity using environment variables

I want to configure two azurerm providers using environment variables
I tried this:
variable "SUBSCRIPTION_ID" {
  description = "Subscription ID where resources will be deployed."
}

variable "TENANT_ID" {
  description = "Service Principal Tenant ID."
}

provider "azurerm" {
  subscription_id = var.SUBSCRIPTION_ID
  tenant_id       = var.TENANT_ID
  use_msi         = true
  features {}
}

#################################################################
# Tools provider
#################################################################

variable "TOOLS_SUBSCRIPTION_ID" {
  description = "Subscription ID where Tools are located."
}

variable "TOOLS_TENANT_ID" {
  description = "Service Principal Tenant ID."
}

provider "azurerm" {
  alias           = "tools"
  subscription_id = var.TOOLS_SUBSCRIPTION_ID
  tenant_id       = var.TOOLS_TENANT_ID
  use_msi         = true
  features {}
}
With these defined:
TF_VAR_SUBSCRIPTION_ID
TF_VAR_TENANT_ID
TF_VAR_TOOLS_SUBSCRIPTION_ID
TF_VAR_TOOLS_TENANT_ID
I checked and all the values are present. However, I got this error:
│ Error: building AzureRM Client: 1 error occurred:
│ * A Client ID must be configured when authenticating as a Service Principal using a Client Secret.
│
│
│
│ with provider["registry.terraform.io/hashicorp/azurerm"],
│ on providers.tf line 17, in provider "azurerm":
│ 17: provider "azurerm" {
│
╵
╷
│ Error: building AzureRM Client: 1 error occurred:
│ * A Client ID must be configured when authenticating as a Service Principal using a Client Secret.
│
│
│
│ with provider["registry.terraform.io/hashicorp/azurerm"].tools,
│ on providers.tf line 48, in provider "azurerm":
│ 48: provider "azurerm" {
│
The code was run on an Azure VM scale set with an assigned managed identity.
I made another test and got the same error for a single provider. It looks like something is wrong with passing variables via TF_VAR_name environment variables.
I use these versions:
Terraform v1.0.11
azurerm v2.98.0
The error indicates that the client_id argument for the provider has not been specified. When authenticating the AzureRM provider with a service principal, you also need to specify a client_id, and then either a secret or a certificate (unsure which you are targeting here).
provider "azurerm" {
  subscription_id = var.SUBSCRIPTION_ID
  tenant_id       = var.TENANT_ID
  client_id       = var.CLIENT_ID
  features {}
}

provider "azurerm" {
  alias           = "tools"
  subscription_id = var.TOOLS_SUBSCRIPTION_ID
  tenant_id       = var.TOOLS_TENANT_ID
  client_id       = var.TOOLS_CLIENT_ID
  features {}
}
This will resolve your issue, but you will also need to specify the client cert or secret as mentioned in the linked documentation above. Also, the use_msi argument is being ignored by the provider configuration, so the provider is understanding the authentication method as service principal instead of managed service identity.
Note also that for the default provider configuration, you can use native authentication environment variables like ARM_SUBSCRIPTION_ID instead of Terraform variables i.e. var.SUBSCRIPTION_ID.
I found that one of my scripts set ARM_ACCESS_KEY and ARM_CLIENT_SECRET, and because of this Terraform treated the authentication as a Service Principal login. Once I removed that part, everything worked fine.
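Putting both pieces of advice together, a minimal sketch of managed-identity authentication driven purely by native environment variables (assuming ARM_USE_MSI=true, ARM_SUBSCRIPTION_ID, and ARM_TENANT_ID are exported, and no stray ARM_CLIENT_SECRET or ARM_ACCESS_KEY is present):

```hcl
# Default provider: credentials come entirely from ARM_* environment
# variables, so nothing is needed here besides the features block.
provider "azurerm" {
  features {}
}

# Aliased provider: only the values that differ from the environment
# defaults (here, the Tools subscription) need to be set explicitly.
provider "azurerm" {
  alias           = "tools"
  subscription_id = var.TOOLS_SUBSCRIPTION_ID
  use_msi         = true
  features {}
}
```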

Terraform plan throws Provider Plugin Error

I am doing a small POC with Terraform and I am unable to run terraform plan.
My code:
terraform {
  backend "azurerm" {
    storage_account_name = "appngqastorage"
    container_name       = "terraform"
    key                  = "qa.terraform.tfstate"
    access_key           = "my access key here"
  }

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 2.77"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "qa_resource_group" {
  location = "East US"
  name     = "namehere"
}
My execution:
terraform init = success
terraform validate = configuration is valid
terraform plan = throws exception
Error:
│ Error: Plugin error
│
│ with provider["registry.terraform.io/hashicorp/azurerm"],
│ on main.tf line 15, in provider "azurerm":
│ 15: provider"azurerm"{
│
│ The plugin returned an unexpected error from plugin.(*GRPCProvider).ConfigureProvider: rpc error: code = Internal desc = grpc: error while marshaling: string field contains invalid UTF-8
After digging a little deeper, I was able to figure out what the issue was.
The project I am working on is used in multiple regions, so when testing I swap my region to check the data that displays in a specific region. This time, when running terraform apply, my local (Windows) configuration was pointing at another region, not the US.
This thread helped me understand. To fix it:
Upgrade the Azure CLI to the latest version using az upgrade
Log into your Azure account using az login
Re-run your Terraform commands!

Terraform aks module - get cluster name and resource group name via remote state

I am trying to follow this official guide to manage AKS resources. There, terraform_remote_state is used to get the resource_group_name and kubernetes_cluster_name.
data "terraform_remote_state" "aks" {
  backend = "local"

  config = {
    path = "/path/to/base/project/terraform.tfstate"
  }
}

# Retrieve AKS cluster information
provider "azurerm" {
  features {}
}

data "azurerm_kubernetes_cluster" "cluster" {
  name                = data.terraform_remote_state.aks.outputs.kubernetes_cluster_name
  resource_group_name = data.terraform_remote_state.aks.outputs.resource_group_name
}
I created the initial AKS cluster with the aks module. Looking at its outputs in the documentation, it doesn't export the resource group name or cluster name.
Now I wonder how I can get that information. I have tried the below in the base project.
module "aks" {
  ...
}

output "resource_group_name" {
  value = module.aks.resource_group_name
}

output "kubernetes_cluster_name" {
  value = module.aks.cluster_name
}
But I get errors when running terraform plan:
Error: Unsupported attribute
│
│ on main.tf line 59, in output "resource_group_name":
│ 59: value = module.aks.resource_group_name
│ ├────────────────
│ │ module.aks is a object, known only after apply
│
│ This object does not have an attribute named "resource_group_name".
╵
╷
│ Error: Unsupported attribute
│
│ on main.tf line 63, in output "kubernetes_cluster_name":
│ 63: value = module.aks.cluster_name
│ ├────────────────
│ │ module.aks is a object, known only after apply
│
│ This object does not have an attribute named "cluster_name".
Those are listed under the inputs for that module, though. Now I don't know how to get those values from the terraform_remote_state.
As the module itself doesn't output the name and resource group, we have to declare those outputs there first and then reference them while deploying, or from the remote state as well.
So we have to add two outputs to the aks module's output.tf after doing terraform init.
output "kubernetes_cluster_name" {
  value = azurerm_kubernetes_cluster.main.name
}

output "resource_group_name" {
  value = azurerm_kubernetes_cluster.main.resource_group_name
}
Then reference the outputs in main.tf after defining the modules (i.e. network and aks); you can see your Kubernetes cluster name in the plan as well as after applying it.
output "kuberneteclustername" {
  value = module.aks.kubernetes_cluster_name
}

output "resourcegroupname" {
  value = module.aks.resource_group_name
}
Now let's test it from the remote state:
data "terraform_remote_state" "aks" {
  backend = "local"

  config = {
    path = "path/to/terraform/aksmodule/terraform.tfstate"
  }
}

# Retrieve AKS cluster information
provider "azurerm" {
  features {}
}

data "azurerm_kubernetes_cluster" "cluster" {
  name                = data.terraform_remote_state.aks.outputs.kuberneteclustername
  resource_group_name = data.terraform_remote_state.aks.outputs.resourcegroupname
}

output "aks" {
  value = data.azurerm_kubernetes_cluster.cluster.name
}

output "rg" {
  value = data.azurerm_kubernetes_cluster.cluster.resource_group_name
}

using terraform with helm on minikube gives error

I have the following .tf file
provider "kubernetes" {
  config_context_cluster = "minikube"
}

resource "kubernetes_namespace" "user-namespace" {
  metadata {
    name = "user-namespace"
  }
}

provider "helm" {
  kubernetes {
    config_context_cluster = "minikube"
  }
}

resource "helm_release" "local" {
  name  = "user-server-chart"
  chart = "./user-server"
}
When I run terraform apply I get the following error
kubernetes_namespace.brw-user-namespace: Creating...
helm_release.local: Creating...
Error code explanation: 501 = Server does not support this operation.\n") has prevented the request from succeeding (post namespaces)
│
│ with kubernetes_namespace.user-namespace,
│ on main.tf line 5, in resource "kubernetes_namespace" "user-namespace":
│ 5: resource "kubernetes_namespace" "user-namespace" {
│
╵
Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
│
│ with helm_release.local,
│ on main.tf line 17, in resource "helm_release" "local":
│ 17: resource "helm_release" "local" {
Step 1: Your kubeconfig does not have the correct context selected. Switch to the right context and point your tooling at the right kubeconfig file:
kubectl config use-context minikube
or export KUBECONFIG=~/.kube/<kubeconfig_env>.yml (add that line to your ~/.bashrc file to make it persistent)
Step 2: A Helm release resource can be imported using its namespace and name,
e.g. terraform import helm_release.example default/example-name
Since the repository attribute is not persisted as metadata by Helm, it will not be set to any value by default. All other provider-specific attributes will be set to their default values, and they can be overridden after running apply using the resource definition configuration.
You may refer to the documents [1] and [2] for additional information.
[1] https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release
[2] https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs
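Alternatively, both providers can be pointed directly at a kubeconfig file and context instead of relying on the ambient configuration. A sketch, assuming minikube wrote its context into the default ~/.kube/config:

```hcl
# Point both providers at the minikube context explicitly, so neither
# depends on the currently selected context or KUBECONFIG variable.
provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "minikube"
}

provider "helm" {
  kubernetes {
    config_path    = "~/.kube/config"
    config_context = "minikube"
  }
}
```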

Terraform Error: Failed to query available provider packages, pagerduty provider

I'm on TF version v1.0.0 (latest) and am trying to make use of the PagerDuty TF provider, and the error says it could not retrieve the list of available versions. Below are the code snippet and complete error log.
Code:
terraform {
  required_providers {
    pagerduty = {
      source  = "PagerDuty/pagerduty"
      version = "~> 1.9.8"
    }
  }
}

provider "pagerduty" {
  token = var.token
}

resource "pagerduty_service" "example" {
  name                    = "My Web App"
  auto_resolve_timeout    = 14400
  acknowledgement_timeout = 600
  escalation_policy       = var.policy
}

resource "pagerduty_service_integration" "apiv2" {
  name    = "API V2"
  type    = "events_api_v2_inbound_integration"
  service = pagerduty_service.example.id
}
Error:
- Finding latest version of hashicorp/pagerduty...
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider hashicorp/pagerduty: provider registry
│ registry.terraform.io does not have a provider named registry.terraform.io/hashicorp/pagerduty
│
│ Did you intend to use pagerduty/pagerduty? If so, you must specify that source address in each module which
│ requires that provider. To see which modules are currently depending on hashicorp/pagerduty, run the following
│ command:
│ terraform providers
Answering my own question: separating the first terraform required_providers block into its own versions.tf file solved the issue.
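As the error message itself hints, the PagerDuty source address has to be declared in every module that uses the provider; otherwise Terraform assumes the non-existent hashicorp/pagerduty address. A minimal versions.tf sketch for any such module (the file name is just a convention):

```hcl
terraform {
  required_providers {
    # Without an explicit source, Terraform would look for
    # registry.terraform.io/hashicorp/pagerduty, which does not exist.
    pagerduty = {
      source  = "PagerDuty/pagerduty"
      version = "~> 1.9.8"
    }
  }
}
```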
