Terraform Azure AKS Cluster not exporting kubeconfig file

I am trying to provision a kubernetes cluster on Azure (AKS) with Terraform. The provisioning works quite well but I can't get the kubeconfig from kube_config_raw exported to a file.
Below are my main.tf and outputs.tf. I have omitted the resource_group and user_assigned_identity resources.
This is a resource I used for creating the configuration: https://learnk8s.io/terraform-aks
main.tf
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">=2.79.1"
    }
  }
}

provider "azurerm" {
  features {}
  subscription_id = "..."
}

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "myCluster"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "my-cluster-dns"

  default_node_pool {
    name            = "agentpool"
    node_count      = 1
    os_disk_size_gb = 64
    vm_size         = "Standard_B2ms"
  }

  identity {
    type                      = "UserAssigned"
    user_assigned_identity_id = azurerm_user_assigned_identity.user_assigned_identity.id
  }

  depends_on = [
    azurerm_user_assigned_identity.user_assigned_identity
  ]
}
outputs.tf - I've tried "./kubeconfig" and "kubeconfig" in the filename but nothing gets exported anywhere
resource "local_file" "kubeconfig" {
depends_on = [azurerm_kubernetes_cluster.aks]
filename = "./kubeconfig"
content = azurerm_kubernetes_cluster.aks.kube_config_raw
}
Bonus: is it possible to export it directly to the existing ~/.kube/config file? Like the az aks get-credentials command does?

outputs.tf - I've tried "./kubeconfig" and "kubeconfig" in the filename but nothing gets exported anywhere
I tested the same code you have and ran terraform apply; it saved the local file to the directory the apply was run from.
For example:
If I run main.tf from C:\Users\user\terraform\aksconfig, where the configuration lives, then the kubeconfig file is saved in that same path.
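If you would rather not depend on the working directory, one option is to anchor the path to the configuration directory with path.module, something like:
resource "local_file" "kubeconfig" {
  depends_on = [azurerm_kubernetes_cluster.aks]

  # path.module is the directory containing this configuration, so the
  # kubeconfig lands next to the .tf files no matter where apply is run from.
  filename = "${path.module}/kubeconfig"
  content  = azurerm_kubernetes_cluster.aks.kube_config_raw
}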
Bonus: is it possible to export it directly to the existing ~/.kube/config file? Like the az aks get-credentials command does?
az aks get-credentials --resource-group myresourcegroup --name myCluster stores the config at ~/.kube/config (C:/Users/user/.kube/config in this example).
To have Terraform write to that same path:
resource "local_file" "kubeconfig" {
depends_on = [azurerm_kubernetes_cluster.aks]
filename = "C:/Users/user/.kube/config" this is where the config file gets stored
content = azurerm_kubernetes_cluster.aks.kube_config_raw
}
With an existing config file already present in ~/.kube/config, the new file completely overwrites it.
Note: using a local_file block here completely overwrites the file rather than appending the new context to the existing one. If you want the contexts merged into a single file the way the az command does it, that is not possible with local_file alone; a common workaround is sketched below.
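If you do want the merge behaviour of az aks get-credentials, one workaround is to have Terraform shell out to the Azure CLI with a local-exec provisioner. A rough sketch, assuming the az CLI is installed and logged in on the machine running Terraform:
resource "null_resource" "merge_kubeconfig" {
  # Re-run the provisioner if the cluster is ever replaced.
  triggers = {
    cluster_id = azurerm_kubernetes_cluster.aks.id
  }

  # az aks get-credentials merges the new cluster context into ~/.kube/config
  # instead of overwriting the whole file.
  provisioner "local-exec" {
    command = "az aks get-credentials --resource-group ${azurerm_resource_group.rg.name} --name ${azurerm_kubernetes_cluster.aks.name} --overwrite-existing"
  }
}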

Related

Terraform Azure AKS - How to install azure-keyvault-secrets-provider add-on

I have an AKS Kubernetes cluster provisioned with Terraform, and I need to enable the azure-keyvault-secrets-provider add-on.
Using the Azure CLI, I could enable it as follows:
az aks enable-addons --addons azure-keyvault-secrets-provider --name myAKSCluster --resource-group myResourceGroup
But how can I do it with Terraform? I checked the documentation, but it doesn't mention a secrets driver beyond this one block:
resource "azurerm_kubernetes_cluster" "k8s_cluster" {
lifecycle {
ignore_changes = [
default_node_pool
]
prevent_destroy = false
}
key_vault_secrets_provider {
secret_rotation_enabled = true
}
...
}
Is the above key_vault_secrets_provider doing the same thing as the azure CLI command az aks enable-addons --addons azure-keyvault-secrets-provider --name myAKSCluster --resource-group myResourceGroup ?
I ask because, according to the Terraform documentation, the key_vault_secrets_provider block is only for rotating Key Vault secrets; there is no mention of enabling the driver itself.
My requirement is to:
Enable the secret provider driver
Create a kubernetes Secret -> so it will provision the secret in azure
Inject the secret to a kubernetes Deployment
I tried the same thing in my environment:
Code, without key_vault_secrets_provider:
main.tf:
resource "azurerm_kubernetes_cluster" "example" {
name = "kavyaexample-aks1"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
dns_prefix = "kavyaexampleaks1"
default_node_pool {
name = "default"
node_count = 1
vm_size = "Standard_D2_v2"
}
identity {
type = "SystemAssigned"
}
tags = {
Environment = "Production"
}
}
output "client_certificate" {
value = azurerm_kubernetes_cluster.example.kube_config.0.client_certificate
sensitive = true
}
When I checked the available add-ons for my managed AKS cluster through the CLI, "azure-keyvault-secrets-provider" was shown as disabled. So recent versions of the provider do support the add-on; it just needs to be enabled.
Command:
az aks addon list --name kavyaexample-aks1 --resource-group <myrg>
I then checked again after adding the key_vault_secrets_provider block with secret rotation enabled.
main.tf:
resource "azurerm_kubernetes_cluster" "example" {
name = "kavyaexample-aks1"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
dns_prefix = "cffggf"
....
key_vault_secrets_provider {
secret_rotation_enabled = true
}
default_node_pool {
name = ”dfgdf”
...
}
When I checked the add-on list again with the same command:
az aks addon list --name kavyaexample-aks1 --resource-group <myrg>
the azure-keyvault-secrets-provider add-on was now shown as enabled.
In other words, adding the key_vault_secrets_provider block with secret rotation enabled is what enables the Azure Key Vault secrets provider add-on.
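For the asker's remaining requirements, note that enabling the add-on also creates a user-assigned identity that still needs permission to read the Key Vault. A rough sketch of granting that access (the secret_identity attribute is exported by recent azurerm provider versions; azurerm_key_vault.kv here stands in for a hypothetical existing key vault):
data "azurerm_client_config" "current" {}

# Let the add-on's managed identity read secrets from the vault.
resource "azurerm_key_vault_access_policy" "aks_secrets_provider" {
  key_vault_id = azurerm_key_vault.kv.id # hypothetical existing key vault
  tenant_id    = data.azurerm_client_config.current.tenant_id
  object_id    = azurerm_kubernetes_cluster.example.key_vault_secrets_provider[0].secret_identity[0].object_id

  secret_permissions = ["Get", "List"]
}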
Also see the terraform-azurerm-aks issue on GitHub about addon_profile being deprecated in recent provider versions.

Error deploying resources on Azure using Terraform Cloud

I have deployed resources on Microsoft Azure using Terraform, with an Azure storage account container holding my Terraform state. I tried to configure Terraform Cloud to automate the deployment, but I get this error:
Error: A resource with the ID "/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/msk-stage-keyvault" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_resource_group" for more information.
with module.keyvault.azurerm_resource_group.msk-keyvault
on ../../modules/az-keyvault/main.tf line 2, in resource "azurerm_resource_group" "msk-keyvault":
resource "azurerm_resource_group" "msk-keyvault" {
It seems that Terraform Cloud is not using the backend state configured in my provider.tf. How do I make Terraform Cloud use that backend state?
My Backend Provider
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.91.0"
    }
  }

  backend "azurerm" {
    resource_group_name  = "msk-configurations"
    storage_account_name = "mskconfigurations"
    container_name       = "key-vault"
    key                  = "stage.tfstate"
  }
}

provider "azurerm" {
  features {}
  subscription_id = var.subscription
  tenant_id       = var.ternant_id
}
It looks like the keyvault resource group already exists in Azure, so the state Terraform Cloud is using does not know about it. First check whether you have already declared the keyvault resource elsewhere or already imported it into a state file. If it is declared twice, remove the duplicate and run again; otherwise import the existing resource into the state Terraform Cloud uses (a sketch follows).
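If the resource group should stay managed by this configuration, the import route is what the error message itself suggests. On Terraform 1.5 or later that can be written declaratively (older versions would use the terraform import CLI command instead); roughly:
import {
  # Address taken from the error message; replace {SUBSCRIPTION_ID} with the
  # real subscription id before applying.
  to = module.keyvault.azurerm_resource_group.msk-keyvault
  id = "/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/msk-stage-keyvault"
}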
Also note that the azurerm backend needs the storage account to exist (and its credentials to be available) before it can store the tfstate, so avoid creating the storage account, the container and the keyvault resource all in the same run. If the storage account is created first, Terraform can then refer to it in the backend.
To preconfigure the storage account and container:
Example:
1. Create the storage account and container in their own configuration first, instead of in the same file:
provider "azurerm" {
features {}
}
data "azurerm_resource_group" "example" {
name = "resourcegroupname"
}
resource "azurerm_storage_account" "example" {
name = "<yourstorageaccountname>"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_storage_container" "example" {
name = "newterraformcont"
storage_account_name = azurerm_storage_account.example.name
container_access_type = "private"
}
2. Then create the msk-keyvault resource group and store its tfstate in that container.
This is my configuration using the already-created storage for state (terraform.tf):
provider "azurerm" {
features {}
}
terraform {
# Configure Terraform State Storage
backend "azurerm" {
resource_group_name = "<resourcegroup>"
storage_account_name = "<storage-earliercreated>"
container_name = " newterraformcont "
key = "terraform.tfstate"
}
}
resource "azurerm_resource_group" " msk-keyvault" {
name = "<msk-keyvault>"
location = "west us"
}
References:
azurerm_resource_group | Resources | hashicorp/azurerm | Terraform Registry
https://www.jorgebernhardt.com/terraform-backend

How to create a storage account for a remote state dynamically?

I know that in order to have remote state in my Terraform code I must create a storage account and a container. Usually this is done manually, but I am trying to create the storage account and the container dynamically using the code below:
resource "azurerm_resource_group" "state_resource_group" {
name = "RG-Terraform-on-Azure"
location = "West Europe"
}
terraform {
backend "azurerm" {
resource_group_name = "RG-Terraform-on-Azure"
storage_account_name = azurerm_storage_account.state_storage_account.name
container_name = azurerm_storage_container.state_container.name
key = "terraform.tfstate"
}
}
resource "azurerm_storage_account" "state_storage_account" {
name = random_string.storage_account_name.result
resource_group_name = azurerm_resource_group.state_resource_group.name
location = azurerm_resource_group.state_resource_group.location
account_tier = "Standard"
account_replication_type = "LRS"
tags = {
environment = "staging"
}
}
resource "azurerm_storage_container" "state_container" {
name = "vhds"
storage_account_name = azurerm_storage_account.state_storage_account.name
container_access_type = "private"
}
resource "random_string" "storage_account_name" {
length = 14
lower = true
numeric = false
upper = false
special = false
}
But, the above code complains that:
│ Error: Variables not allowed
│
│ on main.tf line 11, in terraform:
│ 11: storage_account_name = azurerm_storage_account.state_storage_account.name
│
│ Variables may not be used here.
So I already know that I cannot use variables in the backend block, but I am wondering if there is a solution that would let me create the storage account and the container dynamically and store the state file there?
Point:
I have already seen this question, but the .conf file did not work for me!
This can't be done in the same Terraform file. The backend has to exist before anything else. Terraform requires the backend to exist when you run terraform init. The backend is accessed to read the state as the very first step Terraform performs when you do a plan or apply, before any resources are actually created.
In the past I've automated the creation of the storage backend using a CLI tool. If you wanted to automate it with terraform it would have to be in a separate Terraform workspace, but then where would the backend for that workspace be?
In general, it doesn't really work to create the backend in Terraform.
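In practice that means bootstrapping the storage account once, whether manually, with a CLI, or from a separate configuration that keeps its own state locally, and then pointing the backend at it with literal values, since no expressions are allowed there. A hypothetical example (the storage account name is a placeholder for one created beforehand):
terraform {
  backend "azurerm" {
    # Literal values only: the backend is read during terraform init,
    # before any resources or variables are evaluated.
    resource_group_name  = "RG-Terraform-on-Azure"
    storage_account_name = "tfstatebootstrap01" # placeholder, created beforehand
    container_name       = "vhds"
    key                  = "terraform.tfstate"
  }
}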

Content of the Azure Runbook not getting updated when the content link referred to by the Terraform script is updated

I'm using the azurerm_automation_runbook resource to create an Azure Automation runbook. Below is the code I'm using:
resource "azurerm_automation_runbook" "automation_runbook" {
name = var.automation_runbook_name
location = var.location
resource_group_name = var.resource_group_name
automation_account_name = var.automation_account_name
runbook_type = "PowerShell"
log_verbose = "true"
log_progress = "true"
publish_content_link {
uri = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/c4935ffb69246a6058eb24f54640f53f69d3ac9f/101-automation-runbook-getvms/Runbooks/Get-AzureVMTutorial.ps1"
}
}
I was able to create a runbook with the above code successfully. The problem is that when I change the uri within the publish_content_link block to https://raw.githubusercontent.com/azureautomation/automation-packs/master/200-connect-azure-vm/Runbooks/Connect-AzureVM.ps1 and apply (terraform apply detects the change and applies it successfully), the new PowerShell script is not reflected in the Azure Automation runbook in the Azure Portal; it still shows the old PowerShell script.
Any help on how to fix this issue would be appreciated.
I tested this in my environment and noticed that simply changing the uri in publish_content_link doesn't actually change the runbook script.
To get the script updated you also have to change the name of the runbook, not just the uri. After you change both the name and the uri in the Terraform code and apply, the runbook is recreated in the portal with the new name and content.
Alternatively, if you want to change just the content and not the name, you can save the script from that link locally and pass it as content to the already created runbook; that updates the content successfully.
I tested this as well with the below code:
provider "azurerm"{
features{}
}
provider "local" {}
data "azurerm_resource_group" "example" {
name = "ansumantest"
}
resource "azurerm_automation_account" "example" {
name = "account1"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
sku_name = "Basic"
}
# added after the runbook was created
data "local_file" "pscript" {
filename = "C:/Users/user/terraform/runbook automation/connect-vm.ps1"
}
resource "azurerm_automation_runbook" "example" {
name = "Get-AzureVMTutorial"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
automation_account_name = azurerm_automation_account.example.name
log_verbose = "true"
log_progress = "true"
description = "This is an example runbook"
runbook_type = "PowerShell"
content = data.local_file.pscript.content # added this to update the content
}
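If saving the script locally is inconvenient, a variation on the same idea is to fetch it straight from the URL with the hashicorp/http data source and feed it to content (a sketch; the response_body attribute assumes a recent version of that provider):
data "http" "runbook_script" {
  url = "https://raw.githubusercontent.com/azureautomation/automation-packs/master/200-connect-azure-vm/Runbooks/Connect-AzureVM.ps1"
}

# Then, in the azurerm_automation_runbook resource above, replace the
# local_file reference with:
#   content = data.http.runbook_script.response_body
# Changing the URL changes the fetched body, so the content is updated in place.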

Passing in variables assigned in Shell script

I am trying to run a Terraform deployment via a shell script. In the script I first dynamically collect the access key for my Azure storage account and assign it to a variable, then use that variable in a -var assignment on the terraform command line. This method works great for configuring the backend for remote state, but it is not working for doing a deployment. The other variables used in the template are pulled from a terraform.tfvars file. Below are my shell script and Terraform template:
Shell script:
#!/bin/bash
set -eo pipefail
subscription_name="Visual Studio Enterprise with MSDN"
tfstate_storage_resource_group="terraform-state-rg"
tfstate_storage_account="terraformtfstatesa"
az account set --subscription "$subscription_name"
tfstate_storage_access_key=$(
az storage account keys list \
--resource-group "$tfstate_storage_resource_group" \
--account-name "$tfstate_storage_account" \
--query '[0].value' -o tsv
)
echo $tfstate_storage_access_key
terraform apply \
-var "access_key=$tfstate_storage_access_key"
Deployment template:
provider "azurerm" {
subscription_id = "${var.sub_id}"
}
data "terraform_remote_state" "rg" {
backend = "azurerm"
config {
storage_account_name = "terraformtfstatesa"
container_name = "terraform-state"
key = "rg.stage.project.terraform.tfstate"
access_key = "${var.access_key}"
}
}
resource "azurerm_storage_account" "my_table" {
name = "${var.storage_account}"
resource_group_name = "${data.terraform_remote_state.rg.rgname}"
location = "${var.region}"
account_tier = "Standard"
account_replication_type = "LRS"
}
I have tried defining the variable in my terraform.tfvars file:
storage_account = "appastagesa"
les_table_name = "appatable
region = "eastus"
sub_id = "abc12345-099c-1234-1234-998899889988"
access_key = ""
The access_key definition appears to get ignored.
I then tried not using a terraform.tfvars file, and created the variables.tf file below:
variable "storage_account" {
  description = "Name of the storage account to create"
  default     = "appastagesa"
}

variable "les_table_name" {
  description = "Name of the App table to create"
  default     = "appatable"
}

variable "region" {
  description = "The region where resources will be deployed (ex. eastus, eastus2, etc.)"
  default     = "eastus"
}

variable "sub_id" {
  description = "The ID of the subscription to deploy into"
  default     = "abc12345-099c-1234-1234-998899889988"
}

variable "access_key" {}
I then modified my deploy.sh script to use the line below to run my terraform deployment:
terraform apply \
-var "access_key=$tfstate_storage_access_key" \
-var-file="variables.tf"
This results in the error invalid value "variables.tf" for flag -var-file: multiple map declarations not supported for variables, followed by Usage: terraform apply [options] [DIR-OR-PLAN].
After playing with this for hours... I am almost embarrassed by what the problem was, but I am also frustrated with Terraform because of the time I wasted on this issue.
I had all of my variables defined in my variables.tf file, all but one with default values. The one without a default I was passing on the command line, and the command line was where the problem was. From the documentation I had read, I thought I had to tell Terraform where my variables file was with the -var-file option. It turns out you don't: -var-file expects a .tfvars file, not a .tf file, and Terraform picks up variables.tf automatically. All I had to do was use -var for the variable with no default, and Terraform found the variables.tf file on its own. Frustrating. I am in love with Terraform, but the one negative I would give it is that the documentation is lacking.
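For anyone hitting the same wall: Terraform loads every *.tf file and a terraform.tfvars file automatically, while -var-file is only for passing an additional .tfvars file, never a .tf file. A terraform.tfvars equivalent of the setup above (same values as the earlier terraform.tfvars, minus access_key) would look like this:
# terraform.tfvars -- loaded automatically, no -var-file flag needed
storage_account = "appastagesa"
les_table_name  = "appatable"
region          = "eastus"
sub_id          = "abc12345-099c-1234-1234-998899889988"

# access_key is deliberately not set here; it is supplied at run time:
#   terraform apply -var "access_key=$tfstate_storage_access_key"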
