I already have an AKS cluster and want to add a new node pool to it with Terraform, but I couldn't find the kubernetes_cluster_id value. So I'm wondering: is it possible to create a new node pool in an existing AKS cluster with Terraform?
You can use a data source to look up the cluster ID and then reference it in the azurerm_kubernetes_cluster_node_pool resource, where kubernetes_cluster_id is required.
data "azurerm_kubernetes_cluster" "example" {
name = "myakscluster"
resource_group_name = "my-example-resource-group"
}
resource "azurerm_kubernetes_cluster_node_pool" "example" {
name = "internal"
kubernetes_cluster_id = data.azurerm_kubernetes_cluster.example.id
vm_size = "Standard_DS2_v2"
node_count = 1
tags = {
Environment = "Production"
}
}
Is it possible to add a new node pool to an existing AKS Cluster with Terraform?
Answer: Yes, you can use the azurerm_kubernetes_cluster_node_pool resource in Terraform to create a new node pool, using the resource ID of the existing AKS cluster as the kubernetes_cluster_id.
You can find your kubernetes_cluster_id in the Azure Portal or Azure CLI.
Azure CLI
Command:
az aks show --resource-group <your-resource-group> --name <your-aks-cluster-name> --query id -o tsv
Output:
/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.ContainerService/managedClusters/my-aks-cluster
Azure Portal
In the Azure Portal, the same value is shown as the Resource ID on the AKS cluster's Properties blade.
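If you prefer to pass that ID in directly rather than through a data source, a minimal sketch could look like this (the variable name aks_cluster_id is only illustrative, not part of the original answer):

variable "aks_cluster_id" {
  description = "Resource ID of the existing AKS cluster (e.g. from az aks show)"
  type        = string
}

resource "azurerm_kubernetes_cluster_node_pool" "extra" {
  name                  = "extra"
  kubernetes_cluster_id = var.aks_cluster_id
  vm_size               = "Standard_DS2_v2"
  node_count            = 1
}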
Related
My Azure infrastructure is created using Terraform.
Now I want to add a few resources to an existing resource group.
When I do so, I get an error saying the resource group already exists.
How can I refer to the existing resource without making changes to the existing resources or the tfstate file?
There are a couple of ways to refer to an existing resource in Azure without making changes:
Use Terraform import
Use Terraform data resource
Terraform import example:
resource "azurerm_resource_group" "example" {
  # ...instance configuration...
  name = "MyResourceGroup"
}
Run command:
terraform import azurerm_resource_group.example \
  /subscriptions/MySubscriptionNumber/resourceGroups/MyResourceGroup
Terraform data resource example:
data "azurerm_resource_group" "example" {
name = "MyResourceGroup"
}
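With the data source in place, new resources can point at the existing group's attributes instead of declaring the group again. A minimal sketch (the storage account here is just an illustrative new resource, and its name is a placeholder):

# Reuse the existing resource group's name and location for a new resource.
resource "azurerm_storage_account" "example" {
  name                     = "examplestorageacct" # placeholder, must be globally unique
  resource_group_name      = data.azurerm_resource_group.example.name
  location                 = data.azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}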
I have created some resources in Azure using Terraform and a Service principal:
A resource group
A virtual network
A virtual machine
Now, I need to create a virtual Gateway from this resource group and virtual network, but using a personal Azure account in the same Organization.
How can I add my user (by email) as an Administrator of this resource group, from Terraform, using the Service Principal credentials?
You can use the Terraform resource azurerm_role_assignment to grant your user Owner permissions on this resource group.
Example:
resource "azurerm_resource_group" "this" {
name = "example"
location = "West Europe"
}
resource "azurerm_role_assignment" "this" {
scope = azurerm_resource_group.this.id
role_definition_name = "Owner"
principal_id = "<Your user object id>"
}
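If you would rather not hard-code the object ID, it can be looked up with the azuread provider. A minimal sketch, assuming the azuread provider is configured with the same Service Principal and that the user's UPN matches their email (the UPN below is a placeholder):

# Look up the user in Azure AD by their user principal name (email).
data "azuread_user" "admin" {
  user_principal_name = "user@example.com" # placeholder UPN
}

resource "azurerm_role_assignment" "admin" {
  scope                = azurerm_resource_group.this.id
  role_definition_name = "Owner"
  principal_id         = data.azuread_user.admin.object_id
}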
What is the Terraform equivalent to
az vm encryption enable --name <vm-name> --resource-group <resource-group> --volume-type OS --aad-client-id <client-id> --aad-client-secret <client-secret> --disk-encryption-keyvault https://<key-vault-name>.vault.azure.net/secrets/<secret-name>/<secret-version>
Based on this repository:
In this config we configure the Azure Key Vault service for server-side encryption (SSE) of the Azure Managed Disk. This can be achieved with the Terraform resource azurerm_disk_encryption_set.
resource "azurerm_disk_encryption_set" "example" {
name = "des"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
key_vault_key_id = azurerm_key_vault_key.example.id
identity {
type = "SystemAssigned"
}
}
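To make the disk encryption set usable, its system-assigned identity also needs access to the Key Vault key, and the set has to be attached to a disk. A minimal sketch, assuming azurerm_key_vault.example and azurerm_resource_group.example exist in the surrounding config:

# Allow the disk encryption set's managed identity to use the Key Vault key.
resource "azurerm_key_vault_access_policy" "disk" {
  key_vault_id = azurerm_key_vault.example.id
  tenant_id    = azurerm_disk_encryption_set.example.identity[0].tenant_id
  object_id    = azurerm_disk_encryption_set.example.identity[0].principal_id

  key_permissions = ["Get", "WrapKey", "UnwrapKey"]
}

# Encrypt a managed disk with the disk encryption set.
resource "azurerm_managed_disk" "example" {
  name                   = "example-disk"
  resource_group_name    = azurerm_resource_group.example.name
  location               = azurerm_resource_group.example.location
  storage_account_type   = "Standard_LRS"
  create_option          = "Empty"
  disk_size_gb           = 1
  disk_encryption_set_id = azurerm_disk_encryption_set.example.id

  # Ensure the access policy exists before the disk tries to use the key.
  depends_on = [azurerm_key_vault_access_policy.disk]
}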
Problem
Terraform gives the following error when trying to run terraform plan or terraform apply after creating a service principal in Azure:
provider.azurerm: No valid (unexpired) Azure CLI Auth Tokens found. Please run az login.
Steps to Reproduce
Create a service principal in Azure via az ad sp create-for-rbac.
Add the service principal configuration as a provider block to your .tf file:
provider "azurerm" {
alias = "tf_bootstrap"
client_id = "55708466-3686-xxxx-xxxx-xxxxxxxxxxxx"
client_secret = "88352837-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
tenant_id = "129a861e-a703-xxxx-xxxx-xxxxxxxxxxxx"
subscription_id = "c2e9d518-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}
resource "azurerm_resource_group" "dev" {
name = "dev-rg"
location = "East US"
}
Attempt to run terraform plan.
If you use the alias key in a provider block, as shown in the question, a provider key must be specified in each data or resource block.
For example:
// When a provider alias has been defined.
resource "azurerm_resource_group" "dev" {
  provider = azurerm.tf_bootstrap
  name     = "dev-rg"
  location = "East US"
}
If you miss the provider key on one of your resource or data blocks, authentication fails for that block.
Note, however, that it is also valid not to set an alias in the provider block at all; in that case the provider key can be omitted from every resource and data block.
// When a provider alias has not been defined.
resource "azurerm_resource_group" "dev" {
  name     = "dev-rg"
  location = "East US"
}
We are migrating from unmanaged to managed disks in Azure. Currently our backend.tf definition is as follows:
terraform {
  backend "azure" {
    storage_account_name = "foo"
    container_name       = "foo-container"
    key                  = "foo.tfstate"
  }
}
With managed disks you don't have a reference to a storage account, as the disks are managed by Azure. What does this mean for backend.tf? Do we just remove the storage account and container? Do we need to add some flag to identify the backend storage as managed? Google search is not producing the required answers, hence reaching out here.
Thanks
With managed disks you don't have reference to storage account as it
is managed by Azure. What does this mean for backend.tf.
It means you cannot use a managed disk with backend "azure"; the Azure backend does not support it.
Please refer to the official document: the backend stores the state as a given key in a given blob container on Microsoft Azure Storage, so a storage account and container are still required.
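For reference, a minimal sketch of a blob-storage backend in current Terraform releases, where the backend is named azurerm rather than azure (the resource group name below is only a placeholder):

terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg" # placeholder: group that holds the state storage account
    storage_account_name = "foo"
    container_name       = "foo-container"
    key                  = "foo.tfstate"
  }
}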
To create a managed disk with Terraform, you could check this link:
resource "azurerm_managed_disk" "test" {
name = "acctestmd"
location = "West US 2"
resource_group_name = "${azurerm_resource_group.test.name}"
storage_account_type = "Standard_LRS"
create_option = "Empty"
disk_size_gb = "1"
tags {
environment = "staging"
}