Azure Terraform - how to add encryption values to VMs - azure

What is the Terraform equivalent of
az vm encryption enable --name --resource-group --volume-type OS --aad-client-id --aad-client-secret --disk-encryption-keyvault https:///secrets//

Based on this repository:
In this configuration we use the Azure Key Vault service for server-side encryption (SSE) of an Azure managed disk. This can be achieved with the Terraform resource azurerm_disk_encryption_set.
resource "azurerm_disk_encryption_set" "example" {
  name                = "des"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  key_vault_key_id    = azurerm_key_vault_key.example.id

  identity {
    type = "SystemAssigned"
  }
}
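To actually encrypt a VM's OS disk with this set (the closest equivalent of --volume-type OS in the CLI command above), reference the set's ID from the os_disk block. A minimal sketch, assuming the VM's remaining arguments (network interface, credentials, image) are defined elsewhere:

```hcl
resource "azurerm_linux_virtual_machine" "example" {
  name                = "example-vm"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  size                = "Standard_B2s"
  admin_username      = "adminuser"
  # network_interface_ids, admin auth, and source_image_reference omitted for brevity

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
    # Encrypt the OS disk with the customer-managed key behind the set above.
    disk_encryption_set_id = azurerm_disk_encryption_set.example.id
  }
}
```

Note that the disk encryption set's system-assigned identity also needs key permissions (Get, WrapKey, UnwrapKey) on the Key Vault, for example via an azurerm_key_vault_access_policy resource, before the platform can use the key.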

Related

AKS new nodepool provisioning with terraform

I already have an AKS cluster and want to add a new node pool with Terraform, but I couldn't find the kubernetes_cluster_id value. So I'm wondering: is it possible to create a new node pool in an existing AKS cluster with Terraform?
You can use a data source to look up the cluster ID and then reference it in the azurerm_kubernetes_cluster_node_pool resource, where kubernetes_cluster_id is required.
data "azurerm_kubernetes_cluster" "example" {
  name                = "myakscluster"
  resource_group_name = "my-example-resource-group"
}

resource "azurerm_kubernetes_cluster_node_pool" "example" {
  name                  = "internal"
  kubernetes_cluster_id = data.azurerm_kubernetes_cluster.example.id
  vm_size               = "Standard_DS2_v2"
  node_count            = 1

  tags = {
    Environment = "Production"
  }
}
Is it possible to add a new node pool to an existing AKS cluster with Terraform?
Answer: Yes. You can use the azurerm_kubernetes_cluster_node_pool resource in Terraform to create a new node pool, using the resource ID of the existing AKS cluster as kubernetes_cluster_id.
You can find your kubernetes_cluster_id in the Azure Portal or with the Azure CLI.
Azure CLI
Command:
az aks show --resource-group <your-resource-group> --name <your-aks-cluster-name> --query id -o tsv
Output:
/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.ContainerService/managedClusters/my-aks-cluster
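If you retrieve the ID with the CLI instead of a data source, you can pass it into Terraform as an ordinary variable. A sketch, with a hypothetical variable name:

```hcl
variable "aks_cluster_id" {
  type        = string
  description = "Resource ID of the existing AKS cluster, e.g. the output of az aks show above"
}

resource "azurerm_kubernetes_cluster_node_pool" "extra" {
  name                  = "extra"
  kubernetes_cluster_id = var.aks_cluster_id
  vm_size               = "Standard_DS2_v2"
  node_count            = 1
}
```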
Azure Portal

Deploy VM in Azure Stack Edge with Terraform

I want to deploy some virtual machines inside Azure Stack Edge with Terraform. Is it possible?
From the Azure documentation here, I suspect that I can use the same Terraform code I use to create virtual machines in a resource group, because they seem to use the same Azure API, but I'm not sure.
If so, how can I adapt my code to target an Azure Stack Edge instead of an Azure resource group?
# Creating the VM
resource "azurerm_windows_virtual_machine" "jumphost" {
  name                = var.name
  resource_group_name = data.azurerm_resource_group.jumphost.name
  location            = data.azurerm_resource_group.jumphost.location
  size                = "Standard_B2ms"
  admin_username      = "adminuser"
  admin_password      = data.azurerm_key_vault_secret.jumphost.value
  network_interface_ids = [
    azurerm_network_interface.jumphost.id,
  ]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
    disk_size_gb         = 127
  }

  source_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2022-Datacenter"
    version   = "latest"
  }
}
This is an example of how I deploy a VM.
Many thanks
Both the Azure Stack provider and the Azure provider manage resources through the Azure Resource Manager APIs, so you can use the same Terraform code to deploy resources in Azure Stack or Azure; only the provider needs to change.
Terraform, created by Microsoft partner HashiCorp, uses the same ARM REST APIs as its foundation.
For more information you can refer to this document.
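In practice, switching providers means replacing the azurerm provider block with an azurestack one pointed at your stack's local ARM endpoint, and renaming resources from azurerm_* to azurestack_*. A hypothetical sketch (the endpoint URL and credential variables are placeholders, and the exact provider arguments depend on the provider version):

```hcl
provider "azurestack" {
  arm_endpoint    = "https://management.local.azurestack.external" # placeholder endpoint
  subscription_id = var.subscription_id
  client_id       = var.client_id
  client_secret   = var.client_secret
  tenant_id       = var.tenant_id
}
```

Note the azurestack provider is documented against Azure Stack Hub; whether it covers a particular Azure Stack Edge scenario should be verified against its documentation.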

how to manage terraform state file in azure for preventing deletion and replacement

How can I manage the Terraform state file in Azure to prevent deletion and replacement, just like ARM templates?
I am deploying VMs with Terraform, and terraform plan and terraform apply replace my VMs each time.
You can store Terraform state in Azure Storage by following this tutorial; Azure Storage blobs are automatically locked before any operation that writes state.
To do so, you configure the state backend. The Terraform state backend is configured when you run the terraform init command. The following data is needed to configure the state backend:
storage_account_name: The name of the Azure Storage account.
container_name: The name of the blob container.
key: The name of the state store file to be created.
access_key: The storage access key.
Here is an example.
terraform {
  backend "azurerm" {
    resource_group_name  = "tstate"
    storage_account_name = "tstate09762"
    container_name       = "tstate"
    key                  = "terraform.tfstate"
  }
}

resource "azurerm_resource_group" "state-demo-secure" {
  name     = "state-demo"
  location = "eastus"
}
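The access_key does not have to be committed to the backend block: the azurerm backend also reads it from the ARM_ACCESS_KEY environment variable, or it can be supplied at init time with -backend-config. A sketch of the same backend with the secret kept out of source control:

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "tstate"
    storage_account_name = "tstate09762"
    container_name       = "tstate"
    key                  = "terraform.tfstate"
    # access_key omitted: set ARM_ACCESS_KEY in the environment, or run
    # terraform init -backend-config="access_key=<key>"
  }
}
```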

Terraform fails using an Azure service principal for authentication

Problem
Terraform gives the following error when trying to use terraform plan or terraform apply after creating a service principal in Azure:
provider.azurerm: No valid (unexpired) Azure CLI Auth Tokens found. Please run az login.
Steps to Reproduce
Create a service principal in Azure via az ad sp create-for-rbac.
Add the service principal configuration as a provider block to your .tf file:
provider "azurerm" {
  alias           = "tf_bootstrap"
  client_id       = "55708466-3686-xxxx-xxxx-xxxxxxxxxxxx"
  client_secret   = "88352837-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  tenant_id       = "129a861e-a703-xxxx-xxxx-xxxxxxxxxxxx"
  subscription_id = "c2e9d518-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}

resource "azurerm_resource_group" "dev" {
  name     = "dev-rg"
  location = "East US"
}
Attempt to run terraform plan.
If you use the alias key in a provider block, as shown in the question, a provider key must be specified in each data or resource block.
For example:
// When a provider alias has been defined.
resource "azurerm_resource_group" "dev" {
  provider = "azurerm.tf_bootstrap"
  name     = "dev-rg"
  location = "East US"
}
If you omit the provider key on one of your resource or data blocks, authentication fails for that block.
Note, however, that it is also valid not to specify an alias key in the original provider block. In that case the provider key can be omitted from every resource and data block.
// When a provider alias has not been defined.
resource "azurerm_resource_group" "dev" {
  name     = "dev-rg"
  location = "East US"
}

Terraform Backend in azure with managed disks

We are migrating from unmanaged to managed disks in Azure. Currently our backend.tf definition is as follows:
terraform {
  backend "azure" {
    storage_account_name = "foo"
    container_name       = "foo-container"
    key                  = "foo.tfstate"
  }
}
With managed disks you don't have a reference to a storage account, as it is managed by Azure. What does this mean for backend.tf? Do we just remove the storage account and container? Do we need to add some flag to identify the backend storage as managed? A Google search is not producing the required answers, hence reaching out here.
Thanks
With managed disks you don't have a reference to a storage account, as it
is managed by Azure. What does this mean for backend.tf?
It means you cannot use backend "azure" for this; Azure managed disks do not support it. Please refer to the official documentation: the backend "stores the state as a given key in a given blob container on Microsoft Azure Storage."
To create a managed disk with Terraform, you can check this link.
resource "azurerm_managed_disk" "test" {
  name                 = "acctestmd"
  location             = "West US 2"
  resource_group_name  = azurerm_resource_group.test.name
  storage_account_type = "Standard_LRS"
  create_option        = "Empty"
  disk_size_gb         = 1

  tags = {
    environment = "staging"
  }
}
