Terraform Create Azure IoT Device Provisioning Service Enrollment Group

I've been trying to create an Azure IoT Hub Device Provisioning Service along with an enrollment group that uses X.509 certificates.
As far as I can see, there's no way to do it using the azurerm provider. I've also explored azapi options, but it seems that type = "Microsoft.Devices/provisioningServices#2022-12-12" won't offer automatic enrollment group creation either.
Is there any other provider I could use for that?

Eventually, I ended up using local_file to create a temporary certificate file and a null_resource to run Azure CLI commands. My solution:
locals {
  iot_hub_name = join("-", [var.project_name, "iothub", var.environment_name])
  dps_name     = join("-", [var.project_name, "dps", var.environment_name])
  cert_path    = "intermediate"
}

data "azurerm_client_config" "current" {}

resource "azurerm_iothub" "azure_iot_hub" {
  ...
}

resource "azurerm_iothub_dps" "azure_iot_hub_dps" {
  ...
}

resource "local_file" "create_cert_file" {
  content  = var.iot_dps_intermediate_cert
  filename = local.cert_path
}

resource "null_resource" "create-dps-certificate-enrollment" {
  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]
    command     = <<-EOT
      az login --service-principal -u $CLIENT_ID -p $CLIENT_SECRET --tenant $TENANT_ID
      az extension add --name azure-iot
      az iot dps enrollment-group create --cp $CERT_PATH -g $RESOURCE_GROUP --dps-name $DPS_NAME --enrollment-id $ENROLLMENT_ID
    EOT
    environment = {
      CLIENT_ID      = data.azurerm_client_config.current.client_id
      TENANT_ID      = data.azurerm_client_config.current.tenant_id
      CLIENT_SECRET  = var.client_secret
      RESOURCE_GROUP = var.resource_group_name
      DPS_NAME       = local.dps_name
      ENROLLMENT_ID  = "${local.dps_name}-enrollment-group"
      CERT_PATH      = local.cert_path
    }
  }

  depends_on = [local_file.create_cert_file]
}
where var.iot_dps_intermediate_cert holds the content of the .pem file used to create the new enrollment group.
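For completeness, the certificate content and the client secret can be passed in as sensitive variables; a minimal sketch (variable names taken from the code above, the attributes are my own choice):

```hcl
# The PEM content consumed by local_file above; marked sensitive so the
# certificate does not show up in plan output.
variable "iot_dps_intermediate_cert" {
  type      = string
  sensitive = true
}

# The service principal secret consumed by the az login command above.
variable "client_secret" {
  type      = string
  sensitive = true
}
```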

Related

Terraform Azure AKS - How to install azure-keyvault-secrets-provider add-on

I have an AKS Kubernetes cluster provisioned with Terraform, and I need to enable the azure-keyvault-secrets-provider add-on.
Using the Azure CLI, I could enable it as follows:
az aks enable-addons --addons azure-keyvault-secrets-provider --name myAKSCluster --resource-group myResourceGroup
But how can I do it with Terraform? I checked the documentation, but it doesn't mention anything about a secrets driver except the following block:
resource "azurerm_kubernetes_cluster" "k8s_cluster" {
  lifecycle {
    ignore_changes = [
      default_node_pool
    ]
    prevent_destroy = false
  }

  key_vault_secrets_provider {
    secret_rotation_enabled = true
  }

  ...
}
Is the above key_vault_secrets_provider block doing the same thing as the Azure CLI command az aks enable-addons --addons azure-keyvault-secrets-provider --name myAKSCluster --resource-group myResourceGroup?
Because according to the Terraform documentation, the key_vault_secrets_provider block is only for rotating the Key Vault secrets; there is no mention of enabling the driver.
My requirement is to:
1. Enable the secrets provider driver
2. Create a Kubernetes Secret, so it will provision the secret in Azure
3. Inject the secret into a Kubernetes Deployment
I tested the same in my environment:
Code: without key_vault_secrets_provider
main.tf:
resource "azurerm_kubernetes_cluster" "example" {
  name                = "kavyaexample-aks1"
  location            = data.azurerm_resource_group.example.location
  resource_group_name = data.azurerm_resource_group.example.name
  dns_prefix          = "kavyaexampleaks1"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  tags = {
    Environment = "Production"
  }
}

output "client_certificate" {
  value     = azurerm_kubernetes_cluster.example.kube_config.0.client_certificate
  sensitive = true
}
When I checked the available add-ons for my managed AKS cluster through the CLI, "azure-keyvault-secrets-provider" was shown as disabled. This means recent versions of the Terraform provider do support the add-on; it just needs to be enabled.
Command:
az aks addon list --name kavyaexample-aks1 --resource-group <myrg>
I then checked again after adding the key_vault_secrets_provider block with secret rotation enabled.
main.tf:
resource "azurerm_kubernetes_cluster" "example" {
  name                = "kavyaexample-aks1"
  location            = data.azurerm_resource_group.example.location
  resource_group_name = data.azurerm_resource_group.example.name
  dns_prefix          = "cffggf"
  ....

  key_vault_secrets_provider {
    secret_rotation_enabled = true
  }

  default_node_pool {
    name = "dfgdf"
    ...
  }
}
When I checked the add-on list again using the same command:
az aks addon list --name kavyaexample-aks1 --resource-group <myrg>
the Azure Key Vault secrets provider add-on was enabled. In other words, adding the key_vault_secrets_provider block with secret rotation enabled is itself what enables the add-on.
Also check the terraform-provider-azurerm GitHub issue on addon_profile being deprecated in recent provider versions.
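As a side note (my own addition, not from the original answer): once the block is set, the provider also exports the add-on's managed identity, which is what you grant Key Vault access to. A minimal sketch, with the attribute path taken from the azurerm provider docs and the output name being illustrative:

```hcl
# Enabling the add-on and reading back the identity it creates.
resource "azurerm_kubernetes_cluster" "example" {
  # ... name, location, default_node_pool, identity, etc. ...

  key_vault_secrets_provider {
    secret_rotation_enabled = true
  }
}

# Object ID of the add-on's user-assigned identity; grant it
# "Key Vault Secrets User" (RBAC) or an access policy on the vault.
output "secrets_provider_identity_object_id" {
  value = azurerm_kubernetes_cluster.example.key_vault_secrets_provider[0].secret_identity[0].object_id
}
```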

Make Terraform azurerm_key_vault_certificate module create a new version of Key Vault Certificate

Newbie here, so don't punch too hard, please.
The big-picture goal is automating the process of provisioning a new SSL cert from Let's Encrypt, storing the cert in Azure Key Vault, and then propagating it to a number of Azure VMs. We have a solution in place, but of course it was created by people who are no longer with the organization, and now we're trying to improve it for our scenario. The current problem: Terraform is trying to delete an existing cert from Azure Key Vault and then create a new one with the same name. Of course, soft delete and purge protection are both enabled, and (!) they're imposed by an AAD group policy, so they can't be disabled in order to create a new vault without them. I've read here that it's possible to work around the issue using Terraform's null_resource, but I don't know enough about it to use it. Does anyone have an idea how exactly this could be achieved? Please feel free to ask further questions; I'll answer what I can. Thanks!
To use null_resource, we must first declare the null provider in our main.tf file. See "Terraform null provider and null_resource explained" by Jack Roper (FAUN Publication).
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.43"
    }
    null = {
      version = "~> 3.0.0"
    }
  }
}
Then, assuming you have defined the key vault and certificate resources:
resource "azurerm_key_vault" "kv" {
  name                = "ansumankeyvault01"
  location            = data.azurerm_resource_group.example.location
  resource_group_name = data.azurerm_resource_group.example.name
  tenant_id           = ...
  ....
}

resource "azurerm_key_vault_certificate" "example" {
  name         = "generated-cert"
  key_vault_id = azurerm_key_vault.kv.id

  certificate_policy {
    ...

    key_properties {
      ...
    }

    lifetime_action {
      action {
        action_type = "AutoRenew"
      }
      trigger {
        days_before_expiry = 30
      }
    }
  }
}
Please check the workaround in the terraform-provider-azurerm GitHub issue you mentioned.
Refer to the null_resource documentation in the Terraform Registry.
PowerShell using the local-exec provisioner:
resource "null_resource" "script" {
  provisioner "local-exec" {
    command     = "Add-AzKeyVaultCertificate -VaultName ${azurerm_key_vault.kv.name} -Name ${azurerm_key_vault_certificate.example.name} -CertificatePolicy ${azurerm_key_vault_certificate.example.certificate_policy}"
    interpreter = ["powershell", "-Command"]
  }

  depends_on = [azurerm_key_vault_certificate.example]
}
Or using az CLI commands:
resource "null_resource" "nullrsrc" {
  provisioner "local-exec" {
    command = "az keyvault certificate create --name ${azurerm_key_vault_certificate.example.name} --policy ${azurerm_key_vault_certificate.example.certificate_policy} --vault-name ${azurerm_key_vault.kv.name}"
  }

  depends_on = [azurerm_key_vault_certificate.example]
}
Or you can create a PowerShell script and reference it:
resource "null_resource" "update_cert_version" {
  provisioner "local-exec" {
    command     = ".'${path.module}\\scripts\\update_cert_version.ps1'"
    interpreter = ["pwsh", "-Command"]
  }

  depends_on = [azurerm_key_vault_certificate.example]
}
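One caveat worth adding (my own suggestion, not from the original answer): a null_resource only runs once unless its triggers change. To make the script run again whenever the certificate changes (and thus create a new version rather than a delete-and-recreate), you can key the triggers on an exported certificate attribute; a sketch, with the resource and trigger names being my own:

```hcl
# Hypothetical sketch: re-run the local-exec whenever the certificate
# thumbprint changes, so a new Key Vault certificate version is created.
resource "null_resource" "new_cert_version" {
  triggers = {
    thumbprint = azurerm_key_vault_certificate.example.thumbprint
  }

  provisioner "local-exec" {
    command     = ".'${path.module}\\scripts\\update_cert_version.ps1'"
    interpreter = ["pwsh", "-Command"]
  }

  depends_on = [azurerm_key_vault_certificate.example]
}
```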
References:
Azure CLI or PowerShell command to create new version of a certificate in keyvault - Stack Overflow
Execute AZ CLI commands using local-exec provisioner · Issue #1046 · hashicorp/terraform-provider-azurerm · GitHub

How to activate Managed HSM and configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM using Terraform

I'm working on creating an Azure Key Vault Managed HSM using Terraform. For that I have followed this documentation.
The documentation contains the code for creating the HSM, but not for activating it.
I want to provision and activate a Managed HSM using Terraform. Is this possible through Terraform or not?
After activating the Managed HSM, I want to configure encryption with customer-managed keys stored in it. For that I have followed this documentation, but it only contains Azure CLI code.
Unfortunately, it is not directly possible to activate the Managed HSM from Terraform. Currently you can only provision it from Terraform or an ARM template; activation has to be done from PowerShell or the Azure CLI. The same applies to updating the storage account with a customer-managed key and creating a key vault role assignment.
If you use azurerm_storage_account_customer_managed_key, you will get an error.
Overall, all Managed HSM key vault operations need to be performed through the CLI or PowerShell.
As a workaround, you can use local-exec in Terraform to run those commands directly instead of performing them as separate manual steps.
Code:
provider "azurerm" {
  features {}
}

data "azurerm_client_config" "current" {}

resource "azurerm_resource_group" "example" {
  name     = "keyvaulthsm-resources"
  location = "West Europe"
}

resource "azurerm_key_vault_managed_hardware_security_module" "example" {
  name                       = "testKVHsm"
  resource_group_name        = azurerm_resource_group.example.name
  location                   = azurerm_resource_group.example.location
  sku_name                   = "Standard_B1"
  purge_protection_enabled   = true
  soft_delete_retention_days = 90
  tenant_id                  = data.azurerm_client_config.current.tenant_id
  admin_object_ids           = [data.azurerm_client_config.current.object_id]

  tags = {
    Env = "Test"
  }
}

variable "KeyName" {
  default = ["C:/<Path>/cert_0.key", "C:/<Path>/cert_1.key", "C:/<Path>/cert_2.key"]
}

variable "CertName" {
  default = ["C:/<Path>/cert_0.cer", "C:/<Path>/cert_1.cer", "C:/<Path>/cert_2.cer"]
}

resource "null_resource" "OPENSSLCERT" {
  count = 3

  provisioner "local-exec" {
    command     = <<EOT
      cd "C:\Program Files\OpenSSL-Win64\bin"
      ./openssl.exe req -newkey rsa:2048 -nodes -keyout ${var.KeyName[count.index]} -x509 -days 365 -out ${var.CertName[count.index]} -subj "/C=IN/ST=Telangana/L=Hyderabad/O=exy ltd/OU=Stack/CN=domain.onmicrosoft.com"
    EOT
    interpreter = ["PowerShell", "-Command"]
  }
}

resource "null_resource" "securityDomain" {
  provisioner "local-exec" {
    command     = <<EOT
      az keyvault security-domain download --hsm-name ${azurerm_key_vault_managed_hardware_security_module.example.name} --sd-wrapping-keys ./cert_0.cer ./cert_1.cer ./cert_2.cer --sd-quorum 2 --security-domain-file ${azurerm_key_vault_managed_hardware_security_module.example.name}-SD.json
    EOT
    interpreter = ["PowerShell", "-Command"]
  }

  depends_on = [
    null_resource.OPENSSLCERT
  ]
}

resource "azurerm_storage_account" "example" {
  name                     = "ansumanhsmstor1"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "GRS"

  identity {
    type = "SystemAssigned"
  }
}

resource "null_resource" "roleassignkv" {
  provisioner "local-exec" {
    command     = <<EOT
      az keyvault role assignment create --hsm-name ${azurerm_key_vault_managed_hardware_security_module.example.name} --role "Managed HSM Crypto Service Encryption User" --assignee ${azurerm_storage_account.example.identity[0].principal_id} --scope /keys
      az keyvault role assignment create --hsm-name ${azurerm_key_vault_managed_hardware_security_module.example.name} --role "Managed HSM Crypto User" --assignee ${data.azurerm_client_config.current.object_id} --scope /
      az keyvault key create --hsm-name ${azurerm_key_vault_managed_hardware_security_module.example.name} --name storageencryptionkey --ops wrapKey unwrapKey --kty RSA-HSM --size 3072
    EOT
    interpreter = ["PowerShell", "-Command"]
  }

  depends_on = [
    null_resource.securityDomain,
    azurerm_storage_account.example
  ]
}

resource "null_resource" "storageupdate" {
  provisioner "local-exec" {
    command     = <<EOT
      az storage account update --name ${azurerm_storage_account.example.name} --resource-group ${azurerm_resource_group.example.name} --encryption-key-name storageencryptionkey --encryption-key-source Microsoft.Keyvault --encryption-key-vault ${azurerm_key_vault_managed_hardware_security_module.example.hsm_uri}
    EOT
    interpreter = ["PowerShell", "-Command"]
  }

  depends_on = [
    null_resource.securityDomain,
    azurerm_storage_account.example,
    null_resource.roleassignkv
  ]
}
Note: Please make sure to enable purge protection on the HSM key vault and have all the required permissions on the management plane (not added in the code) and control plane (added in the code). To install OpenSSL, you can refer to the answer by mtotowamkwe on this SO thread.
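As an aside (my own addition, and worth double-checking against the current provider docs): more recent versions of the azurerm provider have since gained security-domain settings on the HSM resource itself, which may make the manual download step above unnecessary. A heavily hedged sketch; the attribute names are from memory, and the referenced wrapping certificates are hypothetical:

```hcl
# Hedged sketch: newer azurerm provider versions reportedly support
# activation by supplying security-domain wrapping certificates directly.
# Verify these attribute names against the provider docs before use.
resource "azurerm_key_vault_managed_hardware_security_module" "example" {
  name                       = "testKVHsm"
  resource_group_name        = azurerm_resource_group.example.name
  location                   = azurerm_resource_group.example.location
  sku_name                   = "Standard_B1"
  purge_protection_enabled   = true
  soft_delete_retention_days = 90
  tenant_id                  = data.azurerm_client_config.current.tenant_id
  admin_object_ids           = [data.azurerm_client_config.current.object_id]

  # Hypothetical references to three wrapping certificates stored in a
  # regular Key Vault; a quorum of 2 activates the HSM during apply.
  security_domain_key_vault_certificate_ids = [
    azurerm_key_vault_certificate.wrapping[0].id,
    azurerm_key_vault_certificate.wrapping[1].id,
    azurerm_key_vault_certificate.wrapping[2].id,
  ]
  security_domain_quorum = 2
}
```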

How to enable managed identity for the virtual machine scale set on a terraform kubernetes deploy

I am deploying AKS through Terraform.
It's working great, but I would also like to enable identity on the VMSS object in order to allow pod-level managed identity access (mostly to grab keys from key vaults).
I can do this manually by going to the auto-created VMSS object that Azure creates once the AKS cluster is launched.
However, I do not see an option for this in the Terraform resource.
Has anyone run into this and found a way to pull it off?
My deployment code is like this:
resource "azurerm_kubernetes_cluster" "main" {
  name                = "myaks"
  location            = "centralus"
  resource_group_name = "myrg"
  dns_prefix          = "myaks"
  node_resource_group = "aksmanagedrg"

  default_node_pool {
    name            = "default"
    node_count      = 1
    vm_size         = "Standard_B2ms"
    vnet_subnet_id  = "myakssubnetid"
    os_disk_size_gb = 128
  }

  identity {
    type = "SystemAssigned"
  }

  addon_profile {
    aci_connector_linux {
      enabled = false
    }
    azure_policy {
      enabled = false
    }
    http_application_routing {
      enabled = false
    }
    kube_dashboard {
      enabled = true
    }
    oms_agent {
      enabled = false
    }
  }

  network_profile {
    network_plugin    = "azure"
    load_balancer_sku = "standard"
  }
}
Thanks!
It seems you're looking for pod-managed identities in Azure Kubernetes Service. If so, then, unfortunately, Terraform does not seem to support configuring that property. When you follow the article above to configure pod-managed identities, you can see the pod identity profile on the cluster, but there is no option in Terraform to configure it. Instead, you can run the Azure CLI from Terraform via null_resource and the local-exec provisioner. Here is an example:
resource "null_resource" "aks_update" {
  provisioner "local-exec" {
    command = "az aks update --resource-group ${azurerm_resource_group.aks.name} --name ${azurerm_kubernetes_cluster.aks.name} --enable-pod-identity"
  }
}

resource "null_resource" "aks_add_poidentity" {
  provisioner "local-exec" {
    command = "az aks pod-identity add --resource-group ${azurerm_resource_group.aks.name} --cluster-name ${azurerm_kubernetes_cluster.aks.name} --namespace ${var.pod_identity_namespace} --name ${azurerm_user_assigned_identity.aks.name} --identity-resource-id ${azurerm_user_assigned_identity.aks.id}"
  }

  # Make sure pod identity is enabled on the cluster before adding one.
  depends_on = [null_resource.aks_update]
}
This could be a way to enable identity at the pod level for AKS.
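The commands above assume a user-assigned identity resource already exists in the configuration; a minimal sketch of what that definition might look like (the name and resource group references are illustrative):

```hcl
# Hypothetical definition of the identity referenced by the
# "az aks pod-identity add" command above.
resource "azurerm_user_assigned_identity" "aks" {
  name                = "aks-pod-identity"
  resource_group_name = azurerm_resource_group.aks.name
  location            = azurerm_resource_group.aks.location
}
```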

Azure Container Registry Permissions Using Terraform

Setting up an azurerm_container_registry with Terraform, I was wondering how I can change the permissions for certain users (e.g. read-only), or perhaps create an access key which can be used from my CI pipeline but does not require a user at all.
This Terraform configuration creates an ACR registry and an Azure Service Principal,
and grants the SP Contributor access to the ACR registry. This can be updated to Reader.
More information can be found on ACR auth with service principals here.
resource "azurerm_resource_group" "acr-rg" {
  name     = "acr-rg-007"
  location = "West US"
}

resource "azurerm_container_registry" "acr" {
  name                = "acr00722"
  resource_group_name = "${azurerm_resource_group.acr-rg.name}"
  location            = "${azurerm_resource_group.acr-rg.location}"
  sku                 = "Standard"
}

resource "azurerm_azuread_application" "acr-app" {
  name = "acr-app"
}

resource "azurerm_azuread_service_principal" "acr-sp" {
  application_id = "${azurerm_azuread_application.acr-app.application_id}"
}

resource "azurerm_azuread_service_principal_password" "acr-sp-pass" {
  service_principal_id = "${azurerm_azuread_service_principal.acr-sp.id}"
  value                = "Password12"
  end_date             = "2020-01-01T01:02:03Z"
}

resource "azurerm_role_assignment" "acr-assignment" {
  scope                = "${azurerm_container_registry.acr.id}"
  role_definition_name = "Contributor"
  principal_id         = "${azurerm_azuread_service_principal_password.acr-sp-pass.service_principal_id}"
}

output "docker" {
  value = "docker login ${azurerm_container_registry.acr.login_server} -u ${azurerm_azuread_service_principal.acr-sp.application_id} -p ${azurerm_azuread_service_principal_password.acr-sp-pass.value}"
}
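For a CI pipeline that only needs to pull images, a role narrower than Contributor or Reader can be assigned: AcrPull is a built-in Azure role scoped to registries (AcrPush exists for pushing). A minimal sketch reusing the resources above (my own addition, not from the original answer):

```hcl
# Grant the service principal pull-only access to the registry.
# "AcrPull" is a built-in Azure role for container registries.
resource "azurerm_role_assignment" "acr-pull" {
  scope                = "${azurerm_container_registry.acr.id}"
  role_definition_name = "AcrPull"
  principal_id         = "${azurerm_azuread_service_principal.acr-sp.id}"
}
```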
