I'm getting an error while trying to set up a VM with a Key Vault. This is the part of the code I think is relevant:
resource "azurerm_key_vault_key" "example" {
  name         = "TF-key-example"
  key_vault_id = "${azurerm_key_vault.example.id}"
  key_type     = "RSA"
  key_size     = 2048

  key_opts = [
    "decrypt",
    "encrypt",
    "sign",
    "unwrapKey",
    "verify",
    "wrapKey",
  ]
}

resource "azurerm_disk_encryption_set" "example" {
  name                = "example-set"
  resource_group_name = "${azurerm_resource_group.example.name}"
  location            = "${azurerm_resource_group.example.location}"
  key_vault_key_id    = "${azurerm_key_vault_key.example.id}"

  identity {
    type = "SystemAssigned"
  }
}

resource "azurerm_key_vault_access_policy" "disk-encryption" {
  key_vault_id = "${azurerm_key_vault.example.id}"
  tenant_id    = data.azurerm_client_config.current.tenant_id
  object_id    = data.azurerm_client_config.current.object_id

  key_permissions = [
    "create",
    "get",
    "list",
    "wrapkey",
    "unwrapkey",
  ]

  secret_permissions = [
    "get",
    "list",
  ]
}

resource "azurerm_role_assignment" "disk-encryption-read-keyvault" {
  scope                = "${azurerm_key_vault.example.id}"
  role_definition_name = "Reader"
  principal_id         = "${azurerm_disk_encryption_set.example.identity.0.principal_id}"
}
This is the error I'm getting:
Error: Error creating Linux Virtual Machine "example-vm" (Resource
Group "Encrypt-resources"):
compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request:
StatusCode=400 -- Original Error: Code="KeyVaultAccessForbidden"
Message="Unable to access key vault resource
'https://tf-keyvault-example.vault.azure.net/keys/TF-key-example/*****'
to enable encryption at rest. Please grant get, wrap and unwrap key
permissions to disk encryption set 'example-set'. Please visit
https://aka.ms/keyvaultaccessssecmk for more information."
Where and how should I add the permissions?
As the error says: "Please grant get, wrap and unwrap key permissions to disk encryption set 'example-set'."
Add the following block:
# grant the Managed Identity of the Disk Encryption Set access to Read Data from Key Vault
resource "azurerm_key_vault_access_policy" "disk-encryption" {
  key_vault_id = azurerm_key_vault.example.id

  key_permissions = [
    "get",
    "wrapkey",
    "unwrapkey",
  ]

  tenant_id = azurerm_disk_encryption_set.example.identity.0.tenant_id
  object_id = azurerm_disk_encryption_set.example.identity.0.principal_id
}

# grant the Managed Identity of the Disk Encryption Set "Reader" access to the Key Vault
resource "azurerm_role_assignment" "disk-encryption-read-keyvault" {
  scope                = azurerm_key_vault.example.id
  role_definition_name = "Reader"
  principal_id         = azurerm_disk_encryption_set.example.identity.0.principal_id
}
More about azurerm_key_vault_access_policy and azurerm_role_assignment.
Update:
The issue was caused by not specifying the correct object_id.
Later on, the machine running Terraform was missing the SSH public key file (e.g. "~/.ssh/id_rsa.pub").
Fixed by running this command:
ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
After that, the Key Vault was missing an access policy for the Terraform user.
Besides all that, the sequence of the resources was mixed up; I rearranged them into a more logical order.
The full and working code can be found here.
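The ordering fix can be sketched with an explicit depends_on on the VM. This is an illustrative fragment, not the full working code: the azurerm_linux_virtual_machine resource name and its omitted arguments are assumptions; only the two referenced grants come from the example above.

```hcl
resource "azurerm_linux_virtual_machine" "example" {
  # ... VM arguments omitted for brevity ...

  # Make sure the disk encryption set's Key Vault access exists before the
  # VM (and its encrypted OS disk) is created, otherwise the create call
  # can fail with KeyVaultAccessForbidden.
  depends_on = [
    azurerm_key_vault_access_policy.disk-encryption,
    azurerm_role_assignment.disk-encryption-read-keyvault,
  ]
}
```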
As Amit Baranes pointed out, you need to set the access policy for your encryption set.
In your example above you grant your data source's client identity access to the key vault by way of an access policy. The identity of your encryption set, however, only gets Reader access to the vault by way of a role.
Tucked away in the AzureRM VM resource documentation is this note:
NOTE: The Disk Encryption Set must have the Reader Role Assignment
scoped on the Key Vault - in addition to an Access Policy to the Key
Vault
You need to make sure you grant the encryption set's identity both the Reader role and an access policy.
A possible resulting full configuration looks like this, where we give both your service principal and the disk encryption set's identity access to the vault by way of access policies. We also retain the Reader role:
resource "azurerm_key_vault_key" "example" {
  name         = "TF-key-example"
  key_vault_id = "${azurerm_key_vault.example.id}"
  key_type     = "RSA"
  key_size     = 2048

  key_opts = [
    "decrypt",
    "encrypt",
    "sign",
    "unwrapKey",
    "verify",
    "wrapKey",
  ]
}

resource "azurerm_disk_encryption_set" "example" {
  name                = "example-set"
  resource_group_name = "${azurerm_resource_group.example.name}"
  location            = "${azurerm_resource_group.example.location}"
  key_vault_key_id    = "${azurerm_key_vault_key.example.id}"

  identity {
    type = "SystemAssigned"
  }
}

resource "azurerm_key_vault_access_policy" "service-principal" {
  key_vault_id = "${azurerm_key_vault.example.id}"
  tenant_id    = data.azurerm_client_config.current.tenant_id
  object_id    = data.azurerm_client_config.current.object_id

  key_permissions = [
    "create",
    "get",
    "list",
    "wrapkey",
    "unwrapkey",
  ]

  secret_permissions = [
    "get",
    "list",
  ]
}

resource "azurerm_key_vault_access_policy" "encryption-set" {
  key_vault_id = "${azurerm_key_vault.example.id}"
  tenant_id    = azurerm_disk_encryption_set.example.identity.0.tenant_id
  object_id    = azurerm_disk_encryption_set.example.identity.0.principal_id

  key_permissions = [
    "create",
    "get",
    "list",
    "wrapkey",
    "unwrapkey",
  ]

  secret_permissions = [
    "get",
    "list",
  ]
}

resource "azurerm_role_assignment" "disk-encryption-read-keyvault" {
  scope                = "${azurerm_key_vault.example.id}"
  role_definition_name = "Reader"
  principal_id         = "${azurerm_disk_encryption_set.example.identity.0.principal_id}"
}
You would probably want to reduce the access for the service principal; however, I left it as-is for now.
I just noticed the Reader role is no longer sufficient; you now need to use the Key Vault Crypto Service Encryption User role.
resource "azurerm_role_assignment" "disk-encryption-read-keyvault" {
  scope                = "${azurerm_key_vault.example.id}"
  role_definition_name = "Key Vault Crypto Service Encryption User"
  principal_id         = "${azurerm_disk_encryption_set.example.identity.0.principal_id}"
}
Related
I'm trying to change the Key Vault used by my virtual machine in Terraform. When I try to apply the changes, Terraform attempts to replace the virtual machine along with the new Key Vault. How do I change the Key Vault used by the VM, or change the credentials in Terraform, without destroying the virtual machine?
I tried to use lifecycle (prevent_destroy = true), but it then fails with this message:
Error: Instance cannot be destroyed

  on .terraform\modules\avd_vm\Modules\AVD_VM\main.tf line 388:
 388: resource "azurerm_windows_virtual_machine" "acumen_vm_kv" {

Resource module.avd_vm.azurerm_windows_virtual_machine.acumen_vm_kv has lifecycle.prevent_destroy set, but the plan calls for this resource to be destroyed. To avoid this error and continue with the plan, either disable lifecycle.prevent_destroy or reduce the scope of the plan using the -target flag.
I tried to reproduce the same from my end and received the same error:
Resource vmpassword has lifecycle.prevent_destroy set, but the plan calls for this resource to be destroyed. To avoid this error and continue with the plan, either disable lifecycle.prevent_destroy or reduce the scope of the plan using the -target flag.
Note: If you are trying to have two different Key Vaults for the VM, it is better to use another resource block for the new Key Vault.
resource "azurerm_key_vault_secret" "vmpassword" {
  name         = "vmpassword"
  value        = random_password.vmpassword.result
  key_vault_id = azurerm_key_vault.kv1.id
  depends_on   = [azurerm_key_vault.kv1]

  lifecycle {
    prevent_destroy = true
  }
}
resource "azurerm_key_vault" "kv2" {
  name                        = "kavy-newkv2"
  resource_group_name         = data.azurerm_resource_group.example.name
  location                    = data.azurerm_resource_group.example.location
  enabled_for_disk_encryption = true
  tenant_id                   = data.azurerm_client_config.current.tenant_id
  soft_delete_retention_days  = 7
  purge_protection_enabled    = false
  sku_name                    = "standard"

  access_policy {
    tenant_id = data.azurerm_client_config.current.tenant_id
    object_id = data.azurerm_client_config.current.object_id

    key_permissions = [
      "Get", "Create", "Decrypt", "Delete", "Encrypt", "Update",
    ]

    secret_permissions = [
      "Get", "Backup", "Delete", "List", "Purge", "Recover", "Restore", "Set",
    ]

    storage_permissions = [
      "Get", "Restore", "Set",
    ]
  }
}

# secret in the new key vault (distinct resource name, so it does not
# collide with the existing "vmpassword" resource above)
resource "azurerm_key_vault_secret" "vmpassword_kv2" {
  name         = "vmpassword"
  value        = random_password.vmpassword.result
  key_vault_id = azurerm_key_vault.kv2.id
  depends_on   = [azurerm_key_vault.kv2]
}
Import each resource manually so it shows up in your state file; Terraform tracks each resource individually.
You can ignore changes in the VM, if required:
lifecycle {
  ignore_changes = [
    tags,
  ]
}
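On Terraform 1.5+, the manual import can also be expressed declaratively with an import block. This is a sketch only: the secret identifier below is a placeholder, to be replaced with the real versioned ID from your vault.

```hcl
# Config-driven import (Terraform 1.5+). The id is illustrative; fetch the
# actual secret ID (including its version) from the Key Vault.
import {
  to = azurerm_key_vault_secret.vmpassword
  id = "https://kavy-newkv2.vault.azure.net/secrets/vmpassword/00000000000000000000000000000000"
}
```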
Reference:
terraform-lifecycle-prevent-destroy | StackOverflow
https://developer.hashicorp.com/terraform/language/meta-arguments/lifecycle#prevent_destroy
I am trying to create a Key Vault on Azure using Terraform, run as my service principal user:
data "azurerm_client_config" "current" {}
resource "azurerm_key_vault" "key_vault" {
  name                        = "${var.project_name}-keyvault"
  location                    = var.resource_group_location
  resource_group_name         = var.resource_group_name
  enabled_for_disk_encryption = true
  tenant_id                   = data.azurerm_client_config.current.tenant_id
  soft_delete_retention_days  = 7
  purge_protection_enabled    = false
  sku_name                    = "standard"
}

resource "azurerm_key_vault_access_policy" "access_policy" {
  key_vault_id = azurerm_key_vault.key_vault.id
  tenant_id    = data.azurerm_client_config.current.tenant_id
  object_id    = data.azurerm_client_config.current.object_id

  secret_permissions = [
    "Set", "Get", "Delete", "Purge", "List",
  ]
}

resource "azurerm_key_vault_secret" "client_id" {
  name         = "client-id"
  value        = var.client_id_value
  key_vault_id = azurerm_key_vault.key_vault.id
}

resource "azurerm_key_vault_secret" "client_secret" {
  name         = "client-secret"
  value        = var.client_secret_value
  key_vault_id = azurerm_key_vault.key_vault.id
}

resource "azurerm_key_vault_secret" "subscription_id" {
  name         = "subscription-id"
  value        = var.subscription_id_value
  key_vault_id = azurerm_key_vault.key_vault.id
}

resource "azurerm_key_vault_secret" "tenant_id" {
  name         = "tenant-id"
  value        = var.tenant_id_value
  key_vault_id = azurerm_key_vault.key_vault.id
}
But I get this error:
Error: checking for presence of existing Secret "client-id" (Key Vault "https://formulaeinsdef-keyvault.vault.azure.net/"): keyvault.BaseClient#GetSecret:
Failure responding to request: StatusCode=403 -- Original Error: autorest/azure:
Service returned an error. Status=403 Code="Forbidden" Message="The user, group or application 'appid=***;oid=32d24355-0d93-476d-a775-6882d5a22e0b;iss=https://sts.windows.net/***/' does not have secrets get permission on key vault 'formulaeinsdef-keyvault;location=westeurope'.
For help resolving this issue, please see https://go.microsoft.com/fwlink/?linkid=2125287" InnerError={"code":"AccessDenied"}
The above code creates the Key Vault successfully, but it fails to add the secrets inside it.
My service principal has the Contributor role, which I think should be enough to GET and SET keys.
I tried giving my service principal the Reader or even Owner permission, but it did not help.
I also checked this question, but it is not helping me.
I checked the Access Policies tab and I have the permissions to Set, Get, Delete, Purge, and List.
Each of the secrets needs an explicit dependency on the access policy; otherwise, Terraform may attempt to create the secret before the access policy exists.
resource "azurerm_key_vault_secret" "client_id" {
  name         = "client-id"
  value        = var.client_id_value
  key_vault_id = azurerm_key_vault.key_vault.id

  # Explicit dependency
  depends_on = [
    azurerm_key_vault_access_policy.access_policy
  ]
}
Alternatively, moving the access policy definition into the key vault block would make the explicit dependencies unnecessary:
resource "azurerm_key_vault" "key_vault" {
  # Content omitted for brevity
  # ...

  access_policy {
    tenant_id = data.azurerm_client_config.current.tenant_id
    object_id = data.azurerm_client_config.current.object_id

    secret_permissions = [
      "Set", "Get", "Delete", "Purge", "List",
    ]
  }
}
I have the below requirements:
Rotate the storage account access keys (both primary_access_key and secondary_access_key) via Terraform.
Add the regenerated keys as new versions of the secrets created in Key Vault, for both the primary and secondary access keys.
resource "azurerm_storage_account" "example" {
  name                          = "storageaccrotatekeys"
  resource_group_name           = "accessrotate"
  location                      = "East US"
  account_tier                  = "Standard"
  account_replication_type      = "LRS"
  public_network_access_enabled = false
}
The azurerm_storage_account resource above only exposes primary_access_key and secondary_access_key as (sensitive) exported attributes. I couldn't find any option to rotate the keys. Please help.
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/storage_account#import
Rotating the access keys may not be directly possible with Terraform, AFAIK, but check the customer_managed_key block that can be given in the azurerm_storage_account resource, where auto-rotation can be enabled via a Key Vault key ID and version. This block contains the optional argument key_version, which pins the Key Vault key to a specific version; to enable automatic key rotation, omit this option.
To rotate manually, specify the version in key_version.
If the separate azurerm_storage_account_customer_managed_key resource is used instead, its required argument key_vault_key_id accepts a version-less key ID, which enables auto-rotation of the key.
Note: customer_managed_key can only be used when account_kind is set to StorageV2, or when the identity type is UserAssigned.
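The auto-rotation variant can be sketched as below. This is an illustrative fragment, not a complete configuration: the storage account name and the azurerm_user_assigned_identity.example and azurerm_key_vault_key.example resources are assumed to exist elsewhere.

```hcl
resource "azurerm_storage_account" "cmk_example" {
  name                     = "examplecmkstor" # illustrative name
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
  account_kind             = "StorageV2"

  identity {
    type         = "UserAssigned"
    identity_ids = [azurerm_user_assigned_identity.example.id]
  }

  customer_managed_key {
    # versionless_id pins no key version, so the account follows new
    # key versions automatically (auto-rotation)
    key_vault_key_id          = azurerm_key_vault_key.example.versionless_id
    user_assigned_identity_id = azurerm_user_assigned_identity.example.id
  }
}
```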
Code from azurerm_storage_account_customer_managed_key | Resources | hashicorp/azurerm | Terraform Registry:
provider "azurerm" {
  features {
    resource_group {
      prevent_deletion_if_contains_resources = false
    }
  }
}

resource "azurerm_resource_group" "example" {
  name     = "<resource group>"
  location = "westus2"
}

provider "azurerm" {
  features {}
  alias = "cloud_operations"
}

data "azurerm_client_config" "current" {}

resource "azurerm_key_vault" "example" {
  name                     = "ka-examplekv"
  location                 = azurerm_resource_group.example.location
  resource_group_name      = azurerm_resource_group.example.name
  tenant_id                = data.azurerm_client_config.current.tenant_id
  sku_name                 = "standard"
  purge_protection_enabled = true
}

resource "azurerm_key_vault_access_policy" "storage" {
  key_vault_id       = azurerm_key_vault.example.id
  tenant_id          = data.azurerm_client_config.current.tenant_id
  object_id          = azurerm_storage_account.example.identity.0.principal_id
  key_permissions    = ["Get", "Create", "List", "Restore", "Recover", "UnwrapKey", "WrapKey", "Purge", "Encrypt", "Decrypt", "Sign", "Verify"]
  secret_permissions = ["Get"]
}

resource "azurerm_key_vault_access_policy" "client" {
  key_vault_id       = azurerm_key_vault.example.id
  tenant_id          = data.azurerm_client_config.current.tenant_id
  object_id          = data.azurerm_client_config.current.object_id
  key_permissions    = ["Get", "Create", "Delete", "List", "Restore", "Recover", "UnwrapKey", "WrapKey", "Purge", "Encrypt", "Decrypt", "Sign", "Verify"]
  secret_permissions = ["Get", "List"]
}

resource "azurerm_key_vault_key" "example" {
  name         = "ka-tfexkey"
  key_vault_id = azurerm_key_vault.example.id
  key_type     = "RSA"
  key_size     = 2048
  key_opts     = ["decrypt", "encrypt", "sign", "unwrapKey", "verify", "wrapKey"]

  depends_on = [
    azurerm_key_vault_access_policy.client,
    azurerm_key_vault_access_policy.storage,
  ]
}

resource "azurerm_storage_account" "example" {
  name                     = "kaexamplestor"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "GRS"

  identity {
    type = "SystemAssigned"
  }
}

resource "azurerm_storage_account_customer_managed_key" "example" {
  storage_account_id = azurerm_storage_account.example.id
  key_vault_id       = azurerm_key_vault.example.id
  key_name           = azurerm_key_vault_key.example.name
}
Also check the time_rotating resource, which stores a rotating UTC timestamp in the Terraform state and triggers re-creation of dependent resources when the current time passes the configured rotation time. Note that this only happens when Terraform is actually executed.
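A minimal sketch of that pattern, assuming the hashicorp/time provider and the azurerm_key_vault.example / azurerm_storage_account.example resources from above; names here are illustrative:

```hcl
# Rotates every 30 days; resources that reference its timestamp see a
# change (and are updated/re-created) on the first apply after expiry.
resource "time_rotating" "key_rotation" {
  rotation_days = 30
}

# Write the current primary access key into Key Vault; the tag ties the
# secret to the rotation clock so drift is picked up on schedule.
resource "azurerm_key_vault_secret" "primary_access_key" {
  name         = "storage-primary-access-key"
  value        = azurerm_storage_account.example.primary_access_key
  key_vault_id = azurerm_key_vault.example.id

  tags = {
    rotated = time_rotating.key_rotation.rfc3339
  }
}
```

Note this only re-publishes the key on a schedule; actually regenerating the storage account keys still needs an out-of-band step (e.g. az storage account keys renew), since the provider has no resource for that.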
Reference:
customer_managed_key in azurerm_storage_account | Resources | hashicorp/azurerm | Terraform Registry
I have Terraform code that deploys an Azure Key Vault:
data "azurerm_client_config" "current" {}
resource "azurerm_key_vault" "keyvault" {
  name                = "${local.environment}"
  resource_group_name = azurerm_resource_group.rg.name
  tenant_id           = data.azurerm_client_config.current.tenant_id
  sku_name            = "standard"

  access_policy {
    tenant_id = data.azurerm_client_config.current.tenant_id
    object_id = data.azurerm_client_config.current.object_id

    key_permissions = [
      # List of key permissions...
    ]
    # All permissions listed currently.

    secret_permissions = [
      # List of secret permissions...
    ]

    storage_permissions = [
      # List of storage permissions...
    ]
  }
}
I have certain code that runs under a different principal when deploying this configuration. So data.azurerm_client_config.current.object_id (i.e., the object ID of a user, service principal, or security group in the Azure Active Directory tenant for the vault) is different inside that code, and the secrets are therefore inaccessible to it.
How can I amend the access_policy so that different users/service principals can access the same Key Vault simultaneously?
You need to use the azurerm_key_vault_access_policy resource. So you'd change your code to:
resource "azurerm_key_vault" "keyvault" { .... }

// add one of these for each user
resource "azurerm_key_vault_access_policy" "kvapta" {
  key_vault_id = azurerm_key_vault.keyvault.id
  tenant_id    = var.identity.tenant_id
  object_id    = var.identity.principal_id

  certificate_permissions = []
  key_permissions         = []
  secret_permissions      = []
  storage_permissions     = []
}
I tried to provision a Key Vault secret with Terraform, defining the access policy as below, but I get permission issues.
resource "azurerm_key_vault" "keyvault1" {
  name                            = "${local.key_vault_one_name}"
  location                        = "${local.location_name}"
  resource_group_name             = "${azurerm_resource_group.keyvault.name}"
  enabled_for_disk_encryption     = false
  enabled_for_template_deployment = true
  tenant_id                       = "${data.azurerm_client_config.current.tenant_id}"

  sku {
    name = "standard"
  }

  access_policy {
    tenant_id      = "${data.azurerm_client_config.current.tenant_id}"
    object_id      = "${data.azurerm_client_config.current.service_principal_object_id}"
    application_id = "${data.azurerm_client_config.current.client_id}"

    key_permissions = [
      "get", "list", "update", "create", "import", "delete", "recover", "backup", "restore"
    ]

    secret_permissions = [
      "get", "list", "delete", "recover", "backup", "restore", "set"
    ]

    certificate_permissions = [
      "get", "list", "update", "create", "import", "delete", "recover", "backup", "restore", "deleteissuers", "getissuers", "listissuers", "managecontacts", "manageissuers", "setissuers"
    ]
  }
}

# Create Key Vault Secrets
resource "azurerm_key_vault_secret" "test1" {
  name  = "db-username"
  value = "bmipimadmin"
  //vault_uri  = "${azurerm_key_vault.keyvault1.vault_uri}"
  key_vault_id = "${azurerm_key_vault.keyvault1.id}"
}
I get the below error when running terraform apply, even though the service principal has all the access required to work with Key Vault.
1 error occurred:
* azurerm_key_vault_secret.test1: 1 error occurred:
* azurerm_key_vault_secret.test1: keyvault.BaseClient#SetSecret: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="Forbidden" Message="Access denied" InnerError={"code":"AccessDenied"}
I can reproduce your issue: you are missing a comma at the end of the permissions lists. In this case, you just need to specify tenant_id and object_id when you run terraform apply as the service principal. Before this, the service principal should be granted an RBAC role (like the Contributor role) on your Azure Key Vault resource. See more details here.
For example, this works for me:
access_policy {
  tenant_id = "${data.azurerm_client_config.current.tenant_id}"
  object_id = "${data.azurerm_client_config.current.service_principal_object_id}"

  key_permissions = [
    "get", "list", "update", "create", "import", "delete", "recover", "backup", "restore",
  ]

  secret_permissions = [
    "get", "list", "delete", "recover", "backup", "restore", "set",
  ]

  certificate_permissions = [
    "get", "list", "update", "create", "import", "delete", "recover", "backup", "restore", "deleteissuers", "getissuers", "listissuers", "managecontacts", "manageissuers", "setissuers",
  ]
}
Ref: https://www.terraform.io/docs/providers/azurerm/r/key_vault.html#access_policy