I have a Terraform configuration for an Azure Key Vault:
resource "azurerm_key_vault" "key_vault" {
# ...
network_acls {
default_action = "Deny"
ip_rules = ["MY_IP_ADDRESS"]
bypass = "AzureServices"
}
}
resource "azurerm_key_vault_access_policy" "application" {
key_vault_id = azurerm_key_vault.key_vault.id
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
certificate_permissions = local.permissions_certificates_all
key_permissions = local.permissions_keys_all
secret_permissions = local.permissions_secrets_all
storage_permissions = local.permissions_storage_all
}
When I attempt to add an azurerm_key_vault_secret to the Key Vault created above, it fails with the error message:
Service returned an error. Status=403 Code="Forbidden" Message="The user, group or application 'appid=ID;oid=ID;iss=https://sts.windows.net/ID/' does not have secrets get permission on key vault 'KEY_VAULT_NAME;location=eastus2'
When I run terraform apply again, it works just fine.
I tried adding a time_sleep of 10 minutes (with the depends_on entries needed to make it run at the right point) to see if it would resolve this, and it did not.
It seems, however, that the fix would be to somehow make Terraform re-authenticate so that the new permissions get picked up.
Is there a way to do this in a Terraform file with the azurerm provider, or to request re-authentication generically? I did not see anything about it in the documentation.
Thanks!
Terraform Version Data:
Terraform v1.1.2
on linux_amd64
+ provider registry.terraform.io/hashicorp/azuread v2.13.0
+ provider registry.terraform.io/hashicorp/azurerm v2.90.0
+ provider registry.terraform.io/hashicorp/random v3.1.0
+ provider registry.terraform.io/microsoft/azuredevops v0.1.8
Did you try seeing what happens when you state the permissions explicitly?
In your case, as a starting point, grant just Get:
resource "azurerm_key_vault_access_policy" "application" {
key_vault_id = azurerm_key_vault.key_vault.id
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
certificate_permissions = local.permissions_certificates_all
key_permissions = local.permissions_keys_all
storage_permissions = local.permissions_storage_all
secret_permissions = [
"Get",
]
}
It's a timing issue. Terraform creates resources in parallel, so it tries to add the secret before the access policy is in place. That is why it works the second time.
To avoid this, you can add a
depends_on = [azurerm_key_vault_access_policy.application]
to your secret.
This will make the secret creation wait until the policy is added.
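A minimal sketch of what that looks like, using the resource names from the question (the secret name and value here are placeholders):

resource "azurerm_key_vault_secret" "example" {
  name         = "example-secret" # placeholder
  value        = "example-value"  # placeholder
  key_vault_id = azurerm_key_vault.key_vault.id

  # Wait for the access policy so the provider's get/set calls are authorized
  depends_on = [azurerm_key_vault_access_policy.application]
}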
Related
I'm trying to change the key vault used by my virtual machine in Terraform. When I try to apply the changes, Terraform tries to replace the virtual machine along with the new key vault. How do I just change the key vault used by the VM, or change the credentials in Terraform, without destroying the virtual machine?
I tried to use lifecycle (prevent_destroy = true), but it then fails with this message:
Error: Instance cannot be destroyed

  on .terraform\modules\avd_vm\Modules\AVD_VM\main.tf line 388:
  388: resource "azurerm_windows_virtual_machine" "acumen_vm_kv" {

Resource module.avd_vm.azurerm_windows_virtual_machine.acumen_vm_kv has
lifecycle.prevent_destroy set, but the plan calls for this resource to be
destroyed. To avoid this error and continue with the plan, either disable
lifecycle.prevent_destroy or reduce the scope of the plan using the -target
flag.
I tried to reproduce the same from my end and received the same error:
Resource vmpassword has lifecycle.prevent_destroy set, but the plan calls for this resource to be destroyed. To avoid this error and continue with the plan, either disable lifecycle.prevent_destroy or reduce the scope of the plan using the -target flag.
Note: if you are trying to have two different Key Vaults for the VM, it is better to add another resource block for the new key vault.
resource "azurerm_key_vault_secret" "vmpassword" {
name = "vmpassword"
value = random_password.vmpassword.result
key_vault_id = azurerm_key_vault.kv1.id
depends_on = [ azurerm_key_vault.kv1 ]
lifecycle {
prevent_destroy = true
}
}
resource "azurerm_key_vault" "kv2" {
name = "kavy-newkv2"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
enabled_for_disk_encryption = true
tenant_id = data.azurerm_client_config.current.tenant_id
soft_delete_retention_days = 7
purge_protection_enabled = false
sku_name = "standard"
access_policy {
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
key_permissions = [
"Get","Create", "Decrypt", "Delete", "Encrypt", "Update"
]
secret_permissions = [
"Get", "Backup", "Delete", "List", "Purge", "Recover", "Restore", "Set",
]
storage_permissions = [
"Get", "Restore","Set"
]
}
}
resource "azurerm_key_vault_secret" "vmpassword" {
name = "vmpassword"
value = random_password.vmpassword.result
key_vault_id = azurerm_key_vault.kv1.id
depends_on = [ azurerm_key_vault.kv1 ]
}
Import each resource manually so that it shows up in your state file; Terraform tracks each resource individually.
You can ignore changes on the VM, if required:
lifecycle {
  ignore_changes = [
    tags,
  ]
}
Reference:
terraform-lifecycle-prevent-destroy | StackOverflow
https://developer.hashicorp.com/terraform/language/meta-arguments/lifecycle#prevent_destroy
I am trying to create a Key Vault on Azure using Terraform, running as my service principal user:
data "azurerm_client_config" "current" {}
resource "azurerm_key_vault" "key_vault" {
name = "${var.project_name}-keyvault"
location = var.resource_group_location
resource_group_name = var.resource_group_name
enabled_for_disk_encryption = true
tenant_id = data.azurerm_client_config.current.tenant_id
soft_delete_retention_days = 7
purge_protection_enabled = false
sku_name = "standard"
}
resource "azurerm_key_vault_access_policy" "access_policy" {
key_vault_id = azurerm_key_vault.key_vault.id
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
secret_permissions = [
"Set", "Get", "Delete", "Purge", "List", ]
}
resource "azurerm_key_vault_secret" "client_id" {
name = "client-id"
value = var.client_id_value
key_vault_id = azurerm_key_vault.key_vault.id
}
resource "azurerm_key_vault_secret" "client_secret" {
name = "client-secret"
value = var.client_secret_value
key_vault_id = azurerm_key_vault.key_vault.id
}
resource "azurerm_key_vault_secret" "subscription_id" {
name = "subscription-id"
value = var.subscription_id_value
key_vault_id = azurerm_key_vault.key_vault.id
}
resource "azurerm_key_vault_secret" "tenant_id" {
name = "tenant-id"
value = var.tenant_id_value
key_vault_id = azurerm_key_vault.key_vault.id
}
But I get this error:
Error: checking for presence of existing Secret "client-id" (Key Vault "https://formulaeinsdef-keyvault.vault.azure.net/"): keyvault.BaseClient#GetSecret:
Failure responding to request: StatusCode=403 -- Original Error: autorest/azure:
Service returned an error. Status=403 Code="Forbidden" Message="The user, group or application 'appid=***;oid=32d24355-0d93-476d-a775-6882d5a22e0b;iss=https://sts.windows.net/***/' does not have secrets get permission on key vault 'formulaeinsdef-keyvault;location=westeurope'.
For help resolving this issue, please see https://go.microsoft.com/fwlink/?linkid=2125287" InnerError={"code":"AccessDenied"}
The above code creates the Key Vault successfully, but it fails to add the secrets to it.
My service principal user has the Contributor role, and I think that should be enough to Get and Set keys.
I tried giving my service principal Reader or even Owner permission, but it did not help.
I also checked this question, but it does not help me.
I checked the Access Policies tab, and I have the permissions Set, Get, Delete, Purge, List.
Each of the secrets needs an explicit dependency on the access policy. Otherwise, Terraform may attempt to create the secret before creating the access policy.
resource "azurerm_key_vault_secret" "client_id" {
name = "client-id"
value = var.client_id_value
key_vault_id = azurerm_key_vault.key_vault.id
### Explicit dependency
depends_on = [
azurerm_key_vault_access_policy.access_policy
]
}
Alternatively, moving the access policy definition into the key vault block would make the explicit dependencies unnecessary:
resource "azurerm_key_vault" "key_vault" {
# Content omitted for brevity
.
.
access_policy {
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
secret_permissions = [
"Set", "Get", "Delete", "Purge", "List", ]
}
}
I have Terraform code that deploys an Azure Key Vault:
data "azurerm_client_config" "current" {}
resource "azurerm_key_vault" "keyvault" {
name = "${local.environment}"
resource_group_name = azurerm_resource_group.rg.name
tenant_id = data.azurerm_client_config.current.tenant_id
sku_name = "standard"
access_policy {
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
key_permissions = [
# List of key permissions...
]
# All permissions listed currently.
secret_permissions = [
# List of secret permissions...
]
storage_permissions = [
# List of storage permissions...
]
}
}
I have some other code that runs under a different principal than the one used when deploying this code. So data.azurerm_client_config.current.object_id (i.e. the object ID of a user, service principal, or security group in the Azure Active Directory tenant for the vault) would be different inside that code, and the secrets are therefore inaccessible to it.
How can I amend the access_policy so that different users/service principals can access the same Key Vault simultaneously?
You need to use the azurerm_key_vault_access_policy resource. So you'd change your code to:
resource "azurerm_key_vault" "keyvault" {....}
//add one of these for each user
resource "azurerm_key_vault_access_policy" "kvapta" {
key_vault_id = azurerm_key_vault.keyvault.id
tenant_id = var.identity.tenant_id
object_id = var.identity.principal_id
certificate_permissions = []
key_permissions = [
]
secret_permissions =[]
storage_permissions = [
]
}
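If you have more than a couple of identities, a for_each over a map of object IDs keeps this manageable. A minimal sketch, assuming a hypothetical var.reader_object_ids variable and the data.azurerm_client_config.current data source from the question (the permissions are illustrative only):

# Hypothetical input: friendly name => Azure AD object ID
variable "reader_object_ids" {
  type = map(string)
}

# One access policy per object ID, each granting read access to secrets
resource "azurerm_key_vault_access_policy" "readers" {
  for_each     = var.reader_object_ids
  key_vault_id = azurerm_key_vault.keyvault.id
  tenant_id    = data.azurerm_client_config.current.tenant_id
  object_id    = each.value

  secret_permissions = ["Get", "List"]
}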
I have the following code to create an Azure Key Vault:
resource "azurerm_key_vault" "key_vault" {
name = "example"
location = var.location
resource_group_name = azurerm_resource_group.resource_group.name
sku_name = "standard"
tenant_id = var.tenant_id
access_policy {
tenant_id = var.tenant_id
object_id = azurerm_user_assigned_identity.user_assigned_identity.principal_id
secret_permissions = [
"get",
]
}
}
Assume the following scenario:
deploy infrastructure using terraform
change key vault access policies manually
redeploy using terraform
Terraform will remove access policies that were created manually.
Is there a way to tell Terraform not to remove the existing access policies?
I have an Azure KeyVault with 4 Access Policies. Each Access Policy has its own unique ObjectId.
In trying to import our legacy Azure resources into a Terraform configuration, I've therefore created a Terraform block like the one below.
resource "azurerm_key_vault" "example" {
name = "examplekeyvault"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
tenant_id = data.azurerm_client_config.current.tenant_id
sku_name = "premium"
}
resource "azurerm_key_vault_access_policy" "policy1" {
key_vault_id = azurerm_key_vault.example.id
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = 001
key_permissions = [
"Get",
]
secret_permissions = [
"Get",
]
}
The above worked okay, and I was able to import "policy1" successfully.
However, when I then replicated the policy block and appended the next policy, as below, Terraform does not appear to accept it as a properly formed configuration. My intention is obviously to import all four policies (if that is possible).
resource "azurerm_key_vault" "example" {
name = "examplekeyvault"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
tenant_id = data.azurerm_client_config.current.tenant_id
sku_name = "premium"
}
resource "azurerm_key_vault_access_policy" "policy1" {
key_vault_id = azurerm_key_vault.example.id
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = 001
key_permissions = [
"Get",
]
secret_permissions = [
"Get",
]
}
resource "azurerm_key_vault_access_policy" "policy2" {
key_vault_id = azurerm_key_vault.example.id
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = 002
key_permissions = [
"Get",
]
secret_permissions = [
"Get",
]
}
In both of the above illustrations, I've only used dummy ObjectIds.
Am I doing this entirely the wrong way, or is it just not possible to import multiple policies into one Terraform configuration? The Terraform Registry documentation, meanwhile, says Azure permits a maximum of 1024 access policies per Key Vault.
In the end, my proposed solution of simply appending additional azurerm_key_vault_access_policy blocks, as depicted in my second code snippet above, appeared to work: the subsequent terraform plan and terraform apply ran without any errors.
I can therefore only conclude that appending those additional policy blocks was the correct approach after all.
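For anyone doing the same import: each azurerm_key_vault_access_policy is imported with its own ID, which (per the azurerm provider documentation) is the Key Vault's resource ID followed by /objectId/<object-id>. A sketch with placeholder subscription, resource group, and object ID values:

terraform import azurerm_key_vault_access_policy.policy1 "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-rg/providers/Microsoft.KeyVault/vaults/examplekeyvault/objectId/11111111-1111-1111-1111-111111111111"
terraform import azurerm_key_vault_access_policy.policy2 "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-rg/providers/Microsoft.KeyVault/vaults/examplekeyvault/objectId/22222222-2222-2222-2222-222222222222"

Repeat for policy3 and policy4 with their respective object IDs.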