I have an Azure KeyVault with 4 Access Policies. Each Access Policy has its own unique ObjectId.
In trying to import our legacy Azure resources into a Terraform configuration, I have therefore created a Terraform block like the one below.
resource "azurerm_key_vault" "example" {
name = "examplekeyvault"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
tenant_id = data.azurerm_client_config.current.tenant_id
sku_name = "premium"
}
resource "azurerm_key_vault_access_policy" "policy1" {
key_vault_id = azurerm_key_vault.example.id
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = 001
key_permissions = [
"Get",
]
secret_permissions = [
"Get",
]
}
The above worked okay, and I was able to import "policy1" successfully.
However, when I replicated the policy block and appended the next policy, as shown below, Terraform did not appear to accept it as a properly formed configuration. My intention is obviously to import all four policies (if that is possible).
resource "azurerm_key_vault" "example" {
name = "examplekeyvault"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
tenant_id = data.azurerm_client_config.current.tenant_id
sku_name = "premium"
}
resource "azurerm_key_vault_access_policy" "policy1" {
key_vault_id = azurerm_key_vault.example.id
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = 001
key_permissions = [
"Get",
]
secret_permissions = [
"Get",
]
}
resource "azurerm_key_vault_access_policy" "policy2" {
key_vault_id = azurerm_key_vault.example.id
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = 002
key_permissions = [
"Get",
]
secret_permissions = [
"Get",
]
}
In both of the above illustrations, I've only used dummy ObjectIds.
Am I doing this entirely the wrong way, or is it just not possible to import multiple policies into one Terraform configuration? The Terraform Registry documentation, meanwhile, says Azure permits a maximum of 1024 Access Policies per Key Vault.
In the end, my proposed solution of simply appending additional policy blocks, as depicted in my second code snippet above, turned out to work: the subsequent terraform plan and terraform apply completed without any errors reported.
I can therefore only conclude that appending those additional policy blocks was the correct solution after all.
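For completeness, each azurerm_key_vault_access_policy is imported individually with an ID of the form <key-vault-resource-ID>/objectId/<object-id>, per the provider documentation. A minimal sketch, using placeholder subscription, resource group, and object IDs:

terraform import azurerm_key_vault_access_policy.policy1 "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-rg/providers/Microsoft.KeyVault/vaults/examplekeyvault/objectId/11111111-1111-1111-1111-111111111111"
terraform import azurerm_key_vault_access_policy.policy2 "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-rg/providers/Microsoft.KeyVault/vaults/examplekeyvault/objectId/22222222-2222-2222-2222-222222222222"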
Related
I'm trying to change the key vault used by my virtual machine in Terraform. When I try to apply the changes, Terraform attempts to replace the virtual machine along with the new key vault. How do I change the key vault used by the VM, or change the credentials, in Terraform without destroying the virtual machine?
I tried to use lifecycle (prevent_destroy = true), but it then fails with this message:
Error: Instance cannot be destroyed

  on .terraform\modules\avd_vm\Modules\AVD_VM\main.tf line 388:
 388: resource "azurerm_windows_virtual_machine" "acumen_vm_kv" {

Resource module.avd_vm.azurerm_windows_virtual_machine.acumen_vm_kv has
lifecycle.prevent_destroy set, but the plan calls for this resource to be
destroyed. To avoid this error and continue with the plan, either disable
lifecycle.prevent_destroy or reduce the scope of the plan using the -target
flag.
I tried to reproduce the same scenario on my end and received the same error:
Resource vmpassword has lifecycle.prevent_destroy set, but the plan calls for this resource to be destroyed. To avoid this error and continue with the plan, either disable lifecycle.prevent_destroy or reduce the scope of the plan using the -target flag.
Note: If you are trying to have two different Key Vaults for the VM, it would be better to use another resource block for the new key vault.
resource "azurerm_key_vault_secret" "vmpassword" {
name = "vmpassword"
value = random_password.vmpassword.result
key_vault_id = azurerm_key_vault.kv1.id
depends_on = [ azurerm_key_vault.kv1 ]
lifecycle {
prevent_destroy = true
}
}
resource "azurerm_key_vault" "kv2" {
name = "kavy-newkv2"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
enabled_for_disk_encryption = true
tenant_id = data.azurerm_client_config.current.tenant_id
soft_delete_retention_days = 7
purge_protection_enabled = false
sku_name = "standard"
access_policy {
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
key_permissions = [
"Get","Create", "Decrypt", "Delete", "Encrypt", "Update"
]
secret_permissions = [
"Get", "Backup", "Delete", "List", "Purge", "Recover", "Restore", "Set",
]
storage_permissions = [
"Get", "Restore","Set"
]
}
}
resource "azurerm_key_vault_secret" "vmpassword" {
name = "vmpassword"
value = random_password.vmpassword.result
key_vault_id = azurerm_key_vault.kv1.id
depends_on = [ azurerm_key_vault.kv1 ]
}
Import each resource manually so that it shows up in your state file; Terraform tracks each resource individually.
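As an illustration, a secret can be imported using its full versioned identifier, which you can copy from the portal. A sketch using the kv2 vault name from above and a placeholder secret version:

terraform import azurerm_key_vault_secret.vmpassword2 "https://kavy-newkv2.vault.azure.net/secrets/vmpassword/00000000000000000000000000000000"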
You can ignore changes on the VM, if required:
lifecycle {
  ignore_changes = [
    tags,
  ]
}
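If it is the credential change that forces the replacement, the same mechanism can target that argument instead. A minimal sketch, assuming (hypothetically) that the VM's admin_password is read from the key vault secret above:

resource "azurerm_windows_virtual_machine" "example" {
  # ... other VM configuration omitted ...
  admin_password = azurerm_key_vault_secret.vmpassword.value

  lifecycle {
    # Do not replace the VM when the password (or its source
    # key vault) changes; rotate the credential out of band.
    ignore_changes = [
      admin_password,
    ]
  }
}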
Reference:
terraform-lifecycle-prevent-destroy | StackOverflow
https://developer.hashicorp.com/terraform/language/meta-arguments/lifecycle#prevent_destroy
I am trying to create a Key Vault on Azure using Terraform, executed as my service principal user:
data "azurerm_client_config" "current" {}
resource "azurerm_key_vault" "key_vault" {
name = "${var.project_name}-keyvault"
location = var.resource_group_location
resource_group_name = var.resource_group_name
enabled_for_disk_encryption = true
tenant_id = data.azurerm_client_config.current.tenant_id
soft_delete_retention_days = 7
purge_protection_enabled = false
sku_name = "standard"
}
resource "azurerm_key_vault_access_policy" "access_policy" {
key_vault_id = azurerm_key_vault.key_vault.id
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
secret_permissions = [
"Set", "Get", "Delete", "Purge", "List", ]
}
resource "azurerm_key_vault_secret" "client_id" {
name = "client-id"
value = var.client_id_value
key_vault_id = azurerm_key_vault.key_vault.id
}
resource "azurerm_key_vault_secret" "client_secret" {
name = "client-secret"
value = var.client_secret_value
key_vault_id = azurerm_key_vault.key_vault.id
}
resource "azurerm_key_vault_secret" "subscription_id" {
name = "subscription-id"
value = var.subscription_id_value
key_vault_id = azurerm_key_vault.key_vault.id
}
resource "azurerm_key_vault_secret" "tenant_id" {
name = "tenant-id"
value = var.tenant_id_value
key_vault_id = azurerm_key_vault.key_vault.id
}
But I get this error:
Error: checking for presence of existing Secret "client-id" (Key Vault "https://formulaeinsdef-keyvault.vault.azure.net/"): keyvault.BaseClient#GetSecret:
Failure responding to request: StatusCode=403 -- Original Error: autorest/azure:
Service returned an error. Status=403 Code="Forbidden" Message="The user, group or application 'appid=***;oid=32d24355-0d93-476d-a775-6882d5a22e0b;iss=https://sts.windows.net/***/' does not have secrets get permission on key vault 'formulaeinsdef-keyvault;location=westeurope'.
For help resolving this issue, please see https://go.microsoft.com/fwlink/?linkid=2125287" InnerError={"code":"AccessDenied"}
The above code creates the Key Vault successfully, but it fails to add the secrets inside it.
My service principal user has the Contributor role, which I thought should be enough to GET and SET keys.
I tried giving my service principal the Reader or even the Owner permission, but it did not help.
I also checked this question, but it is not helping me.
I checked the Access Policies tab, and I have the permissions to Set, Get, Delete, Purge, List.
Each of the secrets needs an explicit dependency on the access policy. Otherwise, Terraform may attempt to create the secret before creating the access policy.
resource "azurerm_key_vault_secret" "client_id" {
name = "client-id"
value = var.client_id_value
key_vault_id = azurerm_key_vault.key_vault.id
### Explicit dependency
depends_on = [
azurerm_key_vault_access_policy.access_policy
]
}
Alternatively, moving the access policy definition into the key vault block would make the explicit dependencies unnecessary:
resource "azurerm_key_vault" "key_vault" {
# Content omitted for brevity
.
.
access_policy {
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
secret_permissions = [
"Set", "Get", "Delete", "Purge", "List", ]
}
}
I have Terraform code that deploys an Azure Key Vault:
data "azurerm_client_config" "current" {}
resource "azurerm_key_vault" "keyvault" {
name = "${local.environment}"
resource_group_name = azurerm_resource_group.rg.name
tenant_id = data.azurerm_client_config.current.tenant_id
sku_name = "standard"
access_policy {
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
key_permissions = [
# List of key permissions...
]
# All permissions listed currently.
secret_permissions = [
# List of secret permissions...
]
storage_permissions = [
# List of storage permissions...
]
}
}
I have certain code that runs under a different principal than the one used when deploying this configuration. So data.azurerm_client_config.current.object_id (that is, the object ID of a user, service principal, or security group in the Azure Active Directory tenant for the vault) would be different inside that code, and the secrets are therefore inaccessible to it.
How can I amend the access_policy so that different users/service principals can access the same Key Vault simultaneously?
You need to use the azurerm_key_vault_access_policy resource. So you'd change your code to:
resource "azurerm_key_vault" "keyvault" {....}
//add one of these for each user
resource "azurerm_key_vault_access_policy" "kvapta" {
key_vault_id = azurerm_key_vault.keyvault.id
tenant_id = var.identity.tenant_id
object_id = var.identity.principal_id
certificate_permissions = []
key_permissions = [
]
secret_permissions =[]
storage_permissions = [
]
}
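Since one of these blocks is needed per user, a for_each over a map of principals scales better. A minimal sketch, assuming a hypothetical access_policies variable (the permission list is illustrative):

variable "access_policies" {
  type = map(object({
    tenant_id = string
    object_id = string
  }))
}

resource "azurerm_key_vault_access_policy" "per_user" {
  for_each = var.access_policies

  key_vault_id = azurerm_key_vault.keyvault.id
  tenant_id    = each.value.tenant_id
  object_id    = each.value.object_id

  secret_permissions = [
    "Get", "List",
  ]
}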
What is the default identity type in CosmosDB in Azure?
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cosmosdb_account#default_identity_type
When I run my Terraform plan, default_identity_type is getting updated, but I don't know what it is. Is there a place where I can see this value in the CLI, Resource Manager, or the portal? What property in Azure does this setting correspond to?
Here is what the azurerm doc says:
default_identity_type - (Optional) The default identity for accessing Key Vault. Possible values are FirstPartyIdentity, SystemAssignedIdentity or start with UserAssignedIdentity. Defaults to FirstPartyIdentity.
There is an identity block, but that seems to be a different thing from default_identity_type.
The documentation says it is for using CosmosDB with key vault, but as far as I know, there are no special settings in the CosmosDB resource for using key vault.
The identity block defines the managed identity for the Cosmos DB account (which currently can only be SystemAssigned), while default_identity_type selects which identity the Cosmos DB account uses to access the Key Vault for encryption purposes.
default_identity_type defaults to FirstPartyIdentity, meaning the default identity named "Azure Cosmos DB", shared by all Cosmos DB resources in Azure, is used to access the Key Vault, as in Example 1 below. If you are using the identity block with SystemAssigned, then you can specify SystemAssignedIdentity in the default_identity_type parameter, as shown in Example 2 below.
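As for where this value lives in Azure: it corresponds to the defaultIdentity property of the Microsoft.DocumentDB/databaseAccounts resource, so it should be visible in the account JSON. A sketch using the account names from the examples below (assuming the Azure CLI surfaces the property under this name):

az cosmosdb show --name ansumantest-cosmosdb --resource-group ansumantest-resources --query defaultIdentity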
Example 1:
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = "ansumantest-resources"
location = "eastus"
}
## firstparty identity which is provided by Microsoft
data "azuread_service_principal" "cosmosdb" {
display_name = "Azure Cosmos DB"
}
data "azurerm_client_config" "current" {}
resource "azurerm_key_vault" "example" {
name = "ansumantestkv12"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
tenant_id = data.azurerm_client_config.current.tenant_id
sku_name = "premium"
purge_protection_enabled = true
access_policy {
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
key_permissions = [
"list",
"create",
"delete",
"get",
"update",
]
}
# identity added in access policy
access_policy {
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azuread_service_principal.cosmosdb.id
key_permissions = [
"get",
"unwrapKey",
"wrapKey",
]
}
}
resource "azurerm_key_vault_key" "example" {
name = "ansumantestkey1"
key_vault_id = azurerm_key_vault.example.id
key_type = "RSA"
key_size = 3072
key_opts = [
"decrypt",
"encrypt",
"wrapKey",
"unwrapKey",
]
}
resource "azurerm_cosmosdb_account" "example" {
name = "ansumantest-cosmosdb"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
offer_type = "Standard"
kind = "MongoDB"
key_vault_key_id = azurerm_key_vault_key.example.versionless_id
default_identity_type = "FirstPartyIdentity"
consistency_policy {
consistency_level = "Strong"
}
geo_location {
location = azurerm_resource_group.example.location
failover_priority = 0
}
}
In this method, the identity used for access is the default Azure Cosmos DB service principal, so there won't be any details in the Identity blade; only the Data Encryption blade shows the Key Vault details.
Example 2:
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = "ansumantest-resources"
location = "eastus"
}
data "azurerm_client_config" "current" {}
resource "azurerm_key_vault" "example" {
name = "ansumantestkv12"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
tenant_id = data.azurerm_client_config.current.tenant_id
sku_name = "premium"
purge_protection_enabled = true
access_policy {
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
key_permissions = [
"list",
"create",
"delete",
"get",
"update",
]
}
}
resource "azurerm_key_vault_key" "example" {
name = "ansumantestkey2"
key_vault_id = azurerm_key_vault.example.id
key_type = "RSA"
key_size = 3072
key_opts = [
"decrypt",
"encrypt",
"wrapKey",
"unwrapKey",
]
}
resource "azurerm_cosmosdb_account" "example" {
name = "ansumantest-cosmosdb"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
offer_type = "Standard"
kind = "MongoDB"
key_vault_key_id = azurerm_key_vault_key.example.versionless_id
default_identity_type = "FirstPartyIdentity"
#after deployment change to below
#default_identity_type = "SystemAssignedIdentity"
consistency_policy {
consistency_level = "Strong"
}
##system managed identity for this cosmosdb resource
identity {
type="SystemAssigned"
}
geo_location {
location = azurerm_resource_group.example.location
failover_priority = 0
}
}
#providing access to the system managed identity of cosmosdb to keyvault
resource "azurerm_key_vault_access_policy" "example" {
key_vault_id = azurerm_key_vault.example.id
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = azurerm_cosmosdb_account.example.identity.0.principal_id
key_permissions = [
"get",
"unwrapKey",
"wrapKey",
]
}
In this example, you cannot set default_identity_type = "SystemAssignedIdentity" while provisioning the Cosmos DB account. Once the account is deployed with the default identity type set to FirstPartyIdentity, you can change it to SystemAssignedIdentity and apply the update to the Cosmos DB block using the command below:
terraform apply -target="azurerm_cosmosdb_account.example" -auto-approve
I have the following code to create an Azure Key Vault:
resource "azurerm_key_vault" "key_vault" {
name = "example"
location = var.location
resource_group_name = azurerm_resource_group.resource_group.name
sku_name = "standard"
tenant_id = var.tenant_id
access_policy {
tenant_id = var.tenant_id
object_id = azurerm_user_assigned_identity.user_assigned_identity.principal_id
secret_permissions = [
"get",
]
}
}
Assume the following scenario:
1. Deploy the infrastructure using Terraform.
2. Change the Key Vault access policies manually.
3. Redeploy using Terraform.
Terraform will remove the access policies that were created manually.
Is there a way to tell Terraform not to remove the existing access policies?
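One option, sketched below under the assumption that you are happy for Terraform to stop managing the inline policies entirely, is to ignore changes to the access_policy attribute; note this also means Terraform will no longer correct drift in the policy it created:

resource "azurerm_key_vault" "key_vault" {
  # ... configuration as above ...

  lifecycle {
    # Ignore drift in the inline access policies, including
    # policies added or edited manually in the portal.
    ignore_changes = [
      access_policy,
    ]
  }
}

Alternatively, managing each policy as a separate azurerm_key_vault_access_policy resource (as in the earlier answers) means Terraform only reconciles the policies it knows about and leaves manually created ones alone.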