I am trying to create a key vault on Azure using Terraform; the run is performed by my service principal:
data "azurerm_client_config" "current" {}
resource "azurerm_key_vault" "key_vault" {
name = "${var.project_name}-keyvault"
location = var.resource_group_location
resource_group_name = var.resource_group_name
enabled_for_disk_encryption = true
tenant_id = data.azurerm_client_config.current.tenant_id
soft_delete_retention_days = 7
purge_protection_enabled = false
sku_name = "standard"
}
resource "azurerm_key_vault_access_policy" "access_policy" {
key_vault_id = azurerm_key_vault.key_vault.id
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
secret_permissions = [
"Set", "Get", "Delete", "Purge", "List", ]
}
resource "azurerm_key_vault_secret" "client_id" {
name = "client-id"
value = var.client_id_value
key_vault_id = azurerm_key_vault.key_vault.id
}
resource "azurerm_key_vault_secret" "client_secret" {
name = "client-secret"
value = var.client_secret_value
key_vault_id = azurerm_key_vault.key_vault.id
}
resource "azurerm_key_vault_secret" "subscription_id" {
name = "subscription-id"
value = var.subscription_id_value
key_vault_id = azurerm_key_vault.key_vault.id
}
resource "azurerm_key_vault_secret" "tenant_id" {
name = "tenant-id"
value = var.tenant_id_value
key_vault_id = azurerm_key_vault.key_vault.id
}
But I get this error:
Error: checking for presence of existing Secret "client-id" (Key Vault "https://formulaeinsdef-keyvault.vault.azure.net/"): keyvault.BaseClient#GetSecret:
Failure responding to request: StatusCode=403 -- Original Error: autorest/azure:
Service returned an error. Status=403 Code="Forbidden" Message="The user, group or application 'appid=***;oid=32d24355-0d93-476d-a775-6882d5a22e0b;iss=https://sts.windows.net/***/' does not have secrets get permission on key vault 'formulaeinsdef-keyvault;location=westeurope'.
For help resolving this issue, please see https://go.microsoft.com/fwlink/?linkid=2125287" InnerError={"code":"AccessDenied"}
The above code creates the key vault successfully, but it fails to add the secrets to it.
My service principal has the Contributor role, and I think that should be enough to GET and SET secrets.
I tried giving my service principal the Reader or even the Owner role, but that did not help.
I also checked this question, but it did not help me.
I checked the Access Policies tab, and I have the Set, Get, Delete, Purge, and List permissions.
Each of the secrets needs an explicit dependency on the access policy. Otherwise, Terraform may attempt to create the secret before creating the access policy.
resource "azurerm_key_vault_secret" "client_id" {
name = "client-id"
value = var.client_id_value
key_vault_id = azurerm_key_vault.key_vault.id
### Explicit dependency
depends_on = [
azurerm_key_vault_access_policy.access_policy
]
}
Alternatively, moving the access policy definition into the key vault block would make the explicit dependencies unnecessary:
resource "azurerm_key_vault" "key_vault" {
# Content omitted for brevity
.
.
access_policy {
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
secret_permissions = [
"Set", "Get", "Delete", "Purge", "List", ]
}
}
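If you keep the access policy as its own resource and have several secrets to create, a for_each over a single map also works, with one depends_on covering all of them. This is only a sketch; the local.secrets_to_store map below is an assumption built from the question's existing variables:
locals {
  # Hypothetical map of secret names to values, built from the existing variables
  secrets_to_store = {
    "client-id"       = var.client_id_value
    "client-secret"   = var.client_secret_value
    "subscription-id" = var.subscription_id_value
    "tenant-id"       = var.tenant_id_value
  }
}

resource "azurerm_key_vault_secret" "secrets" {
  for_each     = local.secrets_to_store
  name         = each.key
  value        = each.value
  key_vault_id = azurerm_key_vault.key_vault.id

  # One explicit dependency covers every secret created by the for_each
  depends_on = [azurerm_key_vault_access_policy.access_policy]
}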
I have Terraform code that deploys an Azure key vault:
data "azurerm_client_config" "current" {}
resource "azurerm_key_vault" "keyvault" {
name = "${local.environment}"
resource_group_name = azurerm_resource_group.rg.name
tenant_id = data.azurerm_client_config.current.tenant_id
sku_name = "standard"
access_policy {
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
key_permissions = [
# List of key permissions...
]
# All permissions listed currently.
secret_permissions = [
# List of secret permissions...
]
storage_permissions = [
# List of storage permissions...
]
}
}
I have other code that runs under a different principal from the one used when deploying this configuration. So data.azurerm_client_config.current.object_id (that is, the object ID of a user, service principal, or security group in the Azure Active Directory tenant for the vault) would be different inside that code, and the secrets are therefore inaccessible to it.
How can I amend the access_policy so that different users/service principals can access the same key vault simultaneously?
You need to use the azurerm_key_vault_access_policy resource. So you'd change your code to:
resource "azurerm_key_vault" "keyvault" {....}
//add one of these for each user
resource "azurerm_key_vault_access_policy" "kvapta" {
key_vault_id = azurerm_key_vault.keyvault.id
tenant_id = var.identity.tenant_id
object_id = var.identity.principal_id
certificate_permissions = []
key_permissions = [
]
secret_permissions =[]
storage_permissions = [
]
}
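If the set of principals is known up front, one hedged variant is to drive a policy per principal from a map variable. The kv_principals variable below is an assumption, not part of the original code, and the permissions are only illustrative:
variable "kv_principals" {
  # Hypothetical map: logical name => identity to grant access to
  type = map(object({
    tenant_id = string
    object_id = string
  }))
}

resource "azurerm_key_vault_access_policy" "per_principal" {
  for_each = var.kv_principals

  key_vault_id = azurerm_key_vault.keyvault.id
  tenant_id    = each.value.tenant_id
  object_id    = each.value.object_id

  # Adjust to what each principal actually needs
  secret_permissions = ["Get", "List"]
}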
What is the default identity type in CosmosDB in Azure?
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cosmosdb_account#default_identity_type
When I run my Terraform plan, the default_identity_type is getting updated, but I don't know what that is. Is there a place where I can see this value in the CLI, resource manager or the portal? What property in Azure does this setting correspond to?
Here is what the azurerm doc says:
default_identity_type - (Optional) The default identity for accessing Key Vault. Possible values are FirstPartyIdentity, SystemAssignedIdentity or start with UserAssignedIdentity. Defaults to FirstPartyIdentity.
There is an identity block, but that seems to be a different thing from default_identity_type.
The documentation says it is for using CosmosDB with key vault, but as far as I know, there are no special settings in the CosmosDB resource for using key vault.
The identity block defines the managed identity for the Cosmos DB account (which currently can only be SystemAssigned), while default_identity_type selects which identity the Cosmos DB account uses to access the key vault for encryption purposes.
default_identity_type defaults to FirstPartyIdentity, which means the built-in Azure Cosmos DB first-party identity (shared by all Cosmos DB resources in Azure) is used to access the key vault, as in example 1 below. If you are using the identity block with SystemAssigned, you can set SystemAssignedIdentity in the default_identity_type parameter, as shown in example 2 below.
Example 1:
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = "ansumantest-resources"
location = "eastus"
}
## firstparty identity which is provided by Microsoft
data "azuread_service_principal" "cosmosdb" {
display_name = "Azure Cosmos DB"
}
data "azurerm_client_config" "current" {}
resource "azurerm_key_vault" "example" {
name = "ansumantestkv12"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
tenant_id = data.azurerm_client_config.current.tenant_id
sku_name = "premium"
purge_protection_enabled = true
access_policy {
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
key_permissions = [
"list",
"create",
"delete",
"get",
"update",
]
}
# identity added in access policy
access_policy {
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azuread_service_principal.cosmosdb.id
key_permissions = [
"get",
"unwrapKey",
"wrapKey",
]
}
}
resource "azurerm_key_vault_key" "example" {
name = "ansumantestkey1"
key_vault_id = azurerm_key_vault.example.id
key_type = "RSA"
key_size = 3072
key_opts = [
"decrypt",
"encrypt",
"wrapKey",
"unwrapKey",
]
}
resource "azurerm_cosmosdb_account" "example" {
name = "ansumantest-cosmosdb"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
offer_type = "Standard"
kind = "MongoDB"
key_vault_key_id = azurerm_key_vault_key.example.versionless_id
default_identity_type = "FirstPartyIdentity"
consistency_policy {
consistency_level = "Strong"
}
geo_location {
location = azurerm_resource_group.example.location
failover_priority = 0
}
}
With this method, the identity used for access is the default Azure Cosmos DB service principal, so nothing appears in the Identity blade; only the Data Encryption blade shows the key vault details.
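As for where to see the value: it surfaces as the defaultIdentity property on the Cosmos DB account (visible in the account's resource JSON), so, assuming the Azure CLI is available, a query along these lines should print it for the account from the example above:
az cosmosdb show \
  --name ansumantest-cosmosdb \
  --resource-group ansumantest-resources \
  --query defaultIdentity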
Example 2:
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = "ansumantest-resources"
location = "eastus"
}
data "azurerm_client_config" "current" {}
resource "azurerm_key_vault" "example" {
name = "ansumantestkv12"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
tenant_id = data.azurerm_client_config.current.tenant_id
sku_name = "premium"
purge_protection_enabled = true
access_policy {
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
key_permissions = [
"list",
"create",
"delete",
"get",
"update",
]
}
}
resource "azurerm_key_vault_key" "example" {
name = "ansumantestkey2"
key_vault_id = azurerm_key_vault.example.id
key_type = "RSA"
key_size = 3072
key_opts = [
"decrypt",
"encrypt",
"wrapKey",
"unwrapKey",
]
}
resource "azurerm_cosmosdb_account" "example" {
name = "ansumantest-cosmosdb"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
offer_type = "Standard"
kind = "MongoDB"
key_vault_key_id = azurerm_key_vault_key.example.versionless_id
default_identity_type = "FirstPartyIdentity"
#after deployment change to below
#default_identity_type = "SystemAssignedIdentity"
consistency_policy {
consistency_level = "Strong"
}
##system managed identity for this cosmosdb resource
identity {
type="SystemAssigned"
}
geo_location {
location = azurerm_resource_group.example.location
failover_priority = 0
}
}
#providing access to the system managed identity of cosmosdb to keyvault
resource "azurerm_key_vault_access_policy" "example" {
key_vault_id = azurerm_key_vault.example.id
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = azurerm_cosmosdb_account.example.identity.0.principal_id
key_permissions = [
"get",
"unwrapKey",
"wrapKey",
]
}
In this example you cannot set default_identity_type = "SystemAssignedIdentity" while provisioning the Cosmos DB account. Once the account has been deployed with the default identity type set to FirstPartyIdentity, you can change it to SystemAssignedIdentity and apply the update to just the Cosmos DB resource with the command below:
terraform apply -target="azurerm_cosmosdb_account.example" -auto-approve
Outputs: (portal screenshots omitted)
I have an Azure KeyVault with 4 Access Policies. Each Access Policy has its own unique ObjectId.
In trying to import our legacy Azure resources into a Terraform configuration, I've therefore created a Terraform block like the one below.
resource "azurerm_key_vault" "example" {
name = "examplekeyvault"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
tenant_id = data.azurerm_client_config.current.tenant_id
sku_name = "premium"
}
resource "azurerm_key_vault_access_policy" "policy1" {
key_vault_id = azurerm_key_vault.example.id
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = 001
key_permissions = [
"Get",
]
secret_permissions = [
"Get",
]
}
The above worked okay and I was able to import "policy1" successfully.
However, when I then replicated the policy block and appended the next policy, as shown below, it just doesn't appear to be accepted as a properly formed Terraform configuration. My intention is obviously to import all four policies (if that is possible).
resource "azurerm_key_vault" "example" {
name = "examplekeyvault"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
tenant_id = data.azurerm_client_config.current.tenant_id
sku_name = "premium"
}
resource "azurerm_key_vault_access_policy" "policy1" {
key_vault_id = azurerm_key_vault.example.id
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = 001
key_permissions = [
"Get",
]
secret_permissions = [
"Get",
]
}
resource "azurerm_key_vault_access_policy" "policy2" {
key_vault_id = azurerm_key_vault.example.id
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = 002
key_permissions = [
"Get",
]
secret_permissions = [
"Get",
]
}
In both of the above illustrations, I've only used dummy ObjectIds.
Am I doing this entirely the wrong way or is it just not possible to import multiple policies into one Terraform config? The Terraform registry documentation meanwhile says Azure permits a maximum of 1024 Access Policies per Key Vault.
In the end, my proposed approach of simply appending additional azurerm_key_vault_access_policy blocks, as depicted in my second code snippet above, appeared to work: the subsequent Terraform plan and apply completed without any errors.
I can therefore only conclude that appending those additional policy blocks was a correct solution after all.
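For reference, an azurerm_key_vault_access_policy is imported using the key vault's resource ID with an /objectId/<object-id> segment appended, so the additional policies would be imported with commands roughly like the following (the subscription, resource group, and object IDs are placeholders):
terraform import azurerm_key_vault_access_policy.policy1 "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/examplekeyvault/objectId/<object-id-1>"
terraform import azurerm_key_vault_access_policy.policy2 "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/examplekeyvault/objectId/<object-id-2>"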
I'm getting an error while trying to set up a VM with a Key vault. This is part of the code I think is relevant.
resource "azurerm_key_vault_key" "example" {
name = "TF-key-example"
key_vault_id = "${azurerm_key_vault.example.id}"
key_type = "RSA"
key_size = 2048
key_opts = [
"decrypt",
"encrypt",
"sign",
"unwrapKey",
"verify",
"wrapKey",
]
}
resource "azurerm_disk_encryption_set" "example" {
name = "example-set"
resource_group_name = "${azurerm_resource_group.example.name}"
location = "${azurerm_resource_group.example.location}"
key_vault_key_id = "${azurerm_key_vault_key.example.id}"
identity {
type = "SystemAssigned"
}
}
resource "azurerm_key_vault_access_policy" "disk-encryption" {
key_vault_id = "${azurerm_key_vault.example.id}"
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
key_permissions = [
"create",
"get",
"list",
"wrapkey",
"unwrapkey",
]
secret_permissions = [
"get",
"list",
]
}
resource "azurerm_role_assignment" "disk-encryption-read-keyvault" {
scope = "${azurerm_key_vault.example.id}"
role_definition_name = "Reader"
principal_id = "${azurerm_disk_encryption_set.example.identity.0.principal_id}"
}
This is the error I'm getting:
Error: Error creating Linux Virtual Machine "example-vm" (Resource
Group "Encrypt-resources"):
compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request:
StatusCode=400 -- Original Error: Code="KeyVaultAccessForbidden"
Message="Unable to access key vault resource
'https://tf-keyvault-example.vault.azure.net/keys/TF-key-example/*****'
to enable encryption at rest. Please grant get, wrap and unwrap key
permissions to disk encryption set 'example-set'. Please visit
https://aka.ms/keyvaultaccessssecmk for more information."
Where and how should I add the permissions?
As the error says: "Please grant get, wrap and unwrap key permissions to disk encryption set 'example-set'."
Add the following blocks:
# Grant the managed identity of the disk encryption set access to read data from the key vault
resource "azurerm_key_vault_access_policy" "disk-encryption" {
  key_vault_id = azurerm_key_vault.example.id

  key_permissions = [
    "get",
    "wrapkey",
    "unwrapkey",
  ]

  tenant_id = azurerm_disk_encryption_set.example.identity.0.tenant_id
  object_id = azurerm_disk_encryption_set.example.identity.0.principal_id
}

# Grant the managed identity of the disk encryption set "Reader" access to the key vault
resource "azurerm_role_assignment" "disk-encryption-read-keyvault" {
  scope                = azurerm_key_vault.example.id
  role_definition_name = "Reader"
  principal_id         = azurerm_disk_encryption_set.example.identity.0.principal_id
}
More about azurerm_key_vault_access_policy and azurerm_role_assignment.
Update:
The issue was related to not specifying the correct object_id.
Later on, the machine running Terraform was missing the SSH public key file (e.g. "~/.ssh/id_rsa.pub").
Fixed by running this command:
ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
After that, the key vault was missing an access policy for the Terraform user.
Besides all that, the ordering of the resources was mixed up; I rearranged them into a more logical sequence.
The full and working code can be found here.
As Amit Baranes pointed out, you need to set the access policy for your encryption set.
In your example above you grant the client from your azurerm_client_config data source access to the key vault by way of an access policy. The identity of your disk encryption set, however, only gets Reader access to the vault by way of a role assignment.
Tucked away here the AzureRM VM resource documentation states:
NOTE: The Disk Encryption Set must have the Reader Role Assignment
scoped on the Key Vault - in addition to an Access Policy to the Key
Vault
You need to make sure you grant the disk encryption set's identity both the Reader role and an access policy.
A possible resulting full configuration looks like this, where we give your service principal and the disk encryption set's identity access to the vault by way of access policies, and we also retain the Reader role assignment:
resource "azurerm_key_vault_key" "example" {
name = "TF-key-example"
key_vault_id = "${azurerm_key_vault.example.id}"
key_type = "RSA"
key_size = 2048
key_opts = [
"decrypt",
"encrypt",
"sign",
"unwrapKey",
"verify",
"wrapKey",
]
}
resource "azurerm_disk_encryption_set" "example" {
name = "example-set"
resource_group_name = "${azurerm_resource_group.example.name}"
location = "${azurerm_resource_group.example.location}"
key_vault_key_id = "${azurerm_key_vault_key.example.id}"
identity {
type = "SystemAssigned"
}
}
resource "azurerm_key_vault_access_policy" "service-principal" {
key_vault_id = "${azurerm_key_vault.example.id}"
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
key_permissions = [
"create",
"get",
"list",
"wrapkey",
"unwrapkey",
]
secret_permissions = [
"get",
"list",
]
}
resource "azurerm_key_vault_access_policy" "encryption-set" {
key_vault_id = "${azurerm_key_vault.example.id}"
tenant_id = azurerm_disk_encryption_set.example.identity.0.tenant_id
object_id = azurerm_disk_encryption_set.example.identity.0.principal_id
key_permissions = [
"create",
"get",
"list",
"wrapkey",
"unwrapkey",
]
secret_permissions = [
"get",
"list",
]
}
resource "azurerm_role_assignment" "disk-encryption-read-keyvault" {
scope = "${azurerm_key_vault.example.id}"
role_definition_name = "Reader"
principal_id = "${azurerm_disk_encryption_set.example.identity.0.principal_id}"
}
You would probably want to reduce the access for the service principal; however, I left it as is for now.
I just noticed that the Reader role is no longer sufficient; you now need to use the Key Vault Crypto Service Encryption User role:
resource "azurerm_role_assignment" "disk-encryption-read-keyvault" {
scope = "${azurerm_key_vault.example.id}"
role_definition_name = "Key Vault Crypto Service Encryption User"
principal_id = ${azurerm_disk_encryption_set.example.identity.0.principal_id}"
}
I have a module that sets up default access to a key vault. Then I have a resource that sets up a secret in the key vault:
module "default_kv_access" {
source = "../default_kv_access"
key_vault = azurerm_key_vault.kv
}
...
resource "azurerm_key_vault_secret" "secrets" {
for_each = local.secrets
name = each.key
value = each.value
key_vault_id = azurerm_key_vault.kv.id
}
On destroy, Terraform first destroys the module and then attempts to destroy the secrets (wasteful, because the key vault would be destroyed anyway, but so be it).
Anyway, by destroying the module first, Terraform removes all the access policies, so when it comes to destroying the azurerm_key_vault_secret resources it fails, because the service principal running the code no longer has the necessary access to the secrets.
What I need is to tell Terraform that azurerm_key_vault_secret depends on the default_kv_access module.
So the question is: how can I do that, given that I cannot just mention the module in a depends_on statement?
EDIT 1
The module code is:
variable "key_vault" {}
locals {
ctx = jsondecode(file("${path.root}/../${basename(abspath(path.root)) == "product" ? "" : "../"}metadata.g.json"))
# Will have to be replaced when the hosting is ready
hosting_ad_group_name = "AdminRole-Product-DFDevelopmentOps"
}
data "azurerm_client_config" "client" {}
data "azuread_service_principal" "hosting_sp" {
display_name = local.ctx.HostingAppName
}
data "azuread_group" "hosting_ad_group" {
name = local.hosting_ad_group_name
}
locals {
allow_kv_access_to = {
client = {
object_id = data.azurerm_client_config.client.object_id
secret_permissions = ["get", "set", "list", "delete", "recover", "backup", "restore"]
}
hosting_sp = {
object_id = data.azuread_service_principal.hosting_sp.object_id
secret_permissions = ["get", "set", "list", "delete", "recover", "backup", "restore"]
}
hosting_ad_group = {
object_id = data.azuread_group.hosting_ad_group.id
secret_permissions = ["get", "list"]
}
}
}
resource "azurerm_key_vault_access_policy" "default" {
for_each = local.allow_kv_access_to
key_vault_id = var.key_vault.id
tenant_id = var.key_vault.tenant_id
object_id = each.value.object_id
secret_permissions = each.value.secret_permissions
}
One way I've seen this done (depends_on with a module) is to reference an output of the module in a local value, and then put that local in the resource's depends_on. I have this working in a few of my own configurations and get the desired outcome: the resource is not destroyed or created before the module.
Example:
module "default_kv_access" {
source = "../default_kv_access"
key_vault = azurerm_key_vault.kv
}
locals {
module_depends_on = module.default_kv_access.name
}
resource "azurerm_key_vault_secret" "secrets" {
depends_on = [local.module_depends_on]
for_each = local.secrets
name = each.key
value = each.value
key_vault_id = azurerm_key_vault.kv.id
}
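For this to work the module has to expose an output to reference; the module code shown in the question doesn't declare one, so something like the following would be needed inside default_kv_access (the output name and value here are assumptions, not part of the original module):
# outputs.tf inside the default_kv_access module (hypothetical)
output "name" {
  # Derived from every access policy, so anything that references this
  # output waits until all of the policies exist.
  value = join(",", [for p in azurerm_key_vault_access_policy.default : p.id])
}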