I have the following code to create an Azure key vault:
resource "azurerm_key_vault" "key_vault" {
name = "example"
location = var.location
resource_group_name = azurerm_resource_group.resource_group.name
sku_name = "standard"
tenant_id = var.tenant_id
access_policy {
tenant_id = var.tenant_id
object_id = azurerm_user_assigned_identity.user_assigned_identity.principal_id
secret_permissions = [
"get",
]
}
}
Assume the following scenario:
1. Deploy the infrastructure using Terraform.
2. Change the key vault access policies manually.
3. Redeploy using Terraform.
Terraform will remove the access policies that were created manually.
Is there a way to tell Terraform not to remove existing access policies?
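One commonly suggested approach is a sketch like the following (behaviour may vary by azurerm provider version, so treat it as a sketch rather than a guarantee): drop the inline access_policy block and manage only the Terraform-owned policies with standalone azurerm_key_vault_access_policy resources. With no inline block, the key vault resource does not try to reconcile policies added outside Terraform.

resource "azurerm_key_vault" "key_vault" {
  name                = "example"
  location            = var.location
  resource_group_name = azurerm_resource_group.resource_group.name
  sku_name            = "standard"
  tenant_id           = var.tenant_id

  # No inline access_policy block: policies created outside Terraform
  # are not tracked or removed by this resource.
}

resource "azurerm_key_vault_access_policy" "user_assigned_identity" {
  key_vault_id = azurerm_key_vault.key_vault.id
  tenant_id    = var.tenant_id
  object_id    = azurerm_user_assigned_identity.user_assigned_identity.principal_id

  secret_permissions = [
    "Get",
  ]
}

Note that the azurerm provider documentation warns against mixing inline access_policy blocks with standalone azurerm_key_vault_access_policy resources on the same vault, since the two methods conflict.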
I am trying to create a key vault on Azure using Terraform, running as my service principal user:
data "azurerm_client_config" "current" {}
resource "azurerm_key_vault" "key_vault" {
name = "${var.project_name}-keyvault"
location = var.resource_group_location
resource_group_name = var.resource_group_name
enabled_for_disk_encryption = true
tenant_id = data.azurerm_client_config.current.tenant_id
soft_delete_retention_days = 7
purge_protection_enabled = false
sku_name = "standard"
}
resource "azurerm_key_vault_access_policy" "access_policy" {
key_vault_id = azurerm_key_vault.key_vault.id
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
secret_permissions = [
"Set", "Get", "Delete", "Purge", "List", ]
}
resource "azurerm_key_vault_secret" "client_id" {
name = "client-id"
value = var.client_id_value
key_vault_id = azurerm_key_vault.key_vault.id
}
resource "azurerm_key_vault_secret" "client_secret" {
name = "client-secret"
value = var.client_secret_value
key_vault_id = azurerm_key_vault.key_vault.id
}
resource "azurerm_key_vault_secret" "subscription_id" {
name = "subscription-id"
value = var.subscription_id_value
key_vault_id = azurerm_key_vault.key_vault.id
}
resource "azurerm_key_vault_secret" "tenant_id" {
name = "tenant-id"
value = var.tenant_id_value
key_vault_id = azurerm_key_vault.key_vault.id
}
But I get this error:
Error: checking for presence of existing Secret "client-id" (Key Vault "https://formulaeinsdef-keyvault.vault.azure.net/"): keyvault.BaseClient#GetSecret:
Failure responding to request: StatusCode=403 -- Original Error: autorest/azure:
Service returned an error. Status=403 Code="Forbidden" Message="The user, group or application 'appid=***;oid=32d24355-0d93-476d-a775-6882d5a22e0b;iss=https://sts.windows.net/***/' does not have secrets get permission on key vault 'formulaeinsdef-keyvault;location=westeurope'.
For help resolving this issue, please see https://go.microsoft.com/fwlink/?linkid=2125287" InnerError={"code":"AccessDenied"}
The above code creates the key vault successfully, but it fails to add the secrets to it.
My service principal has the Contributor role, and I think that should be enough to GET and SET keys.
I tried giving my service principal the Reader or even the Owner role, but it did not help.
I also checked this question, but it is not helping me.
I checked the Access Policies tab, and I have the permissions to Set, Get, Delete, Purge and List.
Each of the secrets needs an explicit dependency on the access policy. Otherwise, Terraform may attempt to create the secret before creating the access policy.
resource "azurerm_key_vault_secret" "client_id" {
name = "client-id"
value = var.client_id_value
key_vault_id = azurerm_key_vault.key_vault.id
### Explicit dependency
depends_on = [
azurerm_key_vault_access_policy.access_policy
]
}
Alternatively, moving the access policy definition into the key vault block would make the explicit dependencies unnecessary:
resource "azurerm_key_vault" "key_vault" {
# Content omitted for brevity
.
.
access_policy {
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
secret_permissions = [
"Set", "Get", "Delete", "Purge", "List", ]
}
}
I have a Terraform configuration for an Azure Key Vault:
resource "azurerm_key_vault" "key_vault" {
# ...
network_acls {
default_action = "Deny"
ip_rules = ["MY_IP_ADDRESS"]
bypass = "AzureServices"
}
}
resource "azurerm_key_vault_access_policy" "application" {
key_vault_id = azurerm_key_vault.key_vault.id
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
certificate_permissions = local.permissions_certificates_all
key_permissions = local.permissions_keys_all
secret_permissions = local.permissions_secrets_all
storage_permissions = local.permissions_storage_all
}
When I attempt to add an azurerm_key_vault_secret to the Key Vault created above, it fails with this error message:
Service returned an error. Status=403 Code="Forbidden" Message="The user, group or application 'appid=ID;oid=ID;iss=https://sts.windows.net/ID/' does not have secrets get permission on key vault 'KEY_VAULT_NAME;location=eastus2'
When I run terraform apply again, it works just fine.
I tried adding a time_sleep of 10 minutes (with the depends_on entries needed to make sure it runs at the right point) to see if that would resolve it, and it did not.
It seems, however, that the solution is to somehow make Terraform re-authenticate so that the permissions get picked up.
Is there a way to do this in a Terraform file with the azurerm provider, or to generically request re-authentication? I did not see it in the documentation.
Thanks!
Terraform Version Data:
Terraform v1.1.2
on linux_amd64
+ provider registry.terraform.io/hashicorp/azuread v2.13.0
+ provider registry.terraform.io/hashicorp/azurerm v2.90.0
+ provider registry.terraform.io/hashicorp/random v3.1.0
+ provider registry.terraform.io/microsoft/azuredevops v0.1.8
Did you try seeing what happens when you explicitly state the permissions?
In your case, as a starting point, grant just Get:
resource "azurerm_key_vault_access_policy" "application" {
key_vault_id = azurerm_key_vault.key_vault.id
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
certificate_permissions = local.permissions_certificates_all
key_permissions = local.permissions_keys_all
storage_permissions = local.permissions_storage_all
secret_permissions = [
"Get",
]
}
It's a timing issue. Terraform applies resources in parallel, so it is trying to add the secret before the policy has been added. That is why it works the second time.
To avoid this, you can add a
depends_on = [azurerm_key_vault_access_policy.application]
to your secret.
This will make the secret creation wait until the policy is added.
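Applied to a secret resource, that looks roughly like the sketch below; the name, value and variable are illustrative placeholders, not taken from the original question.

resource "azurerm_key_vault_secret" "example" {
  name         = "example-secret"            # illustrative name
  value        = var.example_secret_value    # assumed variable
  key_vault_id = azurerm_key_vault.key_vault.id

  # Wait for the Terraform-managed access policy before creating or reading the secret.
  depends_on = [azurerm_key_vault_access_policy.application]
}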
I have an Azure Key Vault with 4 access policies. Each access policy has its own unique object ID.
While trying to import our legacy Azure resources into a Terraform configuration, I've therefore created a Terraform block like the one below.
resource "azurerm_key_vault" "example" {
name = "examplekeyvault"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
tenant_id = data.azurerm_client_config.current.tenant_id
sku_name = "premium"
}
resource "azurerm_key_vault_access_policy" "policy1" {
key_vault_id = azurerm_key_vault.example.id
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = 001
key_permissions = [
"Get",
]
secret_permissions = [
"Get",
]
}
The above worked okay and I was able to import "policy1" successfully.
However, when I replicated the policy block and appended the next policy, as shown below, Terraform did not appear to accept it as a properly formed configuration. My intention is obviously to import all four policies (if that is possible).
resource "azurerm_key_vault" "example" {
name = "examplekeyvault"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
tenant_id = data.azurerm_client_config.current.tenant_id
sku_name = "premium"
}
resource "azurerm_key_vault_access_policy" "policy1" {
key_vault_id = azurerm_key_vault.example.id
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = 001
key_permissions = [
"Get",
]
secret_permissions = [
"Get",
]
}
resource "azurerm_key_vault_access_policy" "policy2" {
key_vault_id = azurerm_key_vault.example.id
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = 002
key_permissions = [
"Get",
]
secret_permissions = [
"Get",
]
}
In both of the above illustrations, I've only used dummy ObjectIds.
Am I doing this entirely the wrong way, or is it just not possible to import multiple policies into one Terraform configuration? The Terraform registry documentation, meanwhile, says that Azure permits a maximum of 1024 access policies per Key Vault.
In the end, my proposed solution of simply appending additional access policy blocks, as depicted in my second code snippet above, appeared to work: the subsequent terraform plan and apply completed without any errors.
I can therefore only conclude and/or assume that appending those additional policy blocks was a correct solution after all.
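For completeness, each standalone azurerm_key_vault_access_policy resource still has to be imported individually. A sketch of the import commands, using the dummy object IDs from above and placeholder subscription and resource group values (the exact ID format is documented on the azurerm_key_vault_access_policy page in the Terraform registry):

terraform import azurerm_key_vault_access_policy.policy1 "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/examplekeyvault/objectId/001"
terraform import azurerm_key_vault_access_policy.policy2 "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/examplekeyvault/objectId/002"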
My Terraform design depends on a pre-provisioned key vault containing secrets to be used by app services. I imported this key vault into my remote state, and I can see that it has been imported. But when I run terraform plan, it acts as if it does not know about the imported resource.
This is what my Terraform looks like:
provider "azurerm" {
version="=2.20.0"
skip_provider_registration="true"
features{}
}
terraform {
backend "azurerm" {}
}
resource "azurerm_key_vault" "kv" {
name = "${var.env}ActicoDQM-kv"
}
module "app_service_plan"{
source = "./modules/app-service-plan"
...redacted for brevity
tags = var.tags
}
module "app-service"{
source = "./modules/app-service"
...redacted for brevity
tags = var.tags
key_vault_id = azurerm_key_vault.kv.key_vault_id
}
Inside the module, I add an access policy for the app service:
resource "azurerm_app_service" "app" {
... redacted for brevity
}
identity {
type = "SystemAssigned"
}
}
resource "azurerm_key_vault_access_policy" "app" {
key_vault_id = var.key_vault_id
tenant_id = azurerm_app_service.app.identity[0].tenant_id
object_id = azurerm_app_service.app.identity[0].principal_id
secret_permissions = ["get", "list"]
}
There seems to be some missing link in my understanding, because now when I run
terraform plan
it acts as if it doesn't know about the imported key vault:
Error: Missing required argument
on main.tf line 19, in resource "azurerm_key_vault" "kv":
19: resource "azurerm_key_vault" "kv" {
The argument "tenant_id" is required, but no definition was found.
Even though you're importing an existing key vault into your Terraform state, you still need to fully define all of its required arguments according to the azurerm_key_vault resource docs.
At a minimum, your key vault resource should specify these arguments:
resource "azurerm_key_vault" "kv" {
name = "${var.env}ActicoDQM-kv"
location = ..
resource_group_name = ..
sku_name = "standard" or "premium"
tenant_id = data.azurerm_client_config.current.tenant_id
}
You can expose the tenant_id using a data resource:
data "azurerm_client_config" "current" {
}
Experts,
I have a situation where I have to grant access to multiple Azure resources for a particular group, and I have to do this using Terraform only.
For example:
Azure group name: India-group (5-6 users are in this group)
Azure subscription name: India
Azure SQL database: SQL-db-1
Azure Key Vault: India-key-vlt-1
Azure storage account: India-acnt-1
...and many more, like PostgreSQL, storage accounts, blobs, etc.
I think you do not need to care about how the group can access the resources; what you need to care about is how to access the resources when it's necessary.
Generally, we use a service principal and assign it roles that contain the appropriate permissions to access the resources. You can take a look at What is role-based access control (RBAC) for Azure resources and Create a service principal via the CLI.
In Terraform, I assume you want to get the secrets from the Key Vault. Here is an example:
provider "azurerm" {
features {}
}
resource "azuread_application" "example" {
name = "example"
homepage = "http://homepage"
identifier_uris = ["http://uri"]
reply_urls = ["http://replyurl"]
available_to_other_tenants = false
oauth2_allow_implicit_flow = true
}
resource "azuread_service_principal" "example" {
application_id = azuread_application.example.application_id
app_role_assignment_required = false
tags = ["example", "tags", "here"]
}
resource "azurerm_resource_group" "example" {
name = "resourceGroup1"
location = "West US"
}
resource "azurerm_key_vault" "example" {
name = "testvault"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
enabled_for_disk_encryption = true
tenant_id = var.tenant_id
soft_delete_enabled = true
purge_protection_enabled = false
sku_name = "standard"
access_policy {
tenant_id = var.tenant_id
object_id = azuread_service_principal.example.object_id
key_permissions = [
"get",
]
secret_permissions = [
"get",
]
storage_permissions = [
"get",
]
}
network_acls {
default_action = "Deny"
bypass = "AzureServices"
}
tags = {
environment = "Testing"
}
}
Then you can access the key vault to get the secrets or keys through the service principal. You can also take a look at the example that controls Key Vault via Python.
For other resources, you need to learn about the resource itself first; then you will know how to access it in a suitable way, and finally you can use Terraform to achieve it.
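As a small sketch of the RBAC route in Terraform (the group object ID and storage account ID variables and the chosen role names here are assumptions, not part of the original answer), a role can be assigned to the Azure AD group directly at the scope of each resource with azurerm_role_assignment:

resource "azurerm_role_assignment" "group_key_vault" {
  scope                = azurerm_key_vault.example.id
  role_definition_name = "Reader"                    # pick the built-in role the group actually needs
  principal_id         = var.india_group_object_id   # object ID of the India-group (assumed variable)
}

resource "azurerm_role_assignment" "group_storage" {
  scope                = var.storage_account_id       # ID of the India-acnt-1 storage account (assumed variable)
  role_definition_name = "Storage Blob Data Reader"   # built-in role for blob read access
  principal_id         = var.india_group_object_id
}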