I am trying to write a Terraform configuration that integrates Azure Functions, Key Vault and Cosmos DB.
On one hand, I need the Azure Functions identity to create a Key Vault access policy.
On the other, I need the Key Vault reference to the Cosmos DB key to put into the Azure Functions configuration.
That creates a dependency cycle: Azure Functions <-> Key Vault. Is there a way to break it? If I were doing this manually, I would create the Functions App, create the Key Vault, add the access policy in the Key Vault, and then update the Functions App with the Key Vault secret reference. But as far as I know, Terraform doesn't let you create a resource and update it later.
Some code snippets:
functions.tf
variable "db_key" {
  type = string
}

resource "azurerm_linux_function_app" "my_functions" {
  ...

  app_settings = {
    "DB_KEY" : var.db_key
  }
}

output "functions_app_id" {
  value = azurerm_linux_function_app.my_functions.identity[0].principal_id
}
keyvault.tf
variable "functions_app_id" {
type = string
}
resource "azurerm_key_vault" "my_keyvault" {
  access_policy {
    tenant_id          = ...
    object_id          = var.functions_app_id
    secret_permissions = ["Get"]
  }
}

resource "azurerm_key_vault_secret" "db_key" {
  ...
}

output "db_key" {
  value = "@Microsoft.KeyVault(SecretUri=${azurerm_key_vault_secret.db_key.id})"
}
main.tf
module "functions" {
  ...
  db_key = module.key-vault.db_key
}

module "key-vault" {
  ...
  functions_app_id = module.functions.functions_app_id
}
You can:
Create the Key Vault with the key
Create the function with the key reference
Add an access policy or RBAC assignment to the vault for the function
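A minimal sketch of that ordering (resource names such as azurerm_cosmosdb_account.db and azurerm_resource_group.rg are illustrative, and some required arguments are elided):

```hcl
# Sketch only: assumes an existing resource group and Cosmos DB account.
resource "azurerm_key_vault" "kv" {
  name                = "example-kv"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  tenant_id           = data.azurerm_client_config.current.tenant_id
  sku_name            = "standard"
  # No access_policy block here -- the policy is a separate resource below.
}

resource "azurerm_key_vault_secret" "db_key" {
  name         = "db-key"
  value        = azurerm_cosmosdb_account.db.primary_key
  key_vault_id = azurerm_key_vault.kv.id
}

resource "azurerm_linux_function_app" "app" {
  # ...required arguments elided...
  app_settings = {
    "DB_KEY" = "@Microsoft.KeyVault(SecretUri=${azurerm_key_vault_secret.db_key.id})"
  }
  identity {
    type = "SystemAssigned"
  }
}

# Created last: it depends on both the vault and the function's identity,
# so there is no cycle between the two.
resource "azurerm_key_vault_access_policy" "app" {
  key_vault_id       = azurerm_key_vault.kv.id
  tenant_id          = data.azurerm_client_config.current.tenant_id
  object_id          = azurerm_linux_function_app.app.identity[0].principal_id
  secret_permissions = ["Get"]
}
```

The key design point is that the access policy lives in its own resource rather than inline in the vault, so Terraform can order it after both the vault and the function app.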
OK, I have figured out how to do this. Instead of using an access_policy block in keyvault.tf, I should have used the "azurerm_key_vault_access_policy" resource in functions.tf. Now it looks like this:
functions.tf
variable "db_key" {
  type = string
}

resource "azurerm_linux_function_app" "my_functions" {
  ...

  app_settings = {
    "DB_KEY" : var.db_key
  }

  identity {
    type = "SystemAssigned"
  }
}

resource "azurerm_key_vault_access_policy" "functions_app_access_policy" {
  key_vault_id       = ... // passed as an output from keyvault.tf
  tenant_id          = ...
  object_id          = azurerm_linux_function_app.my_functions.identity[0].principal_id
  secret_permissions = ["Get"]
}
And there is no access_policy block in keyvault.tf anymore.
I'm trying to create an Azure AD app and a credential for each entry in a locals map.
The objects in the locals map have values that are needed for both resources, but the credential resource also needs values from the AD application itself.
Normally this would be easy, but both resources use for_each, which complicates things: for the credential resource, each.value is the AD application. Is there any way to access the each of the aadapp resource from within the cred resource?
locals {
  github_repos_with_apps = {
    tftesting_testing = {
      repo        = "tftesting-testing"
      environment = "tfplan"
    }
  }
}

resource "azuread_application" "aadapp" {
  for_each     = local.github_repos_with_apps
  display_name = join("-", ["github-actions", each.value.repo, each.value.environment])
  owners       = [data.azuread_client_config.current.object_id]
}

resource "azuread_application_federated_identity_credential" "cred" {
  for_each              = azuread_application.aadapp
  application_object_id = each.value.object_id
  display_name          = "my-repo-deploy"
  description           = "Deployments for my-repo"
  audiences             = ["api://AzureADTokenExchange"]
  issuer                = "https://token.actions.githubusercontent.com"
  subject               = "repo:my-org/${each.value.<something?>.repo}:environment:${each.value.<something?>.environment}"
}
In the snippet above I need the cred resource to access aadapp.object_id, but also to reference the locals value in order to get repo and environment. Since cred and aadapp both use for_each, the meaning of each.value differs between them. I'd like to reference the each.value of aadapp from cred.
My problem line is the subject value in the cred resource:
subject = "repo:my-org/${each.value.<something?>.repo}:environment:${each.value.<something?>.environment}"
I think I may have to use modules to accomplish this, but I feel there is a quicker way, like being able to store a temporary value on aadapp that would let me reference it.
After scouring some examples I found out how to achieve this.
If I change all resources to use for_each = local.github_repos_with_apps, I can then use each.key as a lookup to get the other associated resources, like so:
application_object_id = resource.azuread_application.aadapp[each.key].object_id
This allows the cred resource to reference the locals values directly
subject = "repo:my-org/${each.value.repo}:environment:${each.value.environment}"
Full code:
locals {
  github_repos_with_apps = {
    first_test = {
      repo        = "tftesting-testing"
      environment = "tfplan"
    }
    second_test = {
      repo        = "bleep-testing"
      environment = "tfplan"
    }
  }
}

resource "azuread_application" "aadapp" {
  for_each     = local.github_repos_with_apps
  display_name = join("-", ["github-actions", each.value.repo, each.value.environment])
  owners       = [data.azuread_client_config.current.object_id]

  lifecycle {
    ignore_changes = [
      required_resource_access
    ]
  }
}

resource "azuread_application_federated_identity_credential" "cred" {
  for_each              = local.github_repos_with_apps
  application_object_id = resource.azuread_application.aadapp[each.key].object_id
  display_name          = each.value.repo
  description           = "Deployments for my-repo"
  audiences             = ["api://AzureADTokenExchange"]
  issuer                = "https://token.actions.githubusercontent.com"
  subject               = "repo:my-org/${each.value.repo}:environment:${each.value.environment}"
}
We are using Terraform to build our Azure resources with the azurerm provider.
We inject a secret during the Terraform run, and this secret may change from time to time.
We use an azurerm_key_vault_secret to store the secret, and a function app with a managed identity (which has read access to the key vault) that receives the secret like this:
resource "azurerm_key_vault_secret" "my_secret" {
  name         = "my-secret"
  value        = var.my_secret
  key_vault_id = azurerm_key_vault.default.id
}

resource "azurerm_function_app" "app" {
  name = "..."

  app_settings = {
    MySecret = "@Microsoft.KeyVault(SecretUri=${azurerm_key_vault_secret.my_secret.id})"
  }

  identity {
    type = "SystemAssigned"
  }
  ...
}
When I run terraform apply and the secret has changed, the function app still points to the old version of the secret. It seems azurerm_key_vault_secret.my_secret.id is read before the secret is updated.
Does anybody have any idea how I can make sure the function app waits for the update of the secret?
(And yes, the id changes; I don't like it either, but that is how the provider works.)
When you update a key vault secret outside of Terraform (e.g. through the Key Vault UI), Terraform won't detect the change on azurerm_key_vault_secret.example.id, and so the references to it won't be modified either.
As a workaround, you can use a data source for the same key vault secret and provide that to the function app as shown in the code below, so that changes to the key vault secret are read from the data source and applied accordingly:
resource "azurerm_key_vault_secret" "example" {
  name         = "functionappsecret"
  value        = "changedpassword"
  key_vault_id = azurerm_key_vault.example.id
}

data "azurerm_key_vault_secret" "secret" {
  name         = "functionappsecret"
  key_vault_id = azurerm_key_vault.example.id

  depends_on = [
    azurerm_key_vault_secret.example
  ]
}

resource "azurerm_function_app" "example" {
  name                       = "ansuman-azure-functions"
  location                   = azurerm_resource_group.example.location
  resource_group_name        = azurerm_resource_group.example.name
  app_service_plan_id        = azurerm_app_service_plan.example.id
  storage_account_name       = azurerm_storage_account.example.name
  storage_account_access_key = azurerm_storage_account.example.primary_access_key

  app_settings = {
    MySecret = "@Microsoft.KeyVault(SecretUri=${data.azurerm_key_vault_secret.secret.id})"
  }

  identity {
    type = "SystemAssigned"
  }
}
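An alternative worth considering (assuming a reasonably recent azurerm provider): azurerm_key_vault_secret exposes a versionless_id attribute, and App Service resolves a version-less SecretUri to the latest enabled version of the secret, so the app setting does not have to change when the secret rotates. A sketch, reusing the resource names from the question:

```hcl
# Sketch: reference the secret without pinning a version.
# App Service periodically re-resolves a version-less SecretUri
# to the latest enabled version of the secret.
resource "azurerm_function_app" "app" {
  # ...other required arguments as in the original snippet...
  app_settings = {
    MySecret = "@Microsoft.KeyVault(SecretUri=${azurerm_key_vault_secret.my_secret.versionless_id})"
  }
  identity {
    type = "SystemAssigned"
  }
}
```

Note that re-resolution on the App Service side is not instantaneous, so this trades an immediate update on apply for a simpler configuration.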
I want to use a data source in Terraform, for example for an SNS topic, but I don't want it to look for the resource in the AWS account I'm deploying my other resources to. It should look in my other AWS account (in the same organization) and find resources there. Is there a way to make this happen?
data "aws_sns_topic" "topic_alarms_data" {
  name = "topic_alarms"
}
Define an aws provider with credentials to the remote account:
# Default provider that you use:
provider "aws" {
  region = var.context.aws_region

  assume_role {
    role_arn = format("arn:aws:iam::%s:role/TerraformRole", var.account_id)
  }
}

# Aliased provider for the remote account:
provider "aws" {
  alias  = "remote"
  region = var.context.aws_region

  assume_role {
    role_arn = format("arn:aws:iam::%s:role/TerraformRole", var.remote_account_id)
  }
}

data "aws_sns_topic" "topic_alarms_data" {
  provider = aws.remote
  name     = "topic_alarms"
}
Now the topics are loaded from the second provider.
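If the data source lives inside a module, the aliased provider can be passed in explicitly via the providers argument (module and path names here are illustrative):

```hcl
module "alarms" {
  source = "./modules/alarms"

  # Map the module's default "aws" provider to the aliased remote provider,
  # so every aws resource/data source in the module targets the remote account.
  providers = {
    aws = aws.remote
  }
}
```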
My Terraform design depends on a pre-provisioned key vault containing secrets to be used by app services. I imported this key vault into my remote state, and I can see that it has been imported. But when I run terraform plan, it acts as if it does not know about the imported resource.
This is what my Terraform looks like:
provider "azurerm" {
  version                    = "=2.20.0"
  skip_provider_registration = "true"
  features {}
}

terraform {
  backend "azurerm" {}
}

resource "azurerm_key_vault" "kv" {
  name = "${var.env}ActicoDQM-kv"
}

module "app_service_plan" {
  source = "./modules/app-service-plan"
  ...redacted for brevity
  tags = var.tags
}

module "app-service" {
  source = "./modules/app-service"
  ...redacted for brevity
  tags         = var.tags
  key_vault_id = azurerm_key_vault.kv.id
}
Adding an access policy for the app service inside the module:
resource "azurerm_app_service" "app" {
  ... redacted for brevity

  identity {
    type = "SystemAssigned"
  }
}

resource "azurerm_key_vault_access_policy" "app" {
  key_vault_id       = var.key_vault_id
  tenant_id          = azurerm_app_service.app.identity[0].tenant_id
  object_id          = azurerm_app_service.app.identity[0].principal_id
  secret_permissions = ["get", "list"]
}
There seems to be some missing link in my understanding, because when I now run
terraform plan
it acts as if it doesn't know about the imported key vault:
Error: Missing required argument
on main.tf line 19, in resource "azurerm_key_vault" "kv":
19: resource "azurerm_key_vault" "kv" {
The argument "tenant_id" is required, but no definition was found.
Even though you're importing an existing key vault into your Terraform state, you still need to fully define all required arguments according to the key vault resource docs.
At minimum, your key vault resource should specify these arguments:
resource "azurerm_key_vault" "kv" {
  name                = "${var.env}ActicoDQM-kv"
  location            = ..
  resource_group_name = ..
  sku_name            = "standard" # or "premium"
  tenant_id           = data.azurerm_client_config.current.tenant_id
}
You can expose the tenant_id using a data source:
data "azurerm_client_config" "current" {
}
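If Terraform is not meant to manage the vault's lifecycle at all, a possible alternative (a sketch; the resource group name is a placeholder) is to drop the imported resource and read the pre-provisioned vault through a data source instead:

```hcl
# Read-only lookup of the pre-provisioned vault; Terraform will not
# try to create, change, or destroy it.
data "azurerm_key_vault" "kv" {
  name                = "${var.env}ActicoDQM-kv"
  resource_group_name = "my-keyvault-rg" # placeholder: the vault's actual resource group
}

# Then pass data.azurerm_key_vault.kv.id into the module as key_vault_id.
```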
The following Terraform configuration is supposed to:
Obtain the id of the relevant Key Vault
Obtain the id of the certificate secret
Setup custom hostname binding
Setup app service certificate
data "azurerm_key_vault" "hosting_secondary_kv" {
  name                = local.ctx.HostingSecondaryKVName
  resource_group_name = local.ctx.HostingSecondaryRGName
}

data "azurerm_key_vault_secret" "cert" {
  name         = var.env == "prod" ? local.ctx.ProdCertificateName : local.ctx.NonProdCertificateName
  key_vault_id = data.azurerm_key_vault.hosting_secondary_kv.id
}

resource "azurerm_app_service_custom_hostname_binding" "webapp_fqdn" {
  for_each            = local.apsvc_map
  hostname            = each.value.fqdn
  app_service_name    = azurerm_app_service.webapp[each.key].name
  resource_group_name = var.regional_web_rg[each.value.location].name
  ssl_state           = "SniEnabled"
  thumbprint          = azurerm_app_service_certificate.cert[each.value.location].thumbprint

  depends_on = [
    azurerm_traffic_manager_endpoint.ep
  ]
}

resource "azurerm_app_service_certificate" "cert" {
  for_each            = local.locations
  name                = var.env == "prod" ? local.ctx.ProdCertificateName : local.ctx.NonProdCertificateName
  resource_group_name = var.regional_web_rg[each.value].name
  location            = each.value
  key_vault_secret_id = data.azurerm_key_vault_secret.cert.id
}
I have configured all the permissions as explained in https://www.terraform.io/docs/providers/azurerm/r/app_service_certificate.html
Running the code yields the following error:
Error: Error creating/updating App Service Certificate "wildcard-np-xyzhcm-com" (Resource Group "MyAppServiceResourceGroup"): web.CertificatesClient#CreateOrUpdate: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="LinkedAuthorizationFailed" Message="The client '5...8' with object id '5...8' has permission to perform action 'Microsoft.Web/certificates/write' on scope '/subscriptions/0...7/resourceGroups/MyAppServiceResourceGroup/providers/Microsoft.Web/certificates/wildcard-np-xyzhcm-com'; however, it does not have permission to perform action 'write' on the linked scope(s) '/subscriptions/0...7/resourceGroups/MyKeyVaultResourceGroup/providers/Microsoft.KeyVault/vaults/MyKeyVault' or the linked scope(s) are invalid."
All the resources are in the same subscription.
I do not understand. Does Azure want me to grant the Service Principal performing the deployment (5...8) the 'write' permission on the key vault containing the certificate? What am I missing?
EDIT 1
I used terraform to create the access policy to the Key Vault. Here is the relevant code:
A custom role definition allowing the "Microsoft.KeyVault/vaults/read" action:
resource "azurerm_role_definition" "key_vault_reader" {
  name  = "Key Vault Reader"
  scope = data.azurerm_subscription.current.id

  permissions {
    actions     = ["Microsoft.KeyVault/vaults/read"]
    not_actions = []
  }

  assignable_scopes = [
    data.azurerm_subscription.current.id
  ]
}
Letting the Microsoft WebApp Service Principal access the certificate:
data "azurerm_key_vault" "hosting_secondary_kv" {
  name                = local.ctx.HostingSecondaryKVName
  resource_group_name = local.ctx.HostingSecondaryRGName
}

data "azuread_service_principal" "MicrosoftWebApp" {
  application_id = "abfa0a7c-a6b6-4736-8310-5855508787cd"
}

resource "azurerm_key_vault_access_policy" "webapp_sp_access_to_hosting_secondary_kv" {
  key_vault_id            = data.azurerm_key_vault.hosting_secondary_kv.id
  object_id               = data.azuread_service_principal.MicrosoftWebApp.object_id
  tenant_id               = data.azurerm_subscription.current.tenant_id
  secret_permissions      = ["get"]
  certificate_permissions = ["get"]
}
Next, grant the Service Principal used by the deployment the custom Key Vault Reader role in the resource group of the respective Key Vault:
data "azurerm_key_vault" "hosting_secondary_kv" {
  name                = local.ctx.HostingSecondaryKVName
  resource_group_name = local.ctx.HostingSecondaryRGName
}

data "azurerm_role_definition" "key_vault_reader" {
  name  = "Key Vault Reader"
  scope = data.azurerm_subscription.current.id
}

resource "azurerm_role_assignment" "sp_as_hosting_secondary_kv_reader" {
  scope              = "${data.azurerm_subscription.current.id}/resourceGroups/${local.ctx.HostingSecondaryRGName}"
  role_definition_id = data.azurerm_role_definition.key_vault_reader.id
  principal_id       = azuread_service_principal.sp.id
}
Finally, set up the access policy for the aforementioned Service Principal:
resource "azurerm_key_vault_access_policy" "sp_access_to_hosting_secondary_kv" {
  key_vault_id            = data.azurerm_key_vault.hosting_secondary_kv.id
  object_id               = azuread_service_principal.sp.object_id
  tenant_id               = data.azurerm_subscription.current.tenant_id
  secret_permissions      = ["get"]
  certificate_permissions = ["get"]
}
We discussed this with Microsoft Support, and the solution they provided is to use a custom role definition based on the built-in Reader role plus the Key Vault deploy action.
The Terraform role definition looks like this:
resource "random_uuid" "reader_with_kv_deploy_id" {}

resource "azurerm_role_definition" "reader_with_kv_deploy" {
  role_definition_id = random_uuid.reader_with_kv_deploy_id.result
  name               = "Key Vault Reader with Action for ${var.sub}"
  scope              = data.azurerm_subscription.current.id
  description        = "Can deploy/import secret from key vault to Web App"

  permissions {
    actions     = ["*/read", "Microsoft.KeyVault/vaults/deploy/action"]
    not_actions = []
  }

  assignable_scopes = [
    data.azurerm_subscription.current.id
  ]
}
Anyway, using this role instead of "Key Vault Contributor" does allow linking an App Service to a certificate in a Key Vault.
Two questions remain:
Why on earth is this complication even necessary, and why was plain Reader not deemed good enough?
Why is there no built-in role for this? I cannot believe anyone would agree to grant a service principal Key Vault Contributor where a mere Reader should be enough.
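For completeness, a sketch of assigning that custom role to the deployment service principal (the principal and locals references are assumed from the earlier snippets; role_definition_resource_id requires a reasonably recent azurerm provider):

```hcl
# Assign the custom "Reader + Key Vault deploy action" role to the
# deployment service principal at the Key Vault's resource group scope.
resource "azurerm_role_assignment" "sp_reader_with_kv_deploy" {
  scope              = "${data.azurerm_subscription.current.id}/resourceGroups/${local.ctx.HostingSecondaryRGName}"
  role_definition_id = azurerm_role_definition.reader_with_kv_deploy.role_definition_resource_id
  principal_id       = azuread_service_principal.sp.object_id
}
```

Using role_definition_resource_id (rather than the resource's composite id) gives the full Azure resource ID of the role definition, which is what azurerm_role_assignment expects.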