I am new to Terraform scripts. I am working on Azure with Terraform; I have created a resource group, and in that resource group I have created a Key Vault. I want to populate its secrets from a central Key Vault. Is there any way to do this?
Yes, you can read secrets using the azurerm_key_vault_secret data source: https://www.terraform.io/docs/providers/azurerm/d/key_vault_secret.html
data "azurerm_key_vault" "existing" {
name = "Test1-KV"
resource_group_name = "Test1-RG"
}
data "azurerm_key_vault_secret" "existing-sauce" {
name = "secret-sauce"
key_vault_id = data.azurerm_key_vault.existing.id
}
resource "azurerm_key_vault" "new" {
name = "New-KV"
resource_group_name = "New-RG"
...
}
resource "azurerm_key_vault_secret" "new-sauce" {
name = "secret-sauce"
value = data.azurerm_key_vault_secret.existing_sauce.value
key_vault_id = azurerm_key_vault.new.id
}
Of course, the user/service principal that you run Terraform with needs to have an access policy on the source Key Vault that allows reading secrets.
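For reference, a minimal sketch of such an access policy, assuming the identity running Terraform is the current azurerm client, could look like this:

data "azurerm_client_config" "current" {}

# Grant the identity running Terraform read access to secrets in the source vault.
resource "azurerm_key_vault_access_policy" "terraform_reader" {
  key_vault_id = data.azurerm_key_vault.existing.id
  tenant_id    = data.azurerm_client_config.current.tenant_id
  object_id    = data.azurerm_client_config.current.object_id

  secret_permissions = ["Get", "List"]
}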
Edit: As I understand from the comments, you want to iterate through all the existing secrets in a Key Vault and replicate them in another one. This is not possible with Terraform as of today, since there is no Terraform data source that lists all secrets in a Key Vault. To use the aforementioned data source, you need to specify each secret by its name.
To achieve what you want, you need something like PowerShell or the Azure CLI.
I have defined a resource to provision Databricks workspaces on Azure using Terraform as follows; it consumes a list of inputs from a tfvars file (one entry per workspace) and provisions them.
resource "azurerm_databricks_workspace" "workspace" {
for_each = { for r in var.databricks_workspace_list : r.workspace_nm => r}
name = each.key
resource_group_name = each.value.resource_group_name
location = each.value.location
sku = "standard"
tags = {
Environment = "Dev"
}
}
I am trying to create an additional resource as below:
resource "databricks_instance_pool" "smallest_nodes" {
instance_pool_name = "Smallest Nodes"
min_idle_instances = 0
max_capacity = 300
node_type_id = data.databricks_node_type.smallest.id // data block is defined
idle_instance_autotermination_minutes = 10
}
To create the instance pool, I need to pass the workspace ID in the databricks provider block as below:
provider "databricks" {
azure_client_id= *******
azure_client_secret= *******
azure_tenant_id= *******
azure_workspace_resource_id = azurerm_databricks_workspace.workspace.id
}
But when I run terraform plan, it fails with the below error:
Missing resource instance key

  azure_workspace_resource_id = azurerm_databricks_workspace.workspace.id

Because azurerm_databricks_workspace.workspace has "for_each" set, its attributes must be accessed on specific instances.

For example, to correlate with indices, use:
  azurerm_databricks_workspace.workspace[each.key]
I couldn't use for_each in the provider block, and I haven't been able to find a way to index the workspace ID in the provider block.
Appreciate your inputs.
TF version: 0.13
AzureRM: 3.10.0
Databricks: 0.5.7
The problem is that you may create multiple workspaces when you're using for_each in the azurerm_databricks_workspace resource, but your provider block refers to the resource generically rather than to a specific instance, so Terraform complains.
The solution here would be either:
1. Remove for_each if you're creating just one workspace, or
2. Instead of azurerm_databricks_workspace.workspace.id, refer to azurerm_databricks_workspace.workspace[<name>].id, where <name> is the key of the specific Databricks workspace instance from your list of workspaces (see the sketch below).
P.S. Your databricks_instance_pool resource doesn't have an explicit depends_on, so the operation will fail with an authentication error as described here.
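Putting both points together, a minimal sketch could look like this (the workspace key "my-workspace-name" and the var.* credentials are illustrative placeholders, not values from your configuration):

provider "databricks" {
  azure_client_id             = var.client_id
  azure_client_secret         = var.client_secret
  azure_tenant_id             = var.tenant_id
  # Pick one specific workspace instance by its for_each key.
  azure_workspace_resource_id = azurerm_databricks_workspace.workspace["my-workspace-name"].id
}

resource "databricks_instance_pool" "smallest_nodes" {
  instance_pool_name                    = "Smallest Nodes"
  min_idle_instances                    = 0
  max_capacity                          = 300
  node_type_id                          = data.databricks_node_type.smallest.id
  idle_instance_autotermination_minutes = 10

  # Ensure the workspace exists before the Databricks provider tries to authenticate against it.
  depends_on = [azurerm_databricks_workspace.workspace]
}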
It appears that Terraform uses keys for backend state files when persisting to an Azure storage account. I wish to use a single storage account with dedicated folders for different service principals, but without cross-folder write access. I am trying to avoid accidental overwrites of the state files by different service principals. But since Terraform uses the key to update the storage account, every service principal technically has the right to update every file, and the developer would have to take care not to accidentally reference the wrong state file. Any thoughts on how to protect against this?
You can use a SAS token generated for a specific container, so that it can be used by that service principal only and by no other service principals.
I tested with something like below:
data "terraform_remote_state" "foo" {
backend = "azurerm"
config = {
storage_account_name = "cloudshellansuman"
container_name = "test"
key = "prod.terraform.tfstate"
sas_token = "sp=racwdl&st=2021-09-28T05:49:01Z&se=2023-04-01T13:49:01Z&sv=2020-08-04&sr=c&sig=O87nHO01sPxxxxxxxxxxxxxsyQGQGLSYzlp6F8%3D"
}
}
provider "azurerm" {
features {}
use_msi = true
subscription_id = "948d4068-xxxxx-xxxxxx-xxxxxxxxxx"
tenant_id = "72f988bf-xxxx-xxxxx-xxxxxx-xxxxxxx"
}
resource "azurerm_resource_group" "test" {
name="xterraformtest12345"
location ="east us"
}
But if I change the container name to another container, I can't write to it; it errors out saying authentication failed, because the SAS token is for the test container, not the test1 container.
For more information on how to generate a SAS token for containers and how to configure the azurerm backend for Terraform, please refer to the below links:
Generate shared access signature (SAS) token for containers and blobs with Azure portal. | Microsoft Docs
Use Azure storage for Terraform remote state
OR
You can set the container's authentication method to Azure AD user account, after assigning the Storage Blob Data Contributor/Owner role to the service principal that will use that specific container.
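As a rough sketch, scoping the role assignment to a single container could look like the following (the azurerm_storage_account.state and azuread_service_principal.ci references are placeholders for resources you would define or look up yourself):

resource "azurerm_role_assignment" "state_container" {
  # Scope the data-plane role to one container only, not the whole storage account.
  scope                = "${azurerm_storage_account.state.id}/blobServices/default/containers/test1"
  role_definition_name = "Storage Blob Data Contributor"
  principal_id         = azuread_service_principal.ci.object_id
}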
Then you can use something like below:
data "terraform_remote_state" "foo" {
backend = "azurerm"
config = {
storage_account_name = "cloudshellansuman"
container_name = "test1"
key = "prod.terraform.tfstate"
subscription_id = "b83c1ed3-xxxx-xxxxxx-xxxxxxx"
tenant_id = "72f988bf-xxx-xxx-xxx-xxx-xxxxxx"
client_id = "f6a2f33d-xxxx-xxxx-xxx-xxxxx"
client_secret = "y5L7Q~oiMOoGCxm7fK~xxxxxxxxxxxxxxxxx"
use_azuread_auth =true
}
}
provider "azurerm"{
subscription_id = "b83c1ed3-xxxx-xxxxxx-xxxxxxx"
tenant_id = "72f988bf-xxx-xxx-xxx-xxx-xxxxxx"
client_id = "f6a2f33d-xxxx-xxxx-xxx-xxxxx"
client_secret = "y5L7Q~oiMOoGCxm7fK~xxxxxxxxxxxxxxxxx"
features {}
}
data "azurerm_resource_group" "test" {
name="resourcegroupname"
}
resource "azurerm_virtual_network" "example" {
name = "example-network"
resource_group_name = data.azurerm_resource_group.test.name
location = data.azurerm_resource_group.test.location
address_space = ["10.254.0.0/16"]
}
Output:
If the service principal doesn't have the role assigned to it for the container, then it will give an error like below:
Note: For the first scenario I used a system-assigned managed identity, but the same can be achieved with a service principal as well.
I want to update the details of an expired SP through Terraform. I can regenerate the SP by changing its expiration date, but the SP details are stored in a Key Vault, so updating the Key Vault with the same ID/secret errors out. Is there a way to update/delete the Key Vault secret through Terraform?
resource "azurerm_key_vault_secret" "sp_arm_client_id"
{
name = "ARM-CLIENT-ID"
value = az_sp.app_id key_vault_id = data.azurerm_key_vault.storable_kvs[each.key].id
}
I tested your scenario in my environment and was able to make the change successfully; the new value was stored as the current version and the previous one was kept as an older version, using the below code:
provider "azuread" {}
provider "azurerm" {
features{}
}
data "azuread_client_config" "current" {}
data "azuread_application" "appreg" {
display_name="ansumanterraformtest"
}
resource "azuread_application_password" "apppass" {
application_object_id = data.azuread_application.appreg.object_id
end_date_relative = "3h"
}
data "azurerm_key_vault" "kv" {
name = "kvname"
resource_group_name = "ansumantest"
}
resource "azurerm_key_vault_secret" "demo_sp_client_id" {
name = "demo-sp-client-id"
value = data.azuread_application.appreg.application_id
key_vault_id = data.azurerm_key_vault.kv.id
}
resource "azurerm_key_vault_secret" "demo_sp_client_secret" {
name = "demo-sp-client-secret"
value =azuread_application_password.apppass.value
key_vault_id = data.azurerm_key_vault.kv.id
}
Output:
Note: You might be getting the error because that secret in the Key Vault was not created from Terraform. If it was created from the portal or any other source, then you have to first import that secret into the Terraform state and then change it, so that Terraform can manage it.
Import command:
terraform import azurerm_key_vault_secret.example "https://example-keyvault.vault.azure.net/secrets/example/fdf067c93bbb4b22bff4d8b7a9a56217"
Reference:
azurerm_key_vault_secret | Resources | hashicorp/azurerm | Terraform Registry
I am trying to download a certificate from an Azure Key Vault and save it as a local file using Terraform. Is this operation possible at all with Terraform? I have checked but couldn't find a Terraform function that does that.
You may use the azurerm_key_vault_certificate_data data source.
Example from the docs:
data "azurerm_key_vault" "example" {
name = "examplekv"
resource_group_name = "some-resource-group"
}
data "azurerm_key_vault_certificate_data" "example" {
name = "secret-sauce"
key_vault_id = data.azurerm_key_vault.example.id
}
output "example_pem" {
value = data.azurerm_key_vault_certificate_data.example.pem
}
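If you also want the certificate written to disk, a minimal sketch using the hashicorp/local provider could be (the file name is illustrative):

resource "local_file" "example_pem" {
  # Write the PEM content of the certificate to a local file with restricted permissions.
  content         = data.azurerm_key_vault_certificate_data.example.pem
  filename        = "${path.module}/secret-sauce.pem"
  file_permission = "0600"
}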
I am trying to find some way of moving my certificates from a Key Vault in one Azure subscription to another Azure subscription. Is there any way of doing this?
Below is an approach to move a self-signed certificate created in Azure Key Vault, assuming it already exists.
--- Download PFX ---
First, go to the Azure Portal and navigate to the Key Vault that holds the certificate that needs to be moved. Then select the certificate and the desired version, and click Download in PFX/PEM format.
--- Import PFX ---
Now, go to the Key Vault in the destination subscription, Certificates, click +Generate/Import and import the PFX file downloaded in the previous step.
If you need to automate this process, the following article provides good examples related to your question:
https://blogs.technet.microsoft.com/kv/2016/09/26/get-started-with-azure-key-vault-certificates/
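If you have already downloaded the PFX, a minimal Terraform sketch of the import step could look like the following (the file name, the var.pfx_password variable, and the data.azurerm_key_vault.destination reference are assumptions, not values from your environment):

resource "azurerm_key_vault_certificate" "moved" {
  name         = "secret-sauce"
  key_vault_id = data.azurerm_key_vault.destination.id

  certificate {
    # Base64-encode the downloaded PFX and supply its password, if it has one.
    contents = filebase64("${path.module}/secret-sauce.pfx")
    password = var.pfx_password
  }
}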
I eventually used Terraform to achieve this. I read the certificates through the Key Vault secret data source and created new certificates from them.
The sample code is here:
terraform {
  required_version = ">= 0.13"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.17.0"
    }
  }
}

provider "azurerm" {
  features {}
}

locals {
  certificates = [
    "certificate_name_1",
    "certificate_name_2",
    "certificate_name_3",
    "certificate_name_4",
    "certificate_name_5",
    "certificate_name_6",
  ]
}

data "azurerm_key_vault" "old" {
  name                = "old_keyvault_name"
  resource_group_name = "old_keyvault_resource_group"
}

data "azurerm_key_vault" "new" {
  name                = "new_keyvault_name"
  resource_group_name = "new_keyvault_resource_group"
}

data "azurerm_key_vault_secret" "secrets" {
  for_each = toset(local.certificates)

  name         = each.value
  key_vault_id = data.azurerm_key_vault.old.id
}

resource "azurerm_key_vault_certificate" "secrets" {
  for_each = data.azurerm_key_vault_secret.secrets

  name         = each.value.name
  key_vault_id = data.azurerm_key_vault.new.id

  certificate {
    contents = each.value.value
  }
}
I wrote a post about this here as well.