Terraform: prevent deletion of resource

I have the following resource that adds a secret to an Azure Key Vault
resource "azurerm_key_vault_secret" "mySecret" {
  name         = "mySecret"
  value        = "helloWorld"
  key_vault_id = DYNAMIC_ID

  lifecycle {
    prevent_destroy = true
  }
}
When I run the terraform plan command against Key Vault 1, I want to create the secret in that vault, and this is successful. When I execute the same statement again, but against Key Vault 2, Terraform attempts to delete the secret from Key Vault 1. I would like Terraform to ignore the deletion. Is this possible?
At the end of the second execution, I would like my Secret in both Key Vault 1 and Key Vault 2.
Currently I get an error: "Instance cannot be destroyed".
Further, it would be ideal if, when I changed the "value", it updated the secret rather than erroring.
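A single configuration with for_each (a sketch; the map keys and vault IDs below are placeholders for your actual Key Vault resource IDs) would create one secret instance per vault, so applying against a second vault no longer replaces the first:

```hcl
variable "key_vault_ids" {
  type = map(string)
  # Placeholder IDs: substitute the real Key Vault resource IDs
  default = {
    kv1 = "KEY_VAULT_1_ID"
    kv2 = "KEY_VAULT_2_ID"
  }
}

resource "azurerm_key_vault_secret" "mySecret" {
  for_each     = var.key_vault_ids
  name         = "mySecret"
  value        = "helloWorld"
  key_vault_id = each.value

  lifecycle {
    prevent_destroy = true
  }
}
```

Because each vault gets its own instance in state, changing "value" should be an in-place update of each secret and should not trip prevent_destroy, which only blocks destroy/replace operations.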

Related

When using Terraform to build a MongoDB Atlas cluster, how do I use dynamic secrets through Vault?

I'm trying to get my Terraform (which manages our MongoDB Atlas infrastructure) to use dynamic secrets (through Vault, running on localhost for now) when Terraform is connecting to Atlas, but I can't seem to get it to work.
I can't find any examples of how to do this so I have put together a sample github repo showing the few things I've tried to do so far.
All the magic is contained in the 3 provider files. I have the standard method of connecting to atlas using static keys (even with said keys generated as temporary API keys through vault) see provider1.tf
The problem comes when I try and use the atlas vault secret engine for terraform along with the mongo atlas provider. There are just no examples!
My question is how do I use the atlas vault secret engine to generate and use temporary keys when provisioning infrastructure with terraform?
I've tried two different ways of wiring up the providers, see files provider2.tf and provider3.tf, code is copied here:
provider2.tf
provider "mongodbatlas" {
  public_key  = vault_database_secrets_mount.db.mongodbatlas[0].public_key
  private_key = vault_database_secrets_mount.db.mongodbatlas[0].private_key
}

provider "vault" {
  address = "http://127.0.0.1:8200"
}

resource "vault_database_secrets_mount" "db" {
  path = "db"

  mongodbatlas {
    name        = "foo"
    public_key  = var.mongodbatlas_org_public_key
    private_key = var.mongodbatlas_org_private_key
    project_id  = var.mongodbatlas_project_id
  }
}
provider3.tf
provider "mongodbatlas" {
  public_key  = vault_database_secret_backend_connection.db2.mongodbatlas[0].public_key
  private_key = vault_database_secret_backend_connection.db2.mongodbatlas[0].private_key
}

resource "vault_mount" "db1" {
  path = "mongodbatlas01"
  type = "database"
}

resource "vault_database_secret_backend_connection" "db2" {
  backend       = vault_mount.db1.path
  name          = "mongodbatlas02"
  allowed_roles = ["dev", "prod"]

  mongodbatlas {
    public_key  = var.mongodbatlas_org_public_key
    private_key = var.mongodbatlas_org_private_key
    project_id  = var.mongodbatlas_project_id
  }
}
Both methods give the same sort of error:
mongodbatlas_cluster.cluster-terraform01: Creating...
╷
│ Error: error creating MongoDB Cluster: POST https://cloud.mongodb.com/api/atlas/v1.0/groups/10000000000000000000001/clusters: 401 (request "") You are not authorized for this resource.
│
│ with mongodbatlas_cluster.cluster-terraform01,
│ on main.tf line 1, in resource "mongodbatlas_cluster" "cluster-terraform01":
│ 1: resource "mongodbatlas_cluster" "cluster-terraform01" {
Any pointers or examples would be greatly appreciated
many thanks
Once you have set up and started Vault, enabled mongodbatlas, and added the config and roles to Vault, it's actually rather easy to connect Terraform to Atlas using dynamic ephemeral keys created by Vault.
Run these commands first to start and configure vault locally:
vault server -dev
export VAULT_ADDR='http://127.0.0.1:8200'
vault secrets enable mongodbatlas
# Write your master API keys into vault
vault write mongodbatlas/config public_key=org-api-public-key private_key=org-api-private-key
vault write mongodbatlas/roles/test project_id=100000000000000000000001 roles=GROUP_OWNER ttl=2h max_ttl=5h cidr_blocks=123.45.67.1/24
Now add the vault provider and data source to your terraform config:
provider "vault" {
  address = "http://127.0.0.1:8200"
}

data "vault_generic_secret" "mongodbatlas" {
  path = "mongodbatlas/creds/test"
}
Finally, add the mongodbatlas provider with the keys as provided by the vault data source:
provider "mongodbatlas" {
  public_key  = data.vault_generic_secret.mongodbatlas.data["public_key"]
  private_key = data.vault_generic_secret.mongodbatlas.data["private_key"]
}
A full working example is shown in this example GitHub repo.

Update a Vault secret using Terraform

I'm trying to push a Terraform variable to Vault using this resource:
resource "vault_generic_secret" "secret" {
  path      = "mxv/terraform/machines/test"
  data_json = <<EOT
{
  "ip": "aws_instance.app_server.public_ip"
}
EOT
}
but what ends up in Vault is the variable name, not its value. Is there any way to push the value from Terraform to Vault?
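The reference is inside quotes, so Terraform writes the literal string rather than resolving it. A sketch of a fix using jsonencode (assuming an aws_instance.app_server resource exists in the same configuration):

```hcl
resource "vault_generic_secret" "secret" {
  path = "mxv/terraform/machines/test"

  # jsonencode produces valid JSON and resolves the reference to its value
  data_json = jsonencode({
    ip = aws_instance.app_server.public_ip
  })
}
```

Interpolation syntax ("ip": "${aws_instance.app_server.public_ip}") inside the heredoc would also work, but jsonencode guarantees valid JSON escaping.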

Terraform: populate secrets from a central Key Vault

I am new to Terraform scripts. I am working on Azure with Terraform; I have created a resource group, and in that resource group I have created a Key Vault. I want to populate secrets from a central Key Vault. Is there any way to do this?
Yes, you can import secrets using the azurerm_key_vault_secret data source: https://www.terraform.io/docs/providers/azurerm/d/key_vault_secret.html
data "azurerm_key_vault" "existing" {
  name                = "Test1-KV"
  resource_group_name = "Test1-RG"
}

data "azurerm_key_vault_secret" "existing_sauce" {
  name         = "secret-sauce"
  key_vault_id = data.azurerm_key_vault.existing.id
}

resource "azurerm_key_vault" "new" {
  name                = "New-KV"
  resource_group_name = "New-RG"
  ...
}

resource "azurerm_key_vault_secret" "new-sauce" {
  name         = "secret-sauce"
  value        = data.azurerm_key_vault_secret.existing_sauce.value
  key_vault_id = azurerm_key_vault.new.id
}
Of course, the user/service principal that you run Terraform with needs an access policy on the Key Vault that allows reading secrets.
//edit: As I understand from the comments, you want to iterate through all the existing secrets in a Key Vault and replicate them in another one. This is not possible with Terraform as of today, since there is no TF data source that lists all secrets in a Key Vault. To use the aforementioned data source, you need to specify each secret by its name.
To achieve what you want, you need something like PowerShell or the Azure CLI.
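As a sketch with the Azure CLI (the vault names are placeholders; this assumes an authenticated `az` session with read access to the source vault and set access on the destination):

```shell
SRC_VAULT="source-kv"   # placeholder: source Key Vault name
DST_VAULT="dest-kv"     # placeholder: destination Key Vault name

# List every secret name in the source vault, then copy each value across
for name in $(az keyvault secret list --vault-name "$SRC_VAULT" --query "[].name" -o tsv); do
  value=$(az keyvault secret show --vault-name "$SRC_VAULT" --name "$name" --query value -o tsv)
  az keyvault secret set --vault-name "$DST_VAULT" --name "$name" --value "$value"
done
```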

Terraform Incapsula provider fails to create custom certificate resource

We are trying to use the Terraform Incapsula provider to manage Imperva site and custom certificate resources.
We are able to create Imperva site resources but certificate resource creation fails.
Our use case is to get the certificate from Azure Key Vault and import it into Imperva using the Incapsula provider. We get the certificate from Key Vault using the Terraform "azurerm_key_vault_secret" data source. It returns the certificate as a Base64 string, which we pass as the "certificate" parameter into the Terraform "incapsula_custom_certificate" resource, along with the site ID that was created using the Terraform "incapsula_site" resource. When we run "terraform apply" we get the error below.
incapsula_custom_certificate.custom-certificate: Creating...
Error: Error from Incapsula service when adding custom certificate for site_id ******807: {"res":2,"res_message":"Invalid input","debug_info":{"certificate":"invalid certificate or passphrase","id-info":"13007"}}
on main.tf line 36, in resource "incapsula_custom_certificate" "custom-certificate":
36: resource "incapsula_custom_certificate" "custom-certificate" {
We tried reading the certificate from PFX file in Base64 encoding using Terraform "filebase64" function, but we get the same error.
Here is our Terraform code:
provider "azurerm" {
  version = "=2.12.0"
  features {}
}

data "azurerm_key_vault_secret" "imperva_api_id" {
  name         = var.imperva-api-id
  key_vault_id = var.kv.id
}

data "azurerm_key_vault_secret" "imperva_api_key" {
  name         = var.imperva-api-key
  key_vault_id = var.kv.id
}

data "azurerm_key_vault_secret" "cert" {
  name         = var.certificate_name
  key_vault_id = var.kv.id
}

provider "incapsula" {
  api_id  = data.azurerm_key_vault_secret.imperva_api_id.value
  api_key = data.azurerm_key_vault_secret.imperva_api_key.value
}

resource "incapsula_site" "site" {
  domain                 = var.client_facing_fqdn
  send_site_setup_emails = true
  site_ip                = var.tm_cname
  force_ssl              = true
}

resource "incapsula_custom_certificate" "custom-certificate" {
  site_id     = incapsula_site.site.id
  certificate = data.azurerm_key_vault_secret.cert.value
  #certificate = filebase64("certificate.pfx")
}
We were able to import the same PFX certificate file, using the same site ID, Imperva API ID and key, by calling the Imperva API directly from a Python script.
The certificate doesn't have a passphrase.
Are we doing something wrong or is this an Incapsula provider issue?
Looking through the source code of the provider it looks like it is already performing a base64 encode operation as part of the AddCertificate function, which means using the Terraform filebase64 function is double-encoding the certificate.
Instead, I think it should look like this:
resource "incapsula_custom_certificate" "custom-certificate" {
  site_id     = incapsula_site.site.id
  certificate = file("certificate.pfx")
}
If the value returned from Azure is already Base64-encoded, then something like this could work too:
certificate = base64decode(data.azurerm_key_vault_secret.cert.value)
Have you tried creating a self-signed cert, converting it to PFX with a passphrase, and using that?
I ask because Azure's PFX output has a blank/non-existent passphrase, and I've had issues with a handful of tools over the years that simply won't import a PFX unless you set a passphrase.
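One way to test that theory is to re-export the PFX with an explicit passphrase using openssl (a sketch; the file names and passphrase are placeholders, and the demo self-signed certificate stands in for the PFX downloaded from Azure):

```shell
# For demonstration only: create a throwaway self-signed cert and bundle it
# into a passphrase-less PFX, mimicking what Azure Key Vault exports.
openssl req -x509 -newkey rsa:2048 -keyout key0.pem -out cert0.pem -days 1 -nodes -subj "/CN=demo"
openssl pkcs12 -export -inkey key0.pem -in cert0.pem -passout pass: -out certificate.pfx

# Split the passphrase-less PFX into key and certificate
openssl pkcs12 -in certificate.pfx -nocerts -nodes -passin pass: -out key.pem
openssl pkcs12 -in certificate.pfx -clcerts -nokeys -passin pass: -out cert.pem

# Re-bundle with a non-empty export passphrase
openssl pkcs12 -export -inkey key.pem -in cert.pem -passout pass:MyPassphrase -out certificate-with-pass.pfx
```

The resulting certificate-with-pass.pfx can then be fed to the import to check whether the blank passphrase was the problem.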

Moving a certificate from one Key Vault to another Key Vault in a different subscription

I am trying to find some way of moving my certificates from a Key Vault in one Azure subscription to another Azure subscription. Is there any way of doing this?
Find below an approach to move a self-signed certificate created in Azure Key Vault, assuming it already exists.
--- Download PFX ---
First, go to the Azure Portal and navigate to the Key Vault that holds the certificate that needs to be moved. Then, select the certificate, the desired version and click Download in PFX/PEM format.
--- Import PFX ---
Now, go to the Key Vault in the destination subscription, Certificates, click +Generate/Import and import the PFX file downloaded in the previous step.
If you need to automate this process, the following article provides good examples related to your question:
https://blogs.technet.microsoft.com/kv/2016/09/26/get-started-with-azure-key-vault-certificates/
I eventually used Terraform to achieve this. I referenced the certificates from the Azure Key Vault secret data source and created new certificates from them.
The sample code is here:
terraform {
  required_version = ">= 0.13"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.17.0"
    }
  }
}

provider "azurerm" {
  features {}
}

locals {
  certificates = [
    "certificate_name_1",
    "certificate_name_2",
    "certificate_name_3",
    "certificate_name_4",
    "certificate_name_5",
    "certificate_name_6",
  ]
}

data "azurerm_key_vault" "old" {
  name                = "old_keyvault_name"
  resource_group_name = "old_keyvault_resource_group"
}

data "azurerm_key_vault" "new" {
  name                = "new_keyvault_name"
  resource_group_name = "new_keyvault_resource_group"
}

data "azurerm_key_vault_secret" "secrets" {
  for_each     = toset(local.certificates)
  name         = each.value
  key_vault_id = data.azurerm_key_vault.old.id
}

resource "azurerm_key_vault_certificate" "secrets" {
  for_each     = data.azurerm_key_vault_secret.secrets
  name         = each.value.name
  key_vault_id = data.azurerm_key_vault.new.id

  certificate {
    contents = each.value.value
  }
}
I wrote a post about this here as well.
