Update Vault secret using Terraform

I'm trying to push a Terraform value to Vault using this resource:
resource "vault_generic_secret" "secret" {
path = "mxv/terraform/machines/test"
data_json = <<EOT
{
"ip": "aws_instance.app_server.public_ip"
}
EOT
}
but what ends up in Vault is the variable reference as a literal string, not its value. Is there any way to push the value from Terraform to Vault?
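For reference, quoting the reference inside the JSON heredoc makes it a literal string. A minimal sketch of the usual fix (assuming the aws_instance.app_server resource exists in the same configuration) is to build the JSON with jsonencode so the real value is interpolated:
resource "vault_generic_secret" "secret" {
  path = "mxv/terraform/machines/test"

  # jsonencode serializes the actual attribute value into the JSON payload
  data_json = jsonencode({
    ip = aws_instance.app_server.public_ip
  })
}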

Related

When using Terraform to build a MongoDB Atlas cluster, how do I use dynamic secrets through Vault?

I'm trying to get my Terraform (which manages our MongoDB Atlas infrastructure) to use dynamic secrets (through Vault, running on localhost for now) when Terraform is connecting to Atlas, but I can't seem to get it to work.
I can't find any examples of how to do this, so I have put together a sample GitHub repo showing the few things I've tried so far.
All the magic is contained in the 3 provider files. I have the standard method of connecting to Atlas using static keys (even with said keys generated as temporary API keys through Vault); see provider1.tf.
The problem comes when I try to use the Atlas Vault secrets engine for Terraform along with the Mongo Atlas provider. There are just no examples!
My question is: how do I use the Atlas Vault secrets engine to generate and use temporary keys when provisioning infrastructure with Terraform?
I've tried two different ways of wiring up the providers; see files provider2.tf and provider3.tf, copied here:
provider2.tf
provider "mongodbatlas" {
public_key = vault_database_secrets_mount.db.mongodbatlas[0].public_key
private_key = vault_database_secrets_mount.db.mongodbatlas[0].private_key
}
provider "vault" {
address = "http://127.0.0.1:8200"
}
resource "vault_database_secrets_mount" "db" {
path = "db"
mongodbatlas {
name = "foo"
public_key = var.mongodbatlas_org_public_key
private_key = var.mongodbatlas_org_private_key
project_id = var.mongodbatlas_project_id
}
}
provider3.tf
provider "mongodbatlas" {
public_key = vault_database_secret_backend_connection.db2.mongodbatlas[0].public_key
private_key = vault_database_secret_backend_connection.db2.mongodbatlas[0].private_key
}
resource "vault_mount" "db1" {
path = "mongodbatlas01"
type = "database"
}
resource "vault_database_secret_backend_connection" "db2" {
backend = vault_mount.db1.path
name = "mongodbatlas02"
allowed_roles = ["dev", "prod"]
mongodbatlas {
public_key = var.mongodbatlas_org_public_key
private_key = var.mongodbatlas_org_private_key
project_id = var.mongodbatlas_project_id
}
}
Both methods give the same sort of error:
mongodbatlas_cluster.cluster-terraform01: Creating...
╷
│ Error: error creating MongoDB Cluster: POST https://cloud.mongodb.com/api/atlas/v1.0/groups/10000000000000000000001/clusters: 401 (request "") You are not authorized for this resource.
│
│ with mongodbatlas_cluster.cluster-terraform01,
│ on main.tf line 1, in resource "mongodbatlas_cluster" "cluster-terraform01":
│ 1: resource "mongodbatlas_cluster" "cluster-terraform01" {
Any pointers or examples would be greatly appreciated, many thanks.
Once you have set up and started Vault, enabled the mongodbatlas secrets engine, and added config and roles to Vault, it's actually rather easy to connect Terraform to Atlas using dynamic ephemeral keys created by Vault.
Run these commands first to start and configure Vault locally:
vault server -dev
export VAULT_ADDR='http://127.0.0.1:8200'
vault secrets enable mongodbatlas
# Write your master API keys into vault
vault write mongodbatlas/config public_key=org-api-public-key private_key=org-api-private-key
vault write mongodbatlas/roles/test project_id=100000000000000000000001 roles=GROUP_OWNER ttl=2h max_ttl=5h cidr_blocks=123.45.67.1/24
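To sanity-check the role before wiring Terraform to it, you can generate a credential pair by hand; this should print an ephemeral public/private key pair for the test role:
vault read mongodbatlas/creds/test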
Now add the vault provider and data source to your terraform config:
provider "vault" {
address = "http://127.0.0.1:8200"
}
data "vault_generic_secret" "mongodbatlas" {
path = "mongodbatlas/creds/test"
}
Finally, add the mongodbatlas provider with the keys as provided by the vault data source:
provider "mongodbatlas" {
public_key = data.vault_generic_secret.mongodbatlas.data["public_key"]
private_key = data.vault_generic_secret.mongodbatlas.data["private_key"]
}
A full example is shown in this example GitHub repo.

Terraform prevent deletion of resource

I have the following resource that adds a secret to an Azure Key Vault
resource "azurerm_key_vault_secret" "mySecret" {
name = "mySecret"
value = "helloWorld"
key_vault_id = DYANMIC_ID
lifecycle {
prevent_destroy = true
}
}
When I run terraform plan/apply against Key Vault 1, the secret is created successfully. When I execute the same configuration again, but against Key Vault 2, Terraform attempts to delete the secret from Key Vault 1. I would like Terraform to ignore that deletion. Is this possible?
At the end of the second execution, I would like my secret in both Key Vault 1 and Key Vault 2.
Currently I get an error: "Instance cannot be destroyed".
Further, it would be ideal if changing the value of "value" updated the secret rather than erroring.
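One hedged sketch of a workaround, assuming the goal is to have the secret in both vaults rather than to move it: drive the secret from a map of vault IDs with for_each, so each vault gets its own resource instance and neither run plans a destroy of the other's (the variable name and IDs below are placeholders):
variable "key_vault_ids" {
  type = map(string) # e.g. { kv1 = "<Key Vault 1 id>", kv2 = "<Key Vault 2 id>" }
}

resource "azurerm_key_vault_secret" "mySecret" {
  # One instance per vault; adding a vault to the map adds a secret
  # without touching the existing ones.
  for_each     = var.key_vault_ids
  name         = "mySecret"
  value        = "helloWorld"
  key_vault_id = each.value
}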

How to generate a file and re-use its content with Terraform

I would like to generate SSH keys with a local-exec provisioner, then read the content of the files.
resource "null_resource" "generate-ssh-keys-pair" {
provisioner "local-exec" {
command = <<EOT
ssh-keygen -t rsa -b 4096 -C "test" -P "" -f "testkey"
EOT
}
}
data "local_file" "public-key" {
depends_on = [null_resource.generate-ssh-keys-pair]
filename = "testkey.pub"
}
data "local_file" "private-key" {
depends_on = [null_resource.generate-ssh-keys-pair]
filename = "testkey"
}
terraform plan works, but when I run apply I get an error saying testkey and testkey.pub don't exist.
Thanks
Instead of generating a file using an external command and then reading it in, I would suggest using the Terraform tls provider to generate the key within Terraform itself, using tls_private_key:
terraform {
  required_providers {
    tls = {
      source = "hashicorp/tls"
    }
  }
}

resource "tls_private_key" "example" {
  algorithm = "RSA"
  rsa_bits  = 4096
}
The tls_private_key resource type exports two attributes that are equivalent to the two files you were intending to read in your example:
tls_private_key.example.private_key_pem: the private key in PEM format
tls_private_key.example.public_key_openssh: the public key in the format OpenSSH expects to find in .ssh/authorized_keys.
Please note the warning in the tls_private_key documentation that using this resource will cause the private key data to be saved in your Terraform state, and so you should protect that state data accordingly. That would also have been true for your approach of reading files from disk using data resources, because any value Terraform has available for use in expressions must always be stored in the state.
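As a hedged illustration of consuming those attributes in place of the two files (the output names here are arbitrary):
# Expose the generated keys; marking the private key sensitive keeps it
# out of normal terraform output.
output "public_key_openssh" {
  value = tls_private_key.example.public_key_openssh
}

output "private_key_pem" {
  value     = tls_private_key.example.private_key_pem
  sensitive = true
}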
I ran your code and there are no problems with it. It correctly generates testkey and testkey.pub.
So whatever causes it to fail for you, it's not this snippet of code you've provided in the question; the fault must be outside the snippet.
Generating an SSH key in Terraform is inherently insecure because it gets stored in the tfstate file. However, I had a similar problem to solve, and thought that the most secure/usable option was to use a secret management service plus a cloud bucket for the backend:
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "4.3.0"
    }
  }

  backend "gcs" {
    bucket = "tfstatebucket"
    prefix = "terraform/production"
  }
}

// Import Ansible private key from Google Secret Manager
resource "google_secret_manager_secret" "ansible_private_key" {
  secret_id = var.ansible_private_key_secret_id

  replication {
    automatic = true
  }
}

data "google_secret_manager_secret_version" "ansible_private_key" {
  secret  = google_secret_manager_secret.ansible_private_key.secret_id
  version = var.ansible_private_key_secret_id_version_number
}

resource "local_file" "ansible_imported_local_private_key" {
  sensitive_content = data.google_secret_manager_secret_version.ansible_private_key.secret_data
  filename          = var.ansible_imported_local_private_key_filename
  file_permission   = "0600"
}
In the case of GCP, I would add the secret in Google Secret Manager, then use terraform import on the secret, which will in turn write it to the backend bucket. That way it doesn't get stored in Git as plain text, you can have the key file local to your project (.terraform shouldn't be under version control), and it's arguably more secure in the bucket.
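For illustration, the import step might look like this (the project and secret IDs are placeholders):
terraform import google_secret_manager_secret.ansible_private_key projects/my-project/secrets/my-ansible-key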
So the workflow essentially is:
Human --> Secret Manager
Secret Manager --> Terraform Import --> GCS Bucket Backend
                                    |--> Create .terraform/ssh_key
.terraform/ssh_key --> Terraform/Ansible/Whatever
Hashicorp Vault would be another way to address this
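A minimal sketch of the Vault variant, mirroring the local_file pattern above (the secret path and field name are hypothetical):
data "vault_generic_secret" "ssh_key" {
  path = "secret/ansible/ssh_key" # hypothetical KV path
}

resource "local_file" "ansible_imported_local_private_key" {
  # Field name "private_key" is assumed; use whatever key the secret stores.
  sensitive_content = data.vault_generic_secret.ssh_key.data["private_key"]
  filename          = ".terraform/ssh_key"
  file_permission   = "0600"
}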

Terraform: Populate secrets from a central Key Vault

I am new to Terraform scripts. I am working on Azure with Terraform; I have created a resource group, and in that resource group I have created a Key Vault. I want to populate its secrets from a central Key Vault. Is there any way?
Yes, you can import secrets using the azurerm_key_vault_secret data source: https://www.terraform.io/docs/providers/azurerm/d/key_vault_secret.html
data "azurerm_key_vault" "existing" {
name = "Test1-KV"
resource_group_name = "Test1-RG"
}
data "azurerm_key_vault_secret" "existing-sauce" {
name = "secret-sauce"
key_vault_id = data.azurerm_key_vault.existing.id
}
resource "azurerm_key_vault" "new" {
name = "New-KV"
resource_group_name = "New-RG"
...
}
resource "azurerm_key_vault_secret" "new-sauce" {
name = "secret-sauce"
value = data.azurerm_key_vault_secret.existing_sauce.value
key_vault_id = azurerm_key_vault.new.id
}
Of course, the user/service principal that you run Terraform with needs an access policy on the Key Vault that allows reading secrets.
//edit: As I understand from the comments, you want to iterate through all the existing secrets in a Key Vault and replicate them in another KV. This is not possible with Terraform as of today, since there is no TF data source that would list all secrets in a KV. To use the aforementioned data source, you need to specify each secret by its name.
To achieve what you want, you need something like PowerShell or the az CLI.
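For example, a hedged az CLI sketch (vault names as in the snippet above) that copies every secret from one vault to another:
# List all secret names in the source vault, then copy each one over.
for name in $(az keyvault secret list --vault-name Test1-KV --query "[].name" -o tsv); do
  value=$(az keyvault secret show --vault-name Test1-KV --name "$name" --query value -o tsv)
  az keyvault secret set --vault-name New-KV --name "$name" --value "$value"
done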

Moving a certificate from one Key Vault to another Key Vault in a different subscription

I am trying to find some way of moving my certificates from a Key Vault in one Azure subscription to another Azure subscription. Is there any way of doing this?
Find below an approach to move a self-signed certificate that already exists in Azure Key Vault.
--- Download PFX ---
First, go to the Azure Portal and navigate to the Key Vault that holds the certificate that needs to be moved. Then, select the certificate, the desired version and click Download in PFX/PEM format.
--- Import PFX ---
Now, go to the Key Vault in the destination subscription, Certificates, click +Generate/Import and import the PFX file downloaded in the previous step.
If you need to automate this process, the following article provides good examples related to your question:
https://blogs.technet.microsoft.com/kv/2016/09/26/get-started-with-azure-key-vault-certificates/
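If you prefer the az CLI over the portal, a hedged sketch (vault and certificate names are placeholders, and the certificate's private key must be exportable):
# The full PFX lives in the certificate's backing secret in the source vault
az keyvault secret show --vault-name Source-KV --name my-cert --query value -o tsv | base64 -d > my-cert.pfx
# Import it into the destination vault
az keyvault certificate import --vault-name Dest-KV --name my-cert --file my-cert.pfx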
I eventually used Terraform to achieve this. I referenced the certificates through the azurerm_key_vault_secret data source and created new certificates from them.
The sample code is here:
terraform {
  required_version = ">= 0.13"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.17.0"
    }
  }
}

provider "azurerm" {
  features {}
}

locals {
  certificates = [
    "certificate_name_1",
    "certificate_name_2",
    "certificate_name_3",
    "certificate_name_4",
    "certificate_name_5",
    "certificate_name_6"
  ]
}

data "azurerm_key_vault" "old" {
  name                = "old_keyvault_name"
  resource_group_name = "old_keyvault_resource_group"
}

data "azurerm_key_vault" "new" {
  name                = "new_keyvault_name"
  resource_group_name = "new_keyvault_resource_group"
}

data "azurerm_key_vault_secret" "secrets" {
  for_each     = toset(local.certificates)
  name         = each.value
  key_vault_id = data.azurerm_key_vault.old.id
}

resource "azurerm_key_vault_certificate" "secrets" {
  for_each     = data.azurerm_key_vault_secret.secrets
  name         = each.value.name
  key_vault_id = data.azurerm_key_vault.new.id

  certificate {
    contents = each.value.value
  }
}
I wrote a post here as well.
