How to generate a file and re-use its content with Terraform - terraform

I would like to generate SSH keys with a local-exec provisioner and then read the content of the files.
resource "null_resource" "generate-ssh-keys-pair" {
provisioner "local-exec" {
command = <<EOT
ssh-keygen -t rsa -b 4096 -C "test" -P "" -f "testkey"
EOT
}
}
data "local_file" "public-key" {
depends_on = [null_resource.generate-ssh-keys-pair]
filename = "testkey.pub"
}
data "local_file" "private-key" {
depends_on = [null_resource.generate-ssh-keys-pair]
filename = "testkey"
}
terraform plan works, but when I run the apply I get an error saying testkey and testkey.pub don't exist.
Thanks

Instead of generating a file using an external command and then reading it in, I would suggest using the Terraform tls provider to generate the key within Terraform itself, using tls_private_key:
terraform {
  required_providers {
    tls = {
      source = "hashicorp/tls"
    }
  }
}

resource "tls_private_key" "example" {
  algorithm = "RSA"
  rsa_bits  = 4096
}
The tls_private_key resource type exports two attributes that are equivalent to the two files you were intending to read in your example:
tls_private_key.example.private_key_pem: the private key in PEM format
tls_private_key.example.public_key_openssh: the public key in the format OpenSSH expects to find in .ssh/authorized_keys.
Please note the warning in the tls_private_key documentation that using this resource will cause the private key data to be saved in your Terraform state, and so you should protect that state data accordingly. That would also have been true for your approach of reading files from disk using data resources, because any value Terraform has available for use in expressions must always be stored in the state.
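If you do still need the key material written to disk, for example to hand it to a tool outside of Terraform, a minimal sketch using the hashicorp/local provider could look like the following; the filenames just mirror the ones from the question, and the same caveat about the private key ending up in state applies:
resource "local_file" "public_key" {
  content  = tls_private_key.example.public_key_openssh
  filename = "${path.module}/testkey.pub"
}

resource "local_file" "private_key" {
  # The private key is sensitive, so restrict permissions on the written file.
  content         = tls_private_key.example.private_key_pem
  filename        = "${path.module}/testkey"
  file_permission = "0600"
}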

I ran your code and there are no problems with it. It correctly generates testkey and testkey.pub.
So whatever causes it to fail for you, it's not the snippet of code you've provided in the question. The fault must be outside this snippet.

Generating an SSH key in Terraform is inherently insecure because it gets stored in the tfstate file. However, I had a similar problem to solve and found that the most secure/usable approach was to use a secret management service together with a cloud bucket for the backend:
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "4.3.0"
    }
  }

  backend "gcs" {
    bucket = "tfstatebucket"
    prefix = "terraform/production"
  }
}

# Import Ansible private key from Google Secret Manager
resource "google_secret_manager_secret" "ansible_private_key" {
  secret_id = var.ansible_private_key_secret_id

  replication {
    automatic = true
  }
}

data "google_secret_manager_secret_version" "ansible_private_key" {
  secret  = google_secret_manager_secret.ansible_private_key.secret_id
  version = var.ansible_private_key_secret_id_version_number
}

resource "local_file" "ansible_imported_local_private_key" {
  sensitive_content = data.google_secret_manager_secret_version.ansible_private_key.secret_data
  filename          = var.ansible_imported_local_private_key_filename
  file_permission   = "0600"
}
In the case of GCP, I would add the secret in Google Secret Manager, then use terraform import on the secret (a rough sketch of that step follows the workflow below), which will in turn write it to the backend bucket. That way it doesn't get stored in Git as plain text, you can keep the key file local to your project (.terraform shouldn't be under version control), and it's arguably more secure in the bucket.
So the workflow essentially is:
Human --> Secret Manager
Secret Manager --> Terraform Import --> GCS Bucket Backend
|--> Create .terraform/ssh_key
.terraform/ssh_key --> Terraform/Ansible/Whatever
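As a rough sketch of the import step: on Terraform 1.5 or later the import can also be declared in configuration; the project and secret IDs below are placeholders, not values from the question. On older versions, the equivalent is the terraform import CLI command against the same resource address.
# Hypothetical sketch: adopt the manually created secret into the state
# stored in the GCS backend. The IDs are placeholders.
import {
  to = google_secret_manager_secret.ansible_private_key
  id = "projects/my-project/secrets/ansible-private-key"
}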
HashiCorp Vault would be another way to address this.

Related

terraform remote state best practice

I am creating a few Terraform modules, and inside the modules I also create the resources for storing remote state (an S3 bucket and DynamoDB table).
When I then use the module, I write something like this:
# terraform {
#   backend "s3" {
#     bucket         = "name"
#     key            = "xxxx.tfstate"
#     region         = "rrrr"
#     encrypt        = true
#     dynamodb_table = "trrrrr"
#   }
# }

terraform {
  required_version = ">= 1.0.0, < 2.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = var.region
}

module "mymodule" {
  source      = "./module/mymodule"
  region      = "param1"
  prefix      = "param2"
  project     = "xxxx"
  username    = "ddd"
  contact     = "myemail"
  table_name  = "table-name"
  bucket_name = "uniquebucketname"
}
where I leave the remote state part commented out and let Terraform create a local state and create all resources (including the bucket and the DynamoDB table).
After the resources are created, I re-run terraform init and migrate the state to S3.
I wonder if this is good practice, or if there is something better for maintaining the state that also provides isolation.
That is an interesting approach. I would create the S3 bucket manually, since it's a one-time create for your state file management. Then I would add a policy to prevent deletion (see https://serverfault.com/questions/226700/how-do-i-prevent-deletion-of-s3-buckets) and enable versioning and/or a backup.
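If you later want to codify that protection in a small bootstrap configuration rather than clicking through the console, a sketch could look like this; the bucket name is a placeholder, and this assumes AWS provider 4.x, where versioning is configured via its own resource:
# Hypothetical sketch: protect a manually created state bucket.
resource "aws_s3_bucket_versioning" "state" {
  bucket = "my-terraform-state-bucket" # placeholder

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_policy" "deny_bucket_deletion" {
  bucket = "my-terraform-state-bucket" # placeholder

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "DenyBucketDeletion"
      Effect    = "Deny"
      Principal = "*"
      Action    = "s3:DeleteBucket"
      Resource  = "arn:aws:s3:::my-terraform-state-bucket"
    }]
  })
}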
Beyond this approach there are better practices, such as using a tool like Terraform Cloud, which is free for up to 5 users. Then in your Terraform root module configuration you would put this:
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "YOUR-TERRAFORM-CLOUD-ORG"

    workspaces {
      # name   = "" ## For single workspace jobs
      # prefix = "" ## For multiple workspaces
      name = "YOUR-ROOT-MODULE-WORKSPACE-NAME"
    }
  }
}
More details in this similar Q&A: Initial setup of terraform backend using terraform

When using terraform to build a mongodb atlas cluster how do i use dynamic secrets through vault

I'm trying to get my Terraform (which manages our MongoDB Atlas infrastructure) to use dynamic secrets (through Vault, running on localhost for now) when Terraform connects to Atlas, but I can't seem to get it to work.
I can't find any examples of how to do this, so I have put together a sample GitHub repo showing the few things I've tried so far.
All the magic is contained in the 3 provider files. I have the standard method of connecting to Atlas using static keys (even with said keys generated as temporary API keys through Vault), see provider1.tf.
The problem comes when I try to use the Atlas Vault secrets engine for Terraform along with the MongoDB Atlas provider. There are just no examples!
My question is: how do I use the Atlas Vault secrets engine to generate and use temporary keys when provisioning infrastructure with Terraform?
I've tried two different ways of wiring up the providers, see files provider2.tf and provider3.tf; the code is copied here:
provider2.tf
provider "mongodbatlas" {
public_key = vault_database_secrets_mount.db.mongodbatlas[0].public_key
private_key = vault_database_secrets_mount.db.mongodbatlas[0].private_key
}
provider "vault" {
address = "http://127.0.0.1:8200"
}
resource "vault_database_secrets_mount" "db" {
path = "db"
mongodbatlas {
name = "foo"
public_key = var.mongodbatlas_org_public_key
private_key = var.mongodbatlas_org_private_key
project_id = var.mongodbatlas_project_id
}
}
provider3.tf
provider "mongodbatlas" {
public_key = vault_database_secret_backend_connection.db2.mongodbatlas[0].public_key
private_key = vault_database_secret_backend_connection.db2.mongodbatlas[0].private_key
}
resource "vault_mount" "db1" {
path = "mongodbatlas01"
type = "database"
}
resource "vault_database_secret_backend_connection" "db2" {
backend = vault_mount.db1.path
name = "mongodbatlas02"
allowed_roles = ["dev", "prod"]
mongodbatlas {
public_key = var.mongodbatlas_org_public_key
private_key = var.mongodbatlas_org_private_key
project_id = var.mongodbatlas_project_id
}
}
Both methods give the same sort of error:
mongodbatlas_cluster.cluster-terraform01: Creating...
╷
│ Error: error creating MongoDB Cluster: POST https://cloud.mongodb.com/api/atlas/v1.0/groups/10000000000000000000001/clusters: 401 (request "") You are not authorized for this resource.
│
│ with mongodbatlas_cluster.cluster-terraform01,
│ on main.tf line 1, in resource "mongodbatlas_cluster" "cluster-terraform01":
│ 1: resource "mongodbatlas_cluster" "cluster-terraform01" {
Any pointers or examples would be greatly appreciated
many thanks
Once you have set up and started Vault, enabled the mongodbatlas secrets engine, and added config and roles to Vault, it's actually rather easy to connect Terraform to Atlas using dynamic ephemeral keys created by Vault.
Run these commands first to start and configure vault locally:
vault server -dev
export VAULT_ADDR='http://127.0.0.1:8200'
vault secrets enable mongodbatlas
# Write your master API keys into vault
vault write mongodbatlas/config public_key=org-api-public-key private_key=org-api-private-key
vault write mongodbatlas/roles/test project_id=100000000000000000000001 roles=GROUP_OWNER ttl=2h max_ttl=5h cidr_blocks=123.45.67.1/24
Now add the vault provider and data source to your terraform config:
provider "vault" {
address = "http://127.0.0.1:8200"
}
data "vault_generic_secret" "mongodbatlas" {
path = "mongodbatlas/creds/test"
}
Finally, add the mongodbatlas provider with the keys as provided by the vault data source:
provider "mongodbatlas" {
public_key = data.vault_generic_secret.mongodbatlas.data["public_key"]
private_key = data.vault_generic_secret.mongodbatlas.data["private_key"]
}
This is shown in full in this example GitHub repo.

Using a Terraform variable in an .hcl file (Vault)

I'm trying to automate path and policy creation in Vault.
Do you know how I can proceed? Variables declared in Terraform are not recognized in the .hcl file.
I tried to rename my file client-ro-policy.hcl to client-ro-policy.tf, but I have the same issue.
Variables are only recognized in files with the .tf extension.
Thanks
main.tf
# Use Vault provider
provider "vault" {
# It is strongly recommended to configure this provider through the
# environment variables:
# - VAULT_ADDR
# - VAULT_TOKEN
# - VAULT_CACERT
# - VAULT_CAPATH
# - etc.
}
acl-ro-policy.hcl
path "${var.client[0]}/k8s/preprod/*" {
capabilities = ["read"]
}
policies.tf
#---------------------
# Create policies
#---------------------
# Create 'client' policy
resource "vault_policy" "ro-client" {
name = "${var.client[0]}_k8s_preprod_ro"
policy = file("./hcl-ro-policy.tf")
}
variables.tf
variable "client" {
type = list(string)
}
variables.tfvars
client = ["titi", "toto","itutu"]
Result in vault:
Even though Terraform and Vault both use HCL as the underlying syntax of their respective configuration languages, their language interpreters are totally separate and so the Vault policy language implementation cannot make direct use of any values defined in the Terraform language.
Instead, you'll need to use the Terraform language to construct a suitable configuration for Vault. Vault supports a JSON variant of its policy language in order to make it easier to programmatically generate it, and so you can use Terraform's jsonencode function to build a JSON-based policy from the result of a Terraform expression, which may itself include references to values elsewhere in Terraform.
For example:
locals {
  vault_ro_policy = {
    path = {
      "${var.client[0]}/k8s/preprod/*" = {
        capabilities = ["read"]
      }
    }
  }
}
resource "vault_policy" "ro-client" {
name = "${var.client[0]}_k8s_preprod_ro"
policy = jsonencode(local.var_ro_policy)
}
The value of local.vault_ro_policy should encode to JSON as follows, assuming that var.client[0] has the value "example":
{
  "path": {
    "example/k8s/preprod/*": {
      "capabilities": ["read"]
    }
  }
}
Assuming that this is valid Vault JSON policy syntax (which I've not verified), this should be accepted by Vault as a valid policy. If I didn't get the JSON policy syntax exactly right then hopefully you can see how to adjust it to be valid; my expertise is with Terraform, so I focused on the Terraform language part here.

Terraform Incapsula provider fails to create custom certificate resource

We are trying to use the Terraform Incapsula provider to manage Imperva site and custom certificate resources.
We are able to create Imperva site resources, but certificate resource creation fails.
Our use case is to get the certificate from Azure Key Vault and import it into Imperva using the Incapsula provider. We get the certificate from Key Vault using the Terraform "azurerm_key_vault_secret" data source. It returns the certificate as a Base64 string that we pass as the "certificate" parameter into the Terraform "incapsula_custom_certificate" resource, along with the site ID that was created using the Terraform "incapsula_site" resource. When we run "terraform apply" we get the error below.
incapsula_custom_certificate.custom-certificate: Creating...
Error: Error from Incapsula service when adding custom certificate for site_id ******807: {"res":2,"res_message":"Invalid input","debug_info":{"certificate":"invalid certificate or passphrase","id-info":"13007"}}
on main.tf line 36, in resource "incapsula_custom_certificate" "custom-certificate":
36: resource "incapsula_custom_certificate" "custom-certificate" {
We tried reading the certificate from a PFX file in Base64 encoding using the Terraform "filebase64" function, but we get the same error.
Here is our Terraform code:
provider "azurerm" {
version = "=2.12.0"
features {}
}
data "azurerm_key_vault_secret" "imperva_api_id" {
name = var.imperva-api-id
key_vault_id = var.kv.id
}
data "azurerm_key_vault_secret" "imperva_api_key" {
name = var.imperva-api-key
key_vault_id = var.kv.id
}
data "azurerm_key_vault_secret" "cert" {
name = var.certificate_name
key_vault_id = var.kv.id
}
provider "incapsula" {
api_id = data.azurerm_key_vault_secret.imperva_api_id.value
api_key = data.azurerm_key_vault_secret.imperva_api_key.value
}
resource "incapsula_site" "site" {
domain = var.client_facing_fqdn
send_site_setup_emails = true
site_ip = var.tm_cname
force_ssl = true
}
resource "incapsula_custom_certificate" "custom-certificate" {
site_id = incapsula_site.site.id
certificate = data.azurerm_key_vault_secret.cert.value
#certificate = filebase64("certificate.pfx")
}
We were able to import the same PFX certificate file, using the same site ID, Imperva API ID and key, by calling the Imperva API directly from a Python script.
The certificate doesn't have a passphrase.
Are we doing something wrong, or is this an Incapsula provider issue?
Looking through the source code of the provider it looks like it is already performing a base64 encode operation as part of the AddCertificate function, which means using the Terraform filebase64 function is double-encoding the certificate.
Instead, I think it should look like this:
resource "incapsula_custom_certificate" "custom-certificate" {
site_id = incapsula_site.site.id
certificate = file("certificate.pfx")
}
If the value returned from Azure is base64-encoded, then something like this could work too:
certificate = base64decode(data.azurerm_key_vault_secret.cert.value)
Have you tried creating a self-signed cert, converting it to PFX with a passphrase, and using that?
I ask because Azure's PFX output has a blank/non-existent passphrase, and I've had issues with a handful of tools over the years that simply won't import a PFX unless you set a passphrase.
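For reference, a self-signed certificate can also be generated with the hashicorp/tls provider. A minimal sketch, assuming a recent provider version (v4+, where the key algorithm is inferred from the private key) and with arbitrary subject and validity values; converting the result to a passphrase-protected PFX would still happen outside Terraform, e.g. with openssl pkcs12:
# Hypothetical sketch: generate a self-signed certificate for testing.
resource "tls_private_key" "test" {
  algorithm = "RSA"
  rsa_bits  = 2048
}

resource "tls_self_signed_cert" "test" {
  private_key_pem = tls_private_key.test.private_key_pem

  subject {
    common_name = "test.example.com" # placeholder
  }

  validity_period_hours = 720
  allowed_uses          = ["key_encipherment", "digital_signature", "server_auth"]
}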

How to create a Terraform module of data sources?

I want to create a module of data sources, but I am unsure how to declare them. The different accounts are going to use the same ones, and they are already in place.
The data sources are for IAM and policies.
I know usually you do:
module "iam" {
source = "folder"
name = "blabla"
... }
Thanks a lot!
You could create a separate 'environment' for that. Let's name it general.
If you give it its own backend and configure it to use an S3 bucket as remote storage (recommended anyway if you work with multiple contributors), you can work with terraform_remote_state.
Just import the state of general into your environment with:
data "terraform_remote_state" "general" {
backend = "s3"
config {
region = "..." # e.g. "eu-central-1"
bucket = "..." # the remote-storage bucket name of 'general'
key = "..." # e.g. "environments/general/terraform.tfstate" (as defined in the 'general' backend!
}
}
You can then access the resources from that state with ami = data.terraform_remote_state.general.outputs.ami if you declared them as output values:
output "ami" {
description = "The ID of the default EC2 AMI"
value = "${var.ami}"
}
Of course you can also output resource attributes:
output "vpc_id" {
description = "The ID of the created VPC"
value = "${aws_vpc.vpc.id}"
}
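In the consuming environment those outputs are then available under outputs, for example (the resource and CIDR values here are just illustrative):
resource "aws_subnet" "example" {
  vpc_id     = data.terraform_remote_state.general.outputs.vpc_id
  cidr_block = "10.0.1.0/24" # placeholder
}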
