We are trying to use the Terraform Incapsula provider to manage Imperva site and custom certificate resources.
We are able to create Imperva site resources, but certificate resource creation fails.
Our use case is to get the certificate from Azure Key Vault and import it into Imperva using the Incapsula provider. We get the certificate from Key Vault using the Terraform "azurerm_key_vault_secret" data source. It returns the certificate as a Base64 string, which we pass as the "certificate" parameter into the Terraform "incapsula_custom_certificate" resource, along with the site ID that was created using the Terraform "incapsula_site" resource. When we run "terraform apply" we get the error below.
incapsula_custom_certificate.custom-certificate: Creating...
Error: Error from Incapsula service when adding custom certificate for site_id ******807: {"res":2,"res_message":"Invalid input","debug_info":{"certificate":"invalid certificate or passphrase","id-info":"13007"}}
on main.tf line 36, in resource "incapsula_custom_certificate" "custom-certificate":
36: resource "incapsula_custom_certificate" "custom-certificate" {
We tried reading the certificate from the PFX file in Base64 encoding using the Terraform "filebase64" function, but we get the same error.
Here is our Terraform code:
provider "azurerm" {
  version = "=2.12.0"
  features {}
}

data "azurerm_key_vault_secret" "imperva_api_id" {
  name         = var.imperva-api-id
  key_vault_id = var.kv.id
}

data "azurerm_key_vault_secret" "imperva_api_key" {
  name         = var.imperva-api-key
  key_vault_id = var.kv.id
}

data "azurerm_key_vault_secret" "cert" {
  name         = var.certificate_name
  key_vault_id = var.kv.id
}

provider "incapsula" {
  api_id  = data.azurerm_key_vault_secret.imperva_api_id.value
  api_key = data.azurerm_key_vault_secret.imperva_api_key.value
}

resource "incapsula_site" "site" {
  domain                 = var.client_facing_fqdn
  send_site_setup_emails = true
  site_ip                = var.tm_cname
  force_ssl              = true
}

resource "incapsula_custom_certificate" "custom-certificate" {
  site_id     = incapsula_site.site.id
  certificate = data.azurerm_key_vault_secret.cert.value
  #certificate = filebase64("certificate.pfx")
}
We were able to import the same PFX certificate file, using the same site ID, Imperva API ID, and key, by calling the Imperva API directly from a Python script.
The certificate doesn't have a passphrase.
Are we doing something wrong or is this an Incapsula provider issue?
Looking through the provider's source code, it looks like it already performs a base64 encode operation as part of the AddCertificate function, which means using the Terraform filebase64 function double-encodes the certificate.
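To illustrate, the double encoding can be reproduced in isolation with dummy bytes (a sketch; not a real certificate):

```shell
# Simulate filebase64(): encode the raw file once
printf 'dummy-der-bytes' > cert.bin
once=$(base64 < cert.bin)
# The provider then encodes the value a second time before calling the API
twice=$(printf '%s' "$once" | base64)
# Decoding the doubly-encoded value once yields base64 text, not the original
# bytes, which is what the Incapsula API ends up trying to parse as a certificate
printf '%s' "$twice" | base64 -d
```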
Instead, I think it should look like this:
resource "incapsula_custom_certificate" "custom-certificate" {
  site_id     = incapsula_site.site.id
  certificate = file("certificate.pfx")
}
If the value returned from Azure is already base64-encoded, then something like this could work too:
certificate = base64decode(data.azurerm_key_vault_secret.cert.value)
Have you tried creating a self-signed cert, converting it to PFX with a passphrase, and using that?
I ask because Azure's PFX output has a blank/non-existent passphrase, and I've had issues with a handful of tools over the years that simply won't import a PFX unless you set a passphrase.
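To test that theory, a self-signed certificate with an explicit passphrase can be produced like this (a sketch; the names and passphrase are placeholders):

```shell
# Throwaway self-signed certificate and key
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 30 -subj "/CN=test.example.com"
# Export to PFX with an explicit passphrase, unlike Azure's blank-passphrase export
openssl pkcs12 -export -out test.pfx -inkey key.pem -in cert.pem -passout pass:testpass
```

Importing the resulting test.pfx (with its passphrase) would tell you whether the blank passphrase is the problem.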
Related
I have a certificate for an App Service in an Azure Key Vault. I want to import the Key Vault certificate to my web app in Terraform, but I'm not sure where I would refer to the Key Vault in the example below:
resource "azurerm_app_service_certificate_binding" "example" {
  hostname_binding_id = azurerm_app_service_custom_hostname_binding.example.id
  certificate_id      = azurerm_app_service_managed_certificate.example.id
  ssl_state           = "SniEnabled"
}
To bind an existing Key Vault certificate to your web app, as #json mentioned, we first need to read the Key Vault certificate with a data source and then bind it to the web app.
// First, read the external Key Vault
data "azurerm_key_vault" "production_keyvault" {
  name                = "testingkeyvault2022"
  resource_group_name = "KeyVaultWestEuropeBackend"
}

// Now read the certificate
data "azurerm_key_vault_secret" "prod_certificate" {
  name         = "testcert"
  key_vault_id = data.azurerm_key_vault.production_keyvault.id
}

// Now bind the web app to the domain and look for the certificate.
resource "azurerm_app_service_custom_hostname_binding" "website_app_hostname_bind" { // Website app
  depends_on = [
    azurerm_app_service_certificate.cert,
  ]
  hostname            = var.websiteurlbind
  app_service_name    = data.azurerm_app_service.read_website_app.name
  resource_group_name = data.azurerm_resource_group.Terraform.name
  ssl_state           = "SniEnabled"
  thumbprint          = azurerm_app_service_certificate.cert.thumbprint
}

// Get the certificate from the external Key Vault
resource "azurerm_app_service_certificate" "cert" {
  name                = "testingcert"
  resource_group_name = data.azurerm_resource_group.Terraform.name
  location            = data.azurerm_resource_group.Terraform.location
  pfx_blob            = data.azurerm_key_vault_secret.prod_certificate.value
}
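If you would rather use the azurerm_app_service_certificate_binding resource from the question, the certificate created above could be wired in like this (a sketch, equally untested; it assumes the hostname binding does not also set thumbprint/ssl_state itself):

```hcl
resource "azurerm_app_service_certificate_binding" "example" {
  hostname_binding_id = azurerm_app_service_custom_hostname_binding.website_app_hostname_bind.id
  certificate_id      = azurerm_app_service_certificate.cert.id
  ssl_state           = "SniEnabled"
}
```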
Note: I have not tested this due to an access issue on my end, but it should work.
Please see this SO thread for more information.
If the issue still persists, please re-open it at this GitHub issue.
I'm trying to get my Terraform (which manages our MongoDB Atlas infrastructure) to use dynamic secrets (through Vault, running on localhost for now) when connecting to Atlas, but I can't seem to get it to work.
I can't find any examples of how to do this, so I have put together a sample GitHub repo showing the few things I've tried so far.
All the magic is contained in the three provider files. I have the standard method of connecting to Atlas using static keys (even with those keys generated as temporary API keys through Vault); see provider1.tf.
The problem comes when I try to use the Atlas Vault secrets engine for Terraform along with the MongoDB Atlas provider. There are just no examples!
My question is: how do I use the Atlas Vault secrets engine to generate and use temporary keys when provisioning infrastructure with Terraform?
I've tried two different ways of wiring up the providers, see files provider2.tf and provider3.tf, code is copied here:
provider2.tf
provider "mongodbatlas" {
  public_key  = vault_database_secrets_mount.db.mongodbatlas[0].public_key
  private_key = vault_database_secrets_mount.db.mongodbatlas[0].private_key
}

provider "vault" {
  address = "http://127.0.0.1:8200"
}

resource "vault_database_secrets_mount" "db" {
  path = "db"

  mongodbatlas {
    name        = "foo"
    public_key  = var.mongodbatlas_org_public_key
    private_key = var.mongodbatlas_org_private_key
    project_id  = var.mongodbatlas_project_id
  }
}
provider3.tf
provider "mongodbatlas" {
  public_key  = vault_database_secret_backend_connection.db2.mongodbatlas[0].public_key
  private_key = vault_database_secret_backend_connection.db2.mongodbatlas[0].private_key
}

resource "vault_mount" "db1" {
  path = "mongodbatlas01"
  type = "database"
}

resource "vault_database_secret_backend_connection" "db2" {
  backend       = vault_mount.db1.path
  name          = "mongodbatlas02"
  allowed_roles = ["dev", "prod"]

  mongodbatlas {
    public_key  = var.mongodbatlas_org_public_key
    private_key = var.mongodbatlas_org_private_key
    project_id  = var.mongodbatlas_project_id
  }
}
Both methods give the same sort of error:
mongodbatlas_cluster.cluster-terraform01: Creating...
╷
│ Error: error creating MongoDB Cluster: POST https://cloud.mongodb.com/api/atlas/v1.0/groups/10000000000000000000001/clusters: 401 (request "") You are not authorized for this resource.
│
│ with mongodbatlas_cluster.cluster-terraform01,
│ on main.tf line 1, in resource "mongodbatlas_cluster" "cluster-terraform01":
│ 1: resource "mongodbatlas_cluster" "cluster-terraform01" {
Any pointers or examples would be greatly appreciated.
Many thanks.
Once you have set up and started Vault, enabled mongodbatlas, and added the config and roles to Vault, it's actually rather easy to connect Terraform to Atlas using dynamic, ephemeral keys created by Vault.
Run these commands first to start and configure vault locally:
vault server -dev
export VAULT_ADDR='http://127.0.0.1:8200'
vault secrets enable mongodbatlas
# Write your master API keys into vault
vault write mongodbatlas/config public_key=org-api-public-key private_key=org-api-private-key
vault write mongodbatlas/roles/test project_id=100000000000000000000001 roles=GROUP_OWNER ttl=2h max_ttl=5h cidr_blocks=123.45.67.1/24
Now add the vault provider and data source to your terraform config:
provider "vault" {
  address = "http://127.0.0.1:8200"
}

data "vault_generic_secret" "mongodbatlas" {
  path = "mongodbatlas/creds/test"
}
Finally, add the mongodbatlas provider with the keys as provided by the vault data source:
provider "mongodbatlas" {
  public_key  = data.vault_generic_secret.mongodbatlas.data["public_key"]
  private_key = data.vault_generic_secret.mongodbatlas.data["private_key"]
}
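With the provider wired up this way, downstream resources use the ephemeral keys transparently; for instance, a shared-tier cluster could be sketched as follows (the settings are placeholders, not taken from the repo):

```hcl
resource "mongodbatlas_cluster" "cluster-terraform01" {
  project_id = var.mongodbatlas_project_id
  name       = "cluster-terraform01"

  # Shared-tier (M2) settings; adjust for your environment
  provider_name               = "TENANT"
  backing_provider_name       = "AWS"
  provider_region_name        = "US_EAST_1"
  provider_instance_size_name = "M2"
}
```

Note that the keys only live as long as the Vault lease (ttl=2h above), so long-running applies need a sufficiently long TTL.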
A full example is shown in this example GitHub repo.
I am deploying an Azure Application Gateway using Terraform. In the resource settings, I have this configuration:
ssl_certificate {
  data     = filebase64(var.ssl_certificate_path)
  name     = "demo-app-gateway-certificate"
  password = var.ssl_certificate_password
}

trusted_root_certificate {
  data = filebase64(var.ssl_certificate_path)
  name = "demo-trusted-root-ca-certificate"
}
The certificate path is set as a variable:
variable "ssl_certificate_path" {
  default = "./certificate.pfx"
}
But when I run terraform apply I get the following error:
Error: creating/updating Application Gateway: (Name "demo-app-gateway" / Resource Group "XXXX"): network.ApplicationGatewaysClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="ApplicationGatewayTrustedRootCertificateInvalidData" Message="Data for certificate /subscriptions/XXXXX/resourceGroups/XXXX/providers/Microsoft.Network/applicationGateways/demo-app-gateway/trustedRootCertificates/demo-trusted-root-ca-certificate is invalid." Details=[]
I tried using both filebase64 and base64, but I keep getting the same error.
Can anyone please explain what I am doing wrong here? Thank you so much.
As mentioned in the comments, the SSL certificate should be in .pfx format, since it requires a private key, and the trusted root certificate should be in .cer format.
For more information, you can refer to the Manage Certificates section in this Microsoft document.
So, as a solution, you have to use the following:
ssl_certificate {
  name     = "app_listener"
  data     = filebase64("C:/powershellpfx.pfx")
  password = "Password!1234"
}

trusted_root_certificate {
  data = filebase64("C:/PowerShellCert.cer")
  name = "demo-trusted-root-ca-certificate"
}
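If you only have a PEM certificate and key, both formats above can be produced with openssl (a sketch using a throwaway self-signed pair; filenames and the password are placeholders):

```shell
# Throwaway self-signed certificate and key for illustration
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 30 -subj "/CN=demo.example.com"
# PFX (PKCS#12) for the ssl_certificate block -- it carries the private key
openssl pkcs12 -export -out powershellpfx.pfx -inkey key.pem -in cert.pem \
  -passout pass:'Password!1234'
# DER-encoded .cer for the trusted_root_certificate block -- public cert only
openssl x509 -in cert.pem -outform der -out PowerShellCert.cer
```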
I'm trying to create azurerm backend_http_settings in an Azure Application Gateway v2.0 using Terraform and Letsencrypt via the ACME provider.
I can successfully create a cert and import the .pfx into the frontend HTTPS listener; the acme and azurerm providers provide everything you need to handle PKCS#12.
Unfortunately, the backend wants a .cer file, presumably base64-encoded rather than DER, and I can't get it to work no matter what I try. My understanding is that a Let's Encrypt .pem file should be fine for this, but when I attempt to use the ACME provider's certificate_pem as the trusted_root_certificate, I get the following error:
Error: Error Creating/Updating Application Gateway "agw-frontproxy" (Resource Group "rg-mir"): network.ApplicationGatewaysClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="ApplicationGatewayTrustedRootCertificateInvalidData" Message="Data for certificate .../providers/Microsoft.Network/applicationGateways/agw-frontproxy/trustedRootCertificates/vnet-mir-be-cert is invalid." Details=[]
terraform plan works fine; the above error happens during terraform apply, when the azurerm provider complains that the cert data is invalid. I have written the certs to disk and they look as I'd expect. Here is a snippet with the relevant code:
locals {
  https_setting_name       = "${azurerm_virtual_network.vnet-mir.name}-be-tls-htst"
  https_frontend_cert_name = "${azurerm_virtual_network.vnet-mir.name}-fe-cert"
  https_backend_cert_name  = "${azurerm_virtual_network.vnet-mir.name}-be-cert"
}

provider "azurerm" {
  version = "~>2.7"
  features {
    key_vault {
      purge_soft_delete_on_destroy = true
    }
  }
}

provider "acme" {
  server_url = "https://acme-staging-v02.api.letsencrypt.org/directory"
}

resource "acme_certificate" "certificate" {
  account_key_pem           = acme_registration.reg.account_key_pem
  common_name               = "cert-test.example.com"
  subject_alternative_names = ["cert-test2.example.com", "cert-test3.example.com"]
  certificate_p12_password  = "<your password here>"

  dns_challenge {
    provider = "cloudflare"
    config = {
      CF_API_EMAIL      = "<your email here>"
      CF_DNS_API_TOKEN  = "<your token here>"
      CF_ZONE_API_TOKEN = "<your token here>"
    }
  }
}

resource "azurerm_application_gateway" "agw-frontproxy" {
  name                = "agw-frontproxy"
  location            = azurerm_resource_group.rg-mir.location
  resource_group_name = azurerm_resource_group.rg-mir.name

  sku {
    name     = "Standard_v2"
    tier     = "Standard_v2"
    capacity = 2
  }

  trusted_root_certificate {
    name = local.https_backend_cert_name
    data = acme_certificate.certificate.certificate_pem
  }

  ssl_certificate {
    name     = local.https_frontend_cert_name
    data     = acme_certificate.certificate.certificate_p12
    password = "<your password here>"
  }

  # Create HTTPS listener and backend
  backend_http_settings {
    name                           = local.https_setting_name
    cookie_based_affinity          = "Disabled"
    port                           = 443
    protocol                       = "Https"
    request_timeout                = 20
    trusted_root_certificate_names = [local.https_backend_cert_name]
  }
  # (remaining required gateway blocks omitted from the snippet)
}
How do I get the AzureRM Application Gateway to take an ACME .pem cert as trusted_root_certificates in an AGW SSL end-to-end config?
For me, the only thing that worked was using tightly coupled Windows tools. If you follow the documentation below, it's going to work. I spent two days fighting the same issue :)
Microsoft Docs
If you don't specify any certificate, the Azure v2 Application Gateway will default to using the certificate on the backend web server it directs traffic to. This eliminates the redundant installation of certificates: one on the web server (in this case a Traefik edge router) and one in the AGW backend.
This works around the question of how to get the certificate formatted correctly altogether. Unfortunately, I never could get a certificate installed, even with a Microsoft support engineer on the phone. He was like, "Yeah, it looks good, it should work, I don't know why it doesn't; can you just avoid it by using a v2 gateway and not installing a cert at all on the backend?"
A v2 gateway requires a static public IP and the "Standard_v2" SKU type and tier to work, as shown above.
It seems that if you set the password to an empty string, as in this issue: https://github.com/vancluever/terraform-provider-acme/issues/135
it suddenly works. This is because the certificate format already includes the password. It is unfortunate that this is not written in the documentation. I am going to try it now and give my feedback on that.
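In that case, the ssl_certificate block from the question would become something like this (a sketch based on the linked issue, assuming certificate_p12_password is left unset so the ACME provider uses its default empty password):

```hcl
ssl_certificate {
  name = local.https_frontend_cert_name
  data = acme_certificate.certificate.certificate_p12
  # The p12 bundle produced by the ACME provider already embeds its password,
  # so (per the linked issue) an empty string is expected here
  password = ""
}
```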
I am trying to find a way to move my certificates from a Key Vault in one Azure subscription to another. Is there any way of doing this?
Below is an approach to move a self-signed certificate created in Azure Key Vault, assuming it has already been created.
--- Download PFX ---
First, go to the Azure Portal and navigate to the Key Vault that holds the certificate that needs to be moved. Then, select the certificate, the desired version and click Download in PFX/PEM format.
--- Import PFX ---
Now, go to the Key Vault in the destination subscription, Certificates, click +Generate/Import and import the PFX file downloaded in the previous step.
If you need to automate this process, the following article provides good examples related to your question:
https://blogs.technet.microsoft.com/kv/2016/09/26/get-started-with-azure-key-vault-certificates/
I eventually used Terraform to achieve this. I referenced the certificates from the Azure Key Vault secret data source and created new certificates.
The sample code is here:
terraform {
  required_version = ">= 0.13"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.17.0"
    }
  }
}

provider "azurerm" {
  features {}
}

locals {
  certificates = [
    "certificate_name_1",
    "certificate_name_2",
    "certificate_name_3",
    "certificate_name_4",
    "certificate_name_5",
    "certificate_name_6",
  ]
}

data "azurerm_key_vault" "old" {
  name                = "old_keyvault_name"
  resource_group_name = "old_keyvault_resource_group"
}

data "azurerm_key_vault" "new" {
  name                = "new_keyvault_name"
  resource_group_name = "new_keyvault_resource_group"
}

data "azurerm_key_vault_secret" "secrets" {
  for_each     = toset(local.certificates)
  name         = each.value
  key_vault_id = data.azurerm_key_vault.old.id
}

resource "azurerm_key_vault_certificate" "secrets" {
  for_each     = data.azurerm_key_vault_secret.secrets
  name         = each.value.name
  key_vault_id = data.azurerm_key_vault.new.id

  certificate {
    contents = each.value.value
  }
}
I wrote a post here as well.