I'm trying to create a CloudFront distribution using an existing ACM certificate:
data "aws_acm_certificate" "issued" {
domain = "*.mydomain.com"
statuses = ["ISSUED"]
}
resource "aws_cloudfront_distribution" "cloudfront" {
...
viewer_certificate {
cloudfront_default_certificate = false
acm_certificate_arn = data.aws_acm_certificate.issued.id
minimum_protocol_version = "TLSv1.1_2016"
ssl_support_method = "sni-only"
}
...
}
I'm getting the error: Error: error updating CloudFront Distribution (EMLDE0O3OG6CZ): InvalidViewerCertificate: The specified SSL certificate doesn't exist, isn't in us-east-1 region, isn't valid, or doesn't include a valid certificate chain.
The certificate is already in use by another manually created distribution. Also, when I replace data.aws_acm_certificate.issued.id with the certificate ARN as a string, everything works fine.
OK, so looking a bit closer I realised that the certificate was coming from the region I'm deploying my resources to, not from "us-east-1".
Based on this answer, this is how I've fixed the problem:
provider "aws" {
region = var.aws_region
}
provider "aws" {
alias = "virginia"
region = "us-east-1"
}
data "aws_acm_certificate" "issued" {
domain = "*.example.com"
statuses = ["ISSUED"]
provider = aws.virginia
}
According to Terraform's docs, the provider without an alias is the default; I use the aliased one only to fetch my certificate data!
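For reference, a minimal sketch of the distribution's viewer_certificate block wired to the aliased data source (the rest of the distribution config is omitted; I'm reading the ARN via the data source's arn attribute, which is equivalent to id for this data source but more explicit):

resource "aws_cloudfront_distribution" "cloudfront" {
  # ... origins, behaviors, etc. ...
  viewer_certificate {
    acm_certificate_arn      = data.aws_acm_certificate.issued.arn
    minimum_protocol_version = "TLSv1.1_2016"
    ssl_support_method       = "sni-only"
  }
}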
I have a domain name registered in AWS Route53 with an ACM certificate. I am now attempting both to move that domain name and certificate to a new account and to manage the resources with Terraform. I used the AWS CLI to move the domain name to the new account and it appears to have worked fine. Then I tried running this Terraform code to create a new certificate and hosted zone for the domain.
resource "aws_acm_certificate" "default" {
domain_name = "mydomain.io"
validation_method = "DNS"
}
resource "aws_route53_zone" "external" {
name = "mydomain.io"
}
resource "aws_route53_record" "validation" {
name = aws_acm_certificate.default.domain_validation_options.0.resource_record_name
type = aws_acm_certificate.default.domain_validation_options.0.resource_record_type
zone_id = aws_route53_zone.external.zone_id
records = [aws_acm_certificate.default.domain_validation_options.0.resource_record_value]
ttl = "60"
}
resource "aws_acm_certificate_validation" "default" {
certificate_arn = aws_acm_certificate.default.arn
validation_record_fqdns = [
aws_route53_record.validation.fqdn,
]
}
There are two strange things about this. Primarily, the certificate is created but the validation never completes; it stays in Pending validation status. I read somewhere after this failed that you can't auto-validate and need to create the CNAME record manually. So I went into the console and clicked the "add CNAME to Route 53" button. This added the CNAME record appropriately to the new Route53 zone that Terraform created. But it's been pending for hours. I've clicked that same button several times; only one CNAME was created, and subsequent clicks have no effect.
Another oddity, and perhaps a clue, is that my website is still up and working. I believe this should have broken the website, since the domain is now owned by a new account, routing to a different hosted zone on that account, with a certificate that's still pending. However, everything still works as normal. So I think it's possible that the old certificate and hosted zone are affecting this. Does the old account need to release the domain, and do I need to delete that certificate? Deleting the certificate on the old account sounds unnecessary; it should simply no longer be given out.
I have not yet associated the certificate with CloudFront or an ALB, which I intend to do. But since it's not validated, my Terraform code for creating a CloudFront distribution dies.
It turns out that my domain was transferred along with its original set of name servers, while the name servers in the new Route53 hosted zone were all different. When these are created together through the console, it does the right thing. I'm not sure how to do the right thing here with Terraform, which I'm going to post another question about in a moment. But for now, the solution is to change the name servers on either the hosted zone or the registered domain so they match each other; see the sketch below.
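As a hedged sketch of how that alignment might look in Terraform, this assumes the aws_route53domains_registered_domain resource from recent AWS provider versions (it manages an already-registered domain rather than registering a new one, and Route 53 Domains is only served from us-east-1):

resource "aws_route53domains_registered_domain" "main" {
  domain_name = "mydomain.io"

  # Point the registered domain at the new hosted zone's name servers.
  dynamic "name_server" {
    for_each = aws_route53_zone.external.name_servers
    content {
      name = name_server.value
    }
  }
}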
This is working for me:
######################
data "aws_route53_zone" "main" {
name = var.domain
private_zone = false
}
locals {
  final_domain = var.wildcard_enable == true ? "*.${var.domain}" : var.domain
}
resource "aws_acm_certificate" "cert" {
domain_name = local.final_domain
validation_method = "DNS"
tags = {
"Name" = var.domain
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_route53_record" "cert_validation" {
depends_on = [aws_acm_certificate.cert]
zone_id = data.aws_route53_zone.main.id
name = sort(aws_acm_certificate.cert.domain_validation_options[*].resource_record_name)[0]
type = "CNAME"
ttl = "300"
records = [sort(aws_acm_certificate.cert.domain_validation_options[*].resource_record_value)[0]]
allow_overwrite = true
}
resource "aws_acm_certificate_validation" "cert" {
certificate_arn = aws_acm_certificate.cert.arn
validation_record_fqdns = [
aws_route53_record.cert_validation.fqdn
]
timeouts {
create = "60m"
}
}
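A note on the sort(...)[0] trick above: it only creates the validation record for a single domain. A more general variant, following the pattern in the AWS provider documentation, creates one record per domain_validation_options entry so subject alternative names are covered too:

resource "aws_route53_record" "cert_validation" {
  # One validation record per domain on the certificate.
  for_each = {
    for dvo in aws_acm_certificate.cert.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      type   = dvo.resource_record_type
      record = dvo.resource_record_value
    }
  }

  zone_id         = data.aws_route53_zone.main.zone_id
  name            = each.value.name
  type            = each.value.type
  ttl             = 300
  records         = [each.value.record]
  allow_overwrite = true
}

resource "aws_acm_certificate_validation" "cert" {
  certificate_arn         = aws_acm_certificate.cert.arn
  validation_record_fqdns = [for record in aws_route53_record.cert_validation : record.fqdn]
}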
We are trying to use the Terraform Incapsula provider to manage Imperva site and custom certificate resources.
We are able to create Imperva site resources but certificate resource creation fails.
Our use case is to get the certificate from Azure Key Vault and import it into Imperva using the Incapsula provider. We get the certificate from Key Vault using the Terraform "azurerm_key_vault_secret" data source. It returns the certificate as a Base64 string, which we pass as the "certificate" parameter into the Terraform "incapsula_custom_certificate" resource, along with the site ID that was created using the Terraform "incapsula_site" resource. When we run "terraform apply" we get the error below.
incapsula_custom_certificate.custom-certificate: Creating...
Error: Error from Incapsula service when adding custom certificate for site_id ******807: {"res":2,"res_message":"Invalid input","debug_info":{"certificate":"invalid certificate or passphrase","id-info":"13007"}}
on main.tf line 36, in resource "incapsula_custom_certificate" "custom-certificate":
36: resource "incapsula_custom_certificate" "custom-certificate" {
We tried reading the certificate from a PFX file in Base64 encoding using the Terraform "filebase64" function, but we get the same error.
Here is our Terraform code:
provider "azurerm" {
version = "=2.12.0"
features {}
}
data "azurerm_key_vault_secret" "imperva_api_id" {
name = var.imperva-api-id
key_vault_id = var.kv.id
}
data "azurerm_key_vault_secret" "imperva_api_key" {
name = var.imperva-api-key
key_vault_id = var.kv.id
}
data "azurerm_key_vault_secret" "cert" {
name = var.certificate_name
key_vault_id = var.kv.id
}
provider "incapsula" {
api_id = data.azurerm_key_vault_secret.imperva_api_id.value
api_key = data.azurerm_key_vault_secret.imperva_api_key.value
}
resource "incapsula_site" "site" {
domain = var.client_facing_fqdn
send_site_setup_emails = true
site_ip = var.tm_cname
force_ssl = true
}
resource "incapsula_custom_certificate" "custom-certificate" {
site_id = incapsula_site.site.id
certificate = data.azurerm_key_vault_secret.cert.value
#certificate = filebase64("certificate.pfx")
}
We were able to import the same PFX certificate file, using the same site ID, Imperva API ID, and key, by calling the Imperva API directly from a Python script.
The certificate doesn't have a passphrase.
Are we doing something wrong or is this an Incapsula provider issue?
Looking through the source code of the provider, it looks like it already performs a base64 encode operation as part of the AddCertificate function, which means using the Terraform filebase64 function double-encodes the certificate.
Instead, I think it should look like this:
resource "incapsula_custom_certificate" "custom-certificate" {
site_id = incapsula_site.site.id
certificate = file("certificate.pfx")
}
If the value returned from Azure is base64, then something like this could work too:
certificate = base64decode(data.azurerm_key_vault_secret.cert.value)
Have you tried creating a self-signed cert, converting it to PFX with a passphrase, and using that?
I ask because Azure's PFX output has a blank/non-existent passphrase, and I've had issues with a handful of tools over the years that simply won't import a PFX unless you set a passphrase.
I'm trying to create azurerm backend_http_settings in an Azure Application Gateway v2.0 using Terraform and Let's Encrypt via the ACME provider.
I can successfully create a cert and import the .pfx into the frontend HTTPS listener; the acme and azurerm providers provide everything you need to handle PKCS12.
Unfortunately, the backend wants a .cer file, presumably base64-encoded rather than DER, and I can't get it to work no matter what I try. My understanding is that a Let's Encrypt .pem file should be fine for this, but when I attempt to use the ACME provider's certificate_pem as the trusted_root_certificate, I get the following error:
Error: Error Creating/Updating Application Gateway "agw-frontproxy" (Resource Group "rg-mir"): network.ApplicationGatewaysClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="ApplicationGatewayTrustedRootCertificateInvalidData" Message="Data for certificate .../providers/Microsoft.Network/applicationGateways/agw-frontproxy/trustedRootCertificates/vnet-mir-be-cert is invalid." Details=[]
terraform plan works fine; the above error happens during terraform apply, when the azurerm provider gets angry that the cert data are invalid. I have written the certs to disk and they look as I'd expect. Here is a snippet with the relevant code:
locals {
  https_setting_name       = "${azurerm_virtual_network.vnet-mir.name}-be-tls-htst"
  https_frontend_cert_name = "${azurerm_virtual_network.vnet-mir.name}-fe-cert"
  https_backend_cert_name  = "${azurerm_virtual_network.vnet-mir.name}-be-cert"
}

provider "azurerm" {
  version = "~>2.7"
  features {
    key_vault {
      purge_soft_delete_on_destroy = true
    }
  }
}

provider "acme" {
  server_url = "https://acme-staging-v02.api.letsencrypt.org/directory"
}

resource "acme_certificate" "certificate" {
  account_key_pem           = acme_registration.reg.account_key_pem
  common_name               = "cert-test.example.com"
  subject_alternative_names = ["cert-test2.example.com", "cert-test3.example.com"]
  certificate_p12_password  = "<your password here>"

  dns_challenge {
    provider = "cloudflare"
    config = {
      CF_API_EMAIL      = "<your email here>"
      CF_DNS_API_TOKEN  = "<your token here>"
      CF_ZONE_API_TOKEN = "<your token here>"
    }
  }
}
resource "azurerm_application_gateway" "agw-frontproxy" {
name = "agw-frontproxy"
location = azurerm_resource_group.rg-mir.location
resource_group_name = azurerm_resource_group.rg-mir.name
sku {
name = "Standard_v2"
tier = "Standard_v2"
capacity = 2
}
trusted_root_certificate {
name = local.https_backend_cert_name
data = acme_certificate.certificate.certificate_pem
}
ssl_certificate {
name = local.https_frontend_cert_name
data = acme_certificate.certificate.certificate_p12
password = "<your password here>"
}
# Create HTTPS listener and backend
backend_http_settings {
name = local.https_setting_name
cookie_based_affinity = "Disabled"
port = 443
protocol = "Https"
request_timeout = 20
trusted_root_certificate_names = [local.https_backend_cert_name]
}
How do I get the AzureRM Application Gateway to take an ACME .pem cert as trusted_root_certificates in an AGW SSL end-to-end config?
For me, the only thing that worked was using tightly coupled Windows tools. If you follow the documentation below it's going to work. I spent 2 days fighting the same issue :)
Microsoft Docs
If you don't specify any certificate, the Azure v2 application gateway will default to trusting the certificate on the backend web server that it directs traffic to. This eliminates the redundant installation of certificates, one on the web server (in this case a Traefik edge router) and one in the AGW backend.
This works around the question of how to get the certificate formatted correctly altogether. Unfortunately, I never could get a certificate installed, even with a Microsoft Support Engineer on the phone. He was like "yeah, it looks good, it should work, don't know why it doesn't, can you just avoid it by using a v2 gateway and not installing a cert at all on the backend?"
A v2 gateway requires a static public IP and the "Standard_v2" SKU type and tier to work as shown above.
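In other words, the workaround is simply to drop the trusted_root_certificate block and the trusted_root_certificate_names reference from the snippet in the question, along the lines of this hedged sketch:

# Backend settings relying on the v2 gateway's default behavior of
# trusting the certificate presented by the backend server itself.
backend_http_settings {
  name                  = local.https_setting_name
  cookie_based_affinity = "Disabled"
  port                  = 443
  protocol              = "Https"
  request_timeout       = 20
}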
It seems that if you set the password to an empty string, as in https://github.com/vancluever/terraform-provider-acme/issues/135, it suddenly works. This is because the certificate format already includes the password. It is unfortunate that this is not written in the documentation. I am going to try it now and give my feedback on that.
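For illustration, a minimal sketch of what that would look like in the gateway resource, assuming certificate_p12 was generated without setting certificate_p12_password:

ssl_certificate {
  name     = local.https_frontend_cert_name
  data     = acme_certificate.certificate.certificate_p12
  password = ""  # empty string; the PKCS12 bundle carries its own password
}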
I use Terraform to deploy a Lambda to one AWS account and the S3 trigger for that Lambda in another. Because of that, I created two separate folders, each holding the state of a specific account.
However, I'd like to merge everything into one template. Is it possible to do it? Example:
provider "aws" {
profile = "${var.aws_profile}"
region = "eu-west-1"
}
provider "aws" {
alias = "bucket-trigger-account"
region = "eu-west-1"
profile = "${var.aws_bucket_trigger_profile}
}
I want the following resource to be provisioned by the aws.bucket-trigger-account provider. How can I do it?
resource "aws_s3_bucket_notification" "bucket_notification" {
bucket = "${var.notifications_bucket}"
lambda_function {
lambda_function_arn = "arn:aws:lambda:eu-west-1-xxx"
events = ["s3:ObjectCreated:*"]
filter_suffix = ".test"
}
}
Found out that simply using the provider argument on the resource lets you use a different provider for that resource: provider = "aws.bucket-trigger-account"
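Applied to the resource from the question, that would look something like this (in current Terraform versions the provider reference is written unquoted):

resource "aws_s3_bucket_notification" "bucket_notification" {
  provider = aws.bucket-trigger-account
  bucket   = "${var.notifications_bucket}"

  lambda_function {
    lambda_function_arn = "arn:aws:lambda:eu-west-1-xxx"
    events              = ["s3:ObjectCreated:*"]
    filter_suffix       = ".test"
  }
}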
I am trying to find some way of moving my certificates from a Key Vault in one Azure subscription to another Azure subscription. Is there any way of doing this?
Below is an approach to move a self-signed certificate created in Azure Key Vault, assuming it already exists.
--- Download PFX ---
First, go to the Azure Portal and navigate to the Key Vault that holds the certificate that needs to be moved. Then select the certificate and the desired version, and click Download in PFX/PEM format.
--- Import PFX ---
Now, go to the Key Vault in the destination subscription, Certificates, click +Generate/Import and import the PFX file downloaded in the previous step.
If you need to automate this process, the following article provides good examples related to your question:
https://blogs.technet.microsoft.com/kv/2016/09/26/get-started-with-azure-key-vault-certificates/
I eventually used Terraform to achieve this. I read the certificates via the azurerm_key_vault_secret data source and created new certificates from them.
Here is the sample code:
terraform {
  required_version = ">= 0.13"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.17.0"
    }
  }
}

provider "azurerm" {
  features {}
}

locals {
  certificates = [
    "certificate_name_1",
    "certificate_name_2",
    "certificate_name_3",
    "certificate_name_4",
    "certificate_name_5",
    "certificate_name_6"
  ]
}

data "azurerm_key_vault" "old" {
  name                = "old_keyvault_name"
  resource_group_name = "old_keyvault_resource_group"
}

data "azurerm_key_vault" "new" {
  name                = "new_keyvault_name"
  resource_group_name = "new_keyvault_resource_group"
}

data "azurerm_key_vault_secret" "secrets" {
  for_each     = toset(local.certificates)
  name         = each.value
  key_vault_id = data.azurerm_key_vault.old.id
}

resource "azurerm_key_vault_certificate" "secrets" {
  for_each     = data.azurerm_key_vault_secret.secrets
  name         = each.value.name
  key_vault_id = data.azurerm_key_vault.new.id

  certificate {
    contents = each.value.value
  }
}
I wrote a post about this here as well.