AWS NLB over Helm in Terraform - how to find DNS name?

I am using a Helm chart provisioned by Terraform which creates a Network Load Balancer, but I do not know how to get the DNS name of this load balancer so that I can create Route 53 records for it in Terraform.
If I could get its ARN, I could look it up with a data block and read dns_name, but Helm returns nothing like that for me.
Do you have any suggestions? I would like to keep this as close to IaC as possible.
PS: I am passing some values to the Helm chart so that it creates an NLB; the chart's native behaviour is to create a Classic LB:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
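For illustration, a minimal sketch of how such an annotation can be passed through the Terraform helm provider (the release, repository and chart names are illustrative, not my exact setup):
resource "helm_release" "ingress_nginx" {
  name       = "ingress-nginx"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
  namespace  = "kube-system"

  # Dots inside the annotation key are escaped so Helm treats them as literal dots.
  set {
    name  = "controller.service.annotations.service\\.beta\\.kubernetes\\.io/aws-load-balancer-type"
    value = "nlb"
  }
}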

I just found an answer; it's simple using the kubernetes_service data source.
Note: I had to specify the namespace, otherwise the service was null (not found).
data "kubernetes_service" "ingress_nginx" {
metadata {
name = "ingress-nginx-controller"
namespace = "kube-system"
}
}
output "k8s_service_ingress" {
description = "External DN name of load balancer"
value = data.kubernetes_service.ingress_nginx.status.0.load_balancer.0.ingress.0.hostname
}
This is covered in the official docs too: https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/data-sources/service
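As a follow-up, a hedged sketch of feeding that hostname into a Route 53 record (the zone ID local and the record name are placeholders), much like the ingress-based answer below:
resource "aws_route53_record" "nlb" {
  zone_id = local.route53_zone_id   # placeholder zone ID
  name    = "ingress.example.com"   # placeholder record name
  type    = "CNAME"
  ttl     = 300
  records = [data.kubernetes_service.ingress_nginx.status.0.load_balancer.0.ingress.0.hostname]
}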

I had to use kubernetes_ingress_v1 to create a Route 53 entry for the ingress hostname:
data "kubernetes_ingress_v1" "this" {
metadata {
name = "ingress-myservice"
namespace = "myservice"
}
depends_on = [
module.myservice-eks
]
}
resource "aws_route53_record" "this" {
zone_id = local.route53_zone_id
name = "whatever.myservice.com"
type = "CNAME"
ttl = "300"
records = [data.kubernetes_ingress_v1.this.status.0.load_balancer.0.ingress.0.hostname]
}

Related

What does `cluster_security_group_id` do in the Terraform EKS module?

From the documentation:
Description: Existing security group ID to be attached to the cluster. Required if create_cluster_security_group = false
I have set create_cluster_security_group = false and assigned a security group I created elsewhere (let's call it data-sg) to the cluster_security_group_id input. This security group is also attached to a couple of RDS instances, and data-sg has a rule allowing all ingress within itself:
module "data_sg" {
source = "terraform-aws-modules/security-group/aws"
version = "4.9.0"
name = "data-sg"
vpc_id = module.vpc.vpc_id
ingress_with_self = [
{
description = "All ingress within data-sg"
rule = "all-all"
self = true
}
]
...
}
Yet, pods in my EKS cluster cannot reach the RDS instances.
It seems that the pods are not within the data-sg security group, which raises the first question:
What is cluster_security_group_id actually used for?
Assuming this security group isn't actually being used in the way I thought, that raises a second question:
Which security group do I need to allow RDS-type ingress from in the data-sg security group?
There seems to be an output from the EKS module called cluster_primary_security_group_id, but adding this
ingress_with_source_security_group_id = [
  {
    description              = "RDS ingress from EKS cluster"
    from_port                = 5432
    to_port                  = 5432
    source_security_group_id = var.cluster_primary_security_group_id
  }
]
to the module "data_sg" { ... } above also does not allow me to connect to RDS from the EKS pods.
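A hedged sketch of an alternative worth trying, assuming the terraform-aws-modules/eks module (v18+) with its node_security_group_id output and pods that run with the node security group; the module reference and rule here are illustrative, not a confirmed fix:
ingress_with_source_security_group_id = [
  {
    description              = "PostgreSQL ingress from EKS worker nodes"
    from_port                = 5432
    to_port                  = 5432
    protocol                 = "tcp"
    source_security_group_id = module.eks.node_security_group_id
  }
]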

Terraform: How to retrieve the AKS managed outbound IP

For an AKS-managed SLB with the Standard SKU, Azure assigns a public IP automatically.
The name of this public IP is auto-generated, but it has the following tags:
"tags": {
"aks-managed-type": "aks-slb-managed-outbound-ip"
},
I'm unable to retrieve this IP after it's created.
The name is also auto-generated:
"name": "[parameters('publicIPAddresses_837ca1c7_1817_43b7_8f4d_34b750419d4b_name')]",
I tried to filter using the azurerm_public_ip data source, using the tags for filtering, but this is not working:
data "azurerm_public_ip" "example" {
resource_group_name = "rg-sample-004"
filter {
name = "tag:aks-managed-type"
values = [ "aks-slb-managed-outbound-ip" ]
}
}
The code above is incorrect because the name parameter is not provided, but I don't know the name until it's created.
I want to whitelist this IP for the Azure MySQL database I create at the apply stage.
Is there any other way to retrieve this public IP during terraform apply?
Here you go; we use this to whitelist access from AKS to key vaults, etc.:
data "azurerm_public_ip" "aks_outgoing" {
name = join("", (regex("([^/]+)$", join("", azurerm_kubernetes_cluster.aks.network_profile[0].load_balancer_profile[0].effective_outbound_ips))))
resource_group_name = "YOUR_RG"
}
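As a follow-up for the MySQL whitelisting mentioned in the question, a hedged sketch under the assumption of a classic azurerm_mysql_server (the server reference and rule name are illustrative; a Flexible Server would use azurerm_mysql_flexible_server_firewall_rule instead):
resource "azurerm_mysql_firewall_rule" "aks_outbound" {
  name                = "allow-aks-outbound"
  resource_group_name = "rg-sample-004"
  server_name         = azurerm_mysql_server.example.name
  start_ip_address    = data.azurerm_public_ip.aks_outgoing.ip_address
  end_ip_address      = data.azurerm_public_ip.aks_outgoing.ip_address
}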

Terraform error - Error update server is not set

Using Terraform v0.14.7, I'm creating a CNAME record set, but when I run terraform apply I get this error message:
Error: update server is not set
In the Azure platform, under "DNS zones", I have already created a zone called tech.com.br:
provider "azurerm" {
features {}
}
resource "dns_cname_record" "foo" {
zone = "tech.com.br."
name = "foo"
cname = "info.tech.com.br."
ttl = 3600
}
Could anyone help me?
If you have created a DNS zone in the Azure platform, you can manage DNS CNAME records within Azure DNS by using the azurerm_dns_cname_record resource instead of dns_cname_record; the latter comes from the hashicorp/dns provider, which writes records via dynamic DNS updates and reports "update server is not set" when no update server is configured in its provider block. Also, you don't need to provide the zone name with the trailing dot.
Change your code like this:
resource "azurerm_dns_cname_record" "foo" {
zone_name = "tech.com.br"
name = "foo"
record = "info.tech.com.br"
ttl = 3600
resource_group_name = "YourDNSZoneRG"
}
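For completeness, if you really wanted to keep the hashicorp/dns provider against a server that accepts RFC 2136 dynamic updates, it would need a provider block along these lines (the server address is a placeholder; Azure DNS does not accept such updates, which is why azurerm_dns_cname_record is the right choice here):
provider "dns" {
  update {
    # Placeholder address of a DNS server that accepts dynamic updates.
    server = "192.0.2.10"
  }
}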

Why isn't my AWS ACM certificate validating?

I have a domain name registered in AWS Route 53 with an ACM certificate. I am now attempting both to move that domain name and certificate to a new account and to manage the resources with Terraform. I used the AWS CLI to move the domain name to the new account, and it appears to have worked fine. Then I tried running this Terraform code to create a new certificate and hosted zone for the domain:
resource "aws_acm_certificate" "default" {
domain_name = "mydomain.io"
validation_method = "DNS"
}
resource "aws_route53_zone" "external" {
name = "mydomain.io"
}
resource "aws_route53_record" "validation" {
name = aws_acm_certificate.default.domain_validation_options.0.resource_record_name
type = aws_acm_certificate.default.domain_validation_options.0.resource_record_type
zone_id = aws_route53_zone.external.zone_id
records = [aws_acm_certificate.default.domain_validation_options.0.resource_record_value]
ttl = "60"
}
resource "aws_acm_certificate_validation" "default" {
certificate_arn = aws_acm_certificate.default.arn
validation_record_fqdns = [
aws_route53_record.validation.fqdn,
]
}
There are two things that are strange about this. Primarily, the certificate is created but the validation never completes; it is still in Pending validation status. I read somewhere after this failed that you can't auto-validate and need to create the CNAME record manually, so I went into the console and clicked the "add CNAME to Route 53" button. This added the CNAME record to the new Route 53 zone that Terraform created, but it has been pending for hours. I've clicked that same button several times; only one CNAME was created, and subsequent clicks have no effect.
Another oddity, and perhaps a clue, is that my website is still up and working. I believe this should have broken the website, since the domain is now owned by a new account, routing to a different hosted zone on that new account, with a certificate that is still pending. However, everything still works as normal. So I think it's possible that the old certificate and hosted zone are affecting this. Do they need to release the domain, and do I need to delete that certificate? Deleting the certificate on the old account sounds unnecessary; it should simply no longer be handed out.
I have not yet associated the certificate with CloudFront or an ALB, which I intend to do. But since it's not validated, my Terraform code for creating a CloudFront distribution dies.
It turns out that my transferred domain came with a set of name servers, but the name servers in the Route 53 hosted zone were all different. When these are created together through the console, it does the right thing. I'm not sure how to do the right thing here with Terraform, which I'm going to ask another question about in a moment. For now, the solution is to change the name servers on either the hosted zone or the registered domain so that they match.
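A hedged sketch of one way to keep the two in sync from Terraform, assuming the registration lives in the same account and using the hosted zone from the question (aws_route53domains_registered_domain adopts an existing registration rather than creating one, and the Route 53 Domains API requires a us-east-1 provider configuration):
resource "aws_route53domains_registered_domain" "this" {
  domain_name = "mydomain.io"

  # Point the registered domain at the hosted zone's name servers.
  dynamic "name_server" {
    for_each = aws_route53_zone.external.name_servers
    content {
      name = name_server.value
    }
  }
}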
This is working for me:
######################
data "aws_route53_zone" "main" {
  name         = var.domain
  private_zone = false
}
locals {
  final_domain = var.wildcard_enable == true ? "*.${var.domain}" : var.domain
}
resource "aws_acm_certificate" "cert" {
domain_name = local.final_domain
validation_method = "DNS"
tags = {
"Name" = var.domain
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_route53_record" "cert_validation" {
depends_on = [aws_acm_certificate.cert]
zone_id = data.aws_route53_zone.main.id
name = sort(aws_acm_certificate.cert.domain_validation_options[*].resource_record_name)[0]
type = "CNAME"
ttl = "300"
records = [sort(aws_acm_certificate.cert.domain_validation_options[*].resource_record_value)[0]]
allow_overwrite = true
}
resource "aws_acm_certificate_validation" "cert" {
certificate_arn = aws_acm_certificate.cert.arn
validation_record_fqdns = [
aws_route53_record.cert_validation.fqdn
]
timeouts {
create = "60m"
}
}
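As a side note, with AWS provider 3.x and later domain_validation_options is a set, so the sort(...)[0] indexing above is a workaround; a hedged for_each-based sketch of the same two resources, following the pattern in the AWS provider documentation, would be:
resource "aws_route53_record" "cert_validation" {
  for_each = {
    for dvo in aws_acm_certificate.cert.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      type   = dvo.resource_record_type
      record = dvo.resource_record_value
    }
  }

  allow_overwrite = true
  zone_id         = data.aws_route53_zone.main.id
  name            = each.value.name
  type            = each.value.type
  ttl             = 300
  records         = [each.value.record]
}

resource "aws_acm_certificate_validation" "cert" {
  certificate_arn         = aws_acm_certificate.cert.arn
  validation_record_fqdns = [for record in aws_route53_record.cert_validation : record.fqdn]
}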

How to add resource dependencies in Terraform

I have created a GCP Kubernetes cluster using Terraform and configured a few Kubernetes resources on it, such as namespaces and Helm releases. I would like Terraform to automatically destroy/recreate all of the Kubernetes resources if the GCP cluster is destroyed/recreated, but I can't seem to figure out how to do it.
The behaviour I am trying to reproduce is similar to what you would get if you used triggers with null_resource. Is this possible with normal resources?
resource "google_container_cluster" "primary" {
name = "marcellus-wallace"
location = "us-central1-a"
initial_node_count = 3
resource "kubernetes_namespace" "example" {
metadata {
annotations = {
name = "example-annotation"
}
labels = {
mylabel = "label-value"
}
name = "terraform-example-namespace"
#Something like this, but this only works with null_resources
triggers {
cluster_id = "${google_container_cluster.primary.id}"
}
}
}
In your specific case, you don't need to specify any explicit dependencies. One is created automatically because you reference cluster_id = "${google_container_cluster.primary.id}" in your second resource.
When you do need to set a dependency manually, you can use the depends_on meta-argument.
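A minimal sketch of the depends_on form mentioned above, reusing the resources from the question (the triggers block is dropped because it is not valid on a kubernetes_namespace):
resource "kubernetes_namespace" "example" {
  metadata {
    name = "terraform-example-namespace"
  }

  # Explicit dependency; usually redundant when an attribute of the cluster
  # is already referenced somewhere in this resource.
  depends_on = [google_container_cluster.primary]
}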
