How to refer to a resource created via a module in the same file? - terraform

Here is the setup:
- a module to create a CloudFront distribution
- use the CloudFront distribution module to create a distribution for the web app
- add a Route 53 record for the web app that points to the CloudFront distribution
Here is the code:
locals {
  globals = jsondecode(file("${path.module}/../../../globals.json"))
}

module "cloudfront_distribution" {
  source                          = "../../../modules/cloudfront-distribution/"
  env                             = "prod"
  resource_name                   = "www-example-com"
  cloudfront_domain_name          = "www.example.com"
  cloudfront_domain_aliases       = ["www.example.com"]
  origin_domain_name              = "app.example-origin.com"
  subdomain_name                  = ""
  price_class                     = "PriceClass_100"
  origin_https_port               = 443
  origin_http_port                = 80
  origin_ssl_protocols            = ["TLSv1.2"]
  is_route53_record_needed        = false
  is_prerender_function_attached  = true
}

resource "aws_route53_record" "www_example_com" {
  zone_id = "123456"
  name    = "www"
  type    = "CNAME"
  ttl     = 5
  records = [module.cloudfront_distribution.domain_name]
}
I am getting an error on the last line, where I reference the domain name of the newly created CloudFront distribution in order to create the Route 53 record. The domain name looks like d8493jg8r.cloudfront.net.

You have to declare the distribution's domain name as an output inside the module. Then you can reference the module's outputs at the top level.
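For example, a minimal sketch of the module side, assuming the distribution inside the module is declared as aws_cloudfront_distribution.this (the actual resource name in your module may differ):

# modules/cloudfront-distribution/outputs.tf
# Assumes the distribution resource inside the module is named "this";
# adjust the reference to match your module's actual resource name.
output "domain_name" {
  description = "Domain name of the distribution, e.g. d8493jg8r.cloudfront.net"
  value       = aws_cloudfront_distribution.this.domain_name
}

With that output declared, module.cloudfront_distribution.domain_name in the aws_route53_record above resolves to the generated cloudfront.net domain name.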

Related

How do I mark a LogDNA or SysDig instance as the default destination for Platform Logs or Metrics?

I'm using the Terraform provider for IBM Cloud to create a LogDNA instance. I'd like to mark this instance as the destination for Platform Logs.
Here is my Terraform:
resource "ibm_resource_instance" "logdna_us_south" {
  name              = "logging-us-south"
  location          = "us-south"
  service           = "logdna"
  plan              = "7-day"
  resource_group_id = ibm_resource_group.dev.id
}
Is it possible?
You need to set the default_receiver parameter when creating the instance as described in https://cloud.ibm.com/docs/Log-Analysis-with-LogDNA?topic=Log-Analysis-with-LogDNA-config_svc_logs#platform_logs_enabling_cli
Your Terraform should look like this:
resource "ibm_resource_instance" "logdna_us_south" {
  name              = "logging-us-south"
  location          = "us-south"
  service           = "logdna"
  plan              = "7-day"
  resource_group_id = ibm_resource_group.dev.id
  parameters = {
    "default_receiver" = true
  }
}

Terraform, ElasticSearch: This site can’t be reached

I provisioned Elasticsearch and got URL outputs for "domain_endpoint", "domain_hostname", "kibana_endpoint" and "kibana_hostname", but I cannot reach any of these URLs. I get "This site can’t be reached". What am I missing? Below is the code:
main.tf:
module "elasticsearch" {
source = "git::https://github.com/cloudposse/terraform-aws-elasticsearch.git?ref=tags/0.24.1"
security_groups = [data.terraform_remote_state.vpc.outputs.default_security_group_id]
vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id
zone_awareness_enabled = var.zone_awareness_enabled
subnet_ids = slice(data.terraform_remote_state.vpc.outputs.private_subnets, 0, 2)
elasticsearch_version = var.elasticsearch_version
instance_type = var.instance_type
instance_count = var.instance_count
encrypt_at_rest_enabled = var.encrypt_at_rest_enabled
dedicated_master_enabled = var.dedicated_master_enabled
create_iam_service_linked_role = var.create_iam_service_linked_role
kibana_subdomain_name = var.kibana_subdomain_name
ebs_volume_size = var.ebs_volume_size
dns_zone_id = var.dns_zone_id
kibana_hostname_enabled = var.kibana_hostname_enabled
domain_hostname_enabled = var.domain_hostname_enabled
advanced_options = {
"rest.action.multi.allow_explicit_index" = "true"
}
context = module.this.context
}
terraform.tfvars:
enabled                        = true
region                         = "us-west-2"
namespace                      = "dev"
stage                          = "abcd"
name                           = "abcd"
instance_type                  = "m5.xlarge.elasticsearch"
elasticsearch_version          = "7.7"
instance_count                 = 2
zone_awareness_enabled         = true
encrypt_at_rest_enabled        = false
dedicated_master_enabled       = false
elasticsearch_subdomain_name   = "abcd"
kibana_subdomain_name          = "abcd"
ebs_volume_size                = 250
create_iam_service_linked_role = false
dns_zone_id                    = "Z08006012KJUIEOPDLIUQ"
kibana_hostname_enabled        = true
domain_hostname_enabled        = true
You are placing your ES domain in a VPC, in private subnets. It does not matter whether the subnet is public or private; public access does not apply here. From the AWS docs:
To perform even basic GET requests, your computer must be able to connect to the VPC. This connection often takes the form of a VPN, managed network, or proxy server.
Even if you place it in a public subnet, it will not be accessible over the internet. A popular solution to this is an SSH tunnel, which is also described in the AWS docs for ES:
Testing VPC Domains

Terraform fires Cycle error when applying

I am trying to build a Galera cluster using Terraform. To do that I need to render the Galera config with the nodes' IPs, so I use a file template.
When applying, Terraform fires an error:
Error: Cycle: data.template_file.galera_node_config, hcloud_server.galera_node
It seems there is a circular reference: the servers are not created before the data template is used.
How can I work around this?
Thanks
galera_node.tf
data "template_file" "galera_node_config" {
template = file("sys/etc/mysql/mariadb.conf/galera.cnf")
vars = {
galera_node0 = hcloud_server.galera_node[0].ipv4_address
galera_node1 = hcloud_server.galera_node[1].ipv4_address
galera_node2 = hcloud_server.galera_node[2].ipv4_address
curnode_ip = hcloud_server.galera_node[count.index].ipv4_address
curnode = hcloud_server.galera_node[count.index].id
}
}
resource "hcloud_server" "galera_node" {
count = var.galera_nodes
name = "galera-${count.index}"
image = var.os_type
server_type = var.server_type
location = var.location
ssh_keys = [hcloud_ssh_key.default.id]
labels = {
type = "cluster"
}
user_data = file("galera_cluster.sh")
provisioner "file" {
content = data.template_file.galera_node_config.rendered
destination = "/tmp/galera_cnf"
connection {
type = "ssh"
user = "root"
host = self.ipv4_address
private_key = file("~/.ssh/id_rsa")
}
}
}
The problem here is that you have multiple nodes that all depend on each other, and so there is no valid order for Terraform to create them: they must all be created before any other one can be created.
To address this will require a different approach. There are a few different options for this, but the one that seems closest to what you were already trying is to use the special resource type null_resource to factor out the provisioning into a separate resource that Terraform can work on only after all of the hcloud_server instances are ready.
Note also that the template_file data source is deprecated in favor of the templatefile function, so this is a good opportunity to simplify the configuration by using the function instead.
Both of those changes together lead to this:
resource "hcloud_server" "galera_node" {
count = var.galera_nodes
name = "galera-${count.index}"
image = var.os_type
server_type = var.server_type
location = var.location
ssh_keys = [hcloud_ssh_key.default.id]
labels = {
type = "cluster"
}
user_data = file("galera_cluster.sh")
}
resource "null_resource" "galera_config" {
count = length(hcloud_server.galera_node)
triggers = {
config_file = templatefile("${path.module}/sys/etc/mysql/mariadb.conf/galera.cnf", {
all_addresses = hcloud_server.galera_node[*].ipv4_address
this_address = hcloud_server.galera_node[count.index].ipv4_address
this_id = hcloud_server.galera_node[count.index].id
})
}
provisioner "file" {
content = self.triggers.config_file
destination = "/tmp/galera_cnf"
connection {
type = "ssh"
user = "root"
host = hcloud_server.galera_node[count.index].ipv4_address
private_key = file("~/.ssh/id_rsa")
}
}
}
The triggers argument above serves to tell Terraform that it must re-run the provisioner each time the configuration file changes in any way, which could for example be because you've added a new node: all of the existing nodes would then be reprovisioned to include that additional node in their configurations.
Provisioners are considered a last resort in the Terraform documentation, but in this particular case the alternatives would likely be considerably more complicated. A typical non-provisioner answer to this would be to use a service discovery system where each node can register itself on startup and then discover the other nodes, for example with HashiCorp Consul's service catalog. But unless you have lots of similar use-cases in your infrastructure which could all share the Consul cluster, having to run another service is likely an unreasonable cost in comparison to just using a provisioner.
You are trying to use data.template_file.galera_node_config inside your resource "hcloud_server" "galera_node" while also referencing hcloud_server.galera_node inside data.template_file, which creates the cycle.
To avoid this problem:
- Remove the provisioner "file" from your hcloud_server.galera_node
- Move this provisioner "file" to a new null_resource, e.g. like this:
resource "null_resource" template_upload {
count = var.galera_nodes
provisioner "file" {
content = data.template_file.galera_node_config.rendered
destination = "/tmp/galera_cnf"
connection {
type = "ssh"
user = "root"
host = hcloud_server.galera_nodes[count.index].ipv4_address
private_key = file("~/.ssh/id_rsa")
}
depends_on = [hcloud_server.galera_node]
}

GCP Terraform: Setting port for backend service

Reading through the docs here:
https://www.terraform.io/docs/providers/google/r/compute_backend_service.html
We can define a backend service:
resource "google_compute_backend_service" "kubernetes-nginx-prod" {
name = "kubernetes-nginx-prod"
health_checks = [google_compute_health_check.kubernetes-nginx-prod-healthcheck.self_link]
backend {
group = replace(google_container_node_pool.pool-1.instance_group_urls[0], "instanceGroupManagers", "instanceGroups")
# TODO missing port 31443
}
}
It seems we are unable to set the backend service port via the Terraform settings.
Recreating the backend service without this setting actually leads to downtime for us, and the port must then be set manually.
We need to reference the port name that we gave in the instance group, for example:
resource "google_compute_backend_service" "test" {
name = "test-service"
port_name = "test-port"
protocol = "HTTP"
timeout_sec = 5
health_checks = []
backend {
group = "${google_compute_instance_group.test-ig.self_link}"
}
}
resource "google_compute_instance_group" "test-ig" {
name = "test-ig"
instances = []
named_port {
name = "test-port"
port = "${var.app_port}"
}
zone = "${var.zone}"
}
I resolved this by using a terraform data source to extract the instance group data from my google_container_node_pool, and then appending a google_compute_instance_group_named_port resource to it.
NOTE: my google_container_node_pool spans 3 zones (a, b, c); the code below shows the solution for zone c only.
# extract google_compute_instance_group from google_container_node_pool
data "google_compute_instance_group" "instance_group_zone_c" {
  # .2 refers to the last of the 3 instance groups in my node pool (zone c)
  name    = regex("([^/]+)/?$", "${google_container_node_pool.k8s_node_pool.instance_group_urls.2}").0
  zone    = regex("${var.region}-?[abc]", "${google_container_node_pool.k8s_node_pool.instance_group_urls.2}")
  project = var.project_id
}

# define a named port to attach to the google_compute_instance_group in my node pool
resource "google_compute_instance_group_named_port" "named_port_zone_c" {
  group = data.google_compute_instance_group.instance_group_zone_c.id
  zone  = data.google_compute_instance_group.instance_group_zone_c.zone
  name  = "port31090"
  port  = 31090
}

Terraform: How to get value of second public ip in module cloudposse ec2

How can I get the value of the second public IP address from the Terraform EC2 module?
Module- https://github.com/cloudposse/terraform-aws-ec2-instance
I've created an EC2 instance with the parameter additional_ips_count = 1. The instance is created with two network interfaces, and I need to get the public IP address of the second one.
Normally the module lets me extract the public IP from the public_ip output. For example, if I create a module instance called server, I can get the public IP address of the first network interface via module.server.public_ip. But how do I do it for the second interface created by setting additional_ips_count = 1?
You can parse the value out of what the module returns in its additional_eni_ids output, i.e. module.multiple_ip.additional_eni_ids.
module "multiple_ip" {
source = "git::https://github.com/cloudposse/terraform-aws-ec2-instance.git?ref=master"
ssh_key_pair = var.key_name
vpc_id = var.vpc_id
security_groups = [data.aws_security_group.this.id]
subnet = data.aws_subnet.private_subnet.id
associate_public_ip_address = true
name = "multiple-ip"
namespace = "eg"
stage = "dev"
additional_ips_count = 1
ebs_volume_count = 2
allowed_ports = [22, 80, 443]
instance_type = var.instance_type
}
output "multiple_ip_additional_eni_ids" {
value = module.multiple_ip.additional_eni_ids
}
output "z_additional_eni_public_ip" {
value = values(module.multiple_ip.additional_eni_ids)[0]
}
This will return the IP you want.
Outputs:

multiple_ip_additional_eni_ids = {
  "eni-0cc853fb9301b2bc8" = "100.20.97.243"
}
z_additional_eni_public_ip = 100.20.97.243
