How do I get the elastic load balancer dns name? - terraform

I've created an elastic beanstalk environment through terraform. I'd like to add a route53 record pointing at the load balancer's dns but I can't figure out how to get the full url from the outputs of the EB environment.
The aws_elastic_beanstalk_environment.xxx.load_balancers property contains the name but not the FQDN.

Once you have created both Beanstalk resources like so:
resource "aws_elastic_beanstalk_application" "eb_app" {
  name        = "eb_app"
  description = "some description"
}

resource "aws_elastic_beanstalk_environment" "eb_env" {
  name                = "eb_env"
  application         = "${aws_elastic_beanstalk_application.eb_app.name}"
  solution_stack_name = "64bit Amazon Linux 2015.03 v2.0.3 running Go 1.4"

  setting {
    namespace = "aws:ec2:vpc"
    name      = "VPCId"
    value     = "something"
  }

  setting {
    namespace = "aws:ec2:vpc"
    name      = "Subnets"
    value     = "somethingelse"
  }
}
The aws_elastic_beanstalk_environment documentation specifies that you can use the cname attribute to output the fully qualified DNS name for the environment.
To do that you would write something like:
output "cname" {
  value = "${aws_elastic_beanstalk_environment.eb_env.cname}"
}
Then, in the configuration where you invoke the module, you could write:
cname_to_pass_to_route53 = "${module.some_eb_module.cname}"
If the cname is not the exact version of the URL that you need, you could append or prepend to the variable when passing it in. It does say fully qualified DNS name, though, so I don't think you'd need to do that.
cname_to_pass_to_route53 = "maybeHere://${module.some_eb_module.cname}/orOverHere"
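If you then create the Route 53 record in the calling configuration, it could look something like the sketch below (the zone ID variable and record name are illustrative assumptions, not from the original post):

```hcl
# Hypothetical sketch: a CNAME record pointing at the Beanstalk DNS name.
# var.zone_id and "app.example.com" are placeholders for your own values.
resource "aws_route53_record" "eb" {
  zone_id = var.zone_id
  name    = "app.example.com"
  type    = "CNAME"
  ttl     = "300"
  records = ["${module.some_eb_module.cname}"]
}
```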

Related

How to create an IBM Cloud VSI using a custom image

I want to use Terraform to create a new virtual server from an existing custom image, just like doing it manually under https://cloud.ibm.com/vpc-ext/compute/images.
I used an example code snippet and only replaced the name of the image (r010-...).
data "ibm_is_image" "centos" {
  name = "r010-489ff05b-1494-4a05-8b12-c6f44a958859"
}

# Virtual Server Instance
resource "ibm_is_instance" "vsi1" {
  name    = "${local.BASENAME}-vsi1"
  vpc     = ibm_is_vpc.vpc-instance.id
  keys    = [data.ibm_is_ssh_key.ssh_key_id.id]
  zone    = local.ZONE
  image   = data.ibm_is_image.centos.id
  profile = "cx2-2x4"

  # References to the subnet and security groups
  primary_network_interface {
    subnet          = ibm_is_subnet.subnet1.id
    security_groups = [ibm_is_security_group.sg1.id]
  }
}
The error message is:
Error: No image found with name r010-489ff05b-1494-4a05-8b12-c6f44a958859
It seems that only public images can be used.
Seems like you're using an id in place of the name here:
data "ibm_is_image" "centos" {
  name = "r010-489ff05b-1494-4a05-8b12-c6f44a958859"
}
Try using the name of the image instead.
Here is an example: https://github.com/IBM-Cloud/isv-vsi-product-deploy-sample/blob/main/image-map.tf
This terraform file has image ids for different regions. Based on your VSI region, it will fetch the image id.
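The pattern in that file is essentially a map from region to image id, looked up with the region you deploy into; a rough sketch (the region keys and ids below are placeholders, not real image ids):

```hcl
# Placeholder sketch of the region-to-image-id map pattern; the ids are fake.
variable "image_map" {
  default = {
    "us-south" = "r006-00000000-0000-0000-0000-000000000000"
    "eu-de"    = "r010-00000000-0000-0000-0000-000000000000"
  }
}

locals {
  # Pick the image id that matches the region the VSI is created in.
  image_id = lookup(var.image_map, var.region)
}
```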
Your custom images are private. Visibility is an attribute that you can specify when looking up the image with the ibm_is_image data source.
Thus, I recommend trying:
data "ibm_is_image" "centos" {
  name       = "r010-489ff05b-1494-4a05-8b12-c6f44a958859"
  visibility = "private"
}
I confused the name with the id. The image name is expected rather than the id. Thanks!

Terraform: What is the simplest way to Incrementally add servers to a deployment?

I am a newbie with terraform so don't laugh :) I want to deploy a number of instances of a server, then add their IPs to a Route53 hosted zone. I will be using Terraform v0.12.24; no chance of 0.14 at the moment.
So far, I have the "easy", spaghetti approach working:
module server: provisions and creates a list of servers
module route53: adds route53 records, parameter = array of IPs
main.tf
module "hostedzone" {
  source     = "./route53"
  ncs_domain = var.ncs_domain
}

module "server" {
  source       = "./server"
  name         = "${var.ncs_hostname}-${var.ncs_id}"
  hcloud_token = var.server_htk
  servers = [
    {
      type     = "cx11",
      location = "fsn1",
    },
    {
      type     = "cx11",
      location = "fsn1",
    }
  ]
}

resource "aws_route53_record" "server1-record" {
  zone_id = module.hostedzone.zone.zone_id
  name    = "${var.ncs_hostname}.${var.ncs_domain}"
  type    = "A"
  ttl     = "300"
  records = module.server.server.*.ipv4_address
}
and the relevant server resource array:
resource "hcloud_server" "apiserver" {
  # Create one server per entry in var.servers
  count = length(var.servers)

  name        = "${var.name}-${count.index}"
  image       = var.image
  server_type = var.servers[count.index].type
  location    = var.servers[count.index].location
}
So if I run terraform apply, I get the server array created. Cool !
Now, I would like to be able to run this module to create and destroy specific servers on demand, like:
initially deploy the platform with one or two servers.
remove one of the initial servers in the array
add new servers
So, how could I use this incrementally, that is, without providing the whole array of servers every time? Like adding one to the existing list, or removing another.
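One common pattern (a sketch, not something from this thread) is to key the servers by a stable name in a map and use for_each instead of count, so that adding or removing one entry only touches that one server; for_each on resources needs Terraform 0.12.6 or newer, which v0.12.24 satisfies:

```hcl
# Sketch: servers keyed by name. Removing "web-b" later destroys only web-b,
# because the remaining instances keep their map keys (unlike count indexes,
# which shift and cause unrelated servers to be recreated).
variable "servers" {
  default = {
    "web-a" = { type = "cx11", location = "fsn1" }
    "web-b" = { type = "cx11", location = "fsn1" }
  }
}

resource "hcloud_server" "apiserver" {
  for_each    = var.servers
  name        = "${var.name}-${each.key}"
  image       = var.image
  server_type = each.value.type
  location    = each.value.location
}
```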

How can I overcome "Error: Cycle" on digitalocean droplet

I am sure this is a simple problem that I just don't know how to interpret at the moment.
I use 3 droplets (called rs) and have a template file which configures each.
[..]
data "template_file" "rsdata" {
  template = file("files/rsdata.tmpl")
  count    = var.count_rs_nodes

  vars = {
    docker_version   = var.docker_version
    private_ip_rs    = digitalocean_droplet.rs[count.index].ipv4_address_private
    private_ip_mysql = digitalocean_droplet.mysql.ipv4_address_private
  }
}

resource "digitalocean_droplet" "rs" {
  count              = var.count_rs_nodes
  image              = var.image
  name               = "${var.prefix}-rs-${count.index}"
  region             = var.region
  size               = var.rs_size
  private_networking = true
  user_data          = data.template_file.rsdata.rendered
  ssh_keys           = var.ssh_keys
  depends_on         = ["digitalocean_droplet.mysql"]
}
[..]
When I do a terraform apply I get:
Error: Cycle: digitalocean_droplet.rs, data.template_file.rsdata
Note this is terraform 0.12
What am I doing wrong and how can I overcome this please?
This error is returned because the data.template_file.rsdata resource refers to the digitalocean_droplet.rs resource and vice-versa. That creates an impossible situation for Terraform: there is no ordering Terraform could use to process these that would ensure that all of the necessary data is available at each step.
The key problem is that the private IPv4 address for a droplet is allocated as part of creating it, but the user_data is used as part of that creation and so it cannot include the IPv4 address that is yet to be allocated.
The most straightforward way to deal with this would be to not include the droplet's IP address as part of its user_data and instead arrange for whatever software is processing that user_data to fetch the IP address of the host directly from the network interface at runtime. The kernel running in that droplet will know what the IP address is, so you can retrieve it from there in principle.
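For example, a user_data boot script can ask the DigitalOcean metadata service for the droplet's own private address (the endpoint path below is DigitalOcean's documented metadata API; what you do with the value afterwards depends on your provisioning):

```shell
#!/bin/sh
# Runs inside the droplet at boot: the metadata service at 169.254.169.254
# is only reachable from the droplet itself, not from outside.
PRIVATE_IP=$(curl -s http://169.254.169.254/metadata/v1/interfaces/private/0/ipv4/address)
echo "private ip is ${PRIVATE_IP}"
```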
If for some reason including the IP addresses in the user_data is unavoidable (this can occur, for example, if there are a set of virtual machines that all need to be aware of each other) then a more complex alternative is to separate the allocation of the IP addresses from the creation of the instances. DigitalOcean doesn't have a mechanism to create private network interfaces separately from the droplets they belong to, so in this case it would be necessary to use public IP addresses via digitalocean_floating_ip, which may not be appropriate for all situations:
resource "digitalocean_floating_ip" "rs" {
  count  = var.count_rs_nodes
  region = var.region
}

resource "digitalocean_droplet" "rs" {
  count              = var.count_rs_nodes
  image              = var.image
  name               = "${var.prefix}-rs-${count.index}"
  region             = var.region
  size               = var.rs_size
  private_networking = true
  ssh_keys           = var.ssh_keys

  user_data = templatefile("${path.module}/files/rsdata.tmpl", {
    docker_version   = var.docker_version
    private_ip_rs    = digitalocean_floating_ip.rs[count.index].ip_address
    private_ip_mysql = digitalocean_droplet.mysql.ipv4_address_private
  })
}

resource "digitalocean_floating_ip_assignment" "rs" {
  count      = var.count_rs_nodes
  ip_address = digitalocean_floating_ip.rs[count.index].ip_address
  droplet_id = digitalocean_droplet.rs[count.index].id
}
Because the "floating IP assignment" is created as a separate step after the droplet is launched, there will be some delay before the floating IP is associated with the instance and so whatever software is relying on that IP address must be resilient to running before the association is created.
Note that I also switched from using data "template_file" to the templatefile function because the data source is offered only for backward-compatibility with Terraform 0.11 configurations; the built-in function is the recommended way to render external template files in Terraform 0.12, and avoids the need for a separate configuration block.

terraform google cloud nat using reserved static ip

We have reserved static (whitelisted) IP addresses that need to be assigned to a CloudNAT on GCP by terraform. The IPs are reserved and registered with a service provider, which takes weeks to get approved and added to their firewalls, so dynamic allocation is not an option.
The main problem for us is that the google_compute_router_nat section requires the nat_ip_allocate_option, but in this case the IP address has already been allocated, so it fails with an error stating exactly that. The only options for allocate are AUTO_ONLY and MANUAL_ONLY, but it seems maybe an EXISTING or RESERVED might be needed, unless I'm missing something obvious.
Here is the failing configuration:
resource "google_compute_address" "static_ip" {
  name   = "whitelisted-static-ip"
  region = "${var.project_region}"
}

resource "google_compute_router_nat" "cluster-nat" {
  name                               = "cluster-stg-nat"
  router                             = "${google_compute_router.router.name}"
  region                             = "${google_compute_router.router.region}"
  nat_ip_allocate_option             = "MANUAL_ONLY"
  nat_ips                            = ["${google_compute_address.static_ip.self_link}"]
  source_subnetwork_ip_ranges_to_nat = "LIST_OF_SUBNETWORKS"

  subnetwork {
    name                    = "${google_compute_subnetwork.service.self_link}"
    source_ip_ranges_to_nat = ["ALL_IP_RANGES"]
  }
}
Results in the following error:
Error: Error creating Address: googleapi: Error 409: The resource 'projects/staging-cluster/regions/us-central1/addresses/whitelisted-static-ip' already exists, alreadyExists
because the static IP resource is already reserved in GCP External IP Addresses and registered with the service provider.
Changing the google_compute_address resource to a data object did the trick. I modified it to be:
data "google_compute_address" "static_ip" {
  name   = "whitelisted-static-ip"
  region = "${var.project_region}"
}
Where the name of "whitelisted-static-ip" is what we assigned to the reserved external IP address when we created it. The updated router NAT resource then became:
resource "google_compute_router_nat" "cluster-nat" {
  name                               = "${var.cluster_name}-nat"
  router                             = "${google_compute_router.router.name}"
  region                             = "${google_compute_router.router.region}"
  nat_ip_allocate_option             = "MANUAL_ONLY"
  nat_ips                            = ["${data.google_compute_address.static_ip.self_link}"]
  source_subnetwork_ip_ranges_to_nat = "LIST_OF_SUBNETWORKS"

  subnetwork {
    name                    = "${google_compute_subnetwork.service.self_link}"
    source_ip_ranges_to_nat = ["PRIMARY_IP_RANGE"]
  }
}
which is only a change to the nat_ips field so it points at the data object. A simple two-word change and we're good to go. Excellent!
It looks like the problem is with the google_compute_address resource, not the NAT. You are trying to create a resource that already exists. Instead you should do one of the following:
If you want Terraform to manage this resource for you, import the resource into Terraform; see https://www.terraform.io/docs/import/ and https://www.terraform.io/docs/providers/google/r/compute_address.html#import
If you do not want Terraform to manage the IP address for you, then you can use a data object instead of a resource object. This is essentially a read-only resource lookup, so that you can reference it in Terraform but manage it somewhere else. See https://www.terraform.io/docs/configuration/data-sources.html and https://www.terraform.io/docs/providers/google/d/datasource_compute_address.html
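For the first option, the import command would look roughly like this, using the resource address from the configuration above and the full address path that appears in the 409 error message:

```shell
# Bring the already-reserved address under Terraform management
# instead of trying to create it again.
terraform import google_compute_address.static_ip \
  projects/staging-cluster/regions/us-central1/addresses/whitelisted-static-ip
```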

Google TXT record verification with Terraform on Digital Ocean

I am trying to verify a couple of websites I host under a domain I own. Google tells me:
Add the TXT record below to the DNS configuration for cescoferraro.xyz.
google-site-verification=RANDOM_HASH
What I am trying:
resource "digitalocean_domain" "domain" {
  name       = "cescoferraro.xyz"
  ip_address = "${digitalocean_droplet.master.ipv4_address}"
}

resource "digitalocean_record" "googleconfirmation" {
  domain = "${digitalocean_domain.domain.name}"
  type   = "TXT"
  name   = "google-site-verification"
  value  = "RANDOM_HASH"
}

resource "digitalocean_record" "googleconfirmationnnn" {
  domain = "${digitalocean_domain.domain.name}"
  type   = "TXT"
  name   = "what"
  value  = "google-site-verification=RANDOM_HASH"
}

resource "digitalocean_record" "googleconfirmationnssnn" {
  domain = "${digitalocean_domain.domain.name}"
  type   = "TXT"
  name   = "#"
  value  = "google-site-verification=RANDOM_HASH"
}
I have not been able to verify my domain yet; it might be due to DNS caching. I know these things take a while, but what is the right way?
In my experience, Google made a typo in the verification instructions when you are not trying to verify www.domain.com.
When trying to verify the domain mycoolhostname.domain.com, as in the question, it explicitly tells you:
Add the TXT record below to the DNS configuration for domain.com.
When it should say:
Add the TXT record below to the DNS configuration for mycoolhostname.domain.com.
So the terraform code, as @DusanBajic suggested, would be something like this:
resource "digitalocean_domain" "mycoolhostname" {
  name       = "mycoolhostname.domain.com"
  ip_address = "${digitalocean_droplet.master.ipv4_address}"
}

resource "digitalocean_record" "google-mycoolhostname-confirmation" {
  domain = "${digitalocean_domain.mycoolhostname.name}"
  type   = "TXT"
  name   = "#"
  value  = "google-site-verification=COOL_HASH"
}