Add private IPs to a file - Terraform

I want to be able to dump the private IPs for the EC2 servers created by Terraform.
resource "aws_instance" "hello" {
count = "3"
tags {
Name = "${var.name}"
}
ami = "${var.AWS_AMI}"
instance_type = "${var.aws_instance_type}"
subnet_id = "${var.aws_subnet_id}"
How can I dump the private IPs created for these instances as a comma-separated list to a file, so that another bash script can read them from there?

One option is to specify an output variable in the Terraform file:
Example:
output "hello_ec2_private_ip" {
  value = "${join(",", aws_instance.hello.*.private_ip)}"
}
Then use the terraform output command to print the output value, and pipe it to a file.
Example:
terraform output hello_ec2_private_ip > private_hello.txt
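For the consuming bash script, a minimal sketch (assuming private_hello.txt was written by the command above; on Terraform 0.14+ you may want terraform output -raw so the value is printed without surrounding quotes):
# Split the comma-separated list into a bash array and loop over it
IFS=',' read -r -a ips < private_hello.txt
for ip in "${ips[@]}"; do
  echo "private IP: $ip"
done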

Related

How to combine "IP" and "name" in a list of instances with Terraform local-exec

I am trying to read the instance public IP and the instance name from Terraform and then write them to a file on the same line.
With the following command, I write this file:
provisioner "local-exec" {
command = "echo \"${join("\n", aws_instance.nodeStream.*.public_ip)}\" >> ../ouput_file"
}
output_file:
34.14.219.13
64.2.201.14
59.12.31.15
What I want is to have the following output_file:
34.14.219.13 instance_name1
64.2.201.14 instance_name2
59.12.31.15 instance_name3
So I have tried the following to concatenate both lists:
provisioner "local-exec" {
command = "echo \"${concat(sort(lookup(aws_instance.node1Stream.*.tags, "Name")), sort(aws_instance.node1Stream.*.public_ip))}\" >> ../../output_file"
}
The previous attempt throws:
Error: Invalid function argument: Invalid value for "inputMap" parameter: lookup() requires a map as the first argument.
Since your goal is to produce a string from a data structure, this seems like a good use for string templates:
locals {
  hosts_file_content = <<EOT
%{ for inst in aws_instance.node1Stream ~}
${inst.private_ip} ${inst.tags["Name"]}
%{ endfor ~}
EOT
}
With that local value defined, you can include it in the command argument of the provisioner like this:
provisioner "local-exec" {
command = "echo '${local.hosts_file_content}' >> ../../output_file"
}
If just getting that data into a file is your end goal, and that wasn't just a contrived example for the sake of this question, I'd recommend using the local_file resource instead. That way Terraform can manage the file like any other resource, including potentially updating it if the inputs change, without the need for any special provisioner triggering:
resource "local_file" "hosts_file" {
filename = "${path.root}/../../output_file"
content = <<EOT
%{ for inst in aws_instance.node1Stream ~}
${inst.private_ip} ${inst.tags["Name"]}
%{ endfor ~}
EOT
}
With that said, the caveat on the local_file documentation page applies both to this resource-based approach and the provisioner-based approach: Terraform is designed primarily for managing remote objects that can persist from one Terraform run to the next, not for objects that live only on the system where Terraform is currently running. Although these features do allow creating and modifying local files, it'll be up to you to make sure that the previous file is consistently available at the same location relative to the Terraform configuration next time you apply a change, or else Terraform will see the file gone and be forced to recreate it.

how to pass list input to aws vpc elb in terraform

Here I'm trying to provision an AWS Classic ELB in a VPC where I have 2 public subnets. These subnets are also provisioned by Terraform, and I'm trying to pass both subnet IDs to the ELB module. So the problem is I'm not able to give a list input to the ELB's subnets field.
The public_subnet variable works fine, as I have used it for the route table association; it's just that I'm not able to handle the list and give it as input to the ELB.
It works if I use subnets = [var.public_subnet.0, var.public_subnet.1].
Here's my code:
resource "aws_elb" "webelb" {
name = "foobar-terraform-elb"
#availability_zones = [var.public_subnet]
subnets = [var.public_subnet]
#
#
#
}
variable "public_subnet" {
type = list
}
subnet.tf
output "public_subnet" {
value = aws_subnet.public.*.id
}```
Error:
Error: Incorrect attribute value type
on elb/elb.tf line 4, in resource "aws_elb" "webelb":
4: availability_zones = [var.public_subnet]
Inappropriate value for attribute "availability_zones": element 0: string required.
Since var.public_subnet is already a list, [var.public_subnet] is equivalent to [["192.168.0.0/32"]] instead of the expected, un-nested input ["192.168.0.0/32"].
I.e., just use subnets = var.public_subnet.
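A minimal sketch of the corrected resource, assuming everything else stays as in the question:
resource "aws_elb" "webelb" {
  name = "foobar-terraform-elb"
  # Pass the list value directly; wrapping it in [ ... ] nests it one level too deep.
  subnets = var.public_subnet
}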

How do I assign unique "Name" tag to the EC2 instance(s)?

I am using Terraform 0.12. I am trying to build EC2 instances in bulk for a project, and instead of naming them sequentially I want to give each instance a unique name.
I am thinking of using dynamic tags, but I'm not quite sure how to incorporate them into the code.
resource "aws_instance" "tf_server" {
count = var.instance_count
instance_type = var.instance_type
ami = data.aws_ami.server_ami.id
associate_public_ip_address = var.associate_public_ip_address
##This provides sequential name.
tags = {
Name = "tf_server-${count.index +1}"
}
key_name = "${aws_key_pair.tf_auth.id}"
vpc_security_group_ids = ["${var.security_group}"]
subnet_id = "${element(var.subnets, count.index)}"
}
If I understand your requirement correctly, you can pass the list of VM names as a terraform variable and use count.index to get the name from a specific position in the list based on the count.
# variables.tf
# Length of list should be the same as the count of instances being created
variable "instance_names" {
  default = ["apple", "banana", "carrot"]
}
#main.tf
resource "aws_instance" "tf_server" {
  count                       = var.instance_count
  instance_type               = var.instance_type
  ami                         = data.aws_ami.server_ami.id
  associate_public_ip_address = var.associate_public_ip_address

  ##This provides names as per requirement from the list.
  tags = {
    Name = "${element(var.instance_names, count.index)}"
  }

  key_name               = "${aws_key_pair.tf_auth.id}"
  vpc_security_group_ids = ["${var.security_group}"]
  subnet_id              = "${element(var.subnets, count.index)}"
}
Would the following be similar to what you are after?
Define a list of name prefixes as a variable and then cycle through the naming prefixes using the element function.
variable "name_prefixes" {
default = ["App", "Db", "Web"]
}
...
##This provides sequential name.
tags = {
Name = "${element(var.name_prefixes, count.index)}${count.index + 1}"
}
...
The result would be App1, Db2, Web3, App4, Db5... The numbering is not ideal, but at least you would have a distinct name per instance.
The only way I can think of naming them sequentially (e.g. App1, App2, Db1, Db2 etc.) would require an individual resource for each type of instance and then just use count.index on the name like your original code.
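A hedged sketch of that per-type approach (the resource names and per-type count variables here are illustrative assumptions, not from the question):
resource "aws_instance" "app" {
  count         = var.app_count # hypothetical per-type count variable
  instance_type = var.instance_type
  ami           = data.aws_ami.server_ami.id

  tags = {
    Name = "App${count.index + 1}" # App1, App2, App3, ...
  }
}
# A second resource of the same shape, e.g. "db" with Name = "Db${count.index + 1}",
# would number its instances Db1, Db2, ... independently of the App numbering.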

Concatenate Public IP output with Server Name

I have written a Terraform script to create a few Azure Virtual Machines.
The number of VMs created is based upon a variable called type in my .tfvars file:
type = [ "Master-1", "Master-2", "Master-3", "Slave-1", "Slave-2", "Slave-3" ]
My variables.tf file contains the following local:
count_of_types = "${length(var.type)}"
And my resources.tf file contains the code required to actually create the relevant number of VMs from this information:
resource "azurerm_virtual_machine" "vm" {
count = "${local.count_of_types}"
name = "${replace(local.prefix_specific,"##TYPE##",var.type[count.index])}-VM"
location = "${azurerm_resource_group.main.location}"
resource_group_name = "${azurerm_resource_group.main.name}"
network_interface_ids = ["${azurerm_network_interface.main.*.id[count.index]}"]
vm_size = "Standard_B2ms"
tags = "${local.tags}"
Finally, in my output.tf file, I output the IP address of each server:
output "public_ip_address" {
value = ["${azurerm_public_ip.main.*.ip_address}"]
}
I am creating a Kubernetes cluster with 1x Master and 1x Slave VM. For this purpose, the script works fine - the first IP output is the Master and the second IP output is the Slave.
However, when I move to 8+ VMs in total, I'd like to know which IP refers to which VM.
Is there a way of amending my output to include the type local, or just the server's hostname alongside the Public IP?
E.g. 54.10.31.100 // Master-1.
Take a look at formatlist (one of Terraform's string manipulation functions), which can be used to iterate over the instance attributes and format tags and other attributes of interest.
output "ip-address-hostname" {
value = "${
formatlist(
"%s:%s",
azurerm_public_ip.resource_name.*.fqdn,
azurerm_public_ip.resource_name.*.ip_address
)
}"
}
Note that this is just draft pseudocode; you may have to tweak it and create additional data sources in your TF file to enumerate the attributes you need.
More reading available - https://www.terraform.io/docs/configuration/functions/formatlist.html
Raunak Jhawar's answer pointed me in the right direction, and therefore got the green tick.
For reference, here's the exact code I used in the end:
output "public_ip_address" {
value = "${formatlist("%s: %s", azurerm_virtual_machine.vm.*.name, azurerm_public_ip.main.*.ip_address)}"
}
This resulted in each VM's name being printed alongside its public IP address, one pair per line.

Terraform target aws_volume_attachment with only its corresponding aws_instance resource from a list

I am not able to target a single aws_volume_attachment with its corresponding aws_instance via -target.
The problem is that the aws_instance is taken from a list by using count.index, which forces terraform to refresh all aws_instance resources from that list.
In my concrete case I am trying to manage a consul cluster with terraform.
The goal is to be able to reinit a single aws_instance resource via the -target flag, so I can upgrade/change the whole cluster node by node without downtime.
I have the following tf code:
### IP suffixes
variable "subnet_cidr" {
  default = "10.10.0.0/16"
}
// I want nodes with addresses 10.10.1.100, 10.10.1.101, 10.10.1.102
variable "consul_private_ips_suffix" {
  default = {
    "0" = "100"
    "1" = "101"
    "2" = "102"
  }
}
###########
# EBS
#
// Get existing data EBS via Name Tag
data "aws_ebs_volume" "consul-data" {
  count = "${length(keys(var.consul_private_ips_suffix))}"

  filter {
    name   = "volume-type"
    values = ["gp2"]
  }

  filter {
    name   = "tag:Name"
    values = ["${var.platform_type}.${var.platform_id}.consul.data.${count.index}"]
  }
}
#########
# EC2
#
resource "aws_instance" "consul" {
  count = "${length(keys(var.consul_private_ips_suffix))}"
  ...
  private_ip = "${cidrhost(aws_subnet.private-b.cidr_block, lookup(var.consul_private_ips_suffix, count.index))}"
}
resource "aws_volume_attachment" "consul-data" {
count = "${length(keys(var.consul_private_ips_suffix))}"
device_name = "/dev/sdh"
volume_id = "${element(data.aws_ebs_volume.consul-data.*.id, count.index)}"
instance_id = "${element(aws_instance.consul.*.id, count.index)}"
}
This works perfectly fine for initializing the cluster.
Now I make a change to my user_data init script for the consul nodes and want to roll it out node by node.
I run terraform plan -target=aws_volume_attachment.consul-data[0] to reinit node 0.
This is when I run into the problem mentioned above: terraform refreshes all aws_instance resources because of instance_id = "${element(aws_instance.consul.*.id, count.index)}".
Is there a way to "force" tf to target a single aws_volume_attachment with only its corresponding aws_instance resource?
At the time of writing this sort of usage is not possible due to the fact that, as you've seen, an expression like aws_instance.consul.*.id creates a dependency on all the instances, before the element function is applied.
The -target option is not intended for routine use and is instead provided only for exceptional circumstances such as recovering carefully from an unintended change.
For this specific situation it may work better to use the ignore_changes lifecycle setting to prevent automatic replacement of the instances when user_data changes, like this:
resource "aws_instance" "consul" {
count = "${length(keys(var.consul_private_ips_suffix))}"
...
private_ip = "${cidrhost(aws_subnet.private-b.cidr_block, lookup(var.consul_private_ips_suffix, count.index))}"
lifecycle {
ignore_changes = ["user_data"]
}
}
With this set, Terraform will detect but ignore changes to the user_data attribute. You can then get the gradual replacement behavior you want by manually tainting the resources one at a time:
$ terraform taint aws_instance.consul[0]
On the next plan, Terraform will then see that this resource instance is tainted and produce a plan to replace it. This gives you direct control over when the resources are replaced, so you can therefore ensure that e.g. the consul leave step gets a chance to run first, or whatever other cleanup you need to do.
This workflow is recommended over -target because it makes the replacement step explicit. -target can be confusing in a collaborative environment because there is no evidence of its use, and thus no clear explanation of how the current state was reached. taint, on the other hand, explicitly marks your intention in the state where other team members can see it, and then replaces the resource via the normal plan/apply steps.
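(On newer Terraform versions, where terraform taint is deprecated, the same explicit replacement can be requested with terraform apply -replace="aws_instance.consul[0]", which is recorded visibly in the plan in the same way.)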
