Is there a way to get the Terraform output for only the added resources? - terraform

I'm creating an azurerm_windows_virtual_machine resource whose count changes over time.
resource "azurerm_windows_virtual_machine" "this" {
  # The count variable could change
  count = var.virtual_machine_count[terraform.workspace]
  name  = "vmt${count.index}"
  ...
  tags = local.tags
}
After creating the VM resource, I'm creating a local_file that I would need to use as an Ansible inventory to configure the VMs.
resource "local_file" "ansible_inventory" {
  depends_on = [azurerm_windows_virtual_machine.this]
  content = templatefile(var.template_file_name,
    {
      private-ips                          = azurerm_windows_virtual_machine.this.*.private_ip_address,
      ansible-user                         = var.ansible_user,
      ansible-password                     = var.ansible_password,
      ansible-ssh-port                     = var.ansible_ssh_port,
      ansible-connection                   = var.ansible_connection,
      ansible-winrm-transport              = var.ansible_winrm_transport,
      ansible-winrm-server-cert-validation = var.ansible_winrm_server_cert_validation
    }
  )
  filename = var.template_output_file_name
}
The current behavior of this code is that whenever I increase the VM count, the local_file resource renders the IP addresses of all VMs, including the existing ones, into the template file.
What I was hoping to do was add the IP address of only the newly added VM to the template file.
Not sure if this is possible with Terraform only or if I need an external script to keep track of the existing and new IP addresses. Thanks.
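One direction worth sketching (not a confirmed solution — `var.ip_state_file` and the JSON bookkeeping file are hypothetical, and this ties correctness to local files on the machine running Terraform): have each run persist the full IP list to a side file, then compute the difference against the previous run's copy before rendering the inventory. The `fileexists`, `jsondecode`, and `contains` functions used here all exist in Terraform 0.12+:

```hcl
# Sketch: diff this run's IPs against the list recorded by the previous run.
# var.ip_state_file is a hypothetical path, e.g. "ips.json", written below.
locals {
  current_ips  = azurerm_windows_virtual_machine.this[*].private_ip_address
  previous_ips = fileexists(var.ip_state_file) ? jsondecode(file(var.ip_state_file)) : []
  new_ips      = [for ip in local.current_ips : ip if !contains(local.previous_ips, ip)]
}

# Record the full list so the next run can compare against it.
resource "local_file" "ip_state" {
  content  = jsonencode(local.current_ips)
  filename = var.ip_state_file
}
```

`local.new_ips` would then replace the full splat passed to `templatefile`. An external wrapper script around `terraform output` may still be more robust, since the side file must survive between runs.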

Related

Terraform: What is the simplest way to Incrementally add servers to a deployment?

I am a newbie with Terraform, so don't laugh :) I want to deploy a number of instances of a server, then add their IPs to a Route53 hosted zone. I will be using Terraform v0.12.24; no chance of 0.14 at the moment.
So far, I have the "easy", spaghetti approach working:
module server: buys and creates a list of servers
module route53: adds Route53 records, parameter = array of IPs
main.tf
module "hostedzone" {
  source     = "./route53"
  ncs_domain = var.ncs_domain
}
module "server" {
  source       = "./server"
  name         = "${var.ncs_hostname}-${var.ncs_id}"
  hcloud_token = var.server_htk
  servers = [
    {
      type     = "cx11",
      location = "fsn1",
    },
    {
      type     = "cx11",
      location = "fsn1",
    }
  ]
}
resource "aws_route53_record" "server1-record" {
  zone_id = module.hostedzone.zone.zone_id
  name    = "${var.ncs_hostname}.${var.ncs_domain}"
  type    = "A"
  ttl     = "300"
  records = module.server.server.*.ipv4_address
}
and the relevant server resource array:
resource "hcloud_server" "apiserver" {
  count       = length(var.servers)                 # Create a server per list entry
  name        = "${var.name}-${count.index}"        # Name server
  image       = var.image                           # Basic image
  server_type = var.servers[count.index].type       # Instance type
  location    = var.servers[count.index].location
}
So if I run terraform apply, I get the server array created. Cool!
Now, I would like to be able to run this module to create and destroy specific servers on demand, like:
initially deploy the platform with one or two servers.
remove one of the initial servers in the array
add new servers
So, how could I use this incrementally, that is, without providing the whole array of servers every time? Like just adding one to the existing list, or removing another.
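One pattern that fits this (a sketch, assuming Terraform 0.12.6+, where resources support for_each; the map keys here are hypothetical names): key servers by name in a map instead of positions in a list. With count, deleting an early list entry shifts every later index and forces Terraform to churn those servers; with for_each, each key has a stable address, so adding or removing one entry touches only that one server:

```hcl
variable "servers" {
  type = map(object({
    type     = string
    location = string
  }))
  default = {
    "api-0" = { type = "cx11", location = "fsn1" }
    "api-1" = { type = "cx11", location = "fsn1" }
  }
}

resource "hcloud_server" "apiserver" {
  # Each key gets a stable address, e.g. hcloud_server.apiserver["api-0"],
  # so removing "api-0" from the map later leaves "api-1" untouched.
  for_each    = var.servers
  name        = "${var.name}-${each.key}"
  image       = var.image
  server_type = each.value.type
  location    = each.value.location
}
```

Incremental changes then become map edits: add a key to create one server, delete a key to destroy just that server, leaving the rest of the plan a no-op.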

How to create Virtual servers in IBM cloud Terraform with for loop?

I have a Virtual server in IBM cloud created using Terraform
resource "ibm_is_instance" "vsi1" {
  name    = "${local.BASENAME}-vsi1"
  vpc     = ibm_is_vpc.vpc.id
  zone    = local.ZONE
  keys    = [data.ibm_is_ssh_key.ssh_key_id.id]
  image   = data.ibm_is_image.ubuntu.id
  profile = "cc1-2x4"
  primary_network_interface {
    subnet          = ibm_is_subnet.subnet1.id
    security_groups = [ibm_is_security_group.sg1.id]
  }
}
How do I create virtual servers with a Terraform for loop: vsi1, vsi2, vsi3, vsi4, vsi5?
For the full code, please refer to the IBM Cloud Terraform getting-started tutorial.
You may not need a for or for_each loop to achieve this. A simple count will do what's required. Once you set count (the number of instances), all you need to do is use count.index in the VSI name.
resource "ibm_is_instance" "vsi" {
  count   = 4
  name    = "${local.BASENAME}-vsi-${count.index}"
  vpc     = ibm_is_vpc.vpc.id
  zone    = local.ZONE
  keys    = [data.ibm_is_ssh_key.ssh_key_id.id]
  image   = data.ibm_is_image.ubuntu.id
  profile = "cc1-2x4"
  primary_network_interface {
    subnet          = ibm_is_subnet.subnet1.id
    security_groups = [ibm_is_security_group.sg1.id]
  }
}
This will create instances named vsi-0, vsi-1, ...
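If you do want named rather than numbered instances, a for_each variant is also possible (a sketch; the name list is illustrative, and for_each on resources requires Terraform 0.12.6+):

```hcl
resource "ibm_is_instance" "vsi" {
  # One instance per name; addresses become ibm_is_instance.vsi["vsi1"], etc.
  for_each = toset(["vsi1", "vsi2", "vsi3", "vsi4", "vsi5"])
  name     = "${local.BASENAME}-${each.key}"
  vpc      = ibm_is_vpc.vpc.id
  zone     = local.ZONE
  keys     = [data.ibm_is_ssh_key.ssh_key_id.id]
  image    = data.ibm_is_image.ubuntu.id
  profile  = "cc1-2x4"
  primary_network_interface {
    subnet          = ibm_is_subnet.subnet1.id
    security_groups = [ibm_is_security_group.sg1.id]
  }
}
```

Unlike count, removing one name from the set later destroys only that instance without renumbering the others.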

How can I overcome "Error: Cycle" on digitalocean droplet

I am sure this is a simple problem, but I don't know how to interpret it at the moment.
I use 3 droplets (called rs) and have a template file which configures each.
[..]
data "template_file" "rsdata" {
  template = file("files/rsdata.tmpl")
  count    = var.count_rs_nodes
  vars = {
    docker_version   = var.docker_version
    private_ip_rs    = digitalocean_droplet.rs[count.index].ipv4_address_private
    private_ip_mysql = digitalocean_droplet.mysql.ipv4_address_private
  }
}
resource "digitalocean_droplet" "rs" {
  count              = var.count_rs_nodes
  image              = var.image
  name               = "${var.prefix}-rs-${count.index}"
  region             = var.region
  size               = var.rs_size
  private_networking = true
  user_data          = data.template_file.rsdata.rendered
  ssh_keys           = var.ssh_keys
  depends_on         = ["digitalocean_droplet.mysql"]
}
[..]
When I do a terraform apply I get:
Error: Cycle: digitalocean_droplet.rs, data.template_file.rsdata
Note this is terraform 0.12
What am I doing wrong and how can I overcome this please?
This error is returned because the data.template_file.rsdata resource refers to the digitalocean_droplet.rs resource and vice-versa. That creates an impossible situation for Terraform: there is no ordering Terraform could use to process these that would ensure that all of the necessary data is available at each step.
The key problem is that the private IPv4 address for a droplet is allocated as part of creating it, but the user_data is used as part of that creation and so it cannot include the IPv4 address that is yet to be allocated.
The most straightforward way to deal with this would be to not include the droplet's IP address as part of its user_data and instead arrange for whatever software is processing that user_data to fetch the IP address of the host directly from the network interface at runtime. The kernel running in that droplet will know what the IP address is, so you can retrieve it from there in principle.
If for some reason including the IP addresses in the user_data is unavoidable (this can occur, for example, if there are a set of virtual machines that all need to be aware of each other) then a more complex alternative is to separate the allocation of the IP addresses from the creation of the instances. DigitalOcean doesn't have a mechanism to create private network interfaces separately from the droplets they belong to, so in this case it would be necessary to use public IP addresses via digitalocean_floating_ip, which may not be appropriate for all situations:
resource "digitalocean_floating_ip" "rs" {
  count  = var.count_rs_nodes
  region = var.region
}
resource "digitalocean_droplet" "rs" {
  count              = var.count_rs_nodes
  image              = var.image
  name               = "${var.prefix}-rs-${count.index}"
  region             = var.region
  size               = var.rs_size
  private_networking = true
  ssh_keys           = var.ssh_keys
  user_data = templatefile("${path.module}/files/rsdata.tmpl", {
    docker_version   = var.docker_version
    private_ip_rs    = digitalocean_floating_ip.rs[count.index].ip_address
    private_ip_mysql = digitalocean_droplet.mysql.ipv4_address_private
  })
}
resource "digitalocean_floating_ip_assignment" "rs" {
  count      = var.count_rs_nodes
  ip_address = digitalocean_floating_ip.rs[count.index].ip_address
  droplet_id = digitalocean_droplet.rs[count.index].id
}
Because the "floating IP assignment" is created as a separate step after the droplet is launched, there will be some delay before the floating IP is associated with the instance and so whatever software is relying on that IP address must be resilient to running before the association is created.
Note that I also switched from using data "template_file" to the templatefile function because the data source is offered only for backward-compatibility with Terraform 0.11 configurations; the built-in function is the recommended way to render external template files in Terraform 0.12, and avoids the need for a separate configuration block.

Terraform v11.13: Attach Multiple Data Templates To Single Resource

I'm running Terraform v11.13 with the AWS provider. Is it possible to attach multiple data template files to a single resource?
An example of this is where you have a single aws_iam_policy resource, but you want it to create multiple IAM policies from different data template files.
It works when it is just a single data template file with a count index. It also works when the file is static, as in not a template file.
Here is the code example
variable "policy_list" {
  type    = "list"
  default = ["s3", "emr", "lambda"]
}

resource "aws_iam_policy" "many_policies" {
  count  = "${length(var.policy_list)}"
  name   = "Policy_${var.policy_list[count.index]}_${var.environment}"
  policy = "${file("${path.module}/files/policies/${var.environment}/${var.policy_list[count.index]}.json")}"
}

resource "aws_iam_role_policy_attachment" "many_policies_attachment" {
  count      = "${length(var.policy_list)}"
  role       = "${aws_iam_role.iam_roles.*.name[index(var.role_list, "MyRole")]}"
  policy_arn = "${aws_iam_policy.many_policies.*.arn[count.index]}"
}
But what fails is
resource "aws_iam_policy" "many_policies" {
  count  = "${length(var.policy_list)}"
  name   = "Policy_${var.policy_list[count.index]}_${var.environment}"
  policy = "${data.template_file.${var.policy_list[count.index]}_policy_file.*.rendered[count.index]}"
}
With an error message similar to
parse error expected "}" but found invalid sequence "$"
Any ideas on how this can be achieved?
Based on the error messages and the suggestion by Matt Schuchard, it's fair to conclude that data.template_file does not support interpolation inside the data source's name in v11.13.
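A workaround that stays within v0.11's limits (a sketch; the template path mirrors the JSON path from the question and is an assumption): rather than interpolating the data source's name, declare one counted data source over the same list and index its rendered output, the same count pattern that already works for the file() version:

```hcl
data "template_file" "policy_files" {
  count    = "${length(var.policy_list)}"
  template = "${file("${path.module}/files/policies/${var.environment}/${var.policy_list[count.index]}.json")}"
}

resource "aws_iam_policy" "many_policies" {
  count  = "${length(var.policy_list)}"
  name   = "Policy_${var.policy_list[count.index]}_${var.environment}"
  policy = "${data.template_file.policy_files.*.rendered[count.index]}"
}
```

The resource and data source share the same count and index, so each policy picks up the rendered template for its own list entry without needing a dynamic resource name.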

How to prevent data loss in persistent volume when server is recreated

I am working with Terraform and OpenStack, using a persistent volume to store data. When recreating only the server and reattaching the same volume, this data is sometimes corrupted or lost. How do I prevent this?
I taint the server and then run terraform apply to recreate it. This detaches the volume, destroys the server, recreates it, and attaches the volume back. However, sometimes the data in the volume is lost or corrupted. This volume contains PostgreSQL DB files.
I tried to use terraform destroy, but that would destroy the volume as well.
This is the module
data "template_file" "init-config" {
  template = "${file("modules/postgres-server/init-config.tpl")}"
  vars {
    instance_name = "${var.instance_name}"
    tenant_name   = "${var.tenant_name}"
  }
}

# Define instance properties.
# You should provide the variables in main.tf
resource "openstack_compute_instance_v2" "server" {
  name                = "${var.instance_name}"
  image_name          = "${var.image_name}"
  flavor_name         = "${var.flavor_name}"
  key_pair            = "${var.key_name}"
  security_groups     = ["default", "${var.secgroup_name}"]
  user_data           = "${data.template_file.init-config.rendered}"
  stop_before_destroy = "true"
  network {
    name = "${var.tenant_name}-net"
  }
}

# Define a floating IP resource
resource "openstack_networking_floatingip_v2" "server_float" {
  pool = "net-iaas-external-dev"
}

# Associate the instance and floating IP resources
resource "openstack_compute_floatingip_associate_v2" "server_float_assoc" {
  floating_ip = "${openstack_networking_floatingip_v2.server_float.address}"
  instance_id = "${openstack_compute_instance_v2.server.id}"
}

# Create the persistent volume
resource "openstack_blockstorage_volume_v2" "pgvol" {
  name        = "postgreSQL-DATA-${var.instance_name}"
  description = "Data Vol for ${var.instance_name}"
  size        = 50
}

# Attach the persistent volume to the instance
resource "openstack_compute_volume_attach_v2" "pgvol_attach" {
  instance_id = "${openstack_compute_instance_v2.server.id}"
  volume_id   = "${openstack_blockstorage_volume_v2.pgvol.id}"
  device      = "/dev/vdc"
}
This is the main.tf
module "postgre-server" {
  source        = "./modules/postgres-server"
  instance_name = "INST_NAME"
  image_name    = "centos7"
  flavor_name   = "r1.medium"
  key_name      = "${module.keypair.output_key_name}"
  secgroup_name = "${module.secgroup.output_secgroup_name}"
  tenant_name   = "${var.tenant_name}"
}
Expected result is that volume data is not lost and when I attached back to the newly re-created server, the filesystems in that volume and all the data is there.
Thanks. Appreciate any insights on how to do this.
A quick way is to split the code into two stacks: one stack (module #1) manages only the storage, and the other (module #2) manages the rest.
After the split, you can change module #2 at any time, whether applying or destroying.
Between the two stacks, you can reference the storage resource in several ways.
Way one:
Reference it via the terraform_remote_state data source, for which you need to set an output as below:
output "persistent_storage_id" {
  value = "${openstack_blockstorage_volume_v2.pgvol.id}"
}
then use the following code in module #2 to reference the persistent storage:
data "terraform_remote_state" "persistent_storage" {
  backend = "xxx"
  config {
    name = "hashicorp/persistent-storage"
  }
}
so module #2 can reference it as "${data.terraform_remote_state.persistent_storage.persistent_storage_id}"
Way two:
Reference the persistent storage volume id directly with the data source openstack_blockstorage_availability_zones_v3.
Way three:
Way three is similar to way one. You need to output the value "${openstack_blockstorage_volume_v2.pgvol.id}" in module #1:
output "persistent_storage_id" {
  value = "${openstack_blockstorage_volume_v2.pgvol.id}"
}
call module #1:
module "persistent_storage" {
  ...
}
then reference it as "${module.persistent_storage.persistent_storage_id}"
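Putting way three together, the server stack would then attach the long-lived volume by id instead of declaring the volume itself (a sketch; the module source path ./modules/persistent-storage is a hypothetical location for module #1):

```hcl
module "persistent_storage" {
  source = "./modules/persistent-storage"
}

# The volume now outlives the server: tainting and recreating the server
# recreates only this attachment, never the volume holding the DB files.
resource "openstack_compute_volume_attach_v2" "pgvol_attach" {
  instance_id = "${openstack_compute_instance_v2.server.id}"
  volume_id   = "${module.persistent_storage.persistent_storage_id}"
  device      = "/dev/vdc"
}
```

Since the volume lives in its own stack, terraform destroy on the server stack can no longer take the data with it.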
This works when I unmount the filesystems in the volume prior to using Terraform to recreate the instance. I thought stop_before_destroy = "true" would have gracefully stopped the instance and detached the volume, but it didn't work in my case :)
