How can I overcome "Error: Cycle" on digitalocean droplet - terraform

I am sure this is a simple problem, but I don't know how to interpret it at the moment.
I use 3 droplets (called rs) and have a template file which configures each.
[..]
data "template_file" "rsdata" {
template = file("files/rsdata.tmpl")
count = var.count_rs_nodes
vars = {
docker_version = var.docker_version
private_ip_rs = digitalocean_droplet.rs[count.index].ipv4_address_private
private_ip_mysql = digitalocean_droplet.mysql.ipv4_address_private
}
}
resource "digitalocean_droplet" "rs" {
count = var.count_rs_nodes
image = var.image
name = "${var.prefix}-rs-${count.index}"
region = var.region
size = var.rs_size
private_networking = true
user_data = data.template_file.rsdata.rendered
ssh_keys = var.ssh_keys
depends_on = ["digitalocean_droplet.mysql"]
}
[..]
When I do a terraform apply I get:
Error: Cycle: digitalocean_droplet.rs, data.template_file.rsdata
Note: this is Terraform 0.12.
What am I doing wrong and how can I overcome this please?

This error is returned because the data.template_file.rsdata resource refers to the digitalocean_droplet.rs resource and vice-versa. That creates an impossible situation for Terraform: there is no ordering Terraform could use to process these that would ensure that all of the necessary data is available at each step.
The key problem is that the private IPv4 address for a droplet is allocated as part of creating it, but the user_data is used as part of that creation and so it cannot include the IPv4 address that is yet to be allocated.
The most straightforward way to deal with this would be to not include the droplet's IP address as part of its user_data and instead arrange for whatever software is processing that user_data to fetch the IP address of the host directly from the network interface at runtime. The kernel running in that droplet will know what the IP address is, so you can retrieve it from there in principle.
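For example, a minimal sketch of that approach might look like the following. The metadata path is an assumption based on DigitalOcean's droplet metadata service, and the script body is only a placeholder for whatever your boot configuration actually does:
resource "digitalocean_droplet" "rs" {
  # ...same arguments as in the question, but without any reference
  # to data.template_file.rsdata...

  # Sketch: the boot script discovers its own private IP at runtime
  # instead of having Terraform interpolate it into user_data.
  user_data = <<-EOT
    #!/bin/bash
    # Path assumed from DigitalOcean's droplet metadata service.
    private_ip_rs="$(curl -s http://169.254.169.254/metadata/v1/interfaces/private/0/ipv4/address)"
    # ...write $private_ip_rs into the application's configuration here...
  EOT
}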
If for some reason including the IP addresses in the user_data is unavoidable (this can occur, for example, if there are a set of virtual machines that all need to be aware of each other) then a more complex alternative is to separate the allocation of the IP addresses from the creation of the instances. DigitalOcean doesn't have a mechanism to create private network interfaces separately from the droplets they belong to, so in this case it would be necessary to use public IP addresses via digitalocean_floating_ip, which may not be appropriate for all situations:
resource "digitalocean_floating_ip" "rs" {
count = var.count_rs_nodes
region = var.region
}
resource "digitalocean_droplet" "rs" {
count = var.count_rs_nodes
image = var.image
name = "${var.prefix}-rs-${count.index}"
region = var.region
size = var.rs_size
private_networking = true
ssh_keys = var.ssh_keys
user_data = templatefile("${path.module}/files/rsdata.tmpl", {
docker_version = var.docker_version
private_ip_rs = digitalocean_floating_ip.rs[count.index].ip_address
private_ip_mysql = digitalocean_droplet.mysql.ipv4_address_private
})
}
resource "digitalocean_floating_ip_assignment" "rs" {
count = var.count_rs_nodes
ip_address = digitalocean_floating_ip.rs[count.index].ip_address
droplet_id = digitalocean_droplet.rs[count.index].id
}
Because the "floating IP assignment" is created as a separate step after the droplet is launched, there will be some delay before the floating IP is associated with the instance and so whatever software is relying on that IP address must be resilient to running before the association is created.
Note that I also switched from using data "template_file" to the templatefile function because the data source is offered only for backward-compatibility with Terraform 0.11 configurations; the built-in function is the recommended way to render external template files in Terraform 0.12, and avoids the need for a separate configuration block.

Related

Is there a way to get the Terraform output for only the added resources?

I'm creating an azurerm_windows_virtual_machine resource that has a count which changes over time.
resource "azurerm_windows_virtual_machine" "this" {
# The count variable could change
count = var.virtual_machine_count[terraform.workspace]
name = "vmt${count.index}"
...
tags = local.tags
}
After creating the VM resource, I'm creating a local_file that I would need to use as an Ansible inventory to configure the VMs.
resource "local_file" "ansible_inventory" {
depends_on = [azurerm_windows_virtual_machine.this]
content = templatefile(var.template_file_name,
{
private-ips = azurerm_windows_virtual_machine.this.*.private_ip_address,
ansible-user = var.ansible_user,
ansible-password = var.ansible_password,
ansible-ssh-port = var.ansible_ssh_port,
ansible-connection = var.ansible_connection,
ansible-winrm-transport = var.ansible_winrm_transport,
ansible-winrm-server-cert-validation = var.ansible_winrm_server_cert_validation
}
)
filename = var.template_output_file_name
}
The current behavior of this code is that whenever I add another VM to the VM count, the local file resource adds the IP addresses of all VMs, including the existing ones, to the template file.
What I was hoping to do was to add the IP address of just the newly added VM resource to the template file.
Not sure if this is possible with Terraform only or if I need an external script to keep track of the existing and new IP addresses. Thanks.

Terraform: What is the simplest way to Incrementally add servers to a deployment?

I am a newbie with Terraform, so don't laugh :) I want to deploy a number of instances of a server, then add their IPs to a Route53 hosted zone. I will be using Terraform v0.12.24; there is no chance of 0.14 at the moment.
So far, I have the "easy", spaghetti approach working:
module server: buys and creates a list of servers
module route53: adds route53 records, parameter = array of IPs
main.tf
module "hostedzone" {
source = "./route53"
ncs_domain = var.ncs_domain
}
module "server" {
source = "./server"
name = "${var.ncs_hostname}-${var.ncs_id}"
hcloud_token = var.server_htk
servers = [
{
type = "cx11",
location = "fsn1",
},
{
type = "cx11",
location = "fsn1",
}
]
}
resource "aws_route53_record" "server1-record" {
zone_id = module.hostedzone.zone.zone_id
name = "${var.ncs_hostname}.${var.ncs_domain}"
type = "A"
ttl = "300"
records = module.server.server.*.ipv4_address
}
and the relevant server resource array:
resource "hcloud_server" "apiserver" {
count = length(var.servers)
# Create a server
name = "${var.name}-${count.index}"
# Name server
image = var.image
# Basic image
server_type = var.servers[count.index].type
# Instance type
location = var.servers[count.index].location
}
So if I run terraform apply, I get the server array created. Cool!
Now, I would like to be able to run this module to create and destroy specific servers on demand, like:
initially deploy the platform with one or two servers.
remove one of the initial servers in the array
add new servers
So, how could I use this incrementally, that is, without providing the whole array of servers every time? Like just adding one to the existing list, or removing another.

How to create Virtual servers in IBM cloud Terraform with for loop?

I have a Virtual server in IBM cloud created using Terraform
resource "ibm_is_instance" "vsi1" {
name = "${local.BASENAME}-vsi1"
vpc = ibm_is_vpc.vpc.id
zone = local.ZONE
keys = [data.ibm_is_ssh_key.ssh_key_id.id]
image = data.ibm_is_image.ubuntu.id
profile = "cc1-2x4"
primary_network_interface {
subnet = ibm_is_subnet.subnet1.id
security_groups = [ibm_is_security_group.sg1.id]
}
}
How can I create Virtual Servers (vsi1, vsi2, vsi3, vsi4, vsi5) with Terraform for loops?
For the full code, please refer to the IBM Cloud Terraform getting started tutorial.
You may not require a for or for_each loop to achieve this. A simple count will do. Once you add count (the number of instances), all you need to do is pass count.index in the VSI name.
resource "ibm_is_instance" "vsi" {
count = 4
name = "${local.BASENAME}-vsi-${count.index}"
vpc = ibm_is_vpc.vpc.id
zone = local.ZONE
keys = [data.ibm_is_ssh_key.ssh_key_id.id]
image = data.ibm_is_image.ubuntu.id
profile = "cc1-2x4"
primary_network_interface {
subnet = ibm_is_subnet.subnet1.id
security_groups = [ibm_is_security_group.sg1.id]
}
}
This will create instances named vsi-0, vsi-1, and so on.
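If you need the generated names (or other attributes) elsewhere, a splat expression over the counted resource works; for example, a small sketch (the output name here is arbitrary):
output "vsi_names" {
  # Collects the name of every instance created by the count above.
  value = ibm_is_instance.vsi[*].name
}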

terraform google cloud nat using reserved static ip

We have reserved static (whitelisted) IP addresses that need to be assigned to a CloudNAT on GCP by terraform. The IPs are reserved and registered with a service provider, which takes weeks to get approved and added to their firewalls, so dynamic allocation is not an option.
The main problem for us is that the google_compute_router_nat section requires the nat_ip_allocate_option, but in this case the IP address has already been allocated, so it fails with an error stating exactly that. The only options for allocate are AUTO_ONLY and MANUAL_ONLY, but it seems maybe an EXISTING or RESERVED might be needed, unless I'm missing something obvious.
Here is the failing configuration:
resource "google_compute_address" "static_ip" {
name = "whitelisted-static-ip"
region = "${var.project_region}"
}
resource "google_compute_router_nat" "cluster-nat" {
name = "cluster-stg-nat"
router = "${google_compute_router.router.name}"
region = "${google_compute_router.router.region}"
nat_ip_allocate_option = "MANUAL_ONLY"
nat_ips = ["${google_compute_address.static_ip.self_link}"]
source_subnetwork_ip_ranges_to_nat = "LIST_OF_SUBNETWORKS"
subnetwork {
name = "${google_compute_subnetwork.service.self_link}"
source_ip_ranges_to_nat = ["ALL_IP_RANGES"]
}
}
Results in the following error:
Error: Error creating Address: googleapi: Error 409: The resource 'projects/staging-cluster/regions/us-central1/addresses/whitelisted-static-ip' already exists, alreadyExists
This is because the static IP resource is already reserved in GCP External IP Addresses and registered with the service provider.
Changing the google_compute_address resource to a data source did the trick. I modified it to be:
data "google_compute_address" "static_ip" {
name = "whitelisted-static-ip"
region = "${var.project_region}"
}
Where the name of "whitelisted-static-ip" is what we assigned to the reserved external IP address when we created it. The updated router NAT resource then became:
resource "google_compute_router_nat" "cluster-nat" {
name = "${var.cluster_name}-nat"
router = "${google_compute_router.router.name}"
region = "${google_compute_router.router.region}"
nat_ip_allocate_option = "MANUAL_ONLY"
nat_ips = ["${data.google_compute_address.static_ip.self_link}"]
source_subnetwork_ip_ranges_to_nat = "LIST_OF_SUBNETWORKS"
subnetwork {
name = "${google_compute_subnetwork.service.self_link}"
source_ip_ranges_to_nat = ["PRIMARY_IP_RANGE"]
}
}
which is only a modification to the nat_ips field to point at the data source. A simple two-word change and we're good to go. Excellent!
It looks like the problem is with the google_compute_address resource, not the NAT. You are trying to create a resource that already exists. Instead, you should do one of the following:
If you want Terraform to manage this resource for you, import the resource into Terraform (an example command is sketched below this list); see https://www.terraform.io/docs/import/ and https://www.terraform.io/docs/providers/google/r/compute_address.html#import
If you do not want Terraform to manage the IP address for you, then you can use a data source instead of a resource. This is essentially a read-only lookup, so you can reference the address in Terraform while managing it somewhere else. See https://www.terraform.io/docs/configuration/data-sources.html and https://www.terraform.io/docs/providers/google/d/datasource_compute_address.html
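For the first option, the import command might look roughly like this, reusing the address path from the error message above (check the provider's import documentation linked above for the exact accepted ID formats):
terraform import google_compute_address.static_ip projects/staging-cluster/regions/us-central1/addresses/whitelisted-static-ip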

How to prevent data loss in persistent volume when server is recreated

I am working with Terraform and OpenStack and using a persistent volume to store data. When recreating only the server and reattaching the same volume, this data is sometimes corrupted or lost. How do I prevent this?
I taint the server and then run terraform apply to recreate it. This detaches the volume, destroys the server, recreates it, and attaches the volume back. However, sometimes the data in the volume is lost or corrupted. This volume contains PostgreSQL DB files.
I tried to use terraform destroy, but that will cause the volume to be destroyed as well.
This is the module
data "template_file" "init-config" {
template = "${file("modules/postgres-server/init-config.tpl")}"
vars {
instance_name = "${var.instance_name}"
tenant_name = "${var.tenant_name}"
}
}
# Define instance properties.
# You should provide the variables in main.tf
resource "openstack_compute_instance_v2" "server" {
name = "${var.instance_name}"
image_name = "${var.image_name}"
flavor_name = "${var.flavor_name}"
key_pair = "${var.key_name}"
security_groups = ["default", "${var.secgroup_name}"]
user_data = "${data.template_file.init-config.rendered}"
stop_before_destroy = "true"
network {
name = "${var.tenant_name}-net"
}
}
# Define a floating ip resoruce
resource "openstack_networking_floatingip_v2" "server_float" {
pool = "net-iaas-external-dev"
}
# Associate the instance and floating ip resources
resource "openstack_compute_floatingip_associate_v2" "server_float_assoc" {
floating_ip = "${openstack_networking_floatingip_v2.server_float.address}"
instance_id = "${openstack_compute_instance_v2.server.id}"
}
# Create persistent vol
resource "openstack_blockstorage_volume_v2" "pgvol" {
name = "postgreSQL-DATA-${var.instance_name}"
description = "Data Vol for ${var.instance_name}"
size = 50
}
# Attach the persistent data to the instance
resource "openstack_compute_volume_attach_v2" "pgvol_attach" {
instance_id = "${openstack_compute_instance_v2.server.id}"
volume_id = "${openstack_blockstorage_volume_v2.pgvol.id}"
device = "/dev/vdc"
}
This is the main.tf
module "postgre-server" {
source = "./modules/postgres-server"
instance_name = "INST_NAME"
image_name = "centos7"
flavor_name = "r1.medium"
key_name = "${module.keypair.output_key_name}"
secgroup_name = "${module.secgroup.output_secgroup_name}"
tenant_name = "${var.tenant_name}"
}
The expected result is that volume data is not lost, and when the volume is attached back to the newly re-created server, the filesystems in that volume and all the data are still there.
Thanks. Appreciate any insights on how to do this.
A quick way is to split the code into two stacks: one stack (module #1) manages the storage only, the other (module #2) manages the rest.
After the split, you can change module #2 at any time, whether you apply or destroy.
Between the two stacks, you can reference the storage resource in several ways.
Way one:
Reference it from the terraform_remote_state data source, for which you need to set an output as below:
output "persistant_storage_id" {
value = "${openstack_blockstorage_volume_v2.pgvol.id}"
}
Then use the code below in module #2 to reference the persistent storage:
data "terraform_remote_state" "persistent_storage" {
backend = "xxx"
config {
name = "hashicorp/persistent-storage"
}
}
So module #2 can reference it as "${data.terraform_remote_state.persistent_storage.persistent_storage_id}".
Way two:
Reference the persistent storage volume ID with the data source openstack_blockstorage_availability_zones_v3 directly.
Way three:
Way #3 is similar to #1.
You need to output the value "${openstack_blockstorage_volume_v2.pgvol.id}" in module #1:
output "persistant_storage_id" {
value = "${openstack_blockstorage_volume_v2.pgvol.id}"
}
Call module #1:
module "persistent_storage" {
...
}
Then reference it as "${module.persistent_storage.persistent_storage_id}".
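For example, the attach resource from the question could then consume that module output in module #2 (a sketch, assuming module #1 is instantiated as "persistent_storage" as above):
resource "openstack_compute_volume_attach_v2" "pgvol_attach" {
  instance_id = "${openstack_compute_instance_v2.server.id}"
  # Volume ID now comes from the separately managed storage stack.
  volume_id   = "${module.persistent_storage.persistent_storage_id}"
  device      = "/dev/vdc"
}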
This works when I unmount the filesystems in the volume prior to using Terraform to recreate the instance. I thought stop_before_destroy = "true" would have gracefully stopped the instance and detached the volume, but it didn't work in my case :)
