How to create Virtual Servers in IBM Cloud with Terraform using a for loop? - terraform

I have a Virtual Server in IBM Cloud created using Terraform:
resource "ibm_is_instance" "vsi1" {
name = "${local.BASENAME}-vsi1"
vpc = ibm_is_vpc.vpc.id
zone = local.ZONE
keys = [data.ibm_is_ssh_key.ssh_key_id.id]
image = data.ibm_is_image.ubuntu.id
profile = "cc1-2x4"
primary_network_interface {
subnet = ibm_is_subnet.subnet1.id
security_groups = [ibm_is_security_group.sg1.id]
}
}
How can I create the Virtual Servers vsi1, vsi2, vsi3, vsi4, vsi5 with Terraform for loops?
For the full code, please refer to the IBM Cloud Terraform getting started tutorial.

You may not require a for or for_each loop to achieve what you need. A simple count will do. Once you add count (the number of instances), all you need to do is interpolate count.index into the VSI name.
resource "ibm_is_instance" "vsi" {
count = 4
name = "${local.BASENAME}-vsi-${count.index}"
vpc = ibm_is_vpc.vpc.id
zone = local.ZONE
keys = [data.ibm_is_ssh_key.ssh_key_id.id]
image = data.ibm_is_image.ubuntu.id
profile = "cc1-2x4"
primary_network_interface {
subnet = ibm_is_subnet.subnet1.id
security_groups = [ibm_is_security_group.sg1.id]
}
}
This will create instances named vsi-0, vsi-1, and so on.
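If you specifically want five instances named vsi1 through vsi5, as in the question, a small variation of the same idea (a sketch, not part of the original answer) is to set count = 5 and offset the zero-based index by one:
resource "ibm_is_instance" "vsi" {
  count   = 5
  # count.index is zero-based, so add 1 to get vsi1 ... vsi5
  name    = "${local.BASENAME}-vsi${count.index + 1}"
  vpc     = ibm_is_vpc.vpc.id
  zone    = local.ZONE
  keys    = [data.ibm_is_ssh_key.ssh_key_id.id]
  image   = data.ibm_is_image.ubuntu.id
  profile = "cc1-2x4"

  primary_network_interface {
    subnet          = ibm_is_subnet.subnet1.id
    security_groups = [ibm_is_security_group.sg1.id]
  }
}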

Related

Is there a way to get the Terraform output for only the added resources?

I'm creating an azurerm_windows_virtual_machine resource that has a count which changes over time.
resource "azurerm_windows_virtual_machine" "this" {
# The count variable could change
count = var.virtual_machine_count[terraform.workspace]
name = "vmt${count.index}"
...
tags = local.tags
}
After creating the VM resource, I'm creating a local_file that I would need to use as an Ansible inventory to configure the VMs.
resource "local_file" "ansible_inventory" {
depends_on = [azurerm_windows_virtual_machine.this]
content = templatefile(var.template_file_name,
{
private-ips = azurerm_windows_virtual_machine.this.*.private_ip_address,
ansible-user = var.ansible_user,
ansible-password = var.ansible_password,
ansible-ssh-port = var.ansible_ssh_port,
ansible-connection = var.ansible_connection,
ansible-winrm-transport = var.ansible_winrm_transport,
ansible-winrm-server-cert-validation = var.ansible_winrm_server_cert_validation
}
)
filename = var.template_output_file_name
}
The current behavior of this code is that whenever I increase the VM count, the local_file resource adds the IP addresses of all VMs, including the existing ones, to the template file.
What I was hoping to do was add the IP address of just the newly added VM resource to the template file.
I'm not sure if this is possible with Terraform only or if I need an external script to keep track of the existing and new IP addresses. Thanks.
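The thread does not include an answer, but one possible direction (a sketch only, reusing the variables from the question; the per-fragment filename suffix is my assumption) is to render one inventory fragment per VM with count, so existing fragments stay untouched and only a newly added VM produces a new file:
resource "local_file" "ansible_inventory_per_vm" {
  count = var.virtual_machine_count[terraform.workspace]

  # One fragment per VM: adding a VM only creates one new file,
  # rather than rewriting a single file that lists every IP.
  content = templatefile(var.template_file_name,
    {
      private-ips                          = [azurerm_windows_virtual_machine.this[count.index].private_ip_address],
      ansible-user                         = var.ansible_user,
      ansible-password                     = var.ansible_password,
      ansible-ssh-port                     = var.ansible_ssh_port,
      ansible-connection                   = var.ansible_connection,
      ansible-winrm-transport              = var.ansible_winrm_transport,
      ansible-winrm-server-cert-validation = var.ansible_winrm_server_cert_validation
    }
  )
  filename = "${var.template_output_file_name}-${count.index}"
}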

Terraform: What is the simplest way to incrementally add servers to a deployment?

I am a newbie with Terraform, so don't laugh :) I want to deploy a number of instances of a server, then add their IPs to a Route53 hosted zone. I will be using Terraform v0.12.24; there is no chance of 0.14 at the moment.
So far, I have the "easy", spaghetti approach working:
module server: buys and creates a list of servers
module route53: adds route53 records, parameter = array of IPs
main.tf
module "hostedzone" {
source = "./route53"
ncs_domain = var.ncs_domain
}
module "server" {
source = "./server"
name = "${var.ncs_hostname}-${var.ncs_id}"
hcloud_token = var.server_htk
servers = [
{
type = "cx11",
location = "fsn1",
},
{
type = "cx11",
location = "fsn1",
}
]
}
resource "aws_route53_record" "server1-record" {
zone_id = module.hostedzone.zone.zone_id
name = "${var.ncs_hostname}.${var.ncs_domain}"
type = "A"
ttl = "300"
records = module.server.server.*.ipv4_address
}
and the relevant server resource array:
resource "hcloud_server" "apiserver" {
count = length(var.servers)
# Create a server
name = "${var.name}-${count.index}"
# Name server
image = var.image
# Basic image
server_type = var.servers[count.index].type
# Instance type
location = var.servers[count.index].location
}
So if I run terraform apply, I get the server array created. Cool!
Now, I would like to be able to run this module to create and destroy specific servers on demand, like:
initially deploy the platform with one or two servers.
remove one of the initial servers in the array
add new servers
So, how could I use this incrementally, that is, without providing the whole array of servers every time? Like just adding one to the existing list, or removing another.
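The thread does not include an accepted answer, but a common pattern for this (a sketch, assuming the servers variable is reshaped from a list into a map keyed by a stable name) is to use for_each instead of count, so adding or removing one entry only touches that one server and does not shift the indexes of the others. for_each on resources is available from Terraform 0.12.6, so it works on v0.12.24:
variable "servers" {
  type = map(object({
    type     = string
    location = string
  }))
  # e.g. { "api-a" = { type = "cx11", location = "fsn1" } }
}

resource "hcloud_server" "apiserver" {
  for_each    = var.servers
  name        = "${var.name}-${each.key}"   # Stable name per map key
  image       = var.image
  server_type = each.value.type
  location    = each.value.location
}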

Attach Leaf interface to EPG on Cisco ACI with Terraform

I'm trying to create an EPG on Cisco ACI using Terraform. The EPG is created, but the Leaf's interface isn't attached.
The Terraform syntax to attach the Leaf interface is:
resource "aci_application_epg" "VLAN-616-EPG" {
...
relation_fv_rs_path_att = ["topology/pod-1/paths-103/pathep-[eth1/1]"]
...
}
It works when I do it manually through the ACI web interface or the REST API.
I don't believe this has been implemented yet. If you look in the code for the provider, there is no test for that attribute, and I find only this line in the examples for the EPGs:
  relation_fv_rs_path_att = ["testpathatt"]
Both things lead me to believe it's not completed. Also, that particular item requires an encapsulation with VLAN/VXLAN or QinQ, so that would need to be included if this were to work.
Probably the best you could do is either make a direct REST call (aci_rest in the Terraform provider) or use Ansible to create it (I'm investigating this now).
I asked Cisco support and they sent me this solution:
resource "aci_application_epg" "terraform-epg" {
application_profile_dn = "${aci_application_profile.terraform-app.id}"
name = "TerraformEPG1"
}
resource "aci_rest" "epg_path_relation" {
path = "api/node/mo/${aci_application_epg.terraform-epg.id}.json"
class_name = "fvRsPathAtt"
content = {
"encap":"vlan-907"
"tDn":"topology/pod-1/paths-101/pathep-[eth1/1]"
}
}
The solution with the latest provider version is to do this:
data "aci_physical_domain" "physdom" {
name = "phys"
}
resource "aci_application_epg" "on_prem_epg" {
application_profile_dn = aci_application_profile.on_prem_app.id
name = "db"
relation_fv_rs_dom_att = [data.aci_physical_domain.physdom.id]
}
resource "aci_epg_to_domain" "rs_on_prem_epg_to_physdom" {
application_epg_dn = aci_application_epg.on_prem_epg.id
tdn = data.aci_physical_domain.physdom.id
}
resource "aci_epg_to_static_path" "leaf_101_eth1_23" {
application_epg_dn = aci_application_epg.on_prem_epg.id
tdn = "topology/pod-1/paths-101/pathep-[eth1/23]"
encap = "vlan-1100"
}

How can I overcome "Error: Cycle" on digitalocean droplet

I am sure this is a simple problem, but I don't know how to interpret it at the moment.
I use 3 droplets (called rs) and have a template file which configures each.
[..]
data "template_file" "rsdata" {
template = file("files/rsdata.tmpl")
count = var.count_rs_nodes
vars = {
docker_version = var.docker_version
private_ip_rs = digitalocean_droplet.rs[count.index].ipv4_address_private
private_ip_mysql = digitalocean_droplet.mysql.ipv4_address_private
}
}
resource "digitalocean_droplet" "rs" {
count = var.count_rs_nodes
image = var.image
name = "${var.prefix}-rs-${count.index}"
region = var.region
size = var.rs_size
private_networking = true
user_data = data.template_file.rsdata.rendered
ssh_keys = var.ssh_keys
depends_on = ["digitalocean_droplet.mysql"]
}
[..]
When I do a terraform apply I get:
Error: Cycle: digitalocean_droplet.rs, data.template_file.rsdata
Note: this is Terraform 0.12.
What am I doing wrong and how can I overcome this please?
This error is returned because the data.template_file.rsdata resource refers to the digitalocean_droplet.rs resource and vice-versa. That creates an impossible situation for Terraform: there is no ordering Terraform could use to process these that would ensure that all of the necessary data is available at each step.
The key problem is that the private IPv4 address for a droplet is allocated as part of creating it, but the user_data is used as part of that creation and so it cannot include the IPv4 address that is yet to be allocated.
The most straightforward way to deal with this would be to not include the droplet's IP address as part of its user_data and instead arrange for whatever software is processing that user_data to fetch the IP address of the host directly from the network interface at runtime. The kernel running in that droplet will know what the IP address is, so you can retrieve it from there in principle.
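As an illustration of that suggestion (a sketch only; the bootstrap script is hypothetical, not from the answer, and it assumes the droplet can query the DigitalOcean metadata service at 169.254.169.254, so verify the exact metadata path against DigitalOcean's documentation), the IP lookup can be moved into the user_data itself so Terraform never has to interpolate the droplet's own address:
resource "digitalocean_droplet" "rs" {
  count              = var.count_rs_nodes
  image              = var.image
  name               = "${var.prefix}-rs-${count.index}"
  region             = var.region
  size               = var.rs_size
  private_networking = true
  ssh_keys           = var.ssh_keys

  # No droplet IPs are interpolated here, so the dependency cycle disappears;
  # the script discovers its own private IP at boot time instead.
  user_data = <<-EOT
    #!/bin/bash
    PRIVATE_IP="$(curl -s http://169.254.169.254/metadata/v1/interfaces/private/0/ipv4/address)"
    echo "My private IP is $${PRIVATE_IP}" >> /var/log/rs-bootstrap.log
  EOT
}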
If for some reason including the IP addresses in the user_data is unavoidable (this can occur, for example, if there are a set of virtual machines that all need to be aware of each other) then a more complex alternative is to separate the allocation of the IP addresses from the creation of the instances. DigitalOcean doesn't have a mechanism to create private network interfaces separately from the droplets they belong to, so in this case it would be necessary to use public IP addresses via digitalocean_floating_ip, which may not be appropriate for all situations:
resource "digitalocean_floating_ip" "rs" {
count = var.count_rs_nodes
region = var.region
}
resource "digitalocean_droplet" "rs" {
count = var.count_rs_nodes
image = var.image
name = "${var.prefix}-rs-${count.index}"
region = var.region
size = var.rs_size
private_networking = true
ssh_keys = var.ssh_keys
user_data = templatefile("${path.module}/files/rsdata.tmpl", {
docker_version = var.docker_version
private_ip_rs = digitalocean_floating_ip.rs[count.index].ip_address
private_ip_mysql = digitalocean_droplet.mysql.ipv4_address_private
})
}
resource "digitalocean_floating_ip_assignment" "rs" {
count = var.count_rs_nodes
ip_address = digitalocean_floating_ip.rs[count.index].ip_address
droplet_id = digitalocean_droplet.rs[count.index].id
}
Because the "floating IP assignment" is created as a separate step after the droplet is launched, there will be some delay before the floating IP is associated with the instance and so whatever software is relying on that IP address must be resilient to running before the association is created.
Note that I also switched from using data "template_file" to the templatefile function because the data source is offered only for backward-compatibility with Terraform 0.11 configurations; the built-in function is the recommended way to render external template files in Terraform 0.12, and avoids the need for a separate configuration block.

Using Terraform to create an aggregate Ethernet interface on Palo Alto?

I've been trying to get Terraform to create a new AE interface with no luck.
My .tf files are very basic, working with a factory-reset PA-3020 that only has the user, password, and IP preconfigured.
It's connecting correctly, as I've been able to create/modify other values such as a management profile.
Has anyone successfully been able to create an aggregate group on Palo Alto using Terraform? If so, how was that done?
provider "panos" {
hostname = "${var.pa-mgt-ip}"
username = "${var.pa-username}"
password = "${var.pa-password}"
}
resource "panos_ethernet_interface" "ae_int1" {
name = "ae1"
vsys = "vsys1"
mode = "layer3"
comment = "AE interface from TF"
}
resource "panos_ethernet_interface" "phy_int1" {
name = "ethernet1/3"
vsys = "vsys1"
mode = "aggregate-group"
aggregate_group = "${panos_ethernet_interface.ae_int1.name}"
comment = "AE1 physical interface from TF"
}
resource "panos_ethernet_interface" "phy_int2" {
name = "ethernet1/4"
vsys = "vsys1"
mode = "aggregate-group"
aggregate_group = "${panos_ethernet_interface.ae_int1.name}"
comment = "AE1 physical interface from TF"
}
The error is "ae1 'ae1' is not a valid reference" and the interface is not getting created. If I manually create the ae1 interface in the UI and set the group to ae1 for the physical interfaces in the TF file, they fail with the error "aggregate-group is invalid".
Does panos not currently support creating AE interfaces? I couldn't find any issues on GitHub related to creating interfaces.
