Attach Leaf interface to EPG on Cisco ACI with Terraform

I'm trying to create an EPG on Cisco ACI using Terraform. The EPG is created, but the leaf interface isn't attached.
The Terraform syntax I use to attach the leaf interface is:
resource "aci_application_epg" "VLAN-616-EPG" {
...
relation_fv_rs_path_att = ["topology/pod-1/paths-103/pathep-[eth1/1]"]
...
}
It works when I do it manually through the ACI web interface or the REST API.

I don't believe this has been implemented yet. If you look at the provider's code, there is no test for that attribute, and I found the following line in the provider's EPG examples. Both things lead me to believe it isn't complete. Also, that particular attachment requires an encapsulation (VLAN, VXLAN, or QinQ), so that would need to be included for this to work:
relation_fv_rs_path_att = ["testpathatt"]
Probably the best you can do is either make a direct REST call (aci_rest in the Terraform provider) or use Ansible to create it (I'm investigating this now).

I asked Cisco support and they sent me this solution:
resource "aci_application_epg" "terraform-epg" {
application_profile_dn = "${aci_application_profile.terraform-app.id}"
name = "TerraformEPG1"
}
resource "aci_rest" "epg_path_relation" {
path = "api/node/mo/${aci_application_epg.terraform-epg.id}.json"
class_name = "fvRsPathAtt"
content = {
"encap":"vlan-907"
"tDn":"topology/pod-1/paths-101/pathep-[eth1/1]"
}
}

With the latest provider version, the solution is to do this:
data "aci_physical_domain" "physdom" {
name = "phys"
}
resource "aci_application_epg" "on_prem_epg" {
application_profile_dn = aci_application_profile.on_prem_app.id
name = "db"
relation_fv_rs_dom_att = [data.aci_physical_domain.physdom.id]
}
resource "aci_epg_to_domain" "rs_on_prem_epg_to_physdom" {
application_epg_dn = aci_application_epg.on_prem_epg.id
tdn = data.aci_physical_domain.physdom.id
}
resource "aci_epg_to_static_path" "leaf_101_eth1_23" {
application_epg_dn = aci_application_epg.on_prem_epg.id
tdn = "topology/pod-1/paths-101/pathep-[eth1/23]"
encap = "vlan-1100"
}

Related

How to deploy a Kind: V2 connection using Terraform?

I'm trying to deploy an Azure Queues API connection with Kind: V2, because I need the runtime URL, which is only available for Kind: V2 connections. Right now it gets deployed as V1.
resource "azurerm_api_connection" "azurequeuesconnect" {
name = "azurequeues"
resource_group_name = data.azurerm_resource_group.resource_group.name
managed_api_id = data.azurerm_managed_api.azurequeuesmp.id
display_name = "azurequeues"
parameter_values = {
"storageaccount" = data.azurerm_storage_account.blobStorageAccount.name
"sharedkey" = data.azurerm_storage_account.blobStorageAccount.primary_access_key
}
tags = {
"environment-id" = "testtag"
}
}
As far as I know, this is currently not possible with azurerm_api_connection. See the GitHub issue.
I had a similar problem, and to work around it I used an ARM template and the azurerm_resource_group_template_deployment Terraform resource.
Here is a reference:
https://github.com/microsoft/AzureTRE/blob/main/templates/shared_services/airlock_notifier/terraform/airlock_notifier.tf#L58
This can be done by deploying an ARM template through Terraform's template deployment resource, as sketched below.
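A minimal sketch of that approach, reusing the data sources from the question; the deployment name is a placeholder, and the ARM property names (kind, parameterValues, apiVersion) are assumptions that should be verified against the linked airlock_notifier.tf example:
resource "azurerm_resource_group_template_deployment" "queues_v2" {
  name                = "azurequeues-v2" # hypothetical deployment name
  resource_group_name = data.azurerm_resource_group.resource_group.name
  deployment_mode     = "Incremental"

  template_content = jsonencode({
    "$schema"      = "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#"
    contentVersion = "1.0.0.0"
    resources = [{
      type       = "Microsoft.Web/connections"
      apiVersion = "2016-06-01"
      name       = "azurequeues"
      location   = data.azurerm_resource_group.resource_group.location
      kind       = "V2" # the part azurerm_api_connection cannot set
      properties = {
        displayName = "azurequeues"
        api = {
          id = data.azurerm_managed_api.azurequeuesmp.id
        }
        parameterValues = {
          storageaccount = data.azurerm_storage_account.blobStorageAccount.name
          sharedkey      = data.azurerm_storage_account.blobStorageAccount.primary_access_key
        }
      }
    }]
  })
}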

How to create virtual servers in IBM Cloud with a Terraform for loop?

I have a virtual server in IBM Cloud created using Terraform:
resource "ibm_is_instance" "vsi1" {
name = "${local.BASENAME}-vsi1"
vpc = ibm_is_vpc.vpc.id
zone = local.ZONE
keys = [data.ibm_is_ssh_key.ssh_key_id.id]
image = data.ibm_is_image.ubuntu.id
profile = "cc1-2x4"
primary_network_interface {
subnet = ibm_is_subnet.subnet1.id
security_groups = [ibm_is_security_group.sg1.id]
}
}
How can I create virtual servers vsi1, vsi2, vsi3, vsi4, vsi5 with Terraform for loops?
For the full code, please refer to the IBM Cloud Terraform getting-started tutorial.
You may not need a for or for_each loop to achieve this. A simple count will do. Once you set count (the number of instances), all you need to do is use count.index in the VSI name.
resource "ibm_is_instance" "vsi" {
count = 4
name = "${local.BASENAME}-vsi-${count.index}"
vpc = ibm_is_vpc.vpc.id
zone = local.ZONE
keys = [data.ibm_is_ssh_key.ssh_key_id.id]
image = data.ibm_is_image.ubuntu.id
profile = "cc1-2x4"
primary_network_interface {
subnet = ibm_is_subnet.subnet1.id
security_groups = [ibm_is_security_group.sg1.id]
}
}
This will create instances with names vsi-0, vsi-1, and so on.
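If you specifically want the names vsi1 through vsi5 rather than zero-based suffixes, a for_each over a set of names is an alternative; a minimal sketch reusing the same arguments as above:
resource "ibm_is_instance" "vsi" {
  for_each = toset(["vsi1", "vsi2", "vsi3", "vsi4", "vsi5"])

  name    = "${local.BASENAME}-${each.key}"
  vpc     = ibm_is_vpc.vpc.id
  zone    = local.ZONE
  keys    = [data.ibm_is_ssh_key.ssh_key_id.id]
  image   = data.ibm_is_image.ubuntu.id
  profile = "cc1-2x4"

  primary_network_interface {
    subnet          = ibm_is_subnet.subnet1.id
    security_groups = [ibm_is_security_group.sg1.id]
  }
}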

Using Terraform to create an aggregate Ethernet interface on Palo Alto?

I've been trying to get Terraform to create a new AE interface, with no luck.
My .tf files are very basic, working against a factory-reset PA-3020 that only has the user, password, and IP preconfigured.
It's connecting correctly, as I've been able to create and modify other values such as a management profile.
Has anyone successfully been able to create an aggregate group on Palo Alto using Terraform? If so, how was that done?
provider "panos" {
hostname = "${var.pa-mgt-ip}"
username = "${var.pa-username}"
password = "${var.pa-password}"
}
resource "panos_ethernet_interface" "ae_int1" {
name = "ae1"
vsys = "vsys1"
mode = "layer3"
comment = "AE interface from TF"
}
resource "panos_ethernet_interface" "phy_int1" {
name = "ethernet1/3"
vsys = "vsys1"
mode = "aggregate-group"
aggregate_group = "${panos_ethernet_interface.ae_int1.name}"
comment = "AE1 physical interface from TF"
}
resource "panos_ethernet_interface" "phy_int2" {
name = "ethernet1/4"
vsys = "vsys1"
mode = "aggregate-group"
aggregate_group = "${panos_ethernet_interface.ae_int1.name}"
comment = "AE1 physical interface from TF"
}
The error is ae1 'ae1' is not a valid reference, and the interface is not created. If I manually create the ae1 interface in the UI and set the group to ae1 for the physical interfaces in the .tf file, they fail with the error aggregate-group is invalid.
Does panos not currently support creating AE interfaces? I couldn't find any GitHub issues related to creating interfaces.
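For what it's worth, later versions of the panos provider expose a dedicated panos_aggregate_interface resource for the AE parent, with panos_ethernet_interface reserved for the physical members. A minimal sketch, assuming a provider version that includes that resource (check the provider docs for your version):
# Assumes panos_aggregate_interface is available in your provider version.
resource "panos_aggregate_interface" "ae_int1" {
  name    = "ae1"
  vsys    = "vsys1"
  mode    = "layer3"
  comment = "AE interface from TF"
}

resource "panos_ethernet_interface" "phy_int1" {
  name            = "ethernet1/3"
  vsys            = "vsys1"
  mode            = "aggregate-group"
  aggregate_group = panos_aggregate_interface.ae_int1.name
  comment         = "AE1 physical interface from TF"
}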

How to prevent data loss in a persistent volume when the server is recreated

I am working with Terraform and OpenStack, using a persistent volume to store data. When recreating only the server and reattaching the same volume, the data is sometimes corrupted or lost. How do I prevent this?
I taint the server and then run terraform apply to recreate it. This detaches the volume, destroys the server, recreates it, and reattaches the volume. However, sometimes the data in the volume is lost or corrupted. The volume contains PostgreSQL DB files.
I tried terraform destroy, but that would destroy the volume as well.
This is the module:
data "template_file" "init-config" {
template = "${file("modules/postgres-server/init-config.tpl")}"
vars {
instance_name = "${var.instance_name}"
tenant_name = "${var.tenant_name}"
}
}
# Define instance properties.
# You should provide the variables in main.tf
resource "openstack_compute_instance_v2" "server" {
name = "${var.instance_name}"
image_name = "${var.image_name}"
flavor_name = "${var.flavor_name}"
key_pair = "${var.key_name}"
security_groups = ["default", "${var.secgroup_name}"]
user_data = "${data.template_file.init-config.rendered}"
stop_before_destroy = "true"
network {
name = "${var.tenant_name}-net"
}
}
# Define a floating ip resoruce
resource "openstack_networking_floatingip_v2" "server_float" {
pool = "net-iaas-external-dev"
}
# Associate the instance and floating ip resources
resource "openstack_compute_floatingip_associate_v2" "server_float_assoc" {
floating_ip = "${openstack_networking_floatingip_v2.server_float.address}"
instance_id = "${openstack_compute_instance_v2.server.id}"
}
# Create persistent vol
resource "openstack_blockstorage_volume_v2" "pgvol" {
name = "postgreSQL-DATA-${var.instance_name}"
description = "Data Vol for ${var.instance_name}"
size = 50
}
# Attach the persistent data to the instance
resource "openstack_compute_volume_attach_v2" "pgvol_attach" {
instance_id = "${openstack_compute_instance_v2.server.id}"
volume_id = "${openstack_blockstorage_volume_v2.pgvol.id}"
device = "/dev/vdc"
}
This is the main.tf
module "postgre-server" {
source = "./modules/postgres-server"
instance_name = "INST_NAME"
image_name = "centos7"
flavor_name = "r1.medium"
key_name = "${module.keypair.output_key_name}"
secgroup_name = "${module.secgroup.output_secgroup_name}"
tenant_name = "${var.tenant_name}"
}
The expected result is that the volume data is not lost, and that when the volume is attached back to the newly recreated server, the filesystems on it and all the data are still there.
Thanks. I appreciate any insights on how to do this.
A quick way is to split the code into two stacks: one stack (module #1) manages the storage only, and the other (module #2) manages the rest.
After the split, you can change module #2 at any time, whether applying or destroying.
Between the two stacks, you can reference the storage resource in several ways.
Way one:
Reference it through the terraform_remote_state data source. In module #1 you need to set an output as below:
output "persistant_storage_id" {
value = "${openstack_blockstorage_volume_v2.pgvol.id}"
}
Then use the code below in module #2 to reference the persistent storage:
data "terraform_remote_state" "persistent_storage" {
backend = "xxx"
config {
name = "hashicorp/persistent-storage"
}
}
So module #2 can reference it as "${data.terraform_remote_state.persistent_storage.persistent_storage_id}".
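In module #2 the attachment then consumes that output; a sketch reusing the attach resource from the original module:
resource "openstack_compute_volume_attach_v2" "pgvol_attach" {
  instance_id = "${openstack_compute_instance_v2.server.id}"
  volume_id   = "${data.terraform_remote_state.persistent_storage.persistent_storage_id}"
  device      = "/dev/vdc"
}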
Way two:
Look up the persistent storage volume directly in module #2 with a data source (for example, openstack_blockstorage_volume_v3 by name), as sketched below.
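A minimal sketch of that lookup, assuming your provider version includes this data source and the name matches what module #1 creates (postgreSQL-DATA-INST_NAME with the main.tf above):
data "openstack_blockstorage_volume_v3" "pgvol" {
  # Must match the name given to the volume in module #1
  name = "postgreSQL-DATA-INST_NAME"
}
The volume id is then available as "${data.openstack_blockstorage_volume_v3.pgvol.id}" for the attach resource.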
Way three:
Way #3 is similar to way #1.
You need to output the value "${openstack_blockstorage_volume_v2.pgvol.id}" in module #1:
output "persistant_storage_id" {
value = "${openstack_blockstorage_volume_v2.pgvol.id}"
}
Then call module #1:
module "persistent_storage" {
  ...
}
and reference it as "${module.persistent_storage.persistent_storage_id}".
This works when I unmount the filesystems on the volume before using Terraform to recreate the instance. I thought stop_before_destroy = "true" would gracefully stop the instance and detach the volume, but it didn't work in my case :)

Terraform doesn't build Triton machine

I've taken my first steps into the world of Terraform, trying to deploy infrastructure on Joyent Triton.
After setup, I wrote my first .tf (well, copied it from the examples) and ran terraform apply. All seems to go well and it doesn't break on errors, but it doesn't actually provision my container. I double-checked in the Triton web GUI and with "triton instance list": nothing there.
Any ideas what's going on here?
provider "triton" {
account = "tralala"
key_id = "my-pub-key"
url = "https://eu-ams-1.api.joyentcloud.com"
}
resource "triton_machine" "test-smartos" {
name = "test-smartos"
package = "g4-highcpu-128M"
image = "842e6fa6-6e9b-11e5-8402-1b490459e334"
tags {
hello = "world"
role = "database"
}
cns {
services = ["web", "frontend"]
}
}
