Terraform doesn't build triton machine

I've taken my first steps into the world of Terraform, and I'm trying to deploy infrastructure on Joyent Triton.
After setup, I wrote my first .tf (well, copied it from the examples) and ran terraform apply. All seems to go well and it doesn't break on errors, but it doesn't actually provision my container. I double-checked in the Triton web GUI and with "triton instance list": nothing there.
Any ideas what's going on here?
provider "triton" {
  account = "tralala"
  key_id  = "my-pub-key"
  url     = "https://eu-ams-1.api.joyentcloud.com"
}

resource "triton_machine" "test-smartos" {
  name    = "test-smartos"
  package = "g4-highcpu-128M"
  image   = "842e6fa6-6e9b-11e5-8402-1b490459e334"

  tags {
    hello = "world"
    role  = "database"
  }

  cns {
    services = ["web", "frontend"]
  }
}

Related

How to deploy a Kind: V2 connection using Terraform?

I'm trying to deploy a Queue API connection with Kind: V2, since I have to get the runtime URL, which is only possible with Kind: V2. Right now it gets deployed as V1.
resource "azurerm_api_connection" "azurequeuesconnect" {
  name                = "azurequeues"
  resource_group_name = data.azurerm_resource_group.resource_group.name
  managed_api_id      = data.azurerm_managed_api.azurequeuesmp.id
  display_name        = "azurequeues"

  parameter_values = {
    "storageaccount" = data.azurerm_storage_account.blobStorageAccount.name
    "sharedkey"      = data.azurerm_storage_account.blobStorageAccount.primary_access_key
  }

  tags = {
    "environment-id" = "testtag"
  }
}
As far as I know, this is currently not possible. See the GitHub issue.
I had a similar problem, and to work around it I used an ARM template together with the azurerm_resource_group_template_deployment Terraform resource.
Here is a reference:
https://github.com/microsoft/AzureTRE/blob/main/templates/shared_services/airlock_notifier/terraform/airlock_notifier.tf#L58
This can be done by using an ARM template with a Terraform template deployment.
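To illustrate that workaround, here is a minimal sketch of wrapping the connection in an embedded ARM template so the kind can be set explicitly. The template body and names below are hypothetical placeholders, not taken from the linked file:

```hcl
# Hypothetical sketch: deploy the API connection through an ARM template
# so that kind "V2" can be set, which the azurerm resource cannot do.
resource "azurerm_resource_group_template_deployment" "queue_connection_v2" {
  name                = "azurequeues-v2-deployment"
  resource_group_name = data.azurerm_resource_group.resource_group.name
  deployment_mode     = "Incremental"

  template_content = jsonencode({
    "$schema"      = "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#"
    contentVersion = "1.0.0.0"
    resources = [{
      type       = "Microsoft.Web/connections"
      apiVersion = "2016-06-01"
      name       = "azurequeues"
      location   = data.azurerm_resource_group.resource_group.location
      kind       = "V2" # the property the provider cannot set
      properties = {
        displayName = "azurequeues"
        api = {
          id = data.azurerm_managed_api.azurequeuesmp.id
        }
      }
    }]
  })
}
```

The connection's parameter values (storage account name and shared key) would also need to be passed into the template's properties for a working deployment.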

Specify proxy for helm repository in terraform for azure china

I am trying to deploy a Helm chart via Terraform to Azure Kubernetes Service in China. The problem is that I cannot pull images from k8s.gcr.io/ingress-nginx. I need to specify a proxy as described in https://github.com/Azure/container-service-for-azure-china/blob/master/aks/README.md#22-container-registry-proxy, but I don't know how to do this via Terraform. In West Europe my resource simply looks like:
resource "helm_release" "nginx_ingress" {
  name       = "ingress-nginx"
  chart      = "ingress-nginx"
  repository = "https://kubernetes.github.io/ingress-nginx"
  namespace  = kubernetes_namespace.nginx_ingress.metadata[0].name

  set {
    name  = "controller.service.annotations.service\\.beta\\.kubernetes\\.io/azure-load-balancer-resource-group"
    value = azurerm_public_ip.nginx_ingress_pip.resource_group_name
  }

  set {
    name  = "controller.service.loadBalancerIP"
    value = azurerm_public_ip.nginx_ingress_pip.ip_address
  }
}
How do I get the proxy settings in there? Any help is greatly appreciated.
AFAIK, the Helm provider for Terraform does not support proxy settings yet.
There is a pull request being discussed in this thread: https://github.com/hashicorp/terraform-provider-helm/issues/552
Until this feature is implemented, you may consider other temporary workarounds, such as making a copy of the chart in your Terraform repo and referencing it from the Helm provider.
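As a sketch of that workaround, a helm_release can point at a chart vendored into the repository instead of a remote registry. The local path below is a hypothetical example, not from the original thread:

```hcl
# Hypothetical sketch: reference a locally vendored copy of the chart
# instead of the public repository, so its images and values can be
# patched by hand (e.g. to point at a mirror reachable from China).
resource "helm_release" "nginx_ingress_local" {
  name      = "ingress-nginx"
  chart     = "${path.module}/charts/ingress-nginx" # vendored chart copy
  namespace = kubernetes_namespace.nginx_ingress.metadata[0].name
}
```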
Turns out I had some problems figuring out how to modify the Helm chart in the correct way, and the solution was not exactly a proxy configuration but to directly use a different repository for the image pull. This works:
resource "helm_release" "nginx_ingress" {
  name       = "ingress-nginx"
  chart      = "ingress-nginx"
  repository = "https://kubernetes.github.io/ingress-nginx"
  namespace  = kubernetes_namespace.nginx_ingress.metadata[0].name

  set {
    name  = "controller.service.annotations.service\\.beta\\.kubernetes\\.io/azure-load-balancer-resource-group"
    value = azurerm_public_ip.nginx_ingress_pip.resource_group_name
  }

  set {
    name  = "controller.service.loadBalancerIP"
    value = azurerm_public_ip.nginx_ingress_pip.ip_address
  }

  set {
    name  = "controller.image.repository"
    value = "k8sgcr.azk8s.cn/ingress-nginx/controller"
  }
}
Thank you anyways for your input!

Terraform: What is the simplest way to Incrementally add servers to a deployment?

I am a newbie with Terraform, so don't laugh :) I want to deploy a number of instances of a server, then add their IPs to a Route53 hosted zone. I will be using Terraform v0.12.24; no chance of 0.14 at the moment.
So far, I have the "easy", spaghetti approach working:
module server: buys and creates a list of servers
module route53: adds Route53 records; parameter = array of IPs
main.tf
module "hostedzone" {
  source     = "./route53"
  ncs_domain = var.ncs_domain
}

module "server" {
  source       = "./server"
  name         = "${var.ncs_hostname}-${var.ncs_id}"
  hcloud_token = var.server_htk

  servers = [
    {
      type     = "cx11",
      location = "fsn1",
    },
    {
      type     = "cx11",
      location = "fsn1",
    }
  ]
}

resource "aws_route53_record" "server1-record" {
  zone_id = module.hostedzone.zone.zone_id
  name    = "${var.ncs_hostname}.${var.ncs_domain}"
  type    = "A"
  ttl     = "300"
  records = module.server.server.*.ipv4_address
}
and the relevant server resource array:
resource "hcloud_server" "apiserver" {
  # Create one server per entry in var.servers
  count = length(var.servers)

  name        = "${var.name}-${count.index}"    # server name
  image       = var.image                       # base image
  server_type = var.servers[count.index].type   # instance type
  location    = var.servers[count.index].location
}
So if I run terraform apply, I get the server array created. Cool!
Now, I would like to be able to run this module to create and destroy specific servers on demand, like:
initially deploy the platform with one or two servers.
remove one of the initial servers from the array
add new servers
So, how could I use this incrementally, that is, without providing the whole array of servers every time? Like just adding one to the existing list, or removing another.
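One common approach (a sketch of my own, not from this thread) is to key the servers with for_each on a map instead of count, which is available in Terraform 0.12.6+. With count, removing an entry from the middle of the list shifts every later index and forces those servers to be destroyed and recreated; with map keys, adding or removing one entry touches only that server:

```hcl
# Hypothetical sketch: servers keyed by name instead of list position.
variable "servers" {
  type = map(object({
    type     = string
    location = string
  }))
  default = {
    "api-0" = { type = "cx11", location = "fsn1" }
    "api-1" = { type = "cx11", location = "fsn1" }
  }
}

resource "hcloud_server" "apiserver" {
  for_each    = var.servers
  name        = "${var.name}-${each.key}"
  image       = var.image
  server_type = each.value.type
  location    = each.value.location
}
```

Incremental changes then become edits to the map: deleting the "api-0" entry destroys only that server, and adding "api-2" creates one new server.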

How to create Virtual servers in IBM cloud Terraform with for loop?

I have a virtual server in IBM Cloud created using Terraform:
resource "ibm_is_instance" "vsi1" {
  name    = "${local.BASENAME}-vsi1"
  vpc     = ibm_is_vpc.vpc.id
  zone    = local.ZONE
  keys    = [data.ibm_is_ssh_key.ssh_key_id.id]
  image   = data.ibm_is_image.ubuntu.id
  profile = "cc1-2x4"

  primary_network_interface {
    subnet          = ibm_is_subnet.subnet1.id
    security_groups = [ibm_is_security_group.sg1.id]
  }
}
How do I create virtual servers with a Terraform for loop, named vsi1, vsi2, vsi3, vsi4, vsi5?
For the full code, please refer to the IBM Cloud Terraform getting started tutorial.
You may not require a for or for_each loop to achieve what you need. A simple count will do. Once you add count (the number of instances), all you need to do is use count.index in the VSI name.
resource "ibm_is_instance" "vsi" {
  count   = 4
  name    = "${local.BASENAME}-vsi-${count.index}"
  vpc     = ibm_is_vpc.vpc.id
  zone    = local.ZONE
  keys    = [data.ibm_is_ssh_key.ssh_key_id.id]
  image   = data.ibm_is_image.ubuntu.id
  profile = "cc1-2x4"

  primary_network_interface {
    subnet          = ibm_is_subnet.subnet1.id
    security_groups = [ibm_is_security_group.sg1.id]
  }
}
This will create instances with names vsi-0, vsi-1, ...
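If the names really need to be vsi1 through vsi5 as in the question, a small adjustment of the answer above (my sketch, not part of the original answer) is to use count = 5 and a 1-based index:

```hcl
# Hypothetical sketch: count.index is 0-based, so add 1 to get vsi1..vsi5.
resource "ibm_is_instance" "vsi" {
  count   = 5
  name    = "${local.BASENAME}-vsi${count.index + 1}"
  vpc     = ibm_is_vpc.vpc.id
  zone    = local.ZONE
  keys    = [data.ibm_is_ssh_key.ssh_key_id.id]
  image   = data.ibm_is_image.ubuntu.id
  profile = "cc1-2x4"

  primary_network_interface {
    subnet          = ibm_is_subnet.subnet1.id
    security_groups = [ibm_is_security_group.sg1.id]
  }
}
```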

Attach Leaf interface to EPG on Cisco ACI with Terraform

I'm trying to create an EPG on Cisco ACI using Terraform. The EPG is created, but the Leaf's interface isn't attached.
The Terraform syntax to attach a Leaf interface is:
resource "aci_application_epg" "VLAN-616-EPG" {
  ...
  relation_fv_rs_path_att = ["topology/pod-1/paths-103/pathep-[eth1/1]"]
  ...
}
It works when I do it manually through the ACI web interface or the REST API.
I don't believe this has been implemented yet. If you look in the code for the provider, there is no test for that attribute, and I found this line in the examples for the EPGs. Both things lead me to believe it's not complete. Also, that particular item requires an encapsulation with VLAN/VXLAN or QinQ, so that would need to be included for this to work.
relation_fv_rs_path_att = ["testpathatt"]
Probably the best you could do is either make a direct REST call (aci_rest in the Terraform provider) or use Ansible to create it (I'm investigating this now).
I asked Cisco support and they sent me this solution:
resource "aci_application_epg" "terraform-epg" {
  application_profile_dn = "${aci_application_profile.terraform-app.id}"
  name                   = "TerraformEPG1"
}

resource "aci_rest" "epg_path_relation" {
  path       = "api/node/mo/${aci_application_epg.terraform-epg.id}.json"
  class_name = "fvRsPathAtt"

  content = {
    "encap" = "vlan-907"
    "tDn"   = "topology/pod-1/paths-101/pathep-[eth1/1]"
  }
}
With the latest provider version, the solution is to do this:
data "aci_physical_domain" "physdom" {
  name = "phys"
}

resource "aci_application_epg" "on_prem_epg" {
  application_profile_dn = aci_application_profile.on_prem_app.id
  name                   = "db"
  relation_fv_rs_dom_att = [data.aci_physical_domain.physdom.id]
}

resource "aci_epg_to_domain" "rs_on_prem_epg_to_physdom" {
  application_epg_dn = aci_application_epg.on_prem_epg.id
  tdn                = data.aci_physical_domain.physdom.id
}

resource "aci_epg_to_static_path" "leaf_101_eth1_23" {
  application_epg_dn = aci_application_epg.on_prem_epg.id
  tdn                = "topology/pod-1/paths-101/pathep-[eth1/23]"
  encap              = "vlan-1100"
}