I have this script, which works great. It creates 3 instances with the specified tags so they are easy to identify. The issue is that I want to add a remote-exec provisioner (currently commented out) to install some packages. If I were using count, I could have looped over it to run the remote-exec on all the instances. I could not use count because I had to use for_each to loop over a local list. Since count and for_each cannot be used together, how do I loop over the instances to retrieve their IP addresses for use in the remote-exec provisioner?
On DigitalOcean and AWS, I was able to get this to work using host = "${self.public_ip}",
but it does not work on Vultr and gives an Unsupported attribute error.
instance.tf
resource "vultr_ssh_key" "kubernetes" {
name = "kubernetes"
ssh_key = file("kubernetes.pub")
}
resource "vultr_instance" "kubernetes_instance" {
for_each = toset(local.expanded_names)
plan = "vc2-1c-2gb"
region = "sgp"
os_id = "387"
label = each.value
tag = each.value
hostname = each.value
enable_ipv6 = true
backups = "disabled"
ddos_protection = false
activation_email = false
ssh_key_ids = [vultr_ssh_key.kubernetes.id]
/* connection {
type = "ssh"
user = "root"
private_key = file("kubernetes")
timeout = "2m"
host = vultr_instance.kubernetes_instance[each.key].ipv4_address
}
provisioner "remote-exec" {
inline = "sudo hostnamectl set-hostname ${each.value}"
} */
}
locals {
  expanded_names = flatten([
    for name, count in var.host_name : [
      for i in range(count) : format("%s-%02d", name, i + 1)
    ]
  ])
}
provider.tf
terraform {
  required_providers {
    vultr = {
      source  = "vultr/vultr"
      version = "2.3.1"
    }
  }
}
provider "vultr" {
api_key = "***************************"
rate_limit = 700
retry_limit = 3
}
variables.tf
variable "host_name" {
type = map(number)
default = {
"Manager" = 1
"Worker" = 2
}
}
The attribute you are looking for is called main_ip, not ipv4_address. It is accessible as self.main_ip in your connection block.
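A minimal sketch of the (currently commented-out) blocks using that attribute, reusing the other arguments from the question. Inside a connection block, self refers to the instance being provisioned, so no self-referencing expression like vultr_instance.kubernetes_instance[each.key] is needed:

connection {
  type        = "ssh"
  user        = "root"
  private_key = file("kubernetes")
  timeout     = "2m"
  host        = self.main_ip
}
provisioner "remote-exec" {
  inline = ["sudo hostnamectl set-hostname ${each.value}"]
}

Note that inline takes a list of strings, not a bare string.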
I was able to create this device instance manually in OpenStack, and now I am trying to get it to work with Terraform.
This device instance needs a hard reboot after the volume attachment, and any cloud-config needs to be done after rebooting. Here is the general sketch of my current main.tf file.
# Configure the OpenStack Provider
terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
  }
}
data "template_file" "user_data"{
template = file("./userdata.yaml")
}
# create an instance
resource "openstack_compute_instance_v2" "server" {
  name         = "Device_Instance"
  image_id     = "xxx..."
  image_name   = "device_vmdk_1"
  flavor_name  = "m1.tiny"
  key_pair     = "my-keypair"
  region       = "RegionOne"
  config_drive = true

  network {
    name = "main_network"
  }
}
resource "openstack_blockstorage_volume_v2" "volume_2" {
name = "device_vmdk_2"
size = 1
image_id = "xxx...."
}
resource "openstack_blockstorage_volume_v3" "volume_3" {
name = "device_vmdk_3"
size = 1
image_id = "xxx..."
}
resource "openstack_compute_volume_attach_v2" "va_1" {
instance_id = "${openstack_compute_instance_v2.server.id}"
volume_id = "${openstack_blockstorage_volume_v2.volume_2.id}"
}
resource "openstack_compute_volume_attach_v2" "va_2" {
instance_id = "${openstack_compute_instance_v2.server.id}"
volume_id = "${openstack_blockstorage_volume_v3.volume_3.id}"
}
resource "null_resource" "reboot_instance" {
provisioner "local-exec" {
on_failure = fail
interpreter = ["/bin/bash", "-c"]
command = <<EOT
openstack server reboot --hard Device_Instance --insecure
echo "................"
EOT
}
depends_on = [openstack_compute_volume_attach_v2.va_1, openstack_compute_volume_attach_v2.va_2]
}
resource "openstack_compute_instance_v2" "server_config" {
name = "Device_Instance"
user_data = data.template_file.user_data.rendered
depends_on = [null_resource.reboot_instance]
}
As of now, it has been able to:
- have the "Device-Cloud-Instance" generated.
- have the "Device-Cloud-Instance" hard-rebooted.
But it fails after rebooting. As you may see, I have added this section at the end, but it does not seem to work:
resource "openstack_compute_instance_v2" "server_config" {}
Any ideas how to make it work?
Thanks,
Jack
This code has worked before; all I'm trying to do is add new frontend endpoints, routing rules, and backend pools.
I've tried to share only the code snippets I think are relevant, but let me know if there's key info missing.
This one has stumped me for a couple of days now, and no matter what I've tried I cannot make sense of the error. It's as if it's indexing out of the variable or searching for something that isn't there, but there are already something like 6 elements, and now I'm adding another.
I'm worried that this Front Door code has not been run in a while and something has gotten screwed up in state, especially given all the alerts on the accompanying TF docs for this resource: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/frontdoor_custom_https_configuration
It's been quite a while, and the AzureRM version has gone through several updates, possibly from pre-2.58 to now past 2.58. I also don't know how to verify or look at the state file and ensure it's correct; even the 2.58 upgrade notes are confusing.
Ideas?
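For the "how do I verify the state" part: terraform state list and terraform state show can print exactly what the state holds, including the keys of the frontend_endpoints map the error complains about. A sketch (the module address assumes the module block name from main.tf below):

terraform state list | grep frontdoor
terraform state show 'module.coreInfraFrontDoor.azurerm_frontdoor.main'

On Windows (which the backslashes in the error path suggest), findstr replaces grep.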
The error
on ..\modules\frontdoor\main.tf line 129, in resource "azurerm_frontdoor_custom_https_configuration" "https_config":
129: frontend_endpoint_id = azurerm_frontdoor.main.frontend_endpoints[each.value]
|----------------
| azurerm_frontdoor.main.frontend_endpoints is map of string with 8 elements
| each.value is "www-sell-dev-contoso-com"
The given key does not identify an element in this collection value.
main.tf
provider "azurerm" {
features {}
}
terraform {
backend "azurerm" {
}
}
# The outputs.tf in this module outputs things like the frontdoor_endpoints.
# The outputs.tf alongside main.tf also outputs similar values.
module "coreInfraFrontDoor" {
  source                                       = "../modules/frontdoor"
  resource_group_name                          = module.coreInfraResourceGroup.resource_group_name
  frontdoor_name                               = "fd-infra-${terraform.workspace}-001"
  enforce_backend_pools_certificate_name_check = lookup(var.enforce_backend_pools_certificate_name_check, terraform.workspace)
  log_analytics_workspace_id                   = module.coreInfraLogAnalytics.log_analytics_workspace_id
  tags                                         = local.common_tags
  health_probes                                = lookup(var.health_probes, terraform.workspace)
  routing_rules                                = lookup(var.routing_rules, terraform.workspace)
  backend_pools                                = lookup(var.backend_pools, terraform.workspace)
  frontend_endpoints                           = lookup(var.frontend_endpoints, terraform.workspace)
  prestage_frontend_endpoints                  = lookup(var.prestage_frontend_endpoints, terraform.workspace)
  frontdoor_firewall_policy_name               = "fdfwp${terraform.workspace}001"
  frontdoor_firewall_prestage_policy_name      = "fdfwp${terraform.workspace}prestage"
  mode                                         = lookup(var.mode, terraform.workspace)
  ip_whitelist_enable                          = lookup(var.ip_whitelist_enable, terraform.workspace)
  ip_whitelist                                 = lookup(var.ip_whitelist, terraform.workspace)
  key_vault_id                                 = module.coreInfraKeyVault.id
}
module main.tf
resource "azurerm_frontdoor" "main" {
name = var.frontdoor_name
location = "global"
resource_group_name = var.resource_group_name
enforce_backend_pools_certificate_name_check = var.enforce_backend_pools_certificate_name_check
tags = var.tags
dynamic "routing_rule {#stuff is here obv}
dynamic "backend_pool {#also here}
#i think this is because there was an issue/needs to be some default value for the first endpoint?
frontend_endpoint {
name = var.frontdoor_name
host_name = "${var.frontdoor_name}.azurefd.net"
web_application_firewall_policy_link_id = azurerm_frontdoor_firewall_policy.main.id
}
#now the dynamic ones from vars
dynamic "frontend_endpoint" {
for_each = var.frontend_endpoints
content {
name = frontend_endpoint.value.name
host_name = frontend_endpoint.value.host_name
session_affinity_enabled = lookup(frontend_endpoint.value, "session_affinity_enabled", false)
web_application_firewall_policy_link_id = azurerm_frontdoor_firewall_policy.main.id
}
}
versions.tf
terraform {
  required_version = "~> 0.14.7"
  required_providers {
    azurerm = "~> 2.72.0"
  }
}
variables.tf
variable "frontend_endpoints" {
type = map(any)
description = "List of frontend (custom) endpoints. This is in addition to the <frontend_name>.azurefd.net endpoint that this module creates by default."
default = {
dev = [
{
name = "dev-search-contoso-com"
host_name = "dev.search.contoso.com"
},
{
name = "dev-cool-contoso-com"
host_name = "dev.cool.contoso.com"
},
########################
#this is new below
########################
{
name = "dev-sell-contoso-com"
host_name = "dev.sell.contoso.com"
}
]
prod = [ #you get the idea ]
}
I created a resource that produces a set of VMs using the for_each argument. I'm having trouble referencing this resource from my web_private_group.
resource "google_compute_instance_group" "web_private_group" {
name = "${format("%s","${var.gcp_resource_name}-${var.gcp_env}-vm-group")}"
description = "Web servers instance group"
zone = var.gcp_zone_1
network = google_compute_network.vpc.self_link
# I've added some attempts I've tried that do not work...
instances = [
//google_compute_instance.compute_engines.self_link
//[for o in google_compute_instance.compute_engines : o.self_link]
{for k, o in google_compute_instance.compute_engines : k => o.self_link}
//google_compute_instance.web_private_2.self_link
]
named_port {
name = "http"
port = "80"
}
named_port {
name = "https"
port = "443"
}
}
# Create Google Cloud VMs
resource "google_compute_instance" "compute_engines" {
  for_each     = var.vm_instances
  name         = "${format("%s", "${var.gcp_resource_name}-${var.gcp_env}-${each.value}")}"
  machine_type = "e2-micro"
  zone         = var.gcp_zone_1
  tags         = ["ssh", "http"]

  boot_disk {
    initialize_params {
      image = "debian-10"
    }
  }
}
variable "vm_instances" {
description = "list of VM's"
type = set(string)
default = ["vm1", "vm2", "vm3"]
}
How can I properly link my compute_engines to my web_private_group resource within the instances = [] block?
Edit: To clarify further, how can I express the fact that there are multiple instances within my compute_engines resource?
You probably just need to use a splat expression as follows:
instances = values(google_compute_instance.compute_engines)[*].id
Moreover, a for expression can be used as well:
instances = [for res in google_compute_instance.compute_engines: res.id]
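Both expressions convert the map of objects that a for_each resource produces into the plain list that instances expects; the {for k, o in ...} attempt fails precisely because it builds a map rather than a list. To sanity-check the resulting list (a sketch, reusing the resource name from the question):

output "compute_engine_ids" {
  value = values(google_compute_instance.compute_engines)[*].id
}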
After I run terraform apply and type "yes", I get the following error 3 times (since I have 3 null resources):
Error: Unsupported attribute: This value does not have any attributes.
I checked each of the entries in my connection block, and the error seems to come from the host attribute. I believe the error occurs because ips.address is only generated after the server has launched, while Terraform wants a value for host before the BareMetal server has been deployed. Is there something wrong I'm doing here? Either I'm using the wrong value (I've tried ips.id as well), or I need to create some sort of output for when ips.address has been generated and then set host. I haven't been able to find any resources on BareMetal provisioning on Scaleway. Here is my code, with instance_number = 3.
provider "scaleway" {
access_key = var.ACCESS_KEY
secret_key = var.SECRET_KEY
organization_id = var.ORGANIZATION_ID
zone = "fr-par-2"
region = "fr-par"
}
resource "scaleway_account_ssh_key" "main" {
name = "main"
public_key = file("~/.ssh/id_rsa.pub")
}
resource "scaleway_baremetal_server" "base" {
count = var.instance_number
name = "${var.env_name}-BareMetal-${count.index}"
offer = var.baremetal_type
os = var.baremetal_image
ssh_key_ids = [scaleway_account_ssh_key.main.id]
tags = [ "BareMetal-${count.index}" ]
}
resource "null_resource" "ssh" {
count = var.instance_number
connection {
type = "ssh"
private_key = file("~/.ssh/id_rsa")
user = "root"
password = ""
host = scaleway_baremetal_server.base[count.index].ips.address
port = 22
}
provisioner "remote-exec" {
script = "provision/install_java_python.sh"
}
}
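Not a fix, but one way to see what the ips attribute actually contains once the servers exist is an output over all of them (a sketch, reusing the resource name above); that makes it easy to tell whether ips is a single object or a list of objects:

output "baremetal_ips" {
  value = scaleway_baremetal_server.base[*].ips
}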
I'm starting to use (and learn) Terraform. For now, I need to create multiple DO droplets and attach them to an AWS Route 53 zone. Here is what I'm trying to do:
My DO terraform file:
# Configure the DigitalOcean Provider
provider "digitalocean" {
  token = var.do_token
}

# Create a new tag
resource "digitalocean_tag" "victor" {
  name = "victor-fee1good22"
}

resource "digitalocean_droplet" "web" {
  count    = 2
  image    = var.do_config["image"]
  name     = "web-${count.index}"
  region   = var.do_config["region"]
  size     = var.do_config["size"]
  ssh_keys = [var.public_ssh_key, var.pv_ssh_key]
  tags     = [digitalocean_tag.victor.name]
}
My route53 file:
provider "aws" {
version = "~> 2.0"
region = "us-east-1"
access_key = var.aws_a_key
secret_key = var.aws_s_key
}
data "aws_route53_zone" "selected" {
name = "devops.rebrain.srwx.net"
}
resource "aws_route53_record" "www" {
сount = length(digitalocean_droplet.web)
zone_id = data.aws_route53_zone.selected.zone_id
name = "web_${count.index}"
type = "A"
ttl = "300"
records = [digitalocean_droplet.web[count.index].ipv4_address]
}
But I always get the error The "count" object can be used only in "resource" and "data" blocks, and only when the "count" argument is set. What did I do wrong?
Thanks!
UPDATE:
I've resolved this one by using сount = 2 instead of сount = length(digitalocean_droplet.web).
It works, but it would be better to have a dynamic value instead of a constant count. :)
You want to get the number of services that have not been created yet, and Terraform couldn't do that.
I think the simplest way is to use a common variable with the number of droplets.
resource "digitalocean_droplet" "test" {
count = var.number_of_vps
image = "ubuntu-18-04-x64"
name = "test-1"
region = data.digitalocean_regions.available.regions[0].slug
size = "s-1vcpu-1gb"
}
resource "aws_route53_record" "test" {
count = var.number_of_vps
zone_id = data.aws_route53_zone.primary.zone_id
name = "${local.login}-${count.index}.${data.aws_route53_zone.primary.name}"
type = "A"
ttl = "300"
records = [digitalocean_droplet.test[count.index].ipv4_address]
}
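This assumes var.number_of_vps is declared somewhere in the configuration, for example (a minimal sketch):

variable "number_of_vps" {
  type    = number
  default = 2
}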
This trick helped - https://github.com/hashicorp/terraform/issues/12570#issuecomment-291517239
resource "aws_route53_record" "dns" {
count = "${length(var.ips) > 0 ? length(var.domains) : 0}"
// ...
}
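The trick works because both lengths are derived from input variables rather than from another resource's attributes, so Terraform can evaluate count at plan time. The snippet assumes declarations roughly like these (a sketch; variable names taken from the snippet, types assumed):

variable "ips" {
  type    = list(string)
  default = []
}

variable "domains" {
  type    = list(string)
  default = []
}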