Terraform unsupported attribute error using Scaleway - terraform

After I run terraform apply and type 'yes' I get the following error 3 times (since I have 3 null resources):
Error: Unsupported attribute: This value does not have any attributes.
I checked each of the entries in my connection block and the error seems to come from the host attribute. I believe it happens because ips.address is only generated after the server has launched, while Terraform wants a value for host before the BareMetal server has been deployed. Is there something wrong I'm doing here? Either I'm using the wrong value (I've tried ips.id as well), or I need to create some sort of output for when ips.address has been generated and only then set host. I haven't been able to find any resources on BareMetal provisioning in Scaleway. Here is my code, with instance_number = 3.
provider "scaleway" {
access_key = var.ACCESS_KEY
secret_key = var.SECRET_KEY
organization_id = var.ORGANIZATION_ID
zone = "fr-par-2"
region = "fr-par"
}
resource "scaleway_account_ssh_key" "main" {
name = "main"
public_key = file("~/.ssh/id_rsa.pub")
}
resource "scaleway_baremetal_server" "base" {
count = var.instance_number
name = "${var.env_name}-BareMetal-${count.index}"
offer = var.baremetal_type
os = var.baremetal_image
ssh_key_ids = [scaleway_account_ssh_key.main.id]
tags = [ "BareMetal-${count.index}" ]
}
resource "null_resource" "ssh" {
count = var.instance_number
connection {
type = "ssh"
private_key = file("~/.ssh/id_rsa")
user = "root"
password = ""
host = scaleway_baremetal_server.base[count.index].ips.address
port = 22
}
provisioner "remote-exec" {
script = "provision/install_java_python.sh"
}
}
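A possible direction, assuming the Scaleway provider exposes ips on scaleway_baremetal_server as a list of objects rather than a single object: index the list before reading .address, roughly like this sketch.

connection {
  type        = "ssh"
  private_key = file("~/.ssh/id_rsa")
  user        = "root"
  port        = 22
  # Sketch only: assumes ips is a list of objects, so select the first
  # entry before reading its address field.
  host        = scaleway_baremetal_server.base[count.index].ips[0].address
}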

Related

How to access Terraform server `count` in a datasource?

I have a Terraform script where I need to pass count.index to the data block to get the correct IP. For example:
resource "null_resource" "provisioning_disk_config_server" {
count = var.config_server_count
depends_on = [oci_core_volume_attachment.ISCSIDiskAttachment_config_server]
connection {
type = "ssh"
host = data.oci_resourcemanager_private_endpoint_reachable_ip.config_server_reachable_ip_address.ip_address
user = "opc"
private_key = file(var.ssh_private_key)
}.....
Datasource:
data "oci_resourcemanager_private_endpoint_reachable_ip" "config_server_reachable_ip_address" {
private_endpoint_id = oci_resourcemanager_private_endpoint.rms_pe.id
private_ip = oci_core_instance.config_server[count].private_ip
}
How can I access/pass the server count index to the data block oci_resourcemanager_private_endpoint_reachable_ip?
In this case you would also have to use the count meta-argument:
data "oci_resourcemanager_private_endpoint_reachable_ip" "config_server_reachable_ip_address" {
count = var.config_server_count
private_endpoint_id = oci_resourcemanager_private_endpoint.rms_pe.id
private_ip = oci_core_instance.config_server[count.index].private_ip
}
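Because the data source now uses count, any reference to it also needs an index. A sketch of how the connection block in the null_resource would then read, with everything else kept as in the question:

resource "null_resource" "provisioning_disk_config_server" {
  count      = var.config_server_count
  depends_on = [oci_core_volume_attachment.ISCSIDiskAttachment_config_server]

  connection {
    type        = "ssh"
    # Index the data source with count.index now that it is created per server
    host        = data.oci_resourcemanager_private_endpoint_reachable_ip.config_server_reachable_ip_address[count.index].ip_address
    user        = "opc"
    private_key = file(var.ssh_private_key)
  }
}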

DigitalOcean droplet provisioning: Cycle Error

I want to create multiple droplets while installing some software onto each of them using a remote provisioner. I have the following code:
resource "digitalocean_droplet" "server" {
for_each = var.servers
name = each.key
image = each.value.image
size = each.value.size
region = each.value.region
ssh_keys = [
data.digitalocean_ssh_key.terraform.id
]
tags = each.value.tags
provisioner "remote-exec" {
inline = [
"mkdir -p /tmp/scripts/",
]
connection {
type = "ssh"
user = "root"
private_key = file("${var.ssh_key}")
host = digitalocean_droplet.server[each.key].ipv4_address
}
}
This always results in the following error:
Error: Cycle: digitalocean_droplet.server["server2"], digitalocean_droplet.server["server1"]
I understand this refers to a circular dependency, but how do I install the software on each server?
As mentioned in my comment, the issue here is that you are creating a cyclic dependency because you are referring to a resource by its name within its own block. To quote [1]:
References create dependencies, and referring to a resource by name within its own block would create a dependency cycle.
To fix this, you can use the special keyword self to reference the same instance that is being created:
resource "digitalocean_droplet" "server" {
for_each = var.servers
provisioner "remote-exec" {
inline = [
"mkdir -p /tmp/scripts/",
]
connection {
type = "ssh"
user = "root"
private_key = file("${var.ssh_key}")
host = self.ipv4_address # <---- here is where you would use the self keyword
}
}
[1] https://www.terraform.io/language/resources/provisioners/connection#the-self-object
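For completeness, the for_each above assumes var.servers is a map of objects with the fields the resource reads (image, size, region, tags). A hypothetical shape, with illustrative values that are not from the original question:

variable "servers" {
  type = map(object({
    image  = string
    size   = string
    region = string
    tags   = list(string)
  }))
  # Example values only; substitute your own image/size/region slugs.
  default = {
    server1 = {
      image  = "ubuntu-22-04-x64"
      size   = "s-1vcpu-1gb"
      region = "fra1"
      tags   = ["web"]
    }
  }
}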

Terraform - azurerm_frontdoor_custom_https_configuration - 'the given key does not identify an element in this collection value'

This code has worked before; all I'm trying to do is add new frontend endpoints, routing rules, and backend pools.
I've tried to share only the code snippets I think are relevant, but let me know if there's key info missing.
This one has stumped me for a couple of days now, and no matter what I've tried I cannot make sense of the error. It's as if it's indexing out of the variable or searching for something that isn't there, but there are already something like 6 entries and now I'm adding another.
I'm worried that this Front Door code has not been run in a while and something has gotten screwed up in state, especially given all the alerts on the accompanying TF docs for this resource - https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/frontdoor_custom_https_configuration
It's been quite a while, and the AzureRM provider has gone through several updates - possibly from something before 2.58 to now past 2.58. I also don't know how to verify/inspect the state file and make sure it's correct - even the 2.58 upgrade notes are confusing.
Ideas?
The error
on ..\modules\frontdoor\main.tf line 129, in resource "azurerm_frontdoor_custom_https_configuration" "https_config":
129: frontend_endpoint_id = azurerm_frontdoor.main.frontend_endpoints[each.value]
|----------------
| azurerm_frontdoor.main.frontend_endpoints is map of string with 8 elements
| each.value is "www-sell-dev-contoso-com"
The given key does not identify an element in this collection value.
main.tf
provider "azurerm" {
features {}
}
terraform {
backend "azurerm" {
}
}
#the outputs.tf on this module output things like the frontdoor_endpoints
#the outputs.tf with main.tf also output similar values
module "coreInfraFrontDoor" {
source = "../modules/frontdoor"
resource_group_name = module.coreInfraResourceGroup.resource_group_name
frontdoor_name = "fd-infra-${terraform.workspace}-001"
enforce_backend_pools_certificate_name_check = lookup(var.enforce_backend_pools_certificate_name_check, terraform.workspace)
log_analytics_workspace_id = module.coreInfraLogAnalytics.log_analytics_workspace_id
tags = local.common_tags
health_probes = lookup(var.health_probes, terraform.workspace)
routing_rules = lookup(var.routing_rules, terraform.workspace)
backend_pools = lookup(var.backend_pools, terraform.workspace)
frontend_endpoints = lookup(var.frontend_endpoints, terraform.workspace)
prestage_frontend_endpoints = lookup(var.prestage_frontend_endpoints, terraform.workspace)
frontdoor_firewall_policy_name = "fdfwp${terraform.workspace}001"
frontdoor_firewall_prestage_policy_name = "fdfwp${terraform.workspace}prestage"
mode = lookup(var.mode, terraform.workspace)
ip_whitelist_enable = lookup(var.ip_whitelist_enable, terraform.workspace)
ip_whitelist = lookup(var.ip_whitelist, terraform.workspace)
key_vault_id = module.coreInfraKeyVault.id
}
module main.tf
resource "azurerm_frontdoor" "main" {
name = var.frontdoor_name
location = "global"
resource_group_name = var.resource_group_name
enforce_backend_pools_certificate_name_check = var.enforce_backend_pools_certificate_name_check
tags = var.tags
dynamic "routing_rule {#stuff is here obv}
dynamic "backend_pool {#also here}
#i think this is because there was an issue/needs to be some default value for the first endpoint?
frontend_endpoint {
name = var.frontdoor_name
host_name = "${var.frontdoor_name}.azurefd.net"
web_application_firewall_policy_link_id = azurerm_frontdoor_firewall_policy.main.id
}
#now the dynamic ones from vars
dynamic "frontend_endpoint" {
for_each = var.frontend_endpoints
content {
name = frontend_endpoint.value.name
host_name = frontend_endpoint.value.host_name
session_affinity_enabled = lookup(frontend_endpoint.value, "session_affinity_enabled", false)
web_application_firewall_policy_link_id = azurerm_frontdoor_firewall_policy.main.id
}
}
versions.tf
terraform {
  required_version = "~> 0.14.7"
  required_providers {
    azurerm = "~>2.72.0"
  }
}
variables.tf
variable "frontend_endpoints" {
type = map(any)
description = "List of frontend (custom) endpoints. This is in addition to the <frontend_name>.azurefd.net endpoint that this module creates by default."
default = {
dev = [
{
name = "dev-search-contoso-com"
host_name = "dev.search.contoso.com"
},
{
name = "dev-cool-contoso-com"
host_name = "dev.cool.contoso.com"
},
########################
#this is new below
########################
{
name = "dev-sell-contoso-com"
host_name = "dev.sell.contoso.com"
}
]
prod = [ #you get the idea ]
}
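Since part of the question is how to verify what the frontend_endpoints map actually contains, one low-tech option (a hypothetical debugging aid, not part of the original code) is to add a temporary output inside the module that lists the map's keys, then compare them against the each.value reported in the error:

# Temporary debugging output (assumed addition, not from the original module):
# after a plan/apply this shows the keys that frontend_endpoint_id can be
# indexed with, for comparison against each.value in the error message.
output "frontdoor_frontend_endpoint_keys" {
  value = keys(azurerm_frontdoor.main.frontend_endpoints)
}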

Retrieve IP address from instances using for_each

I have this script which works great. It creates 3 instances with the specified tags so I can identify them easily. The issue is that I want to add a remote-exec provisioner (currently commented out) to install some packages. If I were using count, I could have looped over it to run the remote-exec on all the instances, but I could not use count because I had to use for_each to loop over a local list. Since count and for_each cannot be used together, how do I loop over the instances to retrieve their IP addresses for use in the remote-exec provisioner?
On DigitalOcean and AWS, I was able to get this to work using host = "${self.public_ip}"
But it does not work on Vultr and gives the Unsupported attribute error.
instance.tf
resource "vultr_ssh_key" "kubernetes" {
name = "kubernetes"
ssh_key = file("kubernetes.pub")
}
resource "vultr_instance" "kubernetes_instance" {
for_each = toset(local.expanded_names)
plan = "vc2-1c-2gb"
region = "sgp"
os_id = "387"
label = each.value
tag = each.value
hostname = each.value
enable_ipv6 = true
backups = "disabled"
ddos_protection = false
activation_email = false
ssh_key_ids = [vultr_ssh_key.kubernetes.id]
/* connection {
type = "ssh"
user = "root"
private_key = file("kubernetes")
timeout = "2m"
host = vultr_instance.kubernetes_instance[each.key].ipv4_address
}
provisioner "remote-exec" {
inline = "sudo hostnamectl set-hostname ${each.value}"
} */
}
locals {
  expanded_names = flatten([
    for name, count in var.host_name : [
      for i in range(count) : format("%s-%02d", name, i + 1)
    ]
  ])
}
provider.tf
terraform {
  required_providers {
    vultr = {
      source  = "vultr/vultr"
      version = "2.3.1"
    }
  }
}

provider "vultr" {
  api_key     = "***************************"
  rate_limit  = 700
  retry_limit = 3
}
variables.tf
variable "host_name" {
type = map(number)
default = {
"Manager" = 1
"Worker" = 2
}
}
The attribute you are looking for is called main_ip, not ipv4_address. It is accessible via self.main_ip in your connection block.
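Putting that together with the self keyword, a sketch of the commented-out block from the question, with the rest of the resource arguments kept exactly as posted (inline is also switched to a list, which is what remote-exec expects):

resource "vultr_instance" "kubernetes_instance" {
  for_each = toset(local.expanded_names)
  # ... plan, region, os_id, and the other arguments as in the question ...

  connection {
    type        = "ssh"
    user        = "root"
    private_key = file("kubernetes")
    timeout     = "2m"
    host        = self.main_ip # reference the instance being created, not the resource by name
  }

  provisioner "remote-exec" {
    inline = ["sudo hostnamectl set-hostname ${each.value}"]
  }
}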

How to use Terraform provisioner with multiple instances

I want to create x instances and run the same provisioner.
resource "aws_instance" "workers" {
ami = "ami-08d658f84a6d84a80"
count = 3
...
provisioner "remote-exec" {
scripts = ["setup-base.sh", "./setup-docker.sh"]
connection {
type = "ssh"
host = "${element(aws_instance.workers.*.public_ip, count.index)}"
user = "ubuntu"
private_key = file("${var.provisionKeyPath}")
agent = false
}
}
I think the host line confuses Terraform. I'm getting Error: Cycle: aws_instance.workers[2], aws_instance.workers[1], aws_instance.workers[0]
Since I upgraded my Terraform version (0.12), I have encountered the same problem as yours.
You need to use ${self.private_ip} for the host property in your connection object,
and the connection object should be located outside of the provisioner "remote-exec" block.
Details are below.
resource "aws_instance" "workers" {
ami = "ami-08d658f84a6d84a80"
count = 3
...
connection {
host = "${self.private_ip}"
type = "ssh"
user = "YOUR_USER_NAME"
private_key = "${file("~/YOUR_PEM_FILE.pem")}"
}
provisioner "remote-exec" {
scripts = ["setup-base.sh", "./setup-docker.sh"]
}
...
}
If you need more information, the link below should help:
https://github.com/hashicorp/terraform/issues/20286