I want to create multiple droplets and install some software onto each of them using a remote-exec provisioner. I have the following code:
resource "digitalocean_droplet" "server" {
for_each = var.servers
name = each.key
image = each.value.image
size = each.value.size
region = each.value.region
ssh_keys = [
data.digitalocean_ssh_key.terraform.id
]
tags = each.value.tags
provisioner "remote-exec" {
inline = [
"mkdir -p /tmp/scripts/",
]
connection {
type = "ssh"
user = "root"
private_key = file("${var.ssh_key}")
host = digitalocean_droplet.server[each.key].ipv4_address
}
}
This always results in the following error:
Error: Cycle: digitalocean_droplet.server["server2"], digitalocean_droplet.server["server1"]
I understand this refers to a circular dependency, but how do I install the software on each server?
As mentioned in my comment, the issue here is that you are creating a cyclic dependency by referring to a resource by its name within its own block. To quote [1]:
References create dependencies, and referring to a resource by name within its own block would create a dependency cycle.
To fix this, you can use the special self keyword to reference the same instance that is being created:
resource "digitalocean_droplet" "server" {
for_each = var.servers
provisioner "remote-exec" {
inline = [
"mkdir -p /tmp/scripts/",
]
connection {
type = "ssh"
user = "root"
private_key = file("${var.ssh_key}")
host = self.ipv4_address # <---- here is where you would use the self keyword
}
}
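For completeness, the resource from the question would then look like this (a sketch that simply combines the original arguments with the self fix):

resource "digitalocean_droplet" "server" {
  for_each = var.servers

  name   = each.key
  image  = each.value.image
  size   = each.value.size
  region = each.value.region
  ssh_keys = [
    data.digitalocean_ssh_key.terraform.id
  ]
  tags = each.value.tags

  provisioner "remote-exec" {
    inline = [
      "mkdir -p /tmp/scripts/",
    ]

    connection {
      type        = "ssh"
      user        = "root"
      private_key = file("${var.ssh_key}")
      host        = self.ipv4_address
    }
  }
}

Because self refers to the instance currently being provisioned rather than to digitalocean_droplet.server as a whole, Terraform no longer sees the resource depending on itself, and the cycle goes away.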
[1] https://www.terraform.io/language/resources/provisioners/connection#the-self-object
Related
I'm kind of stuck here and not sure exactly what is wrong; can someone help me?
Problem: when running the resource below in OpenStack using Terraform, the user "aditya" only gets created intermittently.
I need the user to be created every time.
I am not sure whether it is an error in the code or a problem with the VMs.
resource "openstack_compute_instance_v2" "test-machine" {
region = "zxy"
availability_zone = "zcy"
name = "test-machine"
security_groups = []
user_data = templatefile("/some/path",{
admin_username = "aditya"})
connection {
host = someip
type = "ssh"
user = "aditya"
private_key = test_pem
timeout = "20m"
}
provisioner "remote-exec" {
inline = [
"/bin/bash -c \"while [ ! -f /tmp/done-user-data ]; do sleep 2; done\"",
]
}
}
I have this script which works great. It creates 3 instances with the specified tags so they are easy to identify. The issue is that I want to add a remote-exec provisioner (currently commented out) to install some packages. If I were using count, I could have looped over it to run remote-exec on all the instances. I could not use count because I had to use for_each to loop over a local list. Since count and for_each cannot be used together, how do I loop over the instances to retrieve their IP addresses for use in the remote-exec provisioner?
On DigitalOcean and AWS I was able to get it to work using host = "${self.public_ip}", but it does not work on Vultr and gives an Unsupported attribute error.
instance.tf
resource "vultr_ssh_key" "kubernetes" {
name = "kubernetes"
ssh_key = file("kubernetes.pub")
}
resource "vultr_instance" "kubernetes_instance" {
for_each = toset(local.expanded_names)
plan = "vc2-1c-2gb"
region = "sgp"
os_id = "387"
label = each.value
tag = each.value
hostname = each.value
enable_ipv6 = true
backups = "disabled"
ddos_protection = false
activation_email = false
ssh_key_ids = [vultr_ssh_key.kubernetes.id]
/* connection {
type = "ssh"
user = "root"
private_key = file("kubernetes")
timeout = "2m"
host = vultr_instance.kubernetes_instance[each.key].ipv4_address
}
provisioner "remote-exec" {
inline = "sudo hostnamectl set-hostname ${each.value}"
} */
}
locals {
  expanded_names = flatten([
    for name, count in var.host_name : [
      for i in range(count) : format("%s-%02d", name, i + 1)
    ]
  ])
}
provider.tf
terraform {
  required_providers {
    vultr = {
      source  = "vultr/vultr"
      version = "2.3.1"
    }
  }
}

provider "vultr" {
  api_key     = "***************************"
  rate_limit  = 700
  retry_limit = 3
}
variables.tf
variable "host_name" {
type = map(number)
default = {
"Manager" = 1
"Worker" = 2
}
}
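For reference, with the default host_name map above, local.expanded_names evaluates to the list below, which for_each then turns into one instance per name:

["Manager-01", "Worker-01", "Worker-02"]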
The attribute you are looking for is called main_ip rather than ipv4_address. Specifically, it is accessible via self.main_ip in your connection block.
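Applied to the resource above, the commented-out block would become something like the following (a sketch reusing the question's key file and hostnamectl command; note that inline expects a list, and that self avoids the cycle the same way as in the DigitalOcean answer above):

resource "vultr_instance" "kubernetes_instance" {
  for_each = toset(local.expanded_names)

  # ... plan, region, os_id, label, tag, hostname, ssh_key_ids as above ...

  connection {
    type        = "ssh"
    user        = "root"
    private_key = file("kubernetes")
    timeout     = "2m"
    host        = self.main_ip # main_ip, not ipv4_address
  }

  provisioner "remote-exec" {
    inline = ["sudo hostnamectl set-hostname ${each.value}"]
  }
}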
After I run terraform apply and type 'yes', I get the following error 3 times (since I have 3 null resources):
Error: Unsupported attribute: This value does not have any attributes.
I checked each of the entries in my connection block, and the error seems to be coming from the host attribute. I believe the error occurs because ips.address is only generated after the server has launched, while Terraform wants a value for host before the BareMetal server has been deployed. Is there something wrong I'm doing here? Either I'm using the wrong value (I've also tried ips.id), or I need to create some sort of output for when ips.address has been generated and then set host. I haven't been able to find any resources on BareMetal provisioning in Scaleway. Here is my code with instance_number = 3.
provider "scaleway" {
access_key = var.ACCESS_KEY
secret_key = var.SECRET_KEY
organization_id = var.ORGANIZATION_ID
zone = "fr-par-2"
region = "fr-par"
}
resource "scaleway_account_ssh_key" "main" {
name = "main"
public_key = file("~/.ssh/id_rsa.pub")
}
resource "scaleway_baremetal_server" "base" {
count = var.instance_number
name = "${var.env_name}-BareMetal-${count.index}"
offer = var.baremetal_type
os = var.baremetal_image
ssh_key_ids = [scaleway_account_ssh_key.main.id]
tags = [ "BareMetal-${count.index}" ]
}
resource "null_resource" "ssh" {
count = var.instance_number
connection {
type = "ssh"
private_key = file("~/.ssh/id_rsa")
user = "root"
password = ""
host = scaleway_baremetal_server.base[count.index].ips.address
port = 22
}
provisioner "remote-exec" {
script = "provision/install_java_python.sh"
}
}
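One thing that may be worth trying, purely as an assumption based on the error message rather than a confirmed fix: if ips on scaleway_baremetal_server is a list of IP objects rather than a single object, then accessing .address on the list itself would fail with exactly this error, and the first entry would need to be indexed explicitly:

connection {
  type        = "ssh"
  private_key = file("~/.ssh/id_rsa")
  user        = "root"
  host        = scaleway_baremetal_server.base[count.index].ips[0].address # first IP, assuming ips is a list
  port        = 22
}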
I have the code below; when I run apply, it times out. An instance is created, but the remote-exec commands don't run.
I am running this on a Windows 10 machine.
Terraform version is v0.12.12, provider.aws v2.33.0.
resource "aws_instance" "web" {
ami = "ami-54d2a63b"
instance_type = "t2.nano"
key_name = "terra"
tags = {
Name = "HelloWorld"
}
connection {
type = "ssh"
user = "ubuntu"
private_key = "${file("C:/Users/Vinayak/Downloads/terra.pem")}"
host = self.public_ip
}
provisioner "remote-exec" {
inline = [
"echo cat > test.txt"
]
}
}
Please try changing your host line to
host = "${self.public_ip}"
Letting people know the actual error message you are getting might help too. :)
I want to create x instances and run the same provisioner.
resource "aws_instance" "workers" {
ami = "ami-08d658f84a6d84a80"
count = 3
...
provisioner "remote-exec" {
scripts = ["setup-base.sh", "./setup-docker.sh"]
connection {
type = "ssh"
host = "${element(aws_instance.workers.*.public_ip, count.index)}"
user = "ubuntu"
private_key = file("${var.provisionKeyPath}")
agent = false
}
}
I think the host line confuses Terraform. I am getting: Error: Cycle: aws_instance.workers[2], aws_instance.workers[1], aws_instance.workers[0]
Since I upgraded my Terraform version (0.12), I have encountered the same problem as you.
You need to use ${self.private_ip} for the host property in your connection block, and the connection block should be located outside of the provisioner "remote-exec" block.
Details are below.
resource "aws_instance" "workers" {
ami = "ami-08d658f84a6d84a80"
count = 3
...
connection {
host = "${self.private_ip}"
type = "ssh"
user = "YOUR_USER_NAME"
private_key = "${file("~/YOUR_PEM_FILE.pem")}"
}
provisioner "remote-exec" {
scripts = ["setup-base.sh", "./setup-docker.sh"]
}
...
}
If you need more information, the link below may help:
https://github.com/hashicorp/terraform/issues/20286