How to use a Terraform provisioner with multiple instances

I want to create x instances and run the same provisioner.
resource "aws_instance" "workers" {
ami = "ami-08d658f84a6d84a80"
count = 3
...
provisioner "remote-exec" {
scripts = ["setup-base.sh", "./setup-docker.sh"]
connection {
type = "ssh"
host = "${element(aws_instance.workers.*.public_ip, count.index)}"
user = "ubuntu"
private_key = file("${var.provisionKeyPath}")
agent = false
}
}
I think the host line confuses Terraform. I am getting: Error: Cycle: aws_instance.workers[2], aws_instance.workers[1], aws_instance.workers[0]

Since upgrading my Terraform version to 0.12, I have encountered the same problem as you.
You need to use ${self.private_ip} for the host property in your connection block,
and the connection block should be placed outside of the provisioner "remote-exec" block.
Details are below.
resource "aws_instance" "workers" {
ami = "ami-08d658f84a6d84a80"
count = 3
...
connection {
host = "${self.private_ip}"
type = "ssh"
user = "YOUR_USER_NAME"
private_key = "${file("~/YOUR_PEM_FILE.pem")}"
}
provisioner "remote-exec" {
scripts = ["setup-base.sh", "./setup-docker.sh"]
}
...
}
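A side note on self: it exposes all of the attributes of the instance being created, so if you want the public address, as in the original question, the same pattern applies:
connection {
  host = "${self.public_ip}"
  ...
}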
If you need more information, the link below may help:
https://github.com/hashicorp/terraform/issues/20286

Related

Need help using count in a remote-exec provisioner to retrieve multiple VM IPs

I want to use count to install a package on 2 of my VMs using a single remote-exec provisioner. As of now, I am doing that individually in 2 provisioner blocks, as below.
Present code using the remote-exec provisioner for the 2 VMs:
resource "null_resource" "install_nginx_host1" {
provisioner "remote-exec" {
inline = [
"sudo apt install nginx -y"
]
}
connection {
type = "ssh"
user = "ubuntu"
private_key = file("~/.ssh/id_rsa")
host = module.virtual-machine[0].linux_vm_public_ips.instance-0
}
}
resource "null_resource" "install_nginx_host2" {
provisioner "remote-exec" {
inline = [
"sudo apt install nginx -y"
]
}
connection {
type = "ssh"
user = "ubuntu"
private_key = file("~/.ssh/id_rsa")
host = module.virtual-machine[1].linux_vm_public_ips.instance-1
}
}
Can someone please help me with the value I should use to set host via count.index? I tried multiple things, e.g.
host = "module.virtual-machine[${count.index}].linux_vm_public_ips.instance-${count.index}"
But it returns the host strings as:
module.virtual-machine[0].linux_vm_public_ips.instance-0
module.virtual-machine[1].linux_vm_public_ips.instance-1
while I want the values those expressions refer to.
This should be pretty straightforward to achieve. Your attempt produced literal strings because only the ${count.index} part inside the quotes is evaluated; the rest is plain text, so the reference has to be written as a bare expression:
resource "null_resource" "install_nginx_host1" {
count = 2
provisioner "remote-exec" {
inline = [
"sudo apt install nginx -y"
]
}
connection {
type = "ssh"
user = "ubuntu"
private_key = file("~/.ssh/id_rsa")
host = module.virtual-machine[count.index].linux_vm_public_ips["instance-${count.index}"]
}
}
Please make sure you understand how to use the count meta-argument [1].
[1] https://www.terraform.io/language/meta-arguments/count
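As a side note, the same pattern can also be expressed with for_each instead of count. A rough sketch, keeping the module name and output shape from the question (for_each requires string keys, hence the tostring call):
resource "null_resource" "install_nginx" {
  for_each = {
    for i in range(2) :
    tostring(i) => module.virtual-machine[i].linux_vm_public_ips["instance-${i}"]
  }
  provisioner "remote-exec" {
    inline = [
      "sudo apt install nginx -y"
    ]
  }
  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file("~/.ssh/id_rsa")
    host        = each.value
  }
}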

DigitalOcean droplet provisioning: Cycle Error

I want to create multiple droplets while installing some software onto each of them using a remote provisioner. I have the following code:
resource "digitalocean_droplet" "server" {
for_each = var.servers
name = each.key
image = each.value.image
size = each.value.size
region = each.value.region
ssh_keys = [
data.digitalocean_ssh_key.terraform.id
]
tags = each.value.tags
provisioner "remote-exec" {
inline = [
"mkdir -p /tmp/scripts/",
]
connection {
type = "ssh"
user = "root"
private_key = file("${var.ssh_key}")
host = digitalocean_droplet.server[each.key].ipv4_address
}
}
This always results in the following error:
Error: Cycle: digitalocean_droplet.server["server2"], digitalocean_droplet.server["server1"]
I understand this refers to a circular dependency, but how do I install the software on each server?
As mentioned in my comment, the issue here is that you are creating a cyclic dependency by referring to a resource by its name within its own block. To quote [1]:
References create dependencies, and referring to a resource by name within its own block would create a dependency cycle.
To fix this, you can use the special self keyword to reference the same instance that is being created:
resource "digitalocean_droplet" "server" {
for_each = var.servers
provisioner "remote-exec" {
inline = [
"mkdir -p /tmp/scripts/",
]
connection {
type = "ssh"
user = "root"
private_key = file("${var.ssh_key}")
host = self.ipv4_address # <---- here is where you would use the self keyword
}
}
[1] https://www.terraform.io/language/resources/provisioners/connection#the-self-object

Retrieve IP address from instances using for_each

I have this script, which works great. It creates 3 instances with the specified tags to identify them easily. The issue is that I want to add a remote-exec provisioner (currently commented out) to install some packages. If I were using count, I could have looped over it to run the remote-exec on all the instances. I could not use count because I had to use for_each to loop over a local list. Since count and for_each cannot be used together, how do I loop over the instances to retrieve their IP addresses for use in the remote-exec provisioner?
On DigitalOcean and AWS, I was able to get it to work using host = "${self.public_ip}",
but it does not work on Vultr and gives an Unsupported attribute error.
instance.tf
resource "vultr_ssh_key" "kubernetes" {
name = "kubernetes"
ssh_key = file("kubernetes.pub")
}
resource "vultr_instance" "kubernetes_instance" {
for_each = toset(local.expanded_names)
plan = "vc2-1c-2gb"
region = "sgp"
os_id = "387"
label = each.value
tag = each.value
hostname = each.value
enable_ipv6 = true
backups = "disabled"
ddos_protection = false
activation_email = false
ssh_key_ids = [vultr_ssh_key.kubernetes.id]
/* connection {
type = "ssh"
user = "root"
private_key = file("kubernetes")
timeout = "2m"
host = vultr_instance.kubernetes_instance[each.key].ipv4_address
}
provisioner "remote-exec" {
inline = "sudo hostnamectl set-hostname ${each.value}"
} */
}
locals {
expanded_names = flatten([
for name, count in var.host_name : [
for i in range(count) : format("%s-%02d", name, i + 1)
]
])
}
provider.tf
terraform {
  required_providers {
    vultr = {
      source  = "vultr/vultr"
      version = "2.3.1"
    }
  }
}
provider "vultr" {
  api_key     = "***************************"
  rate_limit  = 700
  retry_limit = 3
}
variables.tf
variable "host_name" {
type = map(number)
default = {
"Manager" = 1
"Worker" = 2
}
}
The attribute you are looking for is called main_ip rather than ipv4_address. Specifically, it is accessible via self.main_ip in your connection block.
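For illustration, a rough sketch of the commented-out block with that change applied; the connection is also moved out of the comment, and the inline command is wrapped in a list, which is what remote-exec expects:
resource "vultr_instance" "kubernetes_instance" {
  for_each = toset(local.expanded_names)
  # ... arguments as above ...
  connection {
    type        = "ssh"
    user        = "root"
    private_key = file("kubernetes")
    timeout     = "2m"
    host        = self.main_ip
  }
  provisioner "remote-exec" {
    inline = ["sudo hostnamectl set-hostname ${each.value}"]
  }
}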

How to send local files using Terraform Cloud as remote backend?

I am creating an AWS EC2 instance and I am using Terraform Cloud as the backend.
in ./main.tf:
terraform {
  required_version = "~> 0.12"
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "organization"
    workspaces { prefix = "test-dev-" }
  }
}
in ./modules/instances/function.tf:
resource "aws_instance" "test" {
ami = "${var.ami_id}"
instance_type = "${var.instance_type}"
subnet_id = "${var.private_subnet_id}"
vpc_security_group_ids = ["${aws_security_group.test_sg.id}"]
key_name = "${var.test_key}"
tags = {
Name = "name"
Function = "function"
}
provisioner "remote-exec" {
inline = [
"sudo useradd someuser"
]
connection {
host = "${self.public_ip}"
type = "ssh"
user = "ubuntu"
private_key = "${file("~/.ssh/mykey.pem")}"
}
}
}
and as a result, I got the following error:
Call to function "file" failed: no file exists at /home/terraform/.ssh/...
So what is happening here is that Terraform is trying to find the file on the Terraform Cloud worker instead of on my local machine. How can I transfer the file from my local machine while still using Terraform Cloud?
There is no straightforward way to do what I asked in the question. In the end, I uploaded the key to AWS with its CLI, like this:
aws ec2 import-key-pair --key-name "name_for_the_key" --public-key-material file:///home/user/.ssh/name_for_the_key.pub
and then referenced it like this:
resource "aws_instance" "test" {
ami = "${var.ami_id}"
...
key_name = "name_for_the_key"
...
}
Note: yes, file:// looks like the "Windowsest" syntax ever, but you have to use it on Linux too.
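If you would rather avoid the manual CLI step, another option is to pass the key material itself through a Terraform Cloud workspace variable, since only local file paths are unavailable on the remote workers. A sketch, where the variable name ssh_public_key is an assumption:
# Set this variable in the Terraform Cloud workspace, e.g. by pasting
# the contents of ~/.ssh/name_for_the_key.pub as its value.
variable "ssh_public_key" {
  type = string
}
resource "aws_key_pair" "test" {
  key_name   = "name_for_the_key"
  public_key = var.ssh_public_key
}
resource "aws_instance" "test" {
  ami = "${var.ami_id}"
  ...
  key_name = aws_key_pair.test.key_name
  ...
}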

Remote-exec not working in Terraform with aws_instance resource

I have the code below. When I run apply, it times out; an instance is created, but the remote-exec commands don't run.
I am running this on a Windows 10 machine.
Terraform version v0.12.12, provider.aws v2.33.0.
resource "aws_instance" "web" {
ami = "ami-54d2a63b"
instance_type = "t2.nano"
key_name = "terra"
tags = {
Name = "HelloWorld"
}
connection {
type = "ssh"
user = "ubuntu"
private_key = "${file("C:/Users/Vinayak/Downloads/terra.pem")}"
host = self.public_ip
}
provisioner "remote-exec" {
inline = [
"echo cat > test.txt"
]
}
}
Please try changing your host line to:
host = "${self.public_ip}"
Letting people know the actual error message you are getting might help too. :)
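If fixing the interpolation alone does not help: a timeout at this stage usually means the SSH connection itself cannot be established, often because port 22 is not reachable from the machine running Terraform. A minimal sketch of a security group that would allow this (the wide-open 0.0.0.0/0 CIDR is only for illustration):
resource "aws_security_group" "allow_ssh" {
  name = "allow-ssh"
  # Inbound SSH so the remote-exec provisioner can connect.
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  # Outbound traffic, e.g. for package downloads.
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
resource "aws_instance" "web" {
  ami                    = "ami-54d2a63b"
  instance_type          = "t2.nano"
  key_name               = "terra"
  vpc_security_group_ids = [aws_security_group.allow_ssh.id]
  # ... connection and provisioner as above ...
}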
