Invalid reference from destroy provisioner - terraform

I'm getting the following Error: Invalid reference from destroy provisioner. It's not clear to me why this error is occurring.
Destroy-time provisioners and their connection configurations may only
reference attributes of the related resource, via 'self', 'count.index', or
'each.key'.
References to other resources during the destroy phase can cause dependency
cycles and interact poorly with create_before_destroy.
provisioner "remote-exec" {
when = destroy
inline = [
"java -jar /home/ec2-user/jenkins-cli.jar -auth #/home/ec2-user/jenkins_auth -s http://${aws_instance.jenkins-master.private_ip}:8080 delete-node ${self.private_ip}"
]
connection {
type = "ssh"
user = "ec2-user"
private_key = file("~/.ssh/id_rsa")
host = self.public_ip
}
}
Error: Invalid reference from destroy provisioner
on instances.tf line 67, in resource "aws_instance" "jenkins-worker-oregon":
67: inline = [
68: "java -jar /home/ec2-user/jenkins-cli.jar -auth #/home/ec2-user/jenkins_auth -s http://${aws_instance.jenkins-master.private_ip}:8080 delete-node ${self.private_ip}"
69: ]

I had a similar issue, and the solution in my case was to use a null_resource that gets triggered when a specific value changes.
In your case, a solution could be the following:
resource "null_resource" "register-to-master" {
triggers = {
jenkins-master-ip = aws_instance.jenkins-master.private_ip
private_ip = some_value
}
provisioner "remote-exec" {
when = destroy
inline = [
"java -jar /home/ec2-user/jenkins-cli.jar -auth #/home/ec2-user/jenkins_auth -s http://${self.triggers.jenkins-master-ip}:8080 delete-node ${self.triggers.private_ip}"
]
connection {
type = "ssh"
user = "ec2-user"
private_key = file("~/.ssh/id_rsa")
host = self.triggers.public_ip
}
}
provisioner "remote-exec" {
when = create
inline = [ "echo 'create step'" ]
connection {
type = "ssh"
user = "ec2-user"
private_key = file("~/.ssh/id_rsa")
host = self.triggers.public_ip
}
}
}
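To make the placeholders concrete, here is a minimal sketch of how the trigger values could be wired up, assuming the worker is the aws_instance.jenkins-worker-oregon resource named in the error output (that resource name and its attributes are an assumption taken from the error message, not from code shown in the question):

resource "null_resource" "register-to-master" {
  triggers = {
    # assumption: the worker instance is aws_instance.jenkins-worker-oregon, per the error output above
    jenkins-master-ip = aws_instance.jenkins-master.private_ip
    private_ip        = aws_instance.jenkins-worker-oregon.private_ip
    public_ip         = aws_instance.jenkins-worker-oregon.public_ip
  }

  # ... the create and destroy provisioners from the block above, unchanged ...
}

Because everything the destroy provisioner needs is copied into the triggers map at create time, it only ever references self at destroy time, which is exactly what the error message requires.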

If you can only reference attributes of the related resource, then the "invalid reference" is presumably the reference to aws_instance.jenkins-master.private_ip in the inline command, which is referring to something outside the related resource.

You can use tags to store the master's private IP and then read it back through self.tags, like this:
tags = {
  Name              = "put_a_nice_name_or_some_thing_here" # change_this
  Master_Private_IP = aws_instance.jenkins-master.private_ip
}
and in the provisioner block:
provisioner "remote-exec" {
when = destroy
inline = [
"java -jar /home/ec2-user/jenkins-cli.jar -auth #/home/ec2-user/jenkins_auth -s http://${self.tags.Master_Private_IP}:8080 delete-node ${self.private_ip}"
]
connection {
type = "ssh"
user = "ec2-user"
private_key = file("~/.ssh/id_rsa")
host = self.public_ip
}
}
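For clarity, both snippets live inside the worker resource itself; a sketch of how they fit together (the resource name follows the error output, and the elided arguments are not shown in the question):

resource "aws_instance" "jenkins-worker-oregon" {
  # ... ami, instance_type and the rest of the worker configuration (not shown in the question) ...

  tags = {
    Name              = "put_a_nice_name_or_some_thing_here" # change_this
    Master_Private_IP = aws_instance.jenkins-master.private_ip
  }

  # ... the destroy-time provisioner exactly as shown above, reading self.tags.Master_Private_IP ...
}

The tag is written at create time, when referencing the master is still allowed, so by destroy time the master's address is already stored on the worker itself and can be read through self.tags without touching another resource.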

Related

DigitalOcean droplet provisioning: Cycle Error

I want to create multiple droplets while installing some software onto each of them using a remote provisioner. I have the following code:
resource "digitalocean_droplet" "server" {
for_each = var.servers
name = each.key
image = each.value.image
size = each.value.size
region = each.value.region
ssh_keys = [
data.digitalocean_ssh_key.terraform.id
]
tags = each.value.tags
provisioner "remote-exec" {
inline = [
"mkdir -p /tmp/scripts/",
]
connection {
type = "ssh"
user = "root"
private_key = file("${var.ssh_key}")
host = digitalocean_droplet.server[each.key].ipv4_address
}
}
This always results in the following error:
Error: Cycle: digitalocean_droplet.server["server2"], digitalocean_droplet.server["server1"]
I understand this refers to a circular dependency, but how do I install the software on each server?
As mentioned in my comment, the issue here is that you are creating a cyclic dependency by referring to a resource by its name within its own block. To quote [1]:
References create dependencies, and referring to a resource by name within its own block would create a dependency cycle.
To fix this, you can use the special keyword self to reference the same instance that is being created:
resource "digitalocean_droplet" "server" {
for_each = var.servers
provisioner "remote-exec" {
inline = [
"mkdir -p /tmp/scripts/",
]
connection {
type = "ssh"
user = "root"
private_key = file("${var.ssh_key}")
host = self.ipv4_address # <---- here is where you would use the self keyword
}
}
[1] https://www.terraform.io/language/resources/provisioners/connection#the-self-object

Terraform OpenStack user data gives inconsistency in user creation

I'm kind of stuck here and not sure exactly what is wrong; can someone help me?
Problem: when running the resource below in OpenStack using Terraform, the user "aditya" only gets created intermittently.
I need the user to be created every time.
I'm not sure if it's an error in the code or a problem with the VMs.
resource "openstack_compute_instance_v2" "test-machine" {
region = "zxy"
availability_zone = "zcy"
name = "test-machine"
security_groups = []
user_data = templatefile("/some/path",{
admin_username = "aditya"})
connection {
host = someip
type = "ssh"
user = "aditya"
private_key = test_pem
timeout = "20m"
}
provisioner "remote-exec" {
inline = [
"/bin/bash -c \"while [ ! -f /tmp/done-user-data ]; do sleep 2; done\"",
]
}
}
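The template at /some/path is not shown in the question, so the following is purely an assumption about its shape: the wait loop above only works if the rendered user_data creates the admin user and then drops the /tmp/done-user-data sentinel as its last step, roughly like this hypothetical inline stand-in for the template:

  user_data = <<-EOT
    #!/bin/bash
    useradd -m -s /bin/bash aditya   # what admin_username renders to in the question
    # ... install the user's SSH authorized key so the "aditya" connection above can log in ...
    touch /tmp/done-user-data        # sentinel file the remote-exec wait loop polls for
  EOT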

Remote-exec not working in Terraform with aws_instance resource

I have the code below; when I run apply it gets a timeout. An instance is created, but the remote-exec commands don't run.
I am running this on a Windows 10 machine.
Terraform version is v0.12.12, provider.aws v2.33.0.
resource "aws_instance" "web" {
ami = "ami-54d2a63b"
instance_type = "t2.nano"
key_name = "terra"
tags = {
Name = "HelloWorld"
}
connection {
type = "ssh"
user = "ubuntu"
private_key = "${file("C:/Users/Vinayak/Downloads/terra.pem")}"
host = self.public_ip
}
provisioner "remote-exec" {
inline = [
"echo cat > test.txt"
]
}
}
Please try to change your host line to
host = "${self.public_ip}"
Letting people know the actual error message you are getting might help too. :)

How to use Terraform provisioner with multiple instances

I want to create x instances and run the same provisioner.
resource "aws_instance" "workers" {
ami = "ami-08d658f84a6d84a80"
count = 3
...
provisioner "remote-exec" {
scripts = ["setup-base.sh", "./setup-docker.sh"]
connection {
type = "ssh"
host = "${element(aws_instance.workers.*.public_ip, count.index)}"
user = "ubuntu"
private_key = file("${var.provisionKeyPath}")
agent = false
}
}
I think the host line confuses Terraform. I am getting Error: Cycle: aws_instance.workers[2], aws_instance.workers[1], aws_instance.workers[0]
Since upgrading my Terraform version (0.12), I have encountered the same problem as you.
You need to use ${self.private_ip} for the host property in your connection object,
and the connection object should be located outside of the provisioner "remote-exec" block.
Details are below.
resource "aws_instance" "workers" {
ami = "ami-08d658f84a6d84a80"
count = 3
...
connection {
host = "${self.private_ip}"
type = "ssh"
user = "YOUR_USER_NAME"
private_key = "${file("~/YOUR_PEM_FILE.pem")}"
}
provisioner "remote-exec" {
scripts = ["setup-base.sh", "./setup-docker.sh"]
}
...
}
If you need more information, the link below should help:
https://github.com/hashicorp/terraform/issues/20286
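One caveat: the question connected over public_ip while this answer switched to private_ip. The part that actually breaks the cycle is using self instead of naming the resource, so the public-IP variant works just as well; a sketch reusing the question's own settings (user, key path and agent flag taken from the question) would be:

connection {
  host        = "${self.public_ip}" # self.* avoids the cycle; pick public or private IP to match how you reach the instances
  type        = "ssh"
  user        = "ubuntu"
  private_key = file("${var.provisionKeyPath}")
  agent       = false
}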

Terraform remote-exec on each host created

I am trying to set up a group of EC2s for an app using Terraform in AWS. After each server is created I want to mount the eNVM instance storage on each server using remote-exec: create 3 servers, then mount the eNVM on each of the 3 servers.
I attempted to use a null_resource, but I am getting errors about 'resource depends on non-existent resource' or 'interpolation' errors.
variable count {
  default = 3
}

module "app-data-node" {
  source           = "some_git_source"
  count            = "${var.count}"
  instance_size    = "instance_data"
  hostname_pattern = "app-data"
  dns_domain       = "${data.terraform_remote_state.network.dns_domain}"
  key_name         = "app-automation"
  description      = "Automation App Data Instance"
  package_proxy    = "${var.package_proxy}"
}

resource "null_resource" "mount_envm" {
  # Only run this provisioner for app nodes
  #count = "${var.count}"
  depends_on = [
    "null_resource.${module.app-data-node}"
  ]

  connection {
    host        = "${aws_instance.i.*.private_ip[count.index]}"
    user        = "root"
    private_key = "app-automation"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs -t ext4 /dev/nvme0n1",
      "sudo mkdir /data",
      "sudo mount /dev/nvme0n1 /data"
    ]
  }
}
The desired outcome: 3 EC2 instances, each with the eNVM mounted on it.
You can use a null_resource to run the provisioner:
resource "null_resource" "provisioner" {
count = "${var.count}"
triggers {
master_id = "${element(aws_instance.my_instances.*.id, count.index)}"
}
connection {
#host = "${element(aws_instance.my_instances.*.private_ip, count.index)}"
host = "${element(aws_instance.my_instances.*.private_ip, count.index)}"
type = "ssh"
user = "..."
private_key = "..."
}
# set hostname
provisioner "remote-exec" {
inline = [
"sudo mkfs -t ext4 /dev/nvme0n1",
"sudo mkdir /data",
"sudo mount /dev/nvme0n1 /data"
]
}
}
This should do it for all instances at once as well.
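Note that this answer references aws_instance.my_instances directly, while the question creates its instances through the app-data-node module. If the instances only exist inside that module, point the triggers and connection at whatever ID/IP outputs the module exposes instead; a sketch, assuming hypothetical module outputs named instance_ids and private_ips, would be:

resource "null_resource" "mount_envm" {
  count = "${var.count}"

  triggers {
    # hypothetical output names; use whatever app-data-node actually exports
    instance_id = "${element(module.app-data-node.instance_ids, count.index)}"
  }

  connection {
    host        = "${element(module.app-data-node.private_ips, count.index)}"
    user        = "root"
    private_key = "${file("~/.ssh/app-automation.pem")}" # assumption: private_key needs the key contents, not the key name
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs -t ext4 /dev/nvme0n1",
      "sudo mkdir /data",
      "sudo mount /dev/nvme0n1 /data"
    ]
  }
}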
