Remote-exec not working in Terraform with aws_instance resource - terraform

I have the code below; when I run apply it gets a timeout. The instance is created, but the remote-exec commands don't run.
I am running this on a Windows 10 machine.
Terraform version is v0.12.12, provider.aws v2.33.0.
resource "aws_instance" "web" {
ami = "ami-54d2a63b"
instance_type = "t2.nano"
key_name = "terra"
tags = {
Name = "HelloWorld"
}
connection {
type = "ssh"
user = "ubuntu"
private_key = "${file("C:/Users/Vinayak/Downloads/terra.pem")}"
host = self.public_ip
}
provisioner "remote-exec" {
inline = [
"echo cat > test.txt"
]
}
}

Please try changing your host line to
host = "${self.public_ip}"
Letting people know the actual error message you are getting might help too. :)
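For reference, a minimal sketch of the connection block with that change applied (the rest of the resource stays as in the question); the explicit timeout shown here is an assumption, not something from the original post:

  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = "${file("C:/Users/Vinayak/Downloads/terra.pem")}"
    host        = "${self.public_ip}"
    timeout     = "5m" # assumed value; 5 minutes is also the default
  }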

Related

Execute bash script on Ubuntu using Terraform

Is it possible to execute shell commands on an Ubuntu OS using a Terraform script?
I have to do some initial configuration before the execution of the Terraform scripts.
You could define a local-exec provisioner in your resource:
provisioner "local-exec" {
command = "echo The server's IP address is ${self.private_ip}"
}
That will execute right after the resource is created. There are other types of provisioners; see: https://www.terraform.io/language/resources/provisioners/syntax
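For example, one of the other provisioner types on that page is the file provisioner, which copies a local file to the remote machine; a rough sketch (the paths and key file here are made up):

  provisioner "file" {
    source      = "setup.sh"      # hypothetical local script
    destination = "/tmp/setup.sh"

    connection {
      type        = "ssh"
      user        = "ubuntu"
      private_key = file("~/.ssh/id_rsa") # hypothetical key path
      host        = self.public_ip
    }
  }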
It depends on where your Ubuntu OS is. If it's local, you can do something like this:
resource "aws_instance" "web" {
# ...
provisioner "local-exec" {
command = "echo ${self.private_ip} >> private_ips.txt"
}
}
If it's a remote resource, for example an AWS EC2 instance:
resource "aws_instance" "web" {
# ...
# Establishes connection to be used by all
# generic remote provisioners (i.e. file/remote-exec)
connection {
type = "ssh"
user = "root"
password = var.root_password
host = self.public_ip
}
provisioner "remote-exec" {
inline = [
"puppet apply",
"consul join ${aws_instance.web.private_ip}",
]
}
}
Also, if it's an EC2 instance, a common approach is to define a script using user_data, which runs with root privileges immediately after the instance is created, but only once; it will not run again even if you reboot the instance. In Terraform you can do something like this:
resource "aws_instance" "server" {
ami = "ami-123456"
instance_type = "t3.medium"
availability_zone = "eu-central-1b"
vpc_security_group_ids = [aws_security_group.server.id]
subnet_id = var.subnet1
private_ip = var.private-ip
key_name = var.key_name
associate_public_ip_address = true
tags = {
Name = "db-server"
}
user_data = <<EOF
mkdir abc
apt update && apt install nano
EOF
}
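As a variation on the same idea (not from the original answer), the user_data script can also be kept in a separate file and loaded with file(); the filename here is hypothetical:

  resource "aws_instance" "server" {
    # ... same arguments as above ...

    # init.sh is a made-up name for a script stored next to the configuration.
    user_data = file("${path.module}/init.sh")
  }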

Terraform OpenStack user data gives inconsistency in user creation

I'm kind of stuck here and not sure exactly what is wrong. Can someone help me?
Problem: when running the resource below in OpenStack using Terraform, the user "aditya" only gets created intermittently.
I need the user to be created every time.
Not sure if it's an error in the code or a problem with the VMs.
resource "openstack_compute_instance_v2" "test-machine" {
region = "zxy"
availability_zone = "zcy"
name = "test-machine"
security_groups = []
user_data = templatefile("/some/path",{
admin_username = "aditya"})
connection {
host = someip
type = "ssh"
user = "aditya"
private_key = test_pem
timeout = "20m"
}
provisioner "remote-exec" {
inline = [
"/bin/bash -c \"while [ ! -f /tmp/done-user-data ]; do sleep 2; done\"",
]
}
}
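No accepted answer is included here, but one way to at least surface the failure is to verify the user after the wait loop, so the apply fails instead of silently succeeding; a small sketch reusing the provisioner from the question:

  provisioner "remote-exec" {
    inline = [
      "/bin/bash -c \"while [ ! -f /tmp/done-user-data ]; do sleep 2; done\"",
      "id aditya", # non-zero exit fails the provisioner if cloud-init did not create the user
    ]
  }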

How to send local files using Terraform Cloud as remote backend?

I am creating an AWS EC2 instance and I am using Terraform Cloud as the backend.
In ./main.tf:
terraform {
  required_version = "~> 0.12"

  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "organization"
    workspaces { prefix = "test-dev-" }
  }
}
In ./modules/instances/function.tf:
resource "aws_instance" "test" {
ami = "${var.ami_id}"
instance_type = "${var.instance_type}"
subnet_id = "${var.private_subnet_id}"
vpc_security_group_ids = ["${aws_security_group.test_sg.id}"]
key_name = "${var.test_key}"
tags = {
Name = "name"
Function = "function"
}
provisioner "remote-exec" {
inline = [
"sudo useradd someuser"
]
connection {
host = "${self.public_ip}"
type = "ssh"
user = "ubuntu"
private_key = "${file("~/.ssh/mykey.pem")}"
}
}
}
and as a result, I got the following error:
Call to function "file" failed: no file exists at /home/terraform/.ssh/...
So what is happening here is that Terraform is trying to find the file in Terraform Cloud instead of on my local machine. How can I transfer a file from my local machine while still using Terraform Cloud?
There is no straightforward way to do what I asked in the question. In the end I uploaded the key into AWS with the AWS CLI, like this:
aws ec2 import-key-pair --key-name "name_for_the_key" --public-key-material file:///home/user/.ssh/name_for_the_key.pub
and then referenced it like this:
resource "aws_instance" "test" {
ami = "${var.ami_id}"
...
key_name = "name_for_the_key"
...
}
Note: yes, file:// looks like the "Windowsest" syntax ever, but you have to use it on Linux too.
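Not part of the accepted answer, but for the private_key side of the connection, one workaround is to put the key contents into a Terraform Cloud workspace variable (marked sensitive) instead of reading it with file(); a rough sketch, where ssh_private_key is a made-up variable name:

  variable "ssh_private_key" {
    type        = string
    description = "Private key contents; set as a sensitive variable in the Terraform Cloud workspace"
  }

  resource "aws_instance" "test" {
    # ...

    provisioner "remote-exec" {
      inline = [
        "sudo useradd someuser"
      ]

      connection {
        host        = "${self.public_ip}"
        type        = "ssh"
        user        = "ubuntu"
        private_key = var.ssh_private_key
      }
    }
  }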

How to use Terraform provisioner with multiple instances

I want to create x instances and run the same provisioner.
resource "aws_instance" "workers" {
ami = "ami-08d658f84a6d84a80"
count = 3
...
provisioner "remote-exec" {
scripts = ["setup-base.sh", "./setup-docker.sh"]
connection {
type = "ssh"
host = "${element(aws_instance.workers.*.public_ip, count.index)}"
user = "ubuntu"
private_key = file("${var.provisionKeyPath}")
agent = false
}
}
I think the host line confuses Terraform. I am getting: Error: Cycle: aws_instance.workers[2], aws_instance.workers[1], aws_instance.workers[0]
Since I upgraded my Terraform version (0.12), I have encountered the same problem as yours.
You need to use ${self.private_ip} for the host property in your connection block,
and the connection block should be located outside of the provisioner "remote-exec" block.
Details are below.
resource "aws_instance" "workers" {
ami = "ami-08d658f84a6d84a80"
count = 3
...
connection {
host = "${self.private_ip}"
type = "ssh"
user = "YOUR_USER_NAME"
private_key = "${file("~/YOUR_PEM_FILE.pem")}"
}
provisioner "remote-exec" {
scripts = ["setup-base.sh", "./setup-docker.sh"]
}
...
}
If you need more information, the link below should help:
https://github.com/hashicorp/terraform/issues/20286

How to get remote-exec provisioner to apply after disk attachments?

I have a script that I need to run after my instance has been provisioned and the volumes have been attached:
resource "aws_instance" "controller" {
...
provisioner "remote-exec" {
connection {
type = "ssh"
user = "centos"
}
inline = [
"download and run script to verify environment"
]
}
}
resource "aws_ebs_volume" "controller-ebs-sdb" {
...
}
resource "aws_volume_attachment" "controller-volume-attachment-sdb" {
device_name = "/dev/sdb"
volume_id = "${aws_ebs_volume.controller-ebs-sdb.id}"
instance_id = "${aws_instance.controller.id}"
}
Currently the script fails because when it runs the volume has not been attached yet.
Is it possible to only run the remote-exec script after the volumes have been attached?
You can run a provisioner on any resource (consider the null_resource pattern for an extreme version of this), so the best thing here is to run it on the aws_volume_attachment resource:
# ...

resource "aws_volume_attachment" "controller-volume-attachment-sdb" {
  device_name = "/dev/sdb"
  volume_id   = "${aws_ebs_volume.controller-ebs-sdb.id}"
  instance_id = "${aws_instance.controller.id}"

  provisioner "remote-exec" {
    connection {
      host = "${aws_instance.controller.public_ip}"
      type = "ssh"
      user = "centos"
    }

    inline = [
      "download and run script to verify environment"
    ]
  }
}
You could also consider adding a trigger (via the null_resource pattern mentioned above). Another crude option is to add a sleep for a few seconds, have the script retry itself, or have it check the status/existence of the disk before attempting.
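As a sketch of that null_resource pattern (the resource name and trigger key are illustrative, not from the original answers), the provisioner can also live on a null_resource tied to the attachment:

  resource "null_resource" "verify_environment" {
    # Re-created (and re-provisioned) whenever the attachment changes,
    # so the remote-exec only runs after the volume is attached.
    triggers = {
      attachment_id = "${aws_volume_attachment.controller-volume-attachment-sdb.id}"
    }

    connection {
      host = "${aws_instance.controller.public_ip}"
      type = "ssh"
      user = "centos"
    }

    provisioner "remote-exec" {
      inline = [
        "download and run script to verify environment"
      ]
    }
  }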
