Terraform cannot ssh into EC2 instance to upload files - terraform

I am trying to get a basic Terraform example up and running and then deploy a very simple Flask application there in a Docker container. The whole script works if I remove the file provisioner sections and the user data section. The .pem file is in the same location on my disk as the main.tf script and the terraform.exe file.
If I leave the file provisioner in then the script fails with the following error:
Error: Error applying plan:
1 error(s) occurred:
* aws_launch_configuration.example: 1 error(s) occurred:
* dial tcp :22: connectex: No connection could be made because the target machine actively refused it.
If I remove the file provisioner sections the script runs fine and I can SSH into the created instance using my private key, so the key_name part seems to be working OK. I think it's something to do with the file provisioner trying to connect to upload my files.
Here is the launch configuration from my script. I have tried using the connection block, which I got from another post online, but I can't see what I am doing wrong.
resource "aws_launch_configuration" "example" {
image_id = "${lookup(var.eu_west_ami, var.region)}"
instance_type = "t2.micro"
key_name = "Terraform-python"
security_groups = ["${aws_security_group.instance.id}"]
provisioner "file" {
source = "python/hello_flask.py"
destination = "/home/ec2-user/hello_flask.py"
connection {
type = "ssh"
user = "ec2-user"
private_key = "${file("Terraform-python.pem")}"
timeout = "2m"
agent = false
}
}
provisioner "file" {
source = "python/flask_dockerfile"
destination = "/home/ec2-user/flask_dockerfile"
connection {
type = "ssh"
user = "ec2-user"
private_key = "${file("Terraform-python.pem")}"
timeout = "2m"
agent = false
}
}
user_data = <<-EOF
#!/bin/bash
sudo yum update -y
sudo yum install -y docker
sudo service docker start
sudo usermod -a -G docker ec2-user
sudo docker build -t flask_dockerfile:latest /home/ec2-user/flask_dockerfile
sudo docker run -d -p 5000:5000 flask_dockerfile
EOF
lifecycle {
create_before_destroy = true
}
}
It is probably something very simple and stupid that I am doing. Thanks in advance to anyone who takes a look.

An aws_launch_configuration is not an actual EC2 instance, just a 'template' from which instances are launched, so there is nothing for the provisioner to connect to via SSH.
To copy those files you have two options:
1. Create a custom AMI that already contains them. For that you can use Packer, or Terraform itself: launch an EC2 instance with aws_instance and these file provisioners, then create an AMI from it with aws_ami_from_instance (see the sketch below).
2. Include the files in the user_data. This is not a best practice, but if the files are short it is the simplest option.
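If you go with the first option using Terraform itself, a rough sketch (resource names here are illustrative, not taken from the question) looks like this: a throwaway aws_instance receives the files over SSH, aws_ami_from_instance bakes them into an AMI, and the launch configuration then points at that AMI instead of carrying provisioners.

resource "aws_instance" "ami_builder" {
  ami                    = "${lookup(var.eu_west_ami, var.region)}"
  instance_type          = "t2.micro"
  key_name               = "Terraform-python"
  # must allow SSH from the machine running Terraform
  vpc_security_group_ids = ["${aws_security_group.instance.id}"]

  provisioner "file" {
    source      = "python/hello_flask.py"
    destination = "/home/ec2-user/hello_flask.py"

    connection {
      type        = "ssh"
      host        = "${self.public_ip}"
      user        = "ec2-user"
      private_key = "${file("Terraform-python.pem")}"
    }
  }

  # (a second file provisioner for python/flask_dockerfile would go here)
}

resource "aws_ami_from_instance" "flask" {
  name               = "flask-app-ami"
  source_instance_id = "${aws_instance.ami_builder.id}"
}

# The launch configuration then uses image_id = "${aws_ami_from_instance.flask.id}"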

Related

Provisioner local-exec: 'always_run' trigger doesn't work as expected

In my Terraform I have a MySQL module as follows:
# create ssh tunnel to RDS instance
resource "null_resource" "ssh_tunnel" {
  provisioner "local-exec" {
    command = "ssh -i ${var.private_key} -L 3306:${var.rds_endpoint} -fN ec2-user@${var.bastion_ip} -v >./stdout.log 2>./stderr.log"
  }

  triggers = {
    always_run = timestamp()
  }
}

# create database
resource "mysql_database" "rds" {
  name       = var.db_name
  depends_on = [null_resource.ssh_tunnel]
}
When I add a new module and run terraform apply for the first time, it works as expected.
But when terraform apply runs again without any changes, I get an error:
Could not connect to server: dial tcp 127.0.0.1:3306: connect: connection refused
If I understand correctly, the local-exec provisioner should execute the script every time because of the trigger settings. Could you explain how this should work?
I suspect this happens because your first local-exec creates the tunnel in the background (-f). The second execution then fails because the first tunnel still exists; your code never closes it. You would have to extend the code to check for existing tunnels and properly close them when you are done using them.
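A minimal sketch of that idea (assuming nc is available on the machine running Terraform; the port check is only illustrative and still leaves closing the tunnel up to you):

resource "null_resource" "ssh_tunnel" {
  provisioner "local-exec" {
    # Only open a new background tunnel if nothing is listening on 3306 yet.
    command = <<-EOT
      nc -z 127.0.0.1 3306 || ssh -i ${var.private_key} -L 3306:${var.rds_endpoint} -fN ec2-user@${var.bastion_ip}
    EOT
  }

  triggers = {
    always_run = timestamp()
  }
}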
In the end I implemented this solution, https://registry.terraform.io/modules/flaupretre/tunnel/ssh/latest, instead of using a null_resource.

ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

I'm attempting to install Nginx on an EC2 instance using the Terraform remote-exec provisioner, but I keep running into this error:
ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
This is what my code looks like:
resource "aws_instance" "nginx" {
ami = data.aws_ami.aws-linux.id
instance_type = "t2.micro"
key_name = var.key_name
vpc_security_group_ids = [aws_security_group.allow_ssh.id]
connection {
type = "ssh"
host = self.public_ip
user = "ec2-user"
private_key = file(var.private_key_path)
}
provisioner "remote-exec" {
inline = [
"sudo yum install nginx -y",
"sudo service nginx start"
]
}
}
Security group rules are set up to allow SSH from anywhere, and I'm able to SSH into the box from my local machine.
Not sure if I'm missing something really obvious here. I've tried a newer version of Terraform but it's the same issue.
If your EC2 instance is using an AMI for an operating system that uses cloud-init (the default images for most Linux distributions do) then you can avoid the need for Terraform to log in over SSH at all by using the user_data argument to pass a script to cloud-init:
resource "aws_instance" "nginx" {
ami = data.aws_ami.aws-linux.id
instance_type = "t2.micro"
key_name = var.key_name
vpc_security_group_ids = [aws_security_group.allow_ssh.id]
user_data = <<-EOT
yum install nginx -y
service nginx start
EOT
}
For an operating system that includes cloud-init, the system runs cloud-init as part of startup; it retrieves the value of user_data from the instance metadata API, executes the script, and writes any output from that run to the cloud-init logs.
What I've described above is the officially recommended way to run commands to set up your compute instance. The documentation describes provisioners as a last resort, and one of the reasons given is the extra complexity of correctly configuring SSH connectivity and authentication, which is exactly the complexity that caused you to ask this question, so I think following that advice is the best way to address it.
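If you want to confirm that the script actually ran, cloud-init writes the script's output to a log on the instance; on most stock Amazon Linux and Ubuntu images the path below is the default (check your distribution if it differs):
sudo cat /var/log/cloud-init-output.log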

How to run the remote-exec provisioner on destroy for more than one instance

I am using Terraform to set up a Docker Swarm cluster on OpenStack, along with Ansible for configuring the newly created VMs. Before a VM is removed (when I decrease the number of instances and apply the change via terraform apply), I want to run docker swarm leave on it. This works when I destroy instances one by one, but when two instances are removed at once I get an error:
Error: Cycle: module.swarm_cluster.openstack_compute_instance_v2.swarm-cluster-hosts[3] (destroy), module.swarm_cluster.openstack_compute_instance_v2.swarm-cluster-hosts[2] (destroy)
Here is the script:
resource "openstack_compute_instance_v2" "my_cluster"{
provisioner "remote-exec" {
when = destroy
inline = [ "sudo docker swarm leave" ]
}
connection {
type = "ssh"
user = var.ansible_user
timeout = "3m"
private_key = var.private_ssh_key
host = self.access_ip_v4
}
}
Terraform: 0.12

Unable to execute "remote-exec" provisioner in Azure terraform

I am trying to execute the remote-exec provisioner when deploying a VM in Azure, but the inline code in remote-exec never executes.
Here is my provisioner and connection code:
provisioner "remote-exec" {
inline = [
"touch newfile.txt",
"touch newfile2.txt",
]
}
connection {
type = "ssh"
host = "${azurerm_public_ip.publicip.ip_address}"
user = "testuser"
private_key = "${file("~/.ssh/id_rsa")}"
agent = false
}
The code never executes and gives the error:
Error: Failed to read ssh private key: no key found
The key (id_rsa) is saved in the same directory as the main.tf file on the machine where I am running Terraform.
Please suggest what is wrong here.
As @ydaetskcoR commented, your code private_key = "${file("~/.ssh/id_rsa")}" says the private key should exist at .ssh/id_rsa under your home directory, i.e. /home/username on Linux or C:\Users\username on Windows.
You could save the key (id_rsa) in that directory to match your code; otherwise, you need to point the code at the key's actual path.
For example, edit it to private_key = "${file("${path.module}/id_rsa")}"
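Putting that together, a sketch of the corrected connection block, assuming id_rsa sits next to main.tf in the module directory:

connection {
  type        = "ssh"
  host        = "${azurerm_public_ip.publicip.ip_address}"
  user        = "testuser"
  # path.module is the directory containing this configuration,
  # so the key can live next to main.tf instead of under ~/.ssh
  private_key = "${file("${path.module}/id_rsa")}"
  agent       = false
}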

Commands in user_data are not executed in terraform

Hi, the EC2 instance is created, but the commands in userdata.sh are not getting executed. When I connect to the EC2 instance manually via PuTTY, I find that nginx is not installed. To verify whether the script is executed, I added an echo message, but no output is displayed in the command prompt when I run terraform apply. How can I verify whether the user data is getting executed?
I have installed Terraform on the C drive and both scripts are in the same folder, C:/Terraform/userdata.sh and C:/Terraform/main.tf. I tried giving the path as ${file("./userdata.sh")} but it still does not work.
Please advise, as I am just learning Terraform. Thanks.
#!/bin/bash -v
echo "userdata-start"
sudo apt-get update -y
sudo apt-get install -y nginx > /tmp/nginx.log
sudo service nginx start
echo "userdata-end"
This is called in my Terraform program [main.tf] as below:
# resource "template_file" "user_data" {
# template = "userdata.sh"
# }
data "template_file" "user_data" {
template = "${file("userdata.sh")}"
}
resource "aws_instance" "web" {
instance_type = "t2.micro"
ami = "ami-5e8bb23b"
key_name = "sptest"
vpc_security_group_ids = ["${aws_security_group.default.id}"]
subnet_id = "${aws_subnet.tf_test_subnet.id}"
user_data = "${data.template_file.user_data.template}"
#user_data = "${template_file.user_data.rendered}"
#user_data = "${file("userdata.sh")}"
#user_data = "${file("./userdata.sh")}"
tags {
Name = "tf-example-ec2"
}
}
I can see one issue with the code you have posted: the user_data argument should be
user_data = "${data.template_file.user_data.rendered}"
As a further suggestion, I recommend having your script write to a log file so you can check which steps were executed; it also tells you whether the script ran at all.
Here is a sample from our code, which you can adapt to your standards:
# redirect all script output (stdout and stderr) into a log file
logdir=/var/log
logfile=${logdir}/mongo_setup.log
exec >> $logfile 2>&1
Hope this helps.
Why so complicated?
user_data = file("user_data.sh")
The file just has to exist next to the other .tf files of the project. That will be enough.
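As a side note, on Terraform 0.12 and later the built-in templatefile() function covers what the template_file data source is doing here, if you do need values substituted into the script. A minimal sketch (instance_name is a hypothetical template variable; any literal ${...} already in the shell script would need escaping as $${...}):

resource "aws_instance" "web" {
  instance_type = "t2.micro"
  ami           = "ami-5e8bb23b"

  # Renders userdata.sh, replacing ${instance_name} inside the script
  # with the value supplied here.
  user_data = templatefile("${path.module}/userdata.sh", {
    instance_name = "tf-example-ec2"
  })
}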
