Hi, my EC2 instance is created, but the commands in userdata.sh are not getting executed. When I manually connect to the EC2 instance via PuTTY, I found that nginx is not installed. To verify whether the script is being executed, I added an echo message, but no output is displayed in the command prompt when I run terraform apply. How can I verify whether the user data is getting executed or not?
I have installed Terraform on my C drive, and both files are in the same folder: C:/Terraform/userdata.sh and C:/Terraform/main.tf. I tried giving the path as ${file("./userdata.sh")}, but it still does not work.
Please advise, as I am just learning Terraform. Thanks.
#!/bin/bash -v
echo "userdata-start"
sudo apt-get update -y
sudo apt-get install -y nginx > /tmp/nginx.log
sudo service nginx start
echo "userdata-end"
It is being called in my Terraform configuration [main.tf] as below:
# resource "template_file" "user_data" {
# template = "userdata.sh"
# }
data "template_file" "user_data" {
template = "${file("userdata.sh")}"
}
resource "aws_instance" "web" {
instance_type = "t2.micro"
ami = "ami-5e8bb23b"
key_name = "sptest"
vpc_security_group_ids = ["${aws_security_group.default.id}"]
subnet_id = "${aws_subnet.tf_test_subnet.id}"
user_data = "${data.template_file.user_data.template}"
#user_data = "${template_file.user_data.rendered}"
#user_data = "${file("userdata.sh")}"
#user_data = "${file("./userdata.sh")}"
tags {
Name = "tf-example-ec2"
}
}
I can see one issue with the code you have posted: the user_data argument should be
user_data = "${data.template_file.user_data.rendered}"
Moreover, as a suggestion, I recommend having your script write a log file so you can check which steps were executed. That will also tell you whether the script ran at all.
Here is one sample from our code; you can modify it to match your own standards:
logdir=/var/log
logfile=${logdir}/mongo_setup.log
# Redirect all subsequent stdout and stderr from this script to the log file
exec >> $logfile 2>&1
Hope this helps.
Why so complicated?
user_data = file("user_data.sh")
The file just needs to exist alongside the other .tf files of the project.
That will be enough.
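For example, a minimal sketch on Terraform 0.12 or later, reusing the values from the question (adjust the filename to match your script):
resource "aws_instance" "web" {
  instance_type          = "t2.micro"
  ami                    = "ami-5e8bb23b"
  key_name               = "sptest"
  vpc_security_group_ids = [aws_security_group.default.id]
  subnet_id              = aws_subnet.tf_test_subnet.id

  # path.module makes the path relative to the folder containing the .tf files
  user_data = file("${path.module}/userdata.sh")

  tags = {
    Name = "tf-example-ec2"
  }
}
Note that on 0.12 and later the tags argument takes an equals sign, unlike the 0.11 block syntax used in the question.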
I'm attempting to install Nginx on an EC2 instance using the Terraform remote-exec provisioner, but I keep running into this error:
ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
This is what my code looks like:
resource "aws_instance" "nginx" {
ami = data.aws_ami.aws-linux.id
instance_type = "t2.micro"
key_name = var.key_name
vpc_security_group_ids = [aws_security_group.allow_ssh.id]
connection {
type = "ssh"
host = self.public_ip
user = "ec2-user"
private_key = file(var.private_key_path)
}
provisioner "remote-exec" {
inline = [
"sudo yum install nginx -y",
"sudo service nginx start"
]
}
}
Security group rules are set up to allow SSH from anywhere, and I'm able to SSH into the box from my local machine.
Not sure if I'm missing something really obvious here. I've tried a newer version of Terraform, but it's the same issue.
If your EC2 instance is using an AMI for an operating system that uses cloud-init (the default images for most Linux distributions do) then you can avoid the need for Terraform to log in over SSH at all by using the user_data argument to pass a script to cloud-init:
resource "aws_instance" "nginx" {
ami = data.aws_ami.aws-linux.id
instance_type = "t2.micro"
key_name = var.key_name
vpc_security_group_ids = [aws_security_group.allow_ssh.id]
user_data = <<-EOT
yum install nginx -y
service nginx start
EOT
}
For an operating system that includes cloud-init, the system will run cloud-init as part of the system startup and it will access the metadata and user data API to retrieve the value of user_data. It will then execute the contents of the script, writing any messages from that operation into the cloud-init logs.
What I've described above is the official recommendation for how to run commands to set up your compute instance. The documentation says that provisioners are a last resort, and one of the reasons given is to avoid the extra complexity of correctly configuring SSH connectivity and authentication, which is the very complexity that caused you to ask this question. So I think following the advice in the documentation is the best way to address it.
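If you prefer to keep the commands in a separate, version-controlled script rather than an inline heredoc, a sketch along these lines should also work (install_nginx.sh is just an assumed filename; the script itself would start with #!/bin/bash):
resource "aws_instance" "nginx" {
  ami                    = data.aws_ami.aws-linux.id
  instance_type          = "t2.micro"
  key_name               = var.key_name
  vpc_security_group_ids = [aws_security_group.allow_ssh.id]

  # Reads the script from the module directory and passes it to cloud-init
  user_data = file("${path.module}/install_nginx.sh")
}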
I am starting with Terraform. I am trying to make it set a friendly hostname, instead of the usual ip-10.10.10.10 that AWS uses. However, I haven't found how to do it.
I tried using provisioners, like this:
provisioner "local-exec" {
command = "sudo hostnamectl set-hostname friendly.example.com"
}
But that doesn't work; the hostname is not changed.
So now, I'm trying this:
resource "aws_instance" "example" {
ami = "ami-XXXXXXXX"
instance_type = "t2.micro"
tags = {
Name = "friendly.example.com"
}
user_data = "${data.template_file.user_data.rendered}"
}
data "template_file" "user_data" {
template = "${file("user-data.conf")}"
vars {
hostname = "${aws_instance.example.tags.Name}"
}
}
And in user-data.conf I have a line to use the variable, like so:
hostname = ${hostname}
But this gives me a dependency cycle:
$ terraform apply
Error: Error asking for user input: 1 error(s) occurred:
* Cycle: aws_instance.example, data.template_file.user_data
Plus, that would mean I have to create a different user_data resource for each instance, which seems a bit like a pain. Can you not reuse them? That should be the purpose of templates, right?
I must be missing something, but I can't find the answer.
Thanks.
Using a Terraform provisioner with the local-exec block executes the command on the machine from which Terraform is being run (see the local-exec documentation). Note specifically the line:
This invokes a process on the machine running Terraform, not on the resource. See the remote-exec provisioner to run commands on the resource.
Therefore, switching the provisioner from a local-exec to a remote-exec:
provisioner "remote-exec" {
inline = ["sudo hostnamectl set-hostname friendly.example.com"]
}
should fix your issue with setting the hostname.
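Note that remote-exec also needs connection details so Terraform can reach the instance over SSH; a minimal sketch inside the resource (the private_key_path variable and the ec2-user login are assumptions that depend on your setup and AMI) would be:
resource "aws_instance" "example" {
  ami           = "ami-XXXXXXXX"
  instance_type = "t2.micro"

  connection {
    type        = "ssh"
    host        = "${self.public_ip}"
    user        = "ec2-user"
    private_key = "${file(var.private_key_path)}"
  }

  provisioner "remote-exec" {
    inline = ["sudo hostnamectl set-hostname friendly.example.com"]
  }
}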
Since you are supplying the tag to the instance as a string, why not just make that a var?
Replace the string friendly.example.com with ${var.instance-name} in your instance resource and in your data template. Then set the var:
variable "instance-name" {
default="friendly.example.com"
}
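That breaks the cycle, because both the instance and the template now read the variable instead of referring to each other. Roughly:
resource "aws_instance" "example" {
  ami           = "ami-XXXXXXXX"
  instance_type = "t2.micro"

  tags = {
    Name = "${var.instance-name}"
  }

  user_data = "${data.template_file.user_data.rendered}"
}

data "template_file" "user_data" {
  template = "${file("user-data.conf")}"

  vars {
    hostname = "${var.instance-name}"
  }
}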
I believe your user-data.conf should be a bash script, starting with #!/usr/bin/env bash.
It should look like:
#!/usr/bin/env bash
hostname ${hostname}
I am trying to get a basic Terraform example up and running and then push a very simple Flask application in a Docker container onto it. The script all works if I remove the file provisioner and user_data sections. The .pem file is in the same location on my disk as the main.tf script and the terraform.exe file.
If I leave the file provisioner in, the script fails with the following error:
Error: Error applying plan:
1 error(s) occurred:
* aws_launch_configuration.example: 1 error(s) occurred:
* dial tcp :22: connectex: No connection could be made because the target machine actively refused it.
If I remove the file provisioning section, the script runs fine and I can SSH into the created instance using my private key, so the key_name part seems to be working OK. I think it's to do with the file provisioner trying to connect to copy my files across.
Here is the launch configuration from my script. I have tried using the connection block, which I got from another post online, but I can't see what I am doing wrong.
resource "aws_launch_configuration" "example" {
image_id = "${lookup(var.eu_west_ami, var.region)}"
instance_type = "t2.micro"
key_name = "Terraform-python"
security_groups = ["${aws_security_group.instance.id}"]
provisioner "file" {
source = "python/hello_flask.py"
destination = "/home/ec2-user/hello_flask.py"
connection {
type = "ssh"
user = "ec2-user"
private_key = "${file("Terraform-python.pem")}"
timeout = "2m"
agent = false
}
}
provisioner "file" {
source = "python/flask_dockerfile"
destination = "/home/ec2-user/flask_dockerfile"
connection {
type = "ssh"
user = "ec2-user"
private_key = "${file("Terraform-python.pem")}"
timeout = "2m"
agent = false
}
}
user_data = <<-EOF
#!/bin/bash
sudo yum update -y
sudo yum install -y docker
sudo service docker start
sudo usermod -a -G docker ec2-user
sudo docker build -t flask_dockerfile:latest /home/ec2-user/flask_dockerfile
sudo docker run -d -p 5000:5000 flask_dockerfile
EOF
lifecycle {
create_before_destroy = true
}
}
It is probably something very simple and stupid that I am doing; thanks in advance to anyone who takes a look.
aws_launch_configuration is not an actual EC2 instance but just a 'template' to launch instances. Thus, it is not possible to connect to it via SSH.
To copy those files you have two options:
1. Create a custom AMI. For that you can use Packer, or Terraform itself: launch an EC2 instance with aws_instance and these file provisioners, then create an AMI from it with aws_ami.
2. Include the files in the user_data. This is not a best practice, but if the files are short you can have the user_data script write them out itself, as sketched below.
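A rough sketch of the second option (the inline content is only a placeholder for your real python/hello_flask.py; the remaining commands are the ones from your original user_data):
resource "aws_launch_configuration" "example" {
  image_id        = "${lookup(var.eu_west_ami, var.region)}"
  instance_type   = "t2.micro"
  key_name        = "Terraform-python"
  security_groups = ["${aws_security_group.instance.id}"]

  user_data = <<-EOF
#!/bin/bash
# Write the application file that the file provisioner used to copy
cat > /home/ec2-user/hello_flask.py <<'PY'
# placeholder: paste the contents of python/hello_flask.py here
PY
chown ec2-user:ec2-user /home/ec2-user/hello_flask.py
# followed by the docker install/build/run commands from the original user_data
EOF

  lifecycle {
    create_before_destroy = true
  }
}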
I'm using Terraform v0.11.7 and AWS provider 1.30 to build an environment for running load tests with Locust, on a Debian 9.5 AMI.
My module exposes a num_instances variable that determines the locust command line to use. Below is my configuration.
resource "aws_instance" "locust_master" {
count = 1
ami = "${var.instance_ami}"
instance_type = "${var.instance_type}"
key_name = "${var.instance_ssh_key}"
subnet_id = "${var.subnet}"
tags = "${local.tags}"
vpc_security_group_ids = ["${local.vpc_security_group_ids}"]
user_data = <<-EOF
#!/bin/bash
# Install pip on instance.
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
sudo python3 get-pip.py
rm get-pip.py
# Install locust and pyzmq on instance.
sudo pip3 install locustio pyzmq
# Write locustfile to instance.
echo "${data.local_file.locustfile.content}" > ${local.locustfile_py}
# Write locust start script to instance.
echo "nohup ${var.num_instances > 1 ? local.locust_master_cmd : local.locust_base_cmd} &" > ${local.start_sh}
# Start locust.
sh ${local.start_sh}
EOF
}
resource "aws_instance" "locust_slave" {
count = "${var.num_instances - 1}"
ami = "${var.instance_ami}"
instance_type = "${var.instance_type}"
key_name = "${var.instance_ssh_key}"
subnet_id = "${var.subnet}"
tags = "${local.tags}"
vpc_security_group_ids = ["${local.vpc_security_group_ids}"]
user_data = <<-EOF
#!/bin/bash
set -x
# Install pip on instance.
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
sudo python3 get-pip.py
rm get-pip.py
# Install locust and pyzmq on instance.
sudo pip3 install locustio pyzmq
# Write locustfile to instance.
echo "${data.local_file.locustfile.content}" > ${local.locustfile_py}
# Write locust master dns name to instance.
echo ${aws_instance.locust_master.private_dns} > ${local.locust_master_host_file}
# Write locust start script to instance.
echo "nohup ${local.locust_slave_cmd} &" > ${local.start_sh}
# Start locust.
sh ${local.start_sh}
EOF
}
If I SSH into the locust_master instance after it has been launched, I see the /home/admin/start.sh script, but it does not appear to have been run, as I do not see the nohup.out file and locust is not in my running processes. If I manually run the same sh /home/admin/start.sh script on that host, the service starts, and I can disconnect from the host and still access it. The same problem is exhibited on the locust_slave host(s).
What might cause running the start.sh in aws_instance user_data to fail? Are there any gotchas I should be aware of when executing scripts in user_data?
Many thanks in advance!
Thanks for the tip! I was not aware of that log file, and it did point out the problem. It was a relative path issue: I assumed that user_data commands would be executed with /home/admin as the working directory, so locust couldn't find the locustfile.py file. Using an absolute path to locustfile.py solved the problem.
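For illustration, the fix boils down to using absolute paths in the locals that build the command lines, roughly like this (the names and values below are assumptions, since the question does not show the locals block):
locals {
  # Absolute paths, so nothing depends on cloud-init's working directory
  locustfile_py   = "/home/admin/locustfile.py"
  start_sh        = "/home/admin/start.sh"
  locust_base_cmd = "locust -f /home/admin/locustfile.py"
}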
I created an aws_db_instance to provision an RDS MySQL database using my Terraform configuration. I need to execute SQL scripts (CREATE TABLE and INSERT statements) on that RDS instance, and I'm stuck on what command to use here. Does anyone have sample code for this use case? Please advise. Thanks.
resource "aws_db_instance" "mydb" {
# ...
provisioner "local-exec" {
command = "command to execute script.sql"
}
}
This is possible using a null_resource that depends on aws_db_instance.my_db. This way the host is available when you run the command. It will only work if nothing prevents you from reaching the database, such as missing security group ingress rules or the instance not being publicly accessible.
Example:
resource "null_resource" "setup_db" {
depends_on = ["aws_db_instance.my_db"] #wait for the db to be ready
provisioner "local-exec" {
command = "mysql -u ${aws_db_instance.my_db.username} -p${var.my_db_password} -h ${aws_db_instance.my_db.address} < file.sql"
}
}
I don't believe you can use a provisioner with that type of resource. One option you could explore is having an additional step that takes the address of the RDS instance from a Terraform output and runs the SQL script.
So, for instance in a CI environment, you'd have Create Database -> Load Database -> Finished.
Below would be your Terraform to create the resource and output its address.
resource "aws_db_instance" "mydb" {
# ...
provisioner "local-exec" {
command = "command to execute script.sql"
}
}
output "username" {
value = "${aws_db_instance.mydb.username}"
}
output "address" {
value = "${aws_db_instance.mydb.address}"
}
The Load Database step would then run a shell script containing the SQL logic, using terraform output address to obtain the address of the instance.