How to restart EC2 instances using Terraform without destroying them?

I am wondering how we can stop and restart an AWS EC2 instance created using Terraform. Is there any way to do that?

Since there is a length limit on comments, I am posting this as an answer, using local-exec.
I assume that you have already set up the AWS CLI with aws configure (or aws configure --profile test).
Here is a complete example that reboots an instance; change the VPC security group ID, subnet, key name, etc. to match your environment.
provider "aws" {
region = "us-west-2"
profile = "test"
}
resource "aws_instance" "ec2" {
ami = "ami-0f2176987ee50226e"
instance_type = "t2.micro"
associate_public_ip_address = false
subnet_id = "subnet-45454566645"
vpc_security_group_ids = ["sg-45454545454"]
key_name = "mytest-ec2key"
tags = {
Name = "Test EC2 Instance"
}
}
resource "null_resource" "reboo_instance" {
provisioner "local-exec" {
on_failure = "fail"
interpreter = ["/bin/bash", "-c"]
command = <<EOT
echo -e "\x1B[31m Warning! Restarting instance having id ${aws_instance.ec2.id}.................. \x1B[0m"
# aws ec2 reboot-instances --instance-ids ${aws_instance.ec2.id} --profile test
# To stop instance
aws ec2 stop-instances --instance-ids ${aws_instance.ec2.id} --profile test
echo "***************************************Rebooted****************************************************"
EOT
}
# this setting will trigger script every time,change it something needed
triggers = {
always_run = "${timestamp()}"
}
}
Now run terraform apply.
Once the instance is created and you later want to reboot or stop it, just run:
terraform apply -target=null_resource.reboot_instance
and check the logs.
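Note that stop-instances only shuts the instance down; to bring it back up you can run the corresponding start command (from your shell or a similar local-exec), for example:
aws ec2 start-instances --instance-ids <instance-id> --profile test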

I have found a simpler way to do it.
provisioner "local-exec" {
  command = "ssh -tt -o StrictHostKeyChecking=no someuser@${aws_eip.ec2_public_ip.public_ip} 'sudo shutdown -r'"
}

Using remote-exec:
provisioner "remote-exec" {
inline = [
"sudo /usr/sbin/shutdown -r 1"
]
}
-r 1 delays the reboot (by one minute) and prevents the remote-exec command from exiting with a non-zero code.
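For this to actually connect, the resource also needs a connection block; a minimal sketch, assuming an Ubuntu AMI reachable over its public IP with key-based SSH (the user name and key path are assumptions, adjust to your setup):
resource "aws_instance" "ec2" {
  # ... instance arguments as above ...

  connection {
    type        = "ssh"
    user        = "ubuntu"                         # assumed login user for an Ubuntu AMI
    private_key = file("~/.ssh/mytest-ec2key.pem") # assumed path to the key pair's private key
    host        = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo /usr/sbin/shutdown -r 1"
    ]
  }
}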


Execute bash script on Ubuntu using Terraform

Is it possible to execute shell commands on an Ubuntu OS using a Terraform script?
I have to do some initial configuration before the execution of the Terraform scripts.
You could define a local-exec provisioner in your resource:
provisioner "local-exec" {
command = "echo The server's IP address is ${self.private_ip}"
}
That will execute right after the resource is created. There are other types of provisioners; see https://www.terraform.io/language/resources/provisioners/syntax
It depends on where your Ubuntu OS is. If it is local, you can do something like this:
resource "aws_instance" "web" {
# ...
provisioner "local-exec" {
command = "echo ${self.private_ip} >> private_ips.txt"
}
}
If it is a remote resource, for example an AWS EC2 instance:
resource "aws_instance" "web" {
# ...
# Establishes connection to be used by all
# generic remote provisioners (i.e. file/remote-exec)
connection {
type = "ssh"
user = "root"
password = var.root_password
host = self.public_ip
}
provisioner "remote-exec" {
inline = [
"puppet apply",
"consul join ${aws_instance.web.private_ip}",
]
}
}
Also, if it is an EC2 instance, a commonly used option is to define a script via user_data. It runs with root privileges right after the instance is created, but only once; it will not run again even if you reboot the instance. In Terraform you can do something like this:
resource "aws_instance" "server" {
ami = "ami-123456"
instance_type = "t3.medium"
availability_zone = "eu-central-1b"
vpc_security_group_ids = [aws_security_group.server.id]
subnet_id = var.subnet1
private_ip = var.private-ip
key_name = var.key_name
associate_public_ip_address = true
tags = {
Name = "db-server"
}
user_data = <<EOF
mkdir abc
apt update && apt install nano
EOF
}
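If the script does not appear to have run, the standard cloud-init locations on Ubuntu are the first place to look on the instance (these paths are general cloud-init behaviour, not specific to this example):
# On the instance, after first boot:
sudo cat /var/log/cloud-init-output.log   # stdout/stderr of the user_data script
cloud-init status                         # overall cloud-init result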

How to destroy a particular resource before deleting other resources in Terraform

I am creating a VPN using a script in Terraform, as no provider function is available. This VPN also has some other attached resources, like security groups.
So when I run terraform destroy, it starts deleting the VPN, but in parallel it also starts deleting the security group. The security group deletion fails because the group is still associated with the VPN, which is in the process of being deleted.
When I run terraform destroy -parallelism=1 it works fine, but due to some limitations I cannot use this in prod.
Is there a way I can enforce that the VPN is deleted before any other resource deletion starts?
EDIT:
Here is the security group and VPN code:
resource "<cloud_provider>_security_group" "sg" {
name = format("%s-%s", local.name, "sg")
vpc = var.vpc_id
resource_group = var.resource_group_id
}
resource "null_resource" "make_vpn" {
triggers = {
vpn_name = var.vpn_name
local_script = local.scripts_location
}
provisioner "local-exec" {
command = "${local.scripts_location}/login.sh"
interpreter = ["/bin/bash", "-c"]
environment = {
API_KEY = var.api_key
}
}
provisioner "local-exec" {
command = local_file.make_vpn.filename
}
provisioner "local-exec" {
when = "destroy"
command = <<EOT
${self.triggers.local_script}/delete_vpn_server.sh ${self.triggers.vpn_name}
EOT
on_failure = continue
}
}
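One way to get this ordering, sketched here from Terraform's general destroy behaviour rather than from a specific answer: Terraform destroys resources in reverse dependency order, so if the null_resource representing the VPN explicitly depends on the security group, it will be destroyed (and its destroy-time provisioner run) before the group is deleted:
resource "null_resource" "make_vpn" {
  # Created after the security group, therefore destroyed before it
  depends_on = [<cloud_provider>_security_group.sg]

  # ... triggers and provisioners as above ...
}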

Custom Packer AMI does not execute user_data specified by Terraform

I am currently struggling to get the user_data script to run when starting the EC2 instance using Terraform. I pre-configured my AMI using Packer and referenced the custom AMI in my Terraform file. Since I need to know the RDS instance URL when starting the EC2 instance, I tried to read it inside the user_data script and set it as environment variables. My app reads these environment variables and can connect to the DB. Everything works as expected locally, and on CI when running tests. Manually setting the variables and starting the app also works as expected. The only problem is the execution of the user_data script, because it already ran when creating the AMI using Packer.
Notice how I read the current DB state inside Terraform, which is why I cannot use traditional approaches that would result in the user_data script getting executed again. I also tried deleting the cloud data as described in this question and this one without success.
This is my current Packer configuration:
build {
  name = "spring-ubuntu"

  sources = [
    "source.amazon-ebs.ubuntu"
  ]

  provisioner "file" {
    source      = "build/libs/App.jar"
    destination = "~/App.jar"
  }

  provisioner "shell" {
    inline = [
      "sleep 30",
      "sudo apt update",
      "sudo apt -y install openjdk-17-jdk",
      "sudo rm -Rf /var/lib/cloud/data/scripts",
      "sudo rm -Rf /var/lib/cloud/scripts/per-instance",
      "sudo rm -Rf /var/lib/cloud/data/user-data*",
      "sudo rm -Rf /var/lib/cloud/instances/*",
      "sudo rm -Rf /var/lib/cloud/instance",
    ]
  }
}
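As a side note, newer cloud-init releases ship a command intended for exactly this kind of reset, which may be less fragile than removing the directories by hand; whether it is available depends on the cloud-init version in the base AMI, so treat this as an alternative sketch rather than a guaranteed fix:
provisioner "shell" {
  inline = [
    # Reset cloud-init state (and logs) so user_data is treated as a fresh first boot
    "sudo cloud-init clean --logs"
  ]
}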
This is my current Terraform configuration:
resource "aws_instance" "instance" {
ami = "ami-123abc"
instance_type = "t2.micro"
subnet_id = tolist(data.aws_subnet_ids.all.ids)[0]
vpc_security_group_ids = [aws_security_group.ec2.id]
user_data = <<EOF
#!/bin/bash
export DB_HOST=${data.terraform_remote_state.state.outputs.db_address}
export DB_PORT=${data.terraform_remote_state.state.outputs.db_port}
java -jar ~/App.jar
EOF
lifecycle {
create_before_destroy = true
}
}
I tried to run the commands using the remote-exec provisioner, as proposed by Marko E. At first it failed with the following errors, but on a second try it worked.
This object does not have an attribute named "db_address".
This object does not have an attribute named "db_port".
This is the working configuration:
resource "aws_instance" "instance" {
ami = "ami-0ed33809ce5c950b9"
instance_type = "t2.micro"
key_name = "myKeys"
subnet_id = tolist(data.aws_subnet_ids.all.ids)[0]
vpc_security_group_ids = [aws_security_group.ec2.id]
lifecycle {
create_before_destroy = true
}
connection {
type = "ssh"
user = "ubuntu"
private_key = file("~/Downloads/myKeys.pem")
host = self.public_ip
}
provisioner "remote-exec" {
inline = [
"export DB_HOST=${data.terraform_remote_state.state.outputs.db_address}",
"export DB_PORT=${data.terraform_remote_state.state.outputs.db_port}",
"java -jar ~/App.jar &",
]
}
}
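One caveat with this approach: a process started in the background with & from a remote-exec script can be killed when the provisioner's SSH session closes. If the app is not running after the apply finishes, detaching it from the session, for example with nohup (the log path here is just an assumption), usually helps:
provisioner "remote-exec" {
  inline = [
    "export DB_HOST=${data.terraform_remote_state.state.outputs.db_address}",
    "export DB_PORT=${data.terraform_remote_state.state.outputs.db_port}",
    # Detach the app from the provisioner's SSH session so it keeps running after apply
    "nohup java -jar ~/App.jar > /home/ubuntu/app.log 2>&1 &",
  ]
}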

How to get remote-exec provisioner to apply after disk attachments?

I have a script that I need to run after my instance has been provisioned and the volumes have been attached:
resource "aws_instance" "controller" {
...
provisioner "remote-exec" {
connection {
type = "ssh"
user = "centos"
}
inline = [
"download and run script to verify environment"
]
}
}
resource "aws_ebs_volume" "controller-ebs-sdb" {
...
}
resource "aws_volume_attachment" "controller-volume-attachment-sdb" {
device_name = "/dev/sdb"
volume_id = "${aws_ebs_volume.controller-ebs-sdb.id}"
instance_id = "${aws_instance.controller.id}"
}
Currently the script fails because, when it runs, the volume has not yet been attached.
Is it possible to only run the remote-exec script after the volumes have been attached?
You can run a provisioner on any resource (consider the null_resource pattern for an extreme version of this), so the best thing here is to run it on the aws_volume_attachment resource:
# ...

resource "aws_volume_attachment" "controller-volume-attachment-sdb" {
  device_name = "/dev/sdb"
  volume_id   = "${aws_ebs_volume.controller-ebs-sdb.id}"
  instance_id = "${aws_instance.controller.id}"

  provisioner "remote-exec" {
    connection {
      host = "${aws_instance.controller.public_ip}"
      type = "ssh"
      user = "centos"
    }

    inline = [
      "download and run script to verify environment"
    ]
  }
}
You can also consider adding triggers (for example via a null_resource). Other crude options are to add a sleep for a few seconds, have the script retry itself, or check for the existence of the disk before attempting to use it.
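For the last option, a minimal sketch of what such a wait could look like inside the remote-exec script (the device names are assumptions; on Nitro-based instance types the attached volume shows up as an NVMe device rather than /dev/sdb):
inline = [
  # Wait for the attached block device to appear before using it
  "until [ -b /dev/sdb ] || [ -b /dev/xvdb ]; do echo 'waiting for volume...'; sleep 2; done",
  "download and run script to verify environment"
]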

Running local-exec provisioner on all EC2 instances after creation

I currently have a Terraform file to create EC2 instances on AWS that looks like this:
resource "aws_instance" "influxdata" {
count = "${var.ec2-count-influx-data}"
ami = "${module.amis.rhel73_id}"
instance_type = "${var.ec2-type-influx-data}"
vpc_security_group_ids = ["${var.sg-ids}"]
subnet_id = "${element(module.infra.subnet,count.index)}"
key_name = "${var.KeyName}"
iam_instance_profile = "Custom-role"
tags {
Name = "influx-data-node"
ASV = "${module.infra.ASV}"
CMDBEnvironment = "${module.infra.CMDBEnvironment}"
OwnerContact = "${module.infra.OwnerContact}"
custodian_downtime = "off"
OwnerEid = "${var.OwnerEid}"
}
ebs_block_device {
device_name = "/dev/sdg"
volume_size = 500
volume_type = "io1"
iops = 2000
encrypted = true
delete_on_termination = true
}
user_data = "${file("terraform/attach_ebs.sh")}"
connection {
private_key = "${file("/Users/usr111/Downloads/usr111_CD.pem")}"
user = "ec2-user"
}
provisioner "remote-exec" {
inline = ["echo just checking for ssh. ttyl. bye."]
}
provisioner "local-exec" {
command = <<EOF
ansible-playbook base-data.yml --key-file=/Users/usr111/Downloads/usr111_CD.pem --user=ec2-user -b -i "${self.private_ip},"
EOF
}
}
resource "aws_route53_record" "influx-data-route" {
count = "${var.ec2-count-influx-data}"
zone_id = "${var.r53-zone}"
name = "influx-data-0${count.index}"
type = "A"
ttl = "300"
// matches up record N to instance N
records = ["${element(aws_instance.influxdata.*.private_ip, count.index)}"]
}
resource "local_file" "inventory-meta" {
filename = "inventory"
content = <<-EOF
[meta]
${join("\n",aws_instance.influxmeta.*.private_ip)}
[data]
${join("\n",aws_instance.influxdata.*.private_ip)}
EOF
}
What I'm struggling to figure out is how to get this part to run after the inventory file is created:
provisioner "local-exec" {
command = <<EOF
ansible-playbook base-data.yml --key-file=/Users/usr111/Downloads/usr111_CD.pem --user=ec2-user -b -i "${self.private_ip},"
EOF
}
Right now I'm passing an IP into Ansible, but I want to pass in the inventory file, which is only created after Terraform provisions all of the instances.
Since you are using AWS, you could try the EC2 dynamic inventory script (ec2.py), and your provisioner could look like this:
provisioner "local-exec" {
command = "ansible-playbook -i ec2.py playbook.yml --limit ${self.public_ip}" }
In your playbook you will need to wait for SSH to become available, since Ansible is making the connection and not Terraform.
- name: wait for ssh
  hosts: localhost
  gather_facts: no
  tasks:
    - local_action: wait_for port=22 host="{{ ip }}" search_regex=OpenSSH delay=10
So the command should look like this:
provisioner "local-exec" {
  command = "ansible-playbook -i ec2.py playbook.yml --limit ${self.public_ip} --extra-vars 'ip=${self.public_ip}'"
}
You can also copy your playbooks to the host with the file provisioner, install Ansible there, and run the playbook locally with remote-exec, but that's up to you.
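If you do want to run the playbook against the generated inventory file (which is what the question asks for), another option, sketched here rather than taken from the answers above, is to move the local-exec into a separate null_resource that depends on the local_file resource, so it only runs once the inventory exists:
resource "null_resource" "run_ansible" {
  # Runs only after the inventory file has been written
  depends_on = ["local_file.inventory-meta"]

  provisioner "local-exec" {
    command = "ansible-playbook base-data.yml --key-file=/Users/usr111/Downloads/usr111_CD.pem --user=ec2-user -b -i inventory"
  }
}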
