I am currently struggling to get the user_data script to run when starting the EC2 instance using Terraform. I pre-configured my AMI using Packer and referenced the custom AMI in my Terraform file. Since I need to know the RDS instance URL when starting the EC2 instance, I read it from the remote state inside Terraform and set it as environment variables in the user_data script. My app reads these environment variables and connects to the DB. Everything works as expected locally and on CI when running tests, and manually setting the variables and starting the app also works. The only problem is the execution of the user_data script, because cloud-init already ran when the AMI was created with Packer.
Notice that I read the current DB state inside Terraform, which is why I cannot use the traditional approaches that would make the user_data script execute again. I also tried deleting the cloud-init data as described in this question and this one, without success.
This is my current Packer configuration:
build {
  name = "spring-ubuntu"
  sources = [
    "source.amazon-ebs.ubuntu"
  ]

  provisioner "file" {
    source      = "build/libs/App.jar"
    destination = "~/App.jar"
  }

  provisioner "shell" {
    inline = [
      "sleep 30",
      "sudo apt update",
      "sudo apt -y install openjdk-17-jdk",
      "sudo rm -Rf /var/lib/cloud/data/scripts",
      "sudo rm -Rf /var/lib/cloud/scripts/per-instance",
      "sudo rm -Rf /var/lib/cloud/data/user-data*",
      "sudo rm -Rf /var/lib/cloud/instances/*",
      "sudo rm -Rf /var/lib/cloud/instance",
    ]
  }
}
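As an aside, newer cloud-init releases bundle this cleanup into a single command, which could replace the manual rm calls in the shell provisioner above (a sketch, assuming the AMI's cloud-init version ships the clean subcommand):
provisioner "shell" {
  inline = [
    # Reset cloud-init state so user_data runs again on the first boot of the AMI
    "sudo cloud-init clean --logs",
  ]
}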
This is my current Terraform configuration:
resource "aws_instance" "instance" {
  ami                    = "ami-123abc"
  instance_type          = "t2.micro"
  subnet_id              = tolist(data.aws_subnet_ids.all.ids)[0]
  vpc_security_group_ids = [aws_security_group.ec2.id]

  user_data = <<EOF
#!/bin/bash
export DB_HOST=${data.terraform_remote_state.state.outputs.db_address}
export DB_PORT=${data.terraform_remote_state.state.outputs.db_port}
java -jar ~/App.jar
EOF

  lifecycle {
    create_before_destroy = true
  }
}
I tried to run the commands using the remote-exec provisioner, as proposed by Marko E. At first it failed with the following errors, but on a second try it worked.
This object does not have an attribute named "db_address".
This object does not have an attribute named "db_port".
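In case these errors come back, one quick sanity check (assuming you can run Terraform in the configuration that owns that remote state) is to confirm the outputs actually exist there:
# Run in the workspace whose state is read via terraform_remote_state
terraform output db_address
terraform output db_port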
This is the working configuration:
resource "aws_instance" "instance" {
  ami                    = "ami-0ed33809ce5c950b9"
  instance_type          = "t2.micro"
  key_name               = "myKeys"
  subnet_id              = tolist(data.aws_subnet_ids.all.ids)[0]
  vpc_security_group_ids = [aws_security_group.ec2.id]

  lifecycle {
    create_before_destroy = true
  }

  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file("~/Downloads/myKeys.pem")
    host        = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "export DB_HOST=${data.terraform_remote_state.state.outputs.db_address}",
      "export DB_PORT=${data.terraform_remote_state.state.outputs.db_port}",
      "java -jar ~/App.jar &",
    ]
  }
}
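One caveat worth noting: the exported variables only exist for that SSH session, and a process backgrounded with & may be terminated once Terraform closes the connection. A possible variation (a sketch, not part of the original setup) persists the values in /etc/environment and keeps the JVM alive with nohup:
provisioner "remote-exec" {
  inline = [
    # Persist the DB settings for future logins and reboots ...
    "echo DB_HOST=${data.terraform_remote_state.state.outputs.db_address} | sudo tee -a /etc/environment",
    "echo DB_PORT=${data.terraform_remote_state.state.outputs.db_port} | sudo tee -a /etc/environment",
    # ... and export them for the process started below
    "export DB_HOST=${data.terraform_remote_state.state.outputs.db_address}",
    "export DB_PORT=${data.terraform_remote_state.state.outputs.db_port}",
    # nohup detaches the JVM from the provisioner's SSH session
    "nohup java -jar ~/App.jar > ~/app.log 2>&1 &",
    "sleep 1",
  ]
}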
Related
Is it possible to execute shell commands on Ubuntu OS using Terraform script?
I have to do some initial configuration before execution of Terraform scripts.
You could define a local-exec provisioner in your resource:
provisioner "local-exec" {
  command = "echo The server's IP address is ${self.private_ip}"
}
That will execute right after the resource is created. There are other types of provisioners; see https://www.terraform.io/language/resources/provisioners/syntax
It depends on where your Ubuntu OS is. If it's local, you can do something like this:
resource "aws_instance" "web" {
  # ...

  provisioner "local-exec" {
    command = "echo ${self.private_ip} >> private_ips.txt"
  }
}
If it's a remote resource, for example an AWS EC2 instance:
resource "aws_instance" "web" {
  # ...

  # Establishes connection to be used by all
  # generic remote provisioners (i.e. file/remote-exec)
  connection {
    type     = "ssh"
    user     = "root"
    password = var.root_password
    host     = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "puppet apply",
      "consul join ${aws_instance.web.private_ip}",
    ]
  }
}
Also, if it's an EC2 instance, a commonly used option is to define a script via user_data, which runs with root privileges right after the resource is created, but only once; it will never run again, even if you reboot the instance. In Terraform you can do something like this:
resource "aws_instance" "server" {
  ami                         = "ami-123456"
  instance_type               = "t3.medium"
  availability_zone           = "eu-central-1b"
  vpc_security_group_ids      = [aws_security_group.server.id]
  subnet_id                   = var.subnet1
  private_ip                  = var.private-ip
  key_name                    = var.key_name
  associate_public_ip_address = true

  tags = {
    Name = "db-server"
  }

  user_data = <<EOF
#!/bin/bash
mkdir abc
apt update && apt install -y nano
EOF
}
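As a debugging aside, if a user_data script does not appear to have run, cloud-init keeps both its output and the received script on the instance (typical Ubuntu paths):
# Output of the user_data run
sudo cat /var/log/cloud-init-output.log
# The script cloud-init actually received
sudo cat /var/lib/cloud/instance/user-data.txt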
I tried creating an instance in AWS using Terraform and copying a set of files into the newly created instance. I used a provisioner for this, but the connection always times out.
In the example below I use an AWS .pem file, but I have tried with both .ppk and .pem files; nothing works.
provider "aws" {
  region     = "ap-southeast-1"
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
}

resource "aws_instance" "firsttest" {
  ami           = "ami-061eb2b23f9f8839c"
  instance_type = "t2.micro"
  key_name      = "deepak"

  provisioner "file" {
    source      = "index.html"
    destination = "/home/ubuntu/index.html"

    connection {
      type        = "ssh"
      user        = "ubuntu"
      private_key = file("D:/awskeyterraform/deepak.pem")
      host        = "${aws_instance.firsttest.public_ip}"
    }
  }

  user_data = <<-EOF
              #!/bin/bash
              apt-get update -y
              apt-get install -y nginx
              systemctl enable nginx
              service nginx restart
              touch index.html
              EOF

  tags = {
    name = "terraform-firsttest"
  }
}
Expected: the index.html should be copied. Actual: the connection to the newly created instance timed out.
On Windows, the SSH connection doesn't accept the *.pem file directly. Instead, it accepts the PEM file after renaming it to id_rsa.
provider "aws" {
  region     = "ap-southeast-1"
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
}

resource "aws_instance" "firsttest" {
  ami           = "ami-061eb2b23f9f8839c"
  instance_type = "t2.micro"
  key_name      = "deepak"

  provisioner "file" {
    source      = "index.html"
    destination = "/home/ubuntu/index.html"

    connection {
      type        = "ssh"
      user        = "ubuntu"
      private_key = "${file("D:/awskeyterraform/id_rsa")}"
      host        = "${aws_instance.firsttest.public_ip}"
    }
  }

  user_data = <<-EOF
              #!/bin/bash
              apt-get update -y
              apt-get install -y nginx
              systemctl enable nginx
              service nginx restart
              touch index.html
              EOF

  tags = {
    name = "terraform-firsttest"
  }
}
Hope this solves the issue.
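If renaming the key still leaves the connection timing out, the instance also has to be reachable over SSH in the first place. A sketch of the two arguments worth double-checking (the security group name here is illustrative):
resource "aws_instance" "firsttest" {
  # ...
  associate_public_ip_address = true                              # the provisioner dials the instance's public IP
  vpc_security_group_ids      = [aws_security_group.allow_ssh.id] # must allow inbound TCP 22 from your machine
}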
I tried to use remote-exec to execute several commands on the target VM, but it failed with 'bash: Permission denied'. Here is the code:
connection {
  host        = "${azurerm_network_interface.nic.private_ip_address}"
  type        = "ssh"
  user        = "${var.mp_username}"
  private_key = "${file(var.mp_vm_private_key)}"
}

provisioner "remote-exec" {
  inline = [
    "sudo wget https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh",
    "sudo chown ${var.mp_username}: onboard_agent.sh",
    "sudo chmod +x onboard_agent.sh",
    "./onboard_agent.sh -w ${azurerm_log_analytics_workspace.workspace.workspace_id} -s ${azurerm_log_analytics_workspace.workspace.primary_shared_key} -d opinsights.azure.us"
  ]
}
After checking the issue here: https://github.com/hashicorp/terraform/issues/5397, I concluded I need to wrap all the commands into a file, so I used a template file to put all the commands in it:
OMSAgent.sh
#!/bin/bash
sudo wget https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh
sudo chown ${username}: onboard_agent.sh
sudo chmod +x onboard_agent.sh
./onboard_agent.sh -w ${workspaceId} -s ${workspaceKey} -d opinsights.azure.us
The code changes to:
data "template_file" "extension_data" {
  template = "${file("templates/OMSAgent.sh")}"

  vars = {
    workspaceId  = "${azurerm_log_analytics_workspace.workspace.workspace_id}"
    workspaceKey = "${azurerm_log_analytics_workspace.workspace.primary_shared_key}"
    username     = "${var.mp_username}"
  }
}

resource "null_resource" "remote-provisioner" {
  connection {
    host        = "${azurerm_network_interface.nic.private_ip_address}"
    type        = "ssh"
    user        = "${var.mp_username}"
    private_key = "${file(var.mp_vm_private_key)}"
    script_path = "/home/${var.mp_username}/OMSAgent.sh"
  }

  provisioner "file" {
    content     = "${data.template_file.extension_data.rendered}"
    destination = "/home/${var.mp_username}/OMSAgent.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /home/${var.mp_username}/OMSAgent.sh",
      "/home/${var.mp_username}/OMSAgent.sh"
    ]
  }
}
But something seems wrong in the null_resource; the installation stopped and threw this:
null_resource.remote-provisioner (remote-exec): /home/user/OMSAgent.sh: 2: /home/user/OMSAgent.sh: Cannot fork
And the content for the shell script is this:
cat OMSAgent.sh
#!/bin/sh
chmod +x /home/user/OMSAgent.sh
/home/user/OMSAgent.sh
It seems I did the script in the wrong way.
@joe huang Please make sure you use the username and password provided when you created the os_profile for your VM:
os_profile {
  computer_name  = "hostname"
  admin_username = "testadmin"
  admin_password = "Password1234!"
}
https://www.terraform.io/docs/providers/azurerm/r/virtual_machine.html#example-usage-from-an-azure-platform-image-
Here is a document for installing the OMS agent:
https://support.microsoft.com/en-in/help/4131455/how-to-reinstall-operations-management-suite-oms-agent-for-linux
Hope this helps!
If your /tmp is mounted with noexec, the default location that Terraform uses to push its temporary script needs to change, perhaps to your user's home dir. In the connection block add:
script_path = "~/terraform_provisioner_%RAND%.sh"
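Building on that script_path hint: a likely explanation (my reading of the output above, not confirmed) for the 'Cannot fork' error is that script_path in the connection block pointed at the same path as the file provisioner's destination, so remote-exec's temporary wrapper script overwrote OMSAgent.sh and then called itself recursively. A sketch of a connection block where the two paths no longer collide (the temporary path name is illustrative):
connection {
  host        = "${azurerm_network_interface.nic.private_ip_address}"
  type        = "ssh"
  user        = "${var.mp_username}"
  private_key = "${file(var.mp_vm_private_key)}"

  # Keep the provisioner's own wrapper script away from the uploaded OMSAgent.sh
  script_path = "/home/${var.mp_username}/terraform_provisioner_%RAND%.sh"
}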
With Terraform 0.12 there is a templatefile function, but I haven't figured out the syntax for passing it a non-trivial map as the second argument and having the result executed remotely as the newly created instance's provisioning step.
Here's the gist of what I'm trying to do, although it doesn't parse properly, because one can't just create a local variable named scriptstr within the resource block.
While I'm really trying to get the output of the templatefile call executed on the remote side once the provisioner can SSH to the machine, so far I've gone down the path of writing the templatefile output to a local file via the local-exec provisioner. It's probably easy; I just haven't found the documentation or examples to understand the necessary syntax. TIA
resource "aws_instance" "server" {
  count                  = "${var.servers}"
  ami                    = "${local.ami}"
  instance_type          = "${var.instance_type}"
  key_name               = "${local.key_name}"
  subnet_id              = "${element(aws_subnet.consul.*.id, count.index)}"
  iam_instance_profile   = "${aws_iam_instance_profile.consul-join.name}"
  vpc_security_group_ids = ["${aws_security_group.consul.id}"]

  ebs_block_device {
    device_name = "/dev/sda1"
    volume_size = 2
  }

  tags = "${map(
    "Name", "${var.namespace}-server-${count.index}",
    var.consul_join_tag_key, var.consul_join_tag_value
  )}"

  scriptstr = templatefile("${path.module}/templates/consul.sh.tpl",
    {
      consul_version = "${local.consul_version}"
      config         = <<EOF
"bootstrap_expect": ${var.servers},
"node_name": "${var.namespace}-server-${count.index}",
"retry_join": ["provider=aws tag_key=${var.consul_join_tag_key} tag_value=${var.consul_join_tag_value}"],
"server": true
EOF
  })

  provisioner "local-exec" {
    command = "echo ${scriptstr} > ${var.namespace}-server-${count.index}.init.sh"
  }

  provisioner "remote-exec" {
    script = "${var.namespace}-server-${count.index}.init.sh"

    connection {
      type        = "ssh"
      user        = "clear"
      private_key = file("${local.private_key_file}")
    }
  }
}
From your question I can see that the higher-level problem you seem to be trying to solve here is to create a pool of HashiCorp Consul servers and then, once they are all booted up, tell them about each other so that they can form a cluster.
Provisioners are essentially a "last resort" in Terraform, provided out of pragmatism because sometimes logging in to a host and running commands on it is the only way to get a job done. An alternative available in this case is to instead pass the information from Terraform to the server via the aws_instance user_data argument, which will then allow the servers to boot up and form a cluster immediately, rather than being delayed until Terraform is able to connect via SSH.
Either way, I'd generally prefer to have the main body of the script I intend to run already included in the AMI so that Terraform can just run it with some arguments, since that then reduces the problem to just templating the invocation of that script rather than the whole script:
provisioner "remote-exec" {
  inline = ["/usr/local/bin/init-consul --expect='${var.servers}' etc, etc"]

  connection {
    type        = "ssh"
    user        = "clear"
    private_key = file("${local.private_key_file}")
  }
}
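For the user_data route mentioned above, a rough sketch might look like the following (the template file name and its variables are illustrative, reusing the same hypothetical init script baked into the AMI):
resource "aws_instance" "server" {
  # ... existing arguments ...

  # Rendered per instance; the script baked into the AMI reads these values
  # at boot, so Terraform never needs an SSH connection to the instance.
  user_data = templatefile("${path.module}/templates/user_data.sh.tpl", {
    servers   = var.servers
    node_name = "${var.namespace}-server-${count.index}"
  })
}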
However, if templating an entire script is what you want or need to do, I'd upload it first using the file provisioner and then run it, like this:
provisioner "file" {
  destination = "/tmp/consul.sh"
  content = templatefile("${path.module}/templates/consul.sh.tpl", {
    consul_version = "${local.consul_version}"
    config         = <<EOF
"bootstrap_expect": ${var.servers},
"node_name": "${var.namespace}-server-${count.index}",
"retry_join": ["provider=aws tag_key=${var.consul_join_tag_key} tag_value=${var.consul_join_tag_value}"],
"server": true
EOF
  })
}

provisioner "remote-exec" {
  inline = ["sh /tmp/consul.sh"]
}
I am wondering how we can stop and restart an AWS EC2 instance created using Terraform. Is there any way to do that?
As you asked for an example, and there is a length limit on comments, I am posting this as an answer using local-exec.
I assume that you have already run aws configure (or aws configure --profile test) with the AWS CLI.
Here is a complete example to reboot an instance; change the VPC security group ID, subnet, key name, etc.:
provider "aws" {
  region  = "us-west-2"
  profile = "test"
}

resource "aws_instance" "ec2" {
  ami                         = "ami-0f2176987ee50226e"
  instance_type               = "t2.micro"
  associate_public_ip_address = false
  subnet_id                   = "subnet-45454566645"
  vpc_security_group_ids      = ["sg-45454545454"]
  key_name                    = "mytest-ec2key"

  tags = {
    Name = "Test EC2 Instance"
  }
}

resource "null_resource" "reboot_instance" {
  provisioner "local-exec" {
    on_failure  = "fail"
    interpreter = ["/bin/bash", "-c"]
    command     = <<EOT
echo -e "\x1B[31m Warning! Restarting instance having id ${aws_instance.ec2.id}.................. \x1B[0m"
# aws ec2 reboot-instances --instance-ids ${aws_instance.ec2.id} --profile test
# To stop the instance
aws ec2 stop-instances --instance-ids ${aws_instance.ec2.id} --profile test
echo "***************************************Rebooted****************************************************"
EOT
  }

  # This trigger re-runs the script on every apply; change it if that is not what you need.
  triggers = {
    always_run = "${timestamp()}"
  }
}
Now run terraform apply.
Once the instance is created and you later want to reboot or stop it, just call:
terraform apply -target null_resource.reboot_instance
See the logs.
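If re-running on every apply is not what you want, one possible variation (the variable name is just illustrative, and it reuses the instance and profile from the example above) drives the trigger from a variable instead of timestamp():
variable "reboot_serial" {
  type    = string
  default = "0"
}

resource "null_resource" "reboot_instance" {
  # Bump the value (e.g. terraform apply -var reboot_serial=1) whenever
  # you want the stop/reboot command below to run again.
  triggers = {
    reboot_serial = var.reboot_serial
  }

  provisioner "local-exec" {
    command = "aws ec2 stop-instances --instance-ids ${aws_instance.ec2.id} --profile test"
  }
}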
I have found a simpler way to do it:
provisioner "local-exec" {
  command = "ssh -tt -o StrictHostKeyChecking=no someuser@${aws_eip.ec2_public_ip.public_ip} 'sudo shutdown -r'"
}
Using remote-exec:
provisioner "remote-exec" {
  inline = [
    "sudo /usr/sbin/shutdown -r 1"
  ]
}
The -r 1 delays the reboot and prevents the remote-exec command from exiting with a non-zero code.