resource "aws_instance" "jenkins_worker_inst" {
  for_each                    = toset(["peter", "nelson", "chris"])
  provider                    = aws.region_worker
  ami                         = data.aws_ssm_parameter.worker-linuxAmi.value
  instance_type               = var.instance-type
  key_name                    = aws_key_pair.worker_key_pair.key_name
  associate_public_ip_address = true
  vpc_security_group_ids      = [aws_security_group.jenkins_worker_sg.id]
  subnet_id                   = aws_subnet.worker_subnet_1.id

  tags = {
    Name = each.key
  }

  depends_on = [aws_main_route_table_association.set-worker-default-rt-assoc, aws_instance.jenkins_master_inst]

  provisioner "local-exec" {
    command = <<EOF
#aws --debug --profile ${var.profile} ec2 wait instance-status-ok --region ${var.region_master} --instance-ids "${self.id}"
#ansible-playbook --verbose --extra-vars 'passed_in_hosts=tag_Name_${self.tags.Name}' ansible_templates/jenkins-worker-sample.yml
EOF
  }
}
I am told that the for_each here should be able to create 3 instances, but it is only making 1.
I have tried it with Terraform 0.12 and 1.0.0. I really need a way to base the number of instances on a list of something.
There are essentially two ways to accomplish what you are trying to do. One way is with the for_each meta-argument, which is what you attempted. The other way is by using count, which would look something like this.
locals {
  instance_names = ["peter", "nelson", "chris"]
}

resource "aws_instance" "jenkins_worker_inst" {
  count                       = length(local.instance_names)
  provider                    = aws.region_worker
  ami                         = data.aws_ssm_parameter.worker-linuxAmi.value
  instance_type               = var.instance-type
  key_name                    = aws_key_pair.worker_key_pair.key_name
  associate_public_ip_address = true
  vpc_security_group_ids      = [aws_security_group.jenkins_worker_sg.id]
  subnet_id                   = aws_subnet.worker_subnet_1.id

  tags = {
    Name = element(local.instance_names, count.index)
  }

  depends_on = [aws_main_route_table_association.set-worker-default-rt-assoc, aws_instance.jenkins_master_inst]

  provisioner "local-exec" {
    command = <<EOF
#aws --debug --profile ${var.profile} ec2 wait instance-status-ok --region ${var.region_master} --instance-ids "${self.id}"
#ansible-playbook --verbose --extra-vars 'passed_in_hosts=tag_Name_${self.tags.Name}' ansible_templates/jenkins-worker-sample.yml
EOF
  }
}
The for_each meta-argument should work for Terraform v0.12.6+. If you tried using it in a version of Terraform prior to v0.12.6, then for_each would not yet be supported; you would only have count as an option.
You also stated that you tried it with Terraform v1.0+, which does support for_each. Without additional information (a plan, perhaps?), I can't tell you why that didn't work. However, I can say that Terraform sometimes does wonky things, and in the past there have been breaking changes introduced in minor and patch versions, sometimes without announcement.
Finally, this Medium post does a pretty decent job of explaining the differences between count and for_each, but for your example either should work just fine.
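As a sanity check, the for_each form really should plan three instances. A minimal, stripped-down sketch (the resource name and the output block are illustrative, not from the question) that you can run terraform plan against to confirm three planned instances:

```hcl
# for_each over a set of three names should plan exactly 3 instances.
locals {
  workers = toset(["peter", "nelson", "chris"])
}

resource "aws_instance" "worker" {
  for_each      = local.workers
  ami           = data.aws_ssm_parameter.worker-linuxAmi.value
  instance_type = var.instance-type

  tags = {
    Name = each.key # for a set, each.key == each.value
  }
}

# Collect all public IPs keyed by name, e.g. for an Ansible inventory.
output "worker_ips" {
  value = { for name, inst in aws_instance.worker : name => inst.public_ip }
}
```

If the plan for a config like this still shows only one instance, the for_each expression is probably not the set you think it is, which is why seeing the actual plan output would help.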
Related
Given the config below, what happens if I run the apply command against the infrastructure after Amazon has rolled out a new version of the AMI?
Is the test instance going to be destroyed and recreated?
So, the scenario:
terraform init
terraform apply
wait N months
terraform plan (or apply)
Am I going to see a "forced" recreation of the EC2 instance that was created N months ago using the older version of the AMI, which was "recent" back then?
data "aws_ami" "amazon-linux-2" {
  most_recent = true

  filter {
    name   = "owner-alias"
    values = ["amazon"]
  }

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm*"]
  }
}

resource "aws_instance" "test" {
  depends_on                  = ["aws_internet_gateway.test"]
  ami                         = "${data.aws_ami.amazon-linux-2.id}"
  associate_public_ip_address = true
  iam_instance_profile        = "${aws_iam_instance_profile.test.id}"
  instance_type               = "t2.micro"
  key_name                    = "bflad-20180605"
  vpc_security_group_ids      = ["${aws_security_group.test.id}"]
  subnet_id                   = "${aws_subnet.test.id}"
}
Will "aws_ami" with most_recent=true impact future updates?
@ydaetskcoR and @sogyals429 have the right answer. To be more concrete:
resource "aws_instance" "test" {
  # ... (all the stuff at the top)

  lifecycle {
    ignore_changes = [
      ami,
    ]
  }
}
Note: the docs have moved to https://www.terraform.io/docs/language/meta-arguments/lifecycle.html#ignore_changes
Yes, as per what @ydaetskcoR said, you can have a look at the ignore_changes lifecycle argument, and then it would not recreate the instances: https://www.terraform.io/docs/configuration/resources.html#ignore_changes
Terraform v0.12.6, provider.aws v2.23.0.
I am trying to create two AWS instances using the new for_each construct. I don't actually think this is an AWS provider issue; it's more a terraform/for_each/provisioner issue.
It worked as advertised until I tried to add a local-exec provisioning step.
I don't know how to modify the local-exec example to work with the for_each variable. I am getting a Terraform error about a cycle.
locals {
  instances = {
    s1 = {
      private_ip = "192.168.47.191"
    },
    s2 = {
      private_ip = "192.168.47.192"
    },
  }
}

provider "aws" {
  profile = "default"
  region  = "us-east-1"
}

resource "aws_instance" "example" {
  for_each          = local.instances
  ami               = "ami-032138b8a0ee244c9"
  instance_type     = "t2.micro"
  availability_zone = "us-east-1c"
  private_ip        = each.value["private_ip"]

  ebs_block_device {
    device_name = "/dev/sda1"
    volume_size = 2
  }

  provisioner "local-exec" {
    command = "echo ${aws_instance.example[each.key].public_ip} >> ip_address.txt"
  }
}
But I get this error:
./terraform apply
Error: Cycle: aws_instance.example["s2"], aws_instance.example["s1"]
Should the for_each each.key variable be expected to work in a provisioning step? There are other ways to get the public_ip later, either by reading the state file or by querying AWS given the instance IDs, but accessing the resource's attributes within the local-exec provisioner would seem to come in handy in many ways.
Try using the self variable:
provisioner "local-exec" {
  command = "echo ${self.public_ip} >> ip_address.txt"
}
Note to readers that resource-level for_each is a relatively new feature in Terraform and requires version >=0.12.6.
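Putting that together with the config from the question, a sketch of the corrected resource (same names and values as the question; only the provisioner line changes) would be:

```hcl
resource "aws_instance" "example" {
  for_each          = local.instances
  ami               = "ami-032138b8a0ee244c9"
  instance_type     = "t2.micro"
  availability_zone = "us-east-1c"
  private_ip        = each.value["private_ip"]

  ebs_block_device {
    device_name = "/dev/sda1"
    volume_size = 2
  }

  provisioner "local-exec" {
    # self refers to the instance being provisioned, which avoids the
    # self-referential cycle that aws_instance.example[each.key] creates.
    command = "echo ${self.public_ip} >> ip_address.txt"
  }
}
```

The cycle arises because referencing aws_instance.example inside its own provisioner makes the resource depend on itself; self breaks that loop by pointing at the instance currently being created.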
I want to understand how user data can be used to set hostnames for two or more EC2 instances that Terraform creates. Below is my instance.tf, which creates 2 instances.
resource "aws_instance" "example" {
  count         = 2
  ami           = "${lookup(var.AMIS, var.aws_region)}"
  instance_type = "t2.micro"
  tags          = { Name = "rb-${count.index}" }

  # the VPC subnet
  subnet_id = "${aws_subnet.dee-main-public-1.id}"

  # the security group
  vpc_security_group_ids = ["${aws_security_group.allow-ssh.id}"]

  # the public SSH key
  key_name = "${aws_key_pair.mykeypair.key_name}"
}

resource "aws_key_pair" "mykeypair" {
  key_name   = "mykeypair"
  public_key = "${file("${var.PATH_TO_PUBLIC_KEY}")}"
}
How do I set the hostnames for those 2 instances, i.e. web1.example.com and web2.example.com?
I understand cloud-init or remote-exec can be used for this, but I am struggling to come up with the code as I'm still a beginner. I would really appreciate some help to come up to speed. Many thanks in advance.
-B
You can do it with the template_file data source, which supports count. The syntax below lets you build a hostname variable from count.index and pass it into each instance's user data.
data "template_file" "init" {
  count    = 2
  template = file("$path")

  vars = {
    hostname = "web${count.index}.example.com"
  }
}

resource "aws_instance" "master" {
  ami           = "$ami"
  count         = 2
  instance_type = "t2.medium"
  user_data     = data.template_file.init[count.index].rendered
}
I am trying to add 2 instance IPs to a file called aws_worker_nodes_IP. The code is below. Again, I do not need just one worker IP listed; I need both, or all of them if my variables were to change.
I was told to use self.public_ip, but that only lists one. I need it for both.
#-----key pair for Workernodes-----
resource "aws_key_pair" "k8s-node_auth" {
  key_name   = "${var.key_name2}"
  public_key = "${file(var.public_key_path2)}"
}

#-----Workernodes-----
resource "aws_instance" "nodes-opt-us1-k8s" {
  instance_type = "${var.k8s-node_instance_type}"
  ami           = "${var.k8s-node_ami}"
  count         = "${var.NodeCount}"

  tags {
    Name = "nodes-opt-us1-k8s"
  }

  key_name               = "${aws_key_pair.k8s-node_auth.id}"
  vpc_security_group_ids = ["${aws_security_group.opt-us1-k8s_sg.id}"]
  subnet_id              = "${aws_subnet.opt-us1-k8s.id}"

  #-----Link Terraform worker nodes to Ansible playbooks-----
  provisioner "local-exec" {
    command = <<EOD
cat <<EOF > aws_worker_nodes_IP
[workers]
${self.public_ip} <------need both here not just one
EOF
EOD
  }
}
#----this has two nodes----- "count = "${var.NodeCount}"
Sorry if I did not explain it correctly in my first question, and I appreciate the help. I have only been working with Terraform for a few months. Also, I am a network engineer learning to write code.
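One common pattern for this (a sketch, not from the thread; the null_resource name and its triggers block are assumptions, while the instance resource and file names match the question) is to move the local-exec out of the instance resource into a separate null_resource that references the splat of all public IPs. Because the provisioner then lives outside the counted resource, it sees every instance at once:

```hcl
# Write every worker's public IP to the inventory file in a single pass.
resource "null_resource" "worker_inventory" {
  # Re-run the provisioner whenever the set of instance IDs changes.
  triggers = {
    node_ids = "${join(",", aws_instance.nodes-opt-us1-k8s.*.id)}"
  }

  provisioner "local-exec" {
    command = <<EOD
cat <<EOF > aws_worker_nodes_IP
[workers]
${join("\n", aws_instance.nodes-opt-us1-k8s.*.public_ip)}
EOF
EOD
  }
}
```

self.public_ip can only ever yield one address because self is scoped to the single instance being provisioned; the splat expression aggregates across all count instances instead.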
I'm trying to use a provisioner to write the public IP address of a newly created Azure instance into a file.
I was able to do it for a single instance.
resource "azurerm_public_ip" "helloterraformips" {
  name                         = "terraformtestip"
  location                     = "East US"
  resource_group_name          = "${azurerm_resource_group.test.name}"
  public_ip_address_allocation = "dynamic"

  tags {
    environment = "TerraformDemo"
  }
}

resource "null_resource" "ansible-provision" {
  depends_on = ["azurerm_virtual_machine.master-vm"]
  count      = "${var.node-count}"

  provisioner "local-exec" {
    command = "echo \"[masters]\n ansible_ssh_host=${azurerm_public_ip.helloterraformips.ip_address} \" >> /home/osboxes/ansible-kube/ansible/inventory/testinv"
  }
}
The trouble is, when I try to do the same for VMs created through Terraform looping, I run into issues when trying to access them.
resource "azurerm_public_ip" "mysvcs-k8sip" {
  count                        = "${var.node-count}"
  name                         = "mysvcs-k8s-ip-${count.index}"
  location                     = "East US"
  resource_group_name          = "${azurerm_resource_group.mysvcs-res.name}"
  public_ip_address_allocation = "dynamic"
}

resource "null_resource" "ansible-provision" {
  provisioner "local-exec" {
    command = "echo \"[masters]\n${element(azurerm_public_ip.mysvcs-k8sip.*.ip_address,count.index)} \" >> /home/osboxes/ansible-kube/ansible/inventory/inventory"
  }
}
I'm getting this error:
Resource 'azurerm_public_ip.mysvcs-k8sip' does not have attribute 'ip_address' for variable 'azurerm_public_ip.mysvcs-k8sip.*.ip_address'
I'm digging into the semantics of Terraform and trying various things, but so far it's not working, and each iteration to create all the resources also takes time. Any help or hint would be very useful.
Thanks,
One workaround I was able to get working was to run "terraform apply -target=azurerm_virtual_machine.master-vm", which creates the VM first. Then run terraform apply again, which runs a provisioner containing this:
resource "null_resource" "ansible-k8snodes" {
  count = "${var.node-count}"

  provisioner "local-exec" {
    command = "echo \"\n[nodes]\n ${element(azurerm_public_ip.mysvcs-k8sip.*.ip_address,count.index+1)} ansible_ssh_user=testadmin ansible_ssh_pass=Password1234! \" >> /home/osboxes/ansible-kube/ansible/inventory/inventory"
  }
}
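An alternative worth trying that avoids the two-phase -target apply (a sketch only; whether it resolves the missing-attribute error depends on the provider version) is to give the looped null_resource the same explicit depends_on that the working single-instance version had, so the VMs, and therefore their dynamically allocated IPs, exist before the provisioner reads them:

```hcl
resource "null_resource" "ansible-provision" {
  # Dynamic public IPs are only assigned once the VMs are up, so force
  # the VMs to be created before this provisioner runs.
  depends_on = ["azurerm_virtual_machine.master-vm"]
  count      = "${var.node-count}"

  provisioner "local-exec" {
    command = "echo \"[masters]\n${element(azurerm_public_ip.mysvcs-k8sip.*.ip_address,count.index)} \" >> /home/osboxes/ansible-kube/ansible/inventory/inventory"
  }
}
```

This mirrors what the -target workaround does manually: it sequences IP allocation before the inventory write, but within a single apply.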
@Martin - Count or no count, it doesn't matter, and it fails every time. In fact, it seems like it worked only once, for the single-instance code posted above in my question; when I tried it again, it didn't work. Thanks for your help.