Terraform v0.12.6, provider.aws v2.23.0
I am trying to create two AWS instances using the new for_each construct. I don't think this is actually an AWS provider issue; it's more of a Terraform/for_each/provisioner issue.
It worked as advertised until I tried to add a local-exec provisioning step.
I don't know how to modify the local-exec example to work with the for_each variables; I am getting a Terraform error about a cycle.
locals {
  instances = {
    s1 = {
      private_ip = "192.168.47.191"
    },
    s2 = {
      private_ip = "192.168.47.192"
    },
  }
}
provider "aws" {
  profile = "default"
  region  = "us-east-1"
}

resource "aws_instance" "example" {
  for_each          = local.instances
  ami               = "ami-032138b8a0ee244c9"
  instance_type     = "t2.micro"
  availability_zone = "us-east-1c"
  private_ip        = each.value["private_ip"]

  ebs_block_device {
    device_name = "/dev/sda1"
    volume_size = 2
  }

  provisioner "local-exec" {
    command = "echo ${aws_instance.example[each.key].public_ip} >> ip_address.txt"
  }
}
But I get this error.
./terraform apply
Error: Cycle: aws_instance.example["s2"], aws_instance.example["s1"]
Should the for_each each.key variable be expected to work in a provisioning step? There are other ways to get the public_ip later, either by reading the state file or by querying AWS given the instance IDs, but accessing the resource's attributes within the local-exec provisioner would seem to come in handy in many ways.
Try using the self variable:
provisioner "local-exec" {
  command = "echo ${self.public_ip} >> ip_address.txt"
}
Note for readers: resource-level for_each is a relatively new Terraform feature and requires version >= 0.12.6.
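For completeness, here is a sketch of the resource from the question with only the provisioner command changed to use self, which refers to the instance the provisioner is attached to and therefore avoids the self-referential cycle:
resource "aws_instance" "example" {
  for_each          = local.instances
  ami               = "ami-032138b8a0ee244c9"
  instance_type     = "t2.micro"
  availability_zone = "us-east-1c"
  private_ip        = each.value["private_ip"]

  ebs_block_device {
    device_name = "/dev/sda1"
    volume_size = 2
  }

  provisioner "local-exec" {
    # self.public_ip is this instance's own attribute, so no cross-instance
    # reference (and no cycle) is created.
    command = "echo ${self.public_ip} >> ip_address.txt"
  }
}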
Related
resource "aws_instance" "jenkins_worker_inst" {
  for_each                    = toset(["peter", "nelson", "chris"])
  provider                    = aws.region_worker
  ami                         = data.aws_ssm_parameter.worker-linuxAmi.value
  instance_type               = var.instance-type
  key_name                    = aws_key_pair.worker_key_pair.key_name
  associate_public_ip_address = true
  vpc_security_group_ids      = [aws_security_group.jenkins_worker_sg.id]
  subnet_id                   = aws_subnet.worker_subnet_1.id

  tags = {
    Name = each.key
  }

  depends_on = [aws_main_route_table_association.set-worker-default-rt-assoc, aws_instance.jenkins_master_inst]

  provisioner "local-exec" {
    command = <<EOF
#aws --debug --profile ${var.profile} ec2 wait instance-status-ok --region ${var.region_master} --instance-ids "${self.id}"
ansible-playbook --verbose --extra-vars 'passed_in_hosts=tag_Name_${self.tags.Name}' ansible_templates/jenkins-worker-sample.yml
EOF
  }
}
I am told that the for_each here should be able to make 3 instances, but it is making 1.
I have tried it with Terraform 0.12 and 1.0.0. I really need a way to base the number of instances on a list of some kind.
There are essentially two ways to accomplish what you are trying to do. One way is with the for_each meta-argument, which is what you attempted. The other way is by using count, which would look something like this.
locals {
  instance_names = ["peter", "nelson", "chris"]
}

resource "aws_instance" "jenkins_worker_inst" {
  count                       = length(local.instance_names)
  provider                    = aws.region_worker
  ami                         = data.aws_ssm_parameter.worker-linuxAmi.value
  instance_type               = var.instance-type
  key_name                    = aws_key_pair.worker_key_pair.key_name
  associate_public_ip_address = true
  vpc_security_group_ids      = [aws_security_group.jenkins_worker_sg.id]
  subnet_id                   = aws_subnet.worker_subnet_1.id

  tags = {
    Name = element(local.instance_names, count.index)
  }

  depends_on = [aws_main_route_table_association.set-worker-default-rt-assoc, aws_instance.jenkins_master_inst]

  provisioner "local-exec" {
    command = <<EOF
#aws --debug --profile ${var.profile} ec2 wait instance-status-ok --region ${var.region_master} --instance-ids "${self.id}"
ansible-playbook --verbose --extra-vars 'passed_in_hosts=tag_Name_${self.tags.Name}' ansible_templates/jenkins-worker-sample.yml
EOF
  }
}
The for_each meta-argument should work for Terraform v0.12.6+. If you tried using it in a version of Terraform prior to v0.12.6, then for_each would not yet be supported; you would only have count as an option.
You also stated that you tried it with Terraform v1.0+, which does support for_each. Without additional information (a plan output?), I can't tell you why that didn't work. However, I can say that Terraform sometimes does wonky stuff, and in the past there have been breaking changes introduced in minor and patch versions, sometimes without announcement.
Finally, this Medium post does a pretty decent job of explaining the differences between count and for_each, but for your example either should work just fine.
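For reference, here is a minimal sketch (placeholder AMI, not the full Jenkins configuration) of how the two meta-arguments address the resulting instances in state, which is the main practical difference between them:
locals {
  instance_names = ["peter", "nelson", "chris"]
}

# count: instances are tracked by index, e.g. aws_instance.by_count[0] .. [2].
# Removing "nelson" from the list shifts the indexes and can recreate instances.
resource "aws_instance" "by_count" {
  count         = length(local.instance_names)
  ami           = "ami-xxxxxxxx" # placeholder
  instance_type = "t2.micro"

  tags = {
    Name = local.instance_names[count.index]
  }
}

# for_each: instances are tracked by key, e.g. aws_instance.by_key["peter"].
# Removing "nelson" only destroys that one instance.
resource "aws_instance" "by_key" {
  for_each      = toset(local.instance_names)
  ami           = "ami-xxxxxxxx" # placeholder
  instance_type = "t2.micro"

  tags = {
    Name = each.key
  }
}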
I am getting started with Terraform. I am trying to provision an EC2 instance using the following .tf file. I already have a default VPC in my account in the AZ where I am trying to provision the EC2 instance.
# Terraform Settings Block
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      #version = "~> 3.21" # Optional but recommended in production
    }
  }
}

# Provider Block
provider "aws" {
  profile = "default"
  region  = "us-east-1"
}

# Resource Block
resource "aws_instance" "ec2demo" {
  ami           = "ami-c998b6b2"
  instance_type = "t2.micro"
}
I run the following Terraform commands.
terraform init
terraform plan
terraform apply
aws_instance.ec2demo: Creating...
Error: Error launching source instance: VPCIdNotSpecified: No default VPC for this user. GroupName is only supported for EC2-Classic and default VPC.
status code: 400, request id: 04274b8c-9fc2-47c0-8d51-5b627e6cf7cc
on ec2-instance.tf line 18, in resource "aws_instance" "ec2demo":
18: resource "aws_instance" "ec2demo" {
As the error suggests, Terraform doesn't find a default VPC in the us-east-1 region.
You can provide a subnet_id within your VPC to create your instance, as below.
resource "aws_instance" "ec2demo" {
  ami           = "ami-c998b6b2"
  instance_type = "t2.micro"
  subnet_id     = "subnet-0b1250d733767bafe"
}
I simply created a default VPC in the AWS console:
AWS VPC -> Actions -> Create default VPC
Once it's done, try again:
terraform plan
terraform apply
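If you would rather keep this in Terraform instead of the console, there is also the aws_default_vpc resource; note that recent versions of the AWS provider (4.x and later) can create the default VPC if it is missing, while older versions only adopt an existing one, so check the docs for your provider version. A minimal sketch:
resource "aws_default_vpc" "default" {
  tags = {
    Name = "Default VPC"
  }
}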
As of now, if there is a default VPC available in the AWS account, an instance can be created with the Terraform aws_instance resource without any network spec input.
Official AWS-terraform example:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance#basic-example-using-ami-lookup
resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id # Or use a static AMI ID for testing.
  instance_type = "t3.micro"
}
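The ami reference in that snippet assumes an aws_ami data source along these lines, adapted from the same registry example (the exact filter values may differ for your region and image):
data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}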
The error message Error launching source instance: VPCIdNotSpecified: No default VPC for this user states that EC2 did not find any networking configuration in your Terraform code telling it where to create the instance.
This is probably because the default VPC is missing from your AWS account, and you are not passing any network config input to the Terraform resource.
Basically, you have two ways to fix this:
Create a default VPC and then use the same code.
Document: https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html#create-default-vpc
Another and better way would be to inject the network config into the aws_instance resource. I have used the example from the official aws_instance resource documentation; feel free to update any attributes accordingly.
resource "aws_vpc" "my_vpc" {
  cidr_block = "172.16.0.0/16"

  tags = {
    Name = "tf-example"
  }
}

resource "aws_subnet" "my_subnet" {
  vpc_id            = aws_vpc.my_vpc.id
  cidr_block        = "172.16.10.0/24"
  availability_zone = "us-west-2a"

  tags = {
    Name = "tf-example"
  }
}

resource "aws_network_interface" "foo" {
  subnet_id   = aws_subnet.my_subnet.id
  private_ips = ["172.16.10.100"]

  tags = {
    Name = "primary_network_interface"
  }
}

resource "aws_instance" "foo" {
  ami           = "ami-005e54dee72cc1d00" # us-west-2
  instance_type = "t2.micro"

  network_interface {
    network_interface_id = aws_network_interface.foo.id
    device_index         = 0
  }

  credit_specification {
    cpu_credits = "unlimited"
  }
}
Another way of passing network config to the EC2 instance is to use subnet_id in the aws_instance resource, as suggested by others.
Is there a chance that you deleted the default VPC? If you did, you can recreate it by going to the VPC section -> Your VPCs -> the Actions dropdown in the right corner -> Create default VPC.
As per the AWS announcement last year, "On August 15, 2022 we expect all migrations to be complete, with no remaining EC2-Classic resources present in any AWS account." From now on, you will need to specify a subnet_id while creating any new resources and declare it inside your configuration.
Example:
resource "aws_instance" "test" {
  ami                    = "ami-xxxxx"
  instance_type          = var.instance_type
  vpc_security_group_ids = ["sg-xxxxxxx"]
  subnet_id              = "subnet-xxxxxx"
}
I am creating an AWS EC2 instance and I am using Terraform Cloud as the backend.
in ./main.tf:
terraform {
  required_version = "~> 0.12"

  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "organization"
    workspaces { prefix = "test-dev-" }
  }
}
in ./modules/instances/function.tf:
resource "aws_instance" "test" {
  ami                    = "${var.ami_id}"
  instance_type          = "${var.instance_type}"
  subnet_id              = "${var.private_subnet_id}"
  vpc_security_group_ids = ["${aws_security_group.test_sg.id}"]
  key_name               = "${var.test_key}"

  tags = {
    Name     = "name"
    Function = "function"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo useradd someuser"
    ]

    connection {
      host        = "${self.public_ip}"
      type        = "ssh"
      user        = "ubuntu"
      private_key = "${file("~/.ssh/mykey.pem")}"
    }
  }
}
and as a result, I got the following error:
Call to function "file" failed: no file exists at /home/terraform/.ssh/...
So what is happening here is that Terraform is trying to find the file on Terraform Cloud instead of my local machine. How can I transfer the file from my local machine while still using Terraform Cloud?
There is no straightforward way to do what I asked in the question. In the end I uploaded the key into AWS with the AWS CLI like this:
aws ec2 import-key-pair --key-name "name_for_the_key" --public-key-material file:///home/user/.ssh/name_for_the_key.pub
and then referenced it like this:
resource "aws_instance" "test" {
  ami = "${var.ami_id}"
  ...
  key_name = "name_for_the_key"
  ...
}
Note: yes, file:// looks like the most Windows-like syntax ever, but you have to use it on Linux too.
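For anyone hitting the same issue, another hedged option is to pass the key material through a (sensitive) Terraform Cloud workspace variable instead of file(), so nothing has to exist on the remote runner's filesystem. A sketch, assuming a workspace variable named ssh_private_key and the variable/resource names from the question:
variable "ssh_private_key" {
  type      = string
  sensitive = true # needs Terraform 0.14+; drop this line on 0.12
}

resource "aws_instance" "test" {
  ami                    = var.ami_id
  instance_type          = var.instance_type
  subnet_id              = var.private_subnet_id
  vpc_security_group_ids = [aws_security_group.test_sg.id]
  key_name               = "name_for_the_key"

  provisioner "remote-exec" {
    inline = ["sudo useradd someuser"]

    connection {
      host = self.public_ip
      type = "ssh"
      user = "ubuntu"
      # Key material comes from the workspace variable, so no local file is
      # read on the Terraform Cloud runner.
      private_key = var.ssh_private_key
    }
  }
}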
I am a beginner with Terraform.
I am trying to execute the following code from the Terraform Getting Started guide.
provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region     = "${var.region}"
}

resource "aws_instance" "example" {
  ami           = "${lookup(var.amis, var.region)}"
  instance_type = "t2.micro"

  tags {
    Name = "newprovisionerstest"
  }

  provisioner "local-exec" {
    command = "echo ${aws_instance.example.public_ip} > ip_address.txt"
  }
}

output "ip" {
  value = "${aws_eip.ip.public_ip}"
}
When I run
terraform apply
or
terraform refresh
It gives the following error:
Error: output 'ip': unknown resource 'aws_eip.ip' referenced in variable aws_eip.ip.public_ip
Why is that? Is it because the aws_eip resource is not declared anywhere?
As you said yourself, there is no aws_eip resource called ip. If you use aws_instance.example.public_ip instead, it should work totally fine.
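For example (a sketch, keeping the interpolation style of that Terraform version), either point the output at the instance directly, or declare the aws_eip resource that the output expects:
output "ip" {
  value = "${aws_instance.example.public_ip}"
}

# Or, if you actually want an Elastic IP, declare the resource the output references:
resource "aws_eip" "ip" {
  instance = "${aws_instance.example.id}"
}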
I'm trying to use a provisioner to write the public IP address of a newly created Azure instance into a file.
I was able to do it for a single instance.
resource "azurerm_public_ip" "helloterraformips" {
  name                         = "terraformtestip"
  location                     = "East US"
  resource_group_name          = "${azurerm_resource_group.test.name}"
  public_ip_address_allocation = "dynamic"

  tags {
    environment = "TerraformDemo"
  }
}

resource "null_resource" "ansible-provision" {
  depends_on = ["azurerm_virtual_machine.master-vm"]
  count      = "${var.node-count}"

  provisioner "local-exec" {
    command = "echo \"[masters]\n ansible_ssh_host=${azurerm_public_ip.helloterraformips.ip_address} \" >> /home/osboxes/ansible-kube/ansible/inventory/testinv"
  }
}
The trouble is that when I try to do the same on VMs created through Terraform looping, I face issues when trying to access them.
resource "azurerm_public_ip" "mysvcs-k8sip" {
  count                        = "${var.node-count}"
  name                         = "mysvcs-k8s-ip-${count.index}"
  location                     = "East US"
  resource_group_name          = "${azurerm_resource_group.mysvcs-res.name}"
  public_ip_address_allocation = "dynamic"
}

resource "null_resource" "ansible-provision" {
  provisioner "local-exec" {
    command = "echo \"[masters]\n${element(azurerm_public_ip.mysvcs-k8sip.*.ip_address,count.index)} \" >> /home/osboxes/ansible-kube/ansible/inventory/inventory"
  }
}
I'm getting this error:
Resource 'azurerm_public_ip.mysvcs-k8sip' does not have attribute 'ip_address' for variable 'azurerm_public_ip.mysvcs-k8sip.*.ip_address'
I'm digging into the semantics of Terraform and trying various things, but so far it's not working, and each iteration to create all the resources also takes time. Any help or hint would be very useful.
Thanks,
One workaround I was able to get working was to run "terraform apply -target=azurerm_virtual_machine.master-vm", which first creates the VM. Then run terraform apply again, which runs a provisioner containing this:
resource "null_resource" "ansible-k8snodes" {
  count = "${var.node-count}"

  provisioner "local-exec" {
    command = "echo \"\n[nodes]\n ${element(azurerm_public_ip.mysvcs-k8sip.*.ip_address,count.index+1)} ansible_ssh_user=testadmin ansible_ssh_pass=Password1234! \" >> /home/osboxes/ansible-kube/ansible/inventory/inventory"
  }
}
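A hedged sketch of what may avoid the two-step apply altogether: give the null_resource its count back plus an explicit depends_on on the VM (as in the single-instance version above), so the provisioner only runs once the VMs and their dynamically allocated public IPs exist. Untested; resource names are taken from the question:
resource "null_resource" "ansible-provision" {
  count      = "${var.node-count}"
  depends_on = ["azurerm_virtual_machine.master-vm"]

  provisioner "local-exec" {
    command = "echo \"[masters]\n${element(azurerm_public_ip.mysvcs-k8sip.*.ip_address, count.index)}\" >> /home/osboxes/ansible-kube/ansible/inventory/inventory"
  }
}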
@Martin - Count or no count, it doesn't matter; it fails every time. In fact, it seems like it worked only once, for the single-instance code posted above in my question; when I tried it again, it didn't work. Thanks for your help.