When I run terraform apply against the following configuration, it keeps prompting for variable input on the CLI instead of reading values from the file. If I remove the variables from the .tf file and leave only the first one in for the AMI, it works with some massaging. Any ideas?
contents of dev.tf:
variable "aws_region" {}
variable "instance_type" {}
variable "key_name" {}
variable "vpc_security_group_ids" {}
variable "subnet_id" {}
variable "iam_instance_profile" {}
variable "tag_env" {}
provider "aws" {
  region = "${var.aws_region}"
}
data "aws_ami" "amazon_linux" {
  most_recent = true

  filter {
    name   = "name"
    values = ["amzn-ami-hvm-*-x86_64-gp2"]
  }

  filter {
    name   = "owner-alias"
    values = ["amazon"]
  }
}
resource "aws_instance" "kafka" {
  ami                    = "${data.aws_ami.amazon_linux.id}"
  instance_type          = "${var.instance_type}"
  subnet_id              = "${var.subnet_id}"
  key_name               = "${var.key_name}"
  vpc_security_group_ids = ["${var.vpc_security_group_ids}"]
  iam_instance_profile   = "${var.iam_instance_profile}"

  user_data = <<-EOF
    #!/bin/bash
    sudo yum -y install telnet
  EOF

  tags {
    ProductCode   = "id"
    InventoryCode = "id"
    Environment   = "${var.tag_env}"
  }
}
contents of dev.tfvars:
aws_region = "us-east-1"
tag_env = "dev"
instance_type = "t2.large"
subnet_id = "subnet-id"
vpc_security_group_ids = "sg-id , sg-id"
key_name = "id"
iam_instance_profile = "id"
Ah, good catch. I changed the filename to terraform.tfvars and it now works.
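For context on why the rename works: Terraform only loads variable files automatically when they are named terraform.tfvars, terraform.tfvars.json, or match *.auto.tfvars; a file with any other name, such as dev.tfvars, has to be passed explicitly. Both options, sketched with the file names from the question:

```sh
# Option 1: use a name Terraform auto-loads
mv dev.tfvars terraform.tfvars    # or: mv dev.tfvars dev.auto.tfvars
terraform apply

# Option 2: keep the name dev.tfvars and pass it explicitly
terraform apply -var-file=dev.tfvars
```

Keeping per-environment files (dev.tfvars, prod.tfvars) and passing them with -var-file is a common pattern when the same configuration is applied to multiple environments.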
Related
I have the following question: in Terraform, how do I update a tag only when the resource changes? For example, the code below has the tag UpdatedAt = timestamp(), and the timestamp() function is re-evaluated on every terraform apply. How can I make the tag change only when the aws_instance resource itself changes, i.e. have timestamp() evaluated only when the resource has an actual update?
resource "aws_instance" "ec2" {
  ami                    = var.instance_ami
  instance_type          = var.instance_size
  subnet_id              = var.subnet_id
  key_name               = var.ssh_key_name
  vpc_security_group_ids = var.security_group_id
  user_data              = file(var.file_path)

  root_block_device {
    volume_size = var.instance_root_device_size
    volume_type = "gp3"
  }

  tags = {
    Name        = "${ec2_name}-${var.project_name}-${var.infra_env}"
    Project     = var.project_name
    Environment = var.infra_env
    ManagedBy   = "terraform"
    UpdatedBy   = var.developer_email
    UpdatedAt   = timestamp()
  }
}
You can't do this. Terraform has no functionality for conditionally applying or excluding changes during the apply step.
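That said, if the goal is just to stop the ever-changing timestamp() from producing a diff on every apply, one partial mitigation (a sketch on my part, not true conditional evaluation) is to tell Terraform to ignore changes to that single tag key:

```hcl
resource "aws_instance" "ec2" {
  # ... same arguments as in the question ...

  tags = {
    # ...
    UpdatedAt = timestamp()
  }

  lifecycle {
    # The tag is set at creation time; subsequent changes to the
    # timestamp() value are ignored, so the resource no longer shows
    # a diff on every apply.
    ignore_changes = [tags["UpdatedAt"]]
  }
}
```

The trade-off is that the tag then never updates through Terraform at all, which is the flip side of Terraform having no way to run a function only "when the resource changes".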
I used the Terraform code below to create an AWS EC2 instance:
resource "aws_instance" "example" {
  ami             = var.ami-id
  instance_type   = var.ec2_type
  key_name        = var.keyname
  subnet_id       = "subnet-05a63e5c1a6bcb7ac"
  security_groups = ["sg-082d39ed218fc0f2e"]

  # root disk
  root_block_device {
    volume_size           = "10"
    volume_type           = "gp3"
    encrypted             = true
    delete_on_termination = true
  }

  tags = {
    Name        = var.instance_name
    Environment = "dev"
  }

  metadata_options {
    http_endpoint               = "enabled"
    http_put_response_hop_limit = 1
    http_tokens                 = "required"
  }
}
After 5 minutes, with no change in the code, when I run terraform plan it reports that something changed outside of Terraform and tries to destroy and re-create the EC2 instance. Why is this happening?
How can I prevent it?
aws_instance.example: Refreshing state... [id=i-0aa279957d1287100]
Note: Objects have changed outside of Terraform
Terraform detected the following changes made outside of Terraform since the last "terraform apply":
# aws_instance.example has been changed
~ resource "aws_instance" "example" {
id = "i-0aa279957d1287100"
~ security_groups = [
- "sg-082d39ed218fc0f2e",
]
tags = {
"Environment" = "dev"
"Name" = "ec2linux"
}
# (26 unchanged attributes hidden)
~ root_block_device {
+ tags = {}
# (9 unchanged attributes hidden)
}
# (4 unchanged blocks hidden)
}
Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following plan may include actions to undo or respond to these
changes.
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
You must use vpc_security_group_ids instead of security_groups. The security_groups argument takes security group names and is intended for EC2-Classic or the default VPC; passing group IDs to it for an instance in a VPC makes Terraform detect drift on every refresh and plan a destroy-and-recreate. vpc_security_group_ids takes the IDs directly and avoids this:
resource "aws_instance" "example" {
  ami                    = var.ami-id
  instance_type          = var.ec2_type
  key_name               = var.keyname
  subnet_id              = "subnet-05a63e5c1a6bcb7ac"
  vpc_security_group_ids = ["sg-082d39ed218fc0f2e"]

  # root disk
  root_block_device {
    volume_size           = "10"
    volume_type           = "gp3"
    encrypted             = true
    delete_on_termination = true
  }

  tags = {
    Name        = var.instance_name
    Environment = "dev"
  }

  metadata_options {
    http_endpoint               = "enabled"
    http_put_response_hop_limit = 1
    http_tokens                 = "required"
  }
}
I'm getting the error below from the command AWS_PROFILE=myprofile AWS_REGION=sa-east-1 terraform apply -target=module.saopaulo_service_dev_kubernetes:
Error authorizing security group rule type ingress: InvalidGroup.NotFound: The security group 'sg-something' does not exist
The target I'm applying is as below.
module "saopaulo_service_dev_kubernetes" {
  source         = "./modules/regional-kubernetes"
  region_code    = "saopaulo"
  vpc_name       = "main"
  env            = "dev"
  cluster_prefix = "service"
  instance_type  = "m5.2xlarge"

  providers = {
    aws = aws.saopaulo
  }
}
The source file is below. I didn't include all the files, as there are too many, but here is the part that uses the eks module (terraform-aws-modules/eks/aws) to create my module.
data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
  version                = "~> 1.9"
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "12.2.0" # Version Pinning

  cluster_name    = local.cluster_name
  cluster_version = local.cluster_version
  vpc_id          = local.vpc_id
  subnets         = local.private_subnets

  cluster_enabled_log_types            = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
  worker_additional_security_group_ids = [aws_security_group.nodeport.id, data.aws_security_group.common_eks_sg.id]
  wait_for_cluster_cmd                 = "for i in `seq 1 60`; do curl -k -s $ENDPOINT/healthz >/dev/null && exit 0 || true; sleep 5; done; echo TIMEOUT && exit 1"

  worker_groups = concat([{
    instance_type = "t3.micro"
    asg_min_size  = "1"
    asg_max_size  = var.asg_max_size
    key_name      = "shared-backdoor"

    kubelet_extra_args = join(" ", [
      "--node-labels=app=nodeport",
      "--register-with-taints=dedicated=nodeport:NoSchedule"
    ])

    pre_userdata = file("${path.module}/pre_userdata.sh")

    tags = concat([for k, v in local.common_tags : {
      key                 = k
      value               = v
      propagate_at_launch = "true"
    }], [{
      key                 = "Role"
      value               = "nodeport"
      propagate_at_launch = "true"
    }])
  }], local.worker_group)

  map_users = local.allow_user
  # map_roles = local.allow_roles[var.env]
}
I have a security group named sg-something in the sa-east-1 region, and I have also verified that I'm running terraform apply against the correct region with:
data "aws_region" "current" {}

output "my_region" {
  value = data.aws_region.current.name
}
Any suggestions?
➜ terraform -v
Terraform v0.12.24
+ provider.aws v2.60.0
My terraform example.tf:
locals {
  standard_tags = {
    team        = var.team
    project     = var.project
    component   = var.component
    environment = var.environment
  }
}

provider "aws" {
  profile = "profile"
  region  = var.region
}

resource "aws_key_pair" "security_key" {
  key_name   = "security_key"
  public_key = file(".ssh/key.pub")
}

# New resource for the S3 bucket our application will use.
resource "aws_s3_bucket" "project_bucket" {
  # NOTE: S3 bucket names must be unique across _all_ AWS accounts, so
  # this name must be changed before applying this example to avoid naming
  # conflicts.
  bucket = "project-bucket"
  acl    = "private"
}
resource "aws_security_group" "ssh_allow" {
  name = "allow-all-ssh"

  ingress {
    cidr_blocks = ["0.0.0.0/0"]
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "http_allow" {
  name = "allow-all-http"

  ingress {
    cidr_blocks = ["0.0.0.0/0"]
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
resource "aws_instance" "example" {
  ami             = "ami-08ee2516c7709ea48"
  instance_type   = "t2.micro"
  security_groups = [aws_security_group.ssh_allow.name, aws_security_group.http_allow.name]
  key_name        = aws_key_pair.security_key.key_name

  connection {
    type        = "ssh"
    user        = "centos"
    private_key = file(".ssh/key")
    host        = self.public_ip
  }

  provisioner "local-exec" {
    command = "echo ${aws_instance.example.public_ip} > ip_address.txt"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum -y install nginx",
      "sudo systemctl start nginx"
    ]
  }

  depends_on = [aws_s3_bucket.project_bucket, aws_key_pair.security_key]

  dynamic "tag" {
    for_each = local.standard_tags
    content {
      key                 = tag.key
      value               = tag.value
      propagate_at_launch = true
    }
  }
}
And when I run terraform plan
I got the following error:
➜ terraform plan
Error: Unsupported block type
on example.tf line 94, in resource "aws_instance" "example":
94: dynamic "tag" {
Blocks of type "tag" are not expected here.
There isn't a block type called tag defined in the schema for the aws_instance resource type. There is an argument called tags, which I think is the way to get the result you were looking for here:
tags = local.standard_tags
I expect you are thinking of the tag block in aws_autoscaling_group, which deviates from the usual design of tags arguments in AWS provider resources because for this resource type in particular each tag has the additional attribute propagate_at_launch. That attribute only applies to autoscaling groups because it decides whether instances launched from the autoscaling group will inherit a particular tag from the group itself.
Unfortunately, since the aws_instance resource's tags attribute is a map, within HCL's current constructs it cannot be written as repeatable blocks like the tag block in the aws_autoscaling_group example shown in the Dynamic Nested Blocks section here: https://www.hashicorp.com/blog/hashicorp-terraform-0-12-preview-for-and-for-each/
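For contrast, here is roughly what that looks like on aws_autoscaling_group, where tag genuinely is a repeatable nested block and so dynamic "tag" is valid (a sketch reusing local.standard_tags from the question; the other arguments are placeholders):

```hcl
resource "aws_autoscaling_group" "example" {
  # ... name, min_size, max_size, launch configuration, etc. ...

  dynamic "tag" {
    for_each = local.standard_tags
    content {
      key                 = tag.key
      value               = tag.value
      propagate_at_launch = true
    }
  }
}
```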
But from your comment, it seems you're trying to set the tags attribute with a map of key/value pairs? In that case this is certainly doable 😄 you should be able to set the field directly with tags = local.standard_tags.
Or, if you intend to set the tags attribute from a list of key/value objects, a for expression works as well, something like:
locals {
  standard_tags = [
    {
      name   = "a"
      number = 1
    },
    {
      name   = "b"
      number = 2
    },
    {
      name   = "c"
      number = 3
    },
  ]
}

resource "aws_instance" "test" {
  ...

  tags = {
    for tag in local.standard_tags :
    tag.name => tag.number
  }
}
I'm struggling to get the passwords from a couple of new EC2 instances when using Terraform. I've been reading through a couple of posts and thought I had it, but I'm not getting anywhere.
Here's my config:
resource "aws_instance" "example" {
  ami                    = "ami-06f9d25508c9681c3"
  count                  = "2"
  instance_type          = "t2.small"
  key_name               = "mykey"
  vpc_security_group_ids = ["sg-98d190fc", "sg-0399f246d12812edb"]
  get_password_data      = "true"
}

output "public_ip" {
  value = "${aws_instance.example.*.public_ip}"
}

output "public_dns" {
  value = "${aws_instance.example.*.public_dns}"
}

output "Administrator_Password" {
  value = "${rsadecrypt(aws_instance.example.*.password_data, file("mykey.pem"))}"
}
I managed to clear up all the syntax errors, but now I get the following error when running:
PS C:\tf> terraform apply
aws_instance.example[0]: Refreshing state... (ID: i-0e087e3610a8ff56d)
aws_instance.example[1]: Refreshing state... (ID: i-09557bc1e0cb09c67)
Error: Error refreshing state: 1 error(s) occurred:
* output.Administrator_Password: At column 3, line 1: rsadecrypt: argument 1
should be type string, got type list in:
${rsadecrypt(aws_instance.example.*.password_data, file("mykey.pem"))}
This error is returned because aws_instance.example.*.password_data is a list of the password_data results from each of the EC2 instances. Each one must be decrypted separately with rsadecrypt.
To do this in Terraform v0.11 requires using null_resource as a workaround to achieve a "for each" operation:
resource "aws_instance" "example" {
  count                  = 2
  ami                    = "ami-06f9d25508c9681c3"
  instance_type          = "t2.small"
  key_name               = "mykey"
  vpc_security_group_ids = ["sg-98d190fc", "sg-0399f246d12812edb"]
  get_password_data      = true
}

resource "null_resource" "example" {
  count = 2

  triggers = {
    password = "${rsadecrypt(aws_instance.example.*.password_data[count.index], file("mykey.pem"))}"
  }
}

output "Administrator_Password" {
  value = "${null_resource.example.*.triggers.password}"
}
From Terraform v0.12.0 onwards, this can be simplified using the new for expression construct:
resource "aws_instance" "example" {
  count                  = 2
  ami                    = "ami-06f9d25508c9681c3"
  instance_type          = "t2.small"
  key_name               = "mykey"
  vpc_security_group_ids = ["sg-98d190fc", "sg-0399f246d12812edb"]
  get_password_data      = true
}

output "Administrator_Password" {
  value = [
    for i in aws_instance.example : rsadecrypt(i.password_data, file("mykey.pem"))
  ]
}