Getting EC2 Windows Password from instances when using Terraform

I'm struggling to get the password from a couple of new EC2 instances when using Terraform. I've been reading through a couple of posts and thought I had it, but I'm not getting anywhere.
Here's my config:
resource "aws_instance" "example" {
ami = "ami-06f9d25508c9681c3"
count = "2"
instance_type = "t2.small"
key_name = "mykey"
vpc_security_group_ids =["sg-98d190fc","sg-0399f246d12812edb"]
get_password_data = "true"
}
output "public_ip" {
value = "${aws_instance.example.*.public_ip}"
}
output "public_dns" {
value = "${aws_instance.example.*.public_dns}"
}
output "Administrator_Password" {
value = "${rsadecrypt(aws_instance.example.*.password_data,
file("mykey.pem"))}"
}
Managed to clear up all the syntax errors, but now when running I get the following error:
PS C:\tf> terraform apply
aws_instance.example[0]: Refreshing state... (ID: i-0e087e3610a8ff56d)
aws_instance.example[1]: Refreshing state... (ID: i-09557bc1e0cb09c67)
Error: Error refreshing state: 1 error(s) occurred:
* output.Administrator_Password: At column 3, line 1: rsadecrypt: argument 1
should be type string, got type list in:
${rsadecrypt(aws_instance.example.*.password_data, file("mykey.pem"))}

This error is returned because aws_instance.example.*.password_data is a list of the password_data results from each of the EC2 instances. Each one must be decrypted separately with rsadecrypt.
Doing this in Terraform v0.11 requires using null_resource as a workaround to achieve a "for each" operation:
resource "aws_instance" "example" {
count = 2
ami = "ami-06f9d25508c9681c3"
instance_type = "t2.small"
key_name = "mykey"
vpc_security_group_ids = ["sg-98d190fc","sg-0399f246d12812edb"]
get_password_data = true
}
resource "null_resource" "example" {
count = 2
triggers = {
password = "${rsadecrypt(aws_instance.example.*.password_data[count.index], file("mykey.pem"))}"
}
}
output "Administrator_Password" {
value = "${null_resource.example.*.triggers.password}"
}
From Terraform v0.12.0 onwards, this can be simplified using the new for expression construct:
resource "aws_instance" "example" {
count = 2
ami = "ami-06f9d25508c9681c3"
instance_type = "t2.small"
key_name = "mykey"
vpc_security_group_ids = ["sg-98d190fc","sg-0399f246d12812edb"]
get_password_data = true
}
output "Administrator_Password" {
value = [
for i in aws_instance.example : rsadecrypt(i.password_data, file("mykey.pem"))
]
}

Related

Control update variables in terraform

I have the following question: in Terraform, how can I update tag values only when the resource itself changes? For example, the code below has the tag UpdatedAt = timestamp(), and the timestamp() function is evaluated on every terraform apply. How can I make the tag change only when the resource changes, i.e. have timestamp() take effect only when the aws_instance resource is actually updated?
resource "aws_instance" "ec2" {
ami = var.instance_ami
instance_type = var.instance_size
subnet_id = var.subnet_id
key_name = var.ssh_key_name
vpc_security_group_ids = var.security_group_id
user_data = file(var.file_path)
root_block_device {
volume_size = var.instance_root_device_size
volume_type = "gp3"
}
tags = {
Name = "${ec2_name}-${var.project_name}-${var.infra_env}"
Project = var.project_name
Environment = var.infra_env
ManagedBy = "terraform"
UpdatedBy = var.developer_email
UpdatedAt = timestamp()
}
} ```
You can't do this. Terraform has no functionality for conditionally applying or excluding changes during the apply procedure.
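The closest available workaround is to take the tag out of Terraform's control with ignore_changes, which suppresses all diffs on it rather than only the ones from unrelated applies. A minimal sketch, assuming the resource from the question (recent Terraform versions support ignoring a single map key; older releases may require ignoring the whole tags map):

resource "aws_instance" "ec2" {
  # ... same arguments and tags as above ...

  lifecycle {
    # Terraform sets UpdatedAt on creation but never reports a diff for it again
    ignore_changes = [tags["UpdatedAt"]]
  }
}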

Terraform AWS EC2 with no change in the code, trying to destroy and create instance

I used the below Terraform code to create an AWS EC2 instance:
resource "aws_instance" "example" {
ami = var.ami-id
instance_type = var.ec2_type
key_name = var.keyname
subnet_id = "subnet-05a63e5c1a6bcb7ac"
security_groups = ["sg-082d39ed218fc0f2e"]
# root disk
root_block_device {
volume_size = "10"
volume_type = "gp3"
encrypted = true
delete_on_termination = true
}
tags = {
Name = var.instance_name
Environment = "dev"
}
metadata_options {
http_endpoint = "enabled"
http_put_response_hop_limit = 1
http_tokens = "required"
}
}
After 5 minutes, with no change in the code, when I run terraform plan it reports that something changed outside of Terraform and tries to destroy and re-create the EC2 instance. Why is this happening, and how can I prevent it?
aws_instance.example: Refreshing state... [id=i-0aa279957d1287100]

Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the last "terraform apply":

  # aws_instance.example has been changed
  ~ resource "aws_instance" "example" {
        id              = "i-0aa279957d1287100"
      ~ security_groups = [
          - "sg-082d39ed218fc0f2e",
        ]
        tags            = {
            "Environment" = "dev"
            "Name"        = "ec2linux"
        }
        # (26 unchanged attributes hidden)

      ~ root_block_device {
          + tags = {}
            # (9 unchanged attributes hidden)
        }

        # (4 unchanged blocks hidden)
    }

Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following plan may include actions to undo or respond to these changes.

─────────────────────────────────────────────────────────────

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  -/+ destroy and then create replacement
You must use vpc_security_group_ids instead of security_groups. The security_groups argument is intended for EC2-Classic / default-VPC instances referenced by name and forces replacement when it changes; for an instance in a VPC the provider stops seeing the ID there on refresh (as in the plan output above), so Terraform plans to destroy and re-create the instance. Use vpc_security_group_ids instead:
resource "aws_instance" "example" {
ami = var.ami-id
instance_type = var.ec2_type
key_name = var.keyname
subnet_id = "subnet-05a63e5c1a6bcb7ac"
vpc_security_group_ids = ["sg-082d39ed218fc0f2e"]
# root disk
root_block_device {
volume_size = "10"
volume_type = "gp3"
encrypted = true
delete_on_termination = true
}
tags = {
Name = var.instance_name
Environment = "dev"
}
metadata_options {
http_endpoint = "enabled"
http_put_response_hop_limit = 1
http_tokens = "required"
}
}

How to re-attach an EBS volume using Terraform

I'm trying to keep an AWS EBS volume as a persistent data store. Every week my AMI changes, so I have to spin up a new VM in AWS. When that happens, I expect the volume to detach from the old VM and attach to the new VM without destroying the EBS volume or its data.
resource "aws_instance" "my_instance" {
count = var.instance_count
ami = lookup(var.ami,var.aws_region)
instance_type = var.instance_type
key_name = aws_key_pair.terraform-demo.key_name
subnet_id = aws_subnet.main-public-1.id
// user_data = "${file("install_apache.sh")}"
tags = {
Name = "Terraform-${count.index + 1}"
Batch = "5AM"
}
}
variable "instances" {
type = map
default = {
"xx" = "sss-console"
"4xx" = "sss-upload-port"
"xxx" = "sss"
}
}
resource "aws_kms_key" "cmp_kms" {
description = "ssss-ebsencrypt"
tags = local.all_labels
}
resource "aws_ebs_volume" "volumes" {
count = var.instance_count
availability_zone = element(aws_instance.my_instance.*.availability_zone, count.index )
encrypted = true
kms_key_id = aws_kms_key.cmp_kms.arn
size = local.volume_size
type = local.volume_type
iops = local.volume_iops
// tags = merge(var.extra_labels, map("Name", "${var.cell}-${element(local.volume_name, count.index)}"))
lifecycle {
// prevent_destroy = true
ignore_changes = [kms_key_id, instance_id]
}
}
resource "aws_volume_attachment" "volumes-attachment" {
depends_on = [aws_instance.my_instance, aws_ebs_volume.volumes]
count = var.instance_count
device_name = "/dev/${element(local.volume_name, count.index)}"
volume_id = element(aws_ebs_volume.volumes.*.id, count.index)
instance_id = element(aws_instance.my_instance.*.id, count.index)
force_detach = true
}
ERROR on terraform apply
Error: Unsupported attribute
on instance.tf line 71, in resource "aws_ebs_volume" "volumes":
71: ignore_changes = [kms_key_id, instance_id]
This object has no argument, nested block, or exported attribute named
"instance_id".
Earlier, the same code used to work with Terraform v0.11, but it's not working with v0.12. What is the replacement for this, or how can we re-attach the EBS volume to a different machine without destroying it?
As per the Terraform documentation, the aws_ebs_volume resource does not expose any attribute named instance_id.
For reference: https://www.terraform.io/docs/providers/aws/d/ebs_volume.html
You specify the instance_id at volume-attachment time using the aws_volume_attachment resource instead.
You can refer to the answer given in https://gitter.im/hashicorp-terraform/Lobby?at=5ab900eb2b9dfdbc3a237e36 for more information.
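In practice this means dropping instance_id from the ignore_changes list on aws_ebs_volume; the attachment is already modelled by the aws_volume_attachment resource above. A minimal sketch of the corrected lifecycle block, assuming the rest of the resource stays as in the question:

resource "aws_ebs_volume" "volumes" {
  # ... same arguments as above ...

  lifecycle {
    # instance_id is not an attribute of aws_ebs_volume, so it must not be listed here
    ignore_changes = [kms_key_id]
  }
}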

How to loop through subnets in a resource using count

In Terraform 0.11.14, it was possible to loop through the subnets retrieved earlier in a data source (cf. https://www.terraform.io/docs/providers/aws/d/subnet_ids.html) as follows:
data "aws_subnet_ids" "private" {
vpc_id = "${var.vpc_id}"
tags = {
Tier = "Private"
}
}
resource "aws_instance" "app" {
count = "3"
ami = "${var.ami}"
instance_type = "t2.micro"
subnet_id = "${element(data.aws_subnet_ids.private.ids, count.index)}"
}
However, since I migrated to Terraform 0.12, this syntax results in the following error:
Error: Error in function call
on ..\..\modules\elk\es-proxy-server.tf line 21, in resource "aws_spot_instance_request" "kibana_proxy":
21: subnet_id = "${element(data.aws_subnet_ids.private.ids, count.index)}"
|----------------
| count.index is 0
| data.aws_subnet_ids.private.ids is set of string with 2 elements
Call to function "element" failed: cannot read elements from set of string.
I tried to use the tolist function and to work out how to take advantage of https://www.terraform.io/upgrade-guides/0-12.html#working-with-count-on-resources, without any success.
You should be able to do:
subnet_id = "${tolist(data.aws_subnet_ids.private.ids)[count.index]}"

Delay in creation of launch config in AWS

Using Terraform, I have the following launch config and autoscale group resources defined:
resource "aws_launch_configuration" "lc_name" {
name = "lc_name"
image_id = "ami-035d01348bb6e6070"
instance_type = "m3.large"
security_groups = ["sg-61a0b51b"]
}
####################
# Autoscaling group
####################
resource "aws_autoscaling_group" "as_group_name" {
name = "as_group_name"
launch_configuration = "lc_name"
vpc_zone_identifier = ["subnet-be1088f7","subnet-fa8d6fa1"]
min_size = "1"
max_size = "1"
desired_capacity = "1"
load_balancers = ["${aws_elb.elb_name.name}"]
health_check_type = "EC2"
}
When I run terraform apply, I get:
Error: Error applying plan:
1 error(s) occurred:
aws_autoscaling_group.as_group_name: 1 error(s) occurred:
aws_autoscaling_group.as_group_name: Error creating AutoScaling Group: ValidationError: Launch configuration name not found - A launch configuration with the name: lc_name does not exist
status code: 400, request id: b09191d3-a47c-11e8-8198-198283743bc9
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
If I run apply again, all goes well, strongly implying that there is a delay before the autoscaling group creation recognizes a new launch configuration. Is there a way to adjust for this delay?
Update:
Per suggestion, I added a dependency:
resource "aws_launch_configuration" "myLaunchConfig" {
name = "myLaunchConfig"
image_id = "ami-01c068891b0d9411a"
instance_type = "m3.large"
security_groups = ["sg-61a0b51b"]
}
resource "aws_autoscaling_group" "myAutoScalingGroup" {
name = "myAutoScalingGroup"
launch_configuration = "myLaunchConfig"
depends_on = ["myLaunchConfig"]
vpc_zone_identifier = ["subnet-be1088f7","subnet-fa8d6fa1"]
min_size = "1"
max_size = "1"
desired_capacity = "1"
load_balancers = ["${aws_elb.myLoadBalancer.name}"]
health_check_type = "EC2"
}
Still getting an error for the same reason, though it looks a bit different:
Error: aws_autoscaling_group.myAutoScalingGroup: resource depends on non-existent resource 'myLaunchConfig'
As far as Terraform can tell, there is no relationship between your autoscaling group and your launch configuration, so it will try to create them in parallel, leading to the observed race condition that corrects itself on the next apply.
With Terraform you have two different ways of ordering a dependency chain between resources.
You can use the explicit depends_on syntax to force a resource to wait until another resource is created before it, in turn, is created.
In your case this would be something like:
resource "aws_launch_configuration" "lc_name" {
name = "lc_name"
image_id = "ami-035d01348bb6e6070"
instance_type = "m3.large"
security_groups = ["sg-61a0b51b"]
}
####################
# Autoscaling group
####################
resource "aws_autoscaling_group" "as_group_name" {
name = "as_group_name"
launch_configuration = "lc_name"
vpc_zone_identifier = ["subnet-be1088f7", "subnet-fa8d6fa1"]
min_size = "1"
max_size = "1"
desired_capacity = "1"
load_balancers = ["${aws_elb.elb_name.name}"]
health_check_type = "EC2"
depends_on = ["aws_launch_configuration.lc_name"]
}
Or, and this is generally preferable where possible, if you interpolate a value from one resource into another, Terraform will automatically wait until the first resource is created before creating the second.
In your case you would then use something like this:
resource "aws_launch_configuration" "lc_name" {
name = "lc_name"
image_id = "ami-035d01348bb6e6070"
instance_type = "m3.large"
security_groups = ["sg-61a0b51b"]
}
####################
# Autoscaling group
####################
resource "aws_autoscaling_group" "as_group_name" {
name = "as_group_name"
launch_configuration = "${aws_launch_configuration.lc_name.name}"
vpc_zone_identifier = ["subnet-be1088f7", "subnet-fa8d6fa1"]
min_size = "1"
max_size = "1"
desired_capacity = "1"
load_balancers = ["${aws_elb.elb_name.name}"]
health_check_type = "EC2"
}
If you are ever unsure as to the order of things that Terraform will operate on then you might want to take a look at the terraform graph command.
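For example, a quick way to render the dependency graph to an image (assuming Graphviz is installed so the dot command is available):

terraform graph | dot -Tsvg > graph.svg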
