Restore an instance from a snapshot without recreating it using Terraform

Use case: the intent is to revert an instance to an AWS snapshot that was already taken of the volume the instance is currently running on.
My first thought was to use "terraform import" to bring the existing instance into state and then modify the HCL config to replace ONLY the volume. With the config below, Terraform creates an AMI from the snapshot and then spawns an instance from that AMI. It works, but it destroys the instance and then recreates it.
I don't want the instance recreated. Instead, I'd like Terraform to do the following (a sketch of this appears after the config below):
stop the instance
detach the current volume
create a volume from the provided snapshot to revert to
attach the created volume to the instance
power the instance back on
How can I achieve this?
Here is the config file I am trying after importing the instance:
provider "aws" {
…..
…..
…..
}
resource "aws_ami" "example5554" {
name = "example5554"
virtualization_type = "hvm"
root_device_name = "/dev/sda1"
ebs_block_device {
snapshot_id = "snap-xxxxxxxxxxxxx”
device_name = "/dev/sda1"
volume_type = "gp2"
}
}
resource "aws_instance" "arstest1new" {
ami = "${aws_ami.example5554.id}"
instance_type = "m4.large"
}
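For comparison, a minimal sketch of the swap-the-volume approach using aws_ebs_volume (created from the snapshot) plus aws_volume_attachment could look like the following. It assumes the imported instance stays as aws_instance.arstest1new with its original AMI; the snapshot ID, availability zone and device name are placeholders, and stopping and starting the instance still has to happen outside Terraform (for example with the AWS CLI), since these resources do not power-cycle the instance:
resource "aws_ebs_volume" "restored" {
  # volume created from the snapshot to revert to
  availability_zone = "us-east-1a"
  snapshot_id       = "snap-xxxxxxxxxxxxx"
  type              = "gp2"
}

resource "aws_volume_attachment" "restored" {
  # attach the restored volume to the imported instance
  device_name = "/dev/sda1"
  volume_id   = aws_ebs_volume.restored.id
  instance_id = aws_instance.arstest1new.id
}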

Related

When to use ebs_block_device?

I want to create an EC2 instance with Terraform. This instance should have some EBS volumes.
In the documentation I read that Terraform provides two ways to create an EBS volume:
ebs_block_device
aws_ebs_volume with aws_volume_attachment
I want to know, when should I use ebs_block_device?
Documentation
Unfortunately the documentation isn't that clear (at least for me) about:
When to use ebs_block_device?
How is the exact actual behavior?
See Resource: aws_instance:
ebs_block_device - (Optional) One or more configuration blocks with additional EBS block devices to attach to the instance. Block device configurations only apply on resource creation. See Block Devices below for details on attributes and drift detection. When accessing this as an attribute reference, it is a set of objects.
and
Currently, changes to the ebs_block_device configuration of existing resources cannot be automatically detected by Terraform. To manage changes and attachments of an EBS block to an instance, use the aws_ebs_volume and aws_volume_attachment resources instead. If you use ebs_block_device on an aws_instance, Terraform will assume management over the full set of non-root EBS block devices for the instance, treating additional block devices as drift. For this reason, ebs_block_device cannot be mixed with external aws_ebs_volume and aws_volume_attachment resources for a given instance.
Research
I read:
No change when modifying aws_instance.ebs_block_device.volume_size, which says that Terraform doesn't show any changes with plan/apply and doesn't change anything in AWS, although changes were made.
AWS "ebs_block_device.0.volume_id": this field cannot be set, which says that Terraform shows an error while running plan.
Ebs_block_device forcing replacement every terraform apply, which says that Terraform replaces all EBS.
aws_instance dynamic ebs_block_device forces replacement, which says that Terraform replaces all EBS, although no changes were made.
adding ebs_block_device to existing aws_instance forces unneccessary replacement, which says that Terraform replaces the whole EC2 instance with all EBS.
aws_instance dynamic ebs_block_device forces replacement, which says that Terraform replaces the whole EC2 instance with all EBS, although no changes were made.
I know that the issues are about different versions of Terraform and the Terraform AWS provider, and some issues are already fixed, but what is the actual intended behavior?
In almost all issues the workaround/recommendation is to use aws_ebs_volume with aws_volume_attachment instead of ebs_block_device.
Question
When should I use ebs_block_device? What is the use case for this feature?
When should I use ebs_block_device?
When you need an additional volume besides the root volume, because
Unlike the data stored on a local instance store (which persists only
as long as that instance is alive), data stored on an Amazon EBS
volume can persist independently of the life of the instance.
When you launch an instance, the root device volume contains the image used to boot the instance.
Instances that use Amazon EBS for the root device automatically have
an Amazon EBS volume attached. When you launch an Amazon EBS-backed
instance, we create an Amazon EBS volume for each Amazon EBS snapshot
referenced by the AMI you use
Here's an example of an EC2 instance with an additional EBS volume.
provider "aws" {
  region = "eu-central-1"
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  ebs_block_device {
    device_name           = "/dev/sdb"
    volume_size           = 8
    volume_type           = "gp3"
    throughput            = 125
    delete_on_termination = false
  }
}
Note: delete_on_termination for root_block_device is set to true by default.
You can read more on AWS block device mapping here.
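If you also want to keep the root volume around after termination, a minimal sketch (added inside the same aws_instance resource) is to set the flag explicitly in a root_block_device block:
root_block_device {
  # override the default of true for the root volume
  delete_on_termination = false
}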
EDIT:
aws_volume_attachment is used when you want to attach an existing EBS volume to an EC2 instance. It helps to manage the relationship between the volume and the instance, and ensure that the volume is attached to the desired instance in the desired state.
Here's an example usage:
resource "aws_volume_attachment" "ebs_att" {
device_name = "/dev/sdh"
volume_id = aws_ebs_volume.example.id
instance_id = aws_instance.web.id
}
resource "aws_instance" "web" {
ami = "ami-21f78e11"
availability_zone = "us-west-2a"
instance_type = "t2.micro"
tags = {
Name = "HelloWorld"
}
}
resource "aws_ebs_volume" "example" {
availability_zone = "us-west-2a"
size = 1
}
The ebs_block_device block, on the other hand, is used when you want to create a new EBS volume and attach it to an EC2 instance at the same time the instance is created.
NOTE:
If you use ebs_block_device on an aws_instance, Terraform will assume
management over the full set of non-root EBS block devices for the
instance, and treats additional block devices as drift. For this
reason, ebs_block_device cannot be mixed with external aws_ebs_volume + aws_volume_attachment resources for a given instance.
Source
I strongly suggest using only the resource aws_ebs_volume. When creating an instance, the root block device will be created automatically. For extra EBS storage, you will want Terraform to manage the volumes independently.
Why?
Basically you have 2 choices to create an instance with 1 extra disk:
resource "aws_instance" "instance" {
ami = "ami-xxxx"
instance_type = "t4g.micro"
#... other arguments ...
ebs_block_device {
volume_size = 10
volume_type = "gp3"
#... other arguments ...
}
}
OR
resource "aws_instance" "instance" {
ami = "ami-xxxx"
instance_type = "t4g.micro"
#... other arguments ...
}
resource "aws_ebs_volume" "volume" {
size = 10
type = "gp3"
}
resource "aws_volume_attachment" "attachment" {
volume_id = aws_ebs_volume.volume.id
instance_id = aws_instance.instance.id
device_name = "/dev/sdb"
}
The first method is more compact, creates fewer Terraform resources and makes terraform import easier. But what happens if you need to recreate your instance? Terraform will remove the instance and redeploy it from scratch with new volumes. If you set delete_on_termination to false, the old volumes will still exist, but they won't be attached to your new instance.
On the contrary, when using dedicated resources, recreating the instance will recreate the attachments (because the instance ID changes) and then reattach your existing volumes to the instance, which is what we need 90% of the time.
Also, if at some point you need to manipulate your volume in the Terraform state (terraform state commands), it will be much easier to do on the individual aws_ebs_volume resource.
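For example, taking an existing volume in and out of state is a couple of targeted commands on that resource (the resource address matches the example above; the volume ID is a placeholder):
terraform state show aws_ebs_volume.volume
terraform state rm aws_ebs_volume.volume
terraform import aws_ebs_volume.volume vol-0123456789abcdef0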
Finally, at some point in your Terraform journey, you will want to industrialize your code by adding loops, variables and so on. A common use case is to make the number of volumes variable: you provide a map of volumes and Terraform creates 1, 2 or 10 volumes according to it.
For this you also have 2 options:
variable "my_volume" { map(any) }
my_volume = {
"/dev/sdb": {
"size": 10
"type": "gp3"
}
}
resource "aws_instance" "instance" {
ami = "ami-xxxx"
instance_type = "t4g.micro"
#... other arguments ...
dynamic "ebs_block_device" {
for_each = var.my_volumes
content {
volume_size = ebs_block_device.value["size"]
volume_type = ebs_block_device.value["type"]
#... other arguments ...
}
}
}
OR
resource "aws_instance" "instance" {
ami = "ami-xxxx"
instance_type = "t4g.micro"
#... other arguments ...
}
resource "aws_ebs_volume" "volume" {
for_each = var.
size = 10
type = "gp3"
}
resource "aws_volume_attachment" "attachment" {
volume_id = aws_ebs_volume.volume.id
instance_id = aws_instance.instance.id
device_name = "/dev/sdb"
}

Terraform: Azure VMSS rolling_upgrade does not re-image instances

Having the following VMSS terraform config:
resource "azurerm_linux_virtual_machine_scale_set" "my-vmss" {
...
instances = 2
...
upgrade_mode = "Rolling"
rolling_upgrade_policy {
max_batch_instance_percent = 100
max_unhealthy_instance_percent = 100
max_unhealthy_upgraded_instance_percent = 0
pause_time_between_batches = "PT10M"
}
extension {
name = "my-vmss-app-health-ext"
publisher = "Microsoft.ManagedServices"
type = "ApplicationHealthLinux"
automatic_upgrade_enabled = true
type_handler_version = "1.0"
settings =jsonencode({
protocol = "tcp"
port = 8080
})
...
}
However, whenever a change is applied (e.g., changing custom_data), the VMSS is updated but instances are not reimaged. Only after manual reimage (via UI or Azure CLI) do the instances get updated.
The "terraform plan" is as expected - custom_data change is detected:
# azurerm_linux_virtual_machine_scale_set.my-vmss will be updated in-place
~ resource "azurerm_linux_virtual_machine_scale_set" "my-vmss" {
...
~ custom_data = (sensitive value)
...
Plan: 0 to add, 1 to change, 0 to destroy.
Any idea of how to make Terraform cause the instance reimaging?
It looks like this is not a Terraform issue but how rolling upgrades are designed by Azure. From here (1) it follows that updates to custom_data won't affect existing instances. That is, until an instance is manually reimaged (e.g., via the UI or the Azure CLI) it won't get the new custom_data (e.g., the new cloud-init script).
In contrast, AWS does refresh instances on custom_data updates. Please let me know if my understanding is incorrect or if you have an idea of how to work around this limitation in Azure.
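For reference, the manual reimage step can also be scripted with the Azure CLI rather than done in the portal; a minimal sketch (the resource group and scale set names are placeholders, and depending on the CLI version you may need --instance-ids to pick the instances to reimage) is:
az vmss reimage --resource-group my-rg --name my-vmss
After the reimage, the instances boot with the updated custom_data.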

Create EKS node group of instance type Fargate with Terraform

With the eksctl CLI one can create an EKS cluster of type Fargate, which creates nodes of instance type "Fargate".
How can the same be achieved with Terraform? The cluster can be created with node groups, but the instance type Fargate does not seem to exist (although eksctl creates it like that):
node_groups = {
  eks_nodes = {
    desired_capacity = 3
    max_capacity     = 3
    min_capacity     = 3
    instance_type    = "Fargate"
  }
}
Thanks!
Have you tried to define a Fargate profile first?
You must define at least one Fargate profile that specifies which pods should use Fargate when they are launched. You also need to create a pod execution role, because the components running on the Fargate infrastructure need to make calls to AWS APIs on your behalf to do things like pull container images from Amazon ECR or route logs to other AWS services.
Terraform code for an AWS EKS Fargate profile looks like the following:
resource "aws_eks_fargate_profile" "default" {
cluster_name = var.cluster_name
fargate_profile_name = var.fargate_profile_name
pod_execution_role_arn = join("", aws_iam_role.default.arn)
subnet_ids = var.subnet_ids
tags = var.tags
selector {
namespace = var.kubernetes_namespace
labels = var.kubernetes_labels
}
}
Make sure you're using the aws_eks_fargate_profile resource to create an eks fargate profile.
Terraform code for the Fargate pod execution role looks like the following:
data "aws_iam_policy_document" "assume_role" {
statement {
effect = "Allow"
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["eks-fargate-pods.amazonaws.com"]
}
}
}
resource "aws_iam_role" "default" {
name = var.role_name
assume_role_policy = join("", data.aws_iam_policy_document.assume_role.json)
tags = var.tags
}
resource "aws_iam_role_policy_attachment" "amazon_eks_fargate_pod_execution_role_policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy"
role = join("", aws_iam_role.default.name)
}
I suggest you check some awesome examples from communities like Cloudposse. They provide a complete example of a Fargate profile and an EKS node group, which seems to be the solution you need to deploy at this moment.
PS: Try to read how they built the modules; I think you'll reach your goal quickly.
I hope this is useful for you and other users.

Terraform attempts to create the S3 backend again when switching to a new workspace

I am following this excellent guide to Terraform. I am currently on the 3rd post, exploring the state, specifically at the point where Terraform workspaces are demonstrated.
So, I have the following main.tf:
provider "aws" {
region = "us-east-2"
}
resource "aws_s3_bucket" "terraform_state" {
bucket = "mark-kharitonov-terraform-up-and-running-state"
# Enable versioning so we can see the full revision history of our
# state files
versioning {
enabled = true
}
# Enable server-side encryption by default
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
}
resource "aws_dynamodb_table" "terraform_locks" {
name = "terraform-up-and-running-locks"
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}
terraform {
backend "s3" {
# Replace this with your bucket name!
bucket = "mark-kharitonov-terraform-up-and-running-state"
key = "workspaces-example/terraform.tfstate"
region = "us-east-2"
# Replace this with your DynamoDB table name!
dynamodb_table = "terraform-up-and-running-locks"
encrypt = true
}
}
output "s3_bucket_arn" {
value = aws_s3_bucket.terraform_state.arn
description = "The ARN of the S3 bucket"
}
output "dynamodb_table_name" {
value = aws_dynamodb_table.terraform_locks.name
description = "The name of the DynamoDB table"
}
resource "aws_instance" "example" {
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
}
And it is all great:
C:\work\terraform [master ≡]> terraform workspace show
default
C:\work\terraform [master ≡]> terraform apply
Acquiring state lock. This may take a few moments...
aws_dynamodb_table.terraform_locks: Refreshing state... [id=terraform-up-and-running-locks]
aws_instance.example: Refreshing state... [id=i-01120238707b3ba8e]
aws_s3_bucket.terraform_state: Refreshing state... [id=mark-kharitonov-terraform-up-and-running-state]
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Releasing state lock. This may take a few moments...
Outputs:
dynamodb_table_name = terraform-up-and-running-locks
s3_bucket_arn = arn:aws:s3:::mark-kharitonov-terraform-up-and-running-state
C:\work\terraform [master ≡]>
Now I am trying to follow the guide - create a new workspace and apply the code there:
C:\work\terraform [master ≡]> terraform workspace new example1
Created and switched to workspace "example1"!
You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.
C:\work\terraform [master ≡]> terraform plan
Acquiring state lock. This may take a few moments...
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_dynamodb_table.terraform_locks will be created
+ resource "aws_dynamodb_table" "terraform_locks" {
...
+ name = "terraform-up-and-running-locks"
...
}
# aws_instance.example will be created
+ resource "aws_instance" "example" {
+ ami = "ami-0c55b159cbfafe1f0"
...
}
# aws_s3_bucket.terraform_state will be created
+ resource "aws_s3_bucket" "terraform_state" {
...
+ bucket = "mark-kharitonov-terraform-up-and-running-state"
...
}
Plan: 3 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
Releasing state lock. This may take a few moments...
C:\work\terraform [master ≡]>
And here the problems start. In the guide, the terraform plan command reports that only one resource is going to be created - an EC2 instance. This implies that terraform is going to reuse the same S3 bucket for the backend and the same DynamoDB table for the lock. But in my case, terraform informs me that it would want to create all the 3 resources, including the S3 bucket. Which would definitely fail (already tried).
So, what am I doing wrong? What is missing?
Creating a new workspace is effectively starting from scratch. The guide's steps are a bit confusing in this regard, but they are creating two plans to achieve the final result. The first creates the state S3 bucket and the locking DynamoDB table, and the second plan contains just the instance they are creating, but uses the terraform block to tell that plan where to store its state.
In your example you are both setting your state location and creating it in the same plan. This means that when you create a new workspace, it's going to attempt to create that state location a second time, because this workspace does not know about the other workspace's state.
In the end it's important to know that using workspaces creates a unique state file per workspace by adding the workspace name to the remote state path. For example, if your state bucket is mark-kharitonov-terraform-up-and-running-state with a key of workspaces-example/terraform.tfstate, you might see the following:
Default state: mark-kharitonov-terraform-up-and-running-state/workspaces-example/terraform.tfstate
Other state: mark-kharitonov-terraform-up-and-running-state/env:/other/workspaces-example/terraform.tfstate
EDIT:
To be clear on how to get the guide's results: you need to create two separate plans in separate folders (all the configuration in one working directory is planned and applied together). So create a hierarchy like:
plans >
state >
main.tf
instance >
main.tf
Inside your plans/state/main.tf file put your state location content:
provider "aws" {
region = "us-east-2"
}
resource "aws_s3_bucket" "terraform_state" {
bucket = "mark-kharitonov-terraform-up-and-running-state"
# Enable versioning so we can see the full revision history of our
# state files
versioning {
enabled = true
}
# Enable server-side encryption by default
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
}
resource "aws_dynamodb_table" "terraform_locks" {
name = "terraform-up-and-running-locks"
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}
output "s3_bucket_arn" {
value = aws_s3_bucket.terraform_state.arn
description = "The ARN of the S3 bucket"
}
Then in your plans/instance/main.tf file you can reference the created state location with the terraform block and should only need the following content:
terraform {
  backend "s3" {
    # Replace this with your bucket name!
    bucket = "mark-kharitonov-terraform-up-and-running-state"
    key    = "workspaces-example/terraform.tfstate"
    region = "us-east-2"

    # Replace this with your DynamoDB table name!
    dynamodb_table = "terraform-up-and-running-locks"
    encrypt        = true
  }
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
}

The Terraform resource "random_pet" is not working

This code will create an EC2 instance with the name "EC2 Instance":
provider "aws" {
region = "eu-west-1"
}
module ec2 {
source = "./ec2_instance"
name = "EC2 Instance"
}
However, if I try and use the random_pet resource the Instance name becomes an empty string.
provider "aws" {
region = "eu-west-1"
}
resource "random_pet" "server" {
length = 4
}
module ec2 {
source = "./ec2_instance"
name = "${random_pet.server.id}"
}
How come?
I'm using the random_pet.server.id code from https://www.terraform.io/docs/providers/random/r/pet.html
UPDATE: By using an output I was able to debug this (see the sketch below).
Terraform does not seem to show the value during a plan. However, when doing an apply it did successfully populate the value (and therefore name the instance). The question then becomes: why does it not work in plan but does in apply?
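For reference, the debugging output mentioned above can be as simple as the following (the output name is arbitrary):
output "pet_name" {
  value = random_pet.server.id
}
During plan the value typically only shows up as "known after apply" (or "<computed>" in older Terraform versions), since the pet name is generated when the resource is created.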
