Terraform code will not create EC2 Instance

I'm new to Terraform and I'm trying to create my first resource.
The provider is AWS, and the provider download completed. I have run terraform init and it has completed.
However, when I try to run terraform plan it tells me that nothing in my infrastructure will change:
provider "aws" {
access_key = "I input my key here"
secret_key = " I input my key here"
region = "us-east-1"
}
resource "aws_instance" "Server1" {
ami = "ami-0ea83ef2bc1efef82"
instance_type = "t2.micro"
}

And that is correct.
"terraform plan" will just create execution plan, but it will not execute anything!
The terraform plan command is used to create an execution plan. Terraform performs a refresh, unless explicitly disabled, and then determines what actions are necessary to achieve the desired state specified in the configuration files.
This command is a convenient way to check whether the execution plan for a set of changes matches your expectations without making any changes to real resources or to the state
Terraform Plan
Now, post "terraform plan", what you have to do to create an AWS instance is hit "terraform apply"
"terraform apply" will pick the plan generated by "terraform plan" and will execute it on the provider mentioned. If its execution is successful, an EC2 instance will be created.
The terraform apply command is used to apply the changes required to reach the desired state of the configuration, or the pre-determined set of actions generated by a terraform plan execution plan.
Terraform Apply

Save the configuration, run terraform init, and then run terraform plan followed by terraform apply.
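A minimal sketch of that workflow, run from the directory containing the configuration (the plan.tfplan file name is just an example):
# download the AWS provider plugin and initialise the working directory
terraform init
# build an execution plan and optionally save it to a file
terraform plan -out=plan.tfplan
# execute the plan; this is the step that actually creates the EC2 instance
terraform apply "plan.tfplan"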

Related

In Terraform, is it possible to move state from one workspace to another?

When starting out I was using the default workspace. Due to increased complexity I would like to use multiple workspaces. I want to move what is in the default workspace into its own workspace, or rename the default workspace as another workspace. How can I do this?
Yes, it is possible to migrate state between workspaces.
I'm assuming that you are using the S3 remote backend and Terraform version >= 0.13.
Let's see what this state surgery looks like.
Here is a sample resource config that needs to be migrated between workspaces:
provider "local" {
version = "2.1.0"
}
resource "local_file" "foo" {
content = "foo!"
filename = "foo.bar"
}
terraform {
backend "s3" {
bucket = ""
region = ""
kms_key_id = ""
encrypt = ""
key = ""
dynamodb_table = ""
}
}
Let's initialise the backend for the default workspace and apply:
terraform init
<Initialize the backend>
terraform workspace list
* default
terraform apply
local_file.foo: Refreshing state... [id=<>]
Apply complete! Resources: 0 added, 0 changed, 0 destroyed
So, as you can see, the local file was already created and its state is stored in the default workspace; terraform apply didn't change anything.
Now, we want to migrate to a new workspace:
Pull the state while you are still in the default workspace
terraform state pull > default.tfstate
Create a new workspace; let's call it test
terraform workspace new test
Created and switched to workspace "test"!
If you try to run terraform state list, you should not see any state.
Let's push the state to the newly created workspace, see what's in the state, and also see what happens when we apply.
terraform state push default.tfstate
terraform state list
local_file.foo
terraform apply
local_file.foo: Refreshing state... [id=<>]
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Your local_file.foo has been migrated to the test workspace.
Don't forget to switch back to the default workspace and remove state references for this file.
terraform workspace select default
terraform state rm local_file.foo
Removed local_file.foo
Successfully removed 1 resource instance(s).
PS: I would highly recommend reading more about managing Terraform state.
None of the previous answers worked for me; every time, the new workspace was empty, as it should be. In order to populate it you need to use -state, as per the documentation:
https://www.terraform.io/cli/commands/workspace/new
So what worked:
Create a local backup:
terraform state pull > default.tfstate
Switch to a local backend by commenting out the backend block, then run terraform init -migrate-state.
Create your new workspace with terraform workspace new -state=default.tfstate newspace. This will copy the default state into newspace.
Do a terraform plan to check.
Uncomment the backend block to switch back to the remote backend, then run terraform init -migrate-state to migrate your new workspace to the cloud.
Do a terraform plan and terraform workspace list to check. (A consolidated sketch of these commands is shown below.)
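A minimal sketch of that sequence, with newspace as the placeholder workspace name from the steps above:
# 1. back up the default workspace's state locally
terraform state pull > default.tfstate
# 2. comment out the backend block, then migrate to a local backend
terraform init -migrate-state
# 3. create the new workspace, seeding it from the backup
terraform workspace new -state=default.tfstate newspace
terraform plan                 # should report no changes
# 4. uncomment the backend block and migrate back to the remote backend
terraform init -migrate-state
terraform plan
terraform workspace list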
Here's a step-by-step way to do this that's a little different from the default answer.
First, we have a default s3 backend.
terraform {
  backend "s3" {
    encrypt        = true
    bucket         = "cool-bucket-name"
    key            = "production/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "cool-dynamo-db-lock-name"
  }
}
Now, check our workspaces.
terraform workspace list
default
* weird-workspace-name
Pull the state down by first commenting out the terraform backend.
# terraform {
#   backend "s3" {
#     encrypt        = true
#     bucket         = "cool-bucket-name"
#     key            = "production/terraform.tfstate"
#     region         = "us-east-1"
#     dynamodb_table = "cool-dynamo-db-lock-name"
#   }
# }
Run the command to migrate the state locally.
terraform init -migrate-state
You can check the state and see if it's still good to go.
terraform apply
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Next, pull the state. The name of the state file is determined by what you originally gave as the key.
terraform state pull > terraform.tfstate
terraform workspace new cool-workspace-name
terraform state push terraform.tfstate
terraform apply
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Alright, now go clean up your old state.
terraform workspace select weird-workspace-name
terraform state list | cut -f 1 -d '[' | xargs -n 1 terraform state rm
terraform workspace select cool-workspace-name
terraform workspace delete weird-workspace-name
Run the terraform apply command again.
terraform apply
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
And now, to move your state back into S3, uncomment the terraform block in the backend.tf file.
terraform {
  backend "s3" {
    encrypt        = true
    bucket         = "cool-bucket-name"
    key            = "production/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "cool-dynamo-db-lock-name"
  }
}
Run:
terraform init -migrate-state
I really appreciate the answer by bhalothia; I just wanted to add another step-by-step for a slightly different case.
Depending on the backend, Terraform might be able to do the migration on its own.
For this, just update your backend configuration in the terraform block and then run the following command to migrate the state automatically:
terraform init -migrate-state
Doing so will copy the state of all defined workspaces from the old backend to the new one.
I can confirm this works on Terraform Cloud, even with multiple workspaces defined using a prefix.
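For illustration only, here is a sketch of the kind of backend block this applies to; the hostname, organization, and prefix values are placeholders, not taken from any answer above:
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "example-org"   # placeholder organization
    workspaces {
      prefix = "app-"              # placeholder prefix; matches workspaces app-dev, app-prod, ...
    }
  }
}
Edit this block to point at the new backend, then run terraform init -migrate-state and Terraform will prompt you to copy the existing workspaces across.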

Error message while deleting google_kms_crypto_key resource

I am managing KMS keys and key rings with the GCP Terraform provider:
resource "google_kms_key_ring" "vault" {
name = "vault"
location = "global"
}
resource "google_kms_crypto_key" "vault_init" {
name = "vault"
key_ring = google_kms_key_ring.vault.self_link
rotation_period = "100000s" #
}
When I ran this for the first time, I was able to create the keys and key rings successfully, and a terraform destroy completed without any errors.
The next time I do a terraform apply, I just use terraform import to import the resources from GCP, and the code executes fine.
But after a while, key version 1 was destroyed. Now every time I do a terraform destroy, I get the error below:
module.cluster_vault.google_kms_crypto_key.vault_init: Destroying... [id=projects/<MY-PROJECT>/locations/global/keyRings/vault/cryptoKeys/vault]
Error: googleapi: Error 400: The request cannot be fulfilled. Resource projects/<MY-PROJECT>/locations/global/keyRings/vault/cryptoKeys/vault/cryptoKeyVersions/1 has value DESTROYED in field crypto_key_version.state., failedPrecondition
Is there a way to suppress this particular error? Key versions 1-3 are destroyed.
At present, Cloud KMS resources cannot be deleted. This conflicts with Terraform's desired behavior of being able to completely destroy and re-create resources. You will need to use a different key name or key ring name to proceed.
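As an illustrative sketch only (not part of the original answer): one way to "use a different key name" on each re-create is to append a random suffix, for example with the random provider's random_id resource:
resource "random_id" "key_suffix" {
  byte_length = 4
}

resource "google_kms_crypto_key" "vault_init" {
  # a fresh suffix means destroy/re-create cycles never reuse a key name
  name            = "vault-${random_id.key_suffix.hex}"
  key_ring        = google_kms_key_ring.vault.self_link
  rotation_period = "100000s"
}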

Terraform Error: The "remote" backend does not support resource targeting at this time

Does Terraform Cloud support the -target flag when running terraform plan...?
It doesn't seem like there's an option to turn on or off this feature in Terraform Cloud. I'm wondering if this means that Terraform Cloud as a whole does not support Module targeting, or if there is an option in my instance of Terraform Cloud that turns off this feature.
Expected Result: Terraform successfully creates the plan.
Actual Result: Terraform reports the following error:
Error: Resource targeting is currently not supported
The "remote" backend does not support resource targeting at this time.
Edit 9/30/19:
I'm using Terraform Cloud's "Remote Executor" and Terraform version 0.12.9.
If you refer to https://github.com/hashicorp/terraform/blob/a8d01e3940b406a3c974bfaffd0ca5f534363cc7/backend/remote/backend_plan.go#L73 you will see that there is an explicit check in the Terraform Cloud backend that disables plan targeting.
In the same piece of code you can find all the checks and limitations that are currently in effect in Terraform Cloud.
If you need those features, you can migrate your backend to something like AWS S3: https://www.terraform.io/docs/backends/types/s3.html
To learn more about backends, refer to: https://www.terraform.io/docs/backends/index.html
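For reference, a minimal sketch of such an s3 backend block; the bucket, key, and table names below are placeholders, not values from the question:
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"   # placeholder bucket
    key            = "project/terraform.tfstate" # placeholder state key
    region         = "us-east-1"
    dynamodb_table = "example-terraform-locks"   # optional, for state locking (placeholder)
    encrypt        = true
  }
}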
# main.tf
resource "null_resource" "test" {
}

resource "null_resource" "test2" {
}

terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "<my-org>"
    workspaces {
      name = "<my-workspace>"
    }
  }
}
I didn't face the error above when running terraform plan:
❯ terraform plan -target=null_resource.test -out=plan.tfplan
Acquiring state lock. This may take a few moments...
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# null_resource.test will be created
+ resource "null_resource" "test" {
+ id = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
This plan was saved to: plan.tfplan
To perform exactly these actions, run the following command to apply:
terraform apply "plan.tfplan"
Releasing state lock. This may take a few moments...
Here is my version:
❯ terraform version
Terraform v0.12.6
+ provider.null v2.1.2
Your version of Terraform is out of date! The latest version
is 0.12.9. You can update by downloading from www.terraform.io/downloads.html

Terraform to destroy a particular resource

Can we destroy a particular resource?
For example: an Azure SQL database only, without affecting the SQL server or any firewalls.
Will the below work, and what is the resource address?
terraform destroy -target xxx
Yes, Terraform has the functionality to destroy selected resources, but first you have to detach the dependent resources from the target resource and then try this command: terraform destroy -target RESOURCE_TYPE.NAME
Yes, you can destroy specific resources, one at a time.
Following the Terraform Azure SQL example: https://www.terraform.io/docs/providers/azurerm/r/sql_database.html
When the resources are created, they are registered in the terraform state file.
You can list the resources in the state file :
$ terraform state list
azurerm_resource_group.test
azurerm_sql_database.test
azurerm_sql_server.test
You can then destroy the SQL database only with this command:
$ terraform destroy -target=azurerm_sql_database.test

Terraform Create a New EBS Snapshot on each Terraform apply

I am trying to use Terraform as part of my continuous deployment pipeline. I am using Terraform to create a snapshot of my production EBS volume (for backup purposes) prior to executing any other pipeline tasks.
I can get Terraform to take the snapshot; however, the issue is that Terraform will not create a new snapshot on each run. Instead, it detects that there is already an existing snapshot and does nothing.
For example.
Terraform Apply Execution 1 - Snapshot successfully taken.
Terraform Apply Execution 2 - No snapshot taken.
The code I am using for Terraform is provided below.
provider "aws" {
access_key = "..."
secret_key = "..."
region = "..."
}
resource "aws_ebs_snapshot" "example_snapshot" {
volume_id = "vol-xyz"
tags = {
Name = "continuous_deployment_backup"
}
}
Does anyone know how I can force Terraform to create a new EBS snapshot each time it is run?
To avoid repetitive manual tasks in a continuous deployment pipeline, one option is to run CloudWatch Events rules on a schedule to automate Amazon EBS snapshots.
You can check out the tutorial suggested by AWS in its CloudWatch documentation.
You can also use Amazon Data Lifecycle Manager (Amazon DLM) to automate the creation, retention, and deletion of the snapshots that back up your Amazon EBS volumes, still managed through Terraform via the aws_dlm_lifecycle_policy resource, for instance.
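A minimal sketch of such a policy, assuming the production volume is tagged Snapshot = "true" and an appropriate IAM role for DLM already exists (the role ARN, tag, and schedule values here are illustrative, not from the question):
resource "aws_dlm_lifecycle_policy" "ebs_backup" {
  description        = "Daily EBS snapshots for the CD pipeline"
  execution_role_arn = "arn:aws:iam::123456789012:role/dlm-lifecycle-role"  # assumed, pre-existing role
  state              = "ENABLED"

  policy_details {
    resource_types = ["VOLUME"]

    # only volumes carrying this tag are snapshotted
    target_tags = {
      Snapshot = "true"
    }

    schedule {
      name = "Daily snapshots"

      create_rule {
        interval      = 24
        interval_unit = "HOURS"
        times         = ["03:00"]
      }

      retain_rule {
        count = 14   # keep the last 14 snapshots
      }

      copy_tags = true
    }
  }
}
This takes a daily snapshot of every volume tagged Snapshot = "true" and keeps the most recent 14, so the backup no longer depends on each pipeline run.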
