How does "-refresh=false" flag work with terraform apply? - terraform

So there's a doc that says
The -refresh=false option is used in normal planning mode to skip the default behavior of refreshing Terraform state before checking for configuration changes.
CLI: Use terraform plan -refresh=false or terraform apply -refresh=false.
My foo resource has a password attribute that is required for a successful GET /id call (which I know is a bit of an anti-pattern, but it is what it is). What I'd like to do is test a "disaster recovery" scenario: how to rotate a password if someone deletes it outside of TF while the old password is still present in TF state, to avoid a 401 error.
For example,
resource "foo" "bar" {
...
password = "123"
}
And let's assume someone deleted 123 as a valid password outside of Terraform, so I update the config to
resource "foo" "bar" {
...
password = "456"
}
Now when I run terraform plan -target="foo.bar" I get a 401 error. However, when I run terraform plan -refresh=false -target="foo.bar" it works as expected:
~ resource "foo" "bar" {
# (4 unchanged attributes hidden)
~ password = (sensitive value)
Plan: 0 to add, 1 to change, 0 to destroy.
So everything looks great.
However, when I run terraform apply -refresh=false -target="foo.bar" I still get the same 401 error. Are there any hidden read calls or something? The fooUpdate code is pretty much empty and doesn't call the remote API.
Update: I managed to avoid the errors by running:
$ terraform plan -refresh=false -target="foo.bar" -out=my.plan
$ terraform apply my.plan
I'm still a little confused about why that works, but
$ terraform plan -refresh=false -target="foo.bar"
$ terraform apply -refresh=false -target="foo.bar"
doesn't.

Related

Should a TF provider delete a resource from state if the resource is in a "DELETING" state (similarly to a 404)?

Context: I'm creating a new TF provider.
The official TF docs say:
When you create something in Terraform but delete it manually, Terraform should gracefully handle it. If the API returns an error when the resource doesn't exist, the read function should check to see if the resource is available first. If the resource isn't available, the function should set the ID to an empty string so Terraform "destroys" the resource in state. The following code snippet is an example of how this can be implemented; you do not need to add this to your configuration for this tutorial.
if resourceDoesntExist {
  d.SetId("")
  return
}
It's pretty clear when resourceDoesntExist := response.code == 404, but what about the case where the resource is in a DELETING state (which means the resource is going to be removed in, say, 30 minutes, at which point GET requests will start returning 404)?
Should it be treated as a 404 too? And what about the corresponding data source: should it return an error?
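For what it's worth, a common pattern is to treat a terminal, irreversible state like DELETING the same as a 404 in the resource read, so the next plan proposes re-creation instead of failing. Here's a minimal sketch against terraform-plugin-sdk/v2; the apiClient type, GetFoo call, isNotFound helper, and the "DELETING" status value are all hypothetical:
func resourceFooRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
  client := meta.(*apiClient) // hypothetical API client

  foo, err := client.GetFoo(ctx, d.Id())
  if isNotFound(err) || (err == nil && foo.Status == "DELETING") {
    // Gone, or irreversibly on its way out: drop it from state so the
    // next plan proposes re-creation rather than failing the read.
    d.SetId("")
    return nil
  }
  if err != nil {
    return diag.FromErr(err)
  }

  d.Set("name", foo.Name)
  return nil
}
A data source is a different story: it has no state to reconcile, so returning an error for a missing or DELETING resource is arguably the clearer behavior there.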

aws elasticbeanstalk terraform plan does not show sensitive setting

I am using Terraform to provision Elastic Beanstalk. There have been no changes in my template, but when I run a plan it still shows me the following:
  # module.abc.aws_elastic_beanstalk_environment.this will be updated in-place
  ~ resource "aws_elastic_beanstalk_environment" "this" {
        id   = "abc"
        name = "abc"
        tags = {}
        # (19 unchanged attributes hidden)

      ~ setting {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
    }
Plan: 0 to add, 3 to change, 0 to destroy.
I do not want to apply until I know what setting change it is referring to. Can someone help me surface that setting in the terraform plan output?
You can see the configuration changes in the AWS EB console, under Change History.
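If you'd rather see it locally, one option (with the caveat that sensitive values are printed in plaintext, and assuming jq is available; the address below is taken from the plan output) is to save the plan and inspect its machine-readable form, which does not redact sensitive values:
$ terraform plan -out=tfplan.bin
$ terraform show -json tfplan.bin | jq '.resource_changes[] | select(.address == "module.abc.aws_elastic_beanstalk_environment.this").change'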

Is there a way to add comments for Terraform to display at the end of 'terraform apply' logs?

I want to be able to add a customised comment like 'Please update xxx manually in AWS console and re-run Terraform apply. Ignore this message if not applicable'.
Is there a way to configure something like this in a Terraform script?
You could use outputs in the root module, which are then printed to the terminal when you run terraform apply.
As a short example:
resource "null_resource" "foo" {
}
output "next_steps" {
value = "Please update xxx manually in AWS console and re-run Terraform apply. Ignore this message if not applicable"
}
This will output the following on creation with terraform apply:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # null_resource.foo will be created
  + resource "null_resource" "foo" {
      + id = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + next_steps = "Please update xxx manually in AWS console and re-run Terraform apply. Ignore this message if not applicable"

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

null_resource.foo: Creating...
null_resource.foo: Creation complete after 0s [id=347317219666477450]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

next_steps = "Please update xxx manually in AWS console and re-run Terraform apply. Ignore this message if not applicable"
If you rerun terraform apply then you'll see this:
null_resource.foo: Refreshing state... [id=347317219666477450]

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

next_steps = "Please update xxx manually in AWS console and re-run Terraform apply. Ignore this message if not applicable"

Terraform doesn't seem to pick up manual changes

I have a very frustrating Terraform issue. I made some changes to my Terraform script, which failed when I applied the plan. I've gone through a bunch of machinations and probably made the situation worse, as I ended up manually deleting a bunch of AWS resources while trying to resolve this.
So now I am unable to use Terraform at all: refresh, plan, and destroy all fail with the same error.
The Situation
I have a list of Fargate services, and a set of maps which correlate different features of the Fargate services, such as the "Target Group" for the load balancer (I've provided some code below). The problem appears to be that Terraform is not picking up that these resources have been manually deleted, or is somehow getting confused because they don't exist. At this point, if I run a refresh, plan, or destroy, I get an error stating that a specific list is empty, even though it isn't (or shouldn't be).
In the failed run I added a new service to the list below, along with a new URL (see code below).
Objective
At this point I would settle for destroying the entire environment (it's my dev environment); ideally, though, I want to get the system working again so that Terraform detects the changes and works properly.
Terraform Script is Valid
I have reverted my Terraform scripts back to the last known good version. I have run the good version against our staging environment and it works fine.
Configuration Info
MacOS Mojave 10.14.6 (18G103)
Terraform v0.12.24.
provider.archive v1.3.0
provider.aws v2.57.0
provider.random v2.2.1
provider.template v2.1.2
The Terraform state file is stored in an S3 bucket, and terraform init --reconfigure has been called.
What I've done
I was originally getting a similar error, but in a different location. After many hours of Googling and trying things (which I didn't write down), I decided to manually remove the AWS resources associated with the problematic code (the ALB, target groups, and security groups).
Example Terraform Script
Unfortunately I can't post the actual script as it is private, but I've posted what I believe are the pertinent parts and have redacted some info. The reason I mention this is that any syntax-type error you might see could be caused by this redaction; as I stated above, the script works fine when run in our staging environment.
globalvars.tf
In the root directory. In the failed Terraform run I added a new name, "edd", to the service_names list (as the first element). In service_name_map_2_url I added the new entry (edd = "edd") as the last entry. I'm not sure whether the fact that I added these elements in a different 'order' is the problem, although it really shouldn't be, since I access the map by name and not by index. (The change is reconstructed after the code below.)
variable "service_names" {
type = list(string)
description = "This is a list/array of the images/services for the cluster"
default = [
"alert",
"alert-config"
]
}
variable service_name_map_2_url {
type = map(string)
description = "This map contains the base URL used for the service"
default = {
alert = "alert"
alert-config = "alert-config"
}
}
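For reference, the failed run's change would have looked roughly like this (reconstructed from the description above, so treat the exact placement as an assumption):
variable "service_names" {
  ...
  default = [
    "edd",
    "alert",
    "alert-config"
  ]
}

variable "service_name_map_2_url" {
  ...
  default = {
    alert        = "alert"
    alert-config = "alert-config"
    edd          = "edd"
  }
}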
alb.tf
In modules/alb. In this module we create an ALB and then a target group for each service, which looks like this. The items from globalvars.tf are passed into this module.
locals {
  numberOfServices = length(var.service_names)
}

resource "aws_alb" "orchestration_alb" {
  name            = "orchestration-alb"
  subnets         = var.public_subnet_ids
  security_groups = [var.alb_sg_id]

  tags = {
    environment = var.environment
    group       = var.tag_group_name
    app         = var.tag_app_name
    contact     = var.tag_contact_email
  }
}

resource "aws_alb_target_group" "orchestration_tg" {
  count                = local.numberOfServices
  name                 = "${var.service_names[count.index]}-tg"
  port                 = 80
  protocol             = "HTTP"
  vpc_id               = var.vpc_id
  target_type          = "ip"
  deregistration_delay = 60

  tags = {
    environment = var.environment
    group       = var.tag_group_name
    app         = var.tag_app_name
    contact     = var.tag_contact_email
  }

  health_check {
    path                = "/${var.service_name_map_2_url[var.service_names[count.index]]}/health"
    port                = var.app_port
    protocol            = "HTTP"
    healthy_threshold   = 2
    unhealthy_threshold = 5
    interval            = 30
    timeout             = 5
    matcher             = "200-308"
  }
}
output.tf
This is the output from alb.tf; other things are output as well, but this is the one that matters for this issue.
output "target_group_arn_suffix" {
value = aws_alb_target_group.orchestration_tg.*.arn_suffix
}
cloudwatch.tf
In modules/cloudwatch. I attempt to create a dashboard.
data "template_file" "Dashboard" {
template = file("${path.module}/dashboard.json.template")
vars = {
...
alert-tg = var.target_group_arn_suffix[0]
alert-config-tg = var.target_group_arn_suffix[1]
edd-cluster-name = var.ecs_cluster_name
alb-arn-suffix = var.alb-arn-suffix
}
}
Error
When I run terraform refresh (or plan, or destroy) I get the following error (I get the same error for alert-config as well):
Error: Invalid index

  on modules/cloudwatch/cloudwatch.tf line 146, in data "template_file" "Dashboard":
 146:     alert-tg = var.target_group_arn_suffix[0]
    |----------------
    | var.target_group_arn_suffix is empty list of string

The given key does not identify an element in this collection value.
AWS Environment
I have manually deleted the ALB, dashboard, and all target groups. I would expect (and this has worked in the past) that Terraform would detect this and update its state file appropriately, such that when running a plan it would know it has to create the ALB and target groups.
Thank you
Terraform trusts its state as the single source of truth. Using Terraform in the presence of manual changes is possible, but problematic.
If you manually remove infrastructure, you need to run terraform state rm [resource path] on each manually removed resource.
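For example, given the resources above (the module paths here are assumptions; terraform state list shows the real addresses):
$ terraform state list | grep orchestration
$ terraform state rm 'module.alb.aws_alb.orchestration_alb'
$ terraform state rm 'module.alb.aws_alb_target_group.orchestration_tg'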
Gruntwork has what they call The Golden Rule of Terraform:
The master branch of the live repository should be a 1:1 representation of what’s actually deployed in production.

resource output value from one plan into another plan

I have two plans, in which I am creating two different servers (just as an example; the real setup is more complex). In one plan, I output the value of the security group like this:
output "security_group_id" {
value = "${aws_security_group.security_group.id}"
}
I have a second plan in which I want to use that value. How can I achieve this? I have tried a couple of things, but nothing worked for me.
I know how to use the output value returned by a module, but I don't know how to use the output of one plan in another.
When an output is used in the top-level module of a configuration (the directory where you run terraform plan), its value is recorded in the Terraform state.
In order to use this value from another configuration, the state must be published to a location where it can be read by the other configuration. The usual way to achieve this is to use Remote State.
With remote state enabled for the first configuration, it becomes possible to read the resulting values from the second configuration using the terraform_remote_state data source.
For example, it's possible to keep the state for the first configuration in Amazon S3 by using a backend configuration like the following:
terraform {
  backend "s3" {
    bucket = "example-s3-bucket"
    key    = "example-bucket-key"
    region = "us-east-1"
  }
}
After adding this to the first configuration, Terraform will prompt you to run terraform init to initialize the new backend, which includes migrating the existing state to be stored on S3.
Then in the second configuration this can be retrieved by providing the same configuration to the terraform_remote_state data source:
data "terraform_remote_state" "example" {
backend = "s3"
config {
bucket = "example-s3-bucket"
key = "example-bucket-key"
region = "us-east-1"
}
}
resource "aws_instance" "foo" {
# ...
vpc_security_group_ids = "${data.terraform_remote_state.example.security_group_id}"
}
Note that since the second configuration reads the state from the first, it is necessary to terraform apply the first configuration so that this value is actually recorded in the state. The second config must be re-applied any time the outputs change in the first.
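End to end, the workflow looks roughly like this (a sketch; run each block from its own configuration directory):
# In the first configuration (publishes the output):
$ terraform init    # migrates existing state to the S3 backend
$ terraform apply

# In the second configuration (consumes the output):
$ terraform init
$ terraform apply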
For the local backend the process is the same. As a first step, we declare the following snippet to publish the state.
terraform {
  backend "local" {
    path = "./terraform.tfstate"
  }
}
When you execute the terraform init and terraform apply commands, observe that a new terraform.tfstate file is created in the .terraform directory; it contains the backend information and tells Terraform which tfstate file to use.
Now, in the second configuration, we use the terraform_remote_state data source to import the outputs, with this snippet:
data "terraform_remote_state" "test" {
backend = "local"
config {
path = "${path.module}/../regionalvpc/terraform.tfstate"
}
}
