I'm working on importing one of our RDS instances into Terraform.
Terraform plan shows ~ and -
~ maintenance_window = "sat:06:10-sat:06:40" -> (known after apply)
- max_allocated_storage = 0 -> null
Neither of these values is defined in the configuration. I would like to understand why it is showing -, and whether we also need to configure null values in the module.
Using Terraform 0.12.28
Basically:
~ means the value is in the state and will change after apply
- means the value is in the state and you are removing it (setting it to null)
maintenance_window is showing ~ because its value is going to change; in your specific case the value is computed and hence only known after applying the changes. From the docs:
maintenance_window - (Optional) The window to perform maintenance in. Syntax: "ddd:hh24:mi-ddd:hh24:mi". Eg: "Mon:00:00-Mon:03:00". See RDS Maintenance Window docs for more information.
If that window is fine for you, you can specify that as an argument or let Terraform change it to its default value.
max_allocated_storage is showing - because when you imported the resource into the state, Terraform imported all the arguments it knows about, but you are not specifying that one in your configuration. In particular, from the docs:
max_allocated_storage - (Optional) When configured, the upper limit to which Amazon RDS can automatically scale the storage of the DB instance. Configuring this will automatically ignore differences to allocated_storage. Must be greater than or equal to allocated_storage or 0 to disable Storage Autoscaling.
In this case you can set max_allocated_storage = 0 so that the plan shows no change for that argument.
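For example, a minimal sketch of the imported resource block with both arguments pinned (the resource name and any other arguments are placeholders here; the values come from your own plan output):

resource "aws_db_instance" "this" {
  # ... your existing arguments ...

  # Pin the window Terraform reported so it is no longer "(known after apply)".
  maintenance_window = "sat:06:10-sat:06:40"

  # 0 disables Storage Autoscaling and matches the imported state,
  # so the "- max_allocated_storage = 0 -> null" line goes away.
  max_allocated_storage = 0
}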
Related
I am using Terraform to provision Elastic Beanstalk. There have been no changes in my template, but when I run a plan it shows me the following:
  # module.abc.aws_elastic_beanstalk_environment.this will be updated in-place
  ~ resource "aws_elastic_beanstalk_environment" "this" {
        id   = "abc"
        name = "abc"
        tags = {}
        # (19 unchanged attributes hidden)

      ~ setting {
          # At least one attribute in this block is (or was) sensitive,
          #   so its contents will not be displayed.
        }
    }

Plan: 0 to add, 3 to change, 0 to destroy.
I do not want to apply until I know which setting change it is referring to. Can someone help me get that setting to show in the terraform plan output?
You can see the configuration changes in the AWS Elastic Beanstalk console, under Change History.
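If you want to stay inside Terraform, one option (a sketch; behaviour can vary by Terraform and provider version) is to save the plan and inspect its JSON representation, which is not redacted the way the human-readable output is:

terraform plan -out=tfplan
terraform show -json tfplan > tfplan.json
# search tfplan.json for the aws_elastic_beanstalk_environment change

Keep in mind the JSON file may contain the sensitive values in plain text, so treat it accordingly.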
Recently upgraded my Terraform project to AWS provider 3.74.0 and TF 1.1.4 (from much older versions).
I'm suddenly getting this autoscaling schedule reporting external changes:
resource "aws_autoscaling_schedule" "api-svc-tst-down-schedule" {
scheduled_action_name = "api-svc-tst-down-schedule"
min_size = 0
max_size = 1
desired_capacity = 0
// Minute Hour DayOfMonth Month DayOfWeek
recurrence = "0 13 * * *"
autoscaling_group_name = aws_autoscaling_group.api-svc-tst-asg.name
lifecycle {
ignore_changes = [start_time]
}
}
The plan command is now reporting:
Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the
last "terraform apply":

  # aws_autoscaling_schedule.api-svc-tst-down-schedule has changed
  ~ resource "aws_autoscaling_schedule" "api-svc-tst-down-schedule" {
        id         = "api-svc-tst-down-schedule"
      ~ start_time = "2022-01-31T13:00:00Z" -> "2022-02-01T13:00:00Z"
        # (7 unchanged attributes hidden)
    }
If I apply the plan, it doesn't appear that Terraform changes the ASG (I assume it just updates its state file), and the notification goes away until the next day.
I note that the AWS console does show that the scheduled action has a Start time, which AWS appears to set itself.
I tried adding start_time to ignore_changes, but it didn't seem to make a difference; the resource is still reported as externally changed.
Is this a known issue with Terraform (I'm not seeing anything via googling)?
How can I prevent TF from being marked as externally changed?
Edit: I also tried setting the start_time attribute explicitly, as suggested in the comments, but the detected-changes warning came back the next day.
Edit 2: I also tried deleting and re-adding the resource via Terraform, but it still gets marked as changed the next day.
This undesirable behavior was an intentional change introduced in Terraform version 0.15.4.
It cannot currently be avoided. The only workaround is that all team members (and tooling) must be educated to ignore "expected drift".
Note that this "expected drift" behavior is not limited to aws_autoscaling_schedule resources, or even to the AWS provider. It happens across platforms and resource types, for any resource where the cloud vendor updates an attribute after the resource is created.
Many resources report drift immediately after being created. Often you can get rid of the report by immediately doing an apply or refresh to update the Terraform state; as long as AWS doesn't make further changes to those attributes, you won't see the resource reported as changed again.
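For drift you simply want to accept into state, Terraform 0.15.4 also added refresh-only mode. A sketch of the least intrusive way to sync state without touching real infrastructure:

terraform plan -refresh-only    # review what changed outside Terraform
terraform apply -refresh-only   # record those values in state, change nothing real

This only silences the report until the vendor changes the attribute again.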
Other resource attributes (like aws_autoscaling_schedule's start_time) are updated by the cloud vendor regularly. These resources will intermittently report "Objects have changed outside of Terraform" whenever you run plan.
There is a locked open issue tracking this: https://github.com/hashicorp/terraform/issues/28803.
Note that the issue is locked because HashiCorp got tired of people telling them how negatively this affects their teams.
I am new to Terraform and have a problem creating EC2 instances. I pass private IPs from a variable file (like an IP pool) and create two EC2 instances. The first time I run terraform apply it creates the two instances, but the second time I run terraform apply it says the IP is already in use; it does not move on to the 3rd and 4th IPs. I want the second run to take the 3rd and 4th IPs. Below are my definitions. Could you please suggest a fix?
var.tf
variable "private_ips" {
default = {
"0" = "x.x.x.x"
"1" = "x.x.x.x"
"2" = "x.x.x.x"
"3" = "x.x.x.x"
}
}
main.tf
private_ip = "${lookup(var.private_ips,count.index)}"
Terraform is a stateful tool: whenever it creates a piece of infrastructure, it keeps track of it (in what is called the Terraform state). The whole idea is that the first terraform apply creates the infrastructure, and any subsequent run just updates whatever was created before to match the current .tf files; it will not allocate a fresh set of IPs and create additional instances on a second run.
You might want to read up on workspaces, whose whole idea is using the same configuration (.tf files) against multiple independent copies of the target infrastructure, typically for dev/test/prod setups. It might be what you are after.
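As a rough sketch of the workspace approach (the variable names, AMI/subnet references, and IPs here are hypothetical, and this uses 0.12+ syntax), you could key the IP pool by workspace so each independent copy gets its own pair of addresses:

variable "private_ip_pools" {
  type = map(list(string))
  default = {
    first  = ["10.0.1.11", "10.0.1.12"]
    second = ["10.0.1.13", "10.0.1.14"]
  }
}

resource "aws_instance" "app" {
  count         = 2
  ami           = var.ami_id      # hypothetical variable
  instance_type = "t3.micro"
  subnet_id     = var.subnet_id   # hypothetical variable
  private_ip    = var.private_ip_pools[terraform.workspace][count.index]
}

Then terraform workspace new first followed by terraform apply creates the first pair, and terraform workspace new second followed by terraform apply creates the second pair, each tracked in its own state.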
I have a very frustrating Terraform issue. I made some changes to my Terraform script, which failed when I applied the plan. I've gone through a bunch of machinations and probably made the situation worse, as I ended up manually deleting a bunch of AWS resources while trying to resolve this.
So now I am unable to use Terraform at all; refresh, plan, and destroy all produce the same error.
The Situation
I have a list of Fargate services and a set of maps which correlate different features of the Fargate services, such as the "Target Group" for the load balancer (I've provided some code below). The problem appears to be that Terraform is not picking up that these resources have been manually deleted, or is somehow getting confused because they don't exist. At this point, if I run a refresh, plan, or destroy I get an error stating that a specific list is empty, even though it isn't (or should not be).
In the failed run I added a new service to the list below along with a new url (see code below)
Objective
At this point I would settle for destroying the entire environment (it's my dev environment); ideally, however, I want to get the system working so that Terraform detects the changes and works properly.
Terraform Script is Valid
I have reverted my Terraform scripts back to the last known good version. I have run the good version against our staging environment and it works fine.
Configuration Info
MacOS Mojave 10.14.6 (18G103)
Terraform v0.12.24.
provider.archive v1.3.0
provider.aws v2.57.0
provider.random v2.2.1
provider.template v2.1.2
The Terraform state file is stored in an S3 bucket, and terraform init -reconfigure has been run.
What I've done
I was originally getting a similar error, but in a different location. After many hours of Googling and trying things (which I didn't write down), I decided to manually remove the AWS resources associated with the problematic code (the ALB, target groups, and security groups).
Example Terraform Script
Unfortunately I can't post the actual script as it is private, but I've posted what I believe are the pertinent parts and redacted some info. I mention this because any syntax error you might see would be caused by the redaction; as stated above, the script works fine when run against our staging environment.
globalvars.tf
In the root directory. In the failed Terraform run, I added a new name ("edd") as the first element of the service_names list, and added the corresponding entry (edd = "edd") as the last entry of service_name_map_2_url. I'm not sure whether adding these elements in a different 'order' is the problem, although it really shouldn't be, since I access the map by name and not by index.
variable "service_names" {
type = list(string)
description = "This is a list/array of the images/services for the cluster"
default = [
"alert",
"alert-config"
]
}
variable service_name_map_2_url {
type = map(string)
description = "This map contains the base URL used for the service"
default = {
alert = "alert"
alert-config = "alert-config"
}
}
alb.tf
In modules/alb. In this module we create an ALB and then a target group for each service, which looks like this. The items from globalvars.tf are passed into this module.
locals {
  numberOfServices = length(var.service_names)
}

resource "aws_alb" "orchestration_alb" {
  name            = "orchestration-alb"
  subnets         = var.public_subnet_ids
  security_groups = [var.alb_sg_id]

  tags = {
    environment = var.environment
    group       = var.tag_group_name
    app         = var.tag_app_name
    contact     = var.tag_contact_email
  }
}

resource "aws_alb_target_group" "orchestration_tg" {
  count                = local.numberOfServices
  name                 = "${var.service_names[count.index]}-tg"
  port                 = 80
  protocol             = "HTTP"
  vpc_id               = var.vpc_id
  target_type          = "ip"
  deregistration_delay = 60

  tags = {
    environment = var.environment
    group       = var.tag_group_name
    app         = var.tag_app_name
    contact     = var.tag_contact_email
  }

  health_check {
    path                = "/${var.service_name_map_2_url[var.service_names[count.index]]}/health"
    port                = var.app_port
    protocol            = "HTTP"
    healthy_threshold   = 2
    unhealthy_threshold = 5
    interval            = 30
    timeout             = 5
    matcher             = "200-308"
  }
}
output.tf
This is the output from alb.tf; other things are output as well, but this is the one that matters for this issue.
output "target_group_arn_suffix" {
value = aws_alb_target_group.orchestration_tg.*.arn_suffix
}
cloudwatch.tf
In modules/cloudwatch. I attempt to create a dashboard
data "template_file" "Dashboard" {
template = file("${path.module}/dashboard.json.template")
vars = {
...
alert-tg = var.target_group_arn_suffix[0]
alert-config-tg = var.target_group_arn_suffix[1]
edd-cluster-name = var.ecs_cluster_name
alb-arn-suffix = var.alb-arn-suffix
}
}
Error
When I run terraform refresh (or plan, or destroy) I get the following error (I get the same error for alert-config as well):
Error: Invalid index

  on modules/cloudwatch/cloudwatch.tf line 146, in data "template_file" "Dashboard":
 146:   alert-tg = var.target_group_arn_suffix[0]
    |----------------
    | var.target_group_arn_suffix is empty list of string

The given key does not identify an element in this collection value.
AWS Environment
I have manually deleted the ALB, the dashboard, and all target groups. I would expect (and this has worked in the past) that Terraform would detect this and update its state file appropriately, so that when running a plan it would know it has to create the ALB and target groups.
Thank you
Terraform trusts its state as the single source of truth. Using Terraform in the presence of manual change is possible, but problematic.
If you manually remove infrastructure, you need to run terraform state rm [resource path] on the manually removed resource.
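For example (the addresses below are illustrative; run terraform state list to find the exact ones in your state file):

terraform state list | grep orchestration
terraform state rm 'module.alb.aws_alb.orchestration_alb'
terraform state rm 'module.alb.aws_alb_target_group.orchestration_tg'

Removing a counted resource by its bare address removes all of its instances from state. After that, refresh and plan should stop referencing the deleted infrastructure and offer to recreate it.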
Gruntwork has what they call The Golden Rule of Terraform:
The master branch of the live repository should be a 1:1 representation of what’s actually deployed in production.
I have 3 environments for my infrastructure. All of them are the same, just at different sizes. I understand this is a good use case for Terraform workspaces, and indeed it works well in that regard. But please correct me if this is not the right way to go.
Now my only issue is with managing DNS within the workspaces. I use the Google provider, which works with two types of resources: a google_dns_managed_zone, which represents the zone, and a google_dns_record_set for each DNS record.
Note that the record set type needs to have a reference to the managed zone type.
With that in mind, I need to manage the DNS zone from the production environment. I can't share that resource in the other workspaces because I should be able to destroy the dev or staging workspace without destroying the DNS zone.
I try to solve that issue with count, using it as a boolean as shown in the code below. I find it pretty hackish, but that's what I've found in the Terraform community. Any improvement is welcome.
That allows me to have the zone and the production records (like the MX record shown below as an example) present only in the prod workspace.
But then I am stuck when it comes to managing record sets that exist only in a specific workspace. I need that, for example, to create an nginx instance in the dev workspace and automatically create a DNS record set for it, like dev.example.com.
For that I need to access the managed zone resource. As shown below, I use terraform_remote_state to access the resource from the prod workspace. As far as I understand, that works via an output, which you can see below. When I select the prod workspace, I can indeed output the managed zone, and if I then select another workspace, the remote state retrieves the managed zone from prod successfully. But Terraform fails on the output line, since the resource is only present in the prod workspace, does not exist in any other workspace, and thus cannot be output.
So it feels a bit nonsensical, and I don't understand whether there is a better way to achieve this. I did a fair bit of research and asked the community, but could not find an answer. It seems to me that managing DNS is common to all infrastructures and should be pretty well covered. What am I doing wrong and how should it be done?
locals {
  environment = "${terraform.workspace}"

  dns_zone_managers = {
    "dev"     = "0"
    "staging" = "0"
    "prod"    = "1"
  }

  dns_zone_manager = "${lookup(local.dns_zone_managers, local.environment)}"
}

resource "google_dns_managed_zone" "base_zone" {
  name     = "base_zone"
  dns_name = "example.com."
  count    = "${local.dns_zone_manager}"
}

resource "google_dns_record_set" "mx" {
  name         = "${google_dns_managed_zone.base_zone.dns_name}"
  managed_zone = "${google_dns_managed_zone.base_zone.name}"
  type         = "MX"
  ttl          = 300

  rrdatas = [
    "10 spool.mail.example.com.",
    "50 fb.mail.example.com."
  ]

  count = "${local.dns_zone_manager}"
}

data "terraform_remote_state" "dns" {
  backend   = "local"
  workspace = "prod"
}

output "dns_zone_name" {
  value = "${google_dns_managed_zone.base_zone.*.name[0]}"
}
Then I can introduce record sets in a specific workspace only, using count again and referring to the managed zone through the remote state, like so:
resource "google_dns_record_set" "a" {
name = "dev"
managed_zone = "${data.terraform_remote_state.dns.dns_zone_name}"
type = "A"
ttl = 300
rrdatas = ["1.2.3.4"]
}