I'm having an issue using Terraform workspaces across a region pair: one workspace for South Central US and one for North Central US. Everything works great until it comes to zones. Take a Public IP, for example: in South Central US I set zones, but North Central US won't accept any zone configuration (empty, 0, 1, 2, or 3) because it doesn't support zones yet. I'm hoping to set zones in SCU now and eventually do the same in NCU when they become available.
How do I use the same code in both regions? I know I can use workspace vars for values, but in this case it is an entire line of code. It seems like there should be an easy answer I'm just not thinking of.
I asked this question differently and got the answer I was looking for: instead of trying to put a value on the zones config line, just set it to null. Terraform then omits that argument entirely when it sends the request. Simple, and it worked.
Here's an example from my variables.tfvars:

firewalla-AvailabilityZone = {
  hub-ncu = null
  hub-scu = [1]
}

firewallb-AvailabilityZone = {
  hub-ncu = null
  hub-scu = [3]
}
Then in the resource:
zones = var.firewalla-AvailabilityZone[terraform.workspace]
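For completeness, here is roughly what the full pattern looks like. The variable declaration below matches that .tfvars, and the azurerm_public_ip resource is only illustrative (the resource name and the location/resource group variables are not from my real config, and it assumes a provider version where azurerm_public_ip takes a zones list); when the looked-up value is null, Terraform omits the zones argument entirely:

variable "firewalla-AvailabilityZone" {
  description = "Availability zones per workspace; null means the region does not support zones yet"
  type        = map(list(string))
}

resource "azurerm_public_ip" "example" {
  name                = "example-pip"
  location            = var.location
  resource_group_name = var.resource_group_name
  allocation_method   = "Static"
  sku                 = "Standard"
  zones               = var.firewalla-AvailabilityZone[terraform.workspace]
}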
Recently upgraded my Terraform project to AWS provider 3.74.0 and TF 1.1.4 (from much older versions).
I'm suddenly getting this autoscaling schedule reporting external changes:
resource "aws_autoscaling_schedule" "api-svc-tst-down-schedule" {
scheduled_action_name = "api-svc-tst-down-schedule"
min_size = 0
max_size = 1
desired_capacity = 0
// Minute Hour DayOfMonth Month DayOfWeek
recurrence = "0 13 * * *"
autoscaling_group_name = aws_autoscaling_group.api-svc-tst-asg.name
lifecycle {
ignore_changes = [start_time]
}
}
The plan command is now reporting:
Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the
last "terraform apply":

  # aws_autoscaling_schedule.api-svc-tst-down-schedule has changed
  ~ resource "aws_autoscaling_schedule" "api-svc-tst-down-schedule" {
        id         = "api-svc-tst-down-schedule"
      ~ start_time = "2022-01-31T13:00:00Z" -> "2022-02-01T13:00:00Z"
        # (7 unchanged attributes hidden)
    }
If I apply the plan, it doesn't appear that TF changes the ASG (I'm assuming it just updates its state file) and the notification goes away until the next day.
I note that the AWS console does show a Start time on the scheduled action, which AWS itself appears to be setting.
I tried adding start_time to ignore_changes, but it didn't seem to make a difference; the resource is still reported as externally changed.
Is this a known issue with Terraform (I'm not seeing anything via googling)?
How can I prevent the resource from being marked as externally changed?
Edit: I also tried setting the start_time attribute as suggested in the comments. But the detected changes warning came back the next day.
Edit 2: I also tried deleting and re-adding the resource via Terraform, but it still gets marked as changed the next day.
This undesirable behavior was an intentional change introduced in Terraform version 0.15.4.
It cannot currently be avoided. The only workaround is that all team members (and tooling) must be educated to ignore "expected drift".
Note that this "expected drift" behavior is not limited to just aws_autoscaling_schedule resources, or even just the AWS provider. The issue happens across many providers and resource types, for any resource where the cloud vendor updates an attribute after the resource is created.
Many resources will report drift immediately after being created. Often you can get rid of the report by doing an apply or refresh right away to update the Terraform state; as long as AWS doesn't change those attributes again, you won't see the resource reported as changed.
Other resource attributes (like aws_autoscaling_schedule's start_time) are updated by the cloud vendor regularly. These resources will intermittently report "Objects have changed outside of Terraform" whenever you run plan.
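If the recurring report itself is the problem, a refresh-only apply (available since Terraform 0.15.4) records the vendor-set value in state without proposing any infrastructure changes; note it only quiets the message until AWS bumps start_time again:

terraform apply -refresh-only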
There is an open (but locked) GitHub issue tracking this: https://github.com/hashicorp/terraform/issues/28803.
Note that the issue is locked because HashiCorp got tired of people telling them how negatively this affects their teams.
For work I'm learning Go and Terraform. I read in the HashiCorp tutorial how the different contexts are defined, but I'm not clear on exactly when these contexts are called and what triggers them.
From looking at the HashiCups example, it looks like when you put this:
resource "hashicups_order" "new" {
items {
coffee {
id = 3
}
quantity = 2
}
items {
coffee {
id = 2
}
quantity = 2
}
}
in your Terraform file, Terraform is going to take hashicups_order, strip the hashicups prefix, and look in the provider for a resource called order. The order resource provides the following contexts:
func resourceOrder() *schema.Resource {
    return &schema.Resource{
        CreateContext: resourceOrderCreate,
        ReadContext:   resourceOrderRead,
        UpdateContext: resourceOrderUpdate,
        DeleteContext: resourceOrderDelete,
        // ... schema and other fields omitted
    }
}
What isn't clear to me is what triggers each context. From that example, it seems like increasing the value of quantity will trigger the update context, and if this were the first run and no previous state existed, it would trigger create, etc.
However, in my case the resource is a server, and one API resource I want to present to the user is server power control. You would never "create/destroy" this resource... or would you? You can read the current power state and you can update it, but, at least intuitively, you wouldn't create or destroy it. I'm having trouble wrapping my head around how this would be modeled in Terraform/Go. I conceptually understand the coffee resource in the example, but I'm having trouble making the leap to something like a server power capability, or other things without a clear mapping to the different CRUD operations.
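One common way such a capability gets modeled (a sketch only; every name, including the resource and the backing API, is hypothetical and not from the HashiCups tutorial) is a resource whose Create and Update both just enforce the desired power state, whose Read refreshes it from the API, and whose Delete only removes the resource from state:

package provider

import (
    "context"

    "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func resourceServerPower() *schema.Resource {
    return &schema.Resource{
        CreateContext: resourceServerPowerSet,    // first apply: enforce the desired power state
        ReadContext:   resourceServerPowerRead,   // plan/refresh: query the real power state
        UpdateContext: resourceServerPowerSet,    // power_state changed in config: enforce it again
        DeleteContext: resourceServerPowerDelete, // removed from config: just forget it
        Schema: map[string]*schema.Schema{
            "server_id":   {Type: schema.TypeString, Required: true, ForceNew: true},
            "power_state": {Type: schema.TypeString, Required: true}, // e.g. "on" or "off"
        },
    }
}

func resourceServerPowerSet(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
    // Call the (hypothetical) server API here to apply d.Get("power_state").
    d.SetId(d.Get("server_id").(string))
    return resourceServerPowerRead(ctx, d, m)
}

func resourceServerPowerRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
    // Call the (hypothetical) server API here and d.Set("power_state", ...) with the real value.
    return nil
}

func resourceServerPowerDelete(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
    // Nothing real to tear down; clearing the ID just makes Terraform forget the resource.
    d.SetId("")
    return nil
}

Whether Delete should actually power the machine down, or simply forget the resource as above, is a design choice each provider makes for these setting-style resources.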
I am new to Terraform and have a problem creating EC2 instances. I pass private IP values from a variable file (like an IP pool) and create two EC2 instances. The first time I run terraform apply it creates the two instances, but when I run terraform apply a second time it says the IPs are already in use; it doesn't move on to the 3rd and 4th IPs. I want the second run to take the 3rd and 4th IPs. My definitions are below. Could you please suggest something?
var.tf
variable "private_ips" {
default = {
"0" = "x.x.x.x"
"1" = "x.x.x.x"
"2" = "x.x.x.x"
"3" = "x.x.x.x"
}
}
main.tf
private_ip = "${lookup(var.private_ips,count.index)}"
Terraform is a stateful tool: whenever it creates some infrastructure, it keeps track of it (in what is called the Terraform state). The whole idea is that the first terraform apply creates the infrastructure, and any subsequent run just updates whatever was created before (with whatever changes were made in the .tf files).
You might want to read up on workspaces, which are about running the same configuration (.tf files) against multiple independent copies of the target infrastructure. They are typically used for dev/test/prod kinds of setups, and might be what you are after.
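For instance, a sketch only (placeholder IPs, an assumed aws_instance resource, and hypothetical ami/subnet variables): each workspace could get its own slice of the pool, so applying in a second workspace uses the 3rd and 4th addresses while the first workspace keeps the 1st and 2nd.

variable "private_ips" {
  default = {
    default = ["x.x.x.1", "x.x.x.2"]
    second  = ["x.x.x.3", "x.x.x.4"]
  }
}

resource "aws_instance" "this" {
  count         = 2
  ami           = var.ami_id
  instance_type = "t3.micro"
  subnet_id     = var.subnet_id
  private_ip    = var.private_ips[terraform.workspace][count.index]
}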
I have 3 environments for my infrastructure. All of them are the same, just with different sizes. I understand this is a good use case for Terraform workspaces, and indeed it works well in that regard. But please correct me if this is not the right way to go.
Now my only issue is with managing the DNS within the workspaces. I use the Google provider, which works with 2 types of resources: a google_dns_managed_zone, which represents the zone, and a google_dns_record_set for each DNS record.
Note that the record set type needs to have a reference to the managed zone type.
With that in mind, I need to manage the DNS zone from the production environment. I can't share that resource in the other workspaces because I should be able to destroy the dev or staging workspace without destroying the DNS zone.
I try to solve that issue with count, using it as a boolean as shown in the code below. I find it pretty hackish, but that's what I've found in the Terraform community; any improvement is welcome.
That allows me to have the zone and the production records (like MX shown below as example) only present in the prod workspace.
But then I am stuck when it comes to managing record sets in only a specific workspace. I need that, for example, when creating an nginx instance in the dev workspace and automatically creating a DNS record set for it, like dev.example.com.
For that I need to access the managed zone resource. As shown below, I use terraform_remote_state in order to access the resource from the prod workspace. As far as I understand, that works through an output, which you can see below. When I select the prod workspace, I can indeed output the managed zone, and if I then select another workspace, the remote state retrieves the managed zone from prod successfully. But my issue is that Terraform fails on the output line itself, since the resource only exists in the prod workspace and there is nothing to output in any other workspace.
So it feels a bit nonsensical, and I don't understand whether there is a better way to achieve this. I did a fair bit of research and asked the community, but could not find an answer. It seems to me that managing DNS is common to all infrastructures and should be pretty well covered. What am I doing wrong, and how should this be done?
locals {
  environment = "${terraform.workspace}"

  dns_zone_managers = {
    "dev"     = "0"
    "staging" = "0"
    "prod"    = "1"
  }

  dns_zone_manager = "${lookup(local.dns_zone_managers, local.environment)}"
}

resource "google_dns_managed_zone" "base_zone" {
  name     = "base_zone"
  dns_name = "example.com."
  count    = "${local.dns_zone_manager}"
}

resource "google_dns_record_set" "mx" {
  name         = "${google_dns_managed_zone.base_zone.dns_name}"
  managed_zone = "${google_dns_managed_zone.base_zone.name}"
  type         = "MX"
  ttl          = 300

  rrdatas = [
    "10 spool.mail.example.com.",
    "50 fb.mail.example.com."
  ]

  count = "${local.dns_zone_manager}"
}

data "terraform_remote_state" "dns" {
  backend   = "local"
  workspace = "prod"
}

output "dns_zone_name" {
  value = "${google_dns_managed_zone.base_zone.*.name[0]}"
}
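One common 0.11-era workaround for that failing output (shown here only as a sketch, not something I have in place) is to splat and join, so the output simply evaluates to an empty string in workspaces where the zone has count = 0:

output "dns_zone_name" {
  value = "${join("", google_dns_managed_zone.base_zone.*.name)}"
}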
Then I can introduce record sets in a specific workspace only, using count again and referring to the managed zone through the remote state like so:
resource "google_dns_record_set" "a" {
name = "dev"
managed_zone = "${data.terraform_remote_state.dns.dns_zone_name}"
type = "A"
ttl = 300
rrdatas = ["1.2.3.4"]
}
I am remaking Portal in GameMaker for my semester final. I was wondering how you find an object: if I have placed only one portal and go into it, the game crashes because the 2nd portal isn't placed and it can't get its .x/.y position. How do I set a variable to fix this?
We don't know how you determine the destination teleporter; you should clarify that. But one option is to check whether the number of portals is >= 2, so you have at least one place to go:
if (instance_number(your_portal_name) >= 2)
{
    // proceed with the portal mechanics
}
I assume that in some event you have a piece of code that does the teleporting. You just have to place this piece of code in an if statement that verifies whether the second portal exists. This way, you will attempt teleportation only if the needed exit instance exists. You can use the instance_exists function, for example:
if ( instance_exists( exit_portal_or_whatever_you_name_it ) )
{
    your_teleportation_code;
}
I would say that based on the information you gave us, German Gorodnev's answer is correct. If you only have one portal and you try to get the position of a non-existent portal, then you will get an error. So you should include an if statement that makes sure the needed portals are there before retrieving the positions.
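For example, a guarded lookup of the other portal's position might look something like this (obj_portal is a placeholder for your portal object, and this assumes the code runs in the player's collision event with the portal, so "other" is the portal you just entered):

// only teleport when a second portal instance actually exists
if (instance_number(obj_portal) >= 2)
{
    // pick the portal instance that is not the one we just touched
    var target;
    target = instance_find(obj_portal, 0);
    if (target == other.id) target = instance_find(obj_portal, 1);

    // move the player to the destination portal
    x = target.x;
    y = target.y;
}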