Why does terraform plan lock the state?

Q: What's the point of the terraform plan operation locking the state by default?
(I know it can be disabled with -lock=false.)
Context:
(As far as I understand) the plan operation is not supposed to alter the state.
plan does start with a form of refresh (which normally does alter state), but the standard output of terraform plan explicitly says that this plan-initiated refresh is not persisted:
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
I saw this question on the HashiCorp website, which doesn't seem to provide a conclusive answer.

I believe it is because behind the scenes it performs a state refresh:
-refresh=false - Disables the default behavior of synchronizing the Terraform state with remote objects before checking for configuration changes.
https://www.terraform.io/docs/cli/commands/plan.html
The other reason I can think of: if you run a plan while someone else already holds the lock, they are presumably making changes to the state, so reading it might give you a plan that is no longer valid once the lock is released.
https://github.com/hashicorp/terraform/issues/28130
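As a minimal illustration of that trade-off (assuming an already-initialized working directory), a quick read-only check can skip the lock and the refresh, at the cost of possibly planning against stale data:
# default: acquires the state lock and refreshes in memory before planning
terraform plan
# opt out of locking (and optionally refreshing) for a quick, possibly stale, read-only check
terraform plan -lock=false -refresh=false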

Related

terraform locked state file s3 solution

I am trying to fix the well-known issue where multiple pipelines and colleagues run terraform plan and get the following error:
│ Error: Error acquiring the state lock
I would like to know if there is any overview of the possible ways to get rid of this issue, so CI/CD pipelines and engineers can run plan without having to wait a long time before they are able to work.
Even HashiCorp says to be careful with force-unlock, since there is a risk of multiple writers:
Be very careful with this command. If you unlock the state when
someone else is holding the lock it could cause multiple writers.
Force unlock should only be used to unlock your own lock in the
situation where automatic unlocking failed.
Is there a way we can write the state file to disk before performing the plan?
The locking is there to protect you. You may run a plan (or apply) with -lock=false:
terraform plan -lock=false
But I wouldn't encourage that, as you lose the benefits of state locking: it's there to protect you from conflicting modifications made to your infrastructure.
You generally want to run terraform plan against the most recent state, which is usually written by the last apply operation run on your main/master branch.
If plans take too long to run or apply while your engineers are working on different sub-parts of the infrastructure, consider refactoring: break the infrastructure into multiple folders and run a separate terraform plan/apply for each of them. Of course, this comes with the cost of refactoring and of moving resources from one state to another.
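As a rough sketch of such a split (all names below are placeholders, not taken from the question), each folder gets its own backend configuration and therefore its own state and lock:
# network/backend.tf -- app/, database/, etc. would each get their own key
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"        # placeholder bucket
    key            = "network/terraform.tfstate" # one state object per folder
    region         = "eu-west-1"
    dynamodb_table = "my-terraform-locks"        # placeholder DynamoDB lock table
  }
}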
One other approach is to disable the state refresh on PR pipelines by setting -refresh=false, although this also means you don't take full advantage of Terraform's state management with diffs and state locking.
And of course, as a last resort for the few exceptional cases where you end up with a stuck lock (for example when a plan gets cancelled, or the runner drops its connection and never releases the state), you may consider running a manual terraform force-unlock [options] LOCK_ID.
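For that last-resort case, the lock ID is printed in the "Error acquiring the state lock" message itself; a minimal sketch (the lock ID below is a placeholder):
# run from the folder whose state is stuck; Terraform asks for confirmation first
terraform force-unlock <LOCK_ID>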
Resources:
https://developer.hashicorp.com/terraform/language/state/locking
https://cloud.google.com/docs/terraform/best-practices-for-terraform
https://github.com/hashicorp/terraform/issues/28130#issuecomment-801422713

Creating API Management Resources (api, custom domains etc) with terraform unreliable or taking ages

We have multiple Terraform scripts that create/update hundreds of resources on Azure. If we want to change anything on API Management related resources, it takes ages and regularly even times out. Running it again sometimes solves the issue, but sometimes it also tells us that the API we want to create already exists, and the like.
The customer is getting really annoyed by us providing unreliable update scripts that cause quite some effort for the operations team responsible for deploying and running the whole product. Saving changes in API Management is also taking ages and running into errors when we use the Azure portal.
Is there any trick or clue on how to improve our situation?
(This has been going on for a while now and feels like it's getting worse over time.)
I'd start by using the Debugging options to sort out precisely which resources are taking the longest. You can consider breaking those out into a separate state, so you don't have to calculate them each time.
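For the debugging part, a small sketch using Terraform's standard logging environment variables (the log file path is arbitrary):
# capture verbose core/provider logs to see which API calls are slow
export TF_LOG=DEBUG                       # TRACE is even more verbose
export TF_LOG_PATH=./terraform-debug.log  # arbitrary file path
terraform apply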
Next, ensure that the running process has timeouts set greater than those of terraform. Killing terraform mid-run is a good way to end up with a confused state.
Aside from that, some resources let you provide Operation Timeouts. Where available, these ensure Terraform treats a long-running operation as failed before the process running Terraform kills it.
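As a hedged example, assuming you manage the API Management instance itself with azurerm_api_management (which documents a timeouts block); the names and durations below are placeholders, so check the provider docs for the exact defaults and supported arguments:
resource "azurerm_api_management" "example" {
  name                = "example-apim"
  location            = "westeurope"
  resource_group_name = "example-rg"
  publisher_name      = "Example"
  publisher_email     = "ops@example.com"
  sku_name            = "Developer_1"

  # allow long-running create/update/delete operations before Terraform gives up
  timeouts {
    create = "4h"
    update = "4h"
    delete = "4h"
  }
}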
I'd consider opening a bug on the azurerm provider or asking in the Terraform Section of the Community Forum.
Azure API Management is slow in applying changes because it's a distributed service. An update operation will take time, as it waits until the changes are applied to all instances. If you are facing similar issues in the portal, that's a sign it has nothing to do with Terraform or AzureRM. I would contact Azure support, as they will have the telemetry to help you further.
In my personal experience, a guaranteed way to get things stuck is to make a lot of small changes in succession without waiting for the previous ones to finish, so I would start by checking for that.
Finally, if you find no help in the previous steps, I would try using Bicep/ARM to manage the APIM. Usually, the ARM deployment API is a bit more robust than the APIs used by Terraform and the Go SDK.

Suspended changesets created in Dev workspace appear in Release workspace

Here's my setup: I have a SW_Dev stream which is loaded in the SW_Dev_RWS workspace. This is where the daily work is done. Every once in a while, I make a snapshot of SW_Dev and create a workspace from that snapshot, SW_Release_v1_RWS. Then I change the flow target of SW_Release_v1_RWS to the SW_Release stream, deliver all changes there, and also make a snapshot. The goal is to have snapshots for all releases in the SW_Release stream. I also keep SW_Release_v1_RWS loaded for a while, since I often need to produce release binaries with small tweaks for testing.
The problem I have appears when I have unfinished work in SW_Dev which I have to store in a suspended changeset. For some reason, that suspended changeset also appears (as suspended) in SW_Release_v1_RWS. Of course that means there are no actual modifications to the release workspace, but it doesn't look tidy. What's worse, if I try to discard a suspended changeset from SW_Release_v1_RWS, it also disappears from SW_Dev_RWS.
Is there a way to prevent suspended changesets created in SW_Dev_RWS from appearing in SW_Release_v1_RWS? Perhaps there is a different approach I should adopt for saving release snapshots in SW_Release stream?

Upgrade Service Fabric Service that Fails to Honor Cancellation Token

I've got a stateful service running in a Service Fabric cluster that I now know fails to honor a cancellation token passed into it. My fault.
I'm ready to release the fix, but during the upgrade process, I'm expecting the service replica on the faulty primary node to get stuck since it won't honor the token passed in.
I can use Restart-ServiceFabricDeployedCodePackage or even Restart-ServiceFabricNode to manually take down the stuck replica, but that will result in a brief service interruption during the upgrade process.
Is there any way to release this fix with zero downtime?
This is not possible for a stateful service using the Service Fabric infrastructure alone; you will need to have downtime during this upgrade. Once you have a version that honors the cancellation token, you will be fine.
That said, depending on the use of the state, and if you have a load balancer between your clients and the service, you can stand up another service instance on the new, fixed version and use the load balancer to drain your traffic across to the new version, upgrade the old one, drain back to it, and then drop the second service you created. This allows for a zero-downtime scenario.
The only workarounds I can think of are worse since they turn off parts of health checks during upgrades and "force" the process to come down. This doesn't make things more graceful or improve downtime, and has a side effect of potentially causing other health issues to be ignored.
There's always some downtime, even with the fully rolling upgrades, since swapping a primary to another node is never instantaneous and callers need to discover the new location. With those commands, you're just converting a more graceful shutdown and cleanup into a failure, which results in the same primary swap. Shouldn't be a huge difference since clients (and SF) have to deal with failure normally anyway.
I'd keep using those commands since they give you good manual control over which replicas/processes to poke when things get stuck.

Terraform lock state at the resource level?

I have a tf file that has multiple resources/modules in it and it all uses a single remote state file in s3.
I often target specific modules in the tf file.
If I have locking set up, does that mean two people can't make changes at the same time even if they are targeting different modules?
From what I read it seems Terraform locks the entire state file. Does it support resource level locking? Docs didn't seem clear to me on this.
You're right, Terraform does lock the whole state file regardless of what resources you're targeting.
The idea behind this implementation is that there may be references between resources. More precisely, an event involving one resource (creation/update/destruction) may cause other resources to be created/updated/destroyed. So even apply -target=resource_one.ref_name may result in changes to other resources. All of that should be presented in terraform plan, though.
All state operations (incl. locking) are currently implemented at the backend (S3, Consul, TFE, ...) level, and the common interface between them isn't that flexible, because the common denominator is basically a blob of JSON (the state file).
If you have two or more independent parts of infrastructure, then I'd suggest splitting them apart into either different workspaces or directories. You can leverage the terraform state subcommands to do the migration after splitting your config files.
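One rough sketch of such a migration (the module address and paths are hypothetical, and -state/-state-out are legacy options of terraform state mv, so verify against your Terraform version first):
# pull both remote states into local files
terraform -chdir=old state pull > old.tfstate
terraform -chdir=new state pull > new.tfstate
# move one module's resources from one local copy to the other
terraform state mv -state=old.tfstate -state-out=new.tfstate module.app module.app
# push the updated copies back to their respective backends
terraform -chdir=old state push old.tfstate
terraform -chdir=new state push new.tfstate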
You can also use the terraform_remote_state data source to reference any outputs exposed from any of these parts.
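A minimal sketch of that data source, assuming an S3 backend and an output named subnet_id exposed by a "network" state (all names here are placeholders):
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"
    key    = "network/terraform.tfstate"
    region = "eu-west-1"
  }
}

# then reference the exposed output elsewhere, e.g.:
# subnet_id = data.terraform_remote_state.network.outputs.subnet_id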
Managing independent parts of your infrastructure in a single state file is not something I'd recommend for a couple of reasons:
It doesn't scale very well. As you begin to add more resources, the time it takes to finish terraform plan & apply will increase, as Terraform has to check the current state of each resource.
All critical Terraform commands have a blast radius wider than necessary, which makes human errors much scarier, e.g. an accidental terraform destroy will destroy everything, not just one part of your infra.
The -target flag is meant to be used for exceptional circumstances, not for routine operations, as mentioned in the docs.
