Terraform locked state file (S3) solutions - terraform

I am trying to fix the well-known issue where multiple pipelines and colleagues run terraform plan concurrently and get the following error:
│ Error: Error acquiring the state lock
I would like to know if there is any overview of the possible ways to get rid of this issue, so that CI/CD and engineers can run plan without having to wait a long time before they are able to work.
Even HashiCorp says to be careful with force-unlock, since it risks multiple writers:
Be very careful with this command. If you unlock the state when
someone else is holding the lock it could cause multiple writers.
Force unlock should only be used to unlock your own lock in the
situation where automatic unlocking failed.
Is there a way to write the state file to disk before performing the plan?

The locking is there to protect you. You may run a plan (or apply) with -lock=false:
terraform plan -lock=false
But I wouldn't encourage that, as you lose the benefits of state locking; it's there to protect you from conflicting modifications made to your infrastructure.
You want to run terraform plan against the most recent state, which is usually written by the very last apply you ran on your main/master branch.
If the plan takes too long to run or apply while your engineers are working on different sub-parts of the infrastructure, consider refactoring: break your infrastructure into multiple folders and run a separate terraform plan/apply for each of them. Of course, this comes at the cost of the refactoring itself and of moving resources from one state to another.
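As a sketch of that layout (bucket, key, and table names here are hypothetical), each folder gets its own backend configuration pointing at its own state key, so each part is locked independently:

```hcl
# network/backend.tf - state for the network layer only
terraform {
  backend "s3" {
    bucket         = "my-tf-state"               # hypothetical bucket
    key            = "network/terraform.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "tf-locks"                  # lock is per state key
  }
}
```

```hcl
# app/backend.tf - state for the application layer only
terraform {
  backend "s3" {
    bucket         = "my-tf-state"
    key            = "app/terraform.tfstate"     # different key => different lock
    region         = "eu-west-1"
    dynamodb_table = "tf-locks"
  }
}
```

A plan running in app/ then only locks app/terraform.tfstate, so colleagues working in network/ are not blocked.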
Another approach is to disable the state refresh on PR pipelines by passing -refresh=false, which likewise means you don't take full advantage of Terraform's state management with diffs and state locking.
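On a PR pipeline that could look like the following (the plan file name is just an example):

```shell
# PR pipeline: plan without refreshing remote objects first
terraform plan -refresh=false -out=pr.tfplan
```

The diff is computed against the last persisted state rather than against live infrastructure, which is faster and avoids most of the contention, at the cost of possibly planning against stale data.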
And of course, as a last resort for the few exceptional cases where you end up with a stuck lock (for example, when a plan is cancelled, or the runner drops its connection and never releases the state), you may run a manual terraform force-unlock [options] LOCK_ID.
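For that last resort, the lock ID is printed in the failing command's error output; a minimal sequence (the ID below is made up for illustration) looks like:

```shell
# The failing plan/apply prints the lock details, e.g.:
#   Lock Info:
#     ID: 6c3c2f1e-0000-0000-0000-000000000000   (hypothetical)
# Unlock only after confirming no other run is actually in progress:
terraform force-unlock 6c3c2f1e-0000-0000-0000-000000000000
```

Heed HashiCorp's warning quoted above: only unlock your own lock, and only when automatic unlocking failed.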
Resources:
https://developer.hashicorp.com/terraform/language/state/locking
https://cloud.google.com/docs/terraform/best-practices-for-terraform
https://github.com/hashicorp/terraform/issues/28130#issuecomment-801422713

Related

Creating API Management resources (APIs, custom domains, etc.) with Terraform is unreliable or takes ages

We have multiple Terraform scripts that create/update hundreds of resources on Azure. If we want to change anything on API Management-related resources, it takes ages and regularly even times out. Running it again sometimes solves the issue, but sometimes also tells us that the API we want to create already exists, and so on.
The customer is getting really annoyed by us providing unreliable update scripts that cause quite some effort for the operations team that is responsible for deploying and running the whole product. Saving changes in API Management also takes ages and runs into errors when we use the Azure portal.
Is there any trick or clue on how to improve our situation?
(This has been going on for a while now and feels like it is getting worse over time.)
I'd start by using the Debugging options to sort out precisely which resources are taking the longest. You can consider breaking those out into a separate state, so you don't have to calculate them each time.
Next, ensure that the process running Terraform has timeouts set greater than Terraform's own. Killing Terraform mid-run is a good way to end up with a confused state.
Aside from that, some resources let you provide operation timeouts. Where available, these ensure Terraform treats a long-running operation as failed before the process running Terraform gets killed.
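As an illustration (resource arguments are trimmed and the names/durations are hypothetical; check the azurerm provider docs for which resources support this), such a timeouts block looks like:

```hcl
resource "azurerm_api_management_api" "example" {
  name                = "example-api"    # hypothetical values
  resource_group_name = "example-rg"
  api_management_name = "example-apim"
  revision            = "1"
  # ... remaining required arguments omitted for brevity ...

  # Fail the operation inside Terraform before the surrounding
  # CI runner's own timeout kills the process mid-run
  timeouts {
    create = "3h"
    update = "3h"
    delete = "3h"
  }
}
```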
I'd consider opening a bug on the azurerm provider or asking in the Terraform Section of the Community Forum.
Azure API Management is slow in applying changes because it's a distributed service. An update operation will take time as it waits until the changes are applied to all instances. If you are facing similar issues in the portal it's a sign that it has nothing to do with Terraform or AzureRM. I would contact Azure support, as they will have the telemetry to help you further.
In my personal experience, a guaranteed way to get things stuck is to do a lot of small changes in succession without waiting for the previous ones to finish so I would start by checking that.
Finally, if you find no help in the previous steps, I would try using Bicep/ARM to manage the APIM. Usually, the ARM deployment API is a bit more robust than the APIs used by Terraform and the Go SDK.

How do I avoid hitting a rate limit with Terraform refresh/apply, causing the operation to fail?

Terraform offers providers for a great number of different cloud platforms (IaaS, PaaS, and SaaS). Some of these providers have very low rate limits, which you will quickly hit once your infrastructure reaches a certain scale, where you have hundreds of resources in the same Terraform state. I've encountered this issue with several providers, most recently Azure Resource Manager (azurerm), whose DNS rate limit is (currently) 500 GET requests per 5 minutes.
One common and sensible way to deal with rate limiting issues in Terraform is to split your state up into smaller logically cohesive states. However, you eventually reach a limit where it no longer makes semantic sense to split the state up any further. I don't want to wind up in the situation where I have to create separate states which semantically should be the same, but which I for rate limiting reasons must arbitrarily separate.
Another approach is to pass the -parallelism=1 flag to the respective terraform CLI command, to avoid any parallel requests to the provider, thereby minimising the number of requests performed per minute.
I have now exhausted both of these approaches to the rate limiting issue I'm currently experiencing with one of my states, but the issue persists. Parallelism is disabled, and it doesn't make sense to break down the state further. I am still hitting the rate limit, and as such, all terraform refresh and terraform apply operations fail for this state.
Independent of the specific provider in question, is there any way to tell Terraform to perform the requests toward the provider more slowly (or at a certain max rate) to avoid hitting a rate limit? I have not found any flag for this in the relevant Terraform CLI commands.

AnyLogic: Release a specific resource

I've got another small issue with AnyLogic resources.
I want to be able to release a specific resource from a resource pool - not just any resource from the pool. The reason is that I occasionally seize multiple resources from a ResourcePool (one at a time) and then wish to release the resources again one at a time. But I don't want to release "any" resource from the pool, I want to be able to specify which specific resource of the pool to release.
Is this possible or is this one of the limitations of the resources implementation?
One way to do this that has worked for us before is to use separate agents to grab resources. For example:
Suppose there is a main WorkItem agent. Then:
When a resource is needed, a Split block is used to spawn a new agent called ResourceHolder.
The new ResourceHolder then grabs the resource using a normal Seize.
Afterwards, the ResourceHolder carrying the unit is joined back to the WorkItem using Combine.
The ResourceHolder has to be stored somewhere in the WorkItem, and it should be built so it can tell which resource unit it is carrying (i.e. original resource pool, type of resource, when it was grabbed, etc.). Then, when a specific resource unit needs to be released, the model finds the right ResourceHolder in the WorkItem and runs it through a Release block. It is a little cumbersome, but it definitely gives very fine control over release logic.
I can think of many ways to do this depending on the situation. The first is to use a SelectOutput before the Release in order to release or not; the SelectOutput checks whether it's the right resource to release.
Another option, if you want to release everything with the same Release block but in a given order, is to put a Wait block before the Release block and wait for the right moment to release the resource.
Yet another is to use wrap-up actions: put a Wait block in the wrap-up to wait for the other resources to arrive there before releasing, so they are released in order.
The only way to release specific resources with the standard Seize blocks is to specify that you want to release resources that were seized at a specific Seize block.
This implies that you need as many Seize and Release blocks as you want points of control over the release process, i.e. if you seize 5 units of a resource type and want to release them one by one over the course of the flowchart, you will need 5 Seize and 5 Release blocks.

Why does terraform plan lock the state?

Q: What's the point for the terraform plan operation to lock the state by default?
(I know it might be disabled with -lock=false)
Context:
(As far as I understand) The plan operation is not supposed to alter state.
plan does start with a version of refresh (which normally alters state), but even the standard output of terraform plan proactively says this is not the case for the plan-initiated refresh:
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
I saw this question on the Hashicorp website, which doesn't seem to provide a conclusive answer.
I believe it is because behind the scenes it performs a state refresh:
-refresh=false - Disables the default behavior of synchronizing the Terraform state with remote objects before checking for configuration changes.
https://www.terraform.io/docs/cli/commands/plan.html
The other reason I can think of is that if you run a plan and someone has already locked the state, they are performing changes to it, so reading the state might give you a plan that is no longer valid after the lock is freed.
https://github.com/hashicorp/terraform/issues/28130

Terraform lock state at the resource level?

I have a .tf file that has multiple resources/modules in it, and it all uses a single remote state file in S3.
I often target specific modules in the tf file.
If I have locking setup does that mean two people can't make changes at the same time even if they are targeting different modules?
From what I read it seems Terraform locks the entire state file. Does it support resource level locking? Docs didn't seem clear to me on this.
You're right, Terraform does lock the whole state file regardless of what resources you're targeting.
The idea behind this implementation is that there may be references between resources. More precisely, an event involving one resource (creation/update/destruction) may cause other resources to be created/updated/destroyed, so even apply -target=resource_one.ref_name may result in changes to other resources. All of that should be presented in terraform plan, though.
All state operations (including locking) are currently implemented at the backend (S3, Consul, TFE, ...) level, and the common interface between them isn't very flexible, because the common denominator is basically a blob of JSON (the state file).
If you have two or more independent parts of infrastructure, I'd suggest splitting them into either different workspaces or different directories. You can leverage the terraform state subcommands to do the migration after splitting your config files.
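The migration itself can be done with terraform state mv, moving resources from the old state into a fresh one for the new directory (the resource address and paths below are hypothetical):

```shell
# From the old configuration's directory, save the current state locally
terraform state pull > old.tfstate

# Move one resource into a new state file for the split-out directory
terraform state mv \
  -state=old.tfstate \
  -state-out=../network/terraform.tfstate \
  aws_vpc.main aws_vpc.main
```

After moving, push the resulting state files to their respective backends and run terraform plan in each directory to confirm no changes are pending.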
You can also use the terraform_remote_state data source to reference any outputs exposed from any of these parts.
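After the split, one part can read another part's outputs through that data source; a sketch with hypothetical bucket, key, and output names:

```hcl
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "my-tf-state"                 # hypothetical bucket
    key    = "network/terraform.tfstate"
    region = "eu-west-1"
  }
}

resource "aws_instance" "app" {
  ami           = "ami-00000000"           # hypothetical
  instance_type = "t3.micro"
  # consume an output exposed by the network state
  subnet_id     = data.terraform_remote_state.network.outputs.subnet_id
}
```

This only reads outputs, so it does not take the other state's lock.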
Managing independent parts of your infrastructure in a single state file is not something I'd recommend for a couple of reasons:
It doesn't scale very well. As you begin to add more resources, the time it takes to finish terraform plan & apply will increase, as Terraform has to check the current state of each resource.
All critical Terraform commands have a wider blast radius than necessary, which makes human errors much scarier, e.g. an accidental terraform destroy will destroy everything, not just one part of your infra.
The -target flag is meant to be used for exceptional circumstances, not for routine operations, as mentioned in the docs.
