aws_launch_template: restrict the number of versions - terraform

In AWS, while creating a launch template, there is an option to delete versions, but in Terraform I don't see any attribute that does that.
With each deployment, I see it creates a new AWS launch template and doesn't remove the old one.
Is there any option to delete versions, or to restrict the number of versions to keep at any one time?
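For reference, here is a minimal sketch of the resource in question (all values are placeholders); as far as I can tell from the provider documentation, there is no attribute that caps or prunes old versions - each applied change simply becomes a new version of the template:
resource "aws_launch_template" "example" {
  name          = "example-template"      # placeholder name
  image_id      = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t3.micro"

  # Makes the newest version the default after each change, but (to my
  # knowledge) nothing here limits or deletes older versions.
  update_default_version = true
}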

Related

How to delete a revision in Azure Container Apps?

Is there a way to delete a revision on Azure Container Apps?
Scenario
I have an Azure Container App instance for testing purposes which I regularly push new revisions to using the az containerapp update command in my CI/CD pipeline whenever I merge a change onto my master branch. As the revisions all use a Docker image with the same tag :latest - but not (necessarily) the same code inside the Docker container - I create a new unique revision suffix for each revision in order to create a revision-scope change.
I am using the single-revision mode, therefore there's only ever one revision which serves 100% of the traffic. So whenever I push a new revision with a new revision-suffix a new revision gets created and activated and the previous revision gets deactivated.
Using this approach, over time a lot of revisions get created; most of them are not needed anymore but still occupy storage and - as revision names must be unique - block a lot of names which I would like to re-use, so I'd like to delete them.
However, looking at the available commands in the Azure CLI for revisions there does not seem to be a way to delete a revision.
The question therefore is: is there a way to delete those revisions, and if so, how? Alternatively, if revisions cannot be deleted, is there another way to force the container app to update the Docker image it is running even though the tag of the Docker image does not change (in that case I would not (necessarily) need to create a new revision every time)?
Expectation
I would have expected there to be a deletion command, as there will be many container apps with many revisions that need a lot of storage (which one might eventually need to pay for), since a revision might be activated again at any time; so Microsoft or other Azure users should, at least to my mind, have the same desire to delete outdated/deprecated/unused revisions.
Agreed with the point of #ahmelsayed that it is not possible to delete revisions manually and that they will eventually be pruned down to the most recent 100.
As mentioned in this MS Doc, a maximum of 100 revisions is kept; revisions older than that are purged automatically, and there is no cost for inactive revisions.
You can deactivate (and later reactivate) unused or outdated revisions using the Azure Portal, the Azure CLI, the REST API, or code (e.g. Java, Go, and JS).
Here is the syntax for deactivating an Azure Container Apps revision using the Azure CLI:
az containerapp revision deactivate --revision <Your_Container_Revision_Name> --resource-group <Your_Resource-Group_Name>
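For example (the app, resource group, and revision names below are hypothetical), you can first list the revisions and then deactivate an old one:
az containerapp revision list --name my-container-app --resource-group my-rg --output table
az containerapp revision deactivate --revision my-container-app--oldsuffix --resource-group my-rg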

Custom environments in terraform on aws

I need to create a setup with Terraform and AWS where I can have the traditional Dev, Stage and Prod environments. I also need to allow creation of custom copies of individual services in those environments for developers to work on. I will use workspaces for Dev, Stage and Prod but can't figure out how to achieve these custom copies of services in environments.
An example would be a lambda in the dev environment called "test-lambda-dev". It would be deployed with the dev environment. The developer with initials "de" would need to work on some changes to this lambda's code and would need a custom version of it deployed alongside the regular one. So he would need to create a "test-lambda-dev-de".
I have tried to solve it by introducing a suffix after the resource's name that would normally be empty, but someone could provide a value signalling that a custom version is needed. Terraform destroyed the regular resource and replaced it with the custom one, so it has not worked as intended.
Is there a way to get it to work? If not how can developers work on resources without temporarily breaking the environment?
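One pattern that may help (a rough sketch under the assumption that the base lambda lives in the same module; the variable and resource names are made up) is to keep the regular resource untouched and create developer copies from a separate for_each resource, so providing a suffix adds a copy instead of renaming the original:
variable "developer_suffixes" {
  description = "Optional per-developer copies, e.g. [\"de\"] to also deploy test-lambda-dev-de"
  type        = set(string)
  default     = []
}

# Regular lambda - always present, its name never changes
resource "aws_lambda_function" "test" {
  function_name = "test-lambda-${terraform.workspace}"
  role          = var.lambda_role_arn # assumed to be defined elsewhere
  handler       = "index.handler"
  runtime       = "nodejs18.x"
  filename      = var.lambda_package  # assumed to be defined elsewhere
}

# Developer copies - one additional function per suffix, independent of the base one
resource "aws_lambda_function" "test_developer_copy" {
  for_each      = var.developer_suffixes
  function_name = "test-lambda-${terraform.workspace}-${each.key}"
  role          = var.lambda_role_arn
  handler       = "index.handler"
  runtime       = "nodejs18.x"
  filename      = var.lambda_package
}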

How to handle resource changes after provider upgrade in terraform?

I am trying to upgrade the azurerm Terraform provider from 2.30.0 to 3.13.0. There are several changes to some resources (e.g. resource name changes, renamed attributes, removed attributes, etc.). I checked the Azure Resource Manager Upgrade Guide and found the changes by which our configuration is affected.
For example, in version 3.0.0 the attribute availability_zones is replaced by zones for the azurerm_kubernetes_cluster_node_pool resource. Therefore, when I run terraform plan I get an error that the attribute availability_zones doesn't exist.
I found a migration guide for deprecated resources. I understood the idea of removing the resource from the state and importing it again by its resource ID, but there are also other resources like azurerm_subnet, azurerm_kubernetes_cluster and azurerm_storage_account that have resource changes, which is why the terraform import -var-file='./my.tfvars' [..] command fails.
I am not sure if it fails (only) because of dependencies on some variables that are needed to declare the resource properly, or whether it would also fail because Terraform reads the .tfvars and compares the read variables with the state.
What I actually need is a "best practice" guide on how to handle resource changes after a provider upgrade. I don't know where to start and where to end. I tried to visualize the dependencies with terraform graph and created an SVG to try to figure out the order in which I have to migrate the resource changes, but it's impossible to understand the relations of the whole configuration. I could also just remove attributes from the state file that don't exist anymore, or rename attributes manually.
So How to handle resource changes after provider upgrade in terraform?
General
I was able to update the provider properly - at least I hope so. I would like to share my experience; maybe it will help other beginners. This is not a professional guide, just the experience I want to share.
First of all, you have to remove ALL resources affected by the provider upgrade and then re-import them. What does that mean?
The new provider will contain various changes to different resources. For example:
Removed deprecated attributes (the attribute is completely removed)
Superseded attributes (the attribute is replaced by another)
Renamed attributes
Superseded resources (the resource can be deprecated or removed by the upgraded version)
Note
The migration guide describes how you can migrate from deprecated resources, but as far as I understood it, the workflow for attribute changes is the same. This is the only guide that I found.
terraform plan will show you one or several errors for affected resources.
If your Terraform configuration is complex and huge, you shouldn't try to remove and re-import them all at once. Just go step by step and fix the affected resources one after another.
terraform plan can show changes although it shouldn't.
Check the attributes that force replacement carefully and understand why Terraform detects changes. It may seem obvious, but it doesn't have to be.
There can be a type change, e.g. int -> string.
If the detected change is something like a missing secret, you can try to add the secret manually as the value of the related attribute in the state file and run terraform plan again.
There can also be a bug in the provider. So if you can't understand the detected change, try to search the provider's issues - mostly on GitHub. Don't get confused if you can't find any related issue; maybe you have found a bug. Then just create a new issue.
You will also face other errors or bugs related to Terraform itself. You have to search for a workaround patiently, so that you can continue applying the resource changes.
Use resource targeting to work through resource changes one at a time, or to ignore for the moment an error that occurs in another module (see the example below).
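For example (the module and resource address here are made up), a plan can be narrowed to the resource you are currently migrating:
terraform plan -target=module.aks.azurerm_kubernetes_cluster_node_pool.default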
How To
---> !! BACK UP YOUR STATE FILE !! <---: You have to back up your state file before you start manipulating it. You will be able to restore the state from the backup if something goes wrong. You can also use the backed-up state file to find the IDs you need when you have to import a resource.
Get Affected Resource:
How can you find all affected resources? After the upgrade, the provider will not be able to parse the state file if a resource contains changes - as described in the question above. You will get an error for each affected resource. You can then check the changes for that resource in the upgrade guide of the provider, which can be found in the provider registry, e.g. azurerm.
Terraform Configuration: Now you have to apply the changes for the affected resources in the Terraform configuration modules before you can import them, as described in the migration guide (see the sketch after this step for the availability_zones example).
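For the availability_zones example from the question, the configuration change itself is just the attribute rename (all other values below are placeholders):
# Before (azurerm 2.x):
#   availability_zones = ["1", "2", "3"]
# After (azurerm 3.x):
resource "azurerm_kubernetes_cluster_node_pool" "example" {
  name                  = "internal"                            # placeholder
  kubernetes_cluster_id = azurerm_kubernetes_cluster.example.id # placeholder reference
  vm_size               = "Standard_DS2_v2"
  node_count            = 1
  zones                 = ["1", "2", "3"]
}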
Remove Outdated Resource: As described in the migration guide, you have to remove the outdated resource from the state file because it contains the old format of the resource. The new provider is not able to handle these entries in the state file; they must be re-imported with the new provider.
Import Removed Resource: After you have removed the resource, you have to re-import it, as also described in the migration guide. Check the terraform import documentation for better understanding and usage (example commands below).
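As an illustration (the resource address and the Azure resource ID are placeholders, not taken from the question), the remove/re-import cycle for a single resource looks roughly like this:
terraform state rm azurerm_kubernetes_cluster_node_pool.example
terraform import -var-file='./my.tfvars' azurerm_kubernetes_cluster_node_pool.example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg/providers/Microsoft.ContainerService/managedClusters/my-aks/agentPools/internal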
So How to handle resource changes after provider upgrade in terraform?
I don't think deleting the state file, importing the resources again, and changing resource attributes every time you need to upgrade the azurerm version is a feasible solution.
The Terraform Registry already provides update notes for every resource when changes are made in a new version, just like the example below.
For instance, azurerm_app_service is used with version ~2.x, but for versions ~3.0 and ~4.0 the azurerm_linux_web_app and azurerm_windows_web_app resources are used instead.
I would suggest checking the Terraform Registry for updates to a particular resource's attributes for the specific provider version and adjusting your configuration accordingly.
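As a rough sketch of what such a migration looks like in configuration (names are placeholders and the argument list is not complete), an azurerm_app_service block is replaced by the newer resource type:
resource "azurerm_linux_web_app" "example" {
  name                = "example-web-app"                   # placeholder
  resource_group_name = azurerm_resource_group.example.name # placeholder reference
  location            = azurerm_resource_group.example.location
  service_plan_id     = azurerm_service_plan.example.id     # placeholder reference
  site_config {}
}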

Terraform, how to centralize providers versioning

We use Terraform for Azure PaaS resource creation and it runs as separate pipeline steps for each component. For instance: first step, data component plan and apply; second step, web component plan and apply; and so on. So the code is arranged into multiple components and each of those has its own definition for the provider azurerm block. Inside the block we want to pin the provider version, and we want to control it in a centralized manner. Currently we came up with the following approach.
provider "azurerm" {
version = "=${ps.AzureRmVersion}"
skip_provider_registration = "true"
features {}
}
When the release process runs, there is PowerShell functionality that replaces the ps.AzureRmVersion marker with the version. My question is whether there is another way to control the provider version without involving a third party such as PowerShell.
The version argument in provider blocks is a legacy pattern from older versions of Terraform for specifying version constraints (a set of versions that this module is compatible with) rather than version selections (a single selected version that you want to use).
Since you want to centrally control which exact version is selected I think the best approach would be to have your automation script generate a Dependency Lock File containing the versions you want to prescribe.
Normally Terraform manages this lock file itself as you install and upgrade providers, but in that case each configuration will have its own set of locks and may therefore differ from one another. Since you want to impose central policy, you can instead use Terraform CLI with a simple configuration that only contains provider requirements declarations for the providers you want to use:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "1.0.0"
    }
  }
}
In that directory you can run terraform providers lock to cause Terraform to select that particular version from the registry and generate a .terraform.lock.hcl file recording the checksums for all of the platforms you specified:
terraform providers lock -platform=windows_amd64 -platform=linux_amd64
You can then save that .terraform.lock.hcl file to your central location and configure your automation to copy that file into the working directory each time (overwriting any file that might already be there) before running terraform init. Terraform will then select whatever package the lock file recorded, and make sure that it matches the checksums previously recorded.
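In practice that can be as simple as a pipeline step like the following before each plan/apply (the shared path is illustrative):
cp /path/to/central/.terraform.lock.hcl .terraform.lock.hcl
terraform init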
Your individual Terraform configurations may optionally contain their own non-exact version constraints specifying which Terraform versions they are compatible with, which will then cause Terraform to report an error if the centrally-selected version recorded in your shared lock file is not compatible with one of your configurations.
Note that the lock file only constrains providers that are already recorded in it. If one of your configurations requires a different provider that's not already in the lock file then by default terraform init will select the newest compatible version of that provider and overwrite the lock file to include it.
If you want to prevent that and require all new providers to be added to the centrally-maintained lock file, you can add an additional option to terraform init to tell Terraform to fail if the action it's taking would require changes to the locked providers:
terraform init -lockfile=readonly
To add a new provider with this usage pattern, you'd need to return to the requirements-only configuration I described earlier, add the new provider to it, re-run the same terraform providers lock command to regenerate it, and then update your "master" lock file to that new version of the file.

Terraform show and plan not matching

I am a beginner in Terraform, in a (dangerous) live environment.
I ran a script to create 3 new accounts in AWS Organizations. Two got created; due to a service limit error I couldn't create the third.
To add to it, there was a mistake in the parent ID in the script. I rectified the accounts in the console by moving them to the right parent ID.
That leaves me with one account to be created.
After making the necessary changes to the service limit, I tried running the script again. The plan shows 3 accounts to be added and 2 to be destroyed. There's no way these accounts can be deleted and re-added. (Since the script is now version controlled, I can't run it just for this one account.)
Here's what I did: I modified the Terraform state (the parent ID) in the S3 bucket and ensured that terraform show reflects the new changes. terraform plan still shows 3 accounts to add and 2 to destroy.
How do I get this fixed? Any help is deeply appreciated.
Thanks.
The code is the source of truth when working with Infrastructure as Code; even if you change the state file, you need to update the code as well.
There is no way Terraform can update source code when detecting a drift on your resources.
So you need to:
1. Write the manual changes you made in AWS into the Terraform code (for this case, see the sketch below).
2. Run terraform plan. It will refresh the state and show you whether there is still a difference.
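For this particular case that mostly means the parent_id of the account resources must match the OU the accounts were moved to in the console; roughly (all IDs and names below are placeholders):
resource "aws_organizations_account" "example" {
  name      = "example-account"     # placeholder
  email     = "example@example.com" # placeholder
  parent_id = "ou-abcd-12345678"    # the OU the account was manually moved to
}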
If you modify the state file like I did, do it at your own risk. I followed "how to clean your terraform state" and performed the surgery!
Ensure that the code reflects the changes properly so they are picked up.
