One way of creating an API Gateway on AWS with Terraform is to create a resource for each method/route and each integration (the resource that handles the request), along with an API deployment.
When we remove the resources for a route from our configuration, Terraform detects the change and deletes the integration, the Lambda, and so on. But this removal also requires a new deployment. Since the dependent resources are being deleted, they no longer appear in any depends_on clause, which results in the following behavior:
The new deployment is created before the old resources are deleted from the API Gateway, so they are still part of the API at the time the deployment happens. Because a deployment is a snapshot of the resources configured at that moment, the removed resources end up in the snapshot anyway.
How can we tell Terraform that the API deployment resource should only be updated after all other resources (which are no longer in the configuration at this point) are destroyed?
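One common pattern for this (described in the AWS provider documentation for aws_api_gateway_deployment) is to give the deployment a triggers hash over the API's route and integration definitions plus a create_before_destroy lifecycle, so that any change to the routes forces a replacement deployment that captures the new shape of the API. A minimal sketch, with placeholder resource names:

```hcl
resource "aws_api_gateway_deployment" "example" {
  rest_api_id = aws_api_gateway_rest_api.example.id

  # Force a new deployment whenever any of the listed route/integration
  # definitions change. The resource names here are illustrative
  # placeholders for your own methods and integrations.
  triggers = {
    redeployment = sha1(jsonencode([
      aws_api_gateway_resource.example.id,
      aws_api_gateway_method.example.http_method,
      aws_api_gateway_integration.example.id,
    ]))
  }

  lifecycle {
    create_before_destroy = true
  }
}
```

The triggers map forces replacement of the deployment resource itself, so the new snapshot is taken as part of the same apply that removes the old routes.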
I have created some resources in Azure using Terraform, such as VNETs, VMs, and NSGs. Suppose I manually create another VM in the same VNET that Terraform created. If I rerun the Terraform script, will the manually created VM get destroyed, given that it is not in the state file?
No, Terraform does not interfere with resources that are created outside of Terraform; it only manages resources that are included in its state file.
However, if you make manual changes to resources that you created through Terraform (for example, the VNET in your case), Terraform will reset them to what is declared in the Terraform code on the next run.
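If you do want Terraform to take over such a resource rather than ignore it, Terraform 1.5+ supports declarative import blocks. A sketch, where the resource name and the Azure resource ID are placeholders for your own VM:

```hcl
# Adopt the manually created VM into Terraform's state on the next apply.
import {
  to = azurerm_linux_virtual_machine.manual_vm
  id = "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Compute/virtualMachines/<vm-name>"
}

resource "azurerm_linux_virtual_machine" "manual_vm" {
  # ... configuration matching the existing VM goes here ...
}
```

Until something like this is done, the VM simply does not exist from Terraform's point of view.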
I am learning Terraform, and in one of the tutorials for Terraform with Azure a requirement was to log in with the az client. My understanding is that this was to create a Service Principal.
I was trying this with GitHub Actions, and my assumption was that the credentials obtained for the Service Principal would be enough. When I ran terraform plan, everything worked out fine.
However, when I tried terraform apply, it failed until I explicitly added an az login step to the GitHub workflow job.
What am I missing here? Does terraform plan only compare the new configuration against the state file, not the actual account? Or does it verify the state against the resource group/subscription in Azure?
I was a little confused by the documentation on terraform plan.
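For reference, the azurerm provider can authenticate non-interactively with Service Principal credentials supplied through environment variables, with no az login step at all. A minimal sketch of the provider block, relying on the environment for credentials:

```hcl
provider "azurerm" {
  features {}

  # Instead of az login, the provider can read credentials from the
  # environment of the CI job:
  #   ARM_CLIENT_ID, ARM_CLIENT_SECRET, ARM_TENANT_ID, ARM_SUBSCRIPTION_ID
}
```

If apply fails where plan succeeded, it is worth checking that all four of those variables are actually exported in the step that runs apply, not just the one that runs plan.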
I have a silly question.
I am trying to deploy an Azure web app using Terraform. I have a task that builds the code and drops it as an artefact, and this works just fine. So I moved on to the release process, as follows.
My code has a backend configuration in which I save my terraform.tfstate; to be able to access this, I created an Azure Resource Manager service connection.
This works perfectly for all my stages: I am able to create the resource group and the web app, and the terraform.tfstate gets saved in the container that sits behind the Azure Resource Manager service connection.
But here is my problem: if I update my code locally and push it to GitHub, the pipeline builds the artefact and the release triggers, but at the plan stage it fails with the following error.
reading resource group: resources.GroupsClient#Get: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationFailed" Message="The client 'XXXX' with object id 'XXX' does not have authorization to perform action 'Microsoft.Resources/subscriptions/resourcegroups/read' over scope '/subscriptions/XXXX/resourcegroups/rg-hri-stg-eur-configurations' or the scope is invalid. If access was recently granted, please refresh your credentials."
I understand that once the resource group exists, I don't have permission to perform any action on it, such as plan, apply or destroy.
I was wondering how I can set up an Azure Resource Manager service connection so those pipelines can access this specific resource group once it has been created?
Thank you very much for any advice or help you can provide.
I found the issue. A silly one, to be honest: my ARM service connection was targeting a specific resource group (the one in which I keep my Terraform states), so it was not authorized when trying to update resources elsewhere. I changed the scope of the service connection to the subscription level and everything works fine now. Thank you so much for your help, guys.
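The fix described above can be sketched with the Azure CLI; the name and subscription ID here are placeholders. Scoping the Service Principal to the subscription, rather than a single resource group, lets the pipeline read and manage resource groups it did not create:

```
# Create a Service Principal scoped to the whole subscription
# (placeholder name and subscription ID):
az ad sp create-for-rbac \
  --name "terraform-pipeline" \
  --role Contributor \
  --scopes /subscriptions/00000000-0000-0000-0000-000000000000
```

A narrower alternative is to grant the existing Service Principal an additional role assignment on each resource group it needs, if subscription-wide Contributor is too broad for your environment.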
I am using Terraform scripts to create Azure services, and I have some doubts regarding Terraform:
1) If I have one environment, let's say dev, in Azure with some resources, how can I copy all of those resources to a new environment, let's say prod, using the Terraform script?
2) What is the impact of re-running the Terraform files with additional Azure resources added? What will it do?
3) What if I want to create an app service from the Terraform script with the same name as one that already exists in Azure? Will it update the resource, or do nothing, after the Terraform execution completes?
Please feel free to answer; it would be a great help.
To answer your questions:
You could create a new workspace with terraform workspace new and copy all configuration files (.tf) to the new environment, then run terraform init, plan, and apply.
Terraform will compare the contents of your current state file with your configuration, then update changed attributes or create new resources rather than re-creating the existing ones.
You could run terraform import to bring existing infrastructure under Terraform's management. For referencing existing resources in the portal, you can use data sources.
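The workspace flow from the first answer can be sketched as a sequence of commands (the workspace name is just an example):

```
# Create and switch to a separate workspace for prod; each workspace
# gets its own state file, so the same .tf files can be applied twice
# without the environments colliding.
terraform workspace new prod
terraform workspace list     # shows default and prod, with prod selected

terraform init
terraform plan
terraform apply
```

Per-environment differences (names, sizes, regions) are typically fed in via variables, for example with a prod.tfvars file passed as -var-file=prod.tfvars.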
When using Terraform to deploy our AWS infra, if we use the AWS console to redeploy an API Gateway deployment, it creates a new deployment ID. Afterwards, when running terraform apply again, Terraform lists aws_api_gateway_stage.deployment_id as an in-place update. Running terraform refresh does not resolve this; it continues to show the delta. Am I missing something?
Terraform computes the delta between what it has saved in its state backend and what exists on AWS at the time you run it. Terraform is meant to apply changes to AWS, not the other way around: a refresh does record the deployment ID it finds in AWS, but your configuration still describes the old deployment, so every plan will propose changing the stage back to match the configuration.
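A sketch of how this drift shows up on the command line (assuming Terraform ≥ 1.1, where refresh-only replaces the deprecated terraform refresh):

```
# Record the deployment_id currently live in AWS into the state:
terraform apply -refresh-only

# A subsequent plan can still show a diff: the configuration (or the
# aws_api_gateway_deployment resource the stage references) still
# resolves to the old deployment, so Terraform proposes reverting
# the stage's deployment_id to it.
terraform plan
```

To make the console-made change permanent, the configuration itself has to be updated so that it produces the new deployment; otherwise Terraform will keep treating the console change as drift to undo.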