Terraform Cloud with remote backend connected to GitHub VCS - terraform

I have Terraform Cloud as a remote backend, integrated with GitHub, which I use to provision AWS resources. When I change my Terraform code and create a pull request, Terraform generates a plan.
Here is the structure of my Terraform repository:
Modules/
  alb/
    modulealb.tf
Environments/
  dev/
    alb.tf
The dev folder is the working directory for my Terraform Cloud workspace.
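Roughly, the module is called from Environments/dev/alb.tf like this (the relative source path and the input shown here are placeholders, not my exact code):
module "alb" {
  # Placeholder relative path from Environments/dev back to the module folder.
  source = "../../Modules/alb"

  # Placeholder input; the real module takes its own variables.
  vpc_id = var.vpc_id
}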
The issue is that when I make changes in modulealb.tf and commit them to GitHub, Terraform Cloud does not recognize those changes and no infrastructure updates are planned.
How can I make Terraform Cloud recognize my changes in modules?
From VS Code, I tried:
terraform init -upgrade
terraform get -update
The modules are initialized and I commit my changes to GitHub, but those module changes are still not being picked up by Terraform Cloud.
Please point me in the right direction.
Thank you.
Edit:
To provide more context, I am changing my security group and route table modules.
My previous module used "aws_route_table" with an inline route block:
resource "aws_route_table" "public_rt" {
vpc_id = aws_vpc.main.id
route {
cidr_block = var.aws_route_table_public_rt_cidr_block
gateway_id = aws_internet_gateway.main.id
}
}
I commented out that one inline route and created a pull request, but terraform plan showed no changes. I still applied those changes and added a new "aws_route" resource to include the one route:
resource "aws_route" "aws_internet_gateway" {
route_table_id = aws_route_table.public_rt.id
destination_cidr_block = var.aws_route_table_public_rt_cidr_block
gateway_id = aws_internet_gateway.main.id
}
I created a pull request and terraform apply errored because that route already exists and a duplicate route cannot be created. So I deleted the routes from the AWS console and applied Terraform again; it succeeded and added those changes.
In a similar way, I originally had only "aws_security_group" with one inline ingress and one inline egress block. I added new rule resources, commented out those inline blocks, and deleted the existing rules in the AWS console; terraform apply then created those security group rules.
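One of the new standalone rules looks roughly like this (the ports, CIDR, and security group reference are placeholders, not my exact values):
resource "aws_security_group_rule" "allow_http_in" {
  type              = "ingress"
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]                  # placeholder CIDR
  security_group_id = aws_security_group.main.id      # placeholder reference
}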
Hopefully I have done everything right, but the main issue is this: when these "aws_security_group" and "aws_route_table" resources have inline blocks and I comment those inline blocks out, terraform plan shows no changes.
Looking at my state file, the inline ingress and egress for "aws_security_group" were deleted/removed by Terraform.

Related

Diagnostic Settings - Master" already exists - to be managed via Terraform this resource needs to be imported into the State

I have a diagnostic setting configured on my master database, as shown below in my main.tf:
resource "azurerm_monitor_diagnostic_setting" "main" {
name = "Diagnostic Settings - Master"
target_resource_id = "${azurerm_mssql_server.main.id}/databases/master"
log_analytics_workspace_id = azurerm_log_analytics_workspace.main.id
log {
category = "SQLSecurityAuditEvents"
enabled = true
retention_policy {
enabled = false
}
}
metric {
category = "AllMetrics"
retention_policy {
enabled = false
}
}
lifecycle {
ignore_changes = [log, metric]
}
}
If I don't delete it in the resource group before I run Terraform, I get the error:
Diagnostic Settings - Master" already exists - to be managed via
Terraform this resource needs to be imported into the State
I know that if I delete the SQL Server the diagnostic setting remains, but I don't know why that is a problem for Terraform. I have also noticed that the resource is in my tfplan.
What could be the problem?
If you have created the resource in Azure in a different way (i.e. via the Portal/Templates/CLI/PowerShell), Terraform is not aware that the resource already exists in Azure. So during terraform plan, it shows you the plan of what will be created from what you have written in main.tf. But when you run terraform apply, the azurerm provider checks the resource names against the existing resources of the same resource providers and returns an error that the resource already exists and needs to be imported to be managed by Terraform.
Also, if you have created everything from Terraform, then running terraform destroy deletes all the resources present in main.tf.
Well, it's in the .tfplan and also in main.tf, so it's imported, right?
Mentioning the resource and its details in main.tf and .tfplan doesn't mean that you have imported the resource or that Terraform is aware of it. Terraform is only aware of the resources that are stored in the Terraform state file, i.e. .tfstate.
So, to overcome the error without deleting the resource from the Portal, you have to keep the resource in main.tf as you have already done and then use the terraform import command to import the Azure resource into the Terraform state file, like below:
terraform import azurerm_monitor_diagnostic_setting.example "{resourceID}|{DiagnosticsSettingsName}"
So, for you it will be like:
terraform import azurerm_monitor_diagnostic_setting.main "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Sql/servers/<SQLServerName>/databases/master|Diagnostic Settings - Master"
After the import is done, any changes you make to that resource from Terraform will be reflected in the Portal as well, and you will be able to destroy the resource from Terraform too.

How to update an existing cloudflare_record in terraform and github actions

I created my project with code from the HashiCorp tutorial "Host a static website with S3 and Cloudflare", but the tutorial didn't mention GitHub Actions. So when I put my project in GitHub Actions, even though terraform plan and terraform apply run successfully locally, I get errors on terraform apply:
Error: expected DNS record to not already be present but already exists
with cloudflare_record.site_cname ...
with cloudflare_record.www
I have two resources in my main.tf, one for the site domain and one for www, like the following:
resource "cloudflare_record" "site_cname" {
zone_id = data.cloudflare_zones.domain.zones[0].id
name = var.site_domain
value = aws_s3_bucket.site.website_endpoint
type = "CNAME"
ttl = 1
proxied = true
}
resource "cloudflare_record" "www" {
zone_id = data.cloudflare_zones.domain.zones[0].id
name = "www"
value = var.site_domain
type = "CNAME"
ttl = 1
proxied = true
}
If I remove these lines of code from my main.tf and then run terraform apply locally, I get the warning that this will destroy my resources.
Which should I do?
add an allow_overwrite somewhere (I don't see examples of how to use this in the docs, and the ways I've tried to add it generated errors)
remove the lines of code from main.tf, knowing the GitHub Actions run will destroy my cloudflare_record.www and cloudflare_record.site_cname, and knowing I can see my zone ID and CNAME if I log into Cloudflare, so maybe this code isn't necessary after the initial setup
run terraform import somewhere? If so, where do I find the zone ID and record ID?
or something else?
Where is your terraform state? Did you store it locally or in a remote location?
That would explain why you don't have any problems locally and why it's trying to recreate the resources in GitHub Actions.
More information about terraform backend (where the state is stored) -> https://www.terraform.io/docs/language/settings/backends/index.html
And how to create one with S3 for example ->
https://www.terraform.io/docs/language/settings/backends/s3.html
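For example, a minimal S3 backend block might look like this (the bucket, key, and region are placeholders):
terraform {
  backend "s3" {
    bucket = "my-terraform-state"              # placeholder bucket name
    key    = "static-site/terraform.tfstate"   # placeholder state object key
    region = "us-east-1"
  }
}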
It shouldn't be a problem if Terraform dropped and re-created the DNS records, but for a better result you need to ensure that GitHub Actions has access to the (current) workspace state.
Since Terraform Cloud provides a free plan, there is no reason not to take advantage of it. Just create a workspace through their dashboard, add a "remote" backend configuration to your project, and ensure that GitHub Actions uses a Terraform API token at runtime (you would set it via GitHub repository settings > Secrets).
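A minimal "remote" backend sketch, assuming a Terraform Cloud organization and workspace (both names here are placeholders):
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "my-org"              # placeholder organization name

    workspaces {
      name = "my-static-site"            # placeholder workspace name
    }
  }
}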
You may want to check this example — Terraform Starter Kit
infra/backend.tf
infra/dns-records.tf
scripts/tf.js
Here is how you can pass the Terraform API token from the secrets.TERRAFORM_API_TOKEN GitHub secret to the Terraform CLI:
- env: { TERRAFORM_API_TOKEN: "${{ secrets.TERRAFORM_API_TOKEN }}" }
  run: |
    echo "credentials \"app.terraform.io\" { token = \"$TERRAFORM_API_TOKEN\" }" > ./.terraformrc

Issue when creating Logic App through Terraform

I'm trying to create a Logic App through Terraform and I'm facing an issue related to an API Connection.
Here are the manual steps for creating the API Connection:
Create a Logic App in your resource group and go to Logic App Designer
Select the HTTP trigger request and click on "Next Step", then search and select "Azure Container Instance"
Click on "Create or update container group" and it should ask you to sign in
Now if you scroll all the way down, you should see "Connected to ...... Change Connection"
If "Change Connection" is clicked, it will show the existing aci connections or let you create a new one.
I'm trying to create the Logic App through Terraform and I'm facing an issue with the above-mentioned steps.
What I'm doing is:
Exported the existing Logic App template from another environment
Converted the values in the JSON into parameters, kept them in variables.tf, and put the final values in terraform.tfvars
terraform plan works fine; however, terraform apply causes an issue
Error message:
Error: waiting for creation of Template Deployment "logicapp_arm_template" (Resource Group "resource_group_name"): Code="DeploymentFailed" Message="At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details." Details=[{"code":"NotFound","message":"{\r\n \"error\": {\r\n \"code\": \"ApiConnectionNotFound\",\r\n \"message\": \"**The API connection 'aci' could not be found**.\"\r\n }\r\n}"}]
Further troubleshooting shows that the error occurs on this line in terraform.tfvars:
connections_aci_externalid = "/subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Web/connections/aci"
I deduced that the issue is that the "aci" API connection has not been created.
So I created the aci connection manually through the Azure Portal (see the steps at the top of the post).
However, when I run terraform apply, the new error below shows up:
A resource with the ID "/subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Resources/deployments/logicapp_arm_template" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_resource_group_template_deployment" for more information.
My question is: since I'm creating the Logic App using the existing template, how should the "aci" portion be handled through Terraform?
For your last error message, you could remove the terraform.tfstate and terraform.tfstate.backup files in the Terraform working directory and the existing resources in the Azure Portal, then run terraform plan and terraform apply again.
If you have a separate working ARM Template, you can invoke the template deployment with terraform resource azurerm_resource_group_template_deployment. You could provide the contents of the ARM Template parameters file with argument parameters_content and the contents of the ARM Template file with argument template_content.
In this case, if you have manually created a new API Connection, you can directly input your new API connection ID /subscriptions/<subscription_id>/resourceGroups/<resourceGroup_id>/providers/Microsoft.Web/connections/aci. Alternatively, you can create the API Connection automatically when you deploy your ARM Template with the resource Microsoft.Web/connections. Read this blog for more samples.
If using azurerm_resource_group_template_deployment, make sure that the deployment mode is set to Incremental, otherwise you run into terrible state issues. An example from our Terraform module can be seen below. We use this to deploy an ARM template, which we design in our development environment and export from the Azure Portal. This enables us to use parameters to deploy exactly the same logic app in test, acceptance, and production environments.
resource "azurerm_logic_app_workflow" "workflow" {
name = var.logic_app_name
location = var.location
resource_group_name = var.resource_group_name
}
resource "azurerm_resource_group_template_deployment" "workflow_deployment" {
count = var.arm_template_path == null ? 0 : 1
name = "${var.logic_app_name}-deployment"
resource_group_name = var.resource_group_name
deployment_mode = "Incremental"
template_content = file(var.arm_template_path)
parameters_content = jsonencode(local.parameters_content)
depends_on = [azurerm_logic_app_workflow.workflow]
}
Note the conditional using count. Setting arm_template_path = null by default enables us to deploy only the workflow "container" in our development environment, which can then be used as a "canvas" for designing the logic app.

How to leave imported property unchanged?

I have a hosted zone in the main.tf:
provider "aws" {
region = "us-east-1"
}
resource "aws_route53_zone" "zone" {
}
I then can import an existing resource and use its parameters in other resources:
terraform import aws_route53_zone.zone <ZoneId>
Inspecting the state file, I see the parameters are all there, including the domain name. But when I try to apply, it says that name is not set:
Error: aws_route53_zone.zone: "name": required field is not set
I don't want to specify the name in the .tf file as it would decrease the portability of my .tf, but specifying a placeholder would change the hosted zone itself.
Is there a way to ignore parameters for imported resources or specify them as "leave as-is"?
I could add a variable and populate it from the state file for every terraform call, but I'm hoping for something simpler.
When you import a resource, Terraform doesn't (yet) automatically generate Terraform code for you; instead, you must write the resource yourself and then check the plan.
Normally the pattern is to create a skeleton resource like you've done, import the resource, fill out any required fields, run a plan, and then adjust your resource configuration so that it doesn't make any unwanted changes.
From then on, Terraform will be able to manage the resource as normal, applying any changes you make to the configuration or reverting changes made outside of Terraform back to how they are defined in your Terraform code.
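For example, after the import, the skeleton would typically be filled in to match what is already in the state so that the plan shows no changes (the domain here is a placeholder):
resource "aws_route53_zone" "zone" {
  # Placeholder: use the zone's actual domain name as recorded in the state.
  name = "example.com"
}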

Terraform and Updates

Being able to capture infrastructure in a single Terraform file has obvious benefits. However, I am not clear in my mind how - once, for example, a virtual machine has been created - subsequent updates are handled.
So, to provide a specific scenario. Suppose that using Terraform we set up an Azure vm with SQL Server 2014. Then, after a month we decide that we should like to update that vm with the latest service pack for SQL Server 2014 that has just been released.
Is the recommended practice that we update the Terraform configuration file and re-apply it?
I have to disagree with the other two responses. Terraform can handle infrastructure updates just fine. The key thing to understand, however, is that Terraform largely follows an immutable infrastructure paradigm, which means that to "update" a resource, you delete the old resource and create a new one to replace it. This is much like functional programming, where variables are immutable, and to "update" something, you actually create a new variable.
The typical pattern with Terraform is to use it to deploy a server image, such as a Virtual Machine (VM) Image (e.g. an Amazon Machine Image (AMI)) or a Container Image (e.g. a Docker Image). When you want to "update" something, you create a new version of your image, deploy that onto a new server, and undeploy the old server.
Here's an example of how that works:
Imagine that you're building a Ruby on Rails app. You get the app working in dev and it's time to deploy to prod. The first step is to package the app as an AMI. You could do this using a tool like Packer. Now you have an AMI with id ami-1234.
Here is a Terraform template you could use to deploy this AMI on a server (an EC2 Instance) in AWS with an Elastic IP Address attached to it:
resource "aws_instance" "example" {
ami = "ami-1234"
instance_type = "t2.micro"
}
resource "aws_eip" "example" {
instance = "${aws_instance.example.id}"
}
When you run terraform apply, Terraform deploys the server, attaches an IP address to it, and now when users visit that IP, they will see v1 of your Rails app.
Some time later, you update your Rails app and want to deploy the new version, v2. To do that, you build a new AMI (i.e. you run Packer again) to get an AMI with ID "ami-5678". You update your Terraform templates accordingly:
resource "aws_instance" "example" {
ami = "ami-5678"
instance_type = "t2.micro"
}
When you run terraform apply, Terraform undeploys the old server (which it can find because Terraform records the state of your infrastructure), deploys a new server with the new AMI, and now users will see v2 of your code at that same IP.
Of course, there is one problem here: in between the time when Terraform undeploys v1 and when it deploys v2, your users would see downtime. To work around that, you could use Terraform's create_before_destroy lifecycle setting:
resource "aws_instance" "example" {
ami = "ami-5678"
instance_type = "t2.micro"
lifecycle {
create_before_destroy = true
}
}
With create_before_destroy set to true, Terraform will create the replacement server first, switch the IP to it, and then remove the old server. This allows you to do zero-downtime deployment with immutable infrastructure (note: zero-downtime deployment works better with a load balancer that can do health checks than a simple IP address, especially if your server takes a long time to boot).
For more information on this, check out the book Terraform: Up & Running. The code samples for the book include an example of a zero-downtime deployment with a cluster of servers and a load balancer: https://github.com/brikis98/terraform-up-and-running-code
Terraform is an infrastructure provisioning tool; the configuration/deployment tools would be:
Chef
SaltStack
Ansible
etc.
As I am working with Chef: basically, I provision the server instance with Terraform, then Terraform (via a Terraform provisioner) hands control to Chef for system configuration and deployment.
For the moment, Terraform cannot delete the node/client in the Chef server, so after you run terraform destroy, you need to remove them yourself.
Terraform isn't best placed for this sort of task. Terraform is an infrastructure management tool, not a configuration management tool.
You should use tools such as Chef, Puppet, or Ansible to deal with the configuration of the system.
If you must use Terraform for this task, you could create a template_file resource and place in it the configuration required to install SQL Server, and how to upgrade it if a different version is presented. Reference: here
Put that code inside a provisioner under a null_resource resource. Reference: here.
The trigger for this could be the variable containing the SQL version, so when you present a different version of SQL it'll execute that provisioner on each instance to upgrade the version.
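A minimal sketch of that null_resource/trigger pattern, assuming a var.sql_version variable, connection credentials, and an upgrade script that already exist (all of these names are placeholders):
resource "null_resource" "sql_upgrade" {
  # Re-run the provisioner whenever the SQL version variable changes.
  triggers = {
    sql_version = var.sql_version
  }

  # Placeholder connection details for the Windows VM running SQL Server.
  connection {
    type     = "winrm"
    host     = var.vm_public_ip
    user     = var.admin_username
    password = var.admin_password
  }

  provisioner "remote-exec" {
    inline = [
      # Placeholder script path; it would install the requested service pack.
      "powershell.exe -File C:/scripts/upgrade-sql.ps1 -Version ${var.sql_version}",
    ]
  }
}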
