How to update an existing cloudflare_record in terraform and github actions - terraform

I created my project with code from the HashiCorp tutorial "Host a static website with S3 and Cloudflare", but the tutorial didn't mention GitHub Actions. So, when I put my project into GitHub Actions, even though terraform plan and terraform apply succeed locally, I get errors on terraform apply:
Error: expected DNS record to not already be present but already exists
with cloudflare_record.site_cname ...
with cloudflare_record.www
I have two resources in my main.tf, one for the site domain and one for www, like the following:
resource "cloudflare_record" "site_cname" {
zone_id = data.cloudflare_zones.domain.zones[0].id
name = var.site_domain
value = aws_s3_bucket.site.website_endpoint
type = "CNAME"
ttl = 1
proxied = true
}
resource "cloudflare_record" "www" {
zone_id = data.cloudflare_zones.domain.zones[0].id
name = "www"
value = var.site_domain
type = "CNAME"
ttl = 1
proxied = true
}
If I remove these lines of code from my main.tf and then run terraform apply locally, I get a warning that this will destroy my resources.
Which of these should I do?
add an allow_overwrite somewhere? I don't see examples of how to use it in the docs, and the ways I've tried to add it generated errors.
remove the lines from main.tf, accepting that the GitHub Actions run will destroy cloudflare_record.www and cloudflare_record.site_cname? I can see my zone ID and CNAME if I log into Cloudflare, so maybe this code isn't necessary after the initial setup.
run terraform import somewhere? If so, where do I find the zone ID and record ID? (a sketch of the import format follows this list)
or something else?
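For reference, the Cloudflare provider supports importing an existing record by zone ID and record ID; a sketch with placeholder IDs (the zone ID appears on the zone's Overview page in the Cloudflare dashboard, and record IDs can be listed via the Cloudflare API):
# placeholders, replace with the real zone and record IDs
terraform import cloudflare_record.site_cname <zone_id>/<record_id>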

Where is your Terraform state? Did you store it locally or in a remote location?
If it's stored locally, that would explain why you don't have any problems locally and why GitHub Actions is trying to recreate the resources.
More information about terraform backend (where the state is stored) -> https://www.terraform.io/docs/language/settings/backends/index.html
And how to create one with S3 for example ->
https://www.terraform.io/docs/language/settings/backends/s3.html
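A minimal S3 backend configuration looks roughly like this; bucket, key, and region below are placeholders to replace with your own:
terraform {
  backend "s3" {
    bucket = "my-terraform-state"              # placeholder bucket name
    key    = "static-site/terraform.tfstate"   # placeholder state path
    region = "us-east-1"
  }
}
Once the state lives in a remote backend, both your local runs and GitHub Actions read and write the same state, so the records are no longer re-created.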

Dropping and re-creating the DNS records wouldn't be a problem in itself, but for a better result you need to ensure that GitHub Actions has access to the (current) workspace state.
Since Terraform Cloud provides a free plan, there is no reason not to take advantage of it. Just create a workspace through their dashboard, add a "remote" backend configuration to your project, and ensure that GitHub Actions uses a Terraform API token at runtime (you would set it via GitHub repository settings > Secrets).
You may want to check this example — Terraform Starter Kit
infra/backend.tf
infra/dns-records.tf
scripts/tf.js
Here is how you can pass the Terraform API token from the secrets.TERRAFORM_API_TOKEN GitHub secret to the Terraform CLI:
- env: { TERRAFORM_API_TOKEN: "${{ secrets.TERRAFORM_API_TOKEN }}" }
  run: |
    echo "credentials \"app.terraform.io\" { token = \"$TERRAFORM_API_TOKEN\" }" > ./.terraformrc

Related

Can Terraform Destroy resources created using the "http data source" provider?

I have a Terraform project for deploying a few VMs into Azure. Once the VMs are created successfully, I want to automate the creation of DNS records. Additionally, the application that runs on the VMs has APIs to POST configurations. I've successfully created my DNS records and POSTed configurations to the VMs using the http provider. However, when I run terraform destroy it obviously doesn't destroy them. I'm curious if there is a way, when running terraform destroy, to have these records and configurations deleted. Is there a way to manually add destroy steps in which I could just send more HTTP requests to delete them?
Is there a better method of doing this that you would recommend?
Also, I'll be going back and making all these fields sensitive variables with a .tfvars file. This is simply for testing right now.
DNS record example using Cloudns
data "http" "dns_record" {
url = "https://api.cloudns.net/dns/add-record.json?auth-id=0000&auth-password=0000&domain-name=domain.com&record-type=A&host=testhost&record=123.123.123.123&ttl=1800"
}
VM API config example
data "http" "config" {
url = "https://host.domain.com/api/configuration/endpoint"
method = "POST"
request_body = jsonencode({"name"="testfield", "field_type"="configuration"})
# Optional request headers
request_headers = {
authorization = var.auth
}
}
You should not use data sources for operations that have non-idempotent side effects or change any external state. A data source should only read information; Terraform does not manage data sources in the state the way it manages resources, so there is no mechanism to destroy them, as there is nothing to destroy.
Specific Provider
In your case, there seems to be a community provider for your DNS provider, e.g. mangadex-pub/cloudns. This way you could manage your DNS entry via a resource, which terraform destroy does support.
resource "cloudns_dns_record" "some-record" {
# something.cloudns.net 600 in A 1.2.3.4
name = "something"
zone = "cloudns.net"
type = "A"
value = "1.2.3.4"
ttl = "600"
}
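To use a community provider like this, you also need to declare it in required_providers; a minimal sketch (the version constraint is illustrative, check the registry for current releases):
terraform {
  required_providers {
    cloudns = {
      source  = "mangadex-pub/cloudns"
      version = ">= 0.1.0"   # illustrative constraint
    }
  }
}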
null_resource with provisioners
In cases where there is no Terraform provider for the API you want to consume, you can try using a null_resource with provisioners. Provisioners have some caveats, so use them with caution. To cite the Terraform docs:
Use provisioners as a last resort. There are better alternatives for most situations.
resource "null_resource" "my_resource_id" {
provisioner "local-exec" {
command = "... command that creates the actual resource"
}
provisioner "local-exec" {
when = destroy
command = "... command that destroys the actual resource"
}
}
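One caveat worth knowing: destroy-time provisioners can only reference the resource's own attributes via self, so anything the delete command needs has to be stashed in triggers. A rough sketch along the lines of the ClouDNS calls above; the delete endpoint and its parameters are assumptions to verify against the ClouDNS API docs:
resource "null_resource" "dns_record" {
  # Destroy provisioners may only reference self, so keep the URLs in triggers.
  triggers = {
    create_url = "https://api.cloudns.net/dns/add-record.json?auth-id=0000&auth-password=0000&domain-name=domain.com&record-type=A&host=testhost&record=123.123.123.123&ttl=1800"
    delete_url = "https://api.cloudns.net/dns/delete-record.json?auth-id=0000&auth-password=0000&domain-name=domain.com&record-id=12345"   # assumed endpoint and parameters
  }

  # Create the record when the resource is created.
  provisioner "local-exec" {
    command = "curl -fsS '${self.triggers.create_url}'"
  }

  # Delete the record when the resource is destroyed.
  provisioner "local-exec" {
    when    = destroy
    command = "curl -fsS '${self.triggers.delete_url}'"
  }
}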

Terraform Cloud with remote backend connected to GitHub VCS

I have Terraform Cloud as a backend, integrated with GitHub, used to provision AWS resources. When I change my Terraform code and create a pull request, Terraform generates a plan.
Here is the structure of my Terraform code:
Modules/
  alb/
    modulealb.tf
Environments/
  dev/
    alb.tf
The dev folder is the working directory for my Terraform Cloud workspace.
The issue is, when I make some changes in modulealb.tf and commit them to GitHub, Terraform Cloud does not recognize those changes and no infrastructure updates are planned.
How can I make Terraform Cloud recognize my changes in modules?
From VS Code, I tried
terraform init -upgrade
terraform get -update
The modules are initialized and I commit my changes to GitHub, but those module changes are still not being picked up by Terraform Cloud.
Please point me in the right direction.
Thank you.
Edit:
To provide more context, I am working on changing my security group and route table modules.
My previous module uses "aws_route_table" with an inline route.
resource "aws_route_table" "public_rt" {
vpc_id = aws_vpc.main.id
route {
cidr_block = var.aws_route_table_public_rt_cidr_block
gateway_id = aws_internet_gateway.main.id
}
}
Now I commented out that inline route and opened a pull request, but terraform plan showed no changes. I applied those changes anyway and added a new "aws_route" resource to define that one route.
resource "aws_route" "aws_internet_gateway" {
route_table_id = aws_route_table.public_rt.id
destination_cidr_block = var.aws_route_table_public_rt_cidr_block
gateway_id = aws_internet_gateway.main.id
}
I created a pull request and terraform apply errored because that route already exists and a duplicate route cannot be created. So I deleted the routes from the AWS console and applied Terraform again; it succeeded and added those changes.
Similarly, I only had "aws_security_group" with one inline ingress and one inline egress block. I added new standalone rule resources while commenting out those inline blocks, deleted the existing rules in the AWS console, and terraform apply created those security group rules.
Hopefully I have done everything right, but the main issue here is that when these "aws_security_group" and "aws_route_table" resources have inline blocks and I comment those inline blocks out, terraform plan shows no changes.
Looking at my state file, for "aws_security_group" the inline ingress and egress are deleted/removed by Terraform.
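For reference, the standalone equivalent of an inline ingress block looks roughly like this; the resource names below are illustrative, not taken from my code:
resource "aws_security_group_rule" "allow_https_in" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.main.id   # illustrative security group reference
}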

Terraform cloud config dynamic workspace name

I'm building a CI/CD pipeline using GitHub Actions and Terraform. I have a main.tf file like the one below, which I'm calling from a GitHub Action for multiple environments. I'm using https://github.com/hashicorp/setup-terraform to interact with Terraform in GitHub Actions. I have a MyService component and I'm deploying it to DEV, UAT and PROD environments. I would like to reuse main.tf for all of the environments and dynamically set the workspace name like so: MyService-DEV, MyService-UAT, MyService-PROD. Usage of variables is not allowed in the terraform/cloud block. I'm using HashiCorp cloud to store state.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.0"
    }
  }

  cloud {
    organization = "tf-organization"
    workspaces {
      name = "MyService-${env.envname}" # <== not allowed to use variables
    }
  }
}
Update
I finally managed to get this up and running thanks to the helpful comments. Here are my findings:
TF_WORKSPACE needs to be defined upfront, e.g. service-dev
I didn't get tags to work the way I wanted when running in automation. If I define a tag in cloud.workspaces.tags as 'service', there is no way to set a second tag like 'dev' dynamically. Both tags ['service', 'dev'] are needed during init in order for Terraform to select the workspace service-dev automatically.
I ended up using the tfe provider in order to set up workspaces (with tags) automatically, as sketched below. In the end I still needed to set TF_WORKSPACE=service-dev
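A minimal sketch of the tfe provider approach, assuming the workspace and organization names above (tag_names is the tfe provider's argument for workspace tags):
provider "tfe" {
  # Authenticates via the TFE_TOKEN environment variable or a credentials block.
}

resource "tfe_workspace" "service_dev" {
  name         = "service-dev"
  organization = "tf-organization"
  tag_names    = ["service", "dev"]
}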
It doesn't make sense to refer to terraform.workspace as part of the workspaces block inside a cloud block, because that block defines which remote workspaces Terraform will use and therefore dictates what final value terraform.workspace will have in the rest of your configuration.
To declare that your Terraform configuration belongs to more than one workspace in Terraform Cloud, you can assign each of those workspaces the tag "MyService" and then use the tags argument instead of the name argument:
cloud {
  organization = "tf-organization"
  workspaces {
    tags = ["MyService"]
  }
}
If you assign that tag to hypothetical MyService-dev and MyService-prod workspaces in Terraform Cloud and then initialize with the configuration above, Terraform will present those two workspaces for selection using the terraform workspace commands when working in this directory.
terraform.workspace will then appear as either MyService-dev or MyService-prod, depending on which one you have selected.
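For example, after terraform init you can pick the workspace interactively or via the TF_WORKSPACE environment variable (workspace names here are the hypothetical ones above):
terraform workspace select MyService-dev
# or, in automation:
TF_WORKSPACE=MyService-dev terraform plan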

Accessing existing resource info from new resources

My title might not sum up my question correctly.
I have a Terraform stack that creates a resource group and a key vault, amongst other things. This has already been run and the resources exist.
I am now adding another resource to this same Terraform stack, namely a MySQL server. I know that if I just re-run the stack it will check the state file and simply add my MySQL server.
However, as part of this MySQL server creation I am providing a password, and I want to write this password to the key vault that already exists.
If I was doing this from the start, my Terraform would look like:
resource "azurerm_key_vault_secret" "sqlpassword" {
name = "flagr-mysql-password"
value = random_password.sqlpassword.result
key_vault_id = azurerm_key_vault.shared_kv.id
depends_on = [
azurerm_key_vault.shared_kv
]
}
However, I believe that since the key vault already exists this would error, as Terraform wouldn't know the value of azurerm_key_vault.shared_kv.id unless I destroy the key vault and allow Terraform to recreate it. Is that correct?
I could replace azurerm_key_vault.shared_kv.id with the actual resource ID from Azure, but then if I were ever to run this stack to create a new environment, I presume it would write the value into my old key vault?
I have done this recently for an AWS deployment: you would run terraform import on the azurerm_key_vault.shared_kv resource to bring it under Terraform management, and then you would be able to deploy azurerm_key_vault_secret.
To import, you will need to write the azurerm_key_vault.shared_kv resource block so that it matches the existing key vault (this will require a few iterations).
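The import command takes the Azure resource ID of the existing key vault; a sketch with placeholder subscription, resource group, and vault names:
terraform import azurerm_key_vault.shared_kv /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<vault-name>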

Migrate Terraform CLI workspaces to Terraform Cloud error

I am trying to migrate a project's CLI workspaces to Terraform Cloud. I am using Terraform version 0.14.8 and following the official guide here.
$ terraform0.14.8 workspace list
default
* development
production
staging
Currently, the project uses the S3 remote state backend configuration
terraform {
  backend "s3" {
    profile              = "..."
    key                  = "..."
    workspace_key_prefix = "environments"
    region               = "us-east-1"
    bucket               = "terraform-state-bucketA"
    dynamodb_table       = "terraform-state-bucketA"
    encrypt              = true
  }
}
I changed the backend configuration to:
backend "remote" {
hostname = "app.terraform.io"
organization = "orgA"
workspaces {
prefix = "happyproject-"
}
}
and executed terraform0.14.8 init in order to begin the state migration process. The expected behaviour would be to create 3 workspaces in Terraform Cloud:
happyproject-development
happyproject-staging
happyproject-production
However, I get the following error:
$ terraform0.14.8 init
Initializing modules...
Initializing the backend...
Backend configuration changed!
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.
Terraform detected that the backend type changed from "s3" to "remote".
Error: Error looking up workspace
Workspace read failed: invalid value for workspace
I also enabled TRACE level logs and just before it throws the error I can see this: 2021/03/23 10:08:03 [TRACE] backend/remote: looking up workspace for orgA/.
Notice the empty string after orgA/ and the omission of the prefix! I am guessing that TF tries to query Terraform Cloud for the default workspace, which is an empty string, and it fails to do so.
I have not been using the default workspace at all and it just appears when I am executing terraform0.14.8 init. The guide mentions:
Some backends, including the default local backend, allow a special default workspace that doesn't have a specific name. If you previously used a combination of named workspaces and the special default workspace, the prompt will next ask you to choose a new name for the default workspace, since Terraform Cloud doesn't support unnamed workspaces:
However, it never prompts me to choose a name for the default workspace. Any help would be much appreciated!
I had a similar issue, and what helped me was to create the empty workspace with the expected name in advance and then run terraform init.
I also copied the .tfstate file from the remote location to the root directory of the project before doing init. Hope this helps you as well.
What I ended up doing was:
Created the empty workspaces in Terraform Cloud
For every CLI workspace, I pointed the backend to the respective TFC workspace and executed terraform init. That way, the Terraform state was automatically migrated from the S3 backend to TFC, as sketched below.
Finally, after all CLI workspaces were migrated, I used the prefix argument of the workspaces block instead of the name argument to manage the different TFC workspaces
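A sketch of the intermediate, per-workspace backend configuration used in the second step (one workspace at a time, here the development one), before switching back to prefix in the final step:
backend "remote" {
  hostname     = "app.terraform.io"
  organization = "orgA"

  workspaces {
    name = "happyproject-development"
  }
}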
