Can Terraform destroy resources created using the "http data source" provider? - terraform

I have a Terraform project that deploys a few VMs into Azure. Once the VMs are created successfully, I want to automate the creation of DNS records. Additionally, the application that runs on the VMs has APIs for POSTing configurations. I've successfully created my DNS records and POSTed configurations to the VMs using the http provider. However, when I run terraform destroy, it obviously doesn't destroy them. Is there a way, when running terraform destroy, to have these records and configurations deleted? Is there a way to manually add destroy steps in which I could just send more HTTP requests to delete them?
Is there a better method for doing this that you would recommend?
Also, I'll be going back and making all of these fields sensitive variables with a .tfvars file; this is simply for testing right now.
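(For reference, a sensitive variable declaration might look like the sketch below; the actual value would be supplied via a .tfvars file, and the sensitive flag requires Terraform 0.14 or later.)
variable "auth" {
  type      = string
  sensitive = true # requires Terraform 0.14+
}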
DNS record example using ClouDNS
data "http" "dns_record" {
url = "https://api.cloudns.net/dns/add-record.json?auth-id=0000&auth-password=0000&domain-name=domain.com&record-type=A&host=testhost&record=123.123.123.123&ttl=1800"
}
VM API config example
data "http" "config" {
url = "https://host.domain.com/api/configuration/endpoint"
method = "POST"
request_body = jsonencode({"name"="testfield", "field_type"="configuration"})
# Optional request headers
request_headers = {
authorization = var.auth
}
}

You should not use data sources for operations that have non-idempotent side effects or that change any external state. A data source should only read information; Terraform does not manage data sources in the state the way it manages resources. Therefore, there is no mechanism to destroy data sources, as there is nothing to destroy.
Specific Provider
In your case, there seem to be community providers for your DNS provider, e.g. mangadex-pub/cloudns. This way you could manage your DNS entry as a resource, which terraform destroy will handle.
resource "cloudns_dns_record" "some-record" {
# something.cloudns.net 600 in A 1.2.3.4
name = "something"
zone = "cloudns.net"
type = "A"
value = "1.2.3.4"
ttl = "600"
}
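Note that you would also need to declare the community provider in your configuration, along these lines:
terraform {
  required_providers {
    cloudns = {
      source = "mangadex-pub/cloudns"
    }
  }
}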
null_resource with provisioners
In cases where there is no Terraform provider for the API you want to consume, you can try using a null_resource with provisioners. Provisioners have some caveats, so use them with caution. To cite the Terraform docs:
Use provisioners as a last resort. There are better alternatives for most situations.
resource "null_resource" "my_resource_id" {
provisioner "local-exec" {
command = "... command that creates the actual resource"
}
provisioner "local-exec" {
when = destroy
command = "... command that destroys the actual resource"
}
}
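Applied to the ClouDNS case from the question, a hedged sketch might look like the following. It assumes ClouDNS exposes a matching delete-record.json endpoint and that you know the record ID (check their API docs); also note that destroy-time provisioners may only reference self:
# Sketch only: the delete endpoint and record-id handling are assumptions.
resource "null_resource" "dns_record" {
  triggers = {
    record_id = "12345" # hypothetical; ideally captured from the create response
  }

  provisioner "local-exec" {
    command = "curl 'https://api.cloudns.net/dns/add-record.json?auth-id=0000&auth-password=0000&domain-name=domain.com&record-type=A&host=testhost&record=123.123.123.123&ttl=1800'"
  }

  provisioner "local-exec" {
    when    = destroy
    command = "curl 'https://api.cloudns.net/dns/delete-record.json?auth-id=0000&auth-password=0000&domain-name=domain.com&record-id=${self.triggers.record_id}'"
  }
}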

Related

Terraform AzureRM Continually Modifying API Management with Proxy Configuration for Default Endpoint

We are terraforming our Azure API Management instance.
...
resource "azurerm_api_management" "apim" {
name = "the-apim"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
...
hostname_configuration {
proxy {
host_name = "the-apim.azure-api.net"
negotiate_client_certificate = true
}
}
}
...
We need to include the hostname_configuration block so that we can switch negotiate_client_certificate to true for the default endpoint.
This does the job; however, every time Terraform runs, it plans to modify the APIM instance by adding the hostname_configuration block again:
+ hostname_configuration {
    + proxy {
        + host_name                    = "the-apim.azure-api.net"
        + negotiate_client_certificate = true
      }
  }
Is there a way to prevent this from happening? In the portal I can see this value is set to true.
I suggest you try pairing this with lifecycle > ignore_changes.
The ignore_changes feature is intended to be used when a resource is created with references to data that may change in the future, but should not affect said resource after its creation. In some rare cases, settings of a remote object are modified by processes outside of Terraform, which Terraform would then attempt to "fix" on the next run. In order to make Terraform share management responsibilities of a single object with a separate process, the ignore_changes meta-argument specifies resource attributes that Terraform should ignore when planning updates to the associated remote object.
In your case, hostname_configuration is considered a "nested block" or "attribute as block" in Terraform, so the usage of ignore_changes is not so straightforward (you can't just add the property name, as you would if you wanted to ignore changes to resource_group_name, for example, which is directly a property). From a GitHub issue back in 2018, it seems you could use the TypeSet hash of the nested block in an ignore_changes section.
Even though I can't test this, my suggestion for you:
deploy your azurerm_api_management resource normally with the hostname_configuration block
check the state file for your resource and get the TypeSet hash of the hostname_configuration part; it should look similar to hostname_configuration.XXXXXX
add an ignore_changes section passing the above
resource "azurerm_api_management" "apim" {
# ...
lifecycle {
ignore_changes = [
"hostname_configuration.XXXXXX",
]
}
}
Sometimes such issues are caused by bugs in the provider: it may not be storing the configuration in the state file, or not reading the stored state back for this block. Try upgrading to the latest available provider and see if that sorts the issue.
If that does not solve it, you can try defining this configuration as a separate resource, as per the Terraform documentation: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/api_management
It's possible to define Custom Domains both within the azurerm_api_management resource via the hostname_configurations block and by using the azurerm_api_management_custom_domain resource. However, it's not possible to use both methods to manage Custom Domains within an API Management Service, since there'll be conflicts.
So please try removing the hostname_configuration block and adding it as a separate resource, as per this documentation: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/api_management_custom_domain
This will most likely fix the issue.
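A hedged sketch of that separate-resource approach (in recent azurerm versions the nested block is gateway; older versions used proxy, so check the docs for the provider version you pin):
resource "azurerm_api_management_custom_domain" "apim" {
  api_management_id = azurerm_api_management.apim.id

  gateway {
    host_name                    = "the-apim.azure-api.net"
    negotiate_client_certificate = true
  }
}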

How to update an existing cloudflare_record in terraform and github actions

I created my project with code from the HashiCorp tutorial "Host a static website with S3 and Cloudflare", but the tutorial didn't mention GitHub Actions. So, even though terraform plan and terraform apply complete successfully locally, when I put my project in GitHub Actions I get errors on terraform apply:
Error: expected DNS record to not already be present but already exists
with cloudflare_record.site_cname ...
with cloudflare_record.www
I have two resources in my main.tf, one for the site domain and one for www, like the following:
resource "cloudflare_record" "site_cname" {
zone_id = data.cloudflare_zones.domain.zones[0].id
name = var.site_domain
value = aws_s3_bucket.site.website_endpoint
type = "CNAME"
ttl = 1
proxied = true
}
resource "cloudflare_record" "www" {
zone_id = data.cloudflare_zones.domain.zones[0].id
name = "www"
value = var.site_domain
type = "CNAME"
ttl = 1
proxied = true
}
If I remove these lines of code from my main.tf and then run terraform apply locally, I get a warning that this will destroy my resources.
Which should I do?
add allow_overwrite somewhere? (I don't see examples of how to use this in the docs, and the ways I've tried to add it generated errors)
remove the lines of code from main.tf, knowing the GitHub Actions run will destroy my cloudflare_record.www and cloudflare_record.site_cname, and that I can see my zone ID and CNAME if I log into Cloudflare, so maybe this code isn't necessary after the initial setup
run terraform import somewhere? If so, where do I find the zone ID and record ID?
or something else?
Where is your Terraform state? Did you store it locally or in a remote location? That would explain why you don't have any problems locally and why it's trying to recreate the resources in GitHub Actions.
More information about Terraform backends (where the state is stored): https://www.terraform.io/docs/language/settings/backends/index.html
And how to create one with S3, for example:
https://www.terraform.io/docs/language/settings/backends/s3.html
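For example, a minimal S3 backend block (bucket and key are placeholders):
terraform {
  backend "s3" {
    bucket = "my-terraform-state" # placeholder bucket name
    key    = "static-site/terraform.tfstate"
    region = "us-east-1"
  }
}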
It wouldn't be a problem if Terraform dropped and re-created the DNS records, but for best results you need to ensure that GitHub Actions has access to the (current) workspace state.
Since Terraform Cloud provides a free plan, there is no reason not to take advantage of it. Just create a workspace through their dashboard, add a "remote" backend configuration to your project, and ensure that GitHub Actions uses a Terraform API token at runtime (you would set it via GitHub repository settings > Secrets).
You may want to check this example — Terraform Starter Kit
infra/backend.tf
infra/dns-records.tf
scripts/tf.js
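A minimal "remote" backend block for Terraform Cloud might look like this (organization and workspace names are placeholders):
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "my-org" # placeholder

    workspaces {
      name = "static-site" # placeholder
    }
  }
}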
Here is how you can pass the Terraform API token from the secrets.TERRAFORM_API_TOKEN GitHub secret to the Terraform CLI:
- env: { TERRAFORM_API_TOKEN: "${{ secrets.TERRAFORM_API_TOKEN }}" }
  run: |
    echo "credentials \"app.terraform.io\" { token = \"$TERRAFORM_API_TOKEN\" }" > ./.terraformrc

Create azurerm_sql_firewall_rule for an azurerm_app_service_plan in Terraform

I want to whitelist the IP addresses of an App Service Plan on a managed SQL Server.
The problem is that the azurerm_app_service_plan resource exposes its IP addresses as a comma-separated value in the possible_outbound_ip_addresses attribute.
I need to create one azurerm_sql_firewall_rule for each of these IPs.
If I try the following approach, Terraform raises an error:
locals {
  staging_app_service_ip = {
    for v in split(",", azurerm_function_app.prs.possible_outbound_ip_addresses) : v => v
  }
}

resource "azurerm_sql_firewall_rule" "example" {
  for_each            = local.staging_app_service_ip
  name                = "my_rules_${each.value}"
  resource_group_name = data.azurerm_resource_group.example.name
  server_name         = var.MY_SERVER_NAME
  start_ip_address    = each.value
  end_ip_address      = each.value
}
I then get the error:
The "for_each" value depends on resource attributes that cannot be
determined until apply, so Terraform cannot predict how many instances
will be created. To work around this, use the -target argument to
first apply only the resources that the for_each depends on.
I'm not sure how to work around this.
For the time being, I have added the IP addresses as a variable and am setting its value manually.
What would be the correct approach to create these firewall rules?
I'm trying to deal with the same issue. My way around it is to perform a multi-step setup.
In the first step, I run a Terraform configuration that creates the database, app service, API management, and some other resources. Next, I deploy the app. Lastly, I run Terraform again, but this time the second configuration creates the SQL firewall rules and the API Management API from the deployed app's Swagger definition.
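With a single configuration, a similar effect can be had with the -target workaround that the error message itself suggests; a sketch (the resource address matches the question's code):
# First apply only the resource the for_each depends on...
terraform apply -target=azurerm_function_app.prs
# ...then apply the rest, once the outbound IPs are known.
terraform apply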

Adding tags to child resources created by terraform

Terraform v0.11.9
+ provider.aws v1.41.0
I want to know if there is a way to update a resource that is not directly created in the plan, but by a resource in the plan. For example, creating a managed Active Directory with aws_directory_service_directory also creates a security group, and I want to add tags to that security group. Here is the snippet I'm using to create the resource:
resource "aws_directory_service_directory" "NewDS" {
name = "${local.DSFQDN}"
password = "${var.ADPassword}"
size = "Large"
type = "MicrosoftAD"
short_name = "${local.DSShortName}"
vpc_settings {
vpc_id = "${aws_vpc.this.id}"
subnet_ids = ["${aws_subnet.private.0.id}",
"${aws_subnet.private.1.id}",
]
}
tags = "${merge(var.tags, var.ds_tags, map("Name", format("%s", local.VPCname)))}"
}
I can reference the newly created security group using "${aws_directory_service_directory.NewDS.security_group_id}", but I can't use that to update the resource. I want to add all of the tags I have on the directory to the security group, as well as update the Name tag. I've tried using a local-exec provisioner, but the results have not been consistent, and getting the map of tags into the command without hard-coding it has not worked.
Thanks
I moved the local-exec provisioner out of the directory service resource and into a dummy resource.
resource "null_resource" "ManagedADTags"
{
provisioner "local-exec"
{
command = "aws --profile ${var.profile} --region ${var.region} ec2 create-tags --
resources ${aws_directory_service_directory.NewDS.security_group_id} --tags
Key=Name,Value=${format("${local.security_group_prefix}-%s","ManagedAD")}"
}
}
(The command = is a single line)
Using the format function allowed me to send the entire list of tags to the resource. Terraform doesn't "manage" it, but this does allow me to update it as part of the plan.
You can also leverage the aws_ec2_tag resource, which works on non-EC2 resources as well, in conjunction with the provider attribute ignore_tags. Please refer to another answer I made on the topic for more detail.
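A hedged sketch using aws_ec2_tag (this resource requires a much newer AWS provider than the v1.41.0 shown in the question; the tag value is illustrative):
resource "aws_ec2_tag" "managed_ad_sg_name" {
  resource_id = aws_directory_service_directory.NewDS.security_group_id
  key         = "Name"
  value       = "ManagedAD" # hypothetical value
}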
AWS already exposes an API for that, where you can tag multiple resources at once rather than just one resource; I'm not sure why Terraform doesn't implement that.
Just hit this as well. It turns out the tags propagate from the directory service, so if you tag your directory appropriately, the Name tag from your directory service will be applied to the security group.

Get terraform to ignore "associate_public_ip_address" status for stopped instance

I have a simple AWS deployment with a VPC, public subnet, route, and security group. Running terraform apply will launch an AWS instance, and I have that instance configured to associate a public IP. After the instance has been created, I run terraform plan and it properly says everything is up to date. No problems so far.
We have a management node that will shut down that instance if it's unused for a period of time as a cost saving measure.
Here's the problem: once that instance is shut down and I run terraform plan, the AWS provider sees everything configured properly, but since the public IP has been released, the value of associate_public_ip_address no longer matches what is configured in the Terraform configs, so Terraform wants to delete and recreate that instance:
associate_public_ip_address: "false" => "true" (forces new resource)
Is there a way to get terraform to ignore just that one parameter?
This question is marginally related to https://github.com/hashicorp/terraform/issues/7262. But in my case, I don't want to set the expected state; I just want to be able to tell Terraform to ignore that one parameter, because it's OK that the IP is not associated right now, as long as it's configured to be associated when the instance starts.
(This occurred to me while writing this question: I have not experimented with configuring the subnet to automatically associate public ip for instances launched in it. Conceivably, by making the subnet automatically do it, and removing the option from "aws_instance", I might be able to make terraform not pay attention to that value...but I doubt it.)
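(For reference, the subnet-level setting mentioned in that parenthetical is map_public_ip_on_launch; a minimal sketch, untested for this scenario as noted:)
resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id # hypothetical VPC reference
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
}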
You can use a lifecycle block to ignore certain attribute changes.
Using this, the resource is initially created using the provided value for that attribute. Upon a subsequent plan, apply, etc., Terraform will ignore changes to that attribute.
If we add an ignore for associate_public_ip_address in the lifecycle block, a stopped instance will no longer trigger a new resource.
Note that if you alter any other parameter that would require a new instance, the stopped one will be terminated and replaced.
Example based on the Terraform aws_instance example code:
provider "aws" {
region = "us-west-2"
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"] # Canonical account ID
}
resource "aws_instance" "web" {
ami = "${data.aws_ami.ubuntu.id}"
instance_type = "t2.micro"
associate_public_ip_address = "true"
tags {
Name = "HelloWorld"
}
lifecycle {
ignore_changes = ["associate_public_ip_address"]
}
}
