Get inbound IP address from azurerm_windows_web_app in Terraform - Azure

I want to create an azurerm_mssql_firewall_rule resource which allows the IP of an Azure Windows web app.
When I write it manually it works fine, e.g.:
resource "azurerm_mssql_firewall_rule" "api_cloud" {
name = var.cloud_firewal_rule_name
server_id = azurerm_mssql_server.api.id
start_ip_address = "00.000.000.00"
end_ip_address = "00.000.000.00"
}
I want to get the IP address like this: start_ip_address/end_ip_address = azurerm_windows_web_app.api.inbound_ip_address.
But there is no inbound attribute on azurerm_windows_web_app; I can only access the outbound addresses via azurerm_windows_web_app.api.outbound_ip_addresses.
Is there any way to do something like this?
IN SHORT:
How do I get this IP address with Terraform?

Yes, there is, but it's a bit complicated due to the way Terraform works. I used a Linux App Service in my examples, but it should work identically for both Windows and Linux versions. Let's go:
Things are complicated by the fact that App Services have quite a large range of possible outbound IP addresses, as they run on shared infrastructure. The attribute therefore returns a list of unknown length, which makes things awkward for Terraform. As an example, this is how you would usually iterate through multiple items in Terraform using for_each:
resource "azurerm_mssql_firewall_rule" "example" {
for_each = toset(azurerm_linux_web_app.api_app.outbound_ip_address_list)
name = "FirewallRule"
server_id = azurerm_mssql_server.example.id
start_ip_address = each.key
end_ip_address = each.key
}
In this snippet, you take the list of outbound IP addresses from the App Service, convert it to a set, and then iterate over it. However, this only works if the App Service already exists - if you are starting from a clean slate, you will get the following error:
azurerm_linux_web_app.api_app.outbound_ip_address_list is a list of string, known only after apply

The "for_each" map includes keys derived from resource attributes that cannot be determined until apply, and so Terraform cannot determine the full set of keys that will identify the instances of this resource.

When working with unknown values in for_each, it's better to define the map keys statically in your configuration and place apply-time results only in the map values.

Alternatively, you could use the -target planning option to first apply only the resources that the for_each value depends on, and then apply a second time to fully converge.
Luckily Terraform gives a quite helpful error message, which tells us how to work around the problem. Using the -target parameter, we can first create just the App Service:
terraform apply -target=azurerm_linux_web_app.api_app
This should create only the App Service and the dependencies it requires. Afterwards, we can run Terraform normally and it should work as desired without any errors. It's not very pretty, but there is currently no better way of achieving exactly what you want.
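For reference, the full two-step workflow then looks like this:

# Step 1: create only the App Service and its dependencies
terraform apply -target=azurerm_linux_web_app.api_app

# Step 2: run a normal apply; the outbound IP list is now known,
# so the for_each over the firewall rules can be expanded
terraform apply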

Related

Azure - Create Function App hostkey with Terraform azapi/bicep/powershell

I'm working on automating the rotation of my Azure Function App's host key, which is used to maintain a more secure connection between my API Management instance and my function apps. The issue is that I cannot figure out how to accomplish this, given the lack of clear documentation. I found a document on how to create a key for a specific function within the function app, but not for the host level. I've tried using the web UI resource manager to figure out what the proper values are, but host seems to have no values available via a GET request that would show me what the formatting needs to be. In fact, I can't find any reference to my function app's host keys anywhere in the resource manager UI (although of course I can in the portal).
I don't care if it's PowerShell, Bicep, ARM, Terraform azapi, whatever; I'd just like to find a way to create a new host key so that I can control its rotation with Terraform. Does anyone know how to accomplish this?
Right now my attempt looks like this:
resource "azapi_resource" "function_host_key" {
type = "Microsoft.Web/sites/host/functionkeys#2018-11-01"
name = "${azurerm_windows_function_app.api_function.name}-host-key"
parent_id = "${azurerm_windows_function_app.api_function.id}/host"
body = jsonencode({
properties = {
name = "test-key-terraform"
value = "asdfasdfasdfasdfasdfasdfasdf"
}
})
}
I also tried
resource "azapi_resource" "function_host_key" {
type = "Microsoft.Web/sites#2018-11-01"
name = "${azurerm_windows_function_app.api_function.name}-host-key"
parent_id = "${azurerm_windows_function_app.api_function.id}/functionsAppKeys"
location = var.region
}
since it said the body was invalid, but this also throws an error because there is no body. I'm wondering if this just isn't possible.
I also just tried
resource "azapi_resource" "function_host_key" {
type = "Microsoft.Web/host/functionkeys#2018-11-01"
name = "${azurerm_windows_function_app.api_function.name}-host-key"
parent_id = "${azurerm_windows_function_app.api_function.id}/host"
location = var.region
}
and the result said that it was expecting
parent_id of `parent_id is invalid`: expect ID of `Microsoft.Web/host`
so I'm not sure what that parent_id should be.
I found an example using a bash/PowerShell script that calls the Azure REST API, but I get a 403 error when I attempt it. I can only assume that's because my function app is secured, but I'm not sure of a good way to determine that.
There must be a way to create a key programmatically...
UPDATE
I believe this has been purposely made impossible to do with Terraform now, and I need to, as gross and backwards as it may be, use a CLI command in my pipeline. I understand you can do this, but my opinion is that if I am using Terraform, I should have Terraform manage things, not have random CLI commands outside of Terraform doing work that TF should be able to manage.
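If I do end up wrapping the CLI call, a rough sketch of what I have in mind looks like this (the key name, the variable names, and the random_password value are placeholders, and it assumes the az CLI is already authenticated wherever Terraform runs):

# Generate the key value inside Terraform so rotation is TF-driven.
resource "random_password" "host_key" {
  length  = 32
  special = false
}

resource "null_resource" "set_host_key" {
  # Re-run the CLI call whenever the generated key value changes.
  triggers = {
    key_value = random_password.host_key.result
  }

  provisioner "local-exec" {
    command = <<-EOT
      az functionapp keys set \
        --resource-group ${var.resource_group_name} \
        --name ${azurerm_windows_function_app.api_function.name} \
        --key-type functionKeys \
        --key-name terraform-host-key \
        --key-value ${random_password.host_key.result}
    EOT
  }
}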
I created a key using az functionapp keys set and that worked, and the output explicitly stated that the type of resource created was Microsoft.Web/sites/host/functionKeys. So I went to the Azure Resource Explorer to see what API versions were available for this type, since it clearly exists... and found that, nope, Azure does not have it listed.
What confuses me is that I see this being done with ARM templates, and I believe my code matches theirs, except that I'm using azapi... and I get a not-found error. Giving up for now.

Terraform provisioner trigger only for new instances / only run once

I have conditional provisioning steps that I want to run for all compute instances created, but only run once.
I know I can put the provisioning within the compute resource, but then it cannot be conditional.
If I put it in a null_resource, I need a trigger, and I don't know how to trigger on only the newly created resources (i.e. if I already have 1 instance and want to scale to 2, I want to run provisioning only on the 2nd instance being created, not again on the 1st, which is already provisioned).
How can I get a variable that gives me only the ID or IP of the instance just created, as opposed to all of them?
Below is an example of the provisioner.
resource "null_resource" "provisioning" {
count = var.condition ? length(var.instance_ips) : 0
triggers = {
instance_ids = join(",", var.instance_ips)
}
connection {
agent = false
timeout = "4m"
host = var.instance_ips[count.index]
user = "user"
private_key = var.ssh_private_key
}
provisioner "remote-exec" {
inline = [ do something, then remove the public key from authorized_keys ]
}
}
PS: the reason I can only run it once (as opposed to running it again and doing nothing if already provisioned) is that I want to destroy the provisioning public key after I'm done. Since it uses a TF-generated key pair and the private key ends up in the state file, I want to make sure that someone who gets access to the key pair still cannot access the instance.
Once the public key is removed from authorized_keys, a provisioner running a second time will just fail to connect, time out, and fail.
I found that I can use the on_failure = continue setting, but then it would also continue when it fails for legitimate reasons.
I could also use a key pair that is generated locally with a local-exec provisioner so it doesn't show up in the state file, but then the key is a file, which is not much different if someone gets access to it; the file needs to stay on the machine, which may not work well with a cloud resource manager environment that is recreated on an as-needed basis.
And I'm sure there are other ways to provision a file or script, but in this case it contains instance dependency data generated by TF that I don't want left in cloud-init.
So I come down to needing to figure out a way to use a trigger that only contains the new instance(s).
Any ideas on how to do this?
https://www.terraform.io/docs/provisioners/
This documentation lists provisioners as a last resort and provides some suggestions on how to avoid having to use them for various common resources.
Execute the script from user_data, which is specifically designed for run-once provisioning actions. Since user_data supports all regular Terraform interpolation, you can use that to pass environment variables or selectively include/exclude parts of a script if you need conditional logic.
The downside is that any change in user_data results in recreating the instances, or creating a new launch configuration/template.
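For example, a minimal sketch of that approach (the resource, variables, and template file below are hypothetical):

resource "aws_instance" "app" {
  count         = var.instance_count
  ami           = var.ami_id
  instance_type = "t3.micro"

  # Rendered per instance and executed by cloud-init on first boot only.
  user_data = templatefile("${path.module}/provision.sh.tpl", {
    run_extra_step = var.condition   # toggles optional parts of the script
    instance_index = count.index     # per-instance data generated by Terraform
  })
}

Inside the template, %{ if run_extra_step } ... %{ endif } directives can then include or exclude the conditional steps.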

terraform lifecycle prevent destroy

I am working with Terraform V11 and the AWS provider; I am looking for a way to prevent a few resources from being destroyed during the destroy phase, so I used the following approach.
lifecycle {
  prevent_destroy = true
}
When I run a "terraform plan" I get the following error.
the plan would destroy this resource, but it currently has lifecycle.prevent_destroy set to true. To avoid this error and continue with the plan, either disable it or adjust the scope.
All that I am looking for is a way to avoid destroying one of the resources and its dependencies during the destroy command.
AFAIK this feature is not yet supported.
You need to remove that resource from the state file and then re-import it:
terraform plan | grep <resource> | grep id
terraform state rm <resource>
terraform destroy
terraform import <resource> <ID>
The easiest way to do this would be to comment out all of the resources that you want to destroy and then do a terraform apply.
I've found the most practical way to manage this is through a combination of a variable that allows the resource in question to be conditionally created (or not) via the use of count, alongside having all other resources depend on the associated data source instead of the conditionally created resource.
A good example of this is a Route 53 hosted zone, which can be a pain to destroy and recreate if you manage your domain outside of AWS and need to update your nameservers, waiting for DNS propagation each time you spin it up.
1. By specifying some variable
variable "should_create_r53_hosted_zone" {
type = bool
description = "Determines whether or not a new hosted zone should be created on apply."
}
2. you can use it alongside count on the resource to conditionally create it.
resource "aws_route53_zone" "new" {
count = var.should_create_r53_hosted_zone ? 1 : 0
name = "my.domain.com"
}
3. Then, by following up with a call to the associated Data Source
data "aws_route53_zone" "existing" {
name = "my.domain.com"
depends_on = [
aws_route53_zone.new
]
}
4. you can give all other resources consistent access to the resource's attributes regardless of whether or not your flag has been set.
resource "aws_route53_record" "rds_reader_endpoint" {
zone_id = data.aws_route53_zone.existing.zone_id
# ...
}
This approach is only slightly better than commenting/uncommenting resources during apply, but it at least gives a consistent, documented way of working around the problem.

How to manage terraform for multiple repos

I have 2 repos for my project: a static website and a server. I want the website to be hosted by CloudFront and S3, and the server on Elastic Beanstalk. I know these resources will need to know about a Route 53 resource at least, so they can live under the same domain name for CORS to work, among other things such as VPCs and so on.
So my question is: how do I manage Terraform with multiple repos?
I'm thinking I could have a separate infrastructure repo that builds for all repos.
I could also keep them separate and pass in the ARNs/names/IDs as variables (annoying).
You can use terraform_remote_state for this. It lets you read the output variables from another Terraform state file.
Let's assume you save your state files remotely on S3 and you have website.tfstate and server.tfstate files. You could output the hosted zone ID of your Route 53 zone as hosted_zone_id in website.tfstate and then reference that output variable directly in your server state's Terraform code.
data "terraform_remote_state" "website" {
backend = "s3"
config {
bucket = "<website_state_bucket>"
region = "<website_bucket_region>"
key = "website.tfstate"
}
}
resource "aws_route53_record" "www" {
zone_id = "${data.terraform_remote_state.website.hosted_zone_id}"
name = "www.example.com"
type = "A"
ttl = "300"
records = ["${aws_eip.lb.public_ip}"]
}
Note that you can only read output variables from remote states. You cannot access resources directly, as Terraform treats other states/modules as black boxes.
Update
As mentioned in the comments, terraform_remote_state is a simple way to share explicitly published variables across multiple states. However, it comes with 2 issues:
Close coupling between code components, i.e., the producer of the variable cannot change easily.
It can only be used by Terraform, i.e., you cannot easily share those variables across different layers. Configuration tools such as Ansible cannot read .tfstate natively without an additional custom plugin/wrapper.
The recommended HashiCorp way is to use a central config store such as Consul. It comes with more benefits:
The consumer is decoupled from the variable producer.
Explicit publishing of variables (as with terraform_remote_state).
It can be used by other tools.
A more detailed explanation can be found here.
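As a rough illustration of that pattern (the key paths and resource names below are made up), the producing and consuming configurations could look something like this:

# Producer configuration: publish the hosted zone ID to Consul's KV store.
resource "consul_keys" "website" {
  key {
    path  = "terraform/website/hosted_zone_id"
    value = "${aws_route53_zone.main.zone_id}"
  }
}

# Consumer configuration: read the value back.
data "consul_keys" "website" {
  key {
    name = "hosted_zone_id"
    path = "terraform/website/hosted_zone_id"
  }
}

# Referenced elsewhere as ${data.consul_keys.website.var.hosted_zone_id}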
An approach I've used in the past is to have a single repo for all of the infrastructure.
An alternative is to have 2 separate tf configurations, each using remote state. Config 1 can use output variables to store any ARNs/IDs as necessary.
Config 2 can then have a remote_state data source to query for the relevant ARNs/IDs.
E.g.
# Declare remote state
data "terraform_remote_state" "network" {
  backend = "s3"
  config {
    bucket = "my-terraform-state"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}
You can then reference the output values using standard interpolation syntax:
${data.terraform_remote_state.network.some_id}

How to use multiple Terraform Providers sequentially

How can I get Terraform 0.10.1 to support two different providers without having to run 'terraform init' every time for each provider?
I am trying to use Terraform to
1) Provision an API server with the 'DigitalOcean' provider
2) Subsequently use the 'Docker' provider to spin up my containers
Any suggestions? Do I need to write an orchestrating script that wraps Terraform?
Terraform's current design struggles with creating "multi-layer" architectures in a single configuration, due to the need to pass dynamic settings from one provider to another:
resource "digitalocean_droplet" "example" {
# (settings for a machine running docker)
}
provider "docker" {
host = "tcp://${digitalocean_droplet.example.ipv4_address_private}:2376/"
}
As you saw in the documentation, passing dynamic values into provider configuration doesn't fully work. It does actually partially work if you use it with care, so one way to get this done is to use a config like the above and then solve the "chicken-and-egg" problem by forcing Terraform to create the droplet first:
$ terraform plan -out=tfplan -target=digitalocean_droplet.example
The above will create a plan that deals only with the droplet and any of its dependencies, ignoring the Docker resources. Once that plan has been applied and the droplet is up and running, you can re-run Terraform as normal to complete the setup, which should then work as expected because the droplet's ipv4_address_private attribute will be known. As long as the droplet is never replaced, Terraform can be used as normal after this.
Using -target is fiddly, and so the current recommendation is to split such systems up into multiple configurations, with one for each conceptual "layer". This does, however, require initializing two separate working directories, which you indicated in your question that you didn't want to do. This -target trick allows you to get it done within a single configuration, at the expense of an unconventional workflow to get it initially bootstrapped.
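Putting the whole bootstrap sequence together, the workflow looks roughly like this:

# Create just the droplet (and its dependencies) first:
terraform plan -out=tfplan -target=digitalocean_droplet.example
terraform apply tfplan

# Then run Terraform normally; the droplet's IP is now known,
# so the docker provider can be configured and its resources created:
terraform apply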
Maybe you can use provider instances within your resources/modules to set up various resources with various providers.
https://www.terraform.io/docs/configuration/providers.html#multiple-provider-instances
The doc talks about multiple instances of the same provider, but I believe the same should be doable with distinct providers as well.
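For example (the IPs and container details below are just placeholders), multiple configured copies of the docker provider can be distinguished with alias and then selected per resource:

provider "docker" {
  alias = "node_a"
  host  = "tcp://192.168.10.11:2376/"
}

provider "docker" {
  alias = "node_b"
  host  = "tcp://192.168.10.12:2376/"
}

resource "docker_container" "www_a" {
  provider = "docker.node_a"
  name     = "www"
  image    = "nginx:latest"
}

resource "docker_container" "www_b" {
  provider = "docker.node_b"
  name     = "www"
  image    = "nginx:latest"
}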
A little bit late...
Well, I got the same problem. My workaround is to create modules.
First you need a module for your Docker provider with an ip variable:
# File: ./docker/main.tf
variable "ip" {}

provider "docker" {
  host = "tcp://${var.ip}:2375/"
}

resource "docker_container" "www" {
  provider = "docker"
  name     = "www"
  image    = "nginx:latest" # image is required; substitute whatever you actually run
}
Next, load that module in your root configuration:
# File: ./main.tf
module "docker01" {
  source = "./docker"
  ip     = "192.169.10.12"
}

module "docker02" {
  source = "./docker"
  ip     = "192.169.10.13"
}
True, you will create the same container on every node, but in my case that's what I wanted. I currently haven't found a way to give each host an individual configuration. Maybe nested modules, but that didn't work in my first tries.
