I have a Terraform script that creates backend address pools and load balancer rules on a load balancer in a resource group. These tasks are included in an Azure Pipeline. The first time I run the pipeline, everything is created properly. If I run the pipeline a second time, it does not update the existing resources: it keeps the backend address pools and load balancer rules created by the previous release and adds extra ones for this release, which causes duplicates. Any suggestions on this, please?
resource "azurerm_lb_backend_address_pool" "example" {
resource_group_name = azurerm_resource_group.example.name
loadbalancer_id = azurerm_lb.example.id
name = "BackEndAddressPool"
}
resource "azurerm_lb_rule" "example" {
resource_group_name = azurerm_resource_group.example.name
loadbalancer_id = azurerm_lb.example.id
name = "LBRule"
protocol = "All"
frontend_port = 0
backend_port = 0
frontend_ip_configuration_name = "PublicIPAddress"
enable_floating_ip = true
backend_address_pool_id = azurerm_lb_backend_address_pool.example
}
This is likely happening because the Terraform state file is being lost between pipeline runs.
By default, Terraform stores state locally in a file named terraform.tfstate. When working with Terraform in a team, use of a local file makes Terraform usage complicated because each user must make sure they always have the latest state data before running Terraform and make sure that nobody else runs Terraform at the same time.
With remote state, Terraform writes the state data to a remote data store, which can then be shared between all members of a team. Terraform supports storing state in Terraform Cloud, HashiCorp Consul, Amazon S3, Alibaba Cloud OSS, and more.
Remote state is a feature of backends. Configuring and using remote backends is easy and you can get started with remote state quickly. If you then want to migrate back to using local state, backends make that easy as well.
You will want to configure Remote State storage to keep the state. Here is an example using Azure Blob Storage:
terraform {
  backend "azurerm" {
    resource_group_name  = "StorageAccount-ResourceGroup"
    storage_account_name = "abcd1234"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}
This stores the state as a Blob with the given Key within the Blob Container within the Blob Storage Account. This backend also supports state locking and consistency checking via the native capabilities of Azure Blob Storage.
This is more completely described in the azurerm Terraform backend docs.
Microsoft also provides a Tutorial: Store Terraform state in Azure Storage, which goes through the setup step by step.
Related
When creating an App Service Plan on my new-ish (4-day-old) subscription using Terraform, I immediately get a throttling error:
App Service Plan Create operation is throttled for subscription <subscription>. Please contact support if issue persists
The thing is, when I then go to the UI and create an identical service plan, I receive no errors and it creates without issue, so it seems there is no actual throttling issue with creating the app service plan, since I can make it manually.
I'm wondering if anyone knows why this is occurring?
NOTE
I've gotten around this issue by just creating the resource in the UI and then importing it into my TF state... but since the main point of IaC is automation, I'd like to ensure that this unusual behavior does not persist when I go to create new environments.
EDIT
My code is as follows:
resource "azurerm_resource_group" "frontend_rg" {
name = "${var.env}-${var.abbr}-frontend"
location = var.location
}
resource "azurerm_service_plan" "frontend_sp" {
name = "${var.env}-${var.abbr}-sp"
resource_group_name = azurerm_resource_group.frontend_rg.name
location = azurerm_resource_group.frontend_rg.location
os_type = "Linux"
sku_name = "B1"
}
EDIT 2
terraform {
  backend "azurerm" {}
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.15.0"
    }
  }
}
How to disable public access to an Azure storage account but keep it accessible from Cloud Shell.
What I have, and what works:
An Azure storage account that contains "terraform.tfstate", with public access enabled
A main.tf file in my Azure Cloud Shell with a "backend" config for the remote state file
terraform {
  required_version = ">= 1.2.4"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.98.0"
    }
  }
  # To store the state in a storage account
  # Benefit: the team shares one state, and the state is not lost if the local shell is destroyed
  backend "azurerm" {
    resource_group_name  = "RG-Telco-tf-statefiles"
    storage_account_name = "telcostatefiles"
    container_name       = "tf-statefile-app-1"
    key                  = "terraform.tfstate"
  }
}
This works perfectly.
But if I restrict public access on the storage account, my Azure Cloud Shell no longer has permission to access the state file.
How can I make this work, and what are the security best practices in this case?
I think this is what you need.
After you set this, you can create a network restriction rule and allow the Cloud Shell virtual network.
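As a minimal sketch, assuming the state storage account and the subnet used by Cloud Shell are managed in the same configuration (azurerm_storage_account.state and azurerm_subnet.cloudshell are hypothetical names), the restriction could look like this:
resource "azurerm_storage_account_network_rules" "state" {
  # Hypothetical references: the storage account and subnet are assumed
  # to be defined elsewhere in the configuration.
  storage_account_id = azurerm_storage_account.state.id
  default_action     = "Deny"
  bypass             = ["AzureServices"]
  # Only the subnet that Cloud Shell is attached to may reach the account.
  virtual_network_subnet_ids = [azurerm_subnet.cloudshell.id]
}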
Some other best practices:
The storage account that stores the state file should be in a separate resource group and have a delete lock on it (see the sketch after this list).
One SAS token per user, renewed every six months, with a scope at the folder level; one container per project and per environment.
Storage with redundancy in a paired region, for read access in case of issues.
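For the delete lock, a minimal sketch (azurerm_resource_group.state is a hypothetical reference to the state resource group) could be:
# CanNotDelete still allows reading and writing the state file,
# but blocks deletion of anything in the resource group.
resource "azurerm_management_lock" "state_rg" {
  name       = "tfstate-delete-lock"
  scope      = azurerm_resource_group.state.id
  lock_level = "CanNotDelete"
  notes      = "Protects the Terraform state storage account"
}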
I want to whitelist the outbound IP addresses of an App Service on a managed SQL Server.
The problem is that the resource azurerm_function_app exposes its IP addresses as a comma-separated value, in the attribute possible_outbound_ip_addresses.
I need to create one azurerm_sql_firewall_rule for each of these IPs.
If I try the following approach, Terraform raises an error:
locals {
  staging_app_service_ip = {
    for v in split(",", azurerm_function_app.prs.possible_outbound_ip_addresses) : v => v
  }
}
resource "azurerm_sql_firewall_rule" "example" {
  for_each            = local.staging_app_service_ip
  name                = "my_rules_${each.value}"
  resource_group_name = data.azurerm_resource_group.example.name
  server_name         = var.MY_SERVER_NAME
  start_ip_address    = each.value
  end_ip_address      = each.value
}
I then get the error:
The "for_each" value depends on resource attributes that cannot be
determined until apply, so Terraform cannot predict how many instances
will be created. To work around this, use the -target argument to
first apply only the resources that the for_each depends on.
I'm not sure how to work around this.
For the time being, I have added the IP addresses as a variable and am setting its value manually.
What would be the correct approach to create these firewall rules?
I'm trying to deal with the same issue. My way around it is a multi-step setup.
In the first step, I run a Terraform configuration that creates the database, app service, API Management, and some other resources. Next, I deploy the app. Lastly, I run Terraform again, but this time a second configuration creates the SQL firewall rules and the API Management API from the deployed app's Swagger definition.
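As a rough sketch of the second-stage configuration (the names are illustrative, and this assumes the azurerm 2.x azurerm_function_app data source): because the function app already exists by the time this configuration runs, its outbound IPs are known at plan time, so for_each can evaluate them.
# The function app was created by the first-stage configuration, so its
# attributes are already-known values when this configuration plans.
data "azurerm_function_app" "prs" {
  name                = "prs-function-app"
  resource_group_name = data.azurerm_resource_group.example.name
}
locals {
  staging_app_service_ip = toset(split(",", data.azurerm_function_app.prs.possible_outbound_ip_addresses))
}
resource "azurerm_sql_firewall_rule" "example" {
  for_each            = local.staging_app_service_ip
  name                = "my_rules_${each.value}"
  resource_group_name = data.azurerm_resource_group.example.name
  server_name         = var.MY_SERVER_NAME
  start_ip_address    = each.value
  end_ip_address      = each.value
}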
I am trying to create a web app inside an ASE ILB using the following configuration:
resource "azurerm_app_service_plan" "app_plan" {
  name                = var.app_plan_name
  resource_group_name = var.resource_group_name
  location            = var.location
  kind                = "Windows"
  sku {
    tier = "Isolated"
    size = "I1"
    # capacity is required even though the documentation does not mention it; it is the number-of-workers field
    capacity = "1"
  }
  app_service_environment_id = var.app_service_environment_id
}
resource "azurerm_app_service" "app_service" {
  name                = var.app_service_name
  resource_group_name = var.resource_group_name
  location            = var.location
  app_service_plan_id = azurerm_app_service_plan.app_plan.id
}
resource "azurerm_app_service_custom_hostname_binding" "custom_name" {
  hostname            = var.custom_hostname
  app_service_name    = azurerm_app_service.app_service.name
  resource_group_name = var.resource_group_name
}
Everything works fine as long as I provide a unique name for the web app. I understand that the web app name has to be globally unique.
However, if we create a web app inside an ASE ILB using the portal, there is an option called Region, and if we specify the ASE ILB in that attribute, the name is not checked for global uniqueness; instead it is checked for uniqueness inside the ASE ILB.
If I try to mimic the same behavior using Terraform by providing the ASE ILB in the location attribute, it errors. What is the way to address the Region field of an Azure web app using Terraform?
One more thing I have observed: let's say I create a web app named dev1 inside the ASE ILB. The first time, it is created, since uniqueness is checked only inside the ASE ILB. When I try to create dev1 again inside the same ASE ILB, it should ideally warn that the resource already exists. Instead, it reports that the resource was created successfully, yet there are not two web apps named dev1 inside the ASE ILB. I think this is some kind of error-reporting issue in Azure Resource Manager, because it should either warn that the resource already exists or show two web apps named dev1.
Each app resolves using the ASE internal domain name, ending with the suffix "appserviceenvironment.net". Therefore, your app name must be unique on each ASE you intend to deploy your apps to. From what I recall, an error is thrown when trying to deploy apps using the same name. Does that help?
My use-case is multiple AppService apps with different lifecycles sitting behind a single Application Gateway. I'd like to add a new listener, new multi-site routing rules, and a new backend pool whenever I add a new app without tearing down and re-creating the gateway.
Initially, my plan was to have a Terraform config for shared infra that creates a skeleton Application Gateway, and then have separate application-specific Terraform configs that add listeners, backend address pools, and routing rules to this gateway for each app. It seems to be impossible to accomplish with TF, though.
I can clearly add listeners, routing rules, and backend pools to an existing gateway using the Azure CLI or the Portal. Is there a way to do it with Terraform?
It seems that this is not currently possible, because the Application Gateway must be initialised with at least one of each of these configuration blocks.
While it is possible to add further definitions using the Azure CLI, that behaviour isn't currently compatible with the way Terraform works. Consider what would happen if backend address pools were initially defined inline as part of the azurerm_application_gateway block and then further definitions of azurerm_application_gateway_backend_address_pool (hypothetical resource block) were also specified.
It would be nice if Terraform could deal with this situation with a union of those two definitions but unfortunately it doesn't play nicely with both inline and standalone resource blocks. Hence the warning on azurerm_subnet resources explaining that inline subnets on azurerm_virtual_network would conflict.
NOTE on Virtual Networks and Subnet's:
Terraform currently provides both a standalone Subnet resource, and allows for Subnets to be defined in-line within the Virtual Network resource. At this time you cannot use a Virtual Network with in-line Subnets in conjunction with any Subnet resources. Doing so will cause a conflict of Subnet configurations and will overwrite Subnet's.
Logically, it wouldn't be possible to have a similar warning for Application Gateway, since its inline resource blocks are mandatory (not so for Azure Virtual Networks).
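To illustrate the conflict the note describes, here is a rough sketch (names invented for the example, assuming a 2.x/3.x provider) of the pattern that overwrites itself:
# Both of these try to own the subnet list of the same virtual network.
resource "azurerm_virtual_network" "example" {
  name                = "example-vnet"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  address_space       = ["10.0.0.0/16"]
  # Inline definition
  subnet {
    name           = "inline-subnet"
    address_prefix = "10.0.1.0/24"
  }
}
# Standalone definition: the next apply of the virtual network
# overwrites it with the inline subnet list above.
resource "azurerm_subnet" "standalone" {
  name                 = "standalone-subnet"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.0.2.0/24"]
}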
For now, the options here would seem to be:
Manage all application-specific aspects of the Application Gateway in the same place with native Terraform.
Create the skeleton definition of the Application Gateway and run local-exec provisioner CLI commands for the application-specific configuration:
provisioner "local-exec" {
command = <<EOT
az network application-gateway address-pool create `
--resource-group MyResourceGroup `
--gateway-name MyAppGateway `
--name MyAddressPool `
--servers 10.0.0.4 10.0.0.5 `
EOT
interpreter = ["PowerShell", "-command"]
}
Here is the reference doc from Terraform for managing Azure Application Gateway.
You can refer to this sample code for adding new listeners, routing rules, and backend pools to an existing application gateway. This template carries all the required arguments, such as:
http_listener - (Required) One or more http_listener blocks.
http_listener {
  name                           = "https-listener-1"
  frontend_ip_configuration_name = "feip"
  frontend_port_name             = "http-port"
  protocol                       = "Http"
}
request_routing_rule - (Required) One or more request_routing_rule blocks.
request_routing_rule {
  name                       = "${local.request_routing_rule_name}"
  rule_type                  = "Basic"
  http_listener_name         = "${local.listener_name}"
  backend_address_pool_name  = "${local.backend_address_pool_name}"
  backend_http_settings_name = "${local.http_setting_name}"
}
backend_address_pool - (Required) One or more backend_address_pool blocks as defined below.
backend_address_pool {
  name = "${local.backend_address_pool_name}"
}