Could you please point me in the right direction? I am unable to reference the name of my frontend IP configuration, defined in my load balancer resource, from my load balancer rule resource.
I have read the documentation and tried referencing the frontend IP configuration name in the load balancer rule via the exported frontend_ip_configuration name, in addition to simply copying the name of the public IP into the load balancer rule resource.
Please see the load-balancer.tf config file below:
resource "azurerm_public_ip" "internal_lb_public_ip" {
name = "PublicIPForLB"
location = data.azurerm_resource_group.rg.location
resource_group_name = data.azurerm_resource_group.rg.name
allocation_method = "Static"
sku = "Standard"
}
# internal standard sku load balancer
resource "azurerm_lb" "internal_lb" {
  name                = "myIntLoadBalancer"
  location            = data.azurerm_resource_group.rg.location
  resource_group_name = data.azurerm_resource_group.rg.name
  sku                 = "Standard"

  frontend_ip_configuration {
    name                 = "LoadBalancerFrontEndIP"
    public_ip_address_id = azurerm_public_ip.internal_lb_public_ip.id
  }

  tags = local.common_tags
}
#-------------------------------------------------------------
# backend address pool - defines the group of resources that will serve traffic for a given load-balancing rule.
resource "azurerm_lb_backend_address_pool" "backend_pool" {
  loadbalancer_id = azurerm_lb.internal_lb.id
  name            = "myBackendPool"
}

# addresses of vms placed within backend address pool.
# note: "backend Addresses can only be added to a Standard SKU Load Balancer."
resource "azurerm_lb_backend_address_pool_address" "backend_address" {
  count = 3
  name  = "vmIP${count.index}"

  backend_address_pool_id = azurerm_lb_backend_address_pool.backend_pool.id
  # private ip address taken from the virtual machine network interface associated with the virtual machine
  ip_address         = azurerm_network_interface.vm_nic[count.index].private_ip_address
  virtual_network_id = azurerm_virtual_network.lbvnet.id
}
#-------------------------------------------------------------
# health probe
resource "azurerm_lb_probe" "lb_http_health_probe" {
  loadbalancer_id = azurerm_lb.internal_lb.id
  name            = "HTTPHealthProbe"
  protocol        = "Http"
  port            = 80
  # URI (uniform resource identifier) used for requesting health status from the backend endpoint
  request_path = "/"
  # time gap between health probes
  interval_in_seconds = 15
  # number of failed probe attempts after which the backend endpoint is removed from rotation
  number_of_probes = 2
}
#-------------------------------------------------------------
# load balancing rule defining how traffic is distributed to the VMs
# azure load balancer rules require the load balancer to have a frontend IP configuration attached
resource "azurerm_lb_rule" "lb_rule" {
  loadbalancer_id = azurerm_lb.internal_lb.id
  name            = "myHTTPRule"
  protocol        = "Tcp"
  frontend_port   = 80
  backend_port    = 80
  # name of the frontend IP configuration to which the rule is associated.
  frontend_ip_configuration_name = "${azurerm_lb.internal_lb.frontend_ip_configuration[0].id}"
  # idle timeout in minutes for TCP connections.
  idle_timeout_in_minutes = 15
  # reference to a Backend Address Pool over which this Load Balancing Rule operates.
  backend_address_pool_ids = ["${azurerm_lb_backend_address_pool.backend_pool.id}"]
  # a "floating" IP is reassigned to a secondary server in case the primary server fails
  enable_floating_ip = false
  # the load balancing distribution type to be used by the Load Balancer
  load_distribution = "None"
}
#-------------------------------------------------------------
Please see the debug output below:
│ Error: expanding Load Balancer Rule: [ERROR] Cannot find FrontEnd IP Configuration with the name /subscriptions/00000000-00000-00000-0000-0000000000/resourceGroups/-----/providers/Microsoft.Network/loadBalancers/myIntLoadBalancer/frontendIPConfigurations/LoadBalancerFrontEndIP
│
│   with azurerm_lb_rule.lb_rule,
│   on load-balancer.tf line 63, in resource "azurerm_lb_rule" "lb_rule":
│ 63: resource "azurerm_lb_rule" "lb_rule" {
The frontend_ip_configuration_name argument expects the name of the frontend IP configuration, not its resource ID, which is why the error shows a full resource ID where a name should be. Use this instead:
frontend_ip_configuration_name = azurerm_lb.internal_lb.frontend_ip_configuration[0].name
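With that change, the rule from the question would look like the sketch below; only the frontend_ip_configuration_name line changes (the literal string "LoadBalancerFrontEndIP" would also work, but referencing the exported name avoids hard-coding it):
resource "azurerm_lb_rule" "lb_rule" {
  loadbalancer_id = azurerm_lb.internal_lb.id
  name            = "myHTTPRule"
  protocol        = "Tcp"
  frontend_port   = 80
  backend_port    = 80
  # reference the exported name of the frontend IP configuration, not its id
  frontend_ip_configuration_name = azurerm_lb.internal_lb.frontend_ip_configuration[0].name
  idle_timeout_in_minutes  = 15
  backend_address_pool_ids = [azurerm_lb_backend_address_pool.backend_pool.id]
  enable_floating_ip       = false
  load_distribution        = "None"
}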
I am trying to implement AKS Baselines with Terraform, but I can't get my Application Gateway to connect to the internal load balancer created by AKS.
My AKS config consists of a Solr instance and a service with the azure-load-balancer-internal annotation. AKS and the created LB are in the same subnet, while the Application Gateway has its own subnet, but they are all in the same VNET.
Kubernetes.tf
resource "kubernetes_service" "solr-service" {
metadata {
name = local.solr.name
annotations = {
"service.beta.kubernetes.io/azure-load-balancer-internal" : "true"
"service.beta.kubernetes.io/azure-load-balancer-internal-subnet" : "aks-subnet"
}
}
spec {
external_traffic_policy = "Local"
selector = {
app = kubernetes_deployment.solr.metadata.0.labels.app
}
port {
name = "http"
port = 80
target_port = 8983
}
type = "LoadBalancer"
load_balancer_ip = "192.168.1.200"
}
}
This config creates an internal load balancer in the MC_* resource group with frontend IP 192.168.1.200. The health check in the metrics blade is returning 100, so it looks like the created internal load balancer is working as expected.
Now I am trying to add this load balancer as a backend pool target in my Application Gateway.
application-gateway.tf
resource "azurerm_application_gateway" "agw" {
name = local.naming.agw_name
resource_group_name = azurerm_resource_group.this.name
location = azurerm_resource_group.this.location
sku {
name = "Standard_Medium"
tier = "Standard"
capacity = 1
}
gateway_ip_configuration {
name = "Gateway-IP-Config"
subnet_id = azurerm_subnet.agw_snet.id
}
frontend_port {
name = "http-port"
port = 80
}
frontend_ip_configuration {
name = "public-ip"
public_ip_address_id = azurerm_public_ip.agw_ip.id
}
backend_address_pool {
name = "lb"
ip_addresses = ["192.168.1.200"]
}
backend_http_settings {
name = "settings"
cookie_based_affinity = "Disabled"
port = 80
protocol = "Http"
request_timeout = 60
}
http_listener {
name = "http-listener"
frontend_ip_configuration_name = "public-ip"
frontend_port_name = "http-port"
protocol = "Http"
}
request_routing_rule {
name = local.request_routing_rule_name
rule_type = "Basic"
http_listener_name = "http-listener"
backend_address_pool_name = "lb"
backend_http_settings_name = "settings"
}
}
I would expect the Application Gateway to now be connected to the internal load balancer and send all requests over to it. But I get the message that all backend pools are unhealthy, so it looks like the Gateway can't access the provided IP.
I took a look at the Azure baseline on GitHub, but as far as I can see, they are using an FQDN instead of an IP. I am pretty sure it's just some minor configuration issue, but I just can't find it.
I already tried using the Application Gateway as an ingress controller (or HTTP routing), and that worked, but I would like to implement it with an internal load balancer. I also tried adding a health check to the backend node pool, but that did not work.
EDIT: I changed the LB to public and added the public IP to the Application Gateway, and everything worked, so it looks like this is the issue. But I don't get why the Application Gateway can't access the sibling subnet. I don't have any restrictions in place, and by default Azure allows communication between subnets.
My mistake was to place the internal load balancer into the same subnet as my Kubernetes nodes. When I changed the code and gave it its own subnet, everything worked out fine. My final service config:
resource "kubernetes_service" "solr-service" {
metadata {
name = local.solr.name
annotations = {
"service.beta.kubernetes.io/azure-load-balancer-internal" : "true"
"service.beta.kubernetes.io/azure-load-balancer-internal-subnet" : "lb-subnet"
}
}
spec {
external_traffic_policy = "Local"
selector = {
app = kubernetes_deployment.solr.metadata.0.labels.app
}
port {
name = "http"
port = 80
target_port = 8983
}
type = "LoadBalancer"
load_balancer_ip = "192.168.3.200"
}
}
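For reference, the azure-load-balancer-internal-subnet annotation above points at a subnet named lb-subnet. A minimal sketch of such a dedicated subnet follows; the resource names, virtual network reference, and address prefix are assumptions chosen only to match the 192.168.3.200 frontend IP:
resource "azurerm_subnet" "lb_snet" {
  name                 = "lb-subnet"
  resource_group_name  = azurerm_resource_group.this.name
  virtual_network_name = azurerm_virtual_network.this.name
  # assumed address range containing the load_balancer_ip used above
  address_prefixes     = ["192.168.3.0/24"]
}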
I'm deploying infrastructure on Azure using Terraform. I'm using modules for a Linux scale set and a load balancer, and azurerm_lb_nat_pool in order to have SSH access to the VMs.
I now need to retrieve the ports of the NAT rules for other purposes.
For the life of me I cannot find a way to retrieve them; I went through all the Terraform documentation and cannot find them under any data source or attribute reference.
Here is my LB code:
resource "azurerm_lb" "front-load-balancer" {
name = "front-load-balancer"
location = var.def-location
resource_group_name = var.rg-name
sku = "Standard"
frontend_ip_configuration {
name = "frontend-IP-configuration"
public_ip_address_id = var.public-ip-id
}
}
resource "azurerm_lb_nat_pool" "lb-nat-pool" {
resource_group_name = var.rg-name
loadbalancer_id = azurerm_lb.front-load-balancer.id
name = "lb-nat-pool"
protocol = "Tcp"
frontend_port_start = var.frontend-port-start
frontend_port_end = var.frontend-port-end
backend_port = 22
frontend_ip_configuration_name = "frontend-IP-configuration"
}
Any assistance would be very appreciated.
EDIT:
I tried the inbound_nat_rules export on the azurerm_lb frontend IP configuration; it gives a list of resource IDs, which I do not currently know how to extract the ports from:
output "frontend-ip-confguration-inbound-nat-rules" {
value = azurerm_lb.front-load-balancer.frontend_ip_configuration[*].inbound_nat_rules
}
Which results in this:
Changes to Outputs:
  + LB-frontend-IP-confguration-Inbound-nat-rules = [
      + [
          + "/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/weight-tracker-stage-rg/providers/Microsoft.Network/loadBalancers/front-load-balancer/inboundNatRules/lb-nat-pool.3",
          + "/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/weight-tracker-stage-rg/providers/Microsoft.Network/loadBalancers/front-load-balancer/inboundNatRules/lb-nat-pool.4",
          + "/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/weight-tracker-stage-rg/providers/Microsoft.Network/loadBalancers/front-load-balancer/inboundNatRules/lb-nat-pool.6",
        ],
    ]
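Not a full answer, but as a sketch of working with that export: the last path segment of each ID is the inbound NAT rule name (e.g. lb-nat-pool.3), which a for expression can extract; the frontend ports themselves are not part of the ID:
# sketch: list the inbound NAT rule names by splitting the exported IDs
output "inbound-nat-rule-names" {
  value = [
    for id in flatten(azurerm_lb.front-load-balancer.frontend_ip_configuration[*].inbound_nat_rules) :
    element(split("/", id), length(split("/", id)) - 1)
  ]
}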
I am trying to add an LB rule using Terraform but am unable to find a way to refer to an existing probe ID (health probe).
Below is my code.
resource "azurerm_lb_rule" "lb-rules" {
resource_group_name = var.lb_rg
loadbalancer_id = data.azurerm_lb.lb.id
name = var.LB_Rule_Name
protocol = var.protocol
frontend_port = var.frontend_port
backend_port = var.backend_port
frontend_ip_configuration_name = var.frontend_ip_configuration_name
backend_address_pool_ids = [data.azurerm_lb_backend_address_pool.backend.id]
}
From azurerm provider docs:
The following arguments are supported:
...
probe_id - (Optional) A reference to a Probe used by this Load Balancing Rule.
Therefore:
resource "azurerm_lb_rule" "lb-rules" {
[...]
probe_id = azurerm_lb_probe.your_probe.id
}
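Applied to the rule from the question, that would look like the following sketch, assuming the probe is managed in the same configuration (if the probe only exists outside Terraform, its ID would have to be supplied some other way, for example through a variable):
resource "azurerm_lb_rule" "lb-rules" {
  resource_group_name            = var.lb_rg
  loadbalancer_id                = data.azurerm_lb.lb.id
  name                           = var.LB_Rule_Name
  protocol                       = var.protocol
  frontend_port                  = var.frontend_port
  backend_port                   = var.backend_port
  frontend_ip_configuration_name = var.frontend_ip_configuration_name
  backend_address_pool_ids       = [data.azurerm_lb_backend_address_pool.backend.id]
  # reference to the health probe used by this rule
  probe_id                       = azurerm_lb_probe.your_probe.id
}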
Not sure what I'm doing wrong. I only have one NAT rule in the terraform config and I'm not using a NAT pool.
Error:
azurerm_virtual_machine_scale_set.development-eastus-ss: compute.VirtualMachineScaleSetsClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidRequestFormat" Message="Cannot parse the request." Details=[{"code":"InvalidJsonReferenceWrongType","message":"Reference Id /subscriptions/sub-id/resourceGroups/prod-eastus-rg/providers/Microsoft.Network/loadBalancers/development-eastus-lb/inboundNatRules/development-eastus-lb-nat-http is referencing resource of a wrong type. The Id is expected to reference resources of type loadBalancers/inboundNatPools. Path Properties.UpdateGroups[0].NetworkProfile.networkInterfaceConfigurations[0].properties.ipConfigurations[0].properties.loadBalancerInboundNatPools[0]."}]
NAT Rule:
resource "azurerm_lb_nat_rule" "development-eastus-lb-nat-http" {
name = "development-eastus-lb-nat-http"
resource_group_name = "${var.resource_group_name}"
loadbalancer_id = "${azurerm_lb.development-eastus-lb.id}"
protocol = "Tcp"
frontend_port = 80
backend_port = 8080
frontend_ip_configuration_name = "development-eastus-lb-frontend"
Looks like this is an issue with trying to bind a single NAT rule to a scale set. As the error suggests, it is expecting a NAT pool rather than a NAT rule. A NAT pool lets the load balancer and the scale set build a group of rules where the load balancer exposes a different frontend port per underlying VM, all mapping to the same port on the VM.
Think about RDP, where you want to be able to remote onto a specific VM; the pool enables this by giving you a unique frontend port that maps onto that VM.
resource "azurerm_lb_nat_pool" "test" {
resource_group_name = "${azurerm_resource_group.test.name}"
loadbalancer_id = "${azurerm_lb.test.id}"
name = "SampleApplicationPool"
protocol = "Tcp"
frontend_port_start = 80
frontend_port_end = 81
backend_port = 8080
frontend_ip_configuration_name = "PublicIPAddress"
}
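The scale set then references the pool from its IP configuration rather than an individual NAT rule. A rough sketch using the legacy azurerm_virtual_machine_scale_set resource (other required scale set settings are omitted and the resource names are placeholders):
resource "azurerm_virtual_machine_scale_set" "test" {
  # ... sku, os profile and other required settings omitted for brevity ...
  network_profile {
    name    = "networkprofile"
    primary = true

    ip_configuration {
      name                                   = "IPConfiguration"
      primary                                = true
      subnet_id                              = "${azurerm_subnet.test.id}"
      load_balancer_backend_address_pool_ids = ["${azurerm_lb_backend_address_pool.test.id}"]
      # for scale sets this argument takes the NAT pool id, not individual NAT rule ids
      load_balancer_inbound_nat_rules_ids    = ["${azurerm_lb_nat_pool.test.id}"]
    }
  }
}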
If, however, you are looking to run a service such as an HTTP website on a different port internally than externally, such as 8080 on the local network and port 80 on the external public network, then I would take a look at the LB rule, as this specifically allows you to set the ports, as shown below.
resource "azurerm_lb_rule" "test" {
resource_group_name = "${azurerm_resource_group.test.name}"
loadbalancer_id = "${azurerm_lb.test.id}"
name = "LBRule"
protocol = "Tcp"
frontend_port = 3389
backend_port = 3389
frontend_ip_configuration_name = "PublicIPAddress"
}
Hopefully this helps
Creating Azure VMs from Terraform:
azurerm_network_interface.nic[1]: 1 error(s) occurred:
azurerm_network_interface.nic.1: network.InterfacesClient#CreateOrUpdate: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="RulesOfSameLoadBalancerTypeUseSameBackendPortProtocolAndIPConfig" Message="Load balancer rules /subscriptions/xxx/resourceGroups/dev/providers/Microsoft.Network/loadBalancers/webserver-lbip/inboundNatRules/RDP-VM0 and /subscriptions/xxx/resourceGroups/dev/providers/Microsoft.Network/loadBalancers/webserver-lbip/inboundNatRules/RDP-VM1 belong to the load balancer of the same type and use the same backend port 3389 and protocol Tcp with floatingIP disabled, must not be used with the same backend IP /subscriptions/xxx/resourceGroups/dev/providers/Microsoft.Network/networkInterfaces/NIC1/ipConfigurations/ipconfig1." Details=[]
Terraform:
resource "azurerm_network_interface" "nic" {
count = "${var.count}"
depends_on = ["azurerm_virtual_network.network", "azurerm_lb.webserver-lbip"]
name = "NIC${count.index}"
location = "${azurerm_resource_group.rg.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
ip_configuration {
name = "ipconfig1"
subnet_id = "${azurerm_subnet.web.id}"
private_ip_address_allocation = "dynamic"
load_balancer_backend_address_pools_ids = ["${azurerm_lb_backend_address_pool.webserver-lb-backend-pool.id}"]
load_balancer_inbound_nat_rules_ids = ["${azurerm_lb_nat_rule.webserver-nat1.id}", "${azurerm_lb_nat_rule.webserver-nat2.id}", "${azurerm_lb_nat_rule.webserver-nat3.id}", "${azurerm_lb_nat_rule.webserver-nat4.id}"]
}
}
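The error is saying that two inbound NAT rules with the same backend port and protocol (RDP-VM0 and RDP-VM1, both TCP) are attached to the same ipconfig; with the config above, every NIC created by count receives all four NAT rules. A sketch of one common way around this, assuming the intent is one RDP rule per VM (the NAT rule resource, frontend port range, and frontend IP configuration name below are placeholders, not taken from the original config):
# hypothetical per-VM RDP NAT rules, one per NIC
resource "azurerm_lb_nat_rule" "rdp" {
  count                          = "${var.count}"
  name                           = "RDP-VM${count.index}"
  resource_group_name            = "${azurerm_resource_group.rg.name}"
  loadbalancer_id                = "${azurerm_lb.webserver-lbip.id}"
  protocol                       = "Tcp"
  frontend_port                  = "${50000 + count.index}"
  backend_port                   = 3389
  frontend_ip_configuration_name = "LoadBalancerFrontEnd"
}

resource "azurerm_network_interface" "nic" {
  count               = "${var.count}"
  name                = "NIC${count.index}"
  location            = "${azurerm_resource_group.rg.location}"
  resource_group_name = "${azurerm_resource_group.rg.name}"

  ip_configuration {
    name                          = "ipconfig1"
    subnet_id                     = "${azurerm_subnet.web.id}"
    private_ip_address_allocation = "dynamic"

    load_balancer_backend_address_pools_ids = ["${azurerm_lb_backend_address_pool.webserver-lb-backend-pool.id}"]
    # each NIC gets only its own NAT rule instead of all of them
    load_balancer_inbound_nat_rules_ids     = ["${element(azurerm_lb_nat_rule.rdp.*.id, count.index)}"]
  }
}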