Terraform: retrieve inbound NAT rule ports - Azure

I'm deploying infrastructure on Azure using Terraform.
I'm using modules for a Linux scale set and a load balancer, with azurerm_lb_nat_pool providing SSH access to the VMs.
I now need to retrieve the frontend ports of the generated NAT rules for other purposes.
For the life of me I cannot find a way to retrieve them; I went through all the Terraform documentation and cannot find them under any data source or attribute reference.
Here is my LB code:
resource "azurerm_lb" "front-load-balancer" {
name = "front-load-balancer"
location = var.def-location
resource_group_name = var.rg-name
sku = "Standard"
frontend_ip_configuration {
name = "frontend-IP-configuration"
public_ip_address_id = var.public-ip-id
}
}
resource "azurerm_lb_nat_pool" "lb-nat-pool" {
resource_group_name = var.rg-name
loadbalancer_id = azurerm_lb.front-load-balancer.id
name = "lb-nat-pool"
protocol = "Tcp"
frontend_port_start = var.frontend-port-start
frontend_port_end = var.frontend-port-end
backend_port = 22
frontend_ip_configuration_name = "frontend-IP-configuration"
}
Any assistance would be very appreciated.
EDIT:
I tried the inbound_nat_rules attribute exported on the azurerm_lb frontend IP configuration; it gives a list of resource IDs, and I do not currently know how to extract the ports from those:
output "frontend-ip-confguration-inbound-nat-rules" {
value = azurerm_lb.front-load-balancer.frontend_ip_configuration[*].inbound_nat_rules
}
Which results in this:
Changes to Outputs:
  + frontend-ip-confguration-inbound-nat-rules = [
      + [
          + "/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/weight-tracker-stage-rg/providers/Microsoft.Network/loadBalancers/front-load-balancer/inboundNatRules/lb-nat-pool.3",
          + "/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/weight-tracker-stage-rg/providers/Microsoft.Network/loadBalancers/front-load-balancer/inboundNatRules/lb-nat-pool.4",
          + "/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/weight-tracker-stage-rg/providers/Microsoft.Network/loadBalancers/front-load-balancer/inboundNatRules/lb-nat-pool.6",
        ],
    ]
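There is no documented attribute that exposes the ports directly, but the IDs above suggest a workaround: the NAT pool appears to name each generated rule <pool-name>.<scale-set-instance-id>, and by observation each instance gets the frontend port frontend_port_start + <instance-id>. Assuming that convention holds, a sketch that reconstructs the ports from the rule IDs:

output "nat-rule-frontend-ports" {
  # Assumption: frontend port = frontend_port_start + trailing instance id
  # parsed from the generated rule name (e.g. "lb-nat-pool.3" -> 3).
  value = [
    for rule_id in flatten(azurerm_lb.front-load-balancer.frontend_ip_configuration[*].inbound_nat_rules) :
    var.frontend-port-start + tonumber(regex("\\d+$", rule_id))
  ]
}

With the plan output above, this would yield var.frontend-port-start + 3, + 4, and + 6.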

Related

Connect Azure Application Gateway with Internal AKS managed loadbalancer

I am trying to implement the AKS baseline with Terraform, but I can't get my Application Gateway to connect to the internal load balancer created by AKS.
My AKS config consists of a Solr instance and a service with the azure-load-balancer-internal annotation. AKS and the created LB are in the same subnet, while the Application Gateway has its own subnet, but they are all in the same VNET.
Kubernetes.tf
resource "kubernetes_service" "solr-service" {
metadata {
name = local.solr.name
annotations = {
"service.beta.kubernetes.io/azure-load-balancer-internal" : "true"
"service.beta.kubernetes.io/azure-load-balancer-internal-subnet" : "aks-subnet"
}
}
spec {
external_traffic_policy = "Local"
selector = {
app = kubernetes_deployment.solr.metadata.0.labels.app
}
port {
name = "http"
port = 80
target_port = 8983
}
type = "LoadBalancer"
load_balancer_ip = "192.168.1.200"
}
}
This config creates an internal load balancer in the MC_* resource group with frontend IP 192.168.1.200. The health check in the metrics blade is returning 100, so it looks like the created internal load balancer is working as expected.
Now I am trying to add this load balancer as a backend pool target in my Application Gateway.
application-gateway.tf
resource "azurerm_application_gateway" "agw" {
name = local.naming.agw_name
resource_group_name = azurerm_resource_group.this.name
location = azurerm_resource_group.this.location
sku {
name = "Standard_Medium"
tier = "Standard"
capacity = 1
}
gateway_ip_configuration {
name = "Gateway-IP-Config"
subnet_id = azurerm_subnet.agw_snet.id
}
frontend_port {
name = "http-port"
port = 80
}
frontend_ip_configuration {
name = "public-ip"
public_ip_address_id = azurerm_public_ip.agw_ip.id
}
backend_address_pool {
name = "lb"
ip_addresses = ["192.168.1.200"]
}
backend_http_settings {
name = "settings"
cookie_based_affinity = "Disabled"
port = 80
protocol = "Http"
request_timeout = 60
}
http_listener {
name = "http-listener"
frontend_ip_configuration_name = "public-ip"
frontend_port_name = "http-port"
protocol = "Http"
}
request_routing_rule {
name = local.request_routing_rule_name
rule_type = "Basic"
http_listener_name = "http-listener"
backend_address_pool_name = "lb"
backend_http_settings_name = "settings"
}
}
I would expect the Application Gateway now to be connected to the internal load balancer and to send all requests over to it. But I get the message that all backend pools are unhealthy, so it looks like the Gateway can't reach the provided IP.
I took a look at the Azure Git baseline, but as far as I can see, they use FQDNs instead of IPs. I am pretty sure it's just some minor configuration issue, but I just can't find it.
I already tried using the Application Gateway as an ingress controller (or HTTP routing), which worked, but I would like to implement it with an internal load balancer. I also tried adding a health check to the backend node pool; that did not work either.
EDIT: I changed the LB to public and added the public IP to the Application Gateway, and everything worked. So this looks like the issue, but I don't get why the Application Gateway can't access the sibling subnet. I don't have any restrictions in place, and by default Azure allows communication between subnets.
My mistake was to place the internal load balancer into the same subnet as my Kubernetes nodes. When I changed the code and gave the load balancer its own subnet, everything worked out fine. My final service config:
resource "kubernetes_service" "solr-service" {
metadata {
name = local.solr.name
annotations = {
"service.beta.kubernetes.io/azure-load-balancer-internal" : "true"
"service.beta.kubernetes.io/azure-load-balancer-internal-subnet" : "lb-subnet"
}
}
spec {
external_traffic_policy = "Local"
selector = {
app = kubernetes_deployment.solr.metadata.0.labels.app
}
port {
name = "http"
port = 80
target_port = 8983
}
type = "LoadBalancer"
load_balancer_ip = "192.168.3.200"
}
}
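For that to work, the subnet named in the azure-load-balancer-internal-subnet annotation has to exist in the cluster's VNET. A minimal sketch (the variable names and address range are assumptions; the range must contain the load_balancer_ip):

resource "azurerm_subnet" "lb_subnet" {
  name                 = "lb-subnet"
  resource_group_name  = var.vnet_resource_group_name # assumed variable
  virtual_network_name = var.vnet_name                # assumed variable
  address_prefixes     = ["192.168.3.0/24"]           # contains 192.168.3.200
}

The AKS cluster identity also needs permission to join this subnet (typically the Network Contributor role on it).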

Unable to resolve the DNS address in Azure

I have a Hub-Spoke model and an Azure DNS zone. There is a firewall in the hub, and the spoke uses route tables. I have created a VM in the spoke and added an 'A' record in the Azure DNS zone; however, I am unable to resolve the DNS address in Azure.
I have an Azure Firewall with the following rules:
# Create an Azure Firewall network rule for DNS
resource "azurerm_firewall_network_rule_collection" "fw-net-dns" {
  name                = "azure-firewall-dns-rule"
  azure_firewall_name = azurerm_firewall.azufw.name
  resource_group_name = azurerm_resource_group.ipz12-dat-np-connection-rg.name
  priority            = 102
  action              = "Allow"

  rule {
    name                  = "DNS"
    source_addresses      = ["*"]
    destination_ports     = ["53"]
    destination_addresses = ["*"]
    protocols             = ["TCP", "UDP"]
  }
}
I have a route table with the following routes:
resource "azurerm_route_table" "azurt" {
name = "AzfwRouteTable"
resource_group_name = azurerm_resource_group.ipz12-dat-np-connection-rg.name
location = azurerm_resource_group.ipz12-dat-np-connection-rg.location
disable_bgp_route_propagation = false
route {
name = "AzgwRoute"
address_prefix = "10.2.3.0/24" // CIDR of 2nd SPOKE
next_hop_type = "VirtualNetworkGateway"
}
route {
name = "Internet"
address_prefix = "0.0.0.0/0"
next_hop_type = "VirtualAppliance"
next_hop_in_ip_address = azurerm_firewall.azufw.ip_configuration.0.private_ip_address
}
tags = {
environment = "Staging"
owner = "Someone#contoso.com"
costcenter = "IT"
}
depends_on = [
azurerm_resource_group.ipz12-dat-np-connection-rg
]
}
It is associated with the subnet
resource "azurerm_subnet_route_table_association" "virtual_machine_subnet_route_table_assc" {
subnet_id = azurerm_subnet.virtual_machine_subnet.id
route_table_id = azurerm_route_table.azurt.id
depends_on = [
azurerm_route_table.azurt,
azurerm_subnet.virtual_machine_subnet
]
}
I have a VM in the above-mentioned subnet:
resource "azurerm_network_interface" "virtual_machine_nic" {
name = "virtal-machine-nic"
location = azurerm_resource_group.ipz12-dat-np-applications-rg.location
resource_group_name = azurerm_resource_group.ipz12-dat-np-applications-rg.name
ip_configuration {
name = "internal"
subnet_id = data.azurerm_subnet.virtual_machine_subnet.id
private_ip_address_allocation = "Dynamic"
}
depends_on = [
azurerm_resource_group.ipz12-dat-np-applications-rg
]
}
resource "azurerm_windows_virtual_machine" "virtual_machine" {
name = "virtual-machine"
resource_group_name = azurerm_resource_group.ipz12-dat-np-applications-rg.name
location = azurerm_resource_group.ipz12-dat-np-applications-rg.location
size = "Standard_B1ms"
admin_username = "...."
admin_password = "...."
network_interface_ids = [
azurerm_network_interface.virtual_machine_nic.id
]
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
publisher = "MicrosoftWindowsDesktop"
offer = "Windows-10"
sku = "21h1-pro"
version = "latest"
}
depends_on = [
azurerm_network_interface.virtual_machine_nic
]
}
I have created an Azure DNS zone:
resource "azurerm_dns_zone" "dns_zone" {
name = "learnpluralsight.com"
resource_group_name = azurerm_resource_group.ipz12-dat-np-connection-rg.name
depends_on = [
azurerm_resource_group.ipz12-dat-np-connection-rg
]
}
and added the 'A' record
But I am not able to resolve the FQDN
I tried to reproduce the same in my environment and got the same "request timed out".
To resolve this issue, you need to add a reverse lookup zone and create a PTR record for the DNS server name and IP.
In the DNS manager, right-click Reverse Lookup Zones and choose New Zone.
Click Next, keep Primary zone and check the "store the zone" box -> Next -> select the second option, "to all DNS servers ... in the domain" -> IPv4 Reverse Lookup Zone -> Next.
Here you should enter the first three octets of your IP address, e.g. 150.171.10 -> Next -> choose "allow only secure dynamic updates" -> Next -> Finish.
Once you refresh, the default records are added. Right-click, create a pointer (PTR) record, type your IP address (150.171.10.35) and provide your host name, and your PTR record will be added successfully.
When I then run nslookup, it resolves successfully without "request timed out".
If the issue still persists: in your search box -> Network and Internet -> Ethernet -> right-click -> Properties -> Internet Protocol Version 4, and provide the DNS server addresses there.
If any issue still occurs, try:
Preferred DNS server: 8.8.8.8
Alternate DNS server: 8.8.4.4
or
Preferred DNS server: 8.8.8.8
Alternate DNS server: your own IP
Reference: dns request timeout (spiceworks.com)
Also check that IPv6 is set to obtain its DNS server address automatically, or else uncheck IPv6.
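If you prefer to keep the reverse zone in Azure DNS instead of on a Windows DNS server, a rough Terraform equivalent of the reverse zone and PTR record, assuming the 150.171.10.35 address used above, might look like this:

resource "azurerm_dns_zone" "reverse_zone" {
  name                = "10.171.150.in-addr.arpa" # reverse zone for 150.171.10.0/24
  resource_group_name = azurerm_resource_group.ipz12-dat-np-connection-rg.name
}

resource "azurerm_dns_ptr_record" "vm_ptr" {
  name                = "35" # last octet of 150.171.10.35
  zone_name           = azurerm_dns_zone.reverse_zone.name
  resource_group_name = azurerm_resource_group.ipz12-dat-np-connection-rg.name
  ttl                 = 300
  records             = ["virtual-machine.learnpluralsight.com"]
}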

How to add a health probe id to an LB rule when creating it using Terraform

I am trying to add an LB rule using Terraform, but I am unable to find a way to refer to an existing probe id (health probe).
Below is my code:
resource "azurerm_lb_rule" "lb-rules" {
resource_group_name = var.lb_rg
loadbalancer_id = data.azurerm_lb.lb.id
name = var.LB_Rule_Name
protocol = var.protocol
frontend_port = var.frontend_port
backend_port = var.backend_port
frontend_ip_configuration_name = var.frontend_ip_configuration_name
backend_address_pool_ids = [data.azurerm_lb_backend_address_pool.backend.id]
}
From the azurerm provider docs:
The following arguments are supported:
...
probe_id - (Optional) A reference to a Probe used by this Load Balancing Rule.
Therefore:
resource "azurerm_lb_rule" "lb-rules" {
[...]
probe_id = azurerm_lb_probe.your_probe.id
}
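If the probe is not defined in your configuration yet, a minimal sketch of declaring one on the same load balancer and wiring it into the rule (the probe name and port here are assumptions; pre-3.0 versions of the azurerm provider also required resource_group_name on the probe):

resource "azurerm_lb_probe" "your_probe" {
  name            = "health-probe" # assumed name
  loadbalancer_id = data.azurerm_lb.lb.id
  protocol        = "Tcp"
  port            = var.backend_port # probe the port the rule forwards to
}

resource "azurerm_lb_rule" "lb-rules" {
  # ... arguments as in the question ...
  probe_id = azurerm_lb_probe.your_probe.id
}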

Create SQL firewall rules with the SQL server using targeted apply

I have an azurerm_sql_server and two azurerm_sql_firewall_rules for this server.
If I do a targeted terraform apply to create a resource that depends on the SQL server, the SQL server is created but the firewall rules are not.
Can I require the firewall rules to always be deployed together with the SQL server?
"Bonus": the SQL server is in one module and the database using the server is in another module :(
Example code:
infrastructure/main.tf
resource "azurerm_sql_server" "test" {
count = var.enable_dbs ? 1 : 0
location = azurerm_resource_group.test.location
resource_group_name = azurerm_resource_group.test.name
name = local.dbs_name
version = "12.0"
administrator_login = var.dbs_admin_login
administrator_login_password = var.dbs_admin_password
}
resource "azurerm_sql_firewall_rule" "allow_azure_services" {
count = var.enable_dbs ? 1 : 0
resource_group_name = azurerm_resource_group.test.name
name = "AllowAccessToAzureServices"
server_name = azurerm_sql_server.test[0].name
start_ip_address = "0.0.0.0"
end_ip_address = "0.0.0.0"
}
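The second rule referenced later in the outputs, allow_office_ip, is not shown in the question; it would presumably follow the same pattern, e.g. with a placeholder office IP:

resource "azurerm_sql_firewall_rule" "allow_office_ip" {
  count               = var.enable_dbs ? 1 : 0
  resource_group_name = azurerm_resource_group.test.name
  name                = "AllowAccessFromOffice"
  server_name         = azurerm_sql_server.test[0].name
  start_ip_address    = "203.0.113.10" # placeholder office IP
  end_ip_address      = "203.0.113.10"
}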
webapp/main.tf
resource "azurerm_sql_database" "test" {
count = var.enable_db ? 1 : 0
location = var.location
resource_group_name = var.resource_group_name
server_name = var.dbs_name
name = var.project_name
requested_service_objective_name = var.db_sku_size
}
main.tf
module "infrastructure" {
source = "./infrastructure"
project_name = "infra"
enable_dbs = true
dbs_admin_login = "someusername"
dbs_admin_password = "somepassword"
}
module "my_webapp" {
source = "./webapp"
location = module.infrastructure.location
resource_group_name = module.infrastructure.resource_group_name
project_name = local.project_name
enable_db = true
dbs_name = module.infrastructure.dbs_name
dbs_admin_login = module.infrastructure.dbs_admin_login
dbs_admin_password = module.infrastructure.dbs_admin_password
}
If the whole script is applied using terraform apply, everything is fine.
But if only module.my_webapp is applied using terraform apply -target module.my_webapp, the firewall rule is missing, because it is neither the target nor directly required by the target.
The rule is necessary nonetheless and should be applied every time the database server itself is applied.
Possible "Solution":
Add the firewall rules as output of the infrastructure module:
output "dbs_firewall_rules" {
value = concat(
azurerm_sql_firewall_rule.allow_azure_services,
azurerm_sql_firewall_rule.allow_office_ip
)
}
Then add this output as input to the webapp module:
variable "dbs_firewall_rules" {
description = "DB firewall rules required (used for the database in depends_on)"
type = list
}
And connect it in the main script:
module "my_webapp" {
...
dbs_firewall_rules = module.infrastructure.dbs_firewall_rules
...
}
Drawback: only one type of resource can be put in the list, which is why I renamed it from dbs_dependencies to dbs_firewall_rules.
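On Terraform 0.13 or later, a simpler alternative to threading the rules through as outputs is a module-level depends_on, which makes everything in the infrastructure module (including the firewall rules) a dependency of the webapp module, so a targeted apply pulls it in:

module "my_webapp" {
  source = "./webapp"
  # ... arguments as above ...

  # Every resource in module.infrastructure is created before anything
  # in this module, even under terraform apply -target.
  depends_on = [module.infrastructure]
}

Note this makes all of the webapp module's resources depend on all of the infrastructure module's, which can slow planning on large configurations.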
If you have the firewall rules defined in the same .tf file as the SQL server, they should be getting deployed, because referencing the server builds up the dependency graph correctly. That said, I have run into issues with this not working, specifically with SQL Server firewall rules. What I eventually did was leverage the depends_on property to make sure those are always created. That looks like this:
resource "azurerm_sql_firewall_rule" "test" {
name = "FirewallRule1"
resource_group_name = "${azurerm_resource_group.test.name}"
server_name = "${azurerm_sql_server.test.name}"
start_ip_address = "10.0.17.62"
end_ip_address = "10.0.17.62"
depends_on = [azurerm_sql_server.test]
}
resource "azurerm_sql_server" "test" {
name = "mysqlserver"
resource_group_name = "${azurerm_resource_group.test.name}"
location = "${azurerm_resource_group.test.location}"
version = "12.0"
administrator_login = "mradministrator"
administrator_login_password = "thisIsDog11"
tags = {
environment = "production"
}
}
Then just add each rule you want in the same way. If you are defining your rules outside your module, you should be able to make this an argument that you can pass into the module to force the linking.

Create custom domain for app services via Terraform

I am creating Azure App Services via Terraform, following the documentation here:
https://www.terraform.io/docs/providers/azurerm/r/app_service.html
Here is the snippet from my Terraform script:
resource "azurerm_app_service" "app" {
name = "app-name"
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
app_service_plan_id = "ommitted"
site_config {
java_version = "1.8"
java_container = "TOMCAT"
java_container_version = "8.5"
}
}
I need a subdomain as well for my app service, for which I am not able to find any help in Terraform.
As of now the URL for the app service is:
https://abc.azure-custom-domain.cloud
and I want my URL to be:
https://*.abc.azure-custom-domain.cloud
I know this can be done via the portal, but is there any way we can do it via Terraform?
This is now possible using app_service_custom_hostname_binding (since PR#1087 on 6th April 2018)
resource "azurerm_app_service_custom_hostname_binding" "test" {
hostname = "www.mywebsite.com"
app_service_name = "${azurerm_app_service.test.name}"
resource_group_name = "${azurerm_resource_group.test.name}"
}
This is not possible. You can check the link you provided: if a parameter is not listed there, it is not supported by Terraform.
You need to do it in the Azure portal.
I have found it to be a tiny bit more complicated. You need all of the following:
DNS Zone (then set name servers at the registrar)
App Service
Domain verification TXT record
CNAME record
Hostname binding
resource "azurerm_dns_zone" "dns-zone" {
name = var.azure_dns_zone
resource_group_name = var.azure_resource_group_name
}
resource "azurerm_linux_web_app" "app-service" {
name = "some-service"
resource_group_name = var.azure_resource_group_name
location = var.azure_region
service_plan_id = "some-plan"
site_config {}
}
resource "azurerm_dns_txt_record" "domain-verification" {
name = "asuid.api.domain.com"
zone_name = var.azure_dns_zone
resource_group_name = var.azure_resource_group_name
ttl = 300
record {
value = azurerm_linux_web_app.app-service.custom_domain_verification_id
}
}
resource "azurerm_dns_cname_record" "cname-record" {
name = "domain.com"
zone_name = azurerm_dns_zone.dns-zone.name
resource_group_name = var.azure_resource_group_name
ttl = 300
record = azurerm_linux_web_app.app-service.default_hostname
depends_on = [azurerm_dns_txt_record.domain-verification]
}
resource "azurerm_app_service_custom_hostname_binding" "hostname-binding" {
hostname = "api.domain.com"
app_service_name = azurerm_linux_web_app.app-service.name
resource_group_name = var.azure_resource_group_name
depends_on = [azurerm_dns_cname_record.cname-record]
}
I had the same issue and had to use PowerShell to overcome it in the short term. Maybe you could get Terraform to trigger the PowerShell script; I haven't tried that yet!
The PowerShell is as follows:
$fqdn="www.yourwebsite.com"
$webappname="yourwebsite.azurewebsites.net"
Set-AzureRmWebApp -Name <YourAppServiceName> -ResourceGroupName <TheResourceGroupOfYourAppService> -HostNames @($fqdn,$webappname)
IMPORTANT: Make sure you configure DNS FIRST, i.e. the CNAME or TXT record for the custom domain you're trying to set, otherwise PowerShell and even the Azure portal manual method will fail.
