Unable to resolve the DNS address in Azure

I have a hub-spoke model with an Azure DNS zone. The hub contains an Azure Firewall, and the spoke uses route table(s). I have created a VM in the spoke and added the 'A' record in the Azure DNS zone; however, I am unable to resolve the DNS address in Azure.
I have an Azure Firewall with the following rules:
# Create an Azure Firewall network rule for DNS
resource "azurerm_firewall_network_rule_collection" "fw-net-dns" {
  name                = "azure-firewall-dns-rule"
  azure_firewall_name = azurerm_firewall.azufw.name
  resource_group_name = azurerm_resource_group.ipz12-dat-np-connection-rg.name
  priority            = 102
  action              = "Allow"

  rule {
    name = "DNS"
    source_addresses = [
      "*",
    ]
    destination_ports = ["53"]
    destination_addresses = [
      "*",
    ]
    protocols = ["TCP", "UDP"]
  }
}
I have a route table with the routes below:
resource "azurerm_route_table" "azurt" {
  name                          = "AzfwRouteTable"
  resource_group_name           = azurerm_resource_group.ipz12-dat-np-connection-rg.name
  location                      = azurerm_resource_group.ipz12-dat-np-connection-rg.location
  disable_bgp_route_propagation = false

  route {
    name           = "AzgwRoute"
    address_prefix = "10.2.3.0/24" // CIDR of 2nd spoke
    next_hop_type  = "VirtualNetworkGateway"
  }

  route {
    name                   = "Internet"
    address_prefix         = "0.0.0.0/0"
    next_hop_type          = "VirtualAppliance"
    next_hop_in_ip_address = azurerm_firewall.azufw.ip_configuration.0.private_ip_address
  }

  tags = {
    environment = "Staging"
    owner       = "Someone@contoso.com"
    costcenter  = "IT"
  }

  depends_on = [
    azurerm_resource_group.ipz12-dat-np-connection-rg
  ]
}
It is associated with the subnet
resource "azurerm_subnet_route_table_association" "virtual_machine_subnet_route_table_assc" {
  subnet_id      = azurerm_subnet.virtual_machine_subnet.id
  route_table_id = azurerm_route_table.azurt.id

  depends_on = [
    azurerm_route_table.azurt,
    azurerm_subnet.virtual_machine_subnet
  ]
}
I have a VM in the above-mentioned subnet:
resource "azurerm_network_interface" "virtual_machine_nic" {
  name                = "virtal-machine-nic"
  location            = azurerm_resource_group.ipz12-dat-np-applications-rg.location
  resource_group_name = azurerm_resource_group.ipz12-dat-np-applications-rg.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = data.azurerm_subnet.virtual_machine_subnet.id
    private_ip_address_allocation = "Dynamic"
  }

  depends_on = [
    azurerm_resource_group.ipz12-dat-np-applications-rg
  ]
}
resource "azurerm_windows_virtual_machine" "virtual_machine" {
  name                = "virtual-machine"
  resource_group_name = azurerm_resource_group.ipz12-dat-np-applications-rg.name
  location            = azurerm_resource_group.ipz12-dat-np-applications-rg.location
  size                = "Standard_B1ms"
  admin_username      = "...."
  admin_password      = "...."
  network_interface_ids = [
    azurerm_network_interface.virtual_machine_nic.id
  ]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "MicrosoftWindowsDesktop"
    offer     = "Windows-10"
    sku       = "21h1-pro"
    version   = "latest"
  }

  depends_on = [
    azurerm_network_interface.virtual_machine_nic
  ]
}
I have created an Azure DNS zone:
resource "azurerm_dns_zone" "dns_zone" {
  name                = "learnpluralsight.com"
  resource_group_name = azurerm_resource_group.ipz12-dat-np-connection-rg.name

  depends_on = [
    azurerm_resource_group.ipz12-dat-np-connection-rg
  ]
}
and added the 'A' record
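For reference, the record would have been created along these lines (a sketch only, since the original snippet isn't shown; the record name "vm" and the reference to the NIC's private IP are assumptions):

```hcl
resource "azurerm_dns_a_record" "vm_a_record" {
  name                = "vm" # hypothetical host name
  zone_name           = azurerm_dns_zone.dns_zone.name
  resource_group_name = azurerm_resource_group.ipz12-dat-np-connection-rg.name
  ttl                 = 300
  # Point the record at the VM's private IP (exported by the NIC resource).
  records             = [azurerm_network_interface.virtual_machine_nic.private_ip_address]
}
```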
But I am not able to resolve the FQDN

I tried to reproduce the same in my environment and got the same "request timed out".
To resolve this issue, you need to add a reverse lookup zone and create a PTR record for the DNS server name and IP.
Under Reverse Lookup Zones, right-click and choose New Zone.
Click Next, choose Primary zone, check the box to store the zone, click Next, choose the second option ("To all DNS servers ... in the domain"), select IPv4 Reverse Lookup Zone, and click Next.
Here you should enter the first three octets of your IP address, e.g. 150.171.10, click Next, choose "Allow only secure dynamic updates", click Next, then Finish.
Once you refresh, the default records are added. Right-click and create a Pointer (PTR) record as below: type your IP address (e.g. 150.171.10.35), provide your host name, and the PTR record will be added successfully.
Now when I run nslookup, the query succeeds without a request timeout.
If the issue still persists: in the search box, go to Network and Internet -> Ethernet -> right-click -> Properties -> Internet Protocol Version 4, and provide the DNS servers as below.
If any issue still occurs, try:
Preferred DNS server: 8.8.8.8
Alternate DNS server: 8.8.4.4
or
Preferred DNS server: 8.8.8.8
Alternate DNS server: your own IP
Reference: dns request timeout (spiceworks.com)
Also check whether IPv6 is set to obtain the DNS server address automatically; otherwise uncheck it.

Related

Error creating azurerm_monitor_metric_alert for IPsec Site-to-Site on Azure with Terraform

So I am trying to create monitoring alerts for IPsec VPN connections on an Azure Virtual Network Gateway. We have automated the creation of the actual deployment of the connections. However, when I try to deploy the monitoring portion of the bundle, I'm getting an error. The plan looks good but fails when actually deploying the alerts.
resource "azurerm_monitor_action_rule_action_group" "vpn-alerts" {
  name                = "vpn-alerts"
  resource_group_name = azurerm_resource_group.rg-canadacentral-prod-region.name
  action_group_id     = azurerm_monitor_action_group.azure-vpn-monitor.id

  scope {
    type         = "ResourceGroup"
    resource_ids = [azurerm_resource_group.rg-canadacentral-prod-region.id]
  }
}

resource "azurerm_monitor_action_group" "main" {
  name                = "vgw-${var.azure_region}-${var.vdc_env}-monitor"
  resource_group_name = azurerm_resource_group.rg-canadacentral-prod-region.name
  short_name          = "ipsec-alerts"

  # webhook_receiver {
  #   name        = "callmyapi"
  #   service_uri = "http://example.com/alert"
  # }
}
resource "azurerm_monitor_metric_alert" "ipsec_tunnel" {
  for_each            = { for cn in var.vpn : cn.name => cn }
  name                = "lgw-${var.azure_region}-${var.vdc_env}-region-${each.value.name}-connectivity"
  resource_group_name = azurerm_resource_group.rg-canadacentral-prod-region.name
  scopes              = [azurerm_local_network_gateway.region_to_site[each.value.name].id]
  description         = "IPSec tunnel did not receive any traffic for over 5 minutes"

  criteria {
    metric_namespace = "Microsoft.Insights/metricAlerts"
    metric_name      = "BitsInPerSecond"
    aggregation      = "Average"
    operator         = "LessThanOrEqual"
    threshold        = 0

    dimension {
      name     = "ApiName"
      operator = "Include"
      values   = ["*"]
    }
  }

  action {
    action_group_id = azurerm_monitor_action_group.main.id
  }
}
Initial failure:
So I tried changing the metric namespace to both
Microsoft.Network/connections
Microsoft.Network/localNetworkGateways
and received this error.
This is the code used to deploy the alerting. Could it be an issue with the actual alert namespace not being found? Any help appreciated.
I tried to reproduce the same in my environment for VPN gateways and received a similar error:
creating or updating Monitor Metric Alert: (Name "example-metricalert" / Resource Group "xxx "): insights.MetricAlertsClient#CreateOrUpdate: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="ResourceNotFound" Message="{\"code\":\"BadRequest\",\"message\":\"Detect invalid value: Microsoft.Insights/metricAlerts for query parameter: 'metricnamespace', the value must be: Microsoft.Network/vpnGateways if the query parameter is provided, you can also skip this optional query parameter
Code:
resource "azurerm_virtual_network" "example" {
  name                = "example-network"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  address_space       = ["10.0.0.0/16"]
}

resource "azurerm_virtual_wan" "example" {
  name                = "example-vwan"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
}

resource "azurerm_virtual_hub" "example" {
  name                = "example-hub"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  virtual_wan_id      = azurerm_virtual_wan.example.id
  address_prefix      = "10.0.1.0/24"
}

resource "azurerm_vpn_gateway" "example" {
  name                = "example-vpng"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  virtual_hub_id      = azurerm_virtual_hub.example.id
}

resource "azurerm_monitor_action_group" "ag" {
  name                = "myactiongroup"
  resource_group_name = data.azurerm_resource_group.example.name
  short_name          = "ipsec-alerts"
}

resource "azurerm_monitor_metric_alert" "alert" {
  name                 = "example-metricalert"
  resource_group_name  = data.azurerm_resource_group.example.name
  scopes               = ["/subscriptions/xxx/resourceGroups/xxxxx/providers/Microsoft.Network/vpnGateways"]
  description          = "IPSec tunnel did not receive any traffic for over 5 minutes"
  target_resource_type = "Microsoft.Network/vpnGateways"

  criteria {
    metric_namespace = "Microsoft.Insights/metricAlerts"
    metric_name      = "Percentage CPU"
    aggregation      = "Total"
    operator         = "GreaterThan"
    threshold        = 80
  }

  action {
    action_group_id = azurerm_monitor_action_group.ag.id
  }
}
So here the metric namespace is correct, but it does not contain metrics such as CPU percentage.
I checked: the issue is due to the tried namespaces not being supported for the network-related metrics.
Only the namespaces listed under metrics-supported are valid for these metrics.
Try the namespace Microsoft.Network/virtualNetworkGateways, as localNetworkGateways is not a supported one.
And if you check, the local network gateway is involved in VPN connections through the virtual network gateway; see Configure IPsec/IKE policy for site-to-site VPN connections.
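For a virtual network gateway, a criteria block could look like the sketch below. The scope ID and threshold are placeholders, and I'm assuming the TunnelAverageBandwidth metric, which the metrics-supported page lists for Microsoft.Network/virtualNetworkGateways:

```hcl
resource "azurerm_monitor_metric_alert" "vnet_gw_alert" {
  name                = "example-vnetgw-metricalert"
  resource_group_name = data.azurerm_resource_group.example.name
  # Placeholder scope: the resource ID of your virtual network gateway.
  scopes              = ["/subscriptions/xxx/resourceGroups/xxx/providers/Microsoft.Network/virtualNetworkGateways/example-vnet-gw"]
  description         = "Tunnel bandwidth dropped to zero"

  criteria {
    # Namespace must match the scoped resource type.
    metric_namespace = "Microsoft.Network/virtualNetworkGateways"
    metric_name      = "TunnelAverageBandwidth"
    aggregation      = "Average"
    operator         = "LessThanOrEqual"
    threshold        = 0
  }

  action {
    action_group_id = azurerm_monitor_action_group.ag.id
  }
}
```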
In my case, as I am setting monitor alerts for a VPN gateway, I used "Microsoft.Network/vpnGateways" as the metric_namespace and "AverageBandwidth" as the metric, which only supports Average as the aggregation, and it executed successfully.
According to the table on the metrics-supported page above, the working configuration for the VPN gateway is:
resource "azurerm_monitor_metric_alert" "alert" {
  name                = "example-metricalert"
  resource_group_name = data.azurerm_resource_group.example.name
  scopes              = ["/subscriptions/xx/resourceGroups/xxx/providers/Microsoft.Network/vpnGateways/kavexample-vpng"]
  description         = "IPSec tunnel did not receive any traffic for over 5 minutes"
  // target_resource_type = "Microsoft.Network/vpnGateways"

  criteria {
    metric_namespace = "Microsoft.Network/vpnGateways"
    metric_name      = "AverageBandwidth"
    aggregation      = "Average"
    operator         = "GreaterThan"
    threshold        = 80
  }

  action {
    action_group_id = azurerm_monitor_action_group.ag.id
  }
}
We can then monitor it in Insights.

Terraform retrieve inbound NAT rules ports

I'm deploying infrastructure on Azure using Terraform.
I'm using modules for a Linux scale set and a load balancer, and using azurerm_lb_nat_pool in order to have SSH access to the VMs.
I now need to retrieve the ports of the NAT rules for other purposes.
For the life of me I cannot find a way to retrieve them; I went through all the Terraform documentation and cannot find it under any data source or attribute reference.
Here is my LB code:
resource "azurerm_lb" "front-load-balancer" {
  name                = "front-load-balancer"
  location            = var.def-location
  resource_group_name = var.rg-name
  sku                 = "Standard"

  frontend_ip_configuration {
    name                 = "frontend-IP-configuration"
    public_ip_address_id = var.public-ip-id
  }
}

resource "azurerm_lb_nat_pool" "lb-nat-pool" {
  resource_group_name            = var.rg-name
  loadbalancer_id                = azurerm_lb.front-load-balancer.id
  name                           = "lb-nat-pool"
  protocol                       = "Tcp"
  frontend_port_start            = var.frontend-port-start
  frontend_port_end              = var.frontend-port-end
  backend_port                   = 22
  frontend_ip_configuration_name = "frontend-IP-configuration"
}
Any assistance would be very appreciated.
EDIT:
I tried the inbound_nat_rules exported attribute on the azurerm_lb frontend IP configuration; it gives a list of resource IDs, from which I do not currently know how to extract the ports:
output "frontend-ip-confguration-inbound-nat-rules" {
  value = azurerm_lb.front-load-balancer.frontend_ip_configuration[*].inbound_nat_rules
}
Which results in this:
Changes to Outputs:
+ LB-frontend-IP-confguration-Inbound-nat-rules = [
+ [
+ "/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/weight-tracker-stage-rg/providers/Microsoft.Network/loadBalancers/front-load-balancer/inboundNatRules/lb-nat-pool.3",
+ "/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/weight-tracker-stage-rg/providers/Microsoft.Network/loadBalancers/front-load-balancer/inboundNatRules/lb-nat-pool.4",
+ "/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/weight-tracker-stage-rg/providers/Microsoft.Network/loadBalancers/front-load-balancer/inboundNatRules/lb-nat-pool.6",
],
]
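Since the thread does not show a resolution, here is one possible workaround (a sketch, not a confirmed answer): the generated rule IDs end with the scale-set instance index, and a NAT pool allocates frontend ports sequentially from frontend_port_start, so each rule's port can be derived in an output. The assumption that frontend port = frontend_port_start + instance index is worth verifying against the portal:

```hcl
output "nat-rule-frontend-ports" {
  # Hypothetical derivation: take the numeric suffix of each inbound NAT
  # rule ID (e.g. ".../inboundNatRules/lb-nat-pool.3" -> 3) and offset it
  # from the pool's first frontend port.
  value = [
    for rule_id in azurerm_lb.front-load-balancer.frontend_ip_configuration[0].inbound_nat_rules :
    azurerm_lb_nat_pool.lb-nat-pool.frontend_port_start + tonumber(regex("\\d+$", rule_id))
  ]
}
```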

How to specify PTR for Azure PublicIP with Terraform

I am setting up an alias record in an Azure-hosted DNS zone to point to the public (egress) IP of a K8s cluster, like this:
data "azurerm_dns_zone" "example" {
  name = "example.com"
}

locals {
  egress_id             = tolist(azurerm_kubernetes_cluster.k8s.network_profile.0.load_balancer_profile.0.effective_outbound_ips)[0]
  egress_name           = reverse(split("/", local.egress_id))[0]
  egress_resource_group = reverse(split("/", local.egress_id))[4]
}

resource "azurerm_dns_a_record" "k8s" {
  name                = var.dns_prefix
  zone_name           = data.azurerm_dns_zone.example.name
  resource_group_name = data.azurerm_dns_zone.example.resource_group_name
  ttl                 = 300
  target_resource_id  = local.egress_id
}

output "ptr_command" {
  value = "az network public-ip update --name ${local.egress_name} --resource-group ${local.egress_resource_group} --reverse-fqdn ${var.dns_prefix}.example.com --dns-name ${var.dns_prefix}-${local.egress_name}"
}
This works, and (just to prove that it works) I can also add a PTR record for reverse lookup with the explicit API command produced by the output block -- but can I get terraform to do that as part of apply? (One problem is that it would have to happen after the creation of the A record since Azure will check that it points at the correct IP).
(A k8s egress does not need a PTR record, I hear you say, but something like an outgoing SMTP server does need correct reverse lookup).
What I ended up doing was to add a local-exec provisioner to the DNS record resource, but one that modifies the public IP using an explicit CLI command. Not a good solution, because it is not where you'd look, but at least the ordering is right. Also, I think the way I do it only works if you did az login to give Terraform access to your Azure account, though I'm sure you can configure az to use the same credentials as Terraform in other cases.
Here is a worked example with an explicit azurerm_public_ip resource, illustrating another Catch 22: On next apply, Terraform will see the reverse_fqdn attribute and attempt to remove it, unless you tell it that it's OK. (In the OP, the public IP was created by an azurerm_kubernetes_cluster resource and Terraform does not store its full state).
data "azurerm_dns_zone" "example" {
  name = "example.com"
}

resource "random_id" "domain_label" {
  byte_length = 31 # 62 hex digits.
}

resource "azurerm_public_ip" "example" {
  lifecycle {
    # reverse_fqdn can only be set after a DNS record has
    # been created, so we do it in a provisioner there.
    # Do not try to change it back, please.
    ignore_changes = [reverse_fqdn]
  }

  name                = "example"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  allocation_method   = "Static"
  domain_name_label   = "x${random_id.domain_label.hex}"
}

resource "azurerm_dns_a_record" "example" {
  name                = "example"
  zone_name           = data.azurerm_dns_zone.example.name
  resource_group_name = data.azurerm_dns_zone.example.resource_group_name
  ttl                 = 300
  target_resource_id  = azurerm_public_ip.example.id

  provisioner "local-exec" {
    command = "az network public-ip update --name ${azurerm_public_ip.example.name} --resource-group ${azurerm_resource_group.example.name} --reverse-fqdn example.example.com"
  }
}
I still have a problem that the target_resource_id attribute on the A record seems to disappear if Terraform replaces the VM that is using the network interface that the public IP is associated with. But another apply solves that.
This works for me when I skip the explicit reverse_fqdn and work around it with azurerm_dns_a_record:
resource "azurerm_dns_zone" "example" {
  name                = local.subdomain
  resource_group_name = azurerm_resource_group.example.name
}

resource "azurerm_public_ip_prefix" "example" {
  name                = local.aks_public_ip_prefix_name
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  prefix_length       = 31

  tags = {
    environment = "Production"
  }
}

resource "azurerm_public_ip" "aks_ingress" {
  name                = local.aks_public_ip_name
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  allocation_method   = "Static"
  sku                 = "Standard"
  domain_name_label   = local.subdomain_prefix
  public_ip_prefix_id = azurerm_public_ip_prefix.example.id

  tags = {
    environment = "Production"
  }
}

resource "azurerm_dns_a_record" "example" {
  name                = "@" # apex record
  zone_name           = azurerm_dns_zone.example.name
  resource_group_name = azurerm_resource_group.example.name
  ttl                 = 300
  target_resource_id  = azurerm_public_ip.aks_ingress.id
}

How can I add a VM into a Load Balancer within Azure using Terraform?

I have been looking through the Terraform.io docs and it's not really clear.
I know how to add a VM to an LB through the Azure portal; I am just trying to figure out how to do it with Terraform.
I do not see an option in azurerm_availability_set or azurerm_lb to add a VM.
Please let me know if anyone has any ideas.
Devon
I'd take a look at this example I created. After you've created the LB, when creating each NIC make sure you add a backlink to the LB.
load_balancer_backend_address_pool_ids = ["${azurerm_lb_backend_address_pool.webservers_lb_backend.id}"]
Terraform load balanced server
resource "azurerm_lb_backend_address_pool" "backend_pool" {
  resource_group_name = "${azurerm_resource_group.rg.name}"
  loadbalancer_id     = "${azurerm_lb.lb.id}"
  name                = "BackendPool1"
}

resource "azurerm_lb_nat_rule" "tcp" {
  resource_group_name            = "${azurerm_resource_group.rg.name}"
  loadbalancer_id                = "${azurerm_lb.lb.id}"
  name                           = "RDP-VM-${count.index}"
  protocol                       = "tcp"
  frontend_port                  = "5000${count.index + 1}"
  backend_port                   = 3389
  frontend_ip_configuration_name = "LoadBalancerFrontEnd"
  count                          = 2
}
You can get the whole file at this link. I think the code above is the most important thing. For more details about the Load Balancer NAT rule, see azurerm_lb_nat_rule.
Might be late to answer this, but here goes. Once you have created LB and VM, you can use this snippet to associate NIC and LB backend pool:
resource "azurerm_network_interface_backend_address_pool_association" "vault" {
  network_interface_id    = "${azurerm_network_interface.nic.id}"
  ip_configuration_name   = "nic_ip_config"
  backend_address_pool_id = "${azurerm_lb_backend_address_pool.nic.id}"
}
Ensure that the VMs sit within an availability set; otherwise you won't be able to register the VMs with the LB.
I think this is what you need. You then need to create the association between the network interface and the virtual machine through the subnet.
All of the answers here appear to be outdated. azurerm_network_interface_nat_rule_association should be used as of 2021-08-22:
resource "azurerm_lb_nat_rule" "example" {
  resource_group_name            = azurerm_resource_group.example.name
  loadbalancer_id                = azurerm_lb.example.id
  name                           = "RDPAccess"
  protocol                       = "Tcp"
  frontend_port                  = 3389
  backend_port                   = 3389
  frontend_ip_configuration_name = "primary"
}

resource "azurerm_network_interface" "example" {
  name                = "example-nic"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  ip_configuration {
    name                          = "testconfiguration1"
    subnet_id                     = azurerm_subnet.example.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_network_interface_nat_rule_association" "example" {
  network_interface_id  = azurerm_network_interface.example.id
  ip_configuration_name = "testconfiguration1"
  nat_rule_id           = azurerm_lb_nat_rule.example.id
}

Create custom domain for app services via terraform

I am creating Azure App Services via Terraform, following their documentation located at this site:
https://www.terraform.io/docs/providers/azurerm/r/app_service.html
Here is the snippet of the Terraform script:
resource "azurerm_app_service" "app" {
  name                = "app-name"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  app_service_plan_id = "ommitted"

  site_config {
    java_version           = "1.8"
    java_container         = "TOMCAT"
    java_container_version = "8.5"
  }
}
I need a subdomain as well for my App Services, for which I am not able to find any help in Terraform.
As of now the URL for the App Service is:
https://abc.azure-custom-domain.cloud
and I want my URL to be:
https://*.abc.azure-custom-domain.cloud
I know this can be done via the portal, but is there any way we can do it via Terraform?
This is now possible using app_service_custom_hostname_binding (since PR#1087 on 6th April 2018)
resource "azurerm_app_service_custom_hostname_binding" "test" {
  hostname            = "www.mywebsite.com"
  app_service_name    = "${azurerm_app_service.test.name}"
  resource_group_name = "${azurerm_resource_group.test.name}"
}
This is not possible. You could check the link you provided: if a parameter is not listed there, it is not supported by Terraform.
You need to do it in the Azure Portal.
I have found it to be a tiny bit more complicated; you need all of the following:
DNS zone (then set the name servers at the registrar)
App Service
Domain verification TXT record
CNAME record
Hostname binding
resource "azurerm_dns_zone" "dns-zone" {
  name                = var.azure_dns_zone
  resource_group_name = var.azure_resource_group_name
}

resource "azurerm_linux_web_app" "app-service" {
  name                = "some-service"
  resource_group_name = var.azure_resource_group_name
  location            = var.azure_region
  service_plan_id     = "some-plan"
  site_config {}
}

resource "azurerm_dns_txt_record" "domain-verification" {
  name                = "asuid.api.domain.com"
  zone_name           = var.azure_dns_zone
  resource_group_name = var.azure_resource_group_name
  ttl                 = 300

  record {
    value = azurerm_linux_web_app.app-service.custom_domain_verification_id
  }
}

resource "azurerm_dns_cname_record" "cname-record" {
  name                = "domain.com"
  zone_name           = azurerm_dns_zone.dns-zone.name
  resource_group_name = var.azure_resource_group_name
  ttl                 = 300
  record              = azurerm_linux_web_app.app-service.default_hostname
  depends_on          = [azurerm_dns_txt_record.domain-verification]
}

resource "azurerm_app_service_custom_hostname_binding" "hostname-binding" {
  hostname            = "api.domain.com"
  app_service_name    = azurerm_linux_web_app.app-service.name
  resource_group_name = var.azure_resource_group_name
  depends_on          = [azurerm_dns_cname_record.cname-record]
}
I had the same issue and had to use PowerShell to overcome it in the short term. Maybe you could get Terraform to trigger the PowerShell script; I haven't tried that yet!
PowerShell as follows:
$fqdn = "www.yourwebsite.com"
$webappname = "yourwebsite.azurewebsites.net"
Set-AzureRmWebApp -Name <YourAppServiceName> -ResourceGroupName <TheResourceGroupOfYourAppService> -HostNames @($fqdn,$webappname)
IMPORTANT: Make sure you configure DNS FIRST, i.e. a CNAME or TXT record for the custom domain you're trying to set, else PowerShell and even the Azure Portal manual method will fail.
