I am creating Azure infrastructure using Terraform. I can create an Application Gateway in the gateway subnet. The Application Gateway requires an NSG rule allowing access on ports 65200 - 65535, which I have added, and I can communicate with the app behind the Application Gateway. But my Jenkins pipeline fails when I try to destroy the complete setup; it says:
Error: Deleting Security Rule: (Name "AllowGatewayManagerInbound" / Network Security Group
Name "gateway" / Resource Group "primary"): network.SecurityRulesClient#Delete: Failure
sending request: StatusCode=400 -- Original Error:
Code="ApplicationGatewaySubnetInboundTrafficBlockedByNetworkSecurityGroup" Message="Network
security group /subscriptions/****/resourceGroups/primary/providers/Microsoft.Network/networkSecurityGroups
/gateway blocks incoming internet traffic on ports 65200 - 65535 to subnet
/subscriptions/****/resourceGroups/primary/providers/Microsoft.Network/virtualNetworks/primary/subnets/gateway,
associated with Application Gateway subscriptions/****/resourceGroups/primary/providers/Microsoft.Network/applicationGateways/primary-centralus.
This is not permitted for Application Gateways that have V2 Sku." Details=[]
Terraform code to create the subnet, NSG and Application Gateway:
resource "azurerm_network_security_group" "gateway" {
name = "gateway"
location = var.location
resource_group_name = azurerm_resource_group.app.name
tags = var.tags
}
resource "azurerm_network_security_rule" "gateway_allow_gateway_manager_https_inbound" {
name = "AllowGatewayManagerInbound"
description = "Allow Azure application GatewayManager on management ports"
resource_group_name = azurerm_network_security_group.gateway.resource_group_name
network_security_group_name = azurerm_network_security_group.gateway.name
priority = 2510
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
source_address_prefix = "GatewayManager"
destination_port_range = "65200-65535"
destination_address_prefix = "*"
}
module "app_gateway" {
source = "../../modules/app_gateway"
name = "${azurerm_resource_group.app.name}-${var.location}"
location = azurerm_resource_group.app.location
resource_group_name = azurerm_resource_group.app.name
vnet_subnet_id = azurerm_subnet.gateway.id
app_public_dns_zone = local.app_public_dns_zone
a_record_domain_name = local.a_record_subdomain
key_vault = local.key_vault
ssl_certificates = local.ssl_certificates
env = local.suffix
tags = var.tags
depends_on = [
azurerm_network_security_group.gateway
]
}
I have added a depends_on relationship between the AppGateway and the NSG, as the AppGateway depends on the NSG.
I need help to destroy these resources using Terraform.
• You have deployed the Application Gateway in the 'gateway' subnet and allowed the required inbound connectivity through the NSG. The destroy step in your pipeline is failing because Terraform tries to delete the "AllowGatewayManagerInbound" rule while the Application Gateway is still associated with that subnet; for the v2 SKU, Azure requires the subnet's NSG to allow inbound traffic on ports 65200 - 65535 at all times, so it rejects the rule deletion with the error shown above.
• Therefore, ensure that this allow rule has a higher priority (a lower priority number) than any deny rules in the same direction, and that it allows TCP ports 65200 - 65535 with the destination as 'Any' and the 'GatewayManager' service tag as the source; this traffic is required by the Azure platform to manage an Application Gateway v2 SKU.
Also check and ensure that the following rules in the NSG are set correctly:
a) Outbound Internet connectivity can't be blocked. Default outbound rules in the NSG allow Internet connectivity.
b) Don't remove the default outbound rules.
c) Don't create other outbound rules that deny any outbound connectivity.
d) Traffic from the ‘AzureLoadBalancer’ tag with the destination subnet as Any must be allowed.
Finally, check the priorities of all of the rules stated above: if an allow rule has a lower priority (a higher priority number) than a matching deny rule, it will not be effective.
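As an illustration of point d), here is a minimal Terraform sketch of an AzureLoadBalancer allow rule attached to the same NSG as in the question (the rule name and priority are illustrative, not taken from your code):
resource "azurerm_network_security_rule" "gateway_allow_azure_lb_inbound" {
  name                        = "AllowAzureLoadBalancerInbound" # illustrative name
  description                 = "Allow Azure load balancer probes"
  resource_group_name         = azurerm_network_security_group.gateway.resource_group_name
  network_security_group_name = azurerm_network_security_group.gateway.name
  priority                    = 2520 # must be a lower number (higher priority) than any deny rule
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "*"
  source_port_range           = "*"
  source_address_prefix       = "AzureLoadBalancer"
  destination_port_range      = "*"
  destination_address_prefix  = "*"
}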
Related
I have 2 virtual networks in 2 different subscriptions as below:
VNET1 : 192.168.0.0/24 in subscription#1 (HUB)
VNET2 : 192.168.1.0/24 in subscription#2 (SPOKE)
I've created the peering and I am able to ping from both sides properly.
Now, I have created the Private Zone in subscription#1 (HUB) as shown below
resource "azurerm_private_dns_zone" "keyvalutzone" {
name = "privatelink.vaultcore.azure.net"
resource_group_name = azurerm_resource_group.ipz12-dat-np-connection-rg.name
depends_on = [
azurerm_resource_group.ipz12-dat-np-connection-rg
]
}
and it is Linked with VNET as shown below
resource "azurerm_private_dns_zone_virtual_network_link" "network_link_hub_vnet_keyvalut" {
name = "vnet_link_hub_keyvalut"
resource_group_name = azurerm_resource_group.ipz12-dat-np-connection-rg.name
private_dns_zone_name = azurerm_private_dns_zone.keyvalutzone.name
virtual_network_id = azurerm_virtual_network.hub_vnet.id
depends_on = [
azurerm_private_dns_zone.keyvalutzone,
azurerm_virtual_network.hub_vnet
]
}
Question: Do I need to associate this private DNS zone with all virtual networks including VNET2 in subscription#2 (SPOKE) so that private endpoints can be resolved in VNET2? If so, how do I associate this private DNS zone with VNET2?
Note: I have a Private DNS Resolver in subscription#1 (HUB); its inbound endpoint address is used as a custom DNS server in VNET1 in subscription#1 (HUB).
resource "azurerm_private_dns_resolver" "hub_private_dns_resolver" {
name = "hub_private_dns_resolver"
resource_group_name = azurerm_resource_group.ipz12-dat-np-connection-rg.name
location = azurerm_resource_group.ipz12-dat-np-connection-rg.location
virtual_network_id = azurerm_virtual_network.hub_vnet.id
}
resource "azurerm_private_dns_resolver_inbound_endpoint" "hub_private_dns_resolver_ie" {
name = "hub_private_dns_resolver_ie"
private_dns_resolver_id = azurerm_private_dns_resolver.hub_private_dns_resolver.id
location = azurerm_private_dns_resolver.hub_private_dns_resolver.location
ip_configurations {
private_ip_allocation_method = "Dynamic"
subnet_id = azurerm_subnet.dns_resolver_inbound_subnet.id
}
}
I tried to reproduce the same in my environment.
You can use a virtual network that belongs to a different subscription with a private DNS zone; make sure you have write permission on both the virtual networks and the private DNS zone, for example through the Network Contributor and Private DNS Zone Contributor roles.
If you are using a private endpoint in a hub-and-spoke model, whether from a different subscription or the same one, it is recommended to link the same private DNS zones to all spoke and hub virtual networks that contain clients needing DNS resolution from those zones.
You can link a private DNS zone with any number of virtual networks, and it is also possible to link a private zone to a virtual network that is part of a different subscription.
Make sure to enable auto registration if you want every newly created virtual machine to be registered automatically with this private DNS zone.
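To answer the question directly: yes, link the zone to VNET2 (and any other spoke VNet containing clients that need the zone). Below is a minimal sketch of such a link, assuming the link is created in the HUB subscription where the zone lives and that the full resource ID of VNET2 is available (here through an illustrative variable spoke_vnet_id):
resource "azurerm_private_dns_zone_virtual_network_link" "network_link_spoke_vnet_keyvalut" {
  name                  = "vnet_link_spoke_keyvalut"
  resource_group_name   = azurerm_resource_group.ipz12-dat-np-connection-rg.name
  private_dns_zone_name = azurerm_private_dns_zone.keyvalutzone.name
  # VNET2 lives in subscription#2; the link only needs its resource ID,
  # but the identity running Terraform needs write permissions in both subscriptions.
  virtual_network_id    = var.spoke_vnet_id
  registration_enabled  = false # set to true if you want VM auto registration from this VNet
}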
Then I created a virtual machine; it registered automatically, and I also tried adding a record to the zone.
To test the private DNS zone, configure the firewall on both virtual machines to allow inbound ICMP packets from a PowerShell session over RDP:
New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4
Now, from vm2 (infra002), I am able to ping vm1 using its automatically registered host name.
Reference:
Azure Private Endpoint DNS configuration | Microsoft
Would this understanding be correct?
In order to achieve the objective of connecting multiple subnets in a vnet to a single storage account using:
1. Service Endpoint - requires a service endpoint to be created in each subnet.
2. Private Endpoint - a single private endpoint in the vnet is sufficient for all subnets of this vnet (and the same private endpoint works across peered vnets too, unlike service endpoints).
Regards,
Aditya Garg
That understanding is correct.
Service endpoints are created on the subnet level and need to be specified there, like in that Terraform example here:
resource "azurerm_subnet" "database-subnet" {
name = "database-subnet"
address_prefixes = ["10.0.2.0/24"]
resource_group_name = var.resourcegroup_name
virtual_network_name = azurerm_virtual_network.vnet1.name
service_endpoints = [ "Microsoft.Sql" ]
}
A private endpoint on the other hand gives you an IP in your own vnet representing a specific instance of a PaaS-Service (like a specific database within the Azure SQL Database service).
That internal IP is reachable from all your subnets. Routing between subnets of a VNet is provided by default in Azure, so there's no need to set up any custom / user-defined routing.
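For illustration, a minimal sketch of a private endpoint for a storage account, reusing the subnet and variables from the example above; the storage account reference (azurerm_storage_account.example) and the endpoint/connection names are assumptions, not taken from the question:
resource "azurerm_private_endpoint" "storage_pe" {
  name                = "storage-private-endpoint"
  location            = azurerm_virtual_network.vnet1.location
  resource_group_name = var.resourcegroup_name
  subnet_id           = azurerm_subnet.database-subnet.id # any subnet of the vnet works

  private_service_connection {
    name                           = "storage-blob-connection"
    private_connection_resource_id = azurerm_storage_account.example.id # assumed storage account
    subresource_names              = ["blob"]
    is_manual_connection           = false
  }
}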
When using service endpoints together with network security groups (NSGs) on the subnet(s), it is best to use service tags in the NSG rules: otherwise the setup can break when a PaaS service changes the IP ranges that were hard-coded into the rules. So service tags are used instead of IP ranges in that scenario.
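A sketch of such a rule, assuming an NSG named database_nsg already exists in the same resource group (the rule name, priority and port are illustrative; "Sql" is the service tag for Azure SQL Database):
resource "azurerm_network_security_rule" "allow_sql_outbound" {
  name                        = "AllowSqlOutbound"
  resource_group_name         = var.resourcegroup_name
  network_security_group_name = azurerm_network_security_group.database_nsg.name # assumed NSG
  priority                    = 200
  direction                   = "Outbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  source_address_prefix       = "*"
  destination_port_range      = "1433"
  destination_address_prefix  = "Sql" # service tag instead of a hard-coded IP range
}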
And of course, as briefly mentioned in one of the answers in the linked question, there are further differences between the two options. I only want to mention pricing very quickly:
Service endpoints are completely free of charge, whereas private endpoints are priced per hour plus per volume of data sent through them.
How can I set use remote VNET's gateway on a hub peer using terraform?
On the spoke, I'm trying to set the "Use the remote virtual network's gateway or Route Server" option via Terraform.
I've tried setting use_remote_gateways = true, but the option doesn't get set.
resource "azurerm_virtual_network_peering" "peer_lz_to_connectivity" {
provider = azurerm.lz
name = local.peer_to_connectivity_name
resource_group_name = azurerm_resource_group.rg.name
virtual_network_name = azurerm_virtual_network.vnet.name
remote_virtual_network_id = data.azurerm_virtual_network.fw_vnet.id
allow_forwarded_traffic = true
allow_gateway_transit = true
use_remote_gateways = true
}
More info:
On the hub peer of course this is not set. It just needs to be set on the spoke peer.
You can configure spoke VNets to use the hub VNet's VPN gateway to communicate with remote networks. To allow gateway traffic to flow from spoke to hub and connect to remote networks, you must:
Configure the peering connection in the hub to allow gateway transit.
Configure the peering connection in each spoke to use remote gateways.
Configure all peering connections to allow forwarded traffic.
Here are a couple of Hub and Spoke architectures for your reference :
https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/hybrid-networking/hub-spoke?tabs=cli#virtual-network-peering
https://learn.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-peering-gateway-transit
In your Terraform code block above, you have set all three options (allow_forwarded_traffic, allow_gateway_transit and use_remote_gateways) to true, which is not possible on the same peering. The allow_gateway_transit option is enabled on the hub VNet peering, where the VPN gateway is deployed, and the use_remote_gateways option is enabled on the spoke VNet peering that needs to use the hub's VPN gateway for access.
Refer : https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-peering-overview#gateways-and-on-premises-connectivity
Below is the Terraform code block for enabling "use_remote_gateways" option on a spoke Vnet:
resource "azurerm_virtual_network_peering" "spoke1-hub-peer" {
name = "spoke1-hub-peer"
resource_group_name = azurerm_resource_group.spoke1-vnet-rg.name
virtual_network_name = azurerm_virtual_network.spoke1-vnet.name
remote_virtual_network_id = azurerm_virtual_network.hub-vnet.id
allow_virtual_network_access = true
allow_forwarded_traffic = true
allow_gateway_transit = false
use_remote_gateways = true
depends_on = [azurerm_virtual_network.spoke1-vnet, azurerm_virtual_network.hub-vnet , azurerm_virtual_network_gateway.hub-vnet-gateway]}
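For completeness, here is a sketch of the matching hub-to-spoke peering with allow_gateway_transit enabled; the resource and resource group names are illustrative and simply follow the same naming pattern:
resource "azurerm_virtual_network_peering" "hub-spoke1-peer" {
  name                         = "hub-spoke1-peer"
  resource_group_name          = azurerm_resource_group.hub-vnet-rg.name # assumed hub resource group
  virtual_network_name         = azurerm_virtual_network.hub-vnet.name
  remote_virtual_network_id    = azurerm_virtual_network.spoke1-vnet.id
  allow_virtual_network_access = true
  allow_forwarded_traffic      = true
  allow_gateway_transit        = true  # the hub side allows transit through its gateway
  use_remote_gateways          = false # the hub does not use a gateway in the spoke
  depends_on                   = [azurerm_virtual_network_gateway.hub-vnet-gateway]
}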
You can find the whole Terraform code block for hub & spoke topology in the below doc:
https://learn.microsoft.com/en-us/azure/developer/terraform/hub-spoke-spoke-network
I have a terraformed Azure MySQL instance and a WordPress Docker instance running in an Azure Container Instance. Both come up fine, but I can't see a way to automatically allow access from the container instance to MySQL, because 1) the traffic is not coming through the external IP address, 2) I don't know where the actual IP address is being created, and 3) I can't see a way to determine what that IP address is.
resource "azurerm_container_group" "wp-container-group" {
name = var.container_group_name
location = azurerm_resource_group.wordpress-resource-group.location
resource_group_name = azurerm_resource_group.wordpress-resource-group.name
ip_address_type = "public"
dns_name_label = var.dns_label
os_type = "Linux"
container {
name = "wordpress"
image = "wordpress:latest"
...
}
...
}
resource "azurerm_mysql_server" "wordpress_mysql" {
name = "foo-bar"
location = azurerm_resource_group.wordpress-resource-group.location
resource_group_name = azurerm_resource_group.wordpress-resource-group.name
....
}
resource "azurerm_mysql_database" "wp-db" {
name = "wordpress"
resource_group_name = azurerm_resource_group.wordpress-resource-group.name
server_name = azurerm_mysql_server.wordpress_mysql.name
charset = "utf8"
collation = "utf8_general_ci"
}
This is set to allow traffic from the external IP address:
resource "azurerm_mysql_firewall_rule" "allow_container" {
name = "allow_wordpress_container"
resource_group_name = azurerm_resource_group.wordpress-resource-group.name
server_name = azurerm_mysql_server.wordpress_mysql.name
start_ip_address = azurerm_container_group.wp-container-group.ip_address
end_ip_address = azurerm_container_group.wp-container-group.ip_address
}
When I SSH into the container instance and try to connect with the mysql command-line client, it tells me that it's using a different IP address than the external one; the internal one is in the 52.x.x.x range. I can manually add this IP address as a firewall rule, but I want to do it automatically.
So my question is: where does this 52.x.x.x address get assigned, and how can I access it in Terraform so that I can automatically configure the firewall rule between the container instance and mysql?
The outbound IP address associated with the container instance is not available as a property of the container. The IP address is not guaranteed to persist beyond container restart either, so it would not be a reliable identifier for a firewall rule.
The simplest solution in this case would be to "Allow access to Azure services" in your database firewall. This is achieved by creating a firewall rule (azurerm_mysql_firewall_rule here, since the server is Azure Database for MySQL) with both start_ip_address and end_ip_address set to "0.0.0.0".
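A minimal sketch of that rule, reusing the server and resource group from the question (the rule name is illustrative):
resource "azurerm_mysql_firewall_rule" "allow_azure_services" {
  name                = "allow_azure_services"
  resource_group_name = azurerm_resource_group.wordpress-resource-group.name
  server_name         = azurerm_mysql_server.wordpress_mysql.name
  # The special 0.0.0.0 - 0.0.0.0 range enables "Allow access to Azure services".
  start_ip_address    = "0.0.0.0"
  end_ip_address      = "0.0.0.0"
}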
Pay attention that "allowing access to Azure services" means access to all Azure services, even if not yours. The Azure Portal allows, when configuring the network connectivity of Azure Databases, to check "Allow public access from Azure services and resources within Azure to this server group" which seems nice. But the associated tooltip says "This option configures the firewall to allow connections from IP addresses allocated to any Azure service or asset, including connections from the subscriptions of other customers."
And also allowing IPs 0.0.0.0 to 255.255.255.255 to access your DB opens the door to the whole world ...
I am trying to secure some subnets in a virtual network.
I have Virtual Network 1 with Subnets A, B, C.
I have a VM in each subnet with default endpoints (RDP and WinRM).
I used the following commands to create and attach the Network Security Group to subnet C:
$SGName = 'SecurityGroupC'
$location = 'West US'
$virtualNetwork = '1'
$subnet = 'C'
New-AzureNetworkSecurityGroup -Name $SGName -Location $Location -Label $SGName
Get-AzureNetworkSecurityGroup -Name $SGName | Set-AzureNetworkSecurityGroupToSubnet -VirtualNetworkName $VirtualNetwork -SubnetName $Subnet
I can see the default rules by running:
Get-AzureNetworkSecurityGroup -Name $SGName -Detailed
Which shows the expected default rules:
Name  : SecurityGroupC
Rules :

Type: Inbound
Name                               Priority  Action  Source Address Prefix  Source Port Range  Destination Address Prefix  Destination Port Range  Protocol
----                               --------  ------  ---------------------  -----------------  --------------------------  ----------------------  --------
ALLOW VNET INBOUND                 65000     Allow   VIRTUAL_NETWORK        *                  VIRTUAL_NETWORK             *                       *
ALLOW AZURE LOADBALANCER INBOUND   65001     Allow   AZURE_LOADBALANCER     *                  *                           *                       *
DENY ALL INBOUND                   65500     Deny    *                      *                  *                           *                       *

Type: Outbound
Name                               Priority  Action  Source Address Prefix  Source Port Range  Destination Address Prefix  Destination Port Range  Protocol
----                               --------  ------  ---------------------  -----------------  --------------------------  ----------------------  --------
ALLOW VNET OUTBOUND                65000     Allow   VIRTUAL_NETWORK        *                  VIRTUAL_NETWORK             *                       *
ALLOW INTERNET OUTBOUND            65001     Allow   *                      *                  INTERNET                    *                       *
DENY ALL OUTBOUND                  65500     Deny    *                      *                  *                           *                       *
Based on these rules my RDP endpoint on my VM in subnet C should stop working. However I am still able to RDP directly to my VM from the internet. Is there something I am missing?
When you create a VM it will create an RDP endpoint automatically. It appears that this setting overrides your Network Security Group values.
I usually add an ACL of "0.0.0.0/0" DENY to that endpoint so I can re-enable access later if I need to.
Per the function of Network Security Groups:
"Network security groups are different than endpoint-based ACLs. Endpoint ACLs work only on the public port that is exposed through the Input endpoint. An NSG works on one or more VM instances and controls all the traffic that is inbound and outbound on the VM."
The first inbound rule = "ALLOW VNET INBOUND 65000 Allow VIRTUAL_NETWORK * VIRTUAL_NETWORK * "
This allows all traffic from inside the virtual network to the virtual machine. Since there is no endpoint ACL on the VM and the RDP endpoint is enabled, traffic can get to the VM.
Update: You are correct. It should not allow RDP access. As per this link under FAQ: http://azure.microsoft.com/blog/2014/11/04/network-security-groups/
4. I have defined RDP endpoint for my VM and I am using a Network Security Group do I need a Access control rule to connect to the RDP port from Internet?
Yes, the default rules in Network Security Group does not allow access to any port from Internet, the users have to create a specific rule to allow RDP traffic.
I have just found the same thing. I also found that deleting and recreating the endpoint then allows the NSG to function as expected, i.e. it seems that if the NSG is created/linked after the endpoint, it doesn't work but if the NSG is done first, it does!
You have to apply the changes, that's why you're not getting the expected behaviour:
Set-AzureRmVirtualNetwork -VirtualNetwork $virtualNetwork
Hope this helps!