Azure Front Door in front of Application Gateway - azure

I've deployed Azure Front Door in front of Application Gateway.
Now I want to route all traffic through Front Door and restrict direct access to the Application Gateway's public IP address.
How can I do that?
Here's what I'm trying to do

I got the answer from Microsoft Azure Support.
I needed to add a Network Security Group (NSG) and associate the Application Gateway subnet with it.
NSG inbound rules:

Rule 1:
Source: Service Tag
Source service tag: AzureFrontDoor.Backend
Source port ranges: *
Destination: Any
Destination port ranges: *
Protocol: Any
Action: Allow
Priority: 200

Rule 2:
Source: Service Tag
Source service tag: GatewayManager
Source port ranges: *
Destination: Any
Destination port ranges: 65200-65535
Protocol: Any
Action: Allow
Priority: 300

Rule 3:
Source: Service Tag
Source service tag: VirtualNetwork
Source port ranges: *
Destination: Any
Destination port ranges: *
Protocol: Any
Action: Allow
Priority: 400

Rule 4:
Source: Service Tag
Source service tag: AzureLoadBalancer
Source port ranges: *
Destination: Any
Destination port ranges: *
Protocol: Any
Action: Allow
Priority: 500

Rule 5:
Source: Any
Source port ranges: *
Destination: Any
Destination port ranges: *
Protocol: Any
Action: Deny
Priority: 600
Here's what my NSG looks like

From the Microsoft docs, these are the Network Security Group rules you need attached to the App Gateway subnet:
Azure CLI example:
# Set up reusable variables
app="myapp"; echo $app
env="prod"; echo $env
l="eastus2"; echo $l
tags="env=$env app=$app"; echo $tags
app_rg="rg-$app-$env"; echo $app_rg
agic_nsg_n="nsg-agic-$app-$env"; echo $agic_nsg_n
# Create an AGW NSG (the default rules are included automatically)
az network nsg create \
--resource-group $app_rg \
--name $agic_nsg_n \
--location $l \
--tags $tags
# AllowGatewayManagerInbound
az network nsg rule create \
--name AllowGatewayManagerInbound \
--direction Inbound \
--resource-group $app_rg \
--nsg-name $agic_nsg_n \
--priority 300 \
--destination-port-ranges 65200-65535 \
--protocol TCP \
--source-address-prefixes GatewayManager \
--destination-address-prefixes "*" \
--access Allow
# AllowAzureFrontDoor.Backend
az network nsg rule create \
--name AllowAzureFrontDoor.Backend \
--direction Inbound \
--resource-group $app_rg \
--nsg-name $agic_nsg_n \
--priority 200 \
--destination-port-ranges 443 80 \
--protocol TCP \
--source-address-prefixes AzureFrontDoor.Backend \
--destination-address-prefixes VirtualNetwork \
--access Allow
The assumptions are:
Incoming traffic from Azure Front Door arrives over port 80 (HTTP) or 443 (HTTPS). If you need other ports, update them or use Any.
I have an Azure Kubernetes Service behind the Application Gateway configured with the Application Gateway Ingress Controller (AGIC), hence the destination is VirtualNetwork. Again, based on your specific scenario you could update it or leave it as Any.
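Note that the CLI above creates the NSG and its rules but does not yet associate the NSG with the Application Gateway subnet. A minimal sketch of that last step, assuming hypothetical $agw_vnet_n and $agw_subnet_n variables for your virtual network and subnet names:
# Associate the NSG with the Application Gateway subnet
# (VNet and subnet names below are assumptions; adjust to your environment)
agw_vnet_n="vnet-$app-$env"; echo $agw_vnet_n
agw_subnet_n="snet-agw-$app-$env"; echo $agw_subnet_n
az network vnet subnet update \
--resource-group $app_rg \
--vnet-name $agw_vnet_n \
--name $agw_subnet_n \
--network-security-group $agic_nsg_n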
There is also a complete GitHub code example in the Azure directory.
Questions from Comments:
I don't understand how this prevents anyone other than FD from accessing the AKS apps via the AG public IP address though. Could you please clarify? #AndyMoose
NSG rules work as follows: based on rule priority, incoming requests are evaluated against the NSG rules. If a request matches a rule, that rule applies its action (Allow or Deny); if it does not match, the next rule is evaluated. For example:
If you or anyone else attempts to access the Azure Application Gateway (AGW) public IP (PIP), the NSG rules are checked as follows:
200: your request does not match, since it is not coming from AzureFrontDoor.Backend
300: your request does not match, since it is not coming from GatewayManager
65000: your request does not match, since it is not coming from within the VirtualNetwork
65001: your request does not match, since it is not coming from the AzureLoadBalancer
65500: your request matches, since this rule accepts all incoming sources, ports, and protocols; the NSG therefore applies its action (Deny)

Related

Azure Container Instances - allow outbound connection to internet

I am running Ubuntu 18.04 in Container Instances in a private virtual network. The container does not have access to the internet. How do I enable access to a specific URL on the internet?
Yes romanzdk, you are heading in the right direction. It seems some corporate firewall rules do not allow connections to the outside world.
By default, Azure Firewall denies (blocks) inbound and outbound traffic.
You can define a user-defined route on the ACI subnet to divert traffic to the Azure Firewall: set the next hop type to VirtualAppliance and pass the firewall's private IP address as the next hop address.
# Get the firewall's private IP address first, so it can be used as the next hop
FW_PRIVATE_IP="$(az network firewall ip-config list \
--resource-group $RESOURCE_GROUP_NAME \
--firewall-name myFirewall \
--query "[].privateIpAddress" --output tsv)"
# Route all traffic from the ACI subnet to the firewall
az network route-table route create \
--resource-group $RESOURCE_GROUP_NAME \
--name DG-Route \
--route-table-name Firewall-rt-table \
--address-prefix 0.0.0.0/0 \
--next-hop-type VirtualAppliance \
--next-hop-ip-address $FW_PRIVATE_IP
You can also create a NAT rule on the firewall to translate and filter inbound internet traffic to the application container.
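To let the container reach a specific URL, you also need an outbound application rule on the firewall. A minimal sketch, assuming the ACI subnet is 10.0.0.0/24 and using www.example.com as a placeholder FQDN (the collection and rule names are also placeholders):
# Allow the ACI subnet to reach one specific FQDN over HTTP/HTTPS
az network firewall application-rule create \
--collection-name aci-outbound-rules \
--firewall-name myFirewall \
--name Allow-SpecificUrl \
--protocols Http=80 Https=443 \
--resource-group $RESOURCE_GROUP_NAME \
--target-fqdns www.example.com \
--source-addresses 10.0.0.0/24 \
--priority 200 \
--action Allow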
For more information on how to route outbound and inbound traffic to a container group through the firewall, refer to this Microsoft document.

Azure Container Instance Security VPN

I've got an Azure container instance... I've added it into a VNet... with a private IP address of 10.0.0.4.
I want only a handful of Azure App Services to be able to call the REST API that this Azure container instance exposes. How do I give those App Services the ability to call the container?
Cheers
Andrew
There are several ways in which you might achieve this.
One would be to configure a single public IP address for outbound and inbound traffic to an Azure container group. With this method, you can deploy an Azure Container Instance in a virtual network, as you have already done.
Then,
Deploy Azure Firewall in the network
First, use the az network vnet subnet create command to add a subnet named AzureFirewallSubnet for the firewall. AzureFirewallSubnet is the required name of this subnet.
az network vnet subnet create \
--name AzureFirewallSubnet \
--resource-group $RESOURCE_GROUP_NAME \
--vnet-name aci-vnet \
--address-prefix 10.0.1.0/26
Use the following Azure CLI commands to create a firewall in the subnet.
If not already installed, add the firewall extension to the Azure CLI using the az extension add command:
az extension add --name azure-firewall
Create the firewall resources:
az network firewall create \
--name myFirewall \
--resource-group $RESOURCE_GROUP_NAME \
--location eastus
az network public-ip create \
--name fw-pip \
--resource-group $RESOURCE_GROUP_NAME \
--location eastus \
--allocation-method static \
--sku standard
az network firewall ip-config create \
--firewall-name myFirewall \
--name FW-config \
--public-ip-address fw-pip \
--resource-group $RESOURCE_GROUP_NAME \
--vnet-name aci-vnet
Update the firewall configuration using the az network firewall update command:
az network firewall update \
--name myFirewall \
--resource-group $RESOURCE_GROUP_NAME
Get the firewall's private IP address using the az network firewall ip-config list command. This private IP address is used in a later command.
FW_PRIVATE_IP="$(az network firewall ip-config list \
--resource-group $RESOURCE_GROUP_NAME \
--firewall-name myFirewall \
--query "[].privateIpAddress" --output tsv)"
Get the firewall's public IP address using the az network public-ip show command. This public IP address is used in a later command.
FW_PUBLIC_IP="$(az network public-ip show \
--name fw-pip \
--resource-group $RESOURCE_GROUP_NAME \
--query ipAddress --output tsv)"
Define user-defined route on ACI subnet
Define a user-defined route on the ACI subnet to divert traffic to the Azure Firewall. For more information, see Route network traffic.
Create Route Table
First, run the following az network route-table create command to create the route table. Create the route table in the same region as the virtual network.
az network route-table create \
--name Firewall-rt-table \
--resource-group $RESOURCE_GROUP_NAME \
--location eastus \
--disable-bgp-route-propagation true
Create route
Run the az network route-table route create command to create a route in the route table. To route traffic to the firewall, set the next hop type to VirtualAppliance, and pass the firewall's private IP address as the next hop address.
az network route-table route create \
--resource-group $RESOURCE_GROUP_NAME \
--name DG-Route \
--route-table-name Firewall-rt-table \
--address-prefix 0.0.0.0/0 \
--next-hop-type VirtualAppliance \
--next-hop-ip-address $FW_PRIVATE_IP
Associate route table to ACI subnet
Run the az network vnet subnet update command to associate the route table with the subnet delegated to Azure Container Instances.
az network vnet subnet update \
--name aci-subnet \
--resource-group $RESOURCE_GROUP_NAME \
--vnet-name aci-vnet \
--address-prefixes 10.0.0.0/24 \
--route-table Firewall-rt-table
Finally,
Configure rules on firewall
By default, Azure Firewall denies (blocks) inbound and outbound traffic.
Configure NAT rule on firewall to ACI subnet
Create a NAT rule on the firewall to translate and filter inbound internet traffic to the application container you started previously in the network. For details, see Filter inbound Internet traffic with Azure Firewall DNAT
Create a NAT rule and collection by using the az network firewall nat-rule create command:
az network firewall nat-rule create \
--firewall-name myFirewall \
--collection-name myNATCollection \
--action dnat \
--name myRule \
--protocols TCP \
--source-addresses $SOURCE_ADDRESSES \
--destination-addresses $FW_PUBLIC_IP \
--destination-ports 80 \
--resource-group $RESOURCE_GROUP_NAME \
--translated-address $ACI_PRIVATE_IP \
--translated-port 80 \
--priority 200
Add NAT rules as needed to filter traffic to other IP addresses in the subnet. For example, other container groups in the subnet could expose IP addresses for inbound traffic, or other internal IP addresses could be assigned to the container group after a restart.
Note: Replace $SOURCE_ADDRESSES with a space-separated list of your App Services' outbound IP addresses (see the sketch below for fetching them from the CLI).
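If you want to pull those outbound IPs from the CLI rather than the portal, a minimal sketch could look like the following (the resource group and app name are placeholders); az webapp show returns the addresses as a comma-separated string, so the commas are converted to spaces:
# Hypothetical App Service name and resource group; adjust to your environment
SOURCE_ADDRESSES="$(az webapp show \
--resource-group my-app-rg \
--name my-app-service \
--query outboundIpAddresses --output tsv | tr ',' ' ')"
echo $SOURCE_ADDRESSES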
Create outbound application rule on the firewall
Run the following az network firewall application-rule create command to create an outbound rule on the firewall. This sample rule allows access from the subnet delegated to Azure Container Instances to the FQDN checkip.dyndns.org. HTTP access to the site is used in a later step to confirm the egress IP address from Azure Container Instances.
az network firewall application-rule create \
--collection-name myAppCollection \
--firewall-name myFirewall \
--name Allow-CheckIP \
--protocols Http=80 Https=443 \
--resource-group $RESOURCE_GROUP_NAME \
--target-fqdns checkip.dyndns.org \
--source-addresses 10.0.0.0/24 \
--priority 200 \
--action Allow
An alternative method would be to integrate your App Service with an Azure virtual network. With Azure Virtual Network (VNet), you can place many of your Azure resources in a non-internet-routable network. The VNet Integration feature enables your apps to access resources in or through a VNet; it doesn't make your apps accessible privately.
Please find a pictorial example here. You can then connect the App Service virtual network with the ACI virtual network through VNet-to-VNet peering or a VNet-to-VNet VPN gateway.
However, with this method, you will have to integrate every Azure App Service that will be connecting to your ACI with a virtual network.
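For completeness, a minimal sketch of enabling regional VNet Integration for one App Service from the CLI (the app, VNet, and subnet names are placeholders; the integration subnet must be a dedicated, empty subnet in the App Service's VNet, which you would then peer with the ACI VNet as described above):
# Hypothetical names; adjust to your environment
az webapp vnet-integration add \
--resource-group my-app-rg \
--name my-app-service \
--vnet appservice-vnet \
--subnet integration-subnet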

How to open outbound port using Azure cli?

I have a Linux VM on Azure. I want to allow outbound traffic on some ports. For inbound, I have used this command in the Azure CLI.
az vm open-port --resource-group myResourceGroup --name myVM --port 80
Is there an equivalent Azure CLI command for opening outbound traffic?
Yes, you can open outbound ports using the CLI. You need to open the outbound port in the Network Security Group. You can find the docs here:
https://learn.microsoft.com/en-us/cli/azure/network/nsg/rule?view=azure-cli-latest
The command syntax is:
az network nsg rule create --name
--nsg-name
--priority
--resource-group
[--access {Allow, Deny}]
[--description]
[--destination-address-prefixes]
[--destination-asgs]
[--destination-port-ranges]
[--direction {Inbound, Outbound}]
[--protocol {*, Esp, Icmp, Tcp, Udp}]
[--source-address-prefixes]
[--source-asgs]
[--source-port-ranges]
[--subscription]
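As a concrete illustration, a minimal sketch of an outbound rule allowing TCP 443 (the NSG name is a placeholder for whichever NSG is attached to your VM's NIC or subnet):
az network nsg rule create \
--resource-group myResourceGroup \
--nsg-name myVMNSG \
--name allow-outbound-443 \
--direction Outbound \
--priority 200 \
--access Allow \
--protocol Tcp \
--destination-port-ranges 443
Note that the default NSG rules already allow all outbound traffic (AllowInternetOutBound at priority 65001), so an explicit Allow rule like this only matters if a higher-priority Deny rule is blocking the port.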

How NAT port forwarding works in Internal Load Balancer in azure?

I'm trying to create an internal load balancer in Azure to manage traffic. I have two VMs attached to the backend pool, assigned a private IP to the load balancer frontend, and attached NATrule1 & NATrule2 to each VM by following the Azure doc. My question is how port forwarding works in the NAT rules below:
azure network lb inbound-nat-rule create --resource-group nrprg --lb-name ilbset --name NATrule1 --protocol TCP --frontend-port 5432 --backend-port 3389
azure network lb inbound-nat-rule create --resource-group nrprg --lb-name ilbset --name NATrule2 --protocol TCP --frontend-port 5433 --backend-port 3389
The frontend has different port numbers while the backend uses the same port number. When traffic arrives through the two frontend ports, how does it decide which VM the traffic should be sent to? Shouldn't the port numbers be reversed, like:
azure network lb inbound-nat-rule create --resource-group nrprg --lb-name ilbset --name NATrule1 --protocol TCP --frontend-port 3389 --backend-port 5432
azure network lb inbound-nat-rule create --resource-group nrprg --lb-name ilbset --name NATrule2 --protocol TCP --frontend-port 3389 --backend-port 5433
(I'm doing this through CLI 2.0)
Any help will be greatly appreciated.
Thanks.
azure network lb inbound-nat-rule create --resource-group nrprg --lb-name ilbset --name NATrule1 --protocol TCP --frontend-port 5432 --backend-port 3389
azure network lb inbound-nat-rule create --resource-group nrprg --lb-name ilbset --name NATrule2 --protocol TCP --frontend-port 5433 --backend-port 3389
We should use the first script to create the NAT rules.
We can't use the same frontend port on one IP address to connect to different services.
If we used the second script to create the NAT rules, it would look like this:
192.168.1.4:3389 ---------> 10.0.0.4:5432
192.168.1.4:3389 ---------> 10.0.0.5:5433
Incoming traffic could not be told apart, so we can't use the second script to create the NAT rules.
The RDP service listens on port 3389 by default.
If we use the first script to create the NAT rules, it looks like this:
192.168.1.4:5432 ---------> 10.0.0.4:3389
192.168.1.4:5433 ---------> 10.0.0.5:3389
This way, when we try to access 192.168.1.4:5432, NAT forwards the traffic to 10.0.0.4:3389; when we try to access 192.168.1.4:5433, NAT forwards the traffic to 10.0.0.5:3389.
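Since you mentioned CLI 2.0, the equivalent az commands would be roughly the following sketch; the NIC and IP configuration names in the last two commands are placeholders for each VM's NIC:
az network lb inbound-nat-rule create --resource-group nrprg --lb-name ilbset --name NATrule1 --protocol Tcp --frontend-port 5432 --backend-port 3389
az network lb inbound-nat-rule create --resource-group nrprg --lb-name ilbset --name NATrule2 --protocol Tcp --frontend-port 5433 --backend-port 3389
# Bind each rule to the corresponding VM's NIC IP configuration (NIC/ipconfig names assumed)
az network nic ip-config inbound-nat-rule add --resource-group nrprg --nic-name vm1-nic --ip-config-name ipconfig1 --lb-name ilbset --inbound-nat-rule NATrule1
az network nic ip-config inbound-nat-rule add --resource-group nrprg --nic-name vm2-nic --ip-config-name ipconfig1 --lb-name ilbset --inbound-nat-rule NATrule2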

Can't see application running on external IP of instance

Google Compute Engine newbie here.
I'm following along with the bookshelf tutorial: https://cloud.google.com/nodejs/tutorials/bookshelf-on-compute-engine
But I run into a problem. When I try to view my application at http://[YOUR_INSTANCE_IP]:8080 with my external IP, nothing shows up. I've tried running the tutorial again and again, but the same problem remains.
EDIT:
My firewall rules: http://i.imgur.com/gHyvtie.png
My VM instance:
http://i.imgur.com/mDkkFRW.png
VM instance showing the correct networking tags:
http://i.imgur.com/NRICIGl.png
Going to http://35.189.73.115:8080/ in my web browser still fails to show anything. It says "This page isn't working".
TL;DR - You're most likely missing firewall rules to allow incoming traffic to port 8080 on your instances.
Default Firewall rules
The Google Compute Engine firewall blocks all ingress traffic (i.e., incoming network traffic) to your virtual machines by default. If your VM is created on the default network (which is usually the case), a few ports like 22 (SSH) and 3389 (RDP) are allowed.
The default firewall rules are described here.
Opening ports for ingress
The ingress firewall rules are described in detail here.
The recommended approach is to create a firewall rule which allows incoming traffic on port 8080 to VMs carrying a specific tag you choose. You can then associate this tag only with the VMs where you want to allow ingress on 8080.
The steps to do this using gcloud:
# Create a new firewall rule that allows INGRESS tcp:8080 with VMs containing tag 'allow-tcp-8080'
gcloud compute firewall-rules create rule-allow-tcp-8080 --source-ranges 0.0.0.0/0 --target-tags allow-tcp-8080 --allow tcp:8080
# Add the 'allow-tcp-8080' tag to a VM named VM_NAME
gcloud compute instances add-tags VM_NAME --tags allow-tcp-8080
# If you want to list all the GCE firewall rules
gcloud compute firewall-rules list
Here is another stack overflow answer which walks you through how to allow ingress traffic on specific ports to your VM using Cloud Console Web UI (in addition to gcloud).
PS: These are also part of the steps in the tutorial you linked.
# Add the 'http-server' tag while creating the VM
gcloud compute instances create my-app-instance \
--image=debian-8 \
--machine-type=g1-small \
--scopes userinfo-email,cloud-platform \
--metadata-from-file startup-script=gce/startup-script.sh \
--zone us-central1-f \
--tags http-server
# Add firewall rules to allow ingress tcp:8080 to VMs with tag 'http-server'
gcloud compute firewall-rules create default-allow-http-8080 \
--allow tcp:8080 \
--source-ranges 0.0.0.0/0 \
--target-tags http-server \
--description "Allow port 8080 access to http-server"
