Virtual network peering connection Azure - Azure

I have the following three virtual networks:
- VNETa
- VNETb
- VNETc
All the network traffic between the three virtual networks will be routed through VNETa.
I need to create the virtual networks and then ensure that all the Azure virtual machines can connect to the other virtual machines by using their private IP addresses.
The solution must NOT require any virtual network gateways and must minimize the number of peerings. What should you do from the Azure portal before you configure IP routing?

You could create a peering between VNETa and VNETb and a peering between VNETa and VNETc. Without a virtual network gateway and without a separate peering between the spokes VNETb and VNETc, spoke-to-spoke connectivity requires a network virtual appliance (NVA) deployed as the hub in VNETa, plus a user-defined route (UDR) in each spoke VNet (VNETb and VNETc) that sends traffic destined for the other spoke through the NVA. In this scenario, you must also configure the peering connections to allow forwarded traffic. See the explanation link.
For more details on UDR configuration, you can refer to this blog post: Azure Networking - Hub-Spoke with NVA and Azure Firewall.
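If you prefer to script these peerings instead of clicking through the portal, here is a minimal sketch using the Python azure-mgmt-network SDK; the subscription ID, resource group and location are placeholders, and it assumes the three VNets already exist.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Placeholder values for illustration only.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "rg-network"

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

def vnet_id(name):
    """Look up the resource ID of an existing virtual network."""
    return client.virtual_networks.get(RESOURCE_GROUP, name).id

# Peer the hub (VNETa) with each spoke (VNETb, VNETc) in both directions.
# allow_forwarded_traffic is what lets traffic forwarded by the NVA in
# VNETa flow between the spokes.
for spoke in ["VNETb", "VNETc"]:
    client.virtual_network_peerings.begin_create_or_update(
        RESOURCE_GROUP, "VNETa", f"VNETa-to-{spoke}",
        {
            "remote_virtual_network": {"id": vnet_id(spoke)},
            "allow_virtual_network_access": True,
            "allow_forwarded_traffic": True,
        },
    ).result()
    client.virtual_network_peerings.begin_create_or_update(
        RESOURCE_GROUP, spoke, f"{spoke}-to-VNETa",
        {
            "remote_virtual_network": {"id": vnet_id("VNETa")},
            "allow_virtual_network_access": True,
            "allow_forwarded_traffic": True,
        },
    ).result()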

The key to answering this question is recognizing that an IP routing solution will be configured after you have provisioned and configured the necessary resources: "...before you configure IP routing".
You do not need a gateway subnet or virtual network gateways to implement a hub-and-spoke topology, assuming that you are going to provision, for example, a VM with IP forwarding enabled on its NIC to act as a router.
1. Create your three virtual networks, in your example VNETa, VNETb and VNETc.
2. From VNETa, create a peering with VNETb using the Resource Manager deployment model.
3. Ensure "Allow forwarded traffic from VNETa to VNETb" is enabled.
4. Repeat steps 2 & 3, substituting VNETc for VNETb.
And that's it. When you later configure IP routing, you will provision a router VM or some other network virtual appliance (NVA) in the hub network and create a route table, to be associated with VNETb and VNETc, specifying the router VM's private IP as the next hop.
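As a rough sketch of that later routing step (the resource group, location, subnet name, NVA address and prefix below are assumptions, not taken from the question), the UDR for VNETb could be created and associated like this with the Python azure-mgmt-network SDK; the route for VNETc is symmetrical:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
RESOURCE_GROUP = "rg-network"

# Route traffic destined for VNETc through the router VM (NVA) in VNETa.
# 10.0.0.4 is a hypothetical private IP of that router VM.
route_table = client.route_tables.begin_create_or_update(
    RESOURCE_GROUP, "rt-vnetb",
    {
        "location": "westeurope",
        "routes": [{
            "name": "to-vnetc-via-nva",
            "address_prefix": "10.2.0.0/16",  # assumed VNETc address space
            "next_hop_type": "VirtualAppliance",
            "next_hop_ip_address": "10.0.0.4",
        }],
    },
).result()

# Associate the route table with VNETb's workload subnet.
subnet = client.subnets.get(RESOURCE_GROUP, "VNETb", "default")
subnet.route_table = route_table
client.subnets.begin_create_or_update(RESOURCE_GROUP, "VNETb", "default", subnet).result()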
Jamie.

Related

VMs in separate subnets can ping each other

In Azure, I have 2 VMs, each in their own subnet (see image below). To my surprise, both VMs can "see" each other (using ping).
The subnet address ranges are:
net1-subnet1: 10.0.1.0/24
net1-subnet2: 10.0.2.0/24
The VMs (NIC) IPs are:
vm1: 10.0.1.4
vm2: 10.0.2.4
This is the setup
Why are both VMs able to ping each other? I thought since they are in different subnets, they would not be able to "see" each other. Is this an Azure specific feature?
Thanks
Azure routes traffic between subnets in the same virtual network (or peered virtual networks) by default as described in the Azure virtual networks overview.
You can use network security groups (NSGs) to filter the inbound and outbound traffic flowing to and from these subnets. The default rules allow traffic from within a virtual network, so you will have to add your own rules with a higher priority to the NSGs. See the NSG docs here.
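For illustration, a minimal sketch of such a higher-priority rule using the Python azure-mgmt-network SDK; the subscription ID, resource group and NSG name are placeholders, and the rule blocks traffic from net1-subnet1 into net1-subnet2:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Deny all traffic coming from net1-subnet1 into net1-subnet2.
# Priority 100 overrides the default AllowVnetInBound rule (priority 65000).
client.security_rules.begin_create_or_update(
    "rg-network", "nsg-net1-subnet2", "deny-from-subnet1",
    {
        "protocol": "*",
        "source_address_prefix": "10.0.1.0/24",
        "source_port_range": "*",
        "destination_address_prefix": "10.0.2.0/24",
        "destination_port_range": "*",
        "access": "Deny",
        "direction": "Inbound",
        "priority": 100,
    },
).result()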

Azure AKS vnet to another vnet communication

We have a managed AKS cluster running a few application pods. In the same subscription, we have a few servers in a different resource group and a different VNet. We need communication between these two VNets. I have configured VNet peering between the two VNets, but we can see that the communication is not happening.
When I add a rule like "Allow port 443 from all networks" to the NSG of the virtual machines, everything works fine.
Troubleshooting steps already done:
- VNet peering
- Got the API server IP address from the "kubeconfig" file and added it to the NSG of the VMs in the other resource group.
But this did not resolve the issue. Could you please help me fix it?
The AKS resources are behind an internal load balancer, so peering alone did not help. I had to allow the public IP address provisioned during the AKS creation process in the NSG. After adding that PIP (available in the MC_rg-*** resource group), everything started working.
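In other words, rather than "Allow port 443 from all networks", the NSG rule can be scoped to that egress public IP. A hedged sketch with the Python azure-mgmt-network SDK; the IP, resource group and NSG name are placeholders:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Allow 443 only from the public IP that AKS traffic egresses through
# (found in the MC_* resource group), instead of from all networks.
client.security_rules.begin_create_or_update(
    "rg-servers", "nsg-vms", "allow-443-from-aks-egress",
    {
        "protocol": "Tcp",
        "source_address_prefix": "<aks-egress-public-ip>/32",
        "source_port_range": "*",
        "destination_address_prefix": "*",
        "destination_port_range": "443",
        "access": "Allow",
        "direction": "Inbound",
        "priority": 200,
    },
).result()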
I would suggest trying to connect the VNets through VPN gateways.
From an Azure virtual network, connecting to another virtual network is essentially the same as connecting to an on-premises network via site-to-site (S2S) VPN.
You will need to go through the steps listed below:
Create VNetA and VNetB and the corresponding local networks.
Create the dynamic routing VPN gateways for each virtual network.
Connect the VPN gateways.
Please see the referenced document for implementing the solution I have mentioned above.
For more information on the difference between VNet peering and VNet gateways, you can refer to this document.
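If you go the gateway route, a rough sketch of the two VNet-to-VNet connections with the Python azure-mgmt-network SDK; gateway names, resource groups, location and shared key are placeholders, and both VPN gateways are assumed to exist already:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import VirtualNetworkGatewayConnection

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

gw_aks = client.virtual_network_gateways.get("rg-aks", "gw-aks-vnet")
gw_srv = client.virtual_network_gateways.get("rg-servers", "gw-servers-vnet")

# A VNet-to-VNet connection is created on each side, using the same shared key.
client.virtual_network_gateway_connections.begin_create_or_update(
    "rg-aks", "aks-to-servers",
    VirtualNetworkGatewayConnection(
        location="westeurope",
        virtual_network_gateway1=gw_aks,
        virtual_network_gateway2=gw_srv,
        connection_type="Vnet2Vnet",
        shared_key="<strong-shared-key>",
    ),
).result()

client.virtual_network_gateway_connections.begin_create_or_update(
    "rg-servers", "servers-to-aks",
    VirtualNetworkGatewayConnection(
        location="westeurope",
        virtual_network_gateway1=gw_srv,
        virtual_network_gateway2=gw_aks,
        connection_type="Vnet2Vnet",
        shared_key="<strong-shared-key>",
    ),
).result()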

Routing traffic in Azure VPN

I have 3 VNets and 3 point-to-site VPN gateways, one for each VNet, and VNet peering is set up as in the image below.
What I want to achieve is:
1. If I use VPN1, I can ping all VMs in all 3 VNets.
2. If I use VPN2, I can only ping VMs in VNet 2 and 1.
3. If I use VPN3, I can only ping VMs in VNet 3 and 1.
As I understand it, to achieve 1, I have to allow forwarded traffic on both peerings. But then 2 and 3 cannot be fulfilled - I can ping all VMs regardless of which VPN I use. Is that correct?
What should be the right way to do this?
Update: For more details, here's my use case:
In VNet 1, I have an Intranet server, which should be available for everyone.
In VNet 2, I have a development server.
In VNet 3, I have a test server.
A manager should be able to access all servers --> VPN1.
A developer should be able to access the Intranet and the Dev server --> VPN2.
A tester should be able to access the Intranet and the Test server --> VPN3.
For your requirements, I believe you could achieve this by configuring VPN gateway transit for virtual network peering in a hub-and-spoke network architecture. In this architecture, you deploy one VPN gateway in VNet1 (the hub) and peer it with the other two VNets (the spokes) instead of deploying a VPN gateway in every spoke virtual network. Routes to the gateway-connected virtual networks or on-premises networks will propagate to the routing tables of the peered virtual networks using gateway transit.
The following diagram shows how gateway transit works with virtual network peering.
In this case, you could configure VNet1 to peer with VNet2 and VNet1 to peer with VNet3.
On the peerings from VNet1 to VNet2 and from VNet1 to VNet3, enable the Allow gateway transit option. On the peerings from VNet2 to VNet1 and from VNet3 to VNet1, set the Use remote gateways option.
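Scripted, the two sides of each peering could be configured roughly as follows with the Python azure-mgmt-network SDK; the subscription ID and resource group are placeholders, and the single VPN gateway in VNet1 is assumed to exist already (the spokes no longer have their own gateways):

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

RESOURCE_GROUP = "rg-vpn"
client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

def vnet_id(name):
    return client.virtual_networks.get(RESOURCE_GROUP, name).id

for spoke in ["VNet2", "VNet3"]:
    # Hub side: VNet1 shares its VPN gateway with the spoke.
    client.virtual_network_peerings.begin_create_or_update(
        RESOURCE_GROUP, "VNet1", f"VNet1-to-{spoke}",
        {
            "remote_virtual_network": {"id": vnet_id(spoke)},
            "allow_virtual_network_access": True,
            "allow_gateway_transit": True,
        },
    ).result()
    # Spoke side: the spoke uses VNet1's gateway for the P2S clients.
    client.virtual_network_peerings.begin_create_or_update(
        RESOURCE_GROUP, spoke, f"{spoke}-to-VNet1",
        {
            "remote_virtual_network": {"id": vnet_id("VNet1")},
            "allow_virtual_network_access": True,
            "use_remote_gateways": True,
        },
    ).result()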

Question concerning forward traffic on Azure Virtual Networks

I have a routing problem in Azure which I am struggling to solve, concerning traffic that needs to be routed from one VNet to another VNet via a third VNet and two VPN tunnels.
Here is a description of the set-up:
I have two Azure virtual networks (VNET1 and VNET2), each with its own route-based Azure VPN gateway, and one 3rd-party virtual network (VNET3) which is connected to the first Azure virtual network, VNET1, via an IPsec VPN tunnel. Below are the address spaces of all 3 virtual networks.
VNET1 10.20.0.0/16 (Azure vnet)
VNET2 10.30.0.0/16 (Azure vnet)
VNET3 10.0.0.0/12 (3rd party vnet)
Here is what I can do:
VNET1 is connected via an IPsec VPN tunnel with VNET3. Thus I am able to ping from a VM in VNET1 (10.20.10.5) a VM in VNET3 (10.0.0.1), and they can ping me back.
VNET1 is connected via an IPsec VPN tunnel with VNET2. Thus, I am able to ping from a VM in VNET1 (10.20.10.5) a VM in VNET2 (10.30.10.5).
Here is what I cannot do:
I cannot ping from a VM in VNET2 (10.30.10.5) the VM in VNET3 (10.0.0.1).
Here is what I tried to do to solve the problem without any success so far:
My assumption is that the network VNET2 does not know how to route the traffic to the network VNET3. Thus, I created an Azure route table, assigned it to the subnet 10.30.10.0/24, and created a rule that all traffic to the network 10.0.0.0/12 should be routed to the VPN gateway of VNET2. My expectation was that once the traffic reached the gateway it would reach VNET1, which knows how to route it to VNET3. This didn't work.
Although I think it is not needed, since VNET1 already knows how to route the traffic to VNET3, I have also created a route table for 10.0.0.0/12 similar to the one above. This didn't help either.
Am I missing a route somewhere? If so, which rule and where? Or do I even need to have a VM acting as a router? (I hope not.)
I think your issue is this limitation of the Azure VPN gateway:
The on-premises networks connecting through policy-based VPN devices with this mechanism can only connect to the Azure virtual network; they cannot transit to other on-premises networks or virtual networks via the same Azure VPN gateway.
https://learn.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-connect-multiple-policybased-rm-ps
So, even if you use the same VPN gateway to connect to VNET3 and VNET2, by design VNET3 and VNET2 cannot communicate.
To resolve this issue, I recommend using peering. Your configuration is similar to a classic hub-spoke topology: your VNET1 is the hub, VNET2 is a spoke, and VNET3 is effectively "on-prem".
No changes are needed to the configuration between VNET1 and VNET3. You need to establish peering between VNET1 and VNET2 (in both directions) and apply the following configuration:
Configure the peering connection in the hub to allow gateway transit.
Configure the peering connection in each spoke to use remote gateways.
Configure all peering connections to allow forwarded traffic.
https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/hybrid-networking/hub-spoke
In this case, VNET3 will be able to communicate with the hub (VNET1) and all spokes (VNET2 and any others connected to VNET1). VNET2 can communicate with the hub (VNET1) and on-prem (VNET3) when the tunnel is up.
Warning: spokes are not able to communicate with each other without a forwarding appliance in the hub, i.e. if you add VNET4 with peering to and from VNET1, VNET4 will not be able to ping VMs in VNET2. But the spokes can communicate with the hub and on-prem without any additional appliances.
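Once the peerings are configured with those options, you can verify from the spoke side that the on-prem prefix was actually learned; here is a rough check with the Python azure-mgmt-network SDK, where the subscription ID, resource group and NIC name are placeholders for a VM in VNET2:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Dump the effective routes of a VM NIC in VNET2 (the spoke). With gateway
# transit in place you would expect a route for 10.0.0.0/12 whose next hop
# type is VirtualNetworkGateway, learned via VNET1's VPN gateway.
routes = client.network_interfaces.begin_get_effective_route_table(
    "rg-spoke", "vm-vnet2-nic"
).result()
for route in routes.value:
    print(route.address_prefix, route.next_hop_type, route.next_hop_ip_address)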

Connect to an on-premise network from an external computer via an Azure VPN Gateway

My goal is to connect from an external computer to both an Azure virtual network and a small on-premise network via an Azure VPN gateway:
The Azure virtual network has the address range 10.1.0.0/16.
The on-premise network has the address range 10.2.0.0/16.
So far, I have done the following:
Set up a virtual gateway on the virtual network.
The virtual gateway is configured as a point-to-site VPN gateway.
The virtual gateway is connected to the on-premise network via a site-to-site connection.
So the topology looks like this:
VPN-client =p2s=> Azure =s2s=> On-premise
I can now dial in via VPN, but I can only ping addresses within the virtual network. On-premise addresses are not reachable.
I have also added the line
ADD 10.2.0.0 MASK 255.255.0.0 default METRIC default IF default
to the routes.txt file on the VPN client, but it's still not working.
It is not possible to achieve this.
Why?
First, an Azure VNet provides logical isolation and segmentation. Each virtual network is isolated from other virtual networks.
When you connect to the VNet via P2S VPN, your client can communicate with resources in the VNet, but the VNet cannot direct the traffic out of the VNet.
When you connect to the VNet via S2S VPN, your site can communicate with the resources in the VNet, but the VNet cannot direct the traffic out of the VNet.
This is because they use different gateways and different CIDR ranges, and an Azure VNet cannot route the inbound traffic to one specific outbound gateway.
For example:
VNetA <peering or VPN gateway> VNetB <peering or VPN gateway> VNetC
But VNetA cannot communicate with VNetC.
This is important for Azure VNets to achieve isolation and segmentation.
