Two VMs connected through VNet-to-VNet not pinging each other - Azure

Once again, I have tried to create a VNet-to-VNet connection.
Briefly, I created:
a Gateway Subnet in the East US region,
a Gateway Subnet in the West US region,
a Virtual Network Gateway for the East US region, and
a Virtual Network Gateway for the West US region.
Using the connection type VNet-to-VNet, I connected the two virtual network gateways to each other, creating a connection from each side.
The status of both connections shows Connected.
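In Az PowerShell terms, the connection side of that setup is roughly equivalent to the sketch below (the gateway names, resource groups, and shared key are placeholders, not the actual values):

    # Look up the two virtual network gateways and connect them in both directions
    $gwEast = Get-AzVirtualNetworkGateway -Name "gw-east" -ResourceGroupName "rg-east"
    $gwWest = Get-AzVirtualNetworkGateway -Name "gw-west" -ResourceGroupName "rg-west"
    New-AzVirtualNetworkGatewayConnection -Name "east-to-west" -ResourceGroupName "rg-east" -Location "eastus" `
        -VirtualNetworkGateway1 $gwEast -VirtualNetworkGateway2 $gwWest -ConnectionType Vnet2Vnet -SharedKey "Abc123"
    New-AzVirtualNetworkGatewayConnection -Name "west-to-east" -ResourceGroupName "rg-west" -Location "westus" `
        -VirtualNetworkGateway1 $gwWest -VirtualNetworkGateway2 $gwEast -ConnectionType Vnet2Vnet -SharedKey "Abc123"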
A Windows Server Domain Controller is set up in East US, and a Windows 10 VM is installed in West US.
The Windows 10 VM is unable to ping or join the Windows Server Domain Controller.
While joining the Domain Controller, the error message is:
The issue is:
I am able to connect to both VMs, which are in two different VNets, using RDP over their public IPs.
Both VMs' virtual network gateways are also connected to each other through Connections.
I am able to connect from one VM to the other using RDP over the private IP.
But I am not able to join the Windows 10 VM to the Windows Server 2016 Domain Controller.
Please go through the link https://1drv.ms/u/s!Ail_S1qZOKPmlgBU5fLviInoisrx?e=ImrqpL and help me fix the issue with the VNet-to-VNet connection, so that the Windows 10 VM in one VNet can join the Windows Server 2016 Domain Controller VM in the other VNet.
I hope you'll consider it positively.
Regards
TekQ

You might have to create routes; you are not using the recommended private address space, so routes are not created for you.
Azure automatically creates default routes for the following address prefixes:
10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16: reserved for private use in RFC 1918.
100.64.0.0/10: reserved in RFC 6598.
Check the effective routes to see the next hop for traffic destined for the peer VNet's address space.
https://learn.microsoft.com/en-us/azure/virtual-network/diagnose-network-routing-problem
Additional Information on VNet Routing
https://learn.microsoft.com/en-us/azure/virtual-network/virtual-networks-udr-overview
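A minimal Az PowerShell sketch of both checks (the NIC, resource group, and the 198.51.100.0/24 prefix are placeholders for whatever names and ranges you actually use):

    # Inspect the effective routes on the VM's NIC to see the next hop for the remote VNet's prefix
    Get-AzEffectiveRouteTable -NetworkInterfaceName "vm-east-nic" -ResourceGroupName "rg-east" | Format-Table

    # If the remote prefix has no route, add a user-defined route that sends it to the VPN gateway
    $rt = New-AzRouteTable -Name "rt-east" -ResourceGroupName "rg-east" -Location "eastus"
    Add-AzRouteConfig -RouteTable $rt -Name "to-west-vnet" -AddressPrefix "198.51.100.0/24" `
        -NextHopType VirtualNetworkGateway | Set-AzRouteTable

The route table then still has to be associated with the VM's subnet for the route to take effect.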

Instead of relying on a VNet gateway and a site-to-site VPN, you could also use VNet peering between regions.
https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-peering-overview
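For example, a hedged Az PowerShell sketch of peering the two regional VNets (VNet and resource group names are assumptions):

    # Peer the two VNets in both directions; global VNet peering works across regions
    $east = Get-AzVirtualNetwork -Name "vnet-east" -ResourceGroupName "rg-east"
    $west = Get-AzVirtualNetwork -Name "vnet-west" -ResourceGroupName "rg-west"
    Add-AzVirtualNetworkPeering -Name "east-to-west" -VirtualNetwork $east -RemoteVirtualNetworkId $west.Id
    Add-AzVirtualNetworkPeering -Name "west-to-east" -VirtualNetwork $west -RemoteVirtualNetworkId $east.Id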

I agree with the other answers. Global VNet Peering would remove the necessity of using a VPN GW, which greatly simplifies the environment and removes the monthly cost of hosting a pair of GWs. If you need those GWs for other connections to on-premises VPN devices, you can still use this design.
As Hannel pointed out, you're using public ranges for your private networks. That is also okay, but routing would be affected for VMs in those subnets if they attempted to go to actual public IPs in those ranges. Note that Hewlett Packard owns large parts of those ranges, so if your VM needed to get info from an HP website, you would have to create manual UDRs to route that traffic to Next Hop Internet.
So, please do check your Effective Routes on your NICs. You can check this from the NIC and also from Network Watcher. This should help you identify if another route is taking precedence or even if you have a route sending traffic to a virtual appliance.
Do make sure that you chose VNet-to-VNet when you set up your connection. If you chose IPSec, then you would need to have correctly configured your local network gateways.
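If you do end up needing one of those manual UDRs, a rough Az PowerShell sketch looks like this (the route table, VNet, subnet name, and the 15.0.0.0/8 prefix are illustrative assumptions):

    # Route a specific public prefix out to the Internet instead of treating it as internal address space
    $rt = Get-AzRouteTable -Name "rt-east" -ResourceGroupName "rg-east"
    Add-AzRouteConfig -RouteTable $rt -Name "hp-to-internet" -AddressPrefix "15.0.0.0/8" -NextHopType Internet | Set-AzRouteTable

    # Associate the route table with the VM subnet so the route takes effect
    $vnet = Get-AzVirtualNetwork -Name "vnet-east" -ResourceGroupName "rg-east"
    Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "default" -AddressPrefix $vnet.Subnets[0].AddressPrefix `
        -RouteTable $rt | Set-AzVirtualNetwork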

Related

How do I know that a Virtual Machine in Azure uses the Local network gateway route to connect to an on-premises network?

Here's a data engineer who needs your help to set up a connection to an on-premises environment :)!
I have created a virtual network (10.0.0.0/16) with a default subnet (10.0.0.0/24).
Then I created a (Windows) virtual machine which is connected to the VNet/subnet and has ICMP inbound and outbound rules allowed for the ping test. Pinging google.com is no problem.
The next step was to create a Virtual network gateway & Local network gateway to connect to an on-premises environment.
The Local network gateway has a Site-to-site (IPsec) connection to a VPN device from a third party (over which I have no control). Status in the Azure portal = 'Connected'.
The third party is able to ping the Virtual Machine in Azure, and the 'data in' property on the VPN connection shows that 2 KB (ping) has been received. So that works!
When I try to send a ping to an IP address within the 'address space' specified in the Local network gateway, the ping fails (Request timed out.).
After a lot of searching on Google/Stack Overflow I found out that I need to configure a Route Table in Azure because of the BGP = disabled setting. So hopefully I did a good job configuring the Routing Table Routes, but I still can't perform a successful ping :(!
Do you guys/girls know which step/configuration I have forgotten or where I made a mistake?
I would like to understand why I cannot perform a successful ping to the on-premises environment. If you need more information, please let me know.
Site-to-site (IPsec) connection screenshot/config
Routing Table setup screenshot/config
Routing Table Routes in more detail
If you are NOT using BGP between the Azure VPN gateway and this particular network, you must provide a list of valid address prefixes for the Address space in your local network gateway. The address prefixes you specify are the prefixes located on your on-premises network.
In this case, it looks like you have added the address prefixes. Make sure that the ranges you specify here do not overlap with ranges of other networks that you want to connect to. Azure will route the address range that you specify to the on-premises VPN device IP address; no other operations are needed. You don't need to set a UDR, and in particular you should not associate a route table with the Gateway Subnet. Also, avoid associating a network security group (NSG) with the Gateway Subnet. You can check the routes by selecting Effective routes for a network interface on the Azure VM. Read more details here.
If you would like to verify the connection from the Azure VNet to the on-premises network, make sure that you ping a real private IP address on your on-premises network (that is, an IP address actually assigned to an on-premises machine); you can check the address with ipconfig /all in a local CMD window. Moreover, you can enable ICMP through the Windows firewall inside the Azure VM with the PowerShell command New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4. Or, instead of using ping, you can use the PowerShell command Test-NetConnection to test a connection to a remote host, as in the sketch below.
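For example, inside the Azure VM (the 10.10.0.5 address below is just a stand-in for an on-premises host):

    # Allow inbound ICMPv4 echo requests through the Windows firewall on the Azure VM
    New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4 -IcmpType 8 -Direction Inbound -Action Allow

    # Test reachability of an on-premises host, optionally against a specific TCP port
    Test-NetConnection -ComputerName 10.10.0.5
    Test-NetConnection -ComputerName 10.10.0.5 -Port 3389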
If the problem persists, you could try to reset the Azure VPN gateway and reset the tunnel from the on-premises VPN device. To go further, you could follow these steps to identify the cause of the problem.
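A gateway reset can be done from the portal, or with a short Az PowerShell sketch like this (gateway and resource group names are assumptions):

    # Reset the Azure VPN gateway; the tunnel should also be reset on the on-premises VPN device
    $gw = Get-AzVirtualNetworkGateway -Name "vpn-gateway" -ResourceGroupName "rg-network"
    Reset-AzVirtualNetworkGateway -VirtualNetworkGateway $gw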

Having on-prem IPs point to Azure VMs

I have a case where I want to migrate on-prem servers to Azure, but the local IPs should still point to these VMs. By local IPs I mean the country's IP range, since these VMs must be accessed using in-country IPs for regulatory reasons.
I heard that this is possible, but I have no idea what type of resources I should use to allow this: VNet, VPN, ExpressRoute? And how do I do it, as I have no experience in networking whatsoever?
Regards,
NAT is a method of remapping one IP address space into another by modifying network address information in Internet Protocol (IP) datagram packet headers while they are in transit across a traffic routing device.
You can set up a site-to-site VPN between on-premises and the Azure VNet, then deploy a server on-premises that runs as the NAT device.
It is possible, but with some complications and constraints:
You can run these servers/VMs in Azure using their public IP addresses. You need to create the Virtual Network using these address ranges, but it is possible. The catch here is that these public IP addresses are only accessible via cross-premises connectivity solutions such as Azure VPN gateway or Azure ExpressRoute. You cannot access these VMs using their "public" IP addresses directly over the Internet. For this purpose, these public IP address ranges are really treated as "private addresses".
Once you create the virtual network with the public IP addresses (as private address space) in Azure, you will also need to make sure routing in your on-premises network is configured correctly to forward the traffic to these VMs over the VPN tunnels, or over the MPLS/WAN network if you are using ExpressRoute.
If these servers/VMs need to accept requests directly from the Internet, the traffic from the Internet will still come to your on-premises network, because that's where your ISPs will direct it. You will need to ensure that this traffic is routed correctly over the cross-premises connectivity (VPN/ExpressRoute) to Azure.
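As a rough illustration of the first step, here is a hedged Az PowerShell sketch that creates a VNet whose address space is the public block you own (203.0.113.0/24 is only a documentation range standing in for your real block, and the names are placeholders):

    # Use the public block as the VNet's (internally "private") address space
    $subnet = New-AzVirtualNetworkSubnetConfig -Name "servers" -AddressPrefix "203.0.113.0/26"
    New-AzVirtualNetwork -Name "vnet-public-range" -ResourceGroupName "rg-migrate" -Location "westeurope" `
        -AddressPrefix "203.0.113.0/24" -Subnet $subnet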
Hope this helps a bit. Please let me know if this answers your question.
Thanks,
Yushun [MSFT]

Getting a block (subnet) of public IPs from Microsoft

Does anyone know if it's possible to have my corporate Azure account assigned a block (e.g., a subnet) of Azure public IPs within a region, to make it easier to create firewall rules for my corporate firewall, which blocks most outgoing ports?
Our customer does not want anyone inside the corporate .com network to have outbound access to ports 22 and 3389 anywhere on the internet, but will allow it to a specific subnet if we can be assigned one, on which we would place our bastion servers.
I don't know about blocks of IPs, but you can certainly create a virtual network in which you create all your resources in Azure, and then configure a firewall in Azure, which will have a permanent IP. This can then be used to set up a site-to-site VPN between your corporate network and the machines in Azure.
https://azure.microsoft.com/en-gb/services/virtual-network/
For public-facing ports, you can add another virtual network card and rest assured that traffic on one card cannot, in any way, pass over to the other, network-connected card.
This would also be a better strategy than setting up a range of VMs in Azure with public IPs.
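To illustrate the "permanent IP" part, a hedged Az PowerShell sketch of reserving a static public IP that the corporate firewall could whitelist (names and region are assumptions):

    # Reserve a static public IP for the Azure-side firewall/VPN endpoint so its address never changes
    New-AzPublicIpAddress -Name "fw-pip" -ResourceGroupName "rg-network" -Location "eastus" `
        -AllocationMethod Static -Sku Standard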

Azure Networking: Traffic through VPN to Virtual Machine dropped

We are attempting to move our domain controller to the cloud to facilitate a distributed network. The crux of the problem we're having is that I am unable to send network traffic through the VPN to the VNet and VM domain controller I've created there.
The setup is as follows: (Main Office) SonicWALL NSA 220. (Branch Office 1) SonicWALL TZ105. (Branch Office 2) SonicWALL TZ105. (Azure) VNet with Site-to-Site networking enabled, VM residing within a subnet within the VNet. I've manually configured the VNet gateway to create the VPN connections to all three locations and have confirmed that the VPNs are live and operational and appear to be functioning correctly.
The VNet was created with a "dynamic" routing gateway, per SonicWALL documentation. The SonicWALLs are configured with "tunneled" VPNs and static routes created from each office to the VM subnet. I have not created any outgoing NAT translation rules because I am operating under the assumption that the VNet gateway performs that function. I've enabled incoming translation rules.
I've created the Windows 2012 R2 virtual machine and configured it as a domain controller. I disabled Windows Firewall (by turning it off in the Control Panel) and intend to install McAfee SaaS (but will not do so until I have everything working as intended). As of right now, the virtual machine can ping hosts on all three office networks (main office, branch office 1, and branch office 2); however, the VM cannot be pinged from outside the subnet in Azure.
The Azure configuration looks like this:
Address Space: 192.168.0.0/21
Subnet 1: 192.168.1.0/16
Gateway: 192.168.0.0/29
Local Network 1: 192.168.10.0/16
Local Network 2: 192.168.11.0/16
Local Network 3: 192.168.12.0/16
Routing configuration is as follows:
Source: [Local Subnet]
Destination: [Azure Subnet 1]
Type: All
Interface: VPN Tunnel
The Virtual Machine resides on Subnet 1 with a static IP address (e.g., 192.168.1.4) configured through Windows Azure PowerShell.
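(For reference, with today's Az PowerShell module the same static private IP assignment would look roughly like the sketch below; the NIC and resource group names are assumptions, since the original setup used the classic Azure PowerShell cmdlets.)

    # Pin the NIC's first IP configuration to a static private address
    $nic = Get-AzNetworkInterface -Name "dc-nic" -ResourceGroupName "rg-azure"
    $nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
    $nic.IpConfigurations[0].PrivateIpAddress = "192.168.1.4"
    Set-AzNetworkInterface -NetworkInterface $nic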
Ping from the VM to our local networks works fine. Ping from our local networks to the VNet/VM does not work.
I have a feeling that the problem lies in NAT translation. I looked but was completely unable to find any documentation, discussion, information, or resources addressing how the Azure VNet gateway translates incoming and outgoing traffic. I've tried adding translation rules for incoming traffic from Azure to our local network to no avail.
Any ideas? I am not very familiar with network troubleshooting tools so if a response asks for creation of a log or use of any such tools please provide some detail as to how to do it.
Thanks,
Adam
With further troubleshooting I was able to solve the issue, and I can now ping all systems from Azure to the local network and from the local network to Azure. The problem was a default NAT rule on the SonicWALL that used our public IP address for all traffic originating inside our corporate network unless a more specific rule applied.
To solve the problem I added the following NAT rule:
Source:
Original-Local Subnet
Translated-Original
Destination:
Original-Azure Subnet
Translated-Original
Service:
Original-Any
Translated-Original
Interface:
Inbound-Any
Outbound-Any
This rule corrected the scenario we were experiencing, where our firewall was translating the source of all traffic sent to Azure to our public IP address, which, obviously, created the problem.

Does Azure Point-to-Site or Site-to-Site VPN support cloud services?

I haven't dug in completely, but I'm trying to figure out if the new Azure VPN offerings are just for your own VMs or if they will allow cloud services to connect to your corporate network. For example, can I use it to have my worker role print to a network printer on my corporate network?
As long as your cloud service is part of a virtual network, it will have an IP address of the VPN subnet assigned to it, and all addresses are accessible (subject to your own networking configuration). Two things to be careful of:
The VPN IP addresses of the individual instances are subject to change. Every time a role recycles, or you redeploy, the instance IP address will change. This may be a problem if your security requires specific IP addresses. This can be helped by maintaining these IP addresses in your own DNS.
The cloud service load balancer is 'external' and cannot be placed on the virtual network. This means that your cloud service is not addressable as a single endpoint. You have to communicate with each individual role and load balance yourself. Similarly, outgoing data comes from individual roles, not the cloud service (see 1 above).
I haven't tried personally, but you should be able to do just that by joining your cloud service to a virtual network. See this article for details on how to do this: http://convective.wordpress.com/2012/08/26/windows-azure-cloud-services-and-virtual-networks/.
