Simple internal communication between Azure VMs on the same Virtual Network

I have created 2 VMs in the same Virtual Network within the same Cloud Service. They do not have public endpoints. I would like the VMs to be able to recognize each other as if they were on a local network. For example, I'd like to be able to reference them by machine name using \\ syntax, e.g. on VM1 I'd like to be able to access \\VM2_host_name\shared_folder. Can someone please provide the steps to configure my VMs to enable this scenario?
Notes: I tried referencing them by their internal IP addresses, and also enabled ICMP traffic in the Windows Firewall. I even turned the firewalls off entirely on both machines just to test. No luck. I can't ping either machine from the other by host name or IP address, even with the firewalls off. I have also reviewed similar-sounding questions such as (Azure VMs Virtual Network inter-communication), but to no avail.
More Information:
From VM_A (internal IP 10.0.0.5), I'm trying to communicate with VM_B (internal IP 10.0.0.4). Both VMs belong to the same cloud service, "MyCloudServiceName". For this test, I also turned off their firewalls just to reduce the variables at play.
C:\Users\Matt>NSLookup VM_B
Server: UnKnown
Address: 168.XX.XXX.XX
Non-authoritative answer:
Name: VM_B.MyCloudServiceName.hX.internal.cloudapp.net
Address: 10.0.0.4
C:\Users\Matt>ping VM_B
Pinging VM_B.MyCloudServiceName.hX.internal.cloudapp.net [10.0.0.4] with 32 bytes of data:
Reply from 10.0.0.5: Destination host unreachable.
Reply from 10.0.0.5: Destination host unreachable.
Reply from 10.0.0.5: Destination host unreachable.
Reply from 10.0.0.5: Destination host unreachable.
Ping statistics for 10.0.0.4:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss)
So as far as I can tell, DNS resolution is working, but the machines are still isolated from each other even within the same cloud service.
Note that my actual scenario is a self-hosted ASP.NET Web API running as a service on one machine, which I'd like to access internally from the other machine in the same cloud service.

We had a similar issue, and it turned out to be caused by Checksum Offload on the network adapters.
Thanks to Microsoft Azure Support for helping us diagnose the issue.
The simple fix is to run this on every machine and then do a quick reboot:
Disable-NetAdapterChecksumOffload * -TcpIPv4
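If you want to confirm the change actually took effect, something along these lines should work (run it before the fix and again after the reboot):
# Show the current checksum offload settings for every adapter; after the
# fix and a reboot, the TCP IPv4 offload column should report Disabled
Get-NetAdapterChecksumOffload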
Here is an article describing the problem in more detail:
http://systemscentre.blogspot.com.au/2013/05/problems-clustering-virtual-machines-on.html

Ping may not work, as ICMP is often disabled in network environments. So I would suggest an nslookup to verify that each machine is able to resolve the name of the other server.
If both VMs are already in the same cloud service, a virtual network should not be necessary, as Azure provides basic DNS resolution within the cloud service boundary. By running nslookup -all on each server, you should be able to identify the names they are currently using.
Once you've verified that they can resolve each other, you shouldn't have any other issues getting them to address each other, provided you're not using an unsupported protocol (such as UDP multicast).
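If ICMP is blocked, testing the actual service port is more telling than ping. A sketch on a newer Windows build (VM_B and port 8080 are placeholders for the self-hosted Web API scenario):
# Confirm the Azure-provided DNS resolves the other VM's host name
Resolve-DnsName VM_B
# Try a TCP connection to the port the service actually listens on
Test-NetConnection -ComputerName VM_B -Port 8080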

Related

Accessing Service Running on Azure Windows Machine on Specific Port

I have an Azure Windows virtual machine where I have enabled an inbound rule for port 8080 in the Network Security Group. However, when I check connectivity from my Windows machine to the Azure VM, it fails. I used the command below.
>telnet <public_ip_address_of_the_vm> 8080
Connecting To XX.XXX.XXX.XXX...Could not open connection to the host, on port 8080: Connect failed
Note: the VM has a public IP address. How can I troubleshoot this further?
The first thing to do is ensure the VM is running. Then look at the effective security rules for the NIC in question.
If the VM has multiple NICs, you need to look at the effective rules for each NIC (they can be different).
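With the Az PowerShell module, the effective (merged NIC + subnet) rules can be pulled per NIC roughly like this (resource names are placeholders):
# Effective NSG rules applied to one network interface
Get-AzEffectiveNetworkSecurityGroup -NetworkInterfaceName "myVmNic01" -ResourceGroupName "myResourceGroup"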
To run a quick test to determine if traffic is allowed to or from a VM, use the IP flow verify capability of Azure Network Watcher. IP flow verify tells you if traffic is allowed or denied. If denied, IP flow verify tells you which security rule is denying the traffic.
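As a rough PowerShell sketch of the same check (the VM name, resource group, and test addresses/ports below are placeholders):
# Ask Network Watcher whether an inbound TCP flow to the VM would be allowed or denied
$vm = Get-AzVM -Name "myVm" -ResourceGroupName "myResourceGroup"
$watcher = Get-AzNetworkWatcher -Location $vm.Location
Test-AzNetworkWatcherIPFlow -NetworkWatcher $watcher -TargetVirtualMachineId $vm.Id `
    -Direction Inbound -Protocol TCP `
    -LocalIPAddress "10.0.0.4" -LocalPort "8080" `
    -RemoteIPAddress "203.0.113.10" -RemotePort "60000"
# The result includes the name of the security rule that allowed or denied the flow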
If there are no security rules causing a VM's network connectivity to fail, the problem may be due to:
Firewall software running within the VM's operating system
Routes configured for virtual appliances or on-premises traffic. Internet traffic can be redirected to your on-premises network via forced-tunneling. If you force tunnel internet traffic to a virtual appliance, or on-premises, you may not be able to connect to the VM from the internet. To learn how to diagnose route problems that may impede the flow of traffic out of the VM, see Diagnose a virtual machine network traffic routing problem.
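To see which routes actually apply to the VM's NIC (and to spot a forced-tunnel 0.0.0.0/0 route), the effective route table can be inspected; a sketch with placeholder resource names:
# Effective routes on the NIC, including any user-defined or BGP routes
Get-AzEffectiveRouteTable -NetworkInterfaceName "myVmNic01" -ResourceGroupName "myResourceGroup"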
Full Troubleshooting Docs with step-by-step instructions.

Azure ssh connection from home

I want to connect to a VM in the Azure cloud from home, i.e. without a fixed IP. I have added two security rules, for the network interface and the NSG respectively, to accept inbound connections on SSH port 22 from the IPv4 address given by showip.net. This doesn't work and I get a connection timeout; I tried the IPv6 address as well. If I do the very same thing for another server (outside Azure), the same procedure works. The native IP addresses of both my home computer and the alternative virtual machine I use are IPv6.
So the question is: does my connection from home fail because some sort of reverse lookup is failing, or what else could be the cause?
Thanks!
It sounds like the issue is most likely some odd NATing by your ISP; especially when IPv6 comes into play, it can be hard to find the actual external IP address that your requests come from. You can try different sites like whatsmyip.com to see if you find another address that you can add.
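As one quick check along those lines, you can ask an echo service which public IPv4 address your requests actually leave from; api.ipify.org is just one example of such a service:
# Shows the public IPv4 address this machine appears as on the internet
Invoke-RestMethod -Uri "https://api.ipify.org"
# If it differs from the address in your NSG/NIC rules, update the rules' source prefix accordingly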
Apart from that, there are various things you could try:
Use SSH from the Azure Cloud Shell (https://shell.azure.com)
Use Azure Bastion to have a jump host in the same VNET
Use a point-to-site VPN from your PC into your VNET

Azure Point to Site port 445

I've set up Azure Point-to-Site and I'm able to connect from my computer to an Azure VM (file share). I'm also able to ping my computer's IP address from the Azure VM. However, I'm not able to connect to any resource on my local computer. When trying to access a file share on my computer from the Azure VM, I get the following error:
file and print sharing resource (169.254.108.240) is online but isn't responding to connection attempts.
The remote computer isn’t responding to connections on port 445, possibly due to firewall or security policy settings, or because it might be temporarily unavailable. Windows couldn’t find any problems with the firewall on your computer.
Port 445 is enabled on my local computer:
netsh firewall set portopening TCP 445 ENABLE
As an additional test, if I open \\169.254.108.240 from my local computer (pointing at itself), it works fine. The same attempt from the Azure VM gives me the error above.
Thanks,
Your IP address (169.254.*) is a link-local (APIPA) address, which is not routable. You'll need to get a valid IP (say via DHCP, or set one manually) and allow connections to your machine. If you have a firewall, this means adding a NAT rule to it.
If possible, try making the connection from another computer on your LAN to isolate any other firewall/Azure issues.
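A quick way to see whether the machine only has a link-local address (a sketch; interface names vary per machine):
# List IPv4 addresses; anything in 169.254.0.0/16 is an APIPA/link-local address
Get-NetIPAddress -AddressFamily IPv4 | Where-Object { $_.IPAddress -like "169.254.*" } | Select-Object InterfaceAlias, IPAddress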
I think you have to consider several concepts while implementing an Azure network. First, try to put the point-to-site network on a different IP range (like 10.4.0.0), then try disabling the firewall on your computer and try again. If you have a proper routing device, it should go through and get a response from the local machine.

Azure Web Role can't see VM's internal IP (but VM can see web role)

I have a web role (WR) and a virtual machine (VM) hosted on Azure, both are within the same Virtual Network (VNet), and on the same subnet.
If I look at the Azure portal and go to the VNet page, the dashboard shows both my VM and my WR are on the network with internal IP addresses, as I expect:
VM: 10.0.0.4
WR: 10.0.0.5
I can Remote Desktop to both machines. From the VM, I can ping 10.0.0.5 and get a response; from the WR, if I ping 10.0.0.4, all I ever get is a timeout.
I've been following the instructions from http://michaelwasham.com/2012/08/06/connecting-web-or-worker-roles-to-a-simple-virtual-network-in-windows-azure/, and there is no mention of any additional settings I need to make on either machine - but is there something I'm missing?
Do I need to open up the VM to be contactable?
Extra information:
At the moment, the VM has HTTP and HTTPS endpoints available publicly, but I aim to turn those off and only use the WR for that (hence wanting to connect using the internal IP).
I don't want to use the public IP unless there is absolutely no way around it, and from what I've read that doesn't seem to be the case.
For completeness, moving my comment to an answer: while the virtual network is allowing traffic in both directions, you'll need to allow ICMP through the firewall, which will then let your pings work properly.
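For reference, allowing inbound ICMP echo on the VM can be done either by enabling the built-in "File and Printer Sharing (Echo Request - ICMPv4-In)" rule or with something like the following (the rule name here is arbitrary):
# Allow inbound ICMPv4 echo requests so pings from the web role get a reply
New-NetFirewallRule -DisplayName "Allow ICMPv4 Echo In" -Protocol ICMPv4 -IcmpType 8 -Direction Inbound -Action Allow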

Azure VMs Virtual Network inter-communication

I'm new to Azure (strike 1) and totally suck at networking (strike 2).
Nevertheless, I've got two VMs up and running in the same virtual network; one will act as a web server and the other will act as a SQL database server.
While I can see that their internal IP addresses are both in the same network, I'm unable to verify that the machines can communicate with each other, and I'm somewhat confused about the appropriate place to address this.
Microsoft's own documentation says:
All virtual machines that you create in Windows Azure can automatically communicate using a private network channel with other virtual machines in the same cloud service or virtual network. However, you need to add an endpoint to a machine for other resources on the Internet or other virtual networks to communicate with it. You can associate specific ports and a protocol to endpoints. Resources can connect to an endpoint by using a protocol of TCP or UDP. The TCP protocol includes HTTP and HTTPS communication.
So why can't the machines at least ping each other via their internal IPs? Is Windows Firewall getting in the way? I'm starting to wonder if I've chosen the wrong approach for a simple web server/database server setup. Please forgive my ignorance. Any help would be greatly appreciated.
If both machines are in the same Virtual Network, just turn off Windows Firewall and they will be able to ping each other. The other way is to allow all incoming ICMP traffic in Windows Firewall with Advanced Security.
However, there is a catch. Both machines will see each other by IP address, but there will be no name resolution in a Virtual Network defined this way. Meaning that you won't be able to ping by name, only by direct IP address. So, if you want your website (on VM1) to connect to SQL Server (on VM2), you have to address it by its full IP address, not by machine name.
The only way to get name resolution within a Virtual Network is to use a dedicated DNS server, which you maintain and configure on-premises.
This article describes name resolution scenarios in Windows Azure in detail. Your particular case is this one:
Name resolution between virtual machines and role instances located in the same virtual network, but different cloud services
You could potentially achieve name resolution if you put your VMs in the same cloud service; then you would not even require a dedicated virtual network.
If your VMs are inside a Virtual Network in Azure, then you have to make sure of two things:
The required port is enabled.
The firewall is disabled on the server.
I was trying to connect from one VM to another VM where a SQL Server database was installed. I had to enable port 1433 on the VM where SQL Server was installed; for this you need to add an MSSQL endpoint to the VM in the Azure management portal. After that I disabled Windows Firewall, and then I was able to connect to the VM from the other one.
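Rather than turning Windows Firewall off entirely, a narrower alternative (a sketch; the rule name is arbitrary) is to confirm SQL Server is listening and open just that port:
# On the SQL VM: confirm something is listening on the default SQL Server port
Get-NetTCPConnection -LocalPort 1433 -State Listen
# Allow inbound SQL Server traffic on 1433 only, instead of disabling the firewall
New-NetFirewallRule -DisplayName "Allow SQL Server 1433" -Protocol TCP -LocalPort 1433 -Direction Inbound -Action Allow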
