Azure ASG internal connectivity

I created an application security group and assigned it to two VMs (there is a lot more in that resource group), but my question is: when I RDP into one of the VMs, I cannot ping the other VM and/or reach a website hosted on the other VM. Also, because of an NSG rule, I am able to reach that website from my local machine.
I thought using ASGs meant I don't have to do anything else for connected VMs to talk to each other? Also of note: if I open up the ASG to everything in the NSG, I am able to ping and reach the site from the other VM. What am I missing?
Both VMs are in the same VNet and subnet. Screenshot of the NIC of one of the VMs below:

when I RDP into one of the VMs, I cannot ping the other VM and/or
reach a website hosted on the other VM. Also, because of an NSG rule,
I am able to reach that website from my local machine.
You should be able to connect to the other VM from within the first VM, because VMs in the same virtual network can communicate with each other over any port by default. This means you can reach the other VM using its private IP address. Note that, by default, the firewall inside the VM may block ICMP packets. On a Windows Azure VM you can run netsh advfirewall firewall add rule name="ICMP Allow incoming V4 echo request" protocol=icmpv4:8,any dir=in action=allow to allow inbound ICMP, or temporarily turn Windows Firewall off while you test pinging between the VMs.
Check the above first. If you still cannot ping the other VM or reach the website hosted on VM2 from VM1 inside the private network, then something is probably being blocked on the NSG side. Ping is not a good way to test VM connectivity anyway; use telnet (or another TCP test) to verify whether the specific port is being blocked.
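As a rough sketch (assuming Windows VMs, PowerShell, and that the website listens on port 80; the IP address and port below are placeholders for your own values), you could allow ping on the target VM and test the web port from the other VM like this:
# Run inside the target VM: allow inbound ICMPv4 echo requests (ping) through Windows Firewall
New-NetFirewallRule -DisplayName "Allow ICMPv4 echo request" -Protocol ICMPv4 -IcmpType 8 -Direction Inbound -Action Allow
# Run from the other VM: test TCP reachability of the website instead of relying on ping
Test-NetConnection -ComputerName 10.0.0.5 -Port 80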
I thought using ASGs meant I don't have to do anything else for
connected VMs to talk to each other?
Yes, you don't have to do anything else for the connected VMs to talk to each other, because they are already in the same subnet and can communicate by default. The ASG itself is only a label; it takes effect when you reference it as a source or destination in NSG rules.
You can refer to the Application security groups documentation for more details.
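For example, if you later add restrictive NSG rules and want the two VMs to keep talking over HTTP, a rule can use the ASG as both source and destination. The sketch below uses placeholder names (myResourceGroup, myAsg, myNsg) and assumes the Az PowerShell module; exact parameter names may vary with your module version:
# Placeholder names; assumes an existing NSG and ASG in the same region
$rg  = "myResourceGroup"
$asg = Get-AzApplicationSecurityGroup -ResourceGroupName $rg -Name "myAsg"
$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName $rg -Name "myNsg"
# Allow HTTP between members of the ASG (the ASG is both source and destination)
$nsg | Add-AzNetworkSecurityRuleConfig -Name "Allow-HTTP-within-ASG" -Priority 200 -Direction Inbound -Access Allow -Protocol Tcp -SourceApplicationSecurityGroup $asg -SourcePortRange "*" -DestinationApplicationSecurityGroup $asg -DestinationPortRange 80 | Set-AzNetworkSecurityGroup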

Related

Accessing Service Running on Azure Windows Machine on Specific Port

I have an Azure Windows virtual machine where I have enabled an inbound rule for port 8080 in the Network Security Group. However, when I try to check connectivity from my Windows machine to the Azure VM, it fails. I used the command below:
>telnet <public_ip_address_of_the_vm> 8080
Connecting To XX.XXX.XXX.XXX...Could not open connection to the host, on port 8080: Connect failed
Note: the VM has a public IP address. How can I troubleshoot this further?
The first thing to do is ensure the VM is running. Then look at the Effective Security Rules for the NIC in question.
If the VM has multiple NICs, you need to look at the effective rules for each NIC (they can be different).
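If you prefer the command line over the portal, the effective rules can also be pulled with Az PowerShell; the NIC and resource group names below are placeholders:
# Placeholder names; requires the Az.Network module and a running VM
Get-AzEffectiveNetworkSecurityGroup -NetworkInterfaceName "myVmNic" -ResourceGroupName "myResourceGroup"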
To run a quick test to determine if traffic is allowed to or from a VM, use the IP flow verify capability of Azure Network Watcher. IP flow verify tells you if traffic is allowed or denied. If denied, IP flow verify tells you which security rule is denying the traffic.
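As a sketch, IP flow verify can also be run from PowerShell; the location, names, IPs, and ports below are placeholders for your environment (here checking whether inbound TCP 8080 from some remote client would be allowed):
# Placeholder values; requires a Network Watcher instance in the VM's region
$nw = Get-AzNetworkWatcher -Location "eastus"
$vm = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVm"
Test-AzNetworkWatcherIPFlow -NetworkWatcher $nw -TargetVirtualMachineId $vm.Id -Direction Inbound -Protocol TCP -LocalIPAddress "10.0.0.4" -LocalPort "8080" -RemoteIPAddress "203.0.113.10" -RemotePort "50000"
The output reports whether the flow is allowed or denied and, if denied, which rule denied it.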
If there are no security rules causing a VM's network connectivity to fail, the problem may be due to:
Firewall software running within the VM's operating system
Routes configured for virtual appliances or on-premises traffic. Internet traffic can be redirected to your on-premises network via forced-tunneling. If you force tunnel internet traffic to a virtual appliance, or on-premises, you may not be able to connect to the VM from the internet. To learn how to diagnose route problems that may impede the flow of traffic out of the VM, see Diagnose a virtual machine network traffic routing problem.
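To see which routes actually apply to the VM's NIC (for example, to spot a forced-tunneling 0.0.0.0/0 route to a virtual appliance), you can list the effective routes; the names below are placeholders:
# Placeholder names; lists the effective routes applied to the NIC of a running VM
Get-AzEffectiveRouteTable -NetworkInterfaceName "myVmNic" -ResourceGroupName "myResourceGroup"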
Full Troubleshooting Docs with step-by-step instructions.

Communicate between VMs on connected Azure Virtual Networks

I have two virtual networks (classic) in Azure, and I need to be able to SSH between VMs on these networks. I have followed the instructions here (https://azure.microsoft.com/en-us/blog/vnet-to-vnet-connecting-virtual-networks-in-azure-across-different-regions/) and successfully connected the networks. However, when I try to ping between vm1 on vnet1 and vm2 on vnet2, the request times out, so it looks like vm1 cannot see vm2. Are there any further steps I need to take to allow communication? Shouldn't they be able to see each other's private IP addresses?
That's a pretty loaded question, but I think there is a better walkthrough for you to have a look at:
Configure a VNet-to-VNet connection in the Azure Classic Portal
or
Configure a VNet-to-VNet connection for virtual networks in the same subscription by using Azure Resource Manager and PowerShell
Pick your poison... I've verified both of these work as intended if you follow the steps carefully.

Getting a block of public IP addresses (a subnet) from Microsoft

Does anyone know if it's possible for my corporate Azure account to be assigned a block (e.g., a subnet) of Azure public IPs within a region, to make it easier to create rules for my corporate firewall, which blocks most outgoing ports?
Our customer does not want anyone inside the corporate .com account to have outbound access to ports 22 and 3389 anywhere on the internet, but will allow it to a specific subnet if we can be assigned one, on which we would place our bastion servers.
I don't know about blocks of IPs, but you can certainly create a virtual network in which you create all your resources in Azure, and then configure a firewall in Azure, which will have a permanent IP. This can then be used to set up a site-to-site VPN between your corporate network and the machines in Azure.
https://azure.microsoft.com/en-gb/services/virtual-network/
For public-facing ports, you can add another virtual network card and rest assured that traffic on one card cannot in any way pass over to the other network-connected card.
This would also be a better strategy than setting up a range of VMs in Azure with public IPs.

Azure Web Role can't see VM's internal IP (but VM can see web role)

I have a web role (WR) and a virtual machine (VM) hosted on Azure, both are within the same Virtual Network (VNet), and on the same subnet.
If I look at the Azure portal and go to the VNet page, the dashboard shows both my VM and my WR are on the network with internal IP addresses, as I expect:
VM: 10.0.0.4
WR: 10.0.0.5
I can Remote Desktop to both machines. From the VM, I can ping 10.0.0.5 and get a response; from the WR, if I ping 10.0.0.4, all I ever get is a timeout.
I've been following the instructions from: http://michaelwasham.com/2012/08/06/connecting-web-or-worker-roles-to-a-simple-virtual-network-in-windows-azure/ and there is no mention of any additional settings I need to do to either machine - but is there something I'm missing?
Do I need to open up the VM to be contactable?
Extra information:
At the moment, the VM has HTTP and HTTPS endpoints available publicly, but I aim to turn those off and only use the WR for that (hence wanting to connect using the internal IP).
I don't want to use the public IP unless there is absolutely no way around it, and from what I've read that doesn't seem to be the case.
For completeness, moving my comment to an answer: while the virtual network allows traffic in both directions, you'll need to allow ICMP through the firewall on the VM, which will then let your pings work properly.
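For example, on a Windows VM you can enable the built-in echo-request rule (the display name below is the usual English name and may vary by OS version), or add an equivalent rule of your own:
# Enable the built-in inbound ICMPv4 echo request (ping) rule in Windows Firewall
Enable-NetFirewallRule -DisplayName "File and Printer Sharing (Echo Request - ICMPv4-In)"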

Azure VMs Virtual Network inter-communication

I'm new to Azure (strike 1) and totally suck at networking (strike 2).
Nevertheless, I've got two VMs up and running in the same virtual network; one will act as a web server and the other will act as a SQL database server.
While I can see that their internal IP addresses are both in the same network, I'm unable to verify that the machines can communicate with each other, and I'm somewhat confused about the appropriate place to address this.
Microsoft's own documentation says
All virtual machines that you create in Windows Azure can
automatically communicate using a private network channel with other
virtual machines in the same cloud service or virtual network.
However, you need to add an endpoint to a machine for other resources
on the Internet or other virtual networks to communicate with it. You
can associate specific ports and a protocol to endpoints. Resources
can connect to an endpoint by using a protocol of TCP or UDP. The TCP
protocol includes HTTP and HTTPS communication.
So why can't the machines at least ping each other via their internal IPs? Is it Windows Firewall getting in the way? I'm starting to wonder if I've chosen the wrong approach for a simple web server/database server setup. Please forgive my ignorance. Any help would be greatly appreciated.
If both machines are in the same Virtual Network, just turn off Windows Firewall and they will be able to ping each other. The other way is to allow all incoming ICMP traffic in Windows Firewall with Advanced Security.
However, there is a catch. The machines will see each other by IP address, but there is no name resolution in a Virtual Network defined this way. That means you won't be able to ping by name, only by direct IP address. So if you want your website (on VM1) to connect to SQL Server (on VM2), you have to address it by full IP address, not machine name.
The only way to get name resolution within a Virtual Network is to use a dedicated DNS server, which you maintain and configure yourself.
This article describes in detail the name resolution scenarios in Windows Azure. Your particular case is this:
Name resolution between virtual machines and role instances located in
the same virtual network, but different cloud services
You could potentially achieve name resolution if you put your VMs in the same cloud service; then you would not even require a dedicated virtual network.
If your VMs are inside a Virtual Network in Azure, then you have to make sure of two things:
The required port is enabled.
The firewall on the server is disabled (or allows the traffic).
I was trying to connect from one VM to another VM where a SQL Server database was installed. I had to enable port 1433 on the VM where SQL Server was installed; for this you need to add an MSSQL endpoint to the VM in the Azure management portal. After that I disabled Windows Firewall, and then I was able to connect to the VM from the other one.
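As a sketch (assuming a default SQL Server instance listening on TCP 1433, and 10.0.0.4 as a placeholder for the SQL VM's internal IP), a narrower alternative to disabling the firewall is to open just that port and then verify it from the web VM:
# On the SQL Server VM: allow inbound TCP 1433 through Windows Firewall
New-NetFirewallRule -DisplayName "Allow SQL Server 1433" -Protocol TCP -LocalPort 1433 -Direction Inbound -Action Allow
# From the web VM: verify the port is reachable over the internal IP
Test-NetConnection -ComputerName 10.0.0.4 -Port 1433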
