ICMP outbound to Internet with Azure Firewall

I have a rule in Azure Firewall that should allow ICMP to any destination:
{
  description: 'Allow Ping to Any'
  name: 'rule-allow-ping'
  ruleType: 'NetworkRule'
  destinationAddresses: [
    '*'
  ]
  destinationPorts: [
    '*'
  ]
  ipProtocols: [
    'ICMP'
  ]
  sourceIpGroups: [
    ipVMs.id
  ]
}
When I ping internally (between 2 peered VNETs), it works fine.
When I ping externally (for example 8.8.8.8), I get no reply.
Is there an additional setting to enable or is it a "normal" behavior (documented somewhere)?

https://learn.microsoft.com/nl-nl/archive/blogs/mast/use-port-pings-instead-of-icmp-to-test-azure-vm-connectivity
Because the ICMP protocol is not permitted through the Azure load balancer, you will notice that you are unable to ping an Azure VM from the internet, and from within the Azure VM, you are unable to ping internet locations.
However, if you give the VM a public IP, you can ping it from the internet, as long as you create a Network Security Group rule and an Azure Firewall rule to allow it inbound.
Even with a public IP, however, letting the VM ping the internet outbound is not possible:
Also note that while an instance-level public IP lets you communicate directly to a specific VM instead of through the cloud service VIP that can be used for multiple VMs, ICMP is not permitted in that scenario either.
And as far as I know this also applies to any VM behind Azure Firewall.
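Since the platform blocks ICMP echo out to the internet, a TCP-based "port ping" is the usual workaround. A minimal sketch from inside the VM, assuming netcat is available and the firewall also has a network rule permitting outbound TCP on port 53 (the rule above only covers ICMP):
# TCP connectivity check against 8.8.8.8 instead of ICMP ping
nc -vz 8.8.8.8 53
If this succeeds while ping still times out, outbound routing through the firewall is working and only ICMP is being dropped.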

Related

How to configure Azure ContainerApps with a Static Outbound IP?

The Ports and IP Addresses section of the Azure Container Apps documentation describes the outbound public IP as follows:
Outbound public IP
Used as the "from" IP for outbound connections that leave the virtual network. These connections aren't routed down a VPN. Using a NAT gateway or other proxy for outbound traffic from a Container App environment isn't supported. Outbound IPs aren't guaranteed and may change over time.
The inbound IP for a Container Apps environment is fixed. Azure Container Instances (not Container Apps), on the other hand, seem to have a documented capability to configure a static outbound IP via a NAT gateway.
Is there a way to configure a static outbound IP for Azure ContainerApps as well?
If not, which alternate deployment models for a long-running background service are recommended? The requirement is that an external service can count on a fixed outbound IP (or very small range, not the entire DataCenter IP ranges) for whitelisting.
EDIT: It seems that NAT on the VNet is not yet supported on ACA - https://github.com/microsoft/azure-container-apps/issues/522
Is there a way to configure a static outbound IP for Azure ContainerApps as well?
No, you can't configure the outbound public IP for Container Apps; that is stated in the official documentation itself.
Try this: create an outbound application rule on the firewall using the command below.
az network firewall application-rule create
This creates an outbound rule on the firewall. In the referenced example, the rule allows access from the subnet to Azure Container Instances, and HTTP access to the site then goes out through the configured egress IP address.
I found a blog that covers this; refer to it for details.
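For illustration, a fuller sketch of that application rule command; the resource group, firewall name, source range, and target FQDN below are placeholders rather than values from the posts, and the az network firewall commands may require the azure-firewall CLI extension:
az network firewall application-rule create \
  --resource-group rg-demo \
  --firewall-name fw-demo \
  --collection-name aca-egress \
  --name allow-external-api \
  --priority 200 \
  --action Allow \
  --protocols Https=443 \
  --source-addresses 10.0.2.0/23 \
  --target-fqdns api.example.com
In setups where a route table (0.0.0.0/0 to the firewall) is supported on the environment's subnet, outbound traffic would then present the firewall's public IP to the external service.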

Azure ASG internal connectivity

I created an application security group and assigned it to two VMs (there is a lot more in that resource group), but my question is: when I RDP into one of the VMs, I cannot ping the other VM or reach a website hosted on it. Yet, because of an NSG rule, I am able to reach that website from my local machine.
I thought using ASGs means I don't have to do anything else for connected VMs to talk to each other? Also of note: if I open up the ASG to everything in the NSG, I am able to ping and reach the site from the other VM. What am I missing?
Both VMs are in the same vnet and subnet. Screenshot of NIC of one of the VMs below:
when I RDP into one of the VMs, I cannot ping the other VM or reach a website hosted on the other VM. Plus because of an NSG, I am able to reach that website from my local machine.
You're able to connect to the other VM because VMs in the same virtual network can communicate with each other over any port by default, so you can reach the other VM using its private IP address. Note that, by default, the firewall inside the VM may block ICMP packets; on a Windows Azure VM you can allow inbound ICMP with netsh advfirewall firewall add rule name="ICMP Allow incoming V4 echo request" protocol=icmpv4:8,any dir=in action=allow, or temporarily turn Windows Firewall off while you test pinging each other.
Check the above first. If you still cannot ping, or cannot reach the website hosted on VM2 from VM1 inside the private network, then something is likely blocking on the NSG side. Ping is not a good way to test VM connectivity; you could use telnet to verify whether a specific port is blocked.
I thought using ASGs means I don't have to do anything else for connected VMs to talk to each other?
Correct, you don't have to do anything else for the connected VMs to talk to each other, as they are already in the same subnet and can communicate with each other by default.
For more details, refer to the documentation on Application security groups.
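If you do want an explicit NSG rule scoped to the ASG rather than opening the NSG wide, a sketch with the Azure CLI follows; the resource group, NSG, and ASG names are hypothetical, not taken from the question:
az network nsg rule create \
  --resource-group rg-demo \
  --nsg-name nsg-app \
  --name allow-asg-internal \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol '*' \
  --source-asgs asg-web \
  --destination-asgs asg-web \
  --destination-port-ranges '*'
This allows any traffic between NICs that are members of the asg-web application security group, without listing individual IP addresses.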

Azure SNMP problems

I want to set up a monitoring system in Azure. The monitoring system uses the SNMP protocol.
However, I have some problems.
My monitoring system's private subnet is not the same as the other hosts'.
I also tried using the public address: from my Mac, I ran snmpwalk against the Azure public IP of the VM (with port 161 allowed in the Azure firewall policy), which returned a timeout.
(e.g. snmpwalk -v2c -c xxx AzurePublicIP system)
Any suggestion that lets me use snmpwalk from VM1 to VM2 (in a different subnet)?
Many thanks!!
Any suggestion that lets me use snmpwalk from VM1 to VM2 (in a different subnet)?
If the Azure VMs are in different subnets of different virtual networks, you need to make sure the VMs can communicate with each other across the virtual networks using virtual network peering. If the Azure VMs are in different subnets of the same virtual network, they can communicate with each other by default without any further steps.
Moreover, you need to add inbound security rules to the NSG associated with the VM2 subnet to allow the SNMP ports over UDP from your monitoring subnet:
UDP 161: Used when management stations communicate with agents, e.g. Polling
UDP 162: Used when agents send unsolicited Traps to the management station
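A sketch of such an NSG rule with the Azure CLI; the resource group, NSG name, and monitoring subnet prefix are placeholders, not values from the question:
az network nsg rule create \
  --resource-group rg-demo \
  --nsg-name nsg-vm2-subnet \
  --name allow-snmp-poll \
  --priority 300 \
  --direction Inbound \
  --access Allow \
  --protocol Udp \
  --source-address-prefixes 10.1.0.0/24 \
  --destination-port-ranges 161
Once the rule is in place, the snmpwalk command from the question should work against VM2's private IP, assuming the SNMP agent on VM2 is listening and its OS firewall allows UDP 161.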
Also, check the OS firewall inside each of the Azure VMs.
Hope this helps.

Cannot access Azure VM Scaleset ip address externally

I have created a Virtual Machine Scaleset in Azure
This scaleset is made up of 5 VMs
There is a public ip
When I do a ping on my public ip I get no response, nor do I get a response with the full name, e.g.
myapp.uksouth.cloudapp.azure.com
Is there something I have missed?
I am wondering if I have to add my machine's IP somewhere?
I am trying to remote into the machines within the scaleset eventually!
This scaleset will be used for azure service fabric
Paul
If you deploy your scale set with "public IP per VM", then each VM gets its own public IP: https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-networking#public-ipv4-per-virtual-machine. However, this is not the default in the portal. In the portal, the default is to create a load balancer in front of the scale set with a single public IP on the LB (today, at least; no guarantee it will stay this way). It also comes with NAT rules configured to allow RDP/SSH on ports 50000 and above. They won't necessarily be contiguous, though (at least in the default configuration), so you will need to examine the NAT rules on the load balancer to see which ports are relevant. Once you do, you should be able to do ssh -p <port-from-nat-rule> <public-ip> to ssh in (or similar in your RDP client for Windows).
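To find which NAT port maps to which instance, a couple of CLI sketches (the scale set, load balancer, and resource group names are placeholders):
# Shows <public-ip>:<nat-port> for each instance behind the load balancer
az vmss list-instance-connection-info \
  --resource-group rg-demo \
  --name vmss-demo
# Or inspect the load balancer's inbound NAT rules directly
az network lb inbound-nat-rule list \
  --resource-group rg-demo \
  --lb-name vmss-demo-lb \
  --output table
Then ssh -p <port-from-nat-rule> <public-ip> (or point your RDP client at that port on Windows).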
When I do a ping on my public ip I get no response
Ping (ICMP) to the public IP is not supported through the Azure load balancer.
To test, connect over RDP/SSH to the public IP address on the NAT ports to verify the connection.
Did you create the VMSS from the Azure marketplace? If yes, the Azure LB will be configured automatically.
If you created the load balancer yourself, please check the LB probes, backend pools (all VMs should be in the backend pool), load balancer rules, and NAT rules.
You can also configure Log Analytics for the Azure load balancer to monitor it.

Azure Web Role can't see VM's internal IP (but VM can see web role)

I have a web role (WR) and a virtual machine (VM) hosted on Azure, both are within the same Virtual Network (VNet), and on the same subnet.
If I look at the azure portal and go to the VNet page, the dashboard shows both my VM and my WR are on the network with internal IP addresses as I expect:
VM: 10.0.0.4
WR: 10.0.0.5
I can Remote Desktop to both machines. From the VM, I can ping 10.0.0.5 and get a response; from the WR, if I ping 10.0.0.4, all I ever get is a timeout.
I've been following the instructions from: http://michaelwasham.com/2012/08/06/connecting-web-or-worker-roles-to-a-simple-virtual-network-in-windows-azure/ and there is no mention of any additional settings I need to do to either machine - but is there something I'm missing?
Do I need to open up the VM to be contactable?
Extra information:
At the moment, the VM has an Http and Https end point available publicly, but I aim to turn those off and only use the WR for that (hence wanting to connect using the internal IP).
I don't want to use the public IP unless there is absolutely no way around it, and from what I've read that doesn't seem to be the case.
For completeness, moving my comment to an answer: while the virtual network allows traffic in both directions, you'll need to enable ICMP via the Windows firewall on the VM, which will then let your pings work properly.
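On a Windows VM, the inbound echo rule can be added with the same netsh command quoted in the ASG answer above (run in an elevated prompt on the machine you want to ping):
netsh advfirewall firewall add rule name="ICMP Allow incoming V4 echo request" protocol=icmpv4:8,any dir=in action=allow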
