We have a Geode locator and server running on a machine that is behind a NAT firewall. I have replaced the firewall’s IP address with A.B.C.D and the internal IP address of the machine running the Geode locator with W.X.Y.Z.
Previously, the machine running Geode had only Windows Defender Firewall enabled, with an inbound rule allowing traffic to ports 10334, 1099, and 40404 from the remote IP addresses we whitelisted. This setup allowed us to connect to the Geode locator from those whitelisted remote IP addresses.
However, once we placed the same machine behind the NAT firewall and configured the same rule we had set up under Windows Firewall, we could no longer connect to the locator from the whitelisted remote IP addresses. The whitelisted IP addresses belong to machines outside the firewall.
For example, when we tried connecting to the locator through gfsh, it gave us a Java connection exception, shown below in Figure 1. It appears gfsh was able to connect to the locator on port 10334 but failed to reach the JMX manager on port 1099 at the internal IP address of the machine running the Geode locator.
On the second try, we specified the firewall's IP address for the JMX manager instead, but got a slightly different connection exception, also shown in Figure 1.
We also ran two Wireshark captures from the whitelisted IP address on port 1099: one against the Geode locator behind Windows Defender Firewall only (Figure 2), and the other against the locator behind the NAT firewall, A.B.C.D (Figure 3). We noticed that in the capture for the NAT firewall, no RMI stream was established, which we think is the cause of the exception shown in gfsh.
Do we need to start the locator with specific settings to get this to work, or is this related to allowing RMI traffic through the NAT firewall? Please find the settings the Geode locator was started with in Figure 4. In the GemFire properties file, we have the server-bind-address and jmx-manager-bind-address properties set to the internal IP address of the machine (W.X.Y.Z).
Figure 1
Figure 2 - Wireshark capture from remote machine to Geode locator behind Windows Defender Firewall
Figure 3 - Wireshark capture from remote machine to Geode locator behind NAT firewall
Figure 4
You might need to set the jmx-manager-hostname-for-clients property to point to the firewall's IP address as well. I think what happens when gfsh connects is that it first connects to the locator port (10334), then discovers the JMX manager address and port from the locator, and connects to that.
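For reference, a minimal sketch of the gemfire.properties this implies, assuming the bind addresses you described and the default JMX port (the addresses are your placeholders; jmx-manager-hostname-for-clients is the only new property):

server-bind-address=W.X.Y.Z
jmx-manager-bind-address=W.X.Y.Z
jmx-manager-port=1099
# Advertise the NAT firewall's public address to JMX clients such as gfsh:
jmx-manager-hostname-for-clients=A.B.C.D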
There is also another way to get gfsh to connect, over HTTP: https://geode.apache.org/docs/guide/114/configuring/cluster_config/gfsh_remote.html. You could try this if you can't get JMX to work. It does require starting an HTTP management service, however.
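If you try the HTTP route, the gfsh connection looks roughly like this; the 7070 port and the geode-mgmt/v1 context path are assumptions based on the defaults described in the linked doc, so check them against your Geode version:

# Assumes the locator was started with the HTTP management service enabled,
# e.g. http-service-port=7070 in gemfire.properties (and the NAT firewall
# forwarding that port to W.X.Y.Z):
connect --use-http=true --url="http://A.B.C.D:7070/geode-mgmt/v1"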
Related
I have a newly installed MikroTik switch and have successfully configured it for VPN traffic. However, behind the switch is a Linux server to which I am unable to connect via PuTTY. I can see the server and its IP address in Winbox -> IP -> DHCP Server -> Leases, but, as I say, I can't connect from within the VPN. I've made several attempts to add a rule to the firewall that would permit access, and I've even gone so far as to uncheck the firewall router box in Quick Set, but no matter what I've tried, it always times out. To be clear, I'd like the server to be visible to all machines connected to the switch, both via Ethernet and via PPTP.
I've been googling for hours, and I'm completely new to network engineering, so any help would be greatly appreciated.
I think the problem may be due to NAT and your VPN IP subnet. I have my VPN users in 192.168.4.0/24; the main subnet is 192.168.0.0/22. In Winbox, go to IP > Firewall, then in the NAT tab make sure you have a masquerade action on your VPN subnet. I think the VPN Quick Set adds one, but if you're using different subnets it gets confused. See the image for what I have set for my VPN users to access servers and resources on the main network.
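For reference, the equivalent rule from the RouterOS terminal would look roughly like this, using my subnets above (substitute your own VPN and LAN ranges):

/ip firewall nat add chain=srcnat action=masquerade src-address=192.168.4.0/24 dst-address=192.168.0.0/22 comment="masquerade VPN clients to main network"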
I have an Azure external load balancer with a backend pool that contains one Kubernetes master server and a load-balancing rule on port 443.
I added a rule with priority 500 to deny all traffic coming from the Internet on port 443 to the Kubernetes master server. This works fine.
I added a rule with priority 400 to accept traffic coming from a certain public IP, because I only want to be able to connect from that IP. I expected that I would then be able to connect, but I can't.
If I change the source of the accepting rule from that IP to Internet, then it works fine.
What am I missing?
Kind Regards
"I added a rule with priority 400 to accept traffic coming from a
certain public ip because I only want to be able to connect from that
ip. I expected that I should be able to connect but I can't.
If I change the rule that accepts traffic from the source ip to
internet then it works fine. What am I missing?"
Things that you might have missed:
Make sure you are not specifying the source port in your allow rule! The source port is picked by the client from a pool of available ports referred to as ephemeral ports, so it cannot be predicted; leave the source port range open.
Your deny rule is blocking the Azure Load Balancer traffic that the default AllowAzureLoadBalancerInBound rule would otherwise allow.
Load Balancer health probes originate from the IP address 168.63.129.16 and must not be blocked for probes to mark your instance up. Review probe source IP address for details.
Create a separate rule to allow this IP; as it is a Microsoft-owned IP, you should have no issues allowing it. Place it before the deny-all rule (priority < 500). That should fix your issue.
Diagnosis & RCA:
Why this is happening: the Azure Load Balancer's probe IP is being blocked, and hence the backend server is being marked as unhealthy by the load balancer, which then stops forwarding traffic to it.
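For reference, the allow rule described above could be created with the Azure CLI roughly like this; the resource group and NSG names are placeholders, and the AzureLoadBalancer service tag covers the 168.63.129.16 probe address:

az network nsg rule create \
  --resource-group my-rg \
  --nsg-name my-nsg \
  --name AllowAzureLBProbe \
  --priority 450 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes AzureLoadBalancer \
  --destination-port-ranges 443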
I have a Linux VM on Azure, which I can access using SSH without any issues. I needed access to another port (let's say 7077) from outside, and here is what I have done so far, but I am unable to establish connectivity:
Created an inbound rule from the networking settings; it created the rule on the Network Security Group attached to the network interface.
Added a new Network Security Group and attached it to the subnet.
If I run a netcat test against port 22, I get successful connectivity, but for port 7077 I get "connection refused".
Also, IP flow verify passes for the port.
Any pointer would be helpful.
You need to allow that same port in the firewall settings of the VM. The OS itself is refusing the connection, which suggests you have not set up any firewall rules to allow that port.
Try adding an allow rule in the firewall settings and see if you can reach that port; see the sketch after the links below.
https://www.digitalocean.com/community/tutorials/how-to-list-and-delete-iptables-firewall-rules
http://www.thegeekstuff.com/2011/02/iptables-add-rule/
For Ubuntu 17.04:
https://help.ubuntu.com/lts/serverguide/firewall.html
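As a concrete sketch for Ubuntu, either of the following would open the port from the question (7077 is the example port; run these on the VM itself):

# With ufw, Ubuntu's default firewall front end:
sudo ufw allow 7077/tcp

# Or directly with iptables:
sudo iptables -A INPUT -p tcp --dport 7077 -j ACCEPT

If the rule is in place and you still get "connection refused", check with ss -tlnp on the VM that your application is actually listening on 7077.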
I've set up Azure Point-to-Site, and I'm able to connect from my computer to an Azure VM (file share). I'm also able to ping my computer's IP address from the Azure VM. However, I'm not able to connect to any resource on my local computer. When trying to access a file share on my computer from the Azure VM, I get the following error:
file and print sharing resource (169.254.108.240) is online but isn't responding to connection attempts.
The remote computer isn’t responding to connections on port 445, possibly due to firewall or security policy settings, or because it might be temporarily unavailable. Windows couldn’t find any problems with the firewall on your computer.
Port 445 is enabled on my local computer:
netsh firewall set portopening TCP 445 ENABLE
As an additional test, if I issue \\169.254.108.240 from my local computer, pointing to itself, it works fine. The same attempt from the Azure VM gives me the error above.
Thanks,
Your IP address (169.254.*) is a link-local, non-routable address, which usually means the machine fell back to an automatic (APIPA) address after failing to reach a DHCP server. You'll need to get a valid IP (say with DHCP, or set manually) and allow connections to your machine. If you have a firewall, this means adding a NAT rule to it.
If possible, try making the connection from another computer on your LAN to isolate any other firewall/Azure issues.
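If you end up setting the address manually on Windows, a netsh sketch would look like this; the interface name "Ethernet" and all addresses are assumptions, so substitute values valid for your LAN:

# Static IP, netmask, and default gateway:
netsh interface ip set address "Ethernet" static 192.168.1.50 255.255.255.0 192.168.1.1
# Or go back to DHCP:
netsh interface ip set address "Ethernet" dhcp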
I think you have to consider several concepts while implementing an Azure network. First, try to put the Point-to-Site network on a different range of IPs (like 10.4.0.0). Then try to disable the firewall on your computer and try again. If you have a proper routing device, it should go through and get feedback from the local machine.
I am using an Azure Virtual Machine (Windows Server 2008 R2 image) provided from the gallery, and I created a public port and a private port using the portal. I logged in to the VM remotely and ran a TCP server application inside the VM (the TCP server binds to the private port of the VM). The problem I face is that I cannot connect to it through the public IP and port (from an external machine). I have created an inbound rule in the VM's firewall, where I enable connections to the private port of the VM. I tried recreating the VM, and also new ports; still the problem persists. One more thing I observed is that my TCP client is able to connect to the RemoteDesktop port of the VM, and also the PowerShell port, but it does not connect to the port that I created through the portal. Please suggest what could be wrong.
Note: I also observed some weird behavior. I enabled all ports for my TCP server app in the inbound rules of the firewall and found that some unknown IP (similar to an Azure internal IP) was connecting to my server. Why is this happening?
I would like to understand how you are trying to connect to the Virtual Machine: using RDP, or testing connectivity using, for example, a port ping.
Endpoints for RDP and PowerShell are configured by default. So if you are trying to connect using Remote Desktop, you can connect directly to the VM using MSTSC from Run, providing the IP of the VM followed by the port number in the format below:
xx.xx.xx.xx:3389
However, if you would like to test connectivity to the VM, I suggest you use a port ping instead of an ICMP ping, since ICMP traffic is blocked by the Azure load balancer and the ping requests time out. While Ping.exe uses ICMP, other tools such as PsPing, Nmap, or Telnet allow you to test connectivity to a specific TCP port.
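For example, a TCP-level connectivity test with PsPing or Telnet would look like this, using the RDP endpoint from above (the IP is a placeholder; repeat the same test against the public port you created in the portal):

psping xx.xx.xx.xx:3389
telnet xx.xx.xx.xx 3389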
On the other hand, after creating the VM, you can add endpoints additionally as needed. You can also manage incoming traffic to the public port by configuring rules for the Network Access Control List (ACL) of the endpoint.
The private port is used internally by the virtual machine to listen for traffic on that endpoint. The public port is used by the Azure load balancer to communicate with the virtual machine from external resources. After you create an endpoint, you can use the network access control list (ACL) to define rules that help isolate and control the incoming traffic on the public port. For more information, see About Network Access Control Lists.