pfSense - Firewall between subnets

On my LAN, I have 2 networks. Let's say 192.168.10.0/24 and 192.168.20.0/24. I would like to use pfSense to allow or deny access from LAN1 to LAN2, depending on the IP.
On my test server, I have 2 NICs. On NIC1, I configured the IP 192.168.10.1/24 and on NIC2 192.168.20.1/24.
NIC1 is connected to the switch, where I can access pfSense using my notebook, configured with IP 192.168.10.2. On NIC2, there is another switch and another notebook with IP 192.168.20.2.
I went to the Firewall rules and granted access from all sources and protocols from LAN1 to LAN2. But even then, I can't ping LAN2. What do I need to do to be able to access LAN2 from LAN1?
Current scenario: https://prnt.sc/vqua7f
Intended scenario: https://prnt.sc/vquc4z

Go to System / Routing / Static Routes and add a route on each gateway to the other subnet.
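As a quick sanity check from the LAN1 notebook, here is a minimal sketch assuming a Linux client (addresses taken from the question):
ip route add 192.168.20.0/24 via 192.168.10.1    # send LAN2 traffic to pfSense's LAN1 address
ping -c 3 192.168.20.2                           # should reach the LAN2 notebook once the firewall rule passes traffic
Note that if the notebooks already use pfSense (192.168.10.1 and 192.168.20.1) as their default gateway, pfSense routes between its directly connected subnets automatically and the client-side route is unnecessary.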

Related

How to open port 22 on Azure Kubernetes Service for the loopback IP 127.0.0.1

How should we open port 22 on the AKS loopback IP?
We are trying to telnet to the loopback IP on port 22, which works fine on any Linux VM, but on AKS we get the error 'Connection closed'.
• Note that AKS clusters have unrestricted outbound (egress) internet access. This level of network access allows nodes and services you run to access external resources as needed. If you wish to restrict egress traffic, a limited number of ports and addresses must remain accessible to keep cluster maintenance tasks healthy. The simplest solution for securing outbound addresses is a firewall device that can control outbound traffic based on domain names. Azure Firewall, for example, can restrict outbound HTTP and HTTPS traffic based on the FQDN of the destination. You can also configure your preferred firewall and security rules to allow these required ports and addresses.
Thus, you can configure an inbound rule and an outbound rule to allow traffic on port 22 (SSH) with the destination IP address 127.0.0.1 (the loopback address). To do so, refer to the documentation link below:
https://learn.microsoft.com/en-us/azure/aks/limit-egress-traffic#adding-firewall-rules
According to the above link, you must deploy a firewall, create a user-defined route (UDR) with a next hop to the Azure Firewall, and associate it with the AKS subnet. This way, if you configure Azure Firewall with the AKS cluster, you will be able to control the ingress and egress port traffic.
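As a rough sketch of the UDR step from that document, using the Azure CLI (the resource names and the firewall private IP are placeholders, not values from the question):
az network route-table create -g myRG -n aks-udr
az network route-table route create -g myRG --route-table-name aks-udr -n to-firewall --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.3.4
az network vnet subnet update -g myRG --vnet-name myVnet -n aks-subnet --route-table aks-udr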

Azure: one VM with two services on two NICs with two public IPs

Setup
I am setting up an Azure VM (Standard E2as_v4 running Debian 10) to serve multiple services. I want to use a separate public IP address for each service. To test whether I can do this, I set up the following:
vm1
- nic1
- vnet1, subnet1
- ipconfig1: 10.0.1.1 <-> p.0.0.1
- nsg1
- allow: ssh (22)
- nic2
- vnet1, subnet2
- ipconfig2: 10.0.2.1 <-> p.0.0.2
- nsg2
- allow: http (80)
vnet1
- subnet1: 10.0.1.0/24
- subnet2: 10.0.2.0/24
- address space: [10.0.1.0/24, 10.0.2.0/24]
Where 10.x.x.x IPs are private and p.x.x.x IPs are public.
nic1 (network interface) and its accompanying nsg1 (network security group) were created automatically when I created the VM; otherwise they are symmetrical to nic2, nsg2 (except for nsg2 allowing HTTP rather than SSH). Also, both NICs register fine on the VM.
Problem
I can connect to SSH via the public IP on nic1 (p.0.0.1). However, I fail to connect to HTTP via the public IP on nic2 (p.0.0.2).
Things I've tried
Listening on 0.0.0.0. To check whether it is a problem with my server, I had my HTTP server listen on 0.0.0.0. Then I allowed HTTP on nsg1, and added a secondary IP configuration on nic1 with another public IP (static 10.0.1.101 <-> p.0.0.3). I added the static private IP address manually in the VM's configuration (/run/network/interfaces.d/eth0; possibly not the right file to edit but the IP was registered correctly). I was now able to connect via both public IPs associated with nic1 (p.0.0.1 and p.0.0.3) but still not via nic2 (p.0.0.2). This means I successfully set up two public IPs for two different services on the VM, but they share the same NIC.
Configuring a load-balancer. I also tried to achieve the same setup using a load balancer. In this case I created a load balancer with two backend pools - backend-pool1 for nic1 and backend-pool2 for nic2. I diverted SSH traffic to backend-pool1 and HTTP traffic to backend-pool2. The results were similar to the above (SSH connected successfully, HTTP failed unless I use backend-pool1 rather than backend-pool2). I also tried direct inbound NAT rules - with the same effect.
Check that communication via subnet works. Finally, I created a VM on subnet2. I can communicate with the service using the private IP (10.0.2.1) regardless of the NSG configuration (I tried a port which isn't allowed on the NSG and it passed). However, it doesn't work when I use the public IP (p.0.0.2).
Question
What am I missing? Is there a setting I am not considering? What is the reason for not being able to connect to my VM via a public IP address configured on an additional NIC?
Related questions
Configuring a secondary NIC in Azure with an Internet Gateway - the answer refers to creating a secondary public IP
Multiple public IPs to Azure VM - the answer refers to creating a load balancer
Notes: I can try to provide command lines to recreate the setup, if this is not enough information. The HTTP server I am running is:
sudo docker run -it --rm -p 10.0.2.1:80:80 nginx
I changed it to listen on 0.0.0.0 for the subsequent tests.
Here's the final topology I used for testing.
To allow the secondary interface (with a public IP) to send and receive traffic to and from the Internet, we don't need to create a load balancer. Instead, we can use iproute to maintain multiple routing tables. Read http://www.rjsystems.nl/en/2100-adv-routing.php and this SO answer for more details.
I validated the following configuration; it worked for me on a Linux (Ubuntu 18.04) VM.
To activate Linux advanced routing on a Debian GNU/Linux system, install the iproute package (iproute2 on newer releases):
apt-get install iproute
Configure two default routes, one per routing table. First register a second table, named cheapskate, in /etc/iproute2/rt_tables:
echo 2 cheapskate >> /etc/iproute2/rt_tables
Add the new default route to table cheapskate and then display it:
~# ip route add default via 10.0.2.1 dev eth1 table cheapskate
~# ip route show table cheapskate
default via 10.0.2.1 dev eth1
Add a rule so that packets with a source address of 10.0.2.4 (the private IP of the secondary interface in this topology) use the routing table cheapskate, with a priority of 1000:
ip rule add from 10.0.2.4 lookup cheapskate prio 1000
The kernel searches the list of ip rules starting with the lowest priority number, processing each routing table until the packet has been routed successfully.
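To confirm the ordering, you can list the rules; with the rule above in place the output will look roughly like this:
~# ip rule show
0:      from all lookup local
1000:   from 10.0.2.4 lookup cheapskate
32766:  from all lookup main
32767:  from all lookup default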
After all of this, you can verify with the following command; you should see the public IP address attached to the secondary interface.
curl --interface eth1 api.ipify.org?format=json -w "\n"
Please note that you need sufficient (root) permissions for all of the above steps.
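One caveat worth adding: ip route and ip rule changes do not survive a reboot. A minimal sketch for persisting them on a system using ifupdown (assuming eth1 and the addresses above; netplan-based systems need a different mechanism):
# /etc/network/interfaces fragment
auto eth1
iface eth1 inet dhcp
    post-up ip route add default via 10.0.2.1 dev eth1 table cheapskate
    post-up ip rule add from 10.0.2.4 lookup cheapskate prio 1000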

How do I know that a Virtual Machine in Azure uses the Local network gateway route to connect to an on-premises network?

Here is a data engineer who needs your help to set up a connection to an on-premises environment :)!
I have created a virtual network (10.0.0.0/16) with a default subnet (10.0.0.0/24).
Then I created a (Windows) virtual machine which is connected to the vnet/subnet and has ICMP inbound and outbound rules allowed for the ping test. Pinging google.com is no problem.
The next step was to create a Virtual network gateway & Local network gateway to connect to an on-premise environment.
The Local network gateway has a Site-to-site (IPsec) connection to a VPN device from a third party (over which I have no control). Status in the Azure portal = 'Connected'.
The third party is able to ping the Virtual Machine in Azure, the 'data in' property on the VPN connection shows that 2 kb (ping) has been received. So that works!
When I try to send a ping command to an IP address within the 'address space' specified in the Local network gateway, the ping command fails (Request timed out.).
After a lot of searching on Google/Stack Overflow, I found out that I need to configure a Route Table in Azure because of the BGP = disabled setting. So hopefully I did a good job configuring the Routing Table Routes, but still I can't perform a successful ping :(!
Do you guys/girls know which step/configuration I have forgotten or where I made a mistake?
I would like to understand why I cannot perform a successful ping to the on-premise environment. If you need more information, please let me know
Site-to-site (IPsec) connection screenshot/config
Routing Table setup screenshot/config
Routing Table Routes in more detail
If you are NOT using BGP between the Azure VPN gateway and this particular network, you must provide a list of valid address prefixes for the Address space in your local network gateway. The address prefixes you specify are the prefixes located on your on-premises network.
In this case, it looks like you have added the address prefixes. Make sure that the ranges you specify here do not overlap with ranges of other networks that you want to connect to. Azure will route the address range that you specify to the on-premises VPN device IP address. No other operations are needed: you don't need to set a UDR, and in particular you should not associate a route table with the Gateway Subnet. Also, avoid associating a network security group (NSG) with the Gateway Subnet. You can check the route table by selecting Effective routes for a network interface on the Azure VM. Read more details here.
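If you prefer the command line, a hedged sketch for dumping those effective routes with the Azure CLI (resource names are placeholders):
az network nic show-effective-route-table -g myRG -n myVmNic -o table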
If you would like to verify the connection from the Azure VNet to an on-premises network, ensure that you ping a real private IP address from your on-premises network (that is, an IP address assigned to an on-premises machine); you can check the IP address with ipconfig /all in a local CMD. Moreover, you could enable ICMP through the Windows firewall inside the Azure VM with the PowerShell command New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4. Or, instead of using ping, you can use the PowerShell command Test-NetConnection to test a connection to a remote host.
If the problem persists, you could try to reset the Azure VPN gateway and reset the tunnel from the on-premises VPN device. To go further, you could follow these steps to identify the cause of the problem.
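If you do reset the gateway, a hedged Azure CLI one-liner (names are placeholders):
az network vnet-gateway reset -g myRG -n myVnetGateway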

How to whitelist source IPs on Azure VMs fronted by Azure Load Balancer

I have a public-facing, standard SKU Azure Load Balancer that forwards incoming requests for a certain port to a virtual machine, using load balancing rules. This virtual machine has an NSG defined at the subnet level that allows incoming traffic for that port, with the source set to 'Internet'.
Presently, this setup works, but I need to implement whitelisting - to allow only a certain set of IP addresses to be able to connect to this virtual machine, through the load balancer. However, if I remove the 'Internet' source type in my NSG rule, the VM is no longer accessible through the Load Balancer.
Has anyone else faced a similar use case, and what is the best way to set up IP whitelisting on VMs that are accessible through a Load Balancer? Thanks!
Edit: to provide more details
Screenshot of NSGs
These are the top level NSGs defined at the subnet.
We have a public load balancer that fronts the virtual machine where above NSGs are applied. This virtual machine doesn’t have a specific public IP and relies on the Load Balancer’s public IP.
The public Load Balancer forwards all traffic on port 8443 and port 8543 to this virtual machine, without session persistence and with outbound and inbound using the same IP.
Below are the observations I have made so far:
Unless I specify the source for NSG rule Port_8443 (in above table) as ‘Internet’, this virtual machine is not accessible on this port, via the load balancer’s public IP.
When I retain the NSG rule Port_8543, which whitelists only specific IP addresses, this virtual machine is not accessible on this port via the load balancer’s public IP, even when one of those whitelisted clients tries to connect to this port.
I tried adding the NSG rule Custom_AllowAzureLoadBalancerInBound, to a higher priority than the port_8543, but it still didn’t open up this access.
I also tried to add the Azure Load balancer VIP (168.63.129.16) to the Port_8543 NSG, but that too didn’t open-up the access to port 8543, on load balancer’s public IP.
I have played with Load Balancing rules options too, but nothing seems to achieve what I am looking for – which is:
Goal 1: to open-up the virtual machine’s access on port 8443 and port 8543 to only the whitelisted client IPs, AND
Goal 2: allow whitelisted client IPs to be able to connect to these ports on this virtual machine, using the load balancer’s public IP
I am only able to achieve one of the above goals, but not both of them.
I have also tried the same whitelisting with a dedicated public IP assigned to the virtual machine; and that too loses connectivity to ports, where I don't assign 'Internet' source tag.
Azure has default rules in each network security group. It allows inbound traffic from the Azure Load Balancer resources.
If you want to restrict which clients can access your VM, you just need to add a new inbound port rule with the public IP addresses of your clients as the Source, and specify the Destination port ranges and Protocol in your specific inbound rules. You could check a client's public IPv4 address here, by opening that URL on the client's machine.
Just wanted to add a note for anyone else stumbling here:
If you are looking to whitelist an Azure VM (available publicly or privately) for a few specific client IPs, below are the steps you must perform:
Create an NSG for the VM (or subnet), if one is not already available
Add NSG rules to allow inbound traffic from the specific client IPs on specific ports
Add an NSG rule to deny inbound traffic from all other sources [this is optional but helps ensure the security of your setup]
Also, please note: look at all the public IPs that your client machines will connect from. Especially while testing, use the public IPs and not the VPN gateway address ranges, which is what we used and how we ended up with a false negative in our whitelisting test.
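A hedged Azure CLI sketch of the steps above (resource names, client IPs, and ports are placeholders, not values from the question):
az network nsg rule create -g myRG --nsg-name myNSG -n Allow_Clients_8443_8543 --priority 100 --direction Inbound --access Allow --protocol Tcp --source-address-prefixes 203.0.113.10 203.0.113.11 --destination-port-ranges 8443 8543
az network nsg rule create -g myRG --nsg-name myNSG -n Deny_Other_8443_8543 --priority 200 --direction Inbound --access Deny --protocol Tcp --source-address-prefixes '*' --destination-port-ranges 8443 8543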

Allow mobile internet to SSH into an AWS EC2 instance

I've set up a security group to allow only known IP addresses to access my EC2 instance. For that, I added the known IP addresses to the inbound rule to allow SSH access. But the SSH connection fails when I try to connect through mobile internet, because the mobile internet IP address changes continuously. How can I get the public IP address when connecting through mobile data?
Thank you in advance!!
Generally, your mobile carrier assigns your IP address dynamically, and you will get a new IP every time. But all these IPs usually fall within a range, so you can allow a CIDR containing your IP in the inbound rule. For example, if your IP is A.B.C.D, you can allow the CIDR A.B.C.0/24, which matches all IPs that start with A.B.C. (0-255). But if you really need a stable public IP, you will have to talk to your mobile company.
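As a hedged shell sketch for adding your current range with the AWS CLI (the security group ID is a placeholder; the checkip service is one of several options):
MYIP=$(curl -s https://checkip.amazonaws.com)                      # your current public IP
CIDR=$(echo "$MYIP" | awk -F. '{print $1"."$2"."$3".0/24"}')       # widen it to the surrounding /24
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr "$CIDR"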
