Proxmox with OPNsense as Firewall/GW - routing issue - linux

This setup is based on a Proxmox host sitting behind an OPNsense VM hosted on Proxmox itself. OPNsense protects the Proxmox host, provides a firewall, a private LAN with DHCP/DNS for the VMs, and offers an IPsec connection into the LAN to access all VMs and the Proxmox host, which are not NATed. The server is a typical Hetzner server, so there is only one NIC, but multiple IPs and/or subnets on that NIC.
Due to the cluster-blocker with the PCI-passthrough setup, this is my alternative:
Proxmox server with 1 NIC (eth0)
3 public IPs; IP2/IP3 are routed by MAC in the datacenter (to eth0)
KVM bridged setup (eth0 has no IP, vmbr0 bridged to eth0 with IP1)
A private LAN on vmbr30, 10.1.7.0/24
Shorewall on the Proxmox server
To better outline the setup, I created this drawing (not sure it's perfect, tell me what to improve):
Textual description:
Network interfaces on Proxmox
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet manual
pre-up sleep 2
auto vmbr0
iface vmbr0 inet static
address External-IP1(148.x.y.a)
netmask 255.255.255.192
# Our gateway is reachable via Point-to-Point tunneling
# put the Hetzner gateway IP address here twice
gateway DATACENTER-GW1
pointopoint DATACENTER-GW1
# Virtual bridge settings
# this one is bridging physical eth0 interface
bridge_ports eth0
bridge_stp off
bridge_fd 0
pre-up sleep 2
bridge_maxwait 0
metric 1
# Add routing for up to 4 dedicated IPs we get from Hetzner
# opnsense
up route add -host External-IP2(148.x.y.b) dev vmbr0
# rancher
up route add -host External-IP3(148.x.y.c) dev vmbr0
# Ensure local routing of private IPv4 IPs from our
# Proxmox host via our firewall's WAN port
up ip route add 10.1.7.0/24 via External-IP2(148.x.y.b) dev vmbr0
auto vmbr30
iface vmbr30 inet static
address 10.1.7.2
netmask 255.255.255.0
bridge_ports none
bridge_stp off
bridge_fd 0
pre-up sleep 2
metric 1
Shorewall on Proxmox
interfaces:
wan eth0 detect dhcp,tcpflags,nosmurfs
wan vmbr0 detect bridge
lan vmbr30 detect bridge
policies:
lan lan ACCEPT - -
fw all ACCEPT - -
all all REJECT INFO -
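(For reference, the matching Shorewall zones file would presumably look like this - a sketch only, with zone names taken from the interfaces entries above:)
#ZONE   TYPE
fw      firewall
wan     ipv4
lan     ipv4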
OPNsense
WAN is External-IP2, attached to vmbr0 with MAC-XX
LAN is 10.1.7.1, attached to vmbr30
What is working:
The basic setup works fine: I can access OPNsense on IP2, Proxmox on IP1, and the rancher VM on IP3 - that is, everything that does not need any routing.
I can connect to OPNsense with an IPsec mobile client, which offers access to the LAN (10.1.7.0/24) from a virtual IP range, 172.16.0.0/24
I can access 10.1.7.1 (OPNsense) while connected with OpenVPN
I can access 10.1.7.11 / 10.1.7.151 from OPNsense (10.1.7.1) (shell)
I can access 10.1.7.11 / 10.1.7.1 from the other VM (10.1.7.151) (shell)
What's not working:
a) connecting to 10.1.7.11/10.1.7.151 or 10.1.7.2 from the IPsec client
b) [SOLVED in UPDATE 1] connecting to 10.1.7.2 from 10.1.7.1 (OPNsense)
c) It seems like I have asynchronous routing: while I can access e.g. 10.1.7.1:8443, I see a lot of log entries
d) IPsec LAN sharing should only need a rule in the IPSEC chain, "from * to LAN ACCEPT" - but that did not work for me; I had to add "from * to * ACCEPT"
Questions:
I) Of course I want to fix a), b), c), d), probably starting with understanding c) and d)
II) Would it help, in this setup, to add a second NIC?
III) Could it be an issue that I activated net.ipv4.ip_forward on the Proxmox host (shouldn't it be routed instead?)
Once I have this straightened out, I would love to publish a comprehensive guide on how to run OPNsense as an appliance with a private network on Proxmox, exposing some services to the outside world using HAProxy + Let's Encrypt, and accessing the private LAN using IPsec.
UPDATE1:
Removing "up ip route add 10.1.7.0/24 via IP2 dev vmbr0" from vmbr0 on Proxmox fixed the issue where Proxmox could neither access 10.1.7.0/24 nor be accessed from the LAN network.
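For reference, the offending route can also be dropped at runtime without restarting networking; a minimal sketch, with IP2 as the usual placeholder for 148.x.y.b:
# drop the stale route
ip route del 10.1.7.0/24 via IP2 dev vmbr0
# confirm it is gone
ip route show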
UPDATE2:
I created an updated/changed setup that uses PCI passthrough. The goals are the same - it reduces the complexity - see here

Some sorely needed rough basics first:
There's routing, which deals with IPs and packets on layer 3.
There's switching, which deals with MACs and frames on layer 2.
Further, you speak of vmbr0/1/30, but only 0 and 30 are shown in your config.
Shorewall does not matter for your VM connectivity (iptables is layer 3; ebtables would be layer 2, for contrast). Your frames should just fly past the Shorewall, never reaching the HV but instead going to the VMs directly. Shorewall is just a frontend that drives iptables in the background.
With that out of the way:
Usually you don't need any routing on the Proxmox BRIDGES. A bridge is a switch, as far as you are concerned. vmbr0 is a virtual external bridge which you linked with eth0 (thus creating an in-kernel link between a physical NIC and your virtual interface, so that packets can flow at all). The bridge could also run without any IP attached to it. But to keep the HV accessible, an external IP is usually attached to it. Otherwise you'd have to set up your firewall gateway plus a VPN tunnel, give vmbr30 an internal IP, and then you could access the internal IP of the HV from the internet after establishing a tunnel connection - but that's just for illustration purposes for now.
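For illustration, an address-less bridge stanza in /etc/network/interfaces would look like this (a sketch only; without an IP on vmbr0 the HV itself is no longer reachable from the outside):
auto vmbr0
iface vmbr0 inet manual
bridge_ports eth0
bridge_stp off
bridge_fd 0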
Your IPsec connectivity issue sounds an awful lot like a misconfigured VPN, but mobile IPsec is also often just a pain in the butt to work with due to protocol implementation differences. OpenVPN works a LOT better, but you should know your basics about PKI and certificates to implement it. Plus, if OPNsense is as counter-intuitive as pfSense when it comes to OpenVPN, you are possibly in for a week of stabbing in the dark. For pfSense there's an installable OpenVPN config export package which makes life quite a bit easier; I don't know whether it is available for OPNsense, too.
Concerning the first picture: it does not look so much like what you call asynchronous routing, but rather like a firewall issue.
For your tunnel firewall (interface IPSEC or interface OpenVPN on OPNsense, depending on the tunnel you happen to use), just leave it at IPv4 any:any to any:any. By the definition of the tunnel itself you should only get into the LAN net anyway; OPNsense will automatically send the packets out from the LAN interface only (see the second picture).
net.ipv4.ip_forward = 1 means enabling routing in the kernel at all on the Linux interfaces where you activated it. You can then do NAT-ting stuff via iptables, which in theory makes it possible to get into your LAN via the external HV IP on vmbr0, but that's not something you should achieve by accident. You might be able to disable forwarding again without losing connectivity, at least to the HV. I am unsure about your extra routes for the other external IPs, but these should be configurable the same way from within OPNsense directly (create the point-to-point links there; the frames will transparently flow through vmbr0 and eth0 to the Hetzner gateway), which would be cleaner.
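To check and, if desired, turn forwarding off on the HV, something like this should work (a sketch; persist the setting in /etc/sysctl.conf to survive reboots):
# show the current state
sysctl net.ipv4.ip_forward
# disable forwarding at runtime
sysctl -w net.ipv4.ip_forward=0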
Also, you should not make the rancher VM directly accessible externally, thus bypassing your firewall - I doubt this is what you want to achieve. Rather, put the external IP onto the OPNsense (as a virtual IP of type "IP alias"), set up 1:1 NAT from IP3 to the internal IP of the rancher VM, and do the firewalling via OPNsense.
Here is some ASCII art of how things possibly should look, from what I can discern from your information so far. For the sake of brevity only interfaces are shown, no distinction is made between physical and virtual servers, and no point-to-point links are shown.
[hetzner gw]
|
|
|
[eth0] (everything below happens inside your server)
||
|| (double lines here to hint at the physical-virtual linking linux does)
|| (which is achieved by linking eth0 to vmbr0)
||
|| +-- HV access via IP1 -- shorewall/iptables for HV firewalling
|| |
[vmbr0] [vmbr30]
IP1 | | |
| | | |
| | | |
[wan nic opn] | | |
IP2 on wan directly, | | |
IP3 as virtual IP of type ip alias | | |
x | | |
x (here opn does routing/NAT/tunnel things) | | |
x | | |
x set up 1:1 NAT IP3-to-10.1.7.11 in opn for full access | | |
x set up single port forwardings for the 2nd vm if needed | | |
x | | |
[lan nic opn]-----------------------------------------------+ | |
10.1.7.1 | |
| |
+----------+ |
| |
[vm1 eth0] [vm2 eth0]
10.1.7.11 10.1.7.151
If you want to firewall the HV via OPNsense, too, these would be the steps to do so while maintaining connectivity (a config sketch follows the list):
remove IP1 from [vmbr0]
put it on [wan nic opn]
put an internal IP (IP_INT) from the opn LAN onto [vmbr30]
set up 1:1 NAT from IP1 to IP_INT
set all firewall rules
swear like hell when you break the firewall and cannot reach the HV anymore
see whether you can access the HV via an iKVM solution, hoping to get a public IP onto it, so you can use the console window in Proxmox to fix or reinstall the firewall.
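After the first three steps, vmbr0 becomes an address-less bridge (iface vmbr0 inet manual, as sketched earlier) and the HV's only address lives on vmbr30; a sketch, assuming IP_INT = 10.1.7.2 and the OPNsense LAN IP 10.1.7.1 as the HV's new default gateway:
auto vmbr30
iface vmbr30 inet static
address 10.1.7.2
netmask 255.255.255.0
gateway 10.1.7.1
bridge_ports none
bridge_stp off
bridge_fd 0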

Related

Two gateway routing issue

I have two NICs.
On eth0 the IP is 10.135.28.86/16.
On eth1 the IP is 135.251.8.43/24.
My routing table looks like this:
135.251.8.0/24 dev eth1 proto kernel scope link src 135.251.8.43
10.135.0.0/16 dev eth0 proto kernel scope link src 10.135.28.86
169.254.0.0/16 dev eth0 scope link metric 1002
169.254.0.0/16 dev eth1 scope link metric 1003
10.0.0.0/8 via 10.135.0.1 dev eth0
default via 135.251.8.1 dev eth1
Now if I ping 10.135.28.86 from 10.34.7.103, it's OK, while if I ping 135.251.8.43 from 10.34.7.103, it fails.
And if I ping my public IP 135.251.8.43 from 135.252.11.7, it's OK; if I ping 10.135.28.86, it fails.
However, on my other machines, which have exactly the same subnets and gateway configured, I can ping both IPs from either 10.34.7.103 or 135.252.11.7.
Any ideas on this?
I used tcpdump to capture ICMP packets on the other machines and found that the echo request comes in on eth0 and the echo reply goes out from eth1,
but on this machine no echo replies were captured.
When you ping from your other machines, which have IPs in both networks, each machine uses the interface on the same network to send the packet (so private-to-private and public-to-public, since they are on directly connected subnets). That is why it works: they are on the same subnet.
I see 2 scenarios.
1.
The machine which only has an IP on your private network (10.34.7.103) probably sends its ping to the default gateway (IP?), which then forwards it to 135.251.8.43 (eth1).
But since the source address (10.34.7.103) is reachable via a route on the other interface (eth0), the answer will be sent back out there. I would say you have a flawed network architecture.
2.
The machine 10.34.7.103 has a static route for 135.251.8.43 via 10.135.28.86, but your machine does not forward between the two networks.
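If the architecture has to stay as it is, source-based policy routing is the usual cure for such asymmetric replies. A minimal sketch with iproute2, using the addresses and gateways from the routing table above (the table names are arbitrary):
# declare two extra routing tables
echo "100 pub" >> /etc/iproute2/rt_tables
echo "101 priv" >> /etc/iproute2/rt_tables
# replies sourced from the public IP leave via the public gateway on eth1
ip route add default via 135.251.8.1 dev eth1 table pub
ip rule add from 135.251.8.43 table pub
# replies sourced from the private IP leave via the private gateway on eth0
ip route add default via 10.135.0.1 dev eth0 table priv
ip rule add from 10.135.28.86 table priv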

Send traffic to self over physical network on Ubuntu

I have a dual-port Ethernet NIC and, let's say, I have connected the two ports in a loop and assigned the following IPs to the two Ethernet interfaces:
eth2 -> 192.168.2.1
eth3 -> 192.168.3.1
I want to send traffic from one port to the other over the physical network, e.g. ping 192.168.3.1 from 192.168.2.1. However, the TCP/IP stack in the Linux kernel recognizes that these two addresses are local and instead sends the traffic to the loopback adapter, so the traffic never hits the physical network.
The closest thing I have to a solution is Anastasov's send-to-self patch, which, unfortunately, has been discontinued since kernel 3.6, so it won't work on Ubuntu 13.10 (kernel 3.11) for me. I've tried rewriting the patch for 3.11, but I can't seem to locate these files in the Ubuntu distro:
include/linux/inetdevice.h
net/ipv4/devinet.c
net/ipv4/fib_frontend.c
net/ipv4/route.c
Documentation/networking/ip-sysctl.txt
Is there a way I can get the send-to-self patch to work, or an alternative?
You can use network namespaces for this purpose.
As ip-netns's manpage says:
A network namespace is logically another copy of the network stack,
with its own routes, firewall rules, and network devices.
The following is just a copy of this answer:
Create a network namespace and move one of the interfaces into it:
ip netns add test
ip link set eth3 netns test
Start a shell in the new namespace:
ip netns exec test bash
Then proceed as if you had two machines. When finished, exit the shell and delete the namespace:
ip netns del test
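From there you can proceed as with two machines; a minimal end-to-end sketch, assuming eth3 is the port that was moved into the namespace test:
# host side: address eth2 and route the remote subnet out of it
ip addr add 192.168.2.1/24 dev eth2
ip link set eth2 up
ip route add 192.168.3.0/24 dev eth2
# namespace side: address eth3 and route back
ip netns exec test ip addr add 192.168.3.1/24 dev eth3
ip netns exec test ip link set eth3 up
ip netns exec test ip route add 192.168.2.0/24 dev eth3
# this ping now crosses the physical loop instead of lo
ping 192.168.3.1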
You can try configuring the routing table by running the "ip" command:
ip route add to unicast 192.168.3.1 dev eth2
ip route add to unicast 192.168.2.1 dev eth3
The new routes would be added to the routing table, and they should take effect before the egress routing lookup hits the host-local route between 192.168.3.1 and 192.168.2.1; therefore, the traffic should be sent through the physical interfaces eth2 and eth3 instead of the loopback lo.
Never tried it myself, but it should work.

OpenSIPs stun module requires two IP addresses

I have to make a STUN server in OpenSIPs, and the documentation says that I need to bind 2 IP addresses.
http://www.opensips.org/About/News0042
A STUN server uses 2 IPs and 2 ports to create 4 sockets on which to listen and respond.
STUN requires 2 routable IP addresses.
How can I set up two public IP addresses on one Linux server? I've searched the whole web and failed to find the answer.
Several options.
Option 1.
You likely just need to use ifconfig from the command line to start.
You can assign an additional static IP address to your NIC via the command line. Type ifconfig to get the name of your default adapter; it's typically "eth0". Then, to add a secondary address to this adapter, the command is something like the following:
sudo ifconfig eth0:1 192.168.1.55 netmask 255.255.255.0 up
Where 255.255.255.0 is the netmask of my /24 subnet and 192.168.1.55 is an IP address on my subnet that no other device is already using.
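On newer systems the iproute2 equivalent may be preferable; a sketch assuming the same adapter and address (the label just makes the address show up as eth0:1 in ifconfig-style output):
# add a secondary address to eth0
sudo ip addr add 192.168.1.55/24 dev eth0 label eth0:1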
Option 2.
After you get your server up and running with Option 1, you likely need a way to make the IP address assigned by ifconfig persist after a reboot. You could stick an ifconfig statement into one of your rc init scripts, but most Linux distributions have a formal way of configuring an interface via an /etc file. This step varies between different flavors of Linux; on Ubuntu, it is all defined in the /etc/network/interfaces file. Add these lines to the bottom of your existing file:
auto eth0:1
iface eth0:1 inet static
address 192.168.1.55
netmask 255.255.255.0
Option 3 (shameless plug)
Switch to Stuntman (www.stunprotocol.org) as your STUN server. Its default mode only requires one IP address to be present on the box. Most client usages of the STUN protocol don't require the second IP address, except for NAT classification and behavior tests.

Multiple NIC card with different subnet

I am using CentOS 6.2 (64-bit) and have 4 NIC interfaces; I am trying to connect two NICs on different subnets:
em1 with the 10.30.2.x series
em4 with the 10.30.4.x series
I also added a route with /sbin/route add -net 10.30.4.0 netmask 255.255.255.0 dev em4
When I bring the device up with "ifup em4", I am not able to ping either interface.
There are no iptables rules running, and SELinux is also disabled.
The same setup is working on one more Dell server; on that server reverse-path filtering and IP forwarding are not enabled, and even then it works.
Reverse IP & IP Forwarding
cat /proc/sys/net/ipv4/conf/em2/rp_filter
1
cat /proc/sys/net/ipv4/ip_forward
0
Any comments would be appreciated.
Thanks in advance.
If you are sure that the IP addresses are actually set on the interfaces, everything should work out; I would suggest checking the network equipment along the way.
The easiest way to test this is to use tcpdump -i any icmp and see whether you actually receive the packets; this will also show you if your "pong" is going out on the wrong interface.
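For example (a sketch; -n merely skips DNS resolution so the output appears immediately):
# run this while pinging from the other host
tcpdump -ni any icmp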
Hope that helps.

How to route TCP/IP responses through a different interface?

I have two machines, each with two valid network interfaces: an Ethernet interface eth0 and a tun/tap interface gr0. The goal is to start a TCP connection on machine A using interface gr0, but then have the responses (ACKs, etc.) from machine B come back over the Ethernet interface eth0. So machine A sends out a SYN on gr0, and machine B receives the SYN on its own gr0 but then sends its SYN/ACK back through eth0. The tun/tap device is a GNU Radio wireless link, and we just want the responses to come back over the Ethernet.
What's the easiest way to accomplish this? I need to research TCP/IP more, but I was initially thinking that source-spoofing the outgoing packets would tell the receiver to respond to the spoofed address (which should get routed to eth0). This would involve routing the IPs of the tun/tap interfaces through gr0 and leaving the rest of the traffic to eth0.
We are using Linux, and a Python solution would be preferable.
Thanks for looking!
You could add an additional address to the lo interface on each system and use these new addresses as the TCP connection endpoints. You can then use static routes to direct which path each machine takes to reach the other machine's lo address.
For example:
Machine A:
ip addr add 1.1.1.1/32 dev lo
ip route add 2.2.2.2/32 dev eth0 via <eth0 default gateway>
Machine B:
ip addr add 2.2.2.2/32 dev lo
ip route add 1.1.1.1/32 dev gr0
Then bind to 1.1.1.1 on machine A and connect to 2.2.2.2.
You may also be interested in enabling the logging of martian packets (net.ipv4.conf.all.log_martians) and disabling reverse path filtering (net.ipv4.conf.<interface>.rp_filter) on the affected interfaces.
These sysctl variables are accessible via the sysctl utility and/or the /proc filesystem.
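A sketch of those knobs, using the interface names from the question (runtime only; add the same keys to /etc/sysctl.conf to persist across reboots):
# log packets whose source address looks impossible on the receiving interface
sysctl -w net.ipv4.conf.all.log_martians=1
# relax reverse-path filtering on the asymmetric pair
sysctl -w net.ipv4.conf.eth0.rp_filter=0
sysctl -w net.ipv4.conf.gr0.rp_filter=0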
