virtualbox vm cannot be accessed from outside - linux

I installed a vbox VM on Ubuntu 18.04 and used a bridged network by adding these parameters:
--bridgeadapter2 eno1 --nicpromisc2 allow-all
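(These are VBoxManage modifyvm options; presumably the full command was something like the sketch below - the VM name and the use of adapter 2 are assumptions based on the flags shown.)
VBoxManage modifyvm "myvm" --nic2 bridged --bridgeadapter2 eno1 --nicpromisc2 allow-all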
Everything works fine: the VM can ping outside and the host can ping the VM, but outside hosts cannot ping the VM:
(outside hosts in the same subnet can ping the VM, for example 10.124.214.x can ping the VM)
# 10.124.214.116 is vm, 10.124.214.4 is host, 10.124.12.103 is outside IP
# From host to vm
traceroute 10.124.214.116
traceroute to 10.124.214.116 (10.124.214.116), 30 hops max, 60 byte packets
1 10.124.214.116 (10.124.214.116) 0.232 ms 0.197 ms 0.191 ms
# From vm to outside
ping 10.124.12.103
PING 10.124.12.103 (10.124.12.103) 56(84) bytes of data.
64 bytes from 10.124.12.103: icmp_seq=1 ttl=63 time=1.38 ms
The tricky thing is that the vbox interface is not like a normal Linux tun/tap interface: I can see the interface inside the VM, but there is nothing I can operate on from the host, and there is no bridge on the host.
Is there any API I can use to troubleshoot vbox?

Cheers code farmer
You are right about the bridge. The thing here is that your VM is currently behind a NAT created by VirtualBox (see the different subnets you mentioned).
What you can do here is create a new bridge on the host machine (good instructions HERE).
Using this setup you will have to change your networking settings slightly:
                                                            VM Host
                                     +--------------------------------------------------------------------+
                                     |                                       -> VM A (10.124.214.5/24)     |
Outside network (10.124.214.0/24) -> | eno1 (no IP) -> br0 (10.124.214.4/24) -> VM B (10.124.214.6/24)     |
                                     |                                       -> VM C (10.124.214.7/24)     |
                                     +--------------------------------------------------------------------+
Then you can assign your VM to br0. Depending on your outside network settings, you might need to set a static IP on your VM.
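If you go this route on an Ubuntu 18.04 host, a rough iproute2 sketch might look like this (the addresses are taken from your example, the 10.124.214.1 gateway is an assumption, and the change is not persistent across reboots - do it from a console rather than over SSH, since flushing eno1 drops connectivity):
ip link add name br0 type bridge
ip link set br0 up
ip addr flush dev eno1                        # move the host IP off eno1 ...
ip link set eno1 master br0
ip addr add 10.124.214.4/24 dev br0           # ... and onto br0
ip route add default via 10.124.214.1 dev br0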

Finally, I found the root cause:
There are two interfaces in my VM:
The first one is NAT, the second one is bridged. By default, vbox sets the NAT interface as the default route, so when I send out packets, it uses the NAT interface. But the HOST and VM are in the same subnet, so when connecting to the HOST, it uses the bridged interface. To access this bridged interface from outside, I need to add another default route entry with the ip route command:
sudo ip route add default via 10.124.214.116
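For anyone hitting the same thing, a quick way to see which NIC owns the default route inside the VM (the interface names below are the typical ones and are assumptions; VirtualBox's NAT NIC usually sits on 10.0.2.15 with gateway 10.0.2.2):
ip route show        # the default route normally points at the NAT NIC (via 10.0.2.2)
ip -br addr          # e.g. enp0s3 = NAT NIC, enp0s8 = bridged NIC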

Related

How to setup tailscale as a transparent l2 switch

I have two machines, vm1 and vm2, with tailscale installed on both.
Each machine is running lxd with containers.
Each machine has its own private subnet, 10.55.1.0/24 and 10.55.5.0/24 respectively.
Tailscale is set up to advertise routes, so that containers on either vm1 or vm2 can talk to each other.
Containers on either vm1 or vm2 can ping containers on the other host, and TCP and UDP are working fine.
The problem is that once the packets jump through the tailscale tunnel, they lose their source IP and instead have the IP address of the tailscale0 interface of the machine from which they originated.
I.e. container1 (with address 10.55.1.20) pings container2 on vm2 (with address 10.55.5.20).
When the packet arrives on vm2, it looks like it is from vm1 (100.64.x.x) instead of 10.55.1.20.
I cannot seem to find the right combination of tailscale up flags for tailscale not to NAT the source address.
--snat-subnet-routes=false looks like the right flag to use, but I can't see any difference in my testing.
vm1 tailscale up command:
tailscale up --accept-routes --accept-dns=false --advertise-routes=10.55.5.0/24 --snat-subnet-routes=false
vm2 tailscale up command is the same other than the advertised subnet.
What I want to see:
On container2, any packets from container1 should have a source address of 10.55.1.20, rather than the 100.64.x.x address of vm1.
vm1 and vm2 are Debian Linux boxes running the latest tailscale client (1.26.1).
I tried setting up a bridge with tailscale0 as outlined here:
Bridged interfaces and Tailscale "Raspberry"
but have not had any success - though that could be a different question.
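(For completeness, one way to check whether SNAT is still being applied on the sending host is to watch the nat table there while pinging from container1; this is only a rough check, and the chain names tailscale installs vary by version:)
iptables -t nat -L POSTROUTING -n -v          # look for a MASQUERADE/SNAT rule whose counters increase
iptables -t nat -S | grep -i -E 'ts-|tailscale'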

Proxmox with OPNsense as Firewall/GW - routing issue

This setup is based on a Proxmox host sitting behind an OPNsense VM hosted on the Proxmox itself. The OPNsense VM will protect Proxmox, offer a firewall, a private LAN and DHCP/DNS for the VMs, and provide an IPsec connection into the LAN to access all VMs/Proxmox services which are not NATed. The server is the typical Hetzner server, so there is only one NIC but multiple IPs and/or subnets on this NIC.
Due to the cluster blocker with the PCI-passthrough setup, this is my alternative:
Proxmox server with 1 NIC (eth0)
3 public IPs; IP2/IP3 are routed by MAC in the datacenter (to eth0)
KVM bridged setup (eth0 no IP, vmbr0 bridged to eth0 with IP1)
A private LAN on vmbr30, 10.1.7.0/24
A Shorewall firewall on the Proxmox server
To better outline the setup, I created this drawing (not sure it's perfect; tell me what to improve):
Textual description:
Network interfaces on Proxmox
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet manual
pre-up sleep 2
auto vmbr0
# docs at
iface vmbr0 inet static
address External-IP1(148.x.y.a)
netmask 255.255.255.192
# Our gateway is reachable via Point-to-Point tunneling
# put the Hetzner gateway IP address here twice
gateway DATACENTER-GW1
pointopoint DATACENTER-GW1
# Virtual bridge settings
# this one is bridging physical eth0 interface
bridge_ports eth0
bridge_stp off
bridge_fd 0
pre-up sleep 2
bridge_maxwait 0
metric 1
# Add routing for up to 4 dedicated IP's we get from Hetzner
# You need to
# opnsense
up route add -host External-IP2(148.x.y.b)/32 dev vmbr0
# rancher
up route add -host External-IP3(148.x.y.c)/32 dev vmbr0
# Assure local routing of private IPv4 IP's from our
# Proxmox host via our firewall's WAN port
up ip route add 10.1.7.0/24 via External-IP2(148.x.y.b) dev vmbr0
auto vmbr30
iface vmbr30 inet static
address 10.1.7.2
netmask 255.255.255.0
bridge_ports none
bridge_stp off
bridge_fd 0
pre-up sleep 2
metric 1
Shorewall on Proxmox
interfaces
wan eth0 detect dhcp,tcpflags,nosmurfs
wan vmbr0 detect bridge
lan vmbr30 detect bridge
policies:
lan lan ACCEPT - -
fw all ACCEPT - -
all all REJECT INFO -
OPNsense
WAN is ExternalIP2, attached to vmbr0 with MAC-XX
LAN is 10.1.7.1, attached to vmbr30
What is working:
The basic setup works fine: I can access OPNsense on IP2, I can access Proxmox on IP1, and I can access the rancher VM on IP3 - that is the part which does not need any routing.
I can connect with an IPsec mobile client to OPNsense, which offers access to the LAN (10.1.7.0/24) from a virtual IP range 172.16.0.0/24
I can access 10.1.7.1 (OPNsense) while connected with OpenVPN
I can access 10.1.7.11 / 10.1.7.151 from OPNsense (10.1.7.1) (shell)
I can access 10.1.7.11 / 10.1.7.1 from the other VM (10.1.7.151) (shell)
What's not working:
a) Connecting to 10.1.7.11/10.1.7.151 or 10.1.7.2 from the IPsec client
b) [SOLVED in UPDATE 1] Connecting to 10.1.7.2 from 10.1.7.1 (OPNsense)
c) It seems like I have asynchronous routing: while I can access e.g. 10.1.7.1:8443, I see a lot of entries
d) IPsec LAN sharing should only require a rule in the IPSEC chain, "from * to LAN ACCEPT" - but that did not work for me; I had to add "from * to * ACCEPT"
Questions:
I) Of course I want to fix a), b), c) and d), probably starting with understanding c) and d)
II) Would it help, in this setup, to add a second NIC?
III) Could it be an issue that I activated net.ipv4.ip_forward on the Proxmox host (shouldn't it rather be routed?)
Once I get this straightened out, I would love to publish a comprehensive guide on how to run OPNsense as an appliance with a private network on Proxmox, passing some services to the outside world using HAProxy+LE and also accessing the private LAN using IPsec.
UPDATE1:
Removing up ip route add 10.1.7.0/24 via IP2 dev vmbr0 from the vmbr0 stanza on Proxmox fixed the issue that Proxmox could neither access 10.1.7.0/24 nor be accessed from the LAN network.
UPDATE2:
I created an updated/changed setup where PCI passthrough is used. The goals are the same - it reduces the complexity - see here
Some direly needed rough basics first:
There's routing, which is IP's and packets on layer3.
There's switching, which is MAC's and frames on layer2.
Furthermore, you speak of vmbr0/1/30, but only 0 and 30 are shown in your config.
Shorewall does not matter for your VM connectivity (iptables is layer 3; ebtables would be layer 2, for contrast. Your frames should just fly past Shorewall, not reaching the HV itself but going to the VMs directly. Shorewall is just a frontend that uses iptables in the background).
With that out of the way:
Usually you don't need any routing on the Proxmox BRIDGES. A bridge is a switch, as far as you are concerned. vmbr0 is a virtual external bridge which you linked with eth0 (thus creating an in-kernel link between a physical NIC and your virtual interface, so that packets flow at all). The bridge could also run without any IP attached to it. But to keep the HV accessible, usually an external IP is attached to it. Otherwise you would have to set up your firewall gateway plus a VPN tunnel, give vmbr30 an internal IP, and then you could access the internal IP of the HV from the internet after establishing a tunnel connection - but that's just for illustration purposes for now.
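To illustrate the "bridge without an IP" point, a minimal ifupdown stanza might look like the sketch below (purely illustrative; in your current setup vmbr0 keeps IP1 so the HV stays reachable):
# hypothetical stanza: vmbr0 as a pure layer-2 switch, no IP on the HV
auto vmbr0
iface vmbr0 inet manual
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0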
Your IPsec connectivity issue sounds an awful lot like a misconfigured VPN, but mobile IPsec is also often a pain in the butt to work with due to protocol implementation differences. OpenVPN works a LOT better, but you should know your basics about PKI and certificates to implement it. Plus, if OPNsense is as counter-intuitive as pfSense when it comes to OpenVPN, you are possibly in for a week of stabbing in the dark. For pfSense there is an installable OpenVPN config export package which makes life quite a bit easier; I don't know whether this one is available for OPNsense, too.
It does not so much look like what you call asynchronous routing but rather like a firewall issue you had, concerning the first picture.
For your tunnel firewall (interface IPSEC or interface OpenVPN on OPNsense, depending on the tunnel you happen to use), just leave it at ipv4 any:any to any:any. You should only get into the LAN net anyway by the definition of the tunnel itself; OPNsense will automatically send the packets out from the LAN interface only (see the second picture).
net.ipv4.ip_forward = 1 enables routing in the kernel on the Linux interfaces where you activated it. You can then do NAT-ing via iptables, which in theory would make it possible to get into your LAN via your external HV IP on vmbr0, but that is not something you achieve by accident, so you might be able to disable forwarding again without losing connectivity - at least to the HV. I am unsure about your extra routes for the other external IPs, but these should be configurable the same way from within OPNsense directly (create the point-to-point links there; the frames will transparently flow through vmbr0 and eth0 to the Hetzner gateway), which would be cleaner.
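To check and, if it turns out to be unneeded, disable forwarding on the HV, something like the following should do (a sketch; test from a console session in case connectivity to the HV turns out to depend on it):
sysctl net.ipv4.ip_forward              # 1 = the kernel forwards packets between interfaces
sysctl -w net.ipv4.ip_forward=0         # runtime change only
# to persist it, set net.ipv4.ip_forward = 0 in /etc/sysctl.conf and run: sysctl -p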
Also, you should not make the rancher VM directly accessible externally, thus bypassing your firewall; I doubt this is what you want to achieve. Rather, put the external IP onto OPNsense (as a virtual IP of type "ip alias"), set up 1:1 NAT from IP3 to the internal IP of the rancher VM, and do the firewalling via OPNsense.
Some ASCII art of how things should possibly look, from what I can discern from your information so far. For the sake of brevity only interfaces are shown, no distinction is made between physical and virtual servers, and no point-to-point links are drawn.
[hetzner gw]
     |
     |
     |
  [eth0]   (everything below happens inside your server)
    ||
    ||   (double lines here to hint at the physical-virtual linking linux does,
    ||    which is achieved by linking eth0 to vmbr0)
    ||
    ||  +-- HV access via IP1 -- shorewall/iptables for HV firewalling
    ||  |
 [vmbr0]
  IP1 |
      |
[wan nic opn]
  IP2 on wan directly,
  IP3 as virtual IP of type ip alias
      x
      x  (here opn does routing/NAT/tunnel things)
      x
      x  set up 1:1 NAT IP3-to-10.1.7.11 in opn for full access
      x  set up single port forwardings for the 2nd vm if needed
      x
[lan nic opn]
  10.1.7.1
      |
  [vmbr30]
    |    |
    |    +------+
    |           |
[vm1 eth0]  [vm2 eth0]
 10.1.7.11   10.1.7.151
If you want to firewall the HV via OPNsense, too, these would be the steps to do so while maintaining connectivity (a rough sketch of the resulting interfaces config follows the list):
remove IP1 from [vmbr0]
put it on [wan nic opn]
put internal ip (IP_INT) from opn lan onto [vmbr30]
set up 1:1 NAT from IP
set all firewall rules
swear like hell when you break the firewall and cannot reach the hv anymore
see whether you can access the HV via an iKVM solution, hoping to get a public IP onto it again so you can use the console window in Proxmox to fix or reinstall the firewall.
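A rough sketch of how the HV's interfaces stanzas might end up after those steps (the addresses follow the existing 10.1.7.0/24 LAN; treat this as an assumption, not a tested config):
# hypothetical /etc/network/interfaces excerpt after moving IP1 into opnsense
auto vmbr0
iface vmbr0 inet manual
    # no IP on the HV anymore - pure layer-2 uplink for the opnsense WAN
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

auto vmbr30
iface vmbr30 inet static
    address 10.1.7.2            # internal HV address (IP_INT)
    netmask 255.255.255.0
    gateway 10.1.7.1            # opnsense LAN IP becomes the HV's default gateway
    bridge_ports none
    bridge_stp off
    bridge_fd 0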

VMWare Guest Can't Connect to Host Server

I'm running OS X Sierra in VMware Player on top of Linux Mint 18. I can ping the Linux host, but the guest won't connect to my server through the browser. I have a separate machine with a test server set up on the same local network. I can reach that one via the browser, but not the server on the host. I am trying to connect using IPv4, if that's relevant.
I have tried using 'Bridged', 'NAT' and 'Host Only' to no avail.
Is there some sort of Mac firewall keeping me from connecting with the host?
Any ideas of how to fix?
Edit:
A partial fix, from this answer...
I can specify an IP address for the server in the source code (node), but this is obviously suboptimal as the IP addresses are dynamically assigned. It works - I can view the page in the guest browser - but I have to manually specify the IP address on both ends. How do I get the guest to see the 'localhost' of the host? Essentially, I don't want to have to look up my IP address and change the code to suit every time I reconnect to my network.
Edit:
I have another VM guest running Windows 10 with the same issue, so it is at least not Mac specific. It is probably something directly related to VMware.
This assumes you use the Bridged network type for the VM.
Try temporarily disabling the local OS X firewall inside the VM:
/usr/libexec/ApplicationFirewall/socketfilterfw --setglobalstate=off
Temporarily disable the local server's firewall rules:
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
Check the IP address of the OS X VM; it should be on the same network as your server:
ifconfig
Check the IP address of the local server; it should be on the same network as the VM's IP:
ifconfig
If all firewall rules are disabled and both machines (the OS X VM and the local server) are on the same subnet, then you should be able to ping the VM's IP address from the local server. If the addresses are on different subnets, then use a statically assigned IP in the OS X VM, or change the DHCP assignment in your router (if it is the one assigning IPs): you can check the MAC address of the VM's network interface and configure the router to always hand it the right IP address, if that is possible there.
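If the underlying goal is to stop hard-coding the host's changing LAN address in the node code, one possible workaround (a sketch, assuming the VM uses NAT) is to give the guest a stable name for the host's address on the VMware NAT network; on a Linux host that network lives on the vmnet8 adapter, whose host-side address rarely changes. The subnet below is a placeholder:
# on the Linux Mint host: find the host's address on the VMware NAT network
ip addr show vmnet8
# in the OS X guest: map a stable hostname to that address and use it in the code/browser
echo "192.168.x.1  devhost" | sudo tee -a /etc/hosts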

Is the IP address resolution wrong in my EC2 instance?

Hi,
The following is the result of netstat -a -o -n on my Windows EC2 instance.
I see that port 80 is being used by different processes in both the local and foreign address columns. Does this mean that NAT is not resolving the private and public IP addresses of the EC2 instance properly?
What should I do to fix it? On the private IP, port 80 is occupied by the node server, while a Chrome connection is occupying port 80 on the foreign address.
Thanks.
Try setting a DHCP options set.
I had a problem with this: Windows domain machines were not being resolved. On Windows networks with AD, you must fill in your domain in the DHCP options.
Look at http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_DHCP_Options.html
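If you prefer the CLI over the console, creating and attaching an options set looks roughly like this (the domain name and resource IDs are placeholders):
aws ec2 create-dhcp-options --dhcp-configurations \
    "Key=domain-name,Values=corp.example.com" \
    "Key=domain-name-servers,Values=AmazonProvidedDNS"
aws ec2 associate-dhcp-options --dhcp-options-id dopt-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0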

Send traffic to self over physical network on Ubuntu

I have a dual port ethernet NIC and let's say I have connected 2 ports in a loop and assigned the following IPs to the 2 ethernet interfaces:
eth2 -> 192.168.2.1
eth3 -> 192.168.3.1
I want to send traffic from 1 port to another over the physical network, e.g. ping 192.168.3.1 from 192.168.2.1. However, the TCP/IP stack in the Linux kernel recognizes that these two addresses are local and instead sends the traffic to the loopback adapter, so the traffic never hits the physical network.
The closest I have to a solution is Anastasov's send-to-self patch, which unfortunately has been discontinued since kernel 3.6, so it won't work on Ubuntu 13.10 (kernel 3.11) for me. I've tried rewriting the patch for 3.11, but I can't seem to locate these files in the Ubuntu distro:
include/linux/inetdevice.h
net/ipv4/devinet.c
net/ipv4/fib_frontend.c
net/ipv4/route.c
Documentation/networking/ip-sysctl.txt
Is there a way I can get the send-to-self patch to work, or an alternative?
You can use network namespaces for this purpose.
As ip-netns's manpage says:
A network namespace is logically another copy of the network stack,
with its own routes, firewall rules, and network devices.
The following is just a copy of this answer:
Create a network namespace and move one of interfaces into it:
ip netns add test
ip link set eth1 netns test
Start a shell in the new namespace:
ip netns exec test bash
Then proceed as if you had two machines. When finished exit the shell and delete the namespace:
ip netns del test
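For concreteness, before deleting the namespace, the "two machines" part with the question's addresses would look roughly like this (run as root like the commands above; here eth3 plays the role of the interface moved into the namespace - the copied answer used eth1 as an example - and eth2 stays in the default namespace):
ip netns exec test ip addr add 192.168.3.1/24 dev eth3
ip netns exec test ip link set eth3 up
ip addr add 192.168.2.1/24 dev eth2
ip link set eth2 up
# this ping now has to cross the physical cable instead of the loopback device
ping 192.168.3.1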
You can try configuring the routing table by running the "ip" command:
ip route add to unicast 192.168.3.1 dev eth2
ip route add to unicast 192.168.2.1 dev eth3
The new routes would be added to the routing table and should take effect before the egress routing lookup hits the host-local routes for "192.168.3.1" and "192.168.2.1"; therefore, the traffic should be sent through the physical interfaces "eth2" and "eth3" instead of the loopback "lo".
I have never tried this myself, but it should work.
