Connecting Open vSwitch with two virtual machines - Linux

I'm running Open vSwitch on a VirtualBox VM,
and I want to connect two other VirtualBox VMs to that Open vSwitch instance. I did the following:
1) First I created a VM running Ubuntu (Lubuntu) and installed OVS using the following command
sudo apt-get install openvswitch-switch
2) Then I defined two adapters on that VM and set them to Internal Network, since the other VMs need to connect to this machine internally through VirtualBox.
But how can I connect the two VirtualBox VMs, which run on separate subnets (10.1.1.1 and 10.1.2.1), using this OVS?
The diagram is as follows:
http://www.gliffy.com/go/publish/image/10986491/L.png

I don't think you need OVS in this case; you can achieve this just by providing gateway IPs.
Suppose you have created one internal network, internal1, with subnet 192.170.10.0/24, and another, internal2, with subnet 192.170.20.0/24.
Configuration on VM1:
auto eth0
iface eth0 inet static
address 192.170.10.10
network 192.170.10.0
netmask 255.255.255.0
broadcast 192.170.10.255
gateway 192.170.10.20
Configuration on VM2:
auto eth0
iface eth0 inet static
address 192.170.20.10
network 192.170.20.0
netmask 255.255.255.0
broadcast 192.170.20.255
gateway 192.170.20.20
Configuration on OVS:
auto eth0
iface eth0 inet static
address 192.170.10.20
network 192.170.10.0
netmask 255.255.255.0
broadcast 192.170.10.255
auto eth1
iface eth1 inet static
address 192.170.20.20
network 192.170.20.0
netmask 255.255.255.0
broadcast 192.170.20.255
With the above configuration you can ping between VMs on different subnets.
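Note that for the middle VM to actually route between the two subnets, IP forwarding also has to be enabled on it; this step is not shown above, but a minimal sketch is:
sudo sysctl -w net.ipv4.ip_forward=1
# to make it permanent, set net.ipv4.ip_forward=1 in /etc/sysctl.conf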
However, if you still want to use OVS, here is a way to configure it.
Configuration on VM1:
auto eth0
iface eth0 inet static
address 192.170.10.10
network 192.170.10.0
netmask 255.255.255.0
broadcast 192.170.10.255
Configuration on VM2:
auto eth0
iface eth0 inet static
address 192.170.20.10
network 192.170.20.0
netmask 255.255.255.0
broadcast 192.170.20.255
Configuration on OVS:
Set the interfaces to manual in /etc/network/interfaces
auto eth0
iface eth0 inet manual
auto eth1
iface eth1 inet manual
Create two bridges
sudo ovs-vsctl add-br vm1-br
sudo ovs-vsctl add-br vm2-br
Add respective ports.
sudo ovs-vsctl add-port vm1-br eth0
sudo ovs-vsctl add-port vm2-br eth1
Connect the two bridges using patch interfaces
sudo ovs-vsctl add-port vm1-br patch1
sudo ovs-vsctl set interface patch1 type=patch
sudo ovs-vsctl set interface patch1 options:peer=patch2
sudo ovs-vsctl add-port vm2-br patch2
sudo ovs-vsctl set interface patch2 type=patch
sudo ovs-vsctl set interface patch2 options:peer=patch1
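If you prefer, the same patch setup can be written as two chained ovs-vsctl commands (equivalent, just more compact):
sudo ovs-vsctl add-port vm1-br patch1 -- set interface patch1 type=patch options:peer=patch2
sudo ovs-vsctl add-port vm2-br patch2 -- set interface patch2 type=patch options:peer=patch1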
Bring up the bridges
sudo ifconfig vm1-br up
sudo ifconfig vm2-br up
Set the IP addresses
sudo ifconfig vm1-br 192.170.10.20/24
sudo ifconfig vm2-br 192.170.20.20/24
Now you can ping between VMs
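A quick way to sanity-check the result is to list what OVS knows about; the output should show both bridges, their eth ports, and the patch ports with their peers:
sudo ovs-vsctl show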

Related

Debian11 VM cannot ping gateway, but can ping Proxmox hypervisor through linux-bridge

I am encountering a network issue with linux-bridge in a small Proxmox cluster lab.
I have a Debian11 VM installed that can't ping the gateway, but can ping the Proxmox hypervisor.
The hypervisor uses the same bridge as the VM to be reachable.
The routes of the VM and hypervisor seem to be OK.
Debian IP: 192.168.3.240/24 via ens18 (card connected to vmbr0)
Hypervisor IP: 192.168.3.231/24 (indicated in vmbr0 configuration)
Gateway IP: 192.168.3.1/24
192.168.3.231 => 192.168.3.1: OK
192.168.3.240 => 192.168.3.231: OK
192.168.3.240 => 192.168.3.1: NOT OK
Proxmox hypervisor configuration:
/etc/network/interfaces :
auto lo
iface lo inet loopback
auto enp0s3
iface enp0s3 inet manual
auto enp0s8
iface enp0s8 inet manual
auto enp0s9
iface enp0s9 inet manual
auto enp0s10
iface enp0s10 inet manual
auto bond2
iface bond2 inet static
address 192.168.10.2/24
bond-slaves enp0s10 enp0s9
bond-miimon 100
bond-mode balance-rr
#CEPH
auto vmbr0
iface vmbr0 inet static
address 192.168.3.231/24
gateway 192.168.3.1
bridge-ports enp0s3
bridge-stp off
bridge-fd 0
Edit (routing info):
brctl show:
bridge name bridge id STP enabled interfaces
vmbr0 8000.08002701da38 no enp0s3
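Not part of the original post, but one way to narrow this down would be to capture on both the bridge and its member port on the hypervisor while the VM pings the gateway, to see where the packets stop:
# run on the Proxmox host while pinging 192.168.3.1 from the Debian VM
tcpdump -eni vmbr0 host 192.168.3.1
tcpdump -eni enp0s3 host 192.168.3.1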

Can't access website in virtualbox after Windows 10 update

Host: Windows 10 (updated)
Guest: Ubuntu 14.04.5
Virtualbox: 5.2.12 r122591 (Qt5.6.2)
After the Windows update I tried to access my virtual machine and it kept giving me random errors. After dozens of tutorials and guides, my current settings are:
Hosts file (on Windows): 192.168.56.2 devserver
/etc/network/interfaces file (on Ubuntu; couldn't paste):
auto eth1
iface eth1 inet static
address 192.168.56.2
netmask 255.255.255.0
broadcast 192.168.56.0
auto eth0
iface eth0 inet dhcp
auto eth2
iface eth2 inet dhcp
Virtualbox network:
Attached to: Host-only Adapter
Name: VirtualBox Host-Only Ethernet Adapter #3
Adapter Type: PCnet-FAST III (am79C973)
Promiscuous Mode: Allow All
MAC Address: 0800275A1DBB
But I still can't connect to the website. It keeps giving me the "page not found" message.
Solution
/etc/network/interfaces file
auto eth0
iface eth0 inet static
address 192.168.56.2
netmask 255.255.255.0
broadcast 192.168.56.255
hosts file
192.168.56.2 devserver
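After updating /etc/network/interfaces, the interface has to be reconfigured before the host can reach it again; assuming the classic ifupdown tooling on Ubuntu 14.04, something like:
sudo ifdown eth0 && sudo ifup eth0
ip addr show eth0
# then, from the Windows host: ping devserver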

VM can't ping host that's two switches and a router away through NAT

I have a Linux VM (Kali) that's connected to a host-only switch
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.0.40 netmask 255.255.255.0 broadcast 192.168.0.255
The interface is up, and the interfaces file looks like this
auto eth0
iface eth0 inet static
address 192.168.0.40
netmask 255.255.255.0
gateway 192.168.0.254
dns-nameservers 8.8.8.8
The switch is connected to an Ubuntu Server VM that has masquerade NAT enabled for the 192.168.0.0/24 network and is connected via a bridged switch to the actual host, which is running Ubuntu 16.04.
The NAT rule is on the POSTROUTING chain and looks like this
Chain POSTROUTING (policy ACCEPT 20 packets, 1440 bytes)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- * ens33 192.168.0.0/24 0.0.0.0/0
and the interfaces file on the server machine looks like this
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto ens33
iface ens33 inet static
address 172.16.23.100
netmask 255.255.0.0
gateway 172.16.0.254
dns-nameservers 8.8.8.8
#iface ens33 inet dhcp
#Gateway for LAN1 - 192.168.0.0/24
auto ens38
iface ens38 inet static
address 192.168.0.254
netmask 255.255.255.0
The routing table on the host looks like this
default via 172.16.0.254 dev enp3s0
169.254.0.0/16 dev enp3s0 scope link metric 1000
172.16.0.0/16 dev enp3s0 proto kernel scope link src 172.16.0.6
Now I'm trying to ping the host from the Kali machine (from 192.168.0.40 to 172.16.0.6), but the ping isn't going through. I ran tcpdump on the host machine's only interface, filtering on host 192.168.0.40, but it doesn't pick up any traffic; the NAT rule isn't being used for some reason.
I can ping the default gateway and the server/router VM from Kali, but the ping to the host doesn't go through. What am I doing wrong?
What I think should happen: the packet goes to the server via Kali's default gateway; once it's in the server machine, its source gets translated to ens33's address; from there it goes to the host, and the host sends the reply back to ens33, because that should now be the source IP. But clearly that's not happening.
I'm bad at paying attention to things: I had put the NAT rule on eth33 instead of ens33. I fixed it and it works now.
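The exact command used isn't shown, but the fix amounts to re-adding the MASQUERADE rule with the correct output interface, roughly:
# remove the rule that referenced the mistyped eth33 (if still present), then re-add it on ens33
sudo iptables -t nat -D POSTROUTING -s 192.168.0.0/24 -o eth33 -j MASQUERADE
sudo iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o ens33 -j MASQUERADE
# the router VM also needs forwarding enabled
sudo sysctl -w net.ipv4.ip_forward=1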

SSH on Raspberry Pi3

I installed "ubuntu-17.04-desktop-amd64" and "qt-opensource-linux-x64-5.8.0" on my laptop.
I wrote an application with Qt 5.8 for Windows. It works fine on Windows and Ubuntu.
IP address of the Raspberry Pi ("hostname -I"): 169.254.181.63
Enable SSH:
On the Raspberry Pi: from the Preferences menu of Raspbian.
In Ubuntu:
sudo service ssh status
....
.... Starting OpenBSD Secure Shell server.
.... Server listening on 0.0.0.0 port 22.
.... Server listening on :: port 22.
.... Started OpenBSD Secure Shell server.
I connected the Raspberry Pi to the laptop with an Ethernet cable.
I created a new device (Generic Linux Device) under "Tools -> Options… -> Devices"
Host name: 169.254.181.63
SSH port:22
Username: pi
Password: 1 (set by me)
Test result:
Device test: SSH connection: Network unreachable.
In Ubuntu:
ssh pi@169.254.181.63
ssh: connect to host 169.254.181.63 port 22: Network is unreachable
I edited the interfaces file to set the network configuration on the Raspberry Pi:
sudo nano /etc/network/interfaces
Update:
source-directory /etc/network/interfaces.d
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 192.168.100.100
netmask 255.255.255.0
allow-hotplug wlan0
iface wlan0 inet manual
Update:
But after rebooting the Raspberry Pi and running "hostname -I", I get "192.168.100.100 169.254.181.63".
You should configure a static IP in this way (see this link):
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 192.168.100.100
network 192.168.100.0
netmask 255.255.255.0
broadcast 192.168.100.255
You have to move address, netmask, etc. below the line iface eth0 inet static.
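After saving the file, the new address can be applied and checked with something like this (assuming ifupdown is managing eth0, as in the config above):
sudo ifdown eth0 && sudo ifup eth0   # or simply reboot
hostname -I                          # should now include 192.168.100.100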
Check whether your IP address is correct. Open a terminal and use these commands to find the exact IP address:
cd /var/misc
cat misc
Copy that IP address and use this command:
ssh pi@ip_address
Change the order: put the settings under iface eth0, not under lo.
iface eth0 inet static
address 192.168.100.100
network 192.168.100.0
netmask 255.255.255.0
broadcast 192.168.100.255

Why are UDP packets sent from default interface address instead of the address where the client packet is received?

For a long time I had trouble with several programs (early versions of TeamSpeak 3, netcat, OpenVPN) that communicate over UDP. Today I identified the problem.
My main goal was to use OpenVPN over UDP, which did not seem to work on my server, which has multiple IP addresses (it runs Ubuntu Server, kernel 3.2.0-35-generic).
Using the following config:
# ifconfig -a
eth0 Link encap:Ethernet HWaddr 11:11:11:11:11:11
inet addr:1.1.1.240 Bcast:1.1.1.255 Mask:255.255.255.224
...
# cat /etc/network/interfaces
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 1.1.1.240
broadcast 1.1.1.255
netmask 255.255.255.224
gateway 1.1.1.225
up ip addr add 1.1.1.249/27 dev eth0
down ip addr del 1.1.1.249/27 dev eth0
up ip addr add 2.2.2.59/29 dev eth0
down ip addr del 2.2.2.59/29 dev eth0
up route add -net 2.2.2.56 netmask 255.255.255.248 gw 2.2.2.57 eth0
# default route to access subnet
up route add -net 1.1.1.224 netmask 255.255.255.224 gw 1.1.1.225 eth0
Problem:
A simple tcpdump on the server reveals that UDP packets (tested with netcat and OpenVPN) received at 2.2.2.59 are answered from 1.1.1.240 (client: 123.11.22.33)
13:55:30.253472 IP 123.11.22.33.54489 > 2.2.2.59.1223: UDP, length 5
13:55:36.826658 IP 1.1.1.240.1223 > 123.11.22.33.54489: UDP, length 5
Question:
Is this problem due to a wrong configuration of the network interface, or due to the application itself (OpenVPN, netcat)?
Is it possible for an application to listen on multiple IP addresses and reply from the interface address on which it received the packet over UDP, like it does when using TCP?
I know that you can bind an application to a specific IP, but that would not be the way to go.
I can't see how this behaviour would be due to the UDP protocol itself, since the application can determine on which interface address the packet was received.
Specifically, OpenVPN has the --multihome option for handling this scenario correctly.
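For OpenVPN, that means starting the server with the multihome option, either on the command line or as a multihome line in the server config; for example (the config path here is just a placeholder):
sudo openvpn --config /etc/openvpn/server.conf --multihome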
