Forward multicast IPv6 packet to another interface

I'm working on implementing multicasting for Contiki.
I have a couple of Ubuntu VMs on a NAT network, each with an eth0 interface.
One of these VMs is connected to a Contiki border router (a Zigduino), which creates its own interface, tun0 (see also http://anrg.usc.edu/contiki/index.php/RPL_Border_Router ).
One of the VMs multicasts UDP packets to ff1e:: . I can see in Wireshark that every VM receives this multicast packet, but my border router's tun0 interface never sees it. I'd like to forward all multicast packets received on eth0 of the VM connected to the border router to its tun0 interface, thereby allowing the border router to see the packet and inject it into its network.
How can I do this in Ubuntu? I'm stuck; I tried adding routes, but it doesn't work.
Addendum:
The ifconfig output of the VM with the router:
eth0 Link encap:Ethernet HWaddr 00:50:56:24:dd:45
inet addr:192.168.59.131 Bcast:192.168.59.255 Mask:255.255.255.0
inet6 addr: fe80::250:56ff:fe24:dd45/64 Scope:Link
inet6 addr: bbbb::2/64 Scope:Global
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:40205 errors:0 dropped:0 overruns:0 frame:0
TX packets:25436 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:25834566 (25.8 MB) TX bytes:4091267 (4.0 MB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:734 errors:0 dropped:0 overruns:0 frame:0
TX packets:734 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:51481 (51.4 KB) TX bytes:51481 (51.4 KB)
tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:127.0.1.1 P-t-P:127.0.1.1 Mask:255.255.255.255
inet6 addr: fe80::1/64 Scope:Link
inet6 addr: aaaa::1/64 Scope:Global
UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1
RX packets:7 errors:0 dropped:0 overruns:0 frame:0
TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:586 (586.0 B) TX bytes:1046 (1.0 KB)
This is for IPv6! I've tried something like this:
sudo ip -6 route add ff1e::/64 via fe80::1 dev tun0
but it doesn't work.
Edit:
I tried the suggestion below.
My routing table now looks like this:
sudo ip -6 route
aaaa::/64 dev tun0 proto kernel metric 256
aa00::/8 via bbbb::2 dev eth0 metric 1024
bbbb::/64 dev eth0 proto kernel metric 256
fe80::/64 dev eth0 proto kernel metric 256
fe80::/64 dev tun0 proto kernel metric 256
ff1e::/64 via fe80::1 dev eth0 metric 1024
Note that the second route is the one used by my VMs (which have addresses bbbb::3, bbbb::4, ...) to contact the nodes in the network (which have addresses like aaaa::11:22ff:fe33:4402), and this works. The nodes are connected to the bbbb::2 VM.
However, when a VM publishes to, say, ff1e:101:a::4, the eth0 interface on bbbb::2 sees the packet but still doesn't forward it to tun0.
tun0 has the global address aaaa::1/64, but the command sudo ip -6 route add ff1e::/64 via aaaa::1 dev eth0 gives "RTNETLINK answers: No route to host". Adding a route for the full IPv6 multicast address (sudo ip -6 route add ff1e:101:a::4/128 via fe80::1 dev eth0) does add the route, but also produces no results.
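One detail worth checking with Python's ipaddress module (just an illustration, not part of the setup above): ff1e:101:a::4 does not actually fall inside ff1e::/64, which would explain why the /64 route never matches this destination and only the /128 route could.

```python
import ipaddress

# ff1e:101:a::4 expands to ff1e:0101:000a:0000:...:0004, so its first
# 64 bits differ from ff1e:0:0:0 -- an ff1e::/64 route can never match it.
net = ipaddress.ip_network("ff1e::/64")
addr = ipaddress.ip_address("ff1e:101:a::4")
print(addr in net)  # False: only the /128 route (or a shorter prefix) matches
```

A route on ff1e::/16 (or ff00::/8 with an explicit gateway) would cover all such group addresses at once.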
Second edit: adding the tables after the second suggestion:
looci@looci:~$ sudo ip -6 route add ff1e::/64 dev tun0 table local
looci@looci:~$ ip -6 route
aaaa::/64 dev tun0 proto kernel metric 256
aa00::/8 via bbbb::2 dev eth0 metric 1024
bbbb::/64 dev eth0 proto kernel metric 256
fe80::/64 dev eth0 proto kernel metric 256
fe80::/64 dev tun0 proto kernel metric 256
ff1e::/64 via fe80::1 dev eth0 metric 1024
ff1e:101:a::4 via fe80::1 dev eth0 metric 1024
looci@looci:~$ ip -6 route show table local
local ::1 via :: dev lo proto none metric 0
local aaaa:: via :: dev lo proto none metric 0
local aaaa::1 via :: dev lo proto none metric 0
local bbbb:: via :: dev lo proto none metric 0
local bbbb::2 via :: dev lo proto none metric 0
local fe80:: via :: dev lo proto none metric 0
local fe80:: via :: dev lo proto none metric 0
local fe80::1 via :: dev lo proto none metric 0
local fe80::250:56ff:fe24:dd45 via :: dev lo proto none metric 0
ff1e::/64 dev tun0 metric 1024
ff00::/8 dev eth0 metric 256
ff00::/8 dev tun0 metric 256
looci@looci:~$ ip -6 route show table main
aaaa::/64 dev tun0 proto kernel metric 256
aa00::/8 via bbbb::2 dev eth0 metric 1024
bbbb::/64 dev eth0 proto kernel metric 256
fe80::/64 dev eth0 proto kernel metric 256
fe80::/64 dev tun0 proto kernel metric 256
ff1e::/64 via fe80::1 dev eth0 metric 1024
ff1e:101:a::4 via fe80::1 dev eth0 metric 1024

I'd rather have written sudo ip -6 route add ff1e::/64 via fe80::1 dev eth0, which means every packet received on eth0 destined to ff1e::/64 is sent to the address fe80::1. But don't forget to enable IPv6 forwarding in Linux with sudo sysctl -w net.ipv6.conf.all.forwarding=1.
Furthermore, I guess that if you want to inject this multicast UDP packet into the 6LoWPAN network, you have to activate multicast forwarding with UIP_CONF_IPV6_MULTICAST in your rpl-border-router and choose one of the multicast engines with UIP_MCAST6_CONF_ENGINE.
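As a sanity check on why forwarding ff1e:: traffic is legitimate at all (an illustrative aside, not from the answer): the second byte of an IPv6 multicast address encodes flags and scope, and ff1e:: has global scope, unlike link-local groups such as ff02::, which must never be routed off-link.

```python
import ipaddress

# Per RFC 4291, byte 2 of an IPv6 multicast address is flags|scope.
# For ff1e:: that byte is 0x1e: flags = 1 (transient), scope = 0xE (global).
addr = ipaddress.ip_address("ff1e::1")
assert addr.is_multicast
top16 = int(addr) >> 112        # first 16 bits of the address: 0xff1e
flags = (top16 >> 4) & 0xF
scope = top16 & 0xF
print(hex(flags), hex(scope))   # 0x1 0xe
```

A scope of 0xE (global) means routers may forward the group's traffic between interfaces, which is exactly what the route plus net.ipv6.conf.all.forwarding=1 asks the VM to do.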
UPDATE:
Then you may try something like this:
ip -6 route add ff1e::/64 dev tun0 table local
The complete explanation can be found here.

Related

How to change a DHCP interface to a STATIC interface in an OVH VPS on Ubuntu 16.04

I'm new to Linux and I need your help changing my OVH VPS Ubuntu Server 16.04 LTS interface from DHCP to static.
Currently my /etc/network/interfaces file is:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
# Source interfaces
# Please check /etc/network/interfaces.d before changing this file
# as interfaces may have been defined in /etc/network/interfaces.d
# See LP: #1262951
source /etc/network/interfaces.d/*.cfg
The source path /etc/network/interfaces.d/*.cfg contains only one file, named 50-cloud-init.cfg, and this file contains:
auto lo
iface lo inet loopback
auto ens3
iface ens3 inet dhcp
So my IP address is 149.xxx.xxx.61, and I need to make iface ens3 static for this IP address.
Currently, ifconfig -a shows:
ens3 Link encap:Ethernet HWaddr fa:16:3e:ae:e3:83
inet addr:149.xxx.xxx.61 Bcast:149.xxx.xxx.61 Mask:255.255.255.255
inet6 addr: fe80::xxxx:xxxx:feae:e383/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:894526 errors:0 dropped:0 overruns:0 frame:0
TX packets:297070 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:102187906 (102.1 MB) TX bytes:63602471 (63.6 MB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:14743 errors:0 dropped:0 overruns:0 frame:0
TX packets:14743 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:31459525 (31.4 MB) TX bytes:31459525 (31.4 MB)
How can I do this?
Here is an example file with a guide from one of my servers:
Please note: not all OVH servers have IPv6 support or a routed IPv6 address. You can check this in your control panel; if you do not see an address to use, or don't want IPv6 support, do not include that section.
The link-local address required for routing IPv6 can be found via the OVH API, or by converting the interface's MAC address into the corresponding link-local address.
And of course, if you do not have any failover IP addresses, do not include that section.
Lastly, please note that on some VMware-based VMs the gateway's last octet may need to be .254 instead of .1.
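The MAC-to-link-local conversion mentioned above (modified EUI-64, RFC 4291) can be sketched in a few lines of Python; this is an illustration, not something OVH provides. Applied to the ens3 MAC from the question, it produces the fe80:: address shown in the ifconfig output there.

```python
def mac_to_link_local(mac: str) -> str:
    """Derive the EUI-64 based IPv6 link-local address from a MAC."""
    b = [int(x, 16) for x in mac.split(":")]
    b[0] ^= 0x02                      # flip the universal/local bit
    eui = b[:3] + [0xFF, 0xFE] + b[3:]  # insert ff:fe in the middle
    groups = [f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)

print(mac_to_link_local("fa:16:3e:ae:e3:83"))  # fe80::f816:3eff:feae:e383
```

Note that this gives the interface's own link-local address; the gateway's link-local address for the routing lines below still has to come from the OVH API or manager.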
auto lo
iface lo inet loopback
#auto ens3
#iface ens3 inet dhcp
#--static IPv4--
auto ens3
iface ens3 inet static
address <main server IP>
netmask 255.255.255.255
broadcast <main server IP>
gateway <main server IP's first three octets>.1
dns-nameservers 8.8.8.8 8.8.4.4
#iface ens3 inet6 dhcp #does not seem to work for OVH VPS 2016 range
#--static IPv6--
iface ens3 inet6 static
address <IPv6 address-should begin with 2001:41d0:>
netmask 128
post-up /sbin/ip -6 route add <IPv6 link local-should begin with fe80::> dev ens3
post-up /sbin/ip -6 route add default via <IPv6 link local-should begin with fe80::> dev ens3
pre-down /sbin/ip -6 route del default via <IPv6 link local-should begin with fe80::> dev ens3
pre-down /sbin/ip -6 route del <IPv6 link local-should begin with fe80::> dev ens3
dns-nameservers 2001:4860:4860::8888 2001:4860:4860::8844
#--failover IP address #1--
auto ens3:0
iface ens3:0 inet static
address <failover IP address #1>
netmask 255.255.255.255
broadcast <failover IP address #1>
#--failover IP address #2--
auto ens3:1
iface ens3:1 inet static
address <failover IP address #2>
netmask 255.255.255.255
broadcast <failover IP address #2>
First, you must find your IPv6 address and gateway; they can be found in your OVH manager.
Modify your network config file with this command (use sudo if you are not root):
nano /etc/network/interfaces.d/50-cloud-init.cfg
Add these lines :
iface eth0 inet6 static
address YOUR_IPV6
netmask IPV6_PREFIX
post-up /sbin/ip -f inet6 route add IPV6_GATEWAY dev eth0
post-up /sbin/ip -f inet6 route add default via IPV6_GATEWAY
pre-down /sbin/ip -f inet6 route del IPV6_GATEWAY dev eth0
pre-down /sbin/ip -f inet6 route del default via IPV6_GATEWAY
YOUR_IPV6 must be replaced with your IPv6 address.
IPV6_PREFIX must be replaced with 128.
IPV6_GATEWAY must be replaced with your IPv6 gateway.
Reboot your VPS.
Reference : http://docs.ovh.ca/fr/guides-network-ipv6.html#debian-derivatives-ubuntu-crunchbang-steamos

Why can't my VLAN interface in a Linux network namespace ping the parent interface?

So I'm writing a program that isolates itself with namespaces, but I'm stuck on getting networking to work the way I want. I plan to route my application over Tor, which is a SOCKS5 proxy that exposes a SocksPort on a network interface.
My application needs to work when it is already isolated with virtual machines and host-only routing: Tor on the host is bound to, say, vboxnet1 (192.168.56.1:9051), eth0 inside the VM is 192.168.56.101, and then I have a network namespace inside that. Because of this, I can't simply use a veth pair and bind Tor to the veth on the parent namespace, because Tor is not even in the virtual machine. So ultimately, I need to get a connection to the SocksPort on 192.168.56.1:9051, from a namespace, through eth0 (192.168.56.101) in the parent namespace. However, the solution also has to work when I'm not in a virtual machine and Tor is in the parent namespace (as opposed to on the host of the VM that contains the parent namespace).
This is just background on why veth pairs and such will not work for me, and on what I'm trying to do in general; my question is more specific.
I followed the guide at the following link: http://blog.scottlowe.org/2014/03/21/a-follow-up-on-linux-network-namespaces/
on Linux 3.16
I will do it right now and show the exact commands I'm typing:
ip netns add blue
ip link add link eth0 name eth0.100 type vlan id 100
At this point, ip -d link shows the following (with the MAC edited out):
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether promiscuity 0
352: eth0.100@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
promiscuity 0
vlan protocol 802.1Q id 100 <REORDER_HDR>
ip link set eth0.100 netns blue
I note that ip -d link from the blue namespace has the following output
ip netns exec blue ip -d link
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0
352: eth0.100@if2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default
promiscuity 0
vlan protocol 802.1Q id 100 <REORDER_HDR>
Notice that it is now eth0.100@if2 rather than @eth0; I'm not sure if this is relevant.
I bring up loopback, which is not actually in the guide I mentioned but I believe is required:
ip netns exec blue ip link set dev lo up
ip netns exec blue ip addr add 192.168.56.102/24 dev eth0.100
ip netns exec blue ip link set eth0.100 up
ip netns exec blue ifconfig now shows this (slightly edited):
eth0.100 Link encap:Ethernet
inet addr:192.168.56.102 Bcast:0.0.0.0 Mask:255.255.255.0
Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:33 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:5598 (5.5 KB)
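As an aside (my observation, not from the guide): the Bcast:0.0.0.0 above appears because ip addr add does not derive a broadcast address unless told to (e.g. with brd +). The broadcast that would normally accompany 192.168.56.102/24 can be computed:

```python
import ipaddress

# Broadcast address implied by 192.168.56.102/24 -- what
# `ip addr add 192.168.56.102/24 brd + dev eth0.100` would have set
# instead of the 0.0.0.0 shown in the ifconfig output.
iface = ipaddress.ip_interface("192.168.56.102/24")
print(iface.network.broadcast_address)  # 192.168.56.255
```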
from the parent namespace ifconfig shows
eth0 Link encap:Ethernet
inet addr:192.168.56.101 Bcast:192.168.56.255 Mask:255.255.255.0
But then when I try to ping from the namespace, as per the guide:
ip netns exec blue ping -c 4 192.168.56.101
PING 192.168.56.101 (192.168.56.101) 56(84) bytes of data.
From 192.168.56.102 icmp_seq=1 Destination Host Unreachable
It is always unreachable. The same happens if I try 192.168.56.1, which is ultimately what I want to reach (it is the vboxnet on the host, which can be routed to from eth0 in the VM; but I cannot even reach eth0 from the namespace, even with the VLAN interface added to the namespace).
Thanks for any help; I've been trying to do this for hours and have pretty much exhausted every resource.

Raspberry Pi configured for static IP also gets a DHCP IP

I've configured my Raspberry Pi for static IP. My /etc/network/interfaces looks like this:
auto lo
iface lo inet loopback
auto eth0
allow-hotplug eth0
iface eth0 inet static
address 192.168.1.2
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
Yet for some strange reason, every time I reboot my Pi or my router, my Pi gets the requested IP (192.168.1.2) but ALSO a DHCP address (192.168.1.18). So my Pi has two addresses.
Of course, this isn't necessarily a problem, I just think it's strange. Am I doing something wrong? Or not enough? My router is almost completely locked down for management, but I can enter static IPs for devices - is this necessary, if I configure the Pi to do it?
The dynamic address isn't apparent in ifconfig:
eth0 Link encap:Ethernet HWaddr b8:27:eb:5d:87:71
inet addr:192.168.1.2 Bcast:192.168.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:236957 errors:0 dropped:34 overruns:0 frame:0
TX packets:260738 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:35215632 (33.5 MiB) TX bytes:70023369 (66.7 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:27258 errors:0 dropped:0 overruns:0 frame:0
TX packets:27258 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3397312 (3.2 MiB) TX bytes:3397312 (3.2 MiB)
Yet I can ping, SSH, and do everything else on .18 as well.
Since you can add multiple IP addresses to the interface eth0, as noted above, I believe the solution to your problem is to remove the auto eth0 line from your /etc/network/interfaces file.
The IP addresses attached to interface eth0 can be viewed with ip addr. Maybe eth0 has two IP addresses configured, 192.168.1.2 and 192.168.1.18.
You can also add multiple IP addresses to interface eth0 with
sudo ip addr add <IP address> dev eth0
If you don't want the IP address 192.168.1.18, you can remove it with
sudo ip addr del 192.168.1.18 dev eth0

Wireshark doesn't display ICMP traffic between two logical interfaces

I added two logical interfaces for testing with the following commands:
# set link on physical Device Up
sudo ip link set up dev eth0
# create logical Interfaces
sudo ip link add link eth0 dev meth0 address 00:00:8F:00:00:02 type macvlan
sudo ip link add link meth0 dev meth1 address 00:00:8F:00:00:03 type macvlan
# order IP Addresses and Link
sudo ip addr add 192.168.56.5/26 dev meth0
sudo ip addr add 192.168.56.6/26 dev meth1
sudo ip link set up dev meth0
sudo ip link set up dev meth1
ifconfig
meth0 Link encap:Ethernet HWaddr 00:00:8f:00:00:02
inet addr:192.168.56.5 Bcast:0.0.0.0 Mask:255.255.255.192
inet6 addr: fe80::200:8fff:fe00:2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:35749 errors:0 dropped:47 overruns:0 frame:0
TX packets:131 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3830628 (3.8 MB) TX bytes:15278 (15.2 KB)
meth1 Link encap:Ethernet HWaddr 00:00:8f:00:00:03
inet addr:192.168.56.6 Bcast:0.0.0.0 Mask:255.255.255.192
inet6 addr: fe80::200:8fff:fe00:3/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:35749 errors:0 dropped:47 overruns:0 frame:0
TX packets:115 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3830628 (3.8 MB) TX bytes:14942 (14.9 KB)
I run "wireshark" to test traffic between meth0 and meth1 ,
so I execute ping 192.168.56.6 to generate icmp traffic but this traffic doesn't appear in wireshark .
there is a a problem in wireshark with logical interface ?
Is there a problem in Wireshark with logical interfaces?
Probably not. You'll probably see the same problem with tcpdump, netsniff-ng, or anything else that uses PF_PACKET sockets for sniffing on Linux (Linux in general, not just Ubuntu in particular, or even Ubuntu, Debian, and other Debian-derived distributions).
Given that those are two logical interfaces on the same machine, traffic between them will not go onto the Ethernet - no Ethernet adapters I know of will receive packets that they transmit, so if the packet were sent on the Ethernet the host wouldn't see it, and there wouldn't be any point in wasting network bandwidth by putting that traffic on the network even if the Ethernet adapter would see its own traffic.
So if you're capturing on eth0, you might not see the traffic. Try capturing on lo instead.
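The local delivery the answer describes can be seen with a small socket sketch (Python, using loopback addresses purely for illustration): a datagram sent between two addresses on the same host never leaves the machine, which is exactly why it only shows up when capturing on lo.

```python
import socket

# Send a UDP datagram from one local socket to another. The kernel
# short-circuits locally addressed traffic through the loopback path,
# so a capture on the physical NIC never sees it.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))                 # bind to any free port
port = rx.getsockname()[1]

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"ping", ("127.0.0.1", port))

data, peer = rx.recvfrom(64)
print(data)                               # b'ping'
rx.close()
tx.close()
```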

Connect embedded system to host via ethernet over a switch

I have an ARM platform with gigabit Ethernet that I would like to connect to my Ubuntu machine to test the Ethernet ports.
Networking is not my strong suit.
I've modified /etc/network/interfaces on the embedded system as follows:
# Configure Loopback
auto lo
iface lo inet loopback
#auto eth0
#iface eth0 inet dhcp
auto eth0
iface eth0 inet static
address 192.168.1.2
netmask 255.255.255.0
gateway 192.168.1.0
And on my ubuntu machine I have set (through the network connections window):
IP: 192.168.1.1
netmask: 255.255.255.0
gateway: 192.168.1.0
When I test the connection, no connection is recognized on the arm system.
The eth0 port produces this output:
eth0: link up, 10 Mb/s, half duplex, flow control disabled
ip: RTNETLINK answers: Invalid argument
ifconfig displays:
# ifconfig
eth0 Link encap:Ethernet HWaddr 02:50:43:C5:C5:75
inet addr:192.168.1.2 Bcast:0.0.0.0 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Interrupt:11
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Can anyone point out my most likely obvious mistake?
Let me know if I need to provide more information.
EDIT: I'm running busybox 1.18.5 on the embedded system.
EDIT 2:
# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.1.0 * 255.255.255.0 U 0 0 0 eth0
This is bad:
auto eth0
iface eth0 inet static
address 192.168.1.2
netmask 255.255.255.0
gateway 192.168.1.0
192.168.1.0 is your network address; it cannot possibly be your gateway. Usually you have a configuration like this:
auto eth0
iface eth0 inet static
address 192.168.1.2
netmask 255.255.255.0
gateway 192.168.1.1
network 192.168.1.0
broadcast 192.168.1.255
where the latter two can be calculated automatically from the address and the netmask, and are therefore often not written in the config file.
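That "calculated automatically" step can be checked with Python's ipaddress module: given the address and netmask from the config above, the derived network and broadcast match the two optional lines.

```python
import ipaddress

# network/broadcast derived from address 192.168.1.2 + netmask
# 255.255.255.0, matching the optional `network` and `broadcast`
# lines in the interfaces file above
iface = ipaddress.ip_interface("192.168.1.2/255.255.255.0")
print(iface.network.network_address)    # 192.168.1.0
print(iface.network.broadcast_address)  # 192.168.1.255
```

This also makes the bug concrete: 192.168.1.0 is the all-zero-host network address, so it can never be a reachable gateway host.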
