I'm looking to add a set of filters that would drop packets that match parameters. It seems tc filters do not support drop action based on match, but based on qos parameters. Has anyone been able to place tc drop filters?
The most common method I've found thus far is to mark the packet using tc and then use iptables to drop the marked packet, but in my opinion that is not as efficient.
tc filter supports a drop action based on a match. This is actually more straightforward than I anticipated.
The example below would drop all IP GRE traffic on interface eth3:
# add an ingress qdisc
tc qdisc add dev eth3 ingress
# filter on ip GRE traffic (protocol 47)
tc filter add dev eth3 parent ffff: protocol ip prio 6 u32 match ip protocol 47 0xff flowid 1:16 action drop
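To confirm that matching packets are actually being dropped, you can check the filter statistics; this is just a quick verification sketch, not part of the original answer:
# show the filters on the ingress qdisc together with their action statistics
tc -s filter show dev eth3 parent ffff: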
I'm using tc to drop outgoing packets on a specified Ethernet port. Is there a way to apply a similar packet loss in the receiving direction? I'd like to drop n% of the packets that come into my system on a selected Ethernet port.
Something like
tc qdisc add dev eth0 root netem loss 10%
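One commonly suggested approach (a sketch under assumptions, not taken from the question itself) is to redirect incoming traffic to an intermediate ifb device and apply netem there, since netem only acts on egress queues; eth0 and the 10% loss figure are simply reused from the example above:
# load the ifb module and bring up one intermediate device
modprobe ifb numifbs=1
ip link set dev ifb0 up
# attach an ingress qdisc and redirect all incoming IP traffic to ifb0
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 action mirred egress redirect dev ifb0
# the loss applied on ifb0's egress now affects eth0's ingress traffic
tc qdisc add dev ifb0 root netem loss 10%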
I'm doing some tests to try to understand the tc-htb arguments. I'm using VMware Player (version 2.0.5) with Windows 7 as host and Ubuntu (kernel 4.4.0-93) as guest.
My plan is to use iperf to generate a known data stream (UDP, 100 Mbit/s) via localhost and then limit the bandwidth with tc-htb, monitoring the result with Wireshark.
Iperf setup:
server:
iperf -s -u -p 12345
client:
iperf -c 127.0.0.1 -u -p 12345 -t 30 -b 100m
Testing rate argument:
I start Wireshark and start sending data with iperf, after 10 sec I execute a script with the tc commands:
tc qdisc add dev lo root handle 1: htb
tc class add dev lo parent 1: classid 1:1 htb rate 50mbit ceil 75mbit
tc filter add dev lo protocol ip parent 1: prio 1 u32 match ip dport 12345 0xffff flowid 1:1
The I/O Graph in Wireshark shows that the bandwidth drops from 100 Mbit/s to 50 Mbit/s. Ok.
Testing burst argument:
I’m starting with the same bandwidth limitation as above and after another 10 sec I run a script with the command:
tc class change dev lo parent 1: classid 1:1 htb rate 50mbit ceil 75mbit burst 15k
In the I/O Graph I'm expecting a peak from 50 Mbit/s (rate level) up to 75 Mbit/s (ceil level). The change command has no effect; the level stays at 50 Mbit/s.
I have also tested with larger burst values, no effect. What am I doing wrong?
'ceil' specifies how much bandwidth a traffic class can borrow from its parent class when spare bandwidth is available from peer classes. However, when the class sits directly under the root qdisc there is no parent to borrow from, so specifying a ceil different from rate is meaningless for a class on a root qdisc.
'burst' specifies the amount of data (in bytes) that is sent at full link speed from one class before stopping to serve another class, the rate shaping being achieved by averaging the bursts over time. If applied to the root with no child classes, it will only affect the accuracy of the averaging (smoothing) and won't do anything to the true average rate.
try adding child classes:
tc qdisc add dev lo root handle 1: htb
tc class add dev lo parent 1: classid 1:1 htb rate 100mbit
tc class add dev lo parent 1:1 classid 1:10 htb rate 50mbit ceil 100mbit
tc class add dev lo parent 1:1 classid 1:20 htb rate 50mbit ceil 75mbit
tc filter add dev lo protocol ip parent 1: prio 1 u32 match ip dport 12345 0xffff flowid 1:10
tc filter add dev lo protocol ip parent 1: prio 1 u32 match ip dport 54321 0xffff flowid 1:20
An iperf session to port 12345 should hit 100 Mbit/s, then drop to 50 Mbit/s each when an iperf session to port 54321 is started. Stop the iperf to port 12345, and traffic to 54321 should then reach 75 Mbit/s.
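A possible way to verify this on the loopback setup from the question (port numbers and rates as assumed above):
# one UDP listener per port
iperf -s -u -p 12345 &
iperf -s -u -p 54321 &
# the first stream alone should reach ~100 Mbit/s
iperf -c 127.0.0.1 -u -p 12345 -t 60 -b 100m &
# ~20 s later the second stream starts and both should settle around 50 Mbit/s;
# once the first one ends, the second can borrow up to its 75 Mbit/s ceil
sleep 20
iperf -c 127.0.0.1 -u -p 54321 -t 40 -b 100m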
For example, I have two tables:
table inet filter2 {
    chain forward {
        type filter hook forward priority 0; policy accept;
        ct state established,related accept
        iifname $wireif oifname $wirelessif accept
    }
}

table inet filter {
    chain forward {
        type filter hook forward priority 0; policy accept;
        drop
    }
}
The filter table is evaluated first, so all my forwarded packets are dropped, which is not what I want. How can I make a table/chain be evaluated last, so that it acts as the default?
nftables uses the underlying netfilter framework, which has 6 hook points located at different places in the Linux kernel network stack.
When more than one chain is registered at the same hook point, the netfilter framework orders them by their priority value.
For example, in your case you can give each chain a different priority when adding it. Let's say you want to give the forward chain of the filter table higher priority than the forward chain of the filter2 table:
nft add table inet filter2
nft add chain inet filter2 forward {type filter hook forward priority 0 \;}
Now, to give the forward chain of the filter table higher priority, assign it a priority less than 0:
nft add table inet filter
nft add chain inet filter forward '{ type filter hook forward priority -1; }'
Here a higher priority value means lower precedence and vice versa.
But be careful when giving chains different priorities, because it can sometimes lead to unintended behaviour for packets.
PS: The syntax is slightly different for a negative priority.
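Once both chains are in place, you can double-check which hook and priority each base chain ended up with (a small verification step, not from the original answer):
# the ruleset listing shows "type filter hook forward priority ..." for each chain
nft list ruleset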
I am currently facing a problem with my code.
Mainly, I emulate the connection between two computers, connected via an ethernet bridge (Raspberry Pi, Raspbian). So I am able to influence parameters of this connection (like bandwidth, latency and much more) via tc qdisc.
This works out fine, as you can see in the code down below.
But now to my problem:
I am also trying to exclude specific port ranges, meaning ports that aren't influenced by my given parameters (latency etc.).
For that I created two prio bands. Prio band 0 (higher priority) handles my port exclusion (already in the parent root).
Afterwards, in prio band 1 (lower priority), I define a latency via netem.
The bulk of the traffic passes through the influenced prio band 1, while the remaining (excluded) traffic passes uninfluenced through prio band 0.
I don't get kernel errors while executing my code! But after typing sudo tc filter show dev eth1 I only receive filter parent 1: protocol ip pref 1 basic.
My match is not even mentioned. What did I do wrong?
Can you explain why I don't get my expected output?
THIS IS MY CODE (in the right order of execution):
PARENT ROOT
sudo tc qdisc add dev eth1 root handle 1: prio bands 2 priomap 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
This creates two prio bands (1:1 and 1:2)
BAND 0 [PORT EXCLUSION | port 100 - 800]
sudo tc qdisc add dev eth1 parent 1:1 handle 10: tbf rate 512kbit buffer 1600 limit 3000
Creates a tbf (Token Bucket Filter) to set bandwidth
sudo tc filter add dev eth1 parent 1: protocol ip prio 1 handle 0x10 basic match "cmp(u16 at 0 layer transport lt 100) and cmp(u16 at 0 layer transport gt 800)" flowid 1:1
Creates a filter with a specific handle that excludes ports 100 to 800 from prio band 1 (the influenced data packets)
BAND 1 [NET EMULATION]
sudo tc qdisc add dev eth1 parent 1:2 handle 20: tbf rate 1024kbit buffer 1600 limit 3000
Compare with tbf above
sudo tc qdisc add dev eth1 parent 20:1 handle 21: netem delay 200ms
Creates via netem a delay of 200ms
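To double-check what actually got installed, the qdisc and filter hierarchy can be listed; this is just an inspection sketch, not part of the original post:
# show the installed qdiscs and the filters attached to the prio root
sudo tc qdisc show dev eth1
sudo tc -s filter show dev eth1 parent 1: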
The question again:
My filter match is not even mentioned. What did I do wrong?
Can you explain why I don't get my expected output?
I appreciate any kind of help! Thanks for your efforts!
~rotsechs
It seems I can simply ignore the missing output! Nevertheless, it works perfectly.
I established an SSH connection to my Ethernet bridge (via MobaXterm). Afterwards I applied a delay of 400 ms to it. The console input slowed down as expected.
Finally I created the filter and excluded the port range from 20 to 24 (SSH uses port 22). The delay on my SSH connection disappeared immediately!
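A rough reconstruction of that test might look like the following; it assumes the hierarchy from the question is already installed on eth1, and the exact ematch expression (matching the transport-layer source port between 20 and 24) is my assumption, not taken from the original post:
# raise the netem delay on band 1 to 400 ms
sudo tc qdisc change dev eth1 parent 20:1 handle 21: netem delay 400ms
# steer traffic whose transport-layer source port lies between 20 and 24 into band 0,
# so it bypasses the delayed band (port range and offsets are assumptions)
sudo tc filter add dev eth1 parent 1: protocol ip prio 1 handle 0x10 basic match "cmp(u16 at 0 layer transport gt 19) and cmp(u16 at 0 layer transport lt 25)" flowid 1:1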
Is there an available utility to parse the content of /proc/net/route into a more human-readable format (i.e. dotted decimal for addresses)?
ROUTE(8) does exactly that if you invoke it with the -n flag. Moreover, it can be used on systems without procfs support. For example:
$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.1.2 0.0.0.0 UG 100 0 0 eth0
192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
Nowadays you should be using the ip route list command instead of route.
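For reference, the same table as in the route -n output above would look roughly like this with ip (exact proto/scope fields may differ on your system):
$ ip route list
default via 192.168.1.2 dev eth0 metric 100
192.168.1.0/24 dev eth0 scope link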
Check out the /sbin/route command.
libnetlink may be the right way to go for this. I know it is the "proper" interface for a lot of low-level and physical network stuff; I'm not 100% sure about route tables, though.
See the libnetlink, netlink and rtnetlink man pages.