I’m doing some tests to try to understand the tc-htb arguments. I’m using VMware Player (version 2.0.5) with Windows 7 as host and Ubuntu (kernel 4.4.0-93) as guest.
My plan is to use iperf to generate a known data stream (UDP, 100 Mbit/s) via localhost and then limit the bandwidth with tc-htb, monitoring the result with Wireshark.
Iperf setup:
server:
iperf -s -u -p 12345
client:
iperf -c 127.0.0.1 -u -p 12345 -t 30 -b 100m
Testing rate argument:
I start Wireshark and begin sending data with iperf; after 10 seconds I execute a script with the tc commands:
tc qdisc add dev lo root handle 1: htb
tc class add dev lo parent 1: classid 1:1 htb rate 50mbit ceil 75mbit
tc filter add dev lo protocol ip parent 1: prio 1 u32 match ip dport 12345 0xffff flowid 1:1
The I/O Graph in Wireshark shows that the bandwidth drops from 100 Mbit/s to 50 Mbit/s. Ok.
Testing burst argument:
I start with the same bandwidth limitation as above, and after another 10 seconds I run a script with the command:
tc class change dev lo parent 1: classid 1:1 htb rate 50mbit ceil 75mbit burst 15k
In the I/O Graph I’m expecting a peak from 50mbit (rate level) up to 75mbit (ceil level). But the change command has no effect; the level stays at 50mbit.
I have also tested with larger burst values, with no effect. What am I doing wrong?
'ceil' specifies how much bandwidth a traffic class can borrow from its parent class when spare bandwidth is available from peer classes. However, when a class is attached directly to the root qdisc there is no parent to borrow from, so specifying a ceil different from the rate is meaningless for such a class.
'burst' specifies the amount of data that is sent (at full link speed) from one class before stopping to serve another class, with the rate shaping achieved by averaging the bursts over time. If applied to a lone class on the root qdisc with no child classes, it only affects the accuracy of the averaging (smoothing) and won't change the true average rate.
Try adding child classes:
tc qdisc add dev lo root handle 1: htb
tc class add dev lo parent 1: classid 1:1 htb rate 100mbit
tc class add dev lo parent 1:1 classid 1:10 htb rate 50mbit ceil 100mbit
tc class add dev lo parent 1:1 classid 1:20 htb rate 50mbit ceil 75mbit
tc filter add dev lo protocol ip parent 1: prio 1 u 32 match ip dport 12345 0xffff flowid 1:10
tc filter add dev lo protocol ip parent 1: prio 1 u 32 match ip dport 54321 0xffff flowid 1:20
An iperf session to port 12345 should hit 100mbps, then both streams should drop to 50mbps each when an iperf session to port 54321 is started. Stop the iperf session to port 12345, and traffic to port 54321 should climb to 75mbps (its ceil).
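For reference, the second stream could be generated with another iperf pair on port 54321, mirroring the setup above (assuming the same localhost test):
iperf -s -u -p 54321
iperf -c 127.0.0.1 -u -p 54321 -t 30 -b 100m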
I'm using tc to drop outgoing packets on a specified Ethernet port. Is there a way to apply a similar loss of packets in the receiving direction rather than the transmitting one? I want to drop n% of the packets that come into my system on a selected Ethernet port.
Something like
tc qdisc add dev eth0 root netem loss 10%
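tc can only shape egress traffic directly, so one common workaround for the receive direction (a sketch, assuming the ifb module is available and the virtual interface comes up as ifb0) is to redirect ingress traffic to an ifb device and drop on that device's egress:
# load the ifb module and bring the virtual interface up
modprobe ifb numifbs=1
ip link set dev ifb0 up
# send all ingress traffic from eth0 through ifb0
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 action mirred egress redirect dev ifb0
# dropping on ifb0 egress now effectively drops eth0 ingress
tc qdisc add dev ifb0 root netem loss 10%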
Apparently NETEM uses tfifo, which queues packets based on the time they are to be sent. This means that jitter causes packet reordering. For example, the following line will cause packet reordering:
tc qdisc add dev eth0 root handle 1: netem delay 10ms 100ms
The NETEM manual suggests that if you don't want reordering, you should replace the internal queue discipline tfifo with a pure packet FIFO (pfifo); it gives the following example to add lots of jitter without reordering:
tc qdisc add dev eth0 root handle 1: netem delay 10ms 100ms
tc qdisc add dev eth0 parent 1:1 pfifo limit 1000
But it doesn't work! Packets still get reordered! (and it looks like it's kernel dependent according to this)
So, does anyone know how to add jitter WITHOUT reordering packets?
One hacky option is to use a constant delay (no jitter) and change the delay value in a loop.
Say you want a 50ms delay with 5ms variance. You first add the base delay:
tc qdisc add dev eth0 root handle 1: netem delay 50ms
Then you can have a loop that picks a random delay between 45ms and 55ms and changes the delay, as below:
tc qdisc change dev eth0 root handle 1: netem delay 53ms
There are two things to keep in mind, though (a full loop sketch follows):
1- It takes some ticks to change the delay. I found a sleep of 0.1s in the loop to be reasonable, so you're limited in the jitter frequency you can achieve.
2- When you decrease the delay, new packets get queued with a smaller delay (i.e. an earlier send time) than packets already in the queue, which can cause reordering! You can mitigate this by decreasing the delay in a few steps if the decrease is significant.
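A minimal sketch of such a loop, assuming eth0 and the 50ms ± 5ms example above (the 0.1s sleep and the uniform random pick are arbitrary choices):
#!/bin/bash
# set the base delay once
tc qdisc add dev eth0 root handle 1: netem delay 50ms
while true; do
    delay=$((45 + RANDOM % 11))   # random integer delay in [45, 55] ms
    tc qdisc change dev eth0 root handle 1: netem delay "${delay}ms"
    sleep 0.1                     # give the change some ticks to take effect
done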
In my case (Linux 4.17), I got the same out-of-order (ofo) issue whenever the variance was greater than the mean. With the variance set below the mean, ofo no longer happens. Of course, you still need to use the pfifo qdisc:
tc qdisc add dev ethBr2 root handle 1:0 netem delay 50ms 40ms 25%
tc qdisc add dev ethBr2 parent 1:1 pfifo limit 1000
I am currently facing a problem with my code.
Mainly, I emulate the connection between two computers, connected via an Ethernet bridge (Raspberry Pi, Raspbian), so I am able to influence the parameters of this connection (like bandwidth, latency and much more) via tc qdisc.
This works out fine, as you can see in the code down below.
But now to my problem:
I am also trying to exclude specific port ranges, that is, ports that aren't influenced by my given parameters (latency etc.).
For that I created two prio bands. The prio band 0 (higher priority) handles my port exclusion (already in the parent root).
Afterwards, in prio band 1 (lower priority), I apply a latency via netem.
All data traffic passes through my influenced prio band 1, while the remaining (excluded) data passes uninfluenced through prio band 0.
I don't get kernel errors while executing my code! But after typing sudo tc filter show dev eth1, I only receive filter parent 1: protocol ip pref 1 basic.
My match is not even mentioned. What did I do wrong?
Can you explain why I don't get my expected output?
THIS IS MY CODE (in the right order of execution):
PARENT ROOT
sudo tc qdisc add dev eth1 root handle 1: prio bands 2 priomap 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
This creates two priobands (1:1 and 1:2)
BAND 0 [PORT EXCLUSION | port 100 - 800]
sudo tc qdisc add dev eth1 parent 1:1 handle 10: tbf rate 512kbit buffer 1600 limit 3000
Creates a tbf (Token Bucket Filter) to set bandwidth
sudo tc filter add dev eth1 parent 1: protocol ip prio 1 handle 0x10 basic match "cmp(u16 at 0 layer transport gt 99) and cmp(u16 at 0 layer transport lt 801)" flowid 1:1
Creates a filter with a specific handle that diverts ports 100 to 800 into prio band 0, excluding them from prio band 1 (the influenced data packets)
BAND 1 [NET EMULATION]
sudo tc qdisc add dev eth1 parent 1:2 handle 20: tbf rate 1024kbit buffer 1600 limit 3000
Compare with tbf above
sudo tc qdisc add dev eth1 parent 20:1 handle 21: netem delay 200ms
Creates via netem a delay of 200ms
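To sanity-check the resulting hierarchy, you can dump the qdiscs and filter statistics afterwards (not part of the original setup, just a quick check):
sudo tc qdisc show dev eth1
sudo tc -s filter show dev eth1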
The question again:
My filter match is not even mentioned. What did I do wrong?
Can you explain why I don't get my expected output?
I appreciate any kind of help! Thanks for your efforts!
~rotsechs
It seems I can simply ignore the missing output! Nevertheless, it works perfectly.
I established an SSH connection to my Ethernet bridge (via MobaXterm). Afterwards I applied a delay of 400ms to it. The console input slowed down as expected.
Finally I created the filter and excluded the port range from 20 to 24 (SSH uses port 22). The delay on my SSH connection disappeared immediately!
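The exclusion filter for that test might look like this (a sketch following the same pattern as the filter above, matching transport ports 20 to 24):
sudo tc filter add dev eth1 parent 1: protocol ip prio 1 handle 0x10 basic match "cmp(u16 at 0 layer transport gt 19) and cmp(u16 at 0 layer transport lt 25)" flowid 1:1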
I'm looking to add a set of filters that would drop packets matching given parameters. It seems tc filters do not support a drop action based on a match, only based on QoS parameters. Has anyone been able to set up tc drop filters?
The most common method I've found thus far is to mark packets using tc and then use iptables to drop the marked packets, but that is not as efficient in my opinion.
tc filter does support a drop action based on a match. This is actually more straightforward than I anticipated.
An example below would drop all IP GRE traffic on interface eth3
# add an ingress qdisc
tc qdisc add dev eth3 ingress
# filter on ip GRE traffic (protocol 47)
tc filter add dev eth3 parent ffff: protocol ip prio 6 u32 match ip protocol 47 0xff flowid 1:16 action drop
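You can check that the filter is attached and watch its hit counters with the standard tc statistics switch (the output format varies with the iproute2 version):
tc -s filter show dev eth3 parent ffff: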
I would like to simulate packet delay and loss for UDP and TCP on Linux to measure the performance of an application. Is there a simple way to do this?
netem leverages functionality already built into Linux and userspace utilities to simulate networks. This is actually what Mark's answer refers to, by a different name.
The examples on their homepage already show how you can achieve what you've asked for:
Examples
Emulating wide area network delays
This is the simplest example, it just adds a fixed amount of delay to all packets going out of the local Ethernet.
# tc qdisc add dev eth0 root netem delay 100ms
Now a simple ping test to host on the local network should show an increase of 100 milliseconds. The delay is limited by the clock resolution of the kernel (Hz). On most 2.4 systems, the system clock runs at 100 Hz which allows delays in increments of 10 ms. On 2.6, the value is a configuration parameter from 1000 to 100 Hz.
Later examples just change parameters without reloading the qdisc
Real wide area networks show variability so it is possible to add random variation.
# tc qdisc change dev eth0 root netem delay 100ms 10ms
This causes the added delay to be 100 ± 10 ms. Network delay variation isn't purely random, so to emulate that there is a correlation value as well.
# tc qdisc change dev eth0 root netem delay 100ms 10ms 25%
This causes the added delay to be 100 ± 10 ms with the next random element depending 25% on the last one. This isn't true statistical correlation, but an approximation.
Delay distribution
Typically, the delay in a network is not uniform. It is more common to use something like a normal distribution to describe the variation in delay. The netem discipline can take a table to specify a non-uniform distribution.
# tc qdisc change dev eth0 root netem delay 100ms 20ms distribution normal
The actual tables (normal, pareto, paretonormal) are generated as part of the iproute2 compilation and placed in /usr/lib/tc; so it is possible with some effort to make your own distribution based on experimental data.
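As a sketch of that "some effort" (assuming you have the iproute2 source tree, whose netem/ directory contains the maketable helper; experiment.dat is a hypothetical file with one measured delay value per line):
# generate a distribution table from measured data and install it
./netem/maketable experiment.dat > mydist.dist
cp mydist.dist /usr/lib/tc/
tc qdisc change dev eth0 root netem delay 100ms 20ms distribution mydist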
Packet loss
Random packet loss is specified in the 'tc' command in percent. The smallest possible non-zero value is:
2^-32 = 0.0000000232%
# tc qdisc change dev eth0 root netem loss 0.1%
This causes 1/10th of a percent (i.e. 1 out of 1000 packets) to be randomly dropped.
An optional correlation may also be added. This causes the random number generator to be less random and can be used to emulate packet burst losses.
# tc qdisc change dev eth0 root netem loss 0.3% 25%
This will cause 0.3% of packets to be lost, and each successive probability depends by a quarter on the last one.
Prob(n) = 0.25 × Prob(n-1) + 0.75 × Random
Note that you should use tc qdisc add if you have no rules for that interface or tc qdisc change if you already have rules for that interface. Attempting to use tc qdisc change on an interface with no rules will give the error RTNETLINK answers: No such file or directory.
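A typical session therefore looks like this (del removes the rules again when you are done):
tc qdisc add dev eth0 root netem delay 100ms
tc qdisc change dev eth0 root netem delay 100ms 10ms
tc qdisc del dev eth0 root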
For dropped packets I would simply use iptables and the statistic module.
iptables -A INPUT -m statistic --mode random --probability 0.01 -j DROP
The above will drop an incoming packet with a probability of 1%. Be careful: anything above about 0.14 and most of your TCP connections will most likely stall completely.
Undo with -D:
iptables -D INPUT -m statistic --mode random --probability 0.01 -j DROP
Take a look at man iptables and search for "statistic" for more information.
iptables(8) has a statistic match module that can be used to match every nth packet. To drop this packet, just append -j DROP.
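For example, to deterministically drop every 10th incoming packet, use the statistic module's nth mode instead of the random mode shown above:
iptables -A INPUT -m statistic --mode nth --every 10 --packet 0 -j DROP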
One of the most used tools in the scientific community for that purpose is DummyNet. Once you have installed the ipfw kernel module, introduce a 50ms propagation delay between two machines by simply running these commands:
./ipfw pipe 1 config delay 50ms
./ipfw add 1000 pipe 1 ip from $IP_MACHINE_1 to $IP_MACHINE_2
In order to also introduce 50% packet loss, you have to run:
./ipfw pipe 1 config plr 0.5
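Depending on the dummynet version, reconfiguring a pipe can replace its earlier settings, so to keep the delay and the loss together you can specify both in a single config command (a variant of the commands above):
./ipfw pipe 1 config delay 50ms plr 0.5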
More details here.
An easy to use network fault injection tool is Saboteur. It can simulate:
Total network partition
Remote service dead (not listening on the expected port)
Delays
Packet loss
TCP connection timeout (as often happens when two systems are separated by a stateful firewall)
I haven't tried it myself, but this page has a list of plugin modules that run in Linux's built-in iptables IP filtering system. One of the modules is called "nth" and allows you to set up a rule that drops a configurable rate of packets. It might be a good place to start, at least.