Introduce delay between each packet - Linux

I know I can delay all the packets of a stream by a fixed amount using Linux tc and netem.
What is presented here http://www.linuxfoundation.org/collaborate/workgroups/networking/netem#Delay_distribution
just delays all of the packets by the same amount of time, without changing the intervals between them.
What I want to do is set the minimum interval between each consecutive pair of packets to be, say, 100ms. And I don't want any reordering.
Any thoughts are much appreciated.

So, if I understood your requirement correctly, you want a constant inter-packet delay of 100ms and no reordering. The command in the link you mentioned (Linux Foundation) introduces a delay of 100ms and a jitter of 20ms, and that jitter causes reordering.
There are two approaches to meet your requirement.
If jitter is not required:
tc qdisc add/change/replace dev eth0 root netem delay 100ms
If jitter is required:
The trick is to use a high rate parameter in your netem command. netem internally maintains a tfifo queue; with the rate parameter, it calculates the delay of each packet based on the time-to-send of the last packet in its tfifo queue. This gives you delay and jitter, but no reordering.
The command for this is:
tc qdisc add/change/replace dev eth0 root netem rate 1000mbit delay 100ms 20ms
rate 1000mbit (or any very high rate) does the job!
This feature is not documented anywhere. However, it was discussed back in 2011-2013 on the Linux netdev mailing list. At the moment I cannot find the link, but I can point to the Linux source code which implements the behaviour described above:
http://lxr.free-electrons.com/source/net/sched/sch_netem.c#L495
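As a quick sanity check (my own sketch, not from the original answer; it assumes eth0 and a reachable host at the placeholder address 192.0.2.1), you can apply the qdisc and confirm the delay shows up without reordering:
tc qdisc replace dev eth0 root netem rate 1000mbit delay 100ms 20ms
ping -c 10 192.0.2.1          # RTTs should increase by roughly 100ms ± 20ms
tc -s qdisc show dev eth0     # displays the netem parameters and packet counters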

Related

How to emulate jitter WITHOUT packet reordering, using TC and NETEM?

Apparently NETEM uses tfifo, which queues packets based on their time to send. As a result, jitter causes packet reordering. For example, the following line will cause packet reordering:
tc qdisc add dev eth0 root handle 1: netem delay 10ms 100ms
The NETEM manual suggests that if you don't want reordering, you should replace the internal queue discipline tfifo with a pure packet FIFO (pfifo), and it gives the following example to add lots of jitter without reordering:
tc qdisc add dev eth0 root handle 1: netem delay 10ms 100ms
tc qdisc add dev eth0 parent 1:1 pfifo limit 1000
But it doesn't work! Packets still get reordered! (and it looks like it's kernel dependent according to this)
So, does anyone know how to add jitter WITHOUT reordering packets?
One hacky option is to use a constant delay (no jitter) and run a loop that changes the delay value.
Say you want a 50ms delay with 5ms variance. You first add the base delay:
tc qdisc add dev eth0 root handle 1: netem delay 50ms
You can then have a loop that picks a random delay between 45ms and 55ms and changes the delay as below (a complete loop sketch follows the notes further down):
tc qdisc change dev eth0 root handle 1: netem delay 53ms
There are two things to keep in mind though:
1- It takes some ticks to change the delay. I found a sleep of 0.1s in the loop to be reasonable, so this limits how frequently the jitter can change.
2- When you decrease the delay, new packets get queued with a smaller delay (i.e. an earlier send time) than packets already in the queue, which can cause reordering! You can mitigate this by decreasing the delay in a few steps if the decrease is significant.
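A minimal shell sketch of such a loop (my own illustration, not from the original answer; it assumes eth0, the 50ms ± 5ms example above, and that the netem qdisc has already been added):
while true; do
    delay=$((45 + RANDOM % 11))   # pick a value between 45 and 55 ms
    tc qdisc change dev eth0 root handle 1: netem delay ${delay}ms
    sleep 0.1                     # give the change time to take effect; also caps the jitter frequency
done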
In my case (Linux 4.17), I got the same out-of-order (ofo) issue if the variance > mean. By setting the variance < mean, out-of-order delivery no longer happens. Of course you still need to use the pfifo qdisc:
tc qdisc add dev ethBr2 root handle 1:0 netem delay 50ms 40ms 25%
tc qdisc add dev ethBr2 parent 1:1 pfifo limit 1000

Why can't tc do ingress shaping? Does ingress shaping make sense?

In my work, I found that tc can do egress shaping, but only ingress policing. I wonder why tc doesn't implement ingress shaping.
Code sample:
#ingress
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip prio 50 \
u32 match ip src 0.0.0.0/0 police rate 256kbit \
burst 10k drop flowid :1
#egress
tc qdisc add dev eth0 root tbf \
rate 256kbit latency 25ms burst 10k
But I can't do this:
#ingress shaping, using tbf
tc qdisc add dev eth0 ingress tbf \
rate 256kbit latency 25ms burst 10k
I found a solution called IFB (the successor to IMQ) that can redirect the traffic to egress. But it doesn't seem like a good solution because it wastes CPU, so I don't want to use it.
Does ingress shaping make sense? And why doesn't tc support it?
Although tc shaping rules for ingress are very limited, you can create a virtual interface and apply egress rules to it, as described here:
https://serverfault.com/questions/350023/tc-ingress-policing-and-ifb-mirroring
(You may not need the virtual interface if your VMs already use virtual interfaces and you can apply tc to them.)
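A rough sketch of that IFB approach (my own summary of the linked recipe, untested here; it assumes eth0 as the physical interface and the ifb module being available):
modprobe ifb numifbs=1
ip link set dev ifb0 up
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
    action mirred egress redirect dev ifb0
tc qdisc add dev ifb0 root tbf rate 256kbit latency 25ms burst 10k
Incoming traffic on eth0 is redirected to ifb0, where ordinary egress qdiscs such as tbf can shape it.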
The caveat with ingress shaping is that it may take a long time for an incoming stream to respond to your shaping actions, due to all the buffers in routers between the stream source and your interface. And until the stream does respond to a reduced limit, it will continue to flood your downstream! Meanwhile you will be throwing away good packets, reducing your throughput.
Likewise when a high-priority stream ends or drops off, it will take some time for the low-priority stream to grow back to its full rate. This can be quite disruptive if it happens often!
The result of this is that dynamic shaping may work as desired for groups of steady-rate, long-lived streams, but it will offer little advantage to short-lived or varying-rate high-priority streams when your downstream is flooded: the low-priority streams will simply take too long to back off. However, classifying and limiting low- and medium-priority packets to a static rate somewhere below your maximum downrate could be helpful, to guarantee at least some space for high-priority data.
I don't have any figures on this, and latency has improved a lot since the ADSL days. So I think it may be worth testing, if low latency or high throughput of high-priority packets is something you desire more than overall throughput, and you can live with the limitations above.
As Janoszen and the ADSL HOWTO mention, streams could respond much more quickly if we could adjust the TCP window size as part of the shaping.
Search TLDP for further research.
Shaping works on the send buffer. Ingress shaping would require control over the remote send buffer.

Traffic shaping with tc is inaccurate with high bandwidth and delay

I'm using tc with kernel 2.6.38.8 for traffic shaping. Limiting bandwidth works and adding delay works, but when shaping both bandwidth and delay together, the achieved bandwidth is always much lower than the limit if the limit is above roughly 1.5 Mbps.
Example:
tc qdisc del dev usb0 root
tc qdisc add dev usb0 root handle 1: tbf rate 2Mbit burst 100kb latency 300ms
tc qdisc add dev usb0 parent 1:1 handle 10: netem limit 2000 delay 200ms
Yields a delay (from ping) of 201 ms, but a capacity of just 1.66 Mbps (from iperf). If I eliminate the delay, the bandwidth is precisely 2 Mbps. If I specify a bandwidth of 1 Mbps and 200 ms RTT, everything works. I've also tried ipfw + dummynet, which yields similar results.
I've tried rebuilding the kernel with HZ=1000 in Kconfig -- that didn't fix the problem. Any other ideas?
It's actually not a problem; it behaves just as it should. Because you've added 200ms of latency, the full 2Mbps pipe isn't used to its full potential. I would suggest you study the TCP/IP protocol in more detail, but here is a short summary of what is happening with iperf: your initial window size is maybe 3 packets (likely 1500 bytes each). You fill your pipe with 3 packets, but now have to wait until you get an acknowledgement back (this is part of the congestion-control mechanism). Since you delay the sending by 200ms, this will take a while. Your window size then doubles, so you can send 6 packets, but you again have to wait 200ms. The window size keeps doubling, but by the time it is completely open, the default 10-second iperf test is close to over, and your average bandwidth will obviously be smaller.
Think of it like this:
Suppose you set your latency to 1 hour, and your speed to 2 Mbit/s.
2 Mbit/s requires (for example) 50 Kbit/s for TCP ACKs. Because the ACKs take over an hour to reach the source, the source can't keep sending at 2 Mbit/s: the TCP window is still stuck waiting on the first acknowledgement.
Latency and bandwidth are more related than you might think (in TCP at least; UDP is a different story).
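To put rough numbers on that relationship (my own back-of-the-envelope figures, using the 2 Mbit/s and ~200 ms RTT from the question): filling the pipe requires a bandwidth-delay product of about 2 Mbit/s × 0.2 s = 0.4 Mbit ≈ 50 KB in flight; until the TCP window has grown to that size, throughput is capped at window ÷ RTT.
echo '2000000 * 0.2 / 8' | bc   # ≈ 50000 bytes that must be in flight to fill the pipe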

Is zero-copy UDP packet receiving possible on Linux?

I would like to have UDP packets copied directly from the ethernet adapter into my userspace buffer.
Some details on my setup:
I am receiving data from a pair of gigabit ethernet cameras. Combined I am receiving 28800 UDP packets per second (1 packet per line * 30FPS * 2 cameras * 480 lines). There is no way for me to switch to jumbo frames, and I am already looking into tuning driver level interrupts for reduced CPU utilization. What I am after here is reducing the number of times I am copying this ~40MB/s data stream.
This is the best source I have found on this, but I was hoping there was a more complete reference or proof that such an approach worked out in practice.
This article may be useful:
http://yusufonlinux.blogspot.com/2010/11/data-link-access-and-zero-copy.html
Your best avenues are recvmmsg and increasing RX interrupt coalescing.
http://lwn.net/Articles/334532/
You can move lower and capture packets the way Wireshark/tcpdump do, but it becomes futile to attempt any serious processing on top of that, since you would have to decode everything yourself.
At only 30,000 packets per second I wouldn't worry too much about copying packets; those problems arise when dealing with 3,000,000 messages per second.
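For the interrupt-coalescing side, something along these lines is typical (my own sketch; the interface name and the exact values are illustrative and driver-dependent):
ethtool -c eth0                              # show current coalescing settings
ethtool -C eth0 rx-usecs 100 rx-frames 64    # batch more packets per RX interrupt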

Simulate delayed and dropped packets on Linux

I would like to simulate packet delay and loss for UDP and TCP on Linux to measure the performance of an application. Is there a simple way to do this?
netem leverages functionality already built into Linux and userspace utilities to simulate networks. This is actually what Mark's answer refers to, by a different name.
The examples on their homepage already show how you can achieve what you've asked for:
Examples
Emulating wide area network delays
This is the simplest example; it just adds a fixed amount of delay to all packets going out of the local Ethernet interface.
# tc qdisc add dev eth0 root netem delay 100ms
Now a simple ping test to a host on the local network should show an increase of 100 milliseconds. The delay is limited by the clock resolution of the kernel (HZ). On most 2.4 systems, the system clock runs at 100 Hz, which allows delays in increments of 10 ms. On 2.6, the value is a configuration parameter ranging from 1000 down to 100 Hz.
Later examples just change parameters without reloading the qdisc
Real wide area networks show variability so it is possible to add random variation.
# tc qdisc change dev eth0 root netem delay 100ms 10ms
This causes the added delay to be 100 ± 10 ms. Network delay variation isn't purely random, so to emulate that there is a correlation value as well.
# tc qdisc change dev eth0 root netem delay 100ms 10ms 25%
This causes the added delay to be 100 ± 10 ms with the next random element depending 25% on the last one. This isn't true statistical correlation, but an approximation.
Delay distribution
Typically, the delay in a network is not uniform. It is more common to use something like a normal distribution to describe the variation in delay. The netem discipline can take a table to specify a non-uniform distribution.
# tc qdisc change dev eth0 root netem delay 100ms 20ms distribution normal
The actual tables (normal, pareto, paretonormal) are generated as part of the iproute2 compilation and placed in /usr/lib/tc; so it is possible with some effort to make your own distribution based on experimental data.
Packet loss
Random packet loss is specified in the 'tc' command in percent. The smallest possible non-zero value is:
2^-32 = 0.0000000232%
# tc qdisc change dev eth0 root netem loss 0.1%
This causes 1/10th of a percent (i.e. 1 out of 1000) of packets to be randomly dropped.
An optional correlation may also be added. This causes the random number generator to be less random and can be used to emulate packet burst losses.
# tc qdisc change dev eth0 root netem loss 0.3% 25%
This will cause 0.3% of packets to be lost, and each successive probability depends by a quarter on the last one.
Prob(n) = 0.25 × Prob(n-1) + 0.75 × Random
Note that you should use tc qdisc add if you have no rules for that interface or tc qdisc change if you already have rules for that interface. Attempting to use tc qdisc change on an interface with no rules will give the error RTNETLINK answers: No such file or directory.
For dropped packets I would simply use iptables and the statistic module.
iptables -A INPUT -m statistic --mode random --probability 0.01 -j DROP
The above will drop an incoming packet with a 1% probability. Be careful: anything above about 0.14 and most of your TCP connections will most likely stall completely.
Undo with -D:
iptables -D INPUT -m statistic --mode random --probability 0.01 -j DROP
Take a look at man iptables and search for "statistic" for more information.
iptables(8) has a statistic match module that can be used to match every nth packet. To drop this packet, just append -j DROP.
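For example (my own sketch based on the iptables(8) statistic documentation), dropping every 10th incoming packet looks like this:
iptables -A INPUT -m statistic --mode nth --every 10 -j DROP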
One of the most used tools in the scientific community for that purpose is DummyNet. Once you have installed the ipfw kernel module, in order to introduce a 50ms propagation delay between two machines simply run these commands:
./ipfw pipe 1 config delay 50ms
./ipfw add 1000 pipe 1 ip from $IP_MACHINE_1 to $IP_MACHINE_2
In order to also introduce 50% packet loss, you have to run:
./ipfw pipe 1 config plr 0.5
More details here.
An easy to use network fault injection tool is Saboteur. It can simulate:
Total network partition
Remote service dead (not listening on the expected port)
Delays
Packet loss
TCP connection timeout (as often happens when two systems are separated by a stateful firewall)
I haven't tried it myself, but this page has a list of plugin modules for Linux's built-in iptables IP filtering system. One of the modules is called "nth", and it allows you to set up a rule that drops a configurable fraction of packets. It might be a good place to start, at least.
