Linux tc (traffic control) tool - problems with ssh connection after configuration

I have a simple goal: to emulate 'bad traffic' between two VMs, A (server) and B (client).
My script at node B (client):
tc qdisc del dev eth4 root
tc qdisc add dev eth4 root handle 1: prio
tc qdisc add dev eth4 parent 1:1 handle 2: netem delay 300ms 300ms loss 10%
tc filter add dev eth4 parent 1:0 protocol ip pref 55 handle ::55 u32 match ip dport 4800 0xffff match ip dst 172.29.49.115 flowid 1:1
tc filter add dev eth4 parent 1:0 protocol ip pref 55 handle ::56 u32 match ip src 172.29.49.115 flowid 1:1
That works fine, but here is the issue I'm experiencing: the emulation also impacts my ssh connection to node B [C (my machine) - B (client)]. When I set packet loss up to 60%, it becomes almost impossible to keep working...
How can I avoid that?
By the way, the filters seem to work fine: pinging google.com from B works normally, pinging A from B shows the packet loss and delay, and pinging B from C (my machine) also shows no delay.

The band for the high-priority traffic (guaranteed rate and priority 1):
tc qdisc add dev eth3 parent 1:5 handle 50:0 sfq perturb 1
Put ssh into the high-priority band:
tc filter add dev eth3 parent 1:0 protocol ip prio 0 u32 match ip sport 22 0xffff flowid 1:5
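The snippet above references eth3 and class 1:5, so it appears to come from a different (HTB-style) setup. Adapted to the question's prio qdisc on eth4, a minimal sketch could look like the following; it is an assumption, not the asker's actual fix. prio's default priomap tends to put low-delay traffic such as interactive ssh into band 0 (class 1:1), which is exactly where the netem sits, and that may be why ssh is affected even though it does not match the pref-55 filters:

# Assumes the prio qdisc and netem from the question are already installed on eth4.
# prio creates three bands by default: 1:1, 1:2, 1:3; netem is attached only to 1:1.

# Steer ssh (port 22) into band 1:2, which has no netem, using a lower pref value
# so these filters are evaluated before the pref-55 ones.
tc filter add dev eth4 parent 1:0 protocol ip pref 10 u32 match ip dport 22 0xffff flowid 1:2
tc filter add dev eth4 parent 1:0 protocol ip pref 10 u32 match ip sport 22 0xffff flowid 1:2

# The existing pref-55 filters still push traffic to and from 172.29.49.115
# into band 1:1, where the delay and loss are applied.

After this, ssh sessions from C to B should no longer pass through the impaired band, while traffic between B and A keeps the delay and loss.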

Related

Linux: HTB + PRIO qdisc: sometimes empty PRIO 1 queue

I am facing a behaviour I can't explain at the moment.
Here is what I am trying to do: limit the sending rate through an interface (enp0s8) and apply a PRIO qdisc. Packets are sent to the appropriate band of the PRIO qdisc according to their DSCP field.
To do this, I applied both an HTB qdisc (to shape traffic) and a PRIO qdisc.
For testing purposes, I am sending traffic with iperf.
Here are the HTB and PRIO configurations:
#HTB qdisc
tc qdisc add dev enp0s8 root handle 1: htb default 1
tc class add dev enp0s8 parent 1: classid 1:1 htb rate 2000kbit ceil 2000kbit burst 2000kbit
# PRIO qdisc
tc qdisc add dev enp0s8 parent 1:1 handle 2: prio bands 4
tc qdisc add dev enp0s8 parent 2:1 pfifo
tc qdisc add dev enp0s8 parent 2:2 pfifo
tc qdisc add dev enp0s8 parent 2:3 pfifo
tc qdisc add dev enp0s8 parent 2:4 pfifo
# PRIO filters
tc filter add dev enp0s8 parent 2:0 prio 1 protocol ip u32 match ip tos 0x28 0xff flowid 2:1
tc filter add dev enp0s8 parent 2:0 prio 2 protocol ip u32 match ip tos 0x38 0xff flowid 2:2
tc filter add dev enp0s8 parent 2:0 prio 3 protocol ip u32 match ip tos 0x48 0xff flowid 2:3
tc filter add dev enp0s8 parent 2:0 prio 4 protocol ip u32 match ip tos 0x58 0xff flowid 2:4
Here is my iperf script for testing purposes:
killall iperf
iperf -c 192.168.56.1 -u -b 2.4m -l 1458B -S 0x58 -t 1024 &
sleep 20
iperf -c 192.168.56.1 -u -b 2.4m -l 1458B -S 0x38 -t 1024 &
sleep 20
iperf -c 192.168.56.1 -u -b 2.4m -l 1458B -S 0x28 -t 1024 &
With this test, the following behaviour is expected:
The traffic with the lowest priority (0x58) is sent and shaped to 2 Mbps.
The traffic with higher priority (0x38) is sent and shaped to 2 Mbps. It is supposed to take bandwidth away from the lower-priority traffic.
The traffic with the highest priority (0x28) is sent and shaped to 2 Mbps. It is supposed to take bandwidth away from all other flows.
It seems to work fine most of the time, but from time to time I see the traffic with the second-highest priority reappearing for a few seconds when it should not (see the wireshark capture below).
I checked the iperf flows, which are steady.
Do you have an idea to explain this behaviour or a way to troubleshoot?
Thank you in advance.
Alexandre
P.S. [wireshark capture]
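There is no answer in the thread; one hedged way to troubleshoot would be to watch the per-qdisc and per-class counters while the iperf flows run, and see which band is dequeuing (and which is dropping) at the moment the second-highest-priority traffic reappears:

# Refresh the qdisc and class statistics on enp0s8 every second.
watch -n 1 'tc -s qdisc show dev enp0s8; echo ---; tc -s class show dev enp0s8'

If band 2:1 briefly runs empty while the HTB class still has tokens to spend, prio will legitimately dequeue from 2:2 for a moment, so comparing the sender's pacing against the 2000kbit rate and the large burst value would be a reasonable next step.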

Simulate network latency on specific port using tc

I'm trying to simulate a fixed latency on TCP packets coming from source port 7000, using the tc command on Ubuntu. The commands I'm using are:
sudo tc qdisc add dev eth1 root handle 1: prio
sudo tc qdisc add dev eth1 parent 1:1 handle 2: netem delay 3000ms
sudo tc filter add dev eth1 parent 1:0 protocol ip u32 match ip sport 7000 0xffff flowid 2:1
There doesn't appear to be any delay caused by this filter; could someone please point out where I'm going wrong? Also, is there any way I can ping a port, or do something equivalent, to test the latency?
Thanks!
Try this:
sudo tc qdisc add dev eth1 root handle 1: prio priomap 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
sudo tc qdisc add dev eth1 parent 1:2 handle 20: netem delay 3000ms
sudo tc filter add dev eth1 parent 1:0 protocol ip u32 match ip sport 7000 0xffff flowid 1:2
Explanation:
Add the all-zeros priomap to prio so that all regular traffic flows through a single band. By default, prio assigns traffic to different bands according to the TOS/DSCP value of the packet, which means that traffic that doesn't match your filter might end up in the same class as the delayed traffic.
Assign netem to one of the classes, in this case 1:2.
Finally, add your filter so it assigns flowid 1:2 to matching packets. This is probably where you went wrong: the filter has to point at class 1:2 of the classful prio qdisc, not at the classless netem qdisc.
To test this setup, I changed the filter to dport 80 instead of sport 7000 and ran wget against checkip.amazonaws.com, which took 6 seconds (a 3-second delay for the TCP SYN and a 3-second delay for the HTTP GET):
malt@ubuntu:~$ wget -O - checkip.amazonaws.com
--2016-10-23 06:21:42-- http://checkip.amazonaws.com/
Resolving checkip.amazonaws.com (checkip.amazonaws.com)... 75.101.161.183, 54.235.71.200, 107.20.206.176, ...
Connecting to checkip.amazonaws.com (checkip.amazonaws.com)|75.101.161.183|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10
Saving to: ‘STDOUT’
- 0%[ ] 0 --.-KB/s X.X.X.X
- 100%[===========================================================>] 10 --.-KB/s in 0s
2016-10-23 06:21:48 (3.58 MB/s) - written to stdout [10/10]
Connections to other ports (e.g. 443 - HTTPS, 22 - SSH), though, were much quicker. You can also run sudo tc -s qdisc show dev eth1 to make sure that the number of packets handled by netem makes sense.
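On the "can I ping a port" part of the question: there is no ICMP equivalent for a single TCP port, but you can time the TCP handshake instead. Two hedged examples (host and port are placeholders, not from the thread):

# Time just the TCP connect (three-way handshake) to a port with curl.
curl -o /dev/null -s -w 'connect: %{time_connect}s\n' http://example.com:80/

# Or time a bare connection attempt with netcat.
time nc -zv example.com 7000

With the 3000ms netem in place on matching traffic, the connect time should jump by roughly the configured delay.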

Prioritising traffic using tc

I would like to use Traffic Control (tc) to prioritise traffic. For example, I need TCP to go via band 0, UDP via band 1, and all other traffic via a third band.
I create the qdiscs as follows:
tc qdisc add dev eth0 root handle 1: prio
tc qdisc add dev eth0 parent 1:1 handle 10: sfq
tc qdisc add dev eth0 parent 1:2 handle 20: sfq
tc qdisc add dev eth0 parent 1:3 handle 30: sfq
but when I add a filter, e.g.
tc filter add dev eth0 protocol ip parent 1:1 sfq 1 u32 match ip protocol 6 0xff flowid 1:10
I get the error "Unknown filter "sfq", hence option "1" is unparsable".
It works without 'sfq 1':
tc filter add dev eth0 protocol ip parent 1:1 u32 match ip protocol 6 0xff flowid 1:10
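To tie this back to the stated goal (TCP in one band, UDP in another, everything else in a third), a minimal sketch could look like the following; it is an assumption built from the question, not a confirmed answer. With prio, the bands are the classes 1:1, 1:2 and 1:3, so each sfq hangs off its own class and the filters point at those class IDs:

# Three-band prio; the all-2 priomap sends every unfiltered packet to the last band.
tc qdisc add dev eth0 root handle 1: prio bands 3 priomap 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2

# One sfq per band.
tc qdisc add dev eth0 parent 1:1 handle 10: sfq
tc qdisc add dev eth0 parent 1:2 handle 20: sfq
tc qdisc add dev eth0 parent 1:3 handle 30: sfq

# TCP (IP protocol 6) to band 0, UDP (protocol 17) to band 1; everything else
# falls through to band 2 via the priomap.
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip protocol 6 0xff flowid 1:1
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip protocol 17 0xff flowid 1:2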

How to limit network speed using tc?

I have three machines: a client, a server and a proxy. On the client (1.1.1.1) I have a simple script that uses wget; on the server (2.2.2.1) I run Apache. On the proxy machine I want to limit the speed using tc. The proxy has two interfaces: eth0 (1.1.1.2) connected to the client and eth1 (2.2.2.2) connected to the server.
I tried adding these rules, but they are not working:
tc qdisc add dev eth1 root handle 1: cbq avpkt 1000 bandwidth 10mbit
tc class add dev eth1 parent 1: classid 1:1 cbq rate 1mbit \
allot 1500 prio 5 bounded isolated
tc filter add dev eth1 parent 1: protocol ip prio 16 u32 \
match ip dst 1.1.1.1 flowid 1:1
Found the answer; this is my script. The key differences are that the shaping is applied on eth0, the interface through which traffic leaves the proxy towards the client (tc shapes egress traffic), and that the rate is given as 1mbps, which tc reads as one megabyte per second (1mbit would be one megabit per second):
#!/bin/bash
set -x
tc qdisc del dev eth0 root
tc qdisc add dev eth0 root handle 1: cbq avpkt 1000 bandwidth 100mbit
tc class add dev eth0 parent 1: classid 1:1 cbq rate 1mbps \
allot 1500 prio 5 bounded isolated
tc filter add dev eth0 parent 1: protocol ip prio 16 u32 \
match ip dst 1.1.1.1 flowid 1:1
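A hedged way to check that the limit applies (addresses taken from the question, the file path is a placeholder): download something large on the client and watch the class counters on the proxy.

# On the client (1.1.1.1): fetch a large file from the server through the proxy.
wget -O /dev/null http://2.2.2.1/bigfile

# On the proxy: the counters of class 1:1 should grow, and the throughput seen by
# wget should settle around the configured rate.
tc -s class show dev eth0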

tc: attaching many netems to an interface

I'm looking to simulate delays for a set of services that run on different ports on a host. I would like to simulate different delays for different services, potentially many on a given host, ideally without any limit on their number.
The only way I've found to do this is with a prio qdisc. Here's an example:
IF=eth0
tc qdisc add dev $IF root handle 1: prio bands 16 priomap 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
# this loop is only a sample to make the example runnable; values will vary in practice
for handle in {2..16}; do
# add a delay for the service on port 20000 + $handle
tc qdisc add dev $IF handle $handle: parent 1:$handle netem delay 1000ms # example; this will vary in practice
# add a filter to send the traffic to the correct netem
tc filter add dev $IF pref $handle protocol ip u32 match ip sport $(( 20000 + $handle )) 0xffff flowid 1:$handle
done
If you run the commands above, you'll notice that handles 11 through 16 don't get created: those tc qdisc add calls fail with an error.
NB. Here's an undo for the above.
IF=eth0
for handle in {2..16}; do
tc filter del dev $IF pref $handle
tc qdisc del dev $IF handle $handle: parent 1:$handle netem delay 1000ms
done
tc qdisc del dev $IF root
Is there a way to add more than 10 netems to an interface?
Solved it using htb and classes:
IF=eth0
tc qdisc add dev $IF root handle 1: htb
tc class add dev $IF parent 1: classid 1:1 htb rate 1000Mbps
for handle in {2..32}; do
tc class add dev $IF parent 1:1 classid 1:$handle htb rate 1000Mbps
tc qdisc add dev $IF handle $handle: parent 1:$handle netem delay 1000ms
tc filter add dev $IF pref $handle protocol ip u32 match ip sport $(( 20000 + $handle )) 0xffff flowid 1:$handle
done
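A quick hedged check that all the netems were created and that the right one is hit for a given port (the destination host below is a placeholder):

# List every netem and its parent HTB class; there should be one per handle 2..32.
tc -s qdisc show dev $IF
tc -s class show dev $IF

# Send a packet from one of the filtered source ports and confirm that only the
# matching netem's packet counter increases.
nc -p 20005 -w 5 example.com 80 < /dev/null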
