Prioritizing traffic using tc - Linux

I would like to use Traffic Control (tc) to prioritize traffic. For example, I need TCP to go via band 0, UDP via band 1, and all other traffic via the remaining band.
I created the qdiscs as follows:
tc qdisc add dev eth0 root handle 1: prio
tc qdisc add dev eth0 parent 1:1 handle 10: sfq
tc qdisc add dev eth0 parent 1:2 handle 20: sfq
tc qdisc add dev eth0 parent 1:3 handle 30: sfq
but when I add a filter, e.g.
tc filter add dev eth0 protocol ip parent 1:1 sfq 1 u32 match ip protocol 6 0xff flowid 1:10
this fails with the error: Unknown filter "sfq", hence option "1" is unparsable.

It works without 'sfq 1':
tc filter add dev eth0 protocol ip parent 1:1 u32 match ip protocol 6 0xff flowid 1:10
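For completeness, a sketch of filters matching the stated goal (assuming the three sfq qdiscs are attached to the prio bands 1:1, 1:2 and 1:3 as above, and using IP protocol 6 for TCP and 17 for UDP; note this sketch points the flowid at the prio classes rather than at the child qdisc handles):
tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip protocol 6 0xff flowid 1:1
tc filter add dev eth0 protocol ip parent 1:0 prio 2 u32 match ip protocol 17 0xff flowid 1:2
# traffic that matches neither filter is placed by the prio priomap into one of the remaining bands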

How to mirror traffic with tc in Linux with multiple filters

I followed https://medium.com/swlh/traffic-mirroring-with-linux-tc-df4d36116119, trying to duplicate ARP traffic to dummy0 and redirect the rest to eth1. This sometimes works and sometimes doesn't, which is pretty frustrating. What am I missing to stabilise its behaviour? I am running Linux kernel 5.15:
tc qdisc add dev eth0 ingress
tc filter add dev eth0 protocol arp parent ffff: prio 10 u32 match u8 0 0 action mirred egress mirror dev dummy0
tc filter add dev eth0 parent ffff: protocol all u32 match u8 0 0 action mirred egress redirect dev eth1
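Not an answer from the thread, but one guess: the second filter has no explicit priority, so its position relative to the ARP filter may depend on how the kernel auto-assigns preferences. A minimal sketch that pins both preferences, so the ARP mirror is always evaluated before the catch-all redirect:
tc qdisc del dev eth0 ingress 2>/dev/null
tc qdisc add dev eth0 ingress
tc filter add dev eth0 parent ffff: protocol arp prio 10 u32 match u8 0 0 action mirred egress mirror dev dummy0
tc filter add dev eth0 parent ffff: protocol all prio 20 u32 match u8 0 0 action mirred egress redirect dev eth1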

Linux: HTB + PRIO qdisc: sometimes empty PRIO 1 queue

I am facing a behaviour I can't explain at the moment.
Here is what I am trying to do: limit the sending rate through an interface (enp0s8) and apply a PRIO qdisc, so that packets are sent to the appropriate band of the PRIO qdisc according to their DSCP field.
To do this, I applied both an HTB qdisc (to shape traffic) and a PRIO qdisc.
For testing purposes, I am sending traffic with iperf.
Here are the HTB and PRIO configurations:
#HTB qdisc
tc qdisc add dev enp0s8 root handle 1: htb default 1
tc class add dev enp0s8 parent 1: classid 1:1 htb rate 2000kbit ceil 2000kbit burst 2000kbit
# PRIO qdisc
tc qdisc add dev enp0s8 parent 1:1 handle 2: prio bands 4
tc qdisc add dev enp0s8 parent 2:1 pfifo
tc qdisc add dev enp0s8 parent 2:2 pfifo
tc qdisc add dev enp0s8 parent 2:3 pfifo
tc qdisc add dev enp0s8 parent 2:4 pfifo
# PRIO filters
tc filter add dev enp0s8 parent 2:0 prio 1 protocol ip u32 match ip tos 0x28 0xff flowid 2:1
tc filter add dev enp0s8 parent 2:0 prio 2 protocol ip u32 match ip tos 0x38 0xff flowid 2:2
tc filter add dev enp0s8 parent 2:0 prio 3 protocol ip u32 match ip tos 0x48 0xff flowid 2:3
tc filter add dev enp0s8 parent 2:0 prio 4 protocol ip u32 match ip tos 0x58 0xff flowid 2:4
Here is my iperf script for testing:
killall iperf
iperf -c 192.168.56.1 -u -b 2.4m -l 1458B -S 0x58 -t 1024 &
sleep 20
iperf -c 192.168.56.1 -u -b 2.4m -l 1458B -S 0x38 -t 1024 &
sleep 20
iperf -c 192.168.56.1 -u -b 2.4m -l 1458B -S 0x28 -t 1024 &
With this test, the following behaviour is expected:
The lowest-priority traffic (0x58) is sent and shaped to 2 Mbps.
The higher-priority traffic (0x38) is then sent and shaped to 2 Mbps; it is supposed to preempt bandwidth from the lower-priority traffic.
The highest-priority traffic (0x28) is sent and shaped to 2 Mbps; it is supposed to preempt bandwidth from all other flows.
It seems to work fine most of the time, but from time to time I see the traffic with the second-highest priority reappearing for a few seconds when it should not (see the wireshark capture).
I checked the iperf flow, which is steady.
Do you have an idea that explains this behaviour, or a way to troubleshoot it?
Thank you in advance.
Alexandre
P.S. (wireshark capture not included)
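One way to troubleshoot (a sketch, not from the original thread): watch the per-band and per-filter counters while the three iperf flows run, to see which band the stray packets are actually leaving from and whether each TOS filter is matching:
watch -n 1 'tc -s qdisc show dev enp0s8; tc -s class show dev enp0s8'
tc -s filter show dev enp0s8 parent 2:0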

Linux tc (traffic control) tool - problems with SSH connection after configuration

I have a simple goal: to emulate 'bad traffic' between two VMs, A (server) and B (client).
My script on node B (the client):
tc qdisc del dev eth4 root
tc qdisc add dev eth4 root handle 1: prio
tc qdisc add dev eth4 parent 1:1 handle 2: netem delay 300ms 300ms loss 10%
tc filter add dev eth4 parent 1:0 protocol ip pref 55 handle ::55 u32 match ip dport 4800 0xffff match ip dst 172.29.49.115 flowid 1:1
tc filter add dev eth4 parent 1:0 protocol ip pref 55 handle ::56 u32 match ip src 172.29.49.115 flowid 1:1
That works fine, but the issue I'm experiencing is that the emulation also impacts my SSH connection to node B [C (my machine) - B (client)]. When I set packet loss to 60%, it becomes almost impossible to keep working...
How can I avoid that?
By the way, the filters seem to work fine: pinging google.com from B works fine, pinging A from B shows packet loss and delay, and pinging B from C (my machine) also shows no delays.
The band for the high-priority traffic (guaranteed maximum speed, priority 1):
tc qdisc add dev eth3 parent 1:5 handle 50:0 sfq perturb 1
Put SSH on the high-priority band:
tc filter add dev eth3 parent 1:0 protocol ip prio 0 u32 match ip sport 22 0xffff flowid 1:5
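Worth checking (my assumption, not something stated in the thread): interactive SSH traffic often carries the low-delay TOS bit, and the default prio priomap sends that to band 0 (class 1:1), which is exactly where the netem sits, so SSH can be delayed even when no filter matches it. The counters will show whether SSH packets are reaching the netem:
# per-filter match counters and per-qdisc packet counters on the impaired interface
tc -s filter show dev eth4
tc -s qdisc show dev eth4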

How to limit network speed using tc?

I have three machines: a client, a server, and a proxy. On the client (1.1.1.1) I have a simple script that uses wget; on the server (2.2.2.1) I have Apache. On the proxy machine I want to limit the speed using tc. The proxy has two interfaces: eth0 (1.1.1.2) connected to the client and eth1 (2.2.2.2) connected to the server.
I tried adding these rules, but they are not working:
tc qdisc add dev eth1 root handle 1: cbq avpkt 1000 bandwidth 10mbit
tc class add dev eth1 parent 1: classid 1:1 cbq rate 1mbit \
allot 1500 prio 5 bounded isolated
tc filter add dev eth1 parent 1: protocol ip prio 16 u32 \
match ip dst 1.1.1.1 flowid 1:1
Found the answer. This is my script:
#!/bin/bash
set -x
tc qdisc del dev eth0 root
tc qdisc add dev eth0 root handle 1: cbq avpkt 1000 bandwidth 100mbit
tc class add dev eth0 parent 1: classid 1:1 cbq rate 1mbps \
allot 1500 prio 5 bounded isolated
tc filter add dev eth0 parent 1: protocol ip prio 16 u32 \
match ip dst 1.1.1.1 flowid 1:1
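If cbq is unavailable (it has been dropped from recent kernels), a roughly equivalent limit can be sketched with htb; the interface, address and prio values below simply mirror the script above, and the rates are assumptions to adjust:
#!/bin/bash
set -x
tc qdisc del dev eth0 root 2>/dev/null
tc qdisc add dev eth0 root handle 1: htb default 2
# traffic to the client 1.1.1.1 is capped at 1mbit; everything else falls into the
# (effectively unlimited) default class 1:2
# note: in tc units, 'mbit' is megabits per second and 'mbps' is megabytes per second
tc class add dev eth0 parent 1: classid 1:1 htb rate 1mbit
tc class add dev eth0 parent 1: classid 1:2 htb rate 100mbit
tc filter add dev eth0 parent 1: protocol ip prio 16 u32 \
    match ip dst 1.1.1.1 flowid 1:1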

tc: attaching many netems to an interface

I'm looking to simulate delays for a set of services that run on different ports on a host. I would like to simulate different delays for different services, potentially many on a given host, ideally without any hard limit on the number.
The only way I've found to do this is with a prio qdisc. Here's an example:
IF=eth0
tc qdisc add dev $IF root handle 1: prio bands 16 priomap 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
# this loop is only a sample to make the example runnable; values will vary in practice
for handle in {2..16}; do
# add a delay for the service on port 20000 + $handle
tc qdisc add dev $IF handle $handle: parent 1:$handle netem delay 1000ms # example; this will vary in practice
# add a filter to send the traffic to the correct netem
tc filter add dev $IF pref $handle protocol ip u32 match ip sport $(( 20000 + $handle )) 0xffff flowid 1:$handle
done
If you run the commands above, you'll notice that handles 11-16 don't get created; they fail with an error.
NB. Here's an undo for the above.
IF=eth0
for handle in {2..16}; do
tc filter del dev $IF pref $handle
tc qdisc del dev $IF handle $handle: parent 1:$handle netem delay 1000ms
done
tc qdisc del dev $IF root
Is there a way to add more than 10 netems to an interface?
Solved it using htb and classes:
IF=eth0
tc qdisc add dev $IF root handle 1: htb
tc class add dev $IF parent 1: classid 1:1 htb rate 1000Mbps
for handle in {2..32}; do
tc class add dev $IF parent 1:1 classid 1:$handle htb rate 1000Mbps
tc qdisc add dev $IF handle $handle: parent 1:$handle netem delay 1000ms
tc filter add dev $IF pref $handle protocol ip u32 match ip sport $(( 20000 + $handle )) 0xffff flowid 1:$handle
done
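A small follow-up (not from the original answer): with the htb variant, deleting the root qdisc tears down all of the classes, netem qdiscs and filters at once, so no per-handle undo loop is needed:
IF=eth0
tc qdisc del dev $IF root
tc qdisc show dev $IF   # the interface should be back on its default qdisc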

Resources