Linux network driver and MTU

I'm writing a network device driver for Linux. For certain reasons I have to dynamically reduce the MTU of my network interface during initialization.
When the driver calculates and sets the MTU to, for example, 892, the kernel trims outgoing frames to 310 bytes.
The question is: why are the outgoing packets 310 bytes long instead of 892 bytes?
Edit: (part of code calculating MTU)
bitrate = 250; // [kbit/s]
limit_us = upstream_timeslot = 5000; // [us]
mtu = (limit_us - 144) * bitrate * 2 / 15625 - fp->wdev->hard_header_len - sizeof(struct ether_header);
It just calculates the maximum size of one frame that fits into the transmission timeslot at the slow 250 kbit/s rate.
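For context, here is a minimal sketch (not the asker's driver; the function name and bounds are illustrative) of how a driver normally applies such a computed value: it ends up in dev->mtu, either assigned directly during initialization or via the ndo_change_mtu callback that dev_set_mtu() invokes.
#include <linux/netdevice.h>
#include <linux/errno.h>

/* Hypothetical ndo_change_mtu implementation; the networking core calls it
 * through dev_set_mtu() and then uses dev->mtu to limit outgoing frame sizes. */
static int my_change_mtu(struct net_device *dev, int new_mtu)
{
    if (new_mtu < 68 || new_mtu > 1500)   /* illustrative sanity clamp */
        return -EINVAL;
    dev->mtu = new_mtu;
    return 0;
}
Checking what actually ends up in dev->mtu after initialization (ip link show reports it) would at least tell whether the 310-byte limit comes from the MTU value itself or from something else in the transmit path.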

Related

dpdk-testpmd receives packets but does not send anything

Development setup:
AMD 3700X on a B450 motherboard
2 x Intel I210 1 Gb NICs (one port each, connected to one another)
Ubuntu 20.04
linux kernel 5.6.19-050619-generic
DPDK version stable-20.11.3
$ sudo usertools/dpdk-devbind.py --status
Network devices using DPDK-compatible driver
============================================
0000:06:00.0 'I210 Gigabit Network Connection 1533' drv=uio_pci_generic unused=vfio-pci
Network devices using kernel driver
===================================
0000:04:00.0 'I210 Gigabit Network Connection 1533' if=enigb1 drv=igb unused=vfio-pci,uio_pci_generic *Active*
0000:05:00.0 'RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller 8168' if=enp5s0 drv=r8169 unused=vfio-pci,uio_pci_generic *Active*
One NIC (00:04:00.0) provides traffic (88 packets captured from a live device) in plain Linux mode (nothing to do with DPDK):
$sudo tcpreplay -q -p 1000 -l 5 -i enigb1 random_capture_realtek_dec_14.pcapng 2>&1 | grep -v "PF_PACKET\|Warning"
Actual: 440 packets (96555 bytes) sent in 0.440001 seconds
Rated: 219442.6 Bps, 1.75 Mbps, 999.99 pps
Statistics for network device: enigb1
Successful packets: 440
Failed packets: 5
Truncated packets: 0
Retried packets (ENOBUFS): 0
Retried packets (EAGAIN): 0
The other NIC (00:06:00.0) has dpdk-testpmd running on it:
sudo ./build/app/dpdk-testpmd -c 0xf000 -n 2 --huge-dir=/mnt/huge-2M -a 06:00.0 -- --portmask=0x3
EAL: Detected 16 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: 3 hugepages of size 1073741824 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Probe PCI driver: net_e1000_igb (8086:1533) device: 0000:06:00.0 (socket 0)
EAL: No legacy callbacks, legacy socket not created
testpmd: create a new mbuf pool <mb_pool_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
Configuring Port 0 (socket 0)
Port 0: 68:05:CA:E3:05:A2
Checking link statuses...
Done
No commandline core given, start packet forwarding
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 13 (socket 0) forwards packets on 1 streams:
RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
io packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=512 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=8 hthresh=1 wthresh=16
TX offloads=0x0 - TX RS bit threshold=0
Press enter to exit
Port 0: link state change event
Telling cores to stop...
Waiting for lcores to finish...
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 153 RX-dropped: 287 RX-total: 440
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 153 RX-dropped: 287 RX-total: 440
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
No matter how many packets I pump through the pipeline, at most 153 packets are processed; the rest are dropped and nothing is sent. I'd guess those 153 packets clog the TX queue in 00:06:00.0 and that's it.
I tried dpdk-pktgen as well, and it sent nothing either.
Note: the NIC used to send the packets (tcpreplay) is the same model as the one used for dpdk-testpmd.
What am I doing wrong, or what am I not doing (and I should)?
Update 1:
testpmd> set promisc all on
testpmd> start tx_first
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
(...)
testpmd> show port xstats 0
(...)
rx_good_packets: 153
tx_good_packets: 0
rx_good_bytes: 31003
tx_good_bytes: 0
rx_missed_errors: 287
(...)
rx_total_packets: 440
tx_total_packets: 285
rx_total_bytes: 97035
tx_total_bytes: 17100
tx_size_64_packets: 0
tx_size_65_to_127_packets: 0
tx_size_128_to_255_packets: 0
tx_size_256_to_511_packets: 0
tx_size_512_to_1023_packets: 0
tx_size_1023_to_max_packets: 0
testpmd> show fwd stats all
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 153 RX-dropped: 287 RX-total: 440
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------

AF-XDP: Is there a bug regarding small packets?

Is there a known (or maybe unknown) bug regarding the size of packets in the AF-XDP socket framework (+ libbpf)?
I am experiencing a strange packet loss for my application:
IPv4/UDP/RTP packet stream with all packets being the same size (1442 bytes): no packet loss
IPv4/UDP/RTP packet stream where pretty much all packets are the same size (1492 bytes), except special "marker" packets (only 357 bytes, but also IPv4/UDP packets): all marker packets get lost
I added a bpf_printk statement to my XDP kernel program:
const int len = bpf_ntohs(iph->tot_len);
if (len < 400) {
    bpf_printk("FOUND PACKET LEN < 400: %d.\n", len);
}
This output is never observed via sudo cat /sys/kernel/debug/tracing/trace_pipe. So these small RTP marker packets aren't even reaching my kernel filter; no wonder I don't receive them in userspace.
ethtool -S <if> shows me the counter rx_256_to_511_bytes_phy, which increases at roughly the rate the marker packets should arrive (about 30/s). So the NIC does receive the packets, but my XDP program doesn't. Why?
Any idea what could be the cause of this problem?
First, bpf_printk() doesn't always work for me. You may want to take a look at this snippet (kernel-space code):
// Nicer way to call bpf_trace_printk()
#define bpf_custom_printk(fmt, ...)                    \
({                                                     \
    char ____fmt[] = fmt;                              \
    bpf_trace_printk(____fmt, sizeof(____fmt),         \
                     ##__VA_ARGS__);                   \
})
// print:
bpf_custom_printk("This year is %d\n", 2020);
// output: sudo cat /sys/kernel/debug/tracing/trace_pipe
Second: maybe the packet arrived on a different NIC queue. You may want to take the vanilla code from xdp-tutorial, add the kernel tracing from the snippet above to print the packet size, then compile and run the example program with -q 1 for queue number 1, for example.
A way to get the size of a packet (note the operand order: data_end lies after data):
void *data_end = (void *)(long)ctx->data_end;
void *data = (void *)(long)ctx->data;
long size_pkt = data_end - data;
bpf_custom_printk("Packet size %ld\n", size_pkt);

flowgraph fails with "LLLL..." for one network adapter, succeeds with other adapter

I want to execute the usrp_echotimer_dual_cw example from the GNURadio gr-radar OOT module.
The flowgraph works fine with the internal gigabit ethernet adapter but fails with the external PCI gigabit ethernet adapter.
Here is the output of the successful execution (eth0) of the flowgraph:
Executing: "/home/christophe/new/examples/usrp/usrp_echotimer_dual_cw.py"
linux; GNU C++ version 4.8.4; Boost_105400; UHD_003.010.git-101-g4a1cb1f2
Using Volk machine: avx_64_mmx_orc
-- Opening a USRP2/N-Series device...
-- Current recv frame size: 1472 bytes
-- Current send frame size: 1472 bytes
UHD Warning:
The recv buffer could not be resized sufficiently.
Target sock buff size: 50000000 bytes.
Actual sock buff size: 1000000 bytes.
See the transport application notes on buffer resizing.
Please run: sudo sysctl -w net.core.rmem_max=50000000
UHD Warning:
The recv buffer could not be resized sufficiently.
Target sock buff size: 50000000 bytes.
Actual sock buff size: 1000000 bytes.
See the transport application notes on buffer resizing.
Please run: sudo sysctl -w net.core.rmem_max=50000000
UHD Warning:
The send buffer could not be resized sufficiently.
Target sock buff size: 1048576 bytes.
Actual sock buff size: 1000000 bytes.
See the transport application notes on buffer resizing.
Please run: sudo sysctl -w net.core.wmem_max=1048576
Using USRP Device (TX):
Single USRP:
Device: USRP2 / N-Series Device
Mboard 0: N210r4
RX Channel: 0
RX DSP: 0
RX Dboard: A
RX Subdev: RFX2400 RX
TX Channel: 0
TX DSP: 0
TX Dboard: A
TX Subdev: RFX2400 TX
Setting TX Rate: 14250000
UHD Warning:
The requested interpolation is odd; the user should expect CIC rolloff.
Select an even interpolation to ensure that a halfband filter is enabled.
interpolation = dsp_rate/samp_rate -> 7 = (100.000000 MHz)/(14.250000 MHz)
UHD Warning:
The hardware does not support the requested TX sample rate:
Target sample rate: 14.250000 MSps
Actual sample rate: 14.285714 MSps
Actual TX Rate: 1.42857e+07
UHD Warning:
The requested interpolation is odd; the user should expect CIC rolloff.
Select an even interpolation to ensure that a halfband filter is enabled.
interpolation = dsp_rate/samp_rate -> 7 = (100.000000 MHz)/(14.285714 MHz)
-- Opening a USRP2/N-Series device...
-- Current recv frame size: 1472 bytes
-- Current send frame size: 1472 bytes
UHD Warning:
The recv buffer could not be resized sufficiently.
Target sock buff size: 50000000 bytes.
Actual sock buff size: 1000000 bytes.
See the transport application notes on buffer resizing.
Please run: sudo sysctl -w net.core.rmem_max=50000000
UHD Warning:
The recv buffer could not be resized sufficiently.
Target sock buff size: 50000000 bytes.
Actual sock buff size: 1000000 bytes.
See the transport application notes on buffer resizing.
Please run: sudo sysctl -w net.core.rmem_max=50000000
UHD Warning:
The send buffer could not be resized sufficiently.
Target sock buff size: 1048576 bytes.
Actual sock buff size: 1000000 bytes.
See the transport application notes on buffer resizing.
Please run: sudo sysctl -w net.core.wmem_max=1048576
Using USRP Device (RX):
Single USRP:
Device: USRP2 / N-Series Device
Mboard 0: USRP2 r3
RX Channel: 0
RX DSP: 0
RX Dboard: A
RX Subdev: RFX2400 RX
TX Channel: 0
TX DSP: 0
TX Dboard: A
TX Subdev: RFX2400 TX
Setting RX Rate: 14250000
UHD Warning:
The requested decimation is odd; the user should expect CIC rolloff.
Select an even decimation to ensure that a halfband filter is enabled.
decimation = dsp_rate/samp_rate -> 7 = (100.000000 MHz)/(14.250000 MHz)
UHD Warning:
The hardware does not support the requested RX sample rate:
Target sample rate: 14.250000 MSps
Actual sample rate: 14.285714 MSps
Actual RX Rate: 1.42857e+07
UHD Warning:
The requested decimation is odd; the user should expect CIC rolloff.
Select an even decimation to ensure that a halfband filter is enabled.
decimation = dsp_rate/samp_rate -> 7 = (100.000000 MHz)/(14.285714 MHz)
set_min_output_buffer on block 5 to 4194304
set_min_output_buffer on block 6 to 4194304
set_min_output_buffer on block 7 to 4194304
set_min_output_buffer on block 8 to 4194304
set_min_output_buffer on block 9 to 4194304
set_min_output_buffer on block 17 to 4194304
set_min_output_buffer on block 18 to 4194304
set_min_output_buffer on block 19 to 4194304
set_min_output_buffer on block 20 to 4194304
set_min_output_buffer on block 21 to 4194304
set_min_output_buffer on block 22 to 4194304
// Print results
rx_time: 2:0.0476682
velocity: -5.82422
range: 4.83493
// Print results
rx_time: 2:0.411017
velocity: 0.416016
range: 4.80168
// Print results
rx_time: 2:0.770647
velocity: 1.24805
range: 4.66541
// Print results
rx_time: 3:0.122836
velocity: 1.24805
range: 4.86308
// Print results
rx_time: 3:0.489277
velocity: 1.66406
range: 4.80136
// Print results
rx_time: 3:0.848081
velocity: -15.8086
range: 5.37644
// Print results
rx_time: 4:0.198079
velocity: 4.57617
range: 5.10404
// Print results
rx_time: 4:0.558055
velocity: 4.99219
range: 4.4827
// Print results
rx_time: 4:0.917475
velocity: -0.832031
range: 4.62831
// Print results
rx_time: 5:0.266247
velocity: 6.65625
range: 5.24577
// Print results
rx_time: 5:0.625892
velocity: -6.65625
range: 5.5386
The failed execution (eth1) looks like this:
Executing: "/home/christophe/new/examples/usrp/usrp_echotimer_dual_cw.py"
linux; GNU C++ version 4.8.4; Boost_105400; UHD_003.010.git-101-g4a1cb1f2
Using Volk machine: avx_64_mmx_orc
-- Opening a USRP2/N-Series device...
-- Current recv frame size: 1472 bytes
-- Current send frame size: 1472 bytes
UHD Warning:
The recv buffer could not be resized sufficiently.
Target sock buff size: 50000000 bytes.
Actual sock buff size: 1000000 bytes.
See the transport application notes on buffer resizing.
Please run: sudo sysctl -w net.core.rmem_max=50000000
UHD Warning:
The recv buffer could not be resized sufficiently.
Target sock buff size: 50000000 bytes.
Actual sock buff size: 1000000 bytes.
See the transport application notes on buffer resizing.
Please run: sudo sysctl -w net.core.rmem_max=50000000
UHD Warning:
The send buffer could not be resized sufficiently.
Target sock buff size: 1048576 bytes.
Actual sock buff size: 1000000 bytes.
See the transport application notes on buffer resizing.
Please run: sudo sysctl -w net.core.wmem_max=1048576
Using USRP Device (TX):
Single USRP:
Device: USRP2 / N-Series Device
Mboard 0: N210r4
RX Channel: 0
RX DSP: 0
RX Dboard: A
RX Subdev: RFX2400 RX
TX Channel: 0
TX DSP: 0
TX Dboard: A
TX Subdev: RFX2400 TX
Setting TX Rate: 14250000
UHD Warning:
The requested interpolation is odd; the user should expect CIC rolloff.
Select an even interpolation to ensure that a halfband filter is enabled.
interpolation = dsp_rate/samp_rate -> 7 = (100.000000 MHz)/(14.250000 MHz)
UHD Warning:
The hardware does not support the requested TX sample rate:
Target sample rate: 14.250000 MSps
Actual sample rate: 14.285714 MSps
Actual TX Rate: 1.42857e+07
UHD Warning:
The requested interpolation is odd; the user should expect CIC rolloff.
Select an even interpolation to ensure that a halfband filter is enabled.
interpolation = dsp_rate/samp_rate -> 7 = (100.000000 MHz)/(14.285714 MHz)
-- Opening a USRP2/N-Series device...
-- Current recv frame size: 1472 bytes
-- Current send frame size: 1472 bytes
UHD Warning:
The recv buffer could not be resized sufficiently.
Target sock buff size: 50000000 bytes.
Actual sock buff size: 1000000 bytes.
See the transport application notes on buffer resizing.
Please run: sudo sysctl -w net.core.rmem_max=50000000
UHD Warning:
The recv buffer could not be resized sufficiently.
Target sock buff size: 50000000 bytes.
Actual sock buff size: 1000000 bytes.
See the transport application notes on buffer resizing.
Please run: sudo sysctl -w net.core.rmem_max=50000000
UHD Warning:
The send buffer could not be resized sufficiently.
Target sock buff size: 1048576 bytes.
Actual sock buff size: 1000000 bytes.
See the transport application notes on buffer resizing.
Please run: sudo sysctl -w net.core.wmem_max=1048576
Using USRP Device (RX):
Single USRP:
Device: USRP2 / N-Series Device
Mboard 0: USRP2 r3
RX Channel: 0
RX DSP: 0
RX Dboard: A
RX Subdev: RFX2400 RX
TX Channel: 0
TX DSP: 0
TX Dboard: A
TX Subdev: RFX2400 TX
Setting RX Rate: 14250000
UHD Warning:
The requested decimation is odd; the user should expect CIC rolloff.
Select an even decimation to ensure that a halfband filter is enabled.
decimation = dsp_rate/samp_rate -> 7 = (100.000000 MHz)/(14.250000 MHz)
UHD Warning:
The hardware does not support the requested RX sample rate:
Target sample rate: 14.250000 MSps
Actual sample rate: 14.285714 MSps
Actual RX Rate: 1.42857e+07
UHD Warning:
The requested decimation is odd; the user should expect CIC rolloff.
Select an even decimation to ensure that a halfband filter is enabled.
decimation = dsp_rate/samp_rate -> 7 = (100.000000 MHz)/(14.285714 MHz)
set_min_output_buffer on block 5 to 4194304
set_min_output_buffer on block 6 to 4194304
set_min_output_buffer on block 7 to 4194304
set_min_output_buffer on block 8 to 4194304
set_min_output_buffer on block 9 to 4194304
set_min_output_buffer on block 17 to 4194304
set_min_output_buffer on block 18 to 4194304
set_min_output_buffer on block 19 to 4194304
set_min_output_buffer on block 20 to 4194304
set_min_output_buffer on block 21 to 4194304
set_min_output_buffer on block 22 to 4194304
DReceive timeout before all samples received...
ULLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLL
LLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLterminate called after throwing an instance of 'std::runtime_error'
what(): Receiver error ERROR_CODE_OVERFLOW (Out of sequence error)
These are my network cards:
[0] christophe:~ % lspci -nn | grep -i ethernet
00:19.0 Ethernet controller [0200]: Intel Corporation 82579LM Gigabit Network Connection [8086:1502] (rev 04)
02:05.0 Ethernet controller [0200]: Intel Corporation 82541PI Gigabit Ethernet Controller [8086:107c] (rev 05)
I don't think it has to do with the performance of the computer, because the flowgraph works fine on a much weaker laptop PC (Intel Core 2 Duo) but fails on my i7 desktop PC.
00:19.0 Ethernet controller [0200]: Intel Corporation 82579LM Gigabit Network Connection [8086:1502] (rev 04)
The 82579LM is the only non-USB gigabit Ethernet adapter known to randomly drop packets without giving any notice to the operating system.
I'm afraid you will have to use a different PCIe gigabit adapter.
By the way, L means that your packet (normally containing some command for the USRP to execute) arrived later than the time that was specified for the command to happen.
The source code where the L gets printed:
else if (metadata.event_code &
         async_metadata_t::EVENT_CODE_TIME_ERROR)
    UHD_MSG(fastpath) << "L";
Now, EVENT_CODE_TIME_ERROR means that
Packet had time that was late.
This might be an effect of your network card dropping packets earlier, so that the timing commands somehow got out of order, or your application got confused. Normally it's an indication of an application misdesign, but since your network hardware is definitely buggy (I've seen that controller fail many times, sorry), I would fix that first before investigating further.

How do I get packet length and ip addresses in libpcap

From this example
void process_packet(u_char *args, const struct pcap_pkthdr *header, const u_char *buffer)
{
    int size = header->len;
    //Get the IP header part of this packet, excluding the Ethernet header
    struct iphdr *iph = (struct iphdr *)(buffer + sizeof(struct ethhdr));
    ++total;
    switch (iph->protocol) //Check the protocol and act accordingly...
    {
        case 1:  //ICMP Protocol
            ++icmp;
            print_icmp_packet(buffer, size);
            break;
        case 2:  //IGMP Protocol
            ++igmp;
            break;
        case 6:  //TCP Protocol
            ++tcp;
            print_tcp_packet(buffer, size);
            break;
        case 17: //UDP Protocol
            ++udp;
            print_udp_packet(buffer, size);
            break;
        default: //Some other protocol, like ARP etc.
            ++others;
            break;
    }
    printf("TCP : %d UDP : %d ICMP : %d IGMP : %d Others : %d Total : %d\r", tcp, udp, icmp, igmp, others, total);
}
variable size is, I guess, the size of the header. How do I get the size of the whole packet?
Also, how do I convert uint32_t IP addresses to human readable IP addresses of the form xxx.xxx.xxx.xxx?
variable size is, I guess, the size of the header.
You have guessed incorrectly.
To quote the pcap man page:
Packets are read with pcap_dispatch() or pcap_loop(), which process one or more packets, calling a callback routine for each packet, or with pcap_next() or pcap_next_ex(), which return the next packet. The callback for pcap_dispatch() and pcap_loop() is supplied a pointer to a struct pcap_pkthdr, which includes the following members:
ts      a struct timeval containing the time when the packet was captured
caplen  a bpf_u_int32 giving the number of bytes of the packet that are available from the capture
len     a bpf_u_int32 giving the length of the packet, in bytes (which might be more than the number of bytes available from the capture, if the length of the packet is larger than the maximum number of bytes to capture).
so "len" is the total length of the packet. However, there may not be "len" bytes of data available; if the capture was done with a "snapshot length", for example with tcpdump, dumpcap, or TShark using the -s option, the packet could have been cut short, and "caplen" would indicate how many bytes of data you actually have.
Note, however, that Ethernet packets have a minimum length of 60 bytes (not counting the 4-byte FCS at the end, which you probably won't get in your capture), including the 14-byte Ethernet header; this means that short packets must be padded. 60-14 = 46, so if a host sends, over Ethernet, an IP packet that's less than 46 bytes long, it must pad the Ethernet packet.
This means that the "len" field gives the total length of the Ethernet packet, but if you subtract the 14 bytes of Ethernet header from "len", you won't get the length of the IP packet. To get that, you'll need to look in the IP header at the "total length" field. (Don't assume it'll be less than or equal to the value of "len" - 14; a machine might have sent an invalid IP packet.)
Also, how do I convert uint32_t IP addresses to human readable IP addresses of the form xxx.xxx.xxx.xxx?
By calling routines such as inet_ntoa(), inet_ntoa_r(), or inet_ntop().
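As a short sketch of both points (assuming the process_packet() callback from the question, so iph and header are already in scope; the two #include lines go at the top of the file and are not part of the original example):
#include <arpa/inet.h>    /* inet_ntop(), ntohs() */
#include <netinet/ip.h>   /* struct iphdr */

/* inside process_packet(): */
char src[INET_ADDRSTRLEN], dst[INET_ADDRSTRLEN];
int ip_len = ntohs(iph->tot_len);   /* length of the IP packet itself, from its header */

inet_ntop(AF_INET, &iph->saddr, src, sizeof(src));
inet_ntop(AF_INET, &iph->daddr, dst, sizeof(dst));
printf("%s -> %s  IP length %d, captured %u of %u bytes\n",
       src, dst, ip_len, header->caplen, header->len);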
No, header->len is the length of this packet, just what you want.
See the header file pcap.h:
struct pcap_pkthdr {
    struct timeval ts;   /* time stamp */
    bpf_u_int32 caplen;  /* length of portion present */
    bpf_u_int32 len;     /* length this packet (off wire) */
};
You can use sprintf() to convert the uint32_t IP field to the xxx.xxx.xxx.xxx form.
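For example (a sketch, assuming ip holds an IPv4 address in network byte order, like iph->saddr from the question's code):
uint32_t ip = iph->saddr;                 /* network byte order: most significant byte first */
unsigned char *b = (unsigned char *)&ip;
char buf[16];                             /* "255.255.255.255" plus the terminating NUL */
sprintf(buf, "%u.%u.%u.%u", b[0], b[1], b[2], b[3]);
inet_ntop() from <arpa/inet.h> does the same job and also handles IPv6.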

How to correctly calculate address spaces?

Below is an example of a question given on my last test in a Computer Engineering course. Anyone mind explaining to me how to get the start/end addresses of each? I have listed the correct answers at the bottom...
The MSP430F2410 device has an address space of 64 KB (the basic MSP430 architecture). Fill in the table below if we know the following. The first 16 bytes of the address space (starting at the address 0x0000) is reserved for special function registers (IE1, IE2, IFG1, IFG2, etc.), the next 240 bytes is reserved for 8-bit peripheral devices, and the next 256 bytes is reserved for 16-bit peripheral devices. The RAM memory capacity is 2 Kbytes and it starts at the address 0x1100. At the top of the address space is 56KB of flash memory reserved for code and interrupt vector table.
What                                    Start Address   End Address
Special Function Registers (16 bytes)   0x0000          0x000F
8-bit peripheral devices (240 bytes)    0x0010          0x00FF
16-bit peripheral devices (256 bytes)   0x0100          0x01FF
RAM memory (2 Kbytes)                   0x1100          0x18FF
Flash Memory (56 Kbytes)                0x2000          0xFFFF
For starters, don't get thrown off by what's stored in each segment - that's only going to confuse you. The problem is just asking you to figure out the hex numbering, and that's not too difficult. Here are the requirements:
64 KB total memory
The first 16 bytes of the address space (starting at the address 0x0000) is reserved for special function registers (IE1, IE2, IFG1, IFG2, etc.)
The next 240 bytes is reserved for 8-bit peripheral devices
The next 256 bytes is reserved for 16-bit peripheral devices
The RAM memory capacity is 2 Kbytes and it starts at the address 0x1100
At the top of the address space is 56KB of flash memory reserved for code and interrupt vector table.
Since each hex digit in your memory address can hold 16 values (0-F), you'll need 4 digits to address 64 KB of memory (16^4 = 65536, or 64K).
You start with 16 bytes, and that covers 0x0000 - 0x000F (one full digit of your address). That means the next segment (the 8-bit devices), which starts immediately after it, begins at 0x0010 (the next byte), and since it's 240 bytes long it ends at the 256th byte (16 + 240 = 256), i.e. at address 0x00FF.
The next segment (the 16-bit devices) starts at the next byte, 0x0100, and is 256 bytes long, which puts its end at 0x01FF.
Then comes 2 KB (2048 bytes) of RAM, but it starts at 0x1100, as the description states, instead of immediately after the previous segment, so that's your starting address. Add 2048 - 1 = 2047 (0x7FF) to that and you get the end address, 0x18FF.
The last segment covers the upper part of the memory, so you'll have to work backwards. You know it ends at 0xFFFF (the end of the available memory) and is 56 KB long. 56 KB is 0xE000 bytes, so a segment of that size starting at 0x0000 would end at 0xDFFF and leave the top 0x2000 bytes (0xE000 - 0xFFFF) unused; in other words, to end at the top of the address space it has to start at 0x10000 - 0xE000 = 0x2000.
I hope that's more clear, though when I read over it, I don't know that it's any help at all :( Maybe that's why I'll leave teaching that concept to somebody more qualified...
/* Prints the same table; RAM and flash do not follow on directly, so their starts are pinned. */
#include <stdio.h>
#include <stdint.h>

#define NUM_SIZES 5

int main(void) {
    uint32_t sizes[NUM_SIZES]  = {16, 240, 256, 2 * 1024, 56 * 1024};
    uint32_t starts[NUM_SIZES] = {0, 0, 0, 0x1100, 0x2000};  /* 0 = right after previous */
    uint32_t address = 0;
    printf("Start   End\n");
    for (int i = 0; i < NUM_SIZES; i++) {
        if (starts[i]) address = starts[i];
        printf("0x%04X  0x%04X\n", address, address + sizes[i] - 1);
        address += sizes[i];
    }
    return 0;
}
