Thread Networks: why have a Full End Device?

The Thread Specification defines two types of devices with their respective sub-types:
1. Full Thread Devices (FTD)
1.1. Router
1.2. Router Eligible End Device (REED)
1.3. Full End Device (FED)
2. Minimal Thread Devices (MTD)
2.1. Minimal End Device (MED)
2.2. Sleepy End Device (SED)
2.3. Synchronized Sleepy End Device (SSED)
From my understanding of the specification, a FED cannot forward messages and is not capable of becoming a Router. This sounds to me like a power-hungry MTD, since it has its radio always on.
Then why does it even exist?

openthread.io has a good definition of the differences between FED & MTD:
A Full Thread Device (FTD) always has its radio on, subscribes to the all-routers multicast address, and maintains IPv6 address mappings...Full End Device (FED) — cannot be promoted to a Router
A Minimal Thread Device does not subscribe to the all-routers multicast address and forwards all messages to its Parent...Minimal End Device (MED) — transceiver always on, does not need to poll for messages from its parent
An MTD saves power compared to an FTD at the expense of some functionality. Being a Router or Router-eligible requires more RAM. So if you have an always-on, line-powered device but need to save RAM (or don't have any to spare), it makes sense to configure it as a FED.
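As a concrete illustration: in OpenThread, a device built as an FTD can be kept from ever becoming a Router by clearing its router eligibility. A minimal sketch, assuming an FTD build and an already-initialized otInstance:

#include <stdbool.h>
#include <openthread/instance.h>
#include <openthread/thread_ftd.h>

/* Turn an FTD into a FED: keep full-device behavior (radio always on,
 * no polling of the parent) but never get promoted to a Router. */
void configure_as_fed(otInstance *instance)
{
    /* With router eligibility cleared, the device stays an end device. */
    (void)otThreadSetRouterEligible(instance, false);
}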


Gigabit Ethernet takes ~370ms to report link down in MII register

To give you some context, I am trying to configure some network redundancy on Ubuntu 20.04.
I decided to use NIC bonding with:
2 Gigabit interfaces,
active-backup mode,
link state monitoring with MII (polling every 10ms).
It works as expected: when the active slave fails (wire unplugged), the bonding switches to the other slave. However, this transition is too slow for my purposes, as it takes up to 400 ms.
In order to investigate, I had a look at the MII registers using this command: mii-tool -vv ens33 (-vv prints the raw MII register contents).
Using a basic bash script, I print the current time plus the value of the second MII register, which contains the link state. As this is a "while true" loop with no pause, I get a status roughly every 20 ms, which is reactive enough for this monitoring.
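For reference, the same register can be polled from C via the SIOCGMIIPHY/SIOCGMIIREG ioctls, which is essentially what mii-tool does. A minimal sketch, with the interface name hard-coded and error handling trimmed:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/mii.h>
#include <linux/sockios.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "ens33", IFNAMSIZ - 1);

    /* Ask the driver which PHY address to use; fills mii->phy_id. */
    if (ioctl(fd, SIOCGMIIPHY, &ifr) < 0) { perror("SIOCGMIIPHY"); return 1; }

    struct mii_ioctl_data *mii = (struct mii_ioctl_data *)&ifr.ifr_data;
    for (;;) {
        mii->reg_num = MII_BMSR;   /* register 1: basic mode status */
        if (ioctl(fd, SIOCGMIIREG, &ifr) < 0) { perror("SIOCGMIIREG"); return 1; }
        /* Note: the link-status bit latches low, so a brief link loss
         * stays visible until the register has been read once. */
        printf("link %s\n", (mii->val_out & BMSR_LSTATUS) ? "up" : "down");
        usleep(10000);             /* poll every 10 ms */
    }
}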
Observed behavior:
when the interfaces are configured at 100 Mbit/s, Full Duplex => the MII register is updated almost immediately (no visible delay between unplugging the wire and the register value changing on screen).
however, when configured at 1 Gbit/s, Full Duplex (via auto-negotiation) => the MII register update takes much longer (clearly visible with the bash script). By sending pings every 10 ms over the bond interface and logging packets with Wireshark on the receiver side, the observed delay is around 370 ms.
The Ethernet controller is an Intel I210, and I have not found anything relevant in its datasheet.
My questions:
Do you see any reason why detecting link down takes so much longer at 1 Gbit/s than at 100 Mbit/s (370 ms vs. 50 ms)?
Any advice to improve the reactivity?
Thanks for your help :)
Every generation of Ethernet speed has its own link-monitoring method and requirements:
10BASE-T: "link pulses" are sent every 16 ±8 ms.
100BASE-TX: link pulses are not used; instead, the idle ("/I/") 4B/5B code words are monitored.
1000BASE-T: link failure is determined by the maxwait_timer, which IEEE 802.3 defines as 750 ±10 ms for Master ports and 350 ±10 ms for Slave ports.
As you can see, your measured 370 ms is exactly what you can expect from a Slave port.
Sources:
https://www.microchip.com/forums/FindPost/417307
https://ww1.microchip.com/downloads/en/Appnotes/VPPD-03240.pdf

What peer-to-peer protocol has the shortest specification?

I want to implement a P2P protocol in C for personal education purposes.
What would be the protocol with the shortest specification that is still used today?
I have already implemented a web and IRC client and server.
I agree with Mark that point-to-point over a serial link would be a good exercise.
In particular, I would recommend the following programme of stuff...
Implement basic transmission over a "Serial Port" (using RS-232 if you have some Arduinos/embedded processors lying around, or using a null modem emulator if you don't; see com0com on Windows, or this on linux/mac).
I.e. send lower case letters from A->B, and echo them back as upper case from B->A.
Implement SLIP as a way to reliably frame messages (see the sketch after this list).
I.e. you can send any string (e.g. "hello") and it is returned in upper case with "WORLD" appended ("HELLOWORLD").
Implement the "Read Multiple Holding Registers" and "Write Multiple Holding Registers" parts of the Modbus protocol, using SLIP to frame the messages.
I.e. you have one follower (slave) device and one leader (master) device. The follower has 10 bytes of memory that are exposed over Modbus with the initial value "helloworld".
Just hard-code the follower/leader device IDs for now.
The leader reads the value, and then sets it to "worldhello".
At the end of this you would start to have an understanding of the roles of the network layers, i.e.:
The physical layer - Serial/RS-232
A "link layer" of sorts - SLIP
An "application" layer - Modbus
Serial. The answer is serial. You're not going to get any leaner than simple RX/TX communication, but you'll lack a lot of convenience methods. If you want to explore more than simple bidirectional comms, I2C or Modbus opens up a lot of options.
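And for the Modbus step in the exercise above, the request itself is only a handful of bytes. A sketch of building a "Read Holding Registers" (function 0x03) request; the Modbus RTU CRC is omitted here on the assumption that SLIP provides the framing in this exercise:

#include <stddef.h>
#include <stdint.h>

/* Illustrative builder for a Modbus "Read Holding Registers" request.
 * Field layout follows the Modbus spec: address, function, start, count. */
size_t build_read_holding_registers(uint8_t unit_id, uint16_t start,
                                    uint16_t count, uint8_t out[6])
{
    out[0] = unit_id;                 /* follower (slave) address */
    out[1] = 0x03;                    /* function: Read Holding Registers */
    out[2] = (uint8_t)(start >> 8);   /* starting register, big-endian */
    out[3] = (uint8_t)(start & 0xFF);
    out[4] = (uint8_t)(count >> 8);   /* number of registers to read */
    out[5] = (uint8_t)(count & 0xFF);
    return 6;
}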

How to find out which I/O ports are assigned to my devices

Has Linux reserved I/O port numbers for all manufactured devices?
I have devices like an Intel built-in network card, and a Realtek USB Wi-Fi device.
In the Linux repository on GitHub, device drivers use specific I/O ports to register, and the kernel assigns those ports to the device driver. Drivers normally request ports by calling the request_region function; for example, one Ethernet driver (the 3c509) requests them like this:
/* Probe candidate ID ports: claim each one, write test values, and
 * keep the first port where the card responds. */
for (id_port = 0x110; id_port < 0x200; id_port += 0x10)
{
    if (!request_region(id_port, 1, "3c509-control"))
        continue;                    /* port already claimed by another driver */
    outb(0x00, id_port);             /* write test values to the port */
    outb(0xff, id_port);
    if (inb(id_port) & 0x01)
        break;                       /* a card answered on this port */
    else
        release_region(id_port, 1);  /* no card here, give the port back */
}
The loop above scans from 0x110 up to 0x200; the kernel can grant the driver any free port in this range. A port appearing in /proc/ioports means a driver is using that I/O port, and it shows up there as soon as request_region returns successfully.
Question: So my question is, has Linux assigned I/O ports to all manufactured devices usable with kernel 5.7 or the latest kernel version?
Question: What if I want to write a device driver for any device, how can I find the I/O port number range to request? I do not expect to have to look into the kernel code and find a similar driver's port range. So how can I find that I/O port number range? How do I achieve this first step required in writing a device driver (for any device, be it a USB Wi-Fi device or an Ethernet device)?
Question: So my question is, has Linux assigned I/O ports to all manufactured devices usable with kernel 5.7 or the latest kernel version?
No.
Question: What if I want to write a device driver for any device, how can I find the I/O port range to request?
You ask the user for it. After all, it's the user who set them, using jumpers on the ISA card.
Here's a picture of an old Sound Blaster card (taken from Wikipedia, I'm too lazy to rummage around in my basement now). I've highlighted a specific area in the picture:
That jumper header I highlighted is the port configuration jumper. As a user, you literally connect two of the pins with a jumper connector, and that connects a specific address line coming from the card connector to the circuitry on the rest of the card. This address line is part of the AT bus port I/O scheme. The user sets this jumper, writes down the number, and then tells the driver which number it was set to. That's how AT-style I/O ports worked.
Or the driver uses one of the well-known port numbers for specific hardware (like network controllers) that date back to the era when ISA-style ports were still a thing. There's also old ISA PnP, where the BIOS and the add-in cards would negotiate the port assignments at power-up, before the OS even started. You can read those port numbers with the ISA PnP API provided by the kernel.
We no longer use this kind of hardware in practice! Except for legacy and retro computing purposes.
Over a quarter of a century ago, the old AT/ISA bus was superseded by PCI. Today we use PCIe, which, from the point of view of software, still looks like PCI. One of the important things about PCI was that it completely dropped the whole concept of ports.
With ISA, what you had were 8 data lines and 16 address lines, plus two read/write-enable lines: one for memory-mapped I/O and one for port I/O. You can find the details here: https://archive.is/3jjZj. When you read from, say, port 0x0104, the CPU physically sets the bit pattern 0x0104 on the address lines of the ISA bus, pulls the read-enable line low, and then reads the voltage levels on the data lines. And all of that is implemented as an actual set of instructions of the x86: https://c9x.me/x86/html/file_module_x86_id_139.html
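On Linux/x86 you can still issue those port instructions from user space; a minimal sketch (requires root, and port 0x80 is just the traditional POST diagnostic port, chosen here because writing to it is conventionally harmless):

#include <stdio.h>
#include <sys/io.h>   /* ioperm, inb, outb (glibc, x86 Linux) */

int main(void)
{
    /* Ask the kernel for access to one I/O port; needs root/CAP_SYS_RAWIO. */
    if (ioperm(0x80, 1, 1) < 0) { perror("ioperm"); return 1; }

    outb(0x42, 0x80);                /* write cycle: drive the data lines */
    unsigned char v = inb(0x80);     /* read cycle on the same port */
    printf("port 0x80 reads back 0x%02x\n", v);
    return 0;
}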
Now look at the PCI bus: there are no longer separate data and address lines. Instead, read/write commands are sent, and everything happens through memory mappings. PCI devices have something called a BAR: a Base Address Register. This is configured by the PCI root complex and assigns the hardware the region of actual physical bus addresses where it appears. The OS has to get that BAR information from the PCI root complex. The driver uses the PCI IDs to have the hardware discovered and the BAR information told to it. It can then do memory reads/writes to talk to the hardware. No I/O ports involved. And that is just the lowest level. USB and Ethernet happen a lot further up; USB is quite abstract, as is Ethernet.
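You can see the BAR assignments Linux obtained without writing any driver code. A sketch that dumps the `resource` file sysfs exposes for a PCI device; the device address 0000:00:1f.6 is just an example, substitute one from `lspci -D`:

#include <stdio.h>

/* Print the resource regions (one "start end flags" line each) that the
 * kernel recorded for one PCI device; the first six lines are BAR0..BAR5. */
int main(void)
{
    const char *path = "/sys/bus/pci/devices/0000:00:1f.6/resource";
    FILE *f = fopen(path, "r");
    if (!f) { perror(path); return 1; }

    char line[128];
    int i = 0;
    while (fgets(line, sizeof(line), f))
        printf("resource %d: %s", i++, line);

    fclose(f);
    return 0;
}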
Your other question, Looking for driver developer datasheet of Intel(R) Core(TM) i5-2450M CPU @ 2.50GHz, suggests that you have some serious misconceptions about what is actually going on. You were asking about USB devices and Ethernet ports. Neither of those interacts directly with this part of the computer in any way.
Your question per se is interesting, but we're also running into a massive XYZ problem here; it's worse than an XY problem: you're asking about X although you want to solve Y, but Y isn't even the problem you're dealing with in the first place.
You're obviously smart, and curious, and I applaud that. But I have to tell you that you have to backtrack quite a bit to clear up some of the misconceptions you have.

Is it possible to stream infinite data over SPI using DMA on an STM32F3?

I'm developing an RF modem based on a new protocol which streams 96-byte frames; they are sent back-to-back until the communication ends. I plan on using two 96-byte buffers in the STM32; the next lines explain why.
I want to send the first 96-byte frame over USB-CDC to the STM32. The external modem chip will then generate a "9600 bps" clock, and the STM32 will have to write the payload bit by bit on a specified output pin (at the trailing edge of each clock pulse).
When the STM32 notices that it has sent half of a 96-byte frame, it sends the PC a notification to send more data, and the PC immediately refills the second 96-byte buffer over USB-CDC. When the STM32 finishes sending the first buffer, it immediately starts sending the second buffer's content. When it has sent half of the second buffer, it asks the PC for another 96-byte frame, as before.
And so on, until the PC sends a command to stop transmitting.
This transfer mode is serial, using a "trigger clock".
Is this possible using DMA, and how should I set it up?
I want to use DMA so I can keep using USB while already streaming data to the radio modem chip. Is this the right approach?
I'm working on an open-source radio communication project with both packet and stream capabilities and digital voice; I'm designing the electronics for a PC radio modem. The project is called M17 and is maintained by Wojtek SP5WWP.
Re. the general architecture: serial communication over USB ACM does not have to use a buffer of the same size, nor be synchronized with the downstream communication over SPI. You could use buffers as big as practically possible, so the PC can send data in advance. This will reduce the chance of buffer underflow if the PC does not provide data fast enough. Use a circular buffer and fill it when a packet arrives from USB; a sketch of such a buffer follows below.
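A minimal single-producer/single-consumer circular buffer sketch; the size and function names are illustrative, and the power-of-two size lets the indices wrap with a cheap mask:

#include <stddef.h>
#include <stdint.h>

#define RB_SIZE 1024u   /* power of two, so wrap-around is a mask */

typedef struct {
    uint8_t data[RB_SIZE];
    volatile uint32_t head;   /* written by the USB receive path */
    volatile uint32_t tail;   /* read by the SPI/DMA refill path */
} ring_buffer_t;

static size_t rb_count(const ring_buffer_t *rb)
{
    return (rb->head - rb->tail) & (RB_SIZE - 1u);
}

/* Called when a USB-CDC packet arrives; returns bytes actually stored. */
static size_t rb_write(ring_buffer_t *rb, const uint8_t *src, size_t len)
{
    size_t free_space = RB_SIZE - 1u - rb_count(rb);
    if (len > free_space)
        len = free_space;             /* full: drop excess or stall the PC */
    for (size_t i = 0; i < len; i++)
        rb->data[(rb->head + i) & (RB_SIZE - 1u)] = src[i];
    rb->head = (rb->head + len) & (RB_SIZE - 1u);
    return len;
}

/* Called from the DMA half/complete events to refill one buffer half. */
static size_t rb_read(ring_buffer_t *rb, uint8_t *dst, size_t len)
{
    size_t avail = rb_count(rb);
    if (len > avail)
        len = avail;                  /* underflow: caller can pad or stop */
    for (size_t i = 0; i < len; i++)
        dst[i] = rb->data[(rb->tail + i) & (RB_SIZE - 1u)];
    rb->tail = (rb->tail + len) & (RB_SIZE - 1u);
    return len;
}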
DMA is the right approach. Although people often say that DMA is only necessary for high-bandwidth operations, it may actually be easier to work with DMA than to handle an interrupt for every byte, even when you only handle 9600 bits per second.
The DMA controller in the STM32F3 has a Half-Transfer Complete bit (HTIF in DMA_ISR) that you can poll or have generate an interrupt. In conjunction with the Transfer Complete status (TCIF) and the circular mode bit (CIRC in DMA_CCR), you can organize a double-buffered data pipe so that transfers overlap with whatever else the MCU is doing. The application reloads the first half of the DMA buffer on the HTIF event; when the TCIF event happens, it reloads the second half. It has to be done quickly, before the other half is also consumed. However, you need a double-buffered pipeline only when you need to stream data continuously, i.e. when the overall amount is larger than the DMA buffer.
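A register-level sketch of that event handling, assuming DMA1 channel 3 (a common SPI1_TX mapping on STM32F3 parts; check the reference manual for yours), a 192-byte buffer split into two 96-byte halves, and a hypothetical refill_half() helper that pulls from the circular buffer above:

#include "stm32f3xx.h"  /* CMSIS device header */

extern uint8_t dma_buf[192];             /* two 96-byte halves */
extern void refill_half(uint8_t *half);  /* hypothetical refill helper */

void DMA1_Channel3_IRQHandler(void)
{
    if (DMA1->ISR & DMA_ISR_HTIF3) {     /* first half has been sent */
        DMA1->IFCR = DMA_IFCR_CHTIF3;    /* acknowledge the event */
        refill_half(&dma_buf[0]);
    }
    if (DMA1->ISR & DMA_ISR_TCIF3) {     /* second half sent; DMA wraps */
        DMA1->IFCR = DMA_IFCR_CTCIF3;
        refill_half(&dma_buf[96]);
    }
}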
Stopping a circular DMA may be tricky. I suppose both the STM32 and the external chip know how many bytes to send; in that case, after this amount has been transferred, disable the DMA channel.
It seems you need a slave SPI in the STM32, as the external chip generates the SPI clock.
DMA is not difficult to set up; however, it needs multiple things to work properly. I assume register-level programming; if you use some kind of framework, you'll need to find out how it implements these features. Enable the clocks for the SPI, the GPIO port with the SPI pins, and the DMA; configure the pins as AF (alternate function). Find the right DMA channel for the SPI peripheral. For SPI with DMA you usually need two channels, TX and RX, but with a slave SPI you may get away with one. Configure the SPI, paying attention to clock polarity and phase, and set it to generate a DMA request for each TX and/or RX. Set the DMA channel's CPAR register to point at the SPI DR register, and program all the other DMA channel registers appropriately. Enable the DMA channel(s). Enable the SPI in slave mode. When the SPI master clocks data on the MOSI/SCK pins, the DMA controller will put it in memory. When the buffer is half-full and completely full, the channel will set the HTIF and TCIF bits and generate an interrupt, if you told it to. Use these events to implement flow control.
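A condensed register-level sketch of that setup for SPI1 as a slave with TX DMA, pairing with the interrupt handler sketched earlier. GPIO/AF configuration is omitted, and the channel mapping and 192-byte circular buffer are assumptions, not from the original answer:

#include "stm32f3xx.h"

extern uint8_t dma_buf[192];

void spi1_slave_tx_dma_init(void)
{
    /* Clocks: SPI1 on APB2, DMA1 on AHB (GPIO clock/AF setup omitted). */
    RCC->APB2ENR |= RCC_APB2ENR_SPI1EN;
    RCC->AHBENR  |= RCC_AHBENR_DMA1EN;

    /* DMA1 channel 3 (SPI1_TX on many F3 parts): memory -> peripheral,
     * 8-bit transfers, memory increment, circular, half+full interrupts. */
    DMA1_Channel3->CPAR  = (uint32_t)&SPI1->DR;
    DMA1_Channel3->CMAR  = (uint32_t)dma_buf;
    DMA1_Channel3->CNDTR = sizeof dma_buf;
    DMA1_Channel3->CCR   = DMA_CCR_DIR | DMA_CCR_MINC | DMA_CCR_CIRC
                         | DMA_CCR_HTIE | DMA_CCR_TCIE | DMA_CCR_EN;
    NVIC_EnableIRQ(DMA1_Channel3_IRQn);

    /* SPI1 as slave: CPOL/CPHA must match the modem chip (mode 0 assumed
     * here); NSS is assumed driven by the master. Keep the reset 8-bit
     * data size in CR2 and just add the TX DMA request. */
    SPI1->CR2 |= SPI_CR2_TXDMAEN;
    SPI1->CR1  = 0;                /* MSTR = 0: slave mode */
    SPI1->CR1 |= SPI_CR1_SPE;      /* enable the peripheral last */
}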

How to programmatically select the BT device to push a file to?

I am designing an information kiosk and need a BT application which can automatically push a file to the nearest BT enabled device assuming that this would be the phone of the person currently standing in front of the kiosk.
Are there any other ways of doing this except by checking the RSSI (Received Signal Strength Indicator)?
Do all Bluetooth stacks support accessing this property?
How accurate is RSSI as the basis for deciding which device to push to? Can a phone that is further away from the kiosk emit a stronger signal than the phone of the person standing right in front of it?
Not all stacks support RSSI.
There's an alternative way: the device that answers Inquiries first should have a stronger signal.
Your guess is correct: it only depends on signal strength, not distance.
Also, the device with the stronger signal is not necessarily the one which answers first, since implementations of the protocol differ among devices. Thus you would have to test all target devices separately.
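Where the stack does expose RSSI, on Linux/BlueZ it can be read for an established connection. A minimal sketch (link with -lbluetooth); the ACL connection handle is an assumption here and would come from the connection you already opened for the OBEX push:

#include <stdio.h>
#include <stdint.h>
#include <bluetooth/bluetooth.h>
#include <bluetooth/hci.h>
#include <bluetooth/hci_lib.h>

int main(void)
{
    int dev_id = hci_get_route(NULL);      /* first available adapter */
    int dd = hci_open_dev(dev_id);
    if (dd < 0) { perror("hci_open_dev"); return 1; }

    uint16_t handle = 0x0001;              /* hypothetical ACL handle */
    int8_t rssi;
    if (hci_read_rssi(dd, handle, &rssi, 1000) < 0) {
        perror("hci_read_rssi");
        hci_close_dev(dd);
        return 1;
    }
    printf("RSSI: %d\n", (int)rssi);       /* relative to the golden range */
    hci_close_dev(dd);
    return 0;
}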
