How does UART communication work between two devices? - multithreading

In one of my projects, I have a nano-computer (embedded Linux) which is connected to a microcontroller over a UART connection.
Both do some processing on their own, but sometimes the nano-computer needs to send data over the UART and vice versa.
I suppose that if A wants to communicate with B, B needs to be listening, right? How do I know when to listen and when to talk?
Do I need a special thread running in parallel on both of my devices, responsible only for UART communication, while they do other stuff?
If I miss a message, is there a buffer that fills up which I can read from when I am ready?
Thanks for your advice. :)

'A' and 'B' are listening all the time. You have to enable the UART receive interrupt.
Maybe this link will explain the basics: UART basics
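To make the "enable the receive interrupt" advice concrete, here is a minimal sketch assuming an AVR-class microcontroller (ATmega328P-style register and vector names from avr-libc); adapt it to your actual part:

    /* Sketch: let the hardware interrupt collect bytes while main() does other work. */
    #include <avr/io.h>
    #include <avr/interrupt.h>

    #define BUF_SIZE 64
    static volatile uint8_t rx_buf[BUF_SIZE];
    static volatile uint8_t rx_head;

    ISR(USART_RX_vect)                    /* fires whenever a byte arrives */
    {
        rx_buf[rx_head] = UDR0;           /* grab the byte before the next one lands */
        rx_head = (rx_head + 1) % BUF_SIZE;
    }

    static void uart_init(void)
    {
        UBRR0  = 103;                                          /* 9600 baud @ 16 MHz */
        UCSR0B = (1 << RXEN0) | (1 << TXEN0) | (1 << RXCIE0);  /* RX, TX, RX-complete IRQ */
        UCSR0C = (1 << UCSZ01) | (1 << UCSZ00);                /* 8N1 */
        sei();                                                 /* global interrupt enable */
    }

    int main(void)
    {
        uart_init();
        for (;;) {
            /* main loop does its own work; the ISR keeps filling rx_buf */
        }
    }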

Connected and initialized correctly, the hardware has a TX and an RX on both sides, with each TX wired to the other side's RX. So both sides are listening all the time from a hardware perspective. The operating system likely has a driver and a buffer that accumulate the incoming data all the time. BUT if no software ever asks for that data, you won't see it. You do need some software monitoring the UART (usually through the driver and operating system) so that you can see what the other side sends at any given time. You do this on both ends of the connection if that is what is required.
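On the embedded Linux side, a common pattern is exactly what the question guesses at: a dedicated thread that blocks on read() while the rest of the program does other work. A minimal sketch, assuming /dev/ttyS1 at 115200 baud (use your real port and settings):

    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    /* Listener thread: blocks in read() until the microcontroller sends something. */
    static void *uart_listener(void *arg)
    {
        int fd = *(int *)arg;
        unsigned char buf[256];

        for (;;) {
            ssize_t n = read(fd, buf, sizeof buf);   /* sleeps until data arrives */
            if (n > 0)
                printf("got %zd bytes from the microcontroller\n", n);
        }
        return NULL;
    }

    int main(void)
    {
        int fd = open("/dev/ttyS1", O_RDWR | O_NOCTTY);
        if (fd < 0)
            return 1;

        struct termios tio;
        tcgetattr(fd, &tio);
        cfmakeraw(&tio);                 /* raw 8-bit data: no echo, no line editing */
        cfsetispeed(&tio, B115200);
        cfsetospeed(&tio, B115200);
        tcsetattr(fd, TCSANOW, &tio);

        pthread_t tid;
        pthread_create(&tid, NULL, uart_listener, &fd);

        /* The main thread is free to do its own processing here and can call
         * write(fd, ...) whenever it has something to send. */
        pause();
        return 0;
    }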

There are two approaches that are used.
In the past, it was common to use hardware flow control. This uses an additional wire in each direction (typically RTS/CTS). The sender waits until the wire indicates that the receiver is ready; when the receiver cannot accept data, it signals the other side over this wire. Hardware would buffer at least one byte and, if that buffer was full, signal the other side not to send.
This is less common today. UARTs are so slow relative to modern hardware, and large buffers are so cheap and easy to provide, that overflow is rarely an issue. The sender just fills the receiver's hardware buffer and the receiver empties that buffer periodically. Software would have to ignore the buffer for a long time for it to overflow.
An intermediate solution is software flow control, carried in the data stream itself. Generally two characters are reserved, one to stop the flow and one to resume it (traditionally XON and XOFF). The receiver sends the stop character when its buffer is getting close to full and the resume character once it has drained. This is really only useful if the data flow doesn't need to carry binary data; it is rare today and was traditionally used mainly for connections with a human on one end, who could also pause the flow if the information was coming faster than they could read it.
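Both kinds of flow control can be switched on from user space on Linux with termios; a sketch (CRTSCTS may need _DEFAULT_SOURCE with glibc):

    #include <termios.h>

    /* Enable either hardware (RTS/CTS) or software (XON/XOFF) flow control
     * on an already-open serial port file descriptor. */
    static void enable_flow_control(int fd, int use_hardware)
    {
        struct termios tio;

        tcgetattr(fd, &tio);
        if (use_hardware) {
            tio.c_cflag |= CRTSCTS;            /* the extra RTS/CTS wires do the pausing */
            tio.c_iflag &= ~(IXON | IXOFF);
        } else {
            tio.c_cflag &= ~CRTSCTS;
            tio.c_iflag |= IXON | IXOFF;       /* reserve XON/XOFF in the data stream */
        }
        tcsetattr(fd, TCSANOW, &tio);
    }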
Generally, the protocols used are tolerant of overflow and include some form of high-level acknowledgement and/or retransmission as appropriate. One device might wait for the other side to send some kind of response to its command and, if it doesn't get one, retry the command. The protocol is designed not to do anything terrible if a command is received twice since it might be the reply that's lost.
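A bare-bones sketch of that acknowledge-and-retry idea; the 100 ms timeout and three attempts are arbitrary illustration values, and "any byte received" stands in for a real reply parser:

    #include <stddef.h>
    #include <sys/select.h>
    #include <unistd.h>

    /* Send a command and wait briefly for a reply; resend if nothing comes back. */
    static int send_with_retry(int fd, const unsigned char *cmd, size_t len)
    {
        for (int attempt = 0; attempt < 3; attempt++) {
            write(fd, cmd, len);

            fd_set rfds;
            struct timeval tv = { .tv_sec = 0, .tv_usec = 100 * 1000 };
            FD_ZERO(&rfds);
            FD_SET(fd, &rfds);

            if (select(fd + 1, &rfds, NULL, NULL, &tv) > 0)
                return 0;          /* something came back: treat as acknowledged */
            /* nothing arrived: the command (or its reply) was lost, try again */
        }
        return -1;                 /* give up after three attempts */
    }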

Related

Linux FTDI USB-to-Serial overflow errors FTDI_RS_OE on 1Mbaud

I am trying to use a genuine FTDI USB-to-serial adapter in Linux at 1 Mbaud (instead of the usual 115200). This all works fine, but sometimes I seem to lose data. When checking the counters with the TIOCGICOUNT ioctl call, I can see that I get overrun errors (not buf_overrun).
Looking at the source code of the FTDI Linux driver (https://github.com/torvalds/linux/blob/master/drivers/usb/serial/ftdi_sio.c), this counter is incremented when the chip sends a USB packet containing the FTDI_RS_OE error flag.
(For reference, when I use the exact same userspace application with a totally different serial device (the imx6 mxc), I do not get these errors. It's really an FTDI-driver-specific thing.)
I find very little about this, and strangely enough, the Windows driver does not seem to suffer from this problem. If anybody has gotten these FTDI chips working at high speeds in Linux, feel free to help me out!
Kind regards,
Arnout
I think I figured it out. The bottom line is that this was a genuine RX overrun caused by my code not reading the UART buffers fast enough. When I make sure to read the serial port fast enough, I can sustain the constant 1 Mbaud data rate. Why I was totally misled (and why the weird and unexpected FTDI_RS_OE is sent) I will explain below.
A few notes on the protocol I was using: I was sending a "request" packet on the serial line and expecting a large reply (and doing this in a loop). "My bug" was that I expected the remote device to reply very quickly, and to stop processing if it didn't. This timeout was too short, but the actual reply did still come in. "Some" buffer then overflowed, causing the RX overruns.
But this was not so clear. A few subtleties:
The RX overrun counter was only incremented upon the NEXT UART read syscall (e.g. minutes later, if my code had gone to some idle state), NOT immediately after the actual issue happened, which is very confusing.
I was under the assumption that, just like the imx6 driver, the Linux kernel would always simply service the USB device if incoming data was available, and that the data would be pushed into a 640 kB buffer (defined in https://elixir.bootlin.com/linux/v4.9.192/source/drivers/tty/tty_buffer.c). In the imx6 driver it can then clearly be seen what happens if that buffer overflows.
But that turns out not to be the case. Instead, my best guess at what happens here (I haven't profiled/debugged the kernel to verify it) is that serial "throttling" kicks in. When that 640 kB buffer is getting full, Linux issues a "throttle" callback to the FTDI driver, which simply uses the generic usb_serial_generic_throttle; that sets a flag and discards incoming URB data in https://elixir.bootlin.com/linux/latest/ident/usb_serial_generic_read_bulk_callback. This would explain why no overruns are "detected" when the incident actually occurs, but one suddenly shows up when (e.g. after a minute of inactivity) I restart a read operation. The FTDI chip's internal buffer must be overflowing because of this mechanism, causing the FTDI_RS_OE flag to be set, which is then only parsed correctly once throttling is disabled again.
So, in conclusion: the main issue was on my side, but the FTDI driver does not correctly implement its overrun counters (they only show up 'late', or even never, depending on the use case), most likely because of the Linux throttling feature.
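If it helps anyone chasing the same problem, this is roughly how those counters can be read from user space; a sketch, with /dev/ttyUSB0 assumed as the FTDI adapter's device node:

    #include <fcntl.h>
    #include <linux/serial.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
        struct serial_icounter_struct ic;

        /* TIOCGICOUNT fills the struct with the driver's error/event counters. */
        if (fd < 0 || ioctl(fd, TIOCGICOUNT, &ic) < 0) {
            perror("TIOCGICOUNT");
            return 1;
        }
        printf("overrun=%d buf_overrun=%d frame=%d parity=%d\n",
               ic.overrun, ic.buf_overrun, ic.frame, ic.parity);
        close(fd);
        return 0;
    }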

SPI: Linux driver model

I'm new to SPI; the Linux kernel provides an API for declaring SPI buses and devices, and for managing them according to the standard Linux driver model.
You can find the description of the struct spi_master here: https://www.kernel.org/doc/htmldocs/device-drivers/API-struct-spi-master.html
The description at the link above says that "Each device may be configured to use a different clock rate, since those shared signals are ignored unless the chip is selected". To put the sentence in context: by "device" they mean an SPI slave device, and by "those shared signals" they mean the MOSI, MISO and SCK signals.
In fact, in the struct spi_device (https://www.kernel.org/doc/htmldocs/device-drivers/API-struct-spi-device.html) there is an attribute named max_speed_hz that is not present in the struct spi_master. So I can understand the first part of the statement above: "Each device may be configured to use a different clock rate".
But what does the second part mean? Does "since those shared signals are ignored unless the chip is selected" mean that I'm allowed to use different clock rates, but only one at a time, by enabling/disabling the slaves that have different rates?
Thank you for your help! Regards,
--
Matteo
@Matteo M.: I think you are actually not allowed to simultaneously set SS1, SS2 and SS3 to zero and in this way enable all three SPI slaves at the same moment in time. The reason is that the SPI slaves, while receiving data on the MOSI line, simultaneously send back data on the MISO line. If all three slaves actually put data on the (shared) MISO line, really bad things could happen, both regarding the data and electrically.
SPI is a very loose "standard"; there are not many rules to follow, which is good (and bad, I guess). It is good because it is flexible. It is bad because it can be implemented differently depending on the specific hardware you are dealing with. Some devices support only half-duplex communication, which as you know requires coordination of when the bus can be driven. Select lines (chip enable, slave select, whatever you want to call them) provide a handy way to do this without using bits to identify which slave should get a message off the bus.
In full-duplex mode, where data is clocked onto the bus by the master and by the slave on each clock pulse, the select line could very much be required to prevent bad things, as Wolfgang stated. I want to stress could be required; it is completely reasonable to have, say, a master processor communicating with other processors that only drive the bus when responding to some specific bit pattern (for instance, an "address")... More software/firmware? Yeah, but nothing is stopping you.
So if your 8-bit slaves are, say, 8-bit DACs, you could indeed write the values out in chunks from the master's data register. Independent select lines let you do this without all those slaves driving the bus at once. Yes, you have to shift the values from each slave into the master register one at a time, but that too is a completely reasonable design.
Unlike some more complex serial protocols, SPI is actually really flexible, because it doesn't lock you into a maximum word size or require the data you write to the bus to be made up of things like addresses and offsets.
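To make the per-device clock rate concrete, here is a sketch in old board-file style (on modern kernels the same information usually comes from the device tree); the slave names, bus number and chip selects are made up for illustration:

    #include <linux/spi/spi.h>

    /* Two slaves on the same master, each with its own max_speed_hz.  The
     * controller reprograms SCK whenever it asserts that slave's chip select. */
    static struct spi_board_info board_info[] = {
        {
            .modalias     = "fast-adc",          /* hypothetical slave on CS0 */
            .max_speed_hz = 10 * 1000 * 1000,    /* 10 MHz */
            .bus_num      = 1,
            .chip_select  = 0,
            .mode         = SPI_MODE_0,
        },
        {
            .modalias     = "slow-eeprom",       /* hypothetical slave on CS1 */
            .max_speed_hz = 1 * 1000 * 1000,     /* 1 MHz */
            .bus_num      = 1,
            .chip_select  = 1,
            .mode         = SPI_MODE_3,
        },
    };
    /* registered early during board setup with
     * spi_register_board_info(board_info, ARRAY_SIZE(board_info)); */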

Capturing network packets with accurate timestamps

I'm capturing network packets (a transport stream) along with their arrival times using the WinPcap library, but I'm facing some issues. Whenever I play audio on my machine or copy a large file from the network, the timing information of my captured packets gets distorted: some packets' timestamps are very close to each other while others are quite far apart. Is there any solution (software/hardware) to rectify this? I need accurate timestamping of network packets.
You could raise the process priority of the capture application to High using the Task Manager.
But you really need to consider what you are trying to achieve and why. Do you want to know when the packet arrives at the NIC, when it is processed by the kernel, when the kernel places it in the capture program's socket buffer, when the capture program reads it out of its buffer, when the kernel places it in some other program's socket buffer, or when some other program reads it from its socket buffer?
All those timestamps are different, and when the system is under load the differences will necessarily become larger. Timing information from the capture program will most likely reflect the time when the capture program read the packet out of its own socket buffer. Increasing the capture application's process priority will make that happen more smoothly, but it will make the handling of packets by any other applications less reliable.
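If you do go the priority route, it can be done from the capture program itself instead of through Task Manager; a minimal Win32 sketch (the actual WinPcap capture loop is left out):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Ask the scheduler to favour this process over normal-priority ones. */
        if (!SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS)) {
            fprintf(stderr, "SetPriorityClass failed: %lu\n", GetLastError());
            return 1;
        }
        /* ... open the adapter and run the capture loop here ... */
        return 0;
    }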

Linux serial port priority

At present we are using Fedora Core 3 for a system we are working on. This system needs to communicate via serial, and the timing of communications is critical. It seems that the serial driver has delays in pushing the data from the driver's 4 kB FIFO into the 16-byte hardware UART FIFO.
Is there any way to force Linux to treat this action with a higher priority?
Try using setserial to set the low_latency option.
By default, serial ports are optimised for throughput rather than latency; I think this option lets you change that.
If you have a hard real-time processing requirement, you may be better off using a distribution that's built with that in mind, for example RTLinux.
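For reference, this is roughly what setserial's low_latency option does underneath: it sets the ASYNC_LOW_LATENCY flag via the TIOCGSERIAL/TIOCSSERIAL ioctls. A sketch only; /dev/ttyS0 is an assumption and some drivers/kernels ignore the flag:

    #include <fcntl.h>
    #include <linux/serial.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
        struct serial_struct ss;

        if (fd < 0 || ioctl(fd, TIOCGSERIAL, &ss) < 0)
            return 1;
        ss.flags |= ASYNC_LOW_LATENCY;          /* ask the driver to push data up ASAP */
        if (ioctl(fd, TIOCSSERIAL, &ss) < 0)
            return 1;
        close(fd);
        return 0;
    }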
Consider getting the device vendor to change the protocol to something less stupid where timing doesn't matter.
Having a timing-critical serial protocol, or indeed one which requires you to acknowledge one message before sending the next, is really stupid.
RS-232-style serial ports are really slow, and anything which makes them worse is a bad idea.
I wrote a program to control a device which had a stupid protocol - each byte of data was individually acknowledged (duuh!) and the next one wasn't sent until the ack arrived - which meant that the data transfer rate was a tiny fraction of what it should have been.
Look at the zmodem protocol, for example, which is less stupid.
Better still, get the vendor to enter the 1990s and use USB.

Is forcing I2C communication safe?

For a project I'm working on I have to talk to a multi-function chip via I2C. I can do this from Linux user space via the /dev/i2c-1 interface.
However, it seems that a driver is talking to the same chip at the same time. This causes my I2C_SLAVE accesses to fail with an errno value of EBUSY. Well, I can override this via the I2C_SLAVE_FORCE ioctl. I tried it, and it works: my commands reach the chip.
Question: is it safe to do this? I know for sure that the address ranges I write to are never accessed by any kernel driver. However, I am not sure whether forcing I2C communication this way may confuse some internal state machine or the like. (I'm not that into I2C, I just use it...)
For reference, the hardware facts:
OS: Linux
Architecture: TI OMAP3 3530
I2C-Chip: TWL4030 (does power, audio, USB and lots of other things...)
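A minimal sketch of the forced access being described (the /dev/i2c-1 path comes from the question; the 0x48 slave address and register 0x00 are placeholders, not the TWL4030's real values):

    #include <fcntl.h>
    #include <linux/i2c-dev.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/i2c-1", O_RDWR);
        if (fd < 0)
            return 1;

        /* I2C_SLAVE would fail with EBUSY because a kernel driver already owns
         * this address; I2C_SLAVE_FORCE bypasses that check. */
        if (ioctl(fd, I2C_SLAVE_FORCE, 0x48) < 0)
            return 1;

        unsigned char reg = 0x00;                /* placeholder register number */
        unsigned char val;
        write(fd, &reg, 1);                      /* set the register pointer */
        read(fd, &val, 1);                       /* read it back */

        close(fd);
        return 0;
    }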
I don't know that particular chip, but often you have commands that require a sequence of writes: first to one address to set a certain mode, then you read or write another address, where the function of the second address changes based on what you wrote to the first one. So if the driver is in the middle of one of those operations and you interrupt it (or vice versa), you have a race condition that will be difficult to debug. For a reliable solution, you had better communicate through the chip's driver...
I mostly agree with @Wim, but I would like to add that this can definitely cause irreversible problems, or even destruction, depending on the device.
I know of a gyroscope (the L3GD20) that requires that you don't write to certain locations. The way the chip is set up, these locations contain the manufacturer's settings which determine how the device functions and performs.
This may seem like an easy problem to avoid, but if you think about how I2C works, all of the bytes are passed one bit at a time. If you interrupt another transfer in the middle of a byte, the results are not only truly unpredictable, they also greatly increase the risk of permanent damage. How the problem is handled is, of course, entirely up to the chip.
Since microcontrollers tend to operate at speeds much faster than the bus speeds allowed on I2C, and since the bus speeds themselves vary with how quickly devices process the information, the best bet is to insert pauses or loops between transmissions that wait for things to finish. If you have to, you can even add a timeout, as in the sketch below. If these pauses aren't working, then something is wrong with the implementation.
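A tiny sketch of that wait-with-a-timeout idea on a bare-metal master; i2c_busy() and millis() are hypothetical stand-ins for whatever your hardware or HAL actually provides:

    #include <stdbool.h>
    #include <stdint.h>

    extern bool i2c_busy(void);      /* hypothetical: previous transfer still running? */
    extern uint32_t millis(void);    /* hypothetical: free-running millisecond counter */

    /* Poll until the bus is idle, but give up after timeout_ms instead of hanging. */
    static bool wait_for_bus(uint32_t timeout_ms)
    {
        uint32_t start = millis();
        while (i2c_busy()) {
            if ((uint32_t)(millis() - start) >= timeout_ms)
                return false;        /* something is stuck: report it rather than wait forever */
        }
        return true;
    }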

Resources