UART SW and HW flow control, Linux

Currently, I'm testing flow control between two RS485 UART ports (just Rx connected to Rx and Tx to Tx; RTS/CTS is not connected).
Flag setting (between get and set attribute)
HW Flow control:
tty.c_cflag |= CRTSCTS; // RTS/CTS
tty.c_iflag &= ~(IXOFF|IXON|IXANY);
SW Flow control:
tty.c_cflag &= ~CRTSCTS;
tty.c_iflag |= (IXOFF|IXON|IXANY);
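For context, here is a minimal sketch of how I apply these flags (between tcgetattr() and tcsetattr(); error handling trimmed, and set_flow_control() is just my own helper):
#include <termios.h>

int set_flow_control(int fd, int use_rtscts)
{
    struct termios tty;

    if (tcgetattr(fd, &tty) != 0)
        return -1;

    if (use_rtscts) {                        /* hardware flow control */
        tty.c_cflag |= CRTSCTS;
        tty.c_iflag &= ~(IXOFF | IXON | IXANY);
    } else {                                 /* software (XON/XOFF) */
        tty.c_cflag &= ~CRTSCTS;
        tty.c_iflag |= (IXOFF | IXON | IXANY);
    }

    return tcsetattr(fd, TCSANOW, &tty);
}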
I assume that if I set both UART1 and UART2 to hardware flow control and the baud rate is high (e.g. 460800 bps), or if I write() into UART1 at a higher baud rate and read() from UART2 at a lower one, the FIFO (currently 64 bytes) will overflow and the kernel will send some notification.
But in practice, write() and read() always succeed. Could anyone suggest how to observe a buffer overflow?
Sorry if my question is a little dumb; I'm a new Linux learner.
Thanks so much.

The RS485 standard has no hardware flow control.
Since the API is shared with the RS232C standard, the calls can be made, but they will not work effectively.
Also, the 64-byte FIFO you mention is a hardware buffer (in the interface chip); the device driver also has a software buffer, which is often several kilobytes.
So it is no wonder that, even at high speed, transmission and reception of small amounts of data complete normally.
Detect problems such as overflow by checking the format of the received data, and by checking the balance and sequence of commands and responses.
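If your driver supports it, you can also read the kernel's per-port error counters; this is only a sketch of the idea, since not every serial driver implements the TIOCGICOUNT ioctl. The overrun field counts hardware FIFO overruns and buf_overrun counts tty buffer overruns:
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/serial.h>

int print_overruns(int fd)
{
    struct serial_icounter_struct ic;

    if (ioctl(fd, TIOCGICOUNT, &ic) < 0)
        return -1;                 /* driver does not support this ioctl */

    printf("hw overruns: %d, tty buffer overruns: %d\n",
           ic.overrun, ic.buf_overrun);
    return 0;
}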

Is it possible to stream infinite data over SPI using DMA on an STM32F3?

I'm developing an RF modem based on a new protocol which streams 96 bytes per frame - frames are sent back to back until the communication ends. I plan to use two 96-byte buffers in the STM32 - the next lines explain why.
I want to send the first 96-byte frame over USB-CDC to the STM32. The external modem chip will then generate a "9600 bps" clock, and the STM32 will have to write the payload bit by bit on a specified output pin (at the trailing edge of each clock pulse).
When the STM32 notices that it has sent half of a 96-byte frame, it sends a notification to the PC to provide more data, and the PC immediately refills the second 96-byte buffer over USB-CDC. When the STM32 finishes sending the first buffer, it immediately starts sending the contents of the second buffer. When it has sent half of the second buffer, it again asks the PC for another 96-byte frame, as before.
And so on, until the PC sends a command to stop transmitting.
So this transfer mode is serial, driven by an external "trigger clock".
Is this possible using DMA, and how could I set it up?
I want to use DMA so that I can keep using USB while already streaming data to the radio modem chip. Is this the right approach?
I'm working on a project building an open-source radio communication system with both packet and streaming capabilities and digital voice. I'm designing the electronics for the PC radio modem. The project is called M17 and is maintained by Wojtek SP5WWP.
Re. general architecture: serial communication over USB ACM does not have to use a buffer of the same size as, or be synchronized with, the downstream communication over SPI. You could use buffers as big as practically possible, so the PC can send data in advance. This reduces the chance of buffer underflow if the PC does not provide data fast enough. Use a circular buffer and fill it when a packet arrives from USB.
DMA is the right approach. Although people often say that DMA is only necessary for high-bandwidth operations, it may actually be easier to work with DMA than to handle an interrupt for every byte, even when you only handle 9600 bits per second.
The DMA controller in the STM32F3 has a Half-Transfer Complete bit (HTIF in DMA_ISR) that you can poll or have generate an interrupt. In conjunction with the Transfer Complete status (TCIF) and the Circular bit (CIRC in DMA_CCR), you can organize a double-buffered data pipe so that transfers overlap with whatever else the MCU is doing. The application reloads the first half of the DMA buffer on the HTIF event; when the TCIF event happens, it reloads the second half. This has to be done quickly, before the other half is also consumed. However, you only need a double-buffered pipeline when you have to stream data continuously, i.e. when the overall amount of data is larger than the DMA buffer.
Stopping a circular DMA may be tricky. I suppose both the STM32 and external chip know how many bytes to send. In that case, after this amount is received, disable the DMA.
It seems you need a slave SPI in STM32 as the external chip generates SPI clock.
DMA is not difficult to set up, but it needs multiple things to work properly. I assume register-level programming; if you use some kind of framework, you'll need to find out how it implements these features. Enable the clocks for the SPI, the GPIO port of the SPI pins, and the DMA, and configure the pins as AF. Find the right DMA channel for the SPI peripheral. For SPI with DMA you usually need two channels, TX and RX, but with a slave SPI you may get away with one. Configure the SPI, paying attention to clock polarity and phase, and set it to generate a DMA request for each TX and/or RX. Set the DMA CPAR channel register to point to the SPI DR register and program all the other DMA channel registers appropriately. Enable the DMA channel(s). Enable the SPI in slave mode. When the SPI master clocks data on the MOSI/SCK pins, the DMA controller will put it in memory. When the buffer is half full and completely full, the channel will set the HTIF and TCIF bits and generate an interrupt, if you told it to. Use these events to implement flow control.
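As a register-level sketch of the double-buffered, circular approach described above, here is the transmit direction (streaming frames out to the modem). It assumes the STM32F3 CMSIS device header, SPI1 in slave mode with TX requests on DMA1 Channel 3 (the usual mapping on STM32F3 parts, but check the DMA request table in the reference manual for your exact chip), and that clocks and GPIO AF muxing are already done:
#include "stm32f3xx.h"

#define FRAME_LEN 96
static uint8_t tx_buf[2 * FRAME_LEN];              /* two 96-byte halves */

void spi1_tx_dma_start(void)
{
    RCC->AHBENR |= RCC_AHBENR_DMA1EN;              /* clock the DMA unit */

    DMA1_Channel3->CPAR  = (uint32_t)&SPI1->DR;    /* peripheral address */
    DMA1_Channel3->CMAR  = (uint32_t)tx_buf;       /* memory address */
    DMA1_Channel3->CNDTR = sizeof tx_buf;          /* total transfer count */
    DMA1_Channel3->CCR   = DMA_CCR_DIR             /* memory -> peripheral */
                         | DMA_CCR_MINC            /* increment memory pointer */
                         | DMA_CCR_CIRC            /* wrap around: double buffer */
                         | DMA_CCR_HTIE            /* IRQ at half transfer */
                         | DMA_CCR_TCIE;           /* IRQ at full transfer */
    NVIC_EnableIRQ(DMA1_Channel3_IRQn);
    DMA1_Channel3->CCR |= DMA_CCR_EN;              /* start the channel */

    SPI1->CR2 |= SPI_CR2_TXDMAEN;                  /* SPI raises DMA requests */
    SPI1->CR1 |= SPI_CR1_SPE;                      /* enable SPI (slave: MSTR=0) */
}

void DMA1_Channel3_IRQHandler(void)
{
    if (DMA1->ISR & DMA_ISR_HTIF3) {               /* first half has been sent */
        DMA1->IFCR = DMA_IFCR_CHTIF3;
        /* refill tx_buf[0 .. FRAME_LEN-1], e.g. ask the PC for more data */
    }
    if (DMA1->ISR & DMA_ISR_TCIF3) {               /* second half has been sent */
        DMA1->IFCR = DMA_IFCR_CTCIF3;
        /* refill tx_buf[FRAME_LEN .. 2*FRAME_LEN-1] */
    }
}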

How to set Inter-Byte Delay Timeout to milliseconds?

I'm currently working with termios for serial communication in Linux.
I need to set an intercharacter timeout to 5ms.
I found a way to set an intercharacter timeout using VMIN and VTIME, where both VMIN > 0 and VTIME > 0.
The problem is that I need to set VTIME to 5 ms, but VTIME is expressed in tenths of a second.
VTIME's data type is unsigned char, so I can't just set it to 0.05.
Does anyone know if there is some way around this?
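For reference, this is roughly what I have now (just a sketch; the point is that 5 ms simply cannot be expressed in VTIME's 0.1 s units):
#include <termios.h>

int set_intercharacter_timeout(int fd)
{
    struct termios tty;

    if (tcgetattr(fd, &tty) != 0)
        return -1;

    cfmakeraw(&tty);                /* noncanonical mode */
    tty.c_cc[VMIN]  = 255;          /* read up to 255 bytes... */
    tty.c_cc[VTIME] = 1;            /* ...or stop after a 0.1 s inter-byte gap */

    return tcsetattr(fd, TCSANOW, &tty);
}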
I need to set an intercharacter timeout to 5ms.
...
Does anyone know if there is some way around this?
No, there is no way to set a shorter termios timeout than 100 ms.
Depending on your hardware and kernel configuration, this timeout may not be reliable at all, especially if you are trying to detect time-separated messages.
The termios handling is at least a full layer above the UART device driver (see Linux serial drivers).
Unless your kernel is configured to ensure that the bottom-half of the UART driver and the kworker threads for termios are high priority and low latency, then short intercharacter intervals cannot be accurately or reliably determined.
If the UART utilizes a FIFO to buffer incoming data, then that hardware obscures the intercharacter spacing that the software can detect.
Similarly when the UART driver is using DMA to store the received data, intercharacter timing will be obscured.
With DMA the CPU is not involved with handling the received data until the DMA operation is complete, and all temporal information about any intercharacter separation is gone.
(Crucial information such as framing error and/or parity error is difficult/impossible to pinpoint to a specific byte when using DMA.)
Even without DMA, termios will only be able to use timing based on the transfer of data through the tty flip buffers (which is a layer removed from the timing on the wire).
Some UARTs do have hardware that assist in detecting the end-of-message by idle line.
For example Atmel/Microchip ATSAMA5 and AT91SAM9 SoCs have USARTs with a Receiver Timeout feature that measures the idle time after each received frame.
When this idle line time exceeds a specified value, an interrupt can be generated.
The Linux driver for the Atmel USART typically uses the receiver-timeout interrupt to (prematurely) terminate the current DMA receive operation, and copy the contents of the DMA buffer to the tty flip buffer.
In summary, you cannot (or should not) rely solely on VMIN and VTIME settings to detect time-separated messages. See Parsing time-delimited UART data.
The message packets need to have delimiter/sentinel characters/bytes so that messages can be reliably parsed and validated.
See parsing complete messages from serial port for an example of efficient use of syscalls with a local buffer.
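As a rough illustration of that approach, here is a minimal sketch that assumes messages are terminated by a newline and that fd is an open, raw-mode serial descriptor; handle_message() is just a placeholder for your own parser/validator:
#include <string.h>
#include <unistd.h>

static void handle_message(const char *msg, size_t len)
{
    (void)msg; (void)len;            /* parse and validate one frame here */
}

void read_loop(int fd)
{
    static char buf[4096];
    static size_t used = 0;

    for (;;) {
        ssize_t n = read(fd, buf + used, sizeof buf - used);
        if (n <= 0)
            break;                   /* error or EOF */
        used += (size_t)n;

        char *start = buf;
        char *nl;
        while ((nl = memchr(start, '\n', used - (size_t)(start - buf))) != NULL) {
            handle_message(start, (size_t)(nl - start));
            start = nl + 1;          /* continue after the delimiter */
        }
        used -= (size_t)(start - buf);
        memmove(buf, start, used);   /* keep any partial message */
        if (used == sizeof buf)
            used = 0;                /* buffer full with no delimiter: discard */
    }
}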

using CTS line on a ch341 usb serial adapter for flow control

I am trying to communicate with a serial device via a CH341-based USB-serial adapter (wemos.cc / ch340). Since the target device has a limited buffer size, it uses a signal line to indicate whether it is safe to send bytes.
I have confirmed that the level of the CTS# pin is connected correctly: the state of the pin is visible with the "statserial" utility and is propagated to the "Clear To Send" flag (logically inverted, which matches the "datasheet" of the ch341).
I am enabling RTS/CTS flow control via the CRTSCTS flag in tcsetattr(), also the CLOCAL flag is set to avoid waiting for the DCD signal.
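Roughly what I do looks like the sketch below (error handling trimmed; the helper names are my own). I also check the CTS line directly via TIOCMGET, which matches what statserial shows:
#include <fcntl.h>
#include <sys/ioctl.h>
#include <termios.h>
#include <unistd.h>

int open_port(const char *dev)                  /* e.g. "/dev/ttyUSB0" */
{
    int fd = open(dev, O_RDWR | O_NOCTTY);
    struct termios tty;

    tcgetattr(fd, &tty);
    cfmakeraw(&tty);
    tty.c_cflag |= CRTSCTS | CLOCAL | CREAD;    /* HW flow control, ignore DCD */
    tcsetattr(fd, TCSANOW, &tty);
    return fd;
}

int cts_asserted(int fd)
{
    int status = 0;
    ioctl(fd, TIOCMGET, &status);
    return (status & TIOCM_CTS) != 0;
}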
However, this does not seem to have any effect on the behaviour of the ch341. When connecting the CTS# pin to either VCC or GND, the ch341 happily accepts all the data sent to the chip via the /dev/ttyUSB0 device. I'd expect it to block when the chip-internal buffers fill up, until CTS# goes low and allows it to send the data out on the TX pin...
Is there something I am missing? Or is CTS not implemented on the ch341?
Thanks,
Simon.

How to modify timing in readyRead of QextSerialPort [duplicate]

I'm implementing a protocol over serial ports on Linux. The protocol is based on a request-answer scheme, so the throughput is limited by the time it takes to send a packet to a device and get an answer. The devices are mostly ARM-based and run Linux >= 3.0. I'm having trouble reducing the round-trip time below 10 ms (115200 baud, 8 data bits, no parity, 7 bytes per message).
What IO interfaces will give me the lowest latency: select, poll, epoll, or polling by hand with ioctl? Does blocking or non-blocking IO impact latency?
I tried setting the low_latency flag with setserial, but it seemed to have no effect.
Are there any other things I can try to reduce latency? Since I control all devices, it would even be possible to patch the kernel, but it's preferred not to.
---- Edit ----
The serial controller used is a 16550A.
Request/answer schemes tend to be inefficient, and it shows quickly on a serial port. If you are interested in throughput, look at a windowed protocol, like the Kermit file-transfer protocol.
Now if you want to stick with your protocol and reduce latency, select, poll, read will all give you roughly the same latency, because as Andy Ross indicated, the real latency is in the hardware FIFO handling.
If you are lucky, you can tweak the driver behaviour without patching, but you still need to look at the driver code. However, having the ARM handle a 10 kHz interrupt rate will certainly not be good for the overall system performance...
Another option is to pad your packets so that you hit the FIFO threshold every time. That will also confirm whether or not it is a FIFO-threshold problem.
10 ms at 115200 baud is enough to transmit 100 bytes (assuming 8N1), so what you are seeing is probably because the low_latency flag is not set. Try
setserial /dev/<tty_name> low_latency
It will set the low_latency flag, which is used by the kernel when moving data up in the tty layer:
void tty_flip_buffer_push(struct tty_struct *tty)
{
    unsigned long flags;

    spin_lock_irqsave(&tty->buf.lock, flags);
    if (tty->buf.tail != NULL)
        tty->buf.tail->commit = tty->buf.tail->used;
    spin_unlock_irqrestore(&tty->buf.lock, flags);

    if (tty->low_latency)
        flush_to_ldisc(&tty->buf.work);
    else
        schedule_work(&tty->buf.work);
}
The schedule_work call might be responsible for the 10 msec latency you observe.
Having talked to some more engineers about the topic, I came to the conclusion that this problem is not solvable in user space. Since we need to cross the bridge into kernel land, we plan to implement a kernel module which talks our protocol and gives us latencies < 1 ms.
--- edit ---
Turns out I was completely wrong. All that was necessary was to increase the kernel tick rate. The default 100 Hz tick rate added the 10 ms delay. 1000 Hz and a negative nice value for the serial process give me the timing behaviour I wanted to reach.
Serial ports on Linux are "wrapped" into Unix-style terminal constructs, which hits you with one tick of lag, i.e. 10 ms. Try whether stty -F /dev/ttySx raw low_latency helps; no guarantees though.
On a PC, you can go hardcore and talk to standard serial ports directly: issue setserial /dev/ttySx uart none to unbind the Linux driver from the serial port hardware and control the port via inb/outb to the port registers. I've tried that, and it works great.
The downside is that you don't get interrupts when data arrives and you have to poll the register. Often.
You should be able to do the same on the ARM device side; it may be much harder on exotic serial port hardware.
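As a rough sketch of that hardcore approach on a PC: assuming a standard 16550 at the legacy ttyS0 base address 0x3F8, that the port has been detached with setserial /dev/ttyS0 uart none, and that ioperm() has been granted (root), polling looks roughly like this (x86 only; sys/io.h is not portable):
#include <stdint.h>
#include <sys/io.h>                 /* ioperm(), inb(), outb(); x86/glibc only */

#define UART_BASE 0x3F8
#define UART_THR  (UART_BASE + 0)   /* transmit holding register */
#define UART_RBR  (UART_BASE + 0)   /* receive buffer register */
#define UART_LSR  (UART_BASE + 5)   /* line status register */
#define LSR_DR    0x01              /* data ready */
#define LSR_THRE  0x20              /* transmitter holding register empty */

/* Call ioperm(UART_BASE, 8, 1) once, as root, before using these. */

static void poll_putc(uint8_t c)
{
    while (!(inb(UART_LSR) & LSR_THRE))
        ;                           /* spin until the transmitter is empty */
    outb(c, UART_THR);
}

static int poll_getc(void)
{
    if (inb(UART_LSR) & LSR_DR)
        return inb(UART_RBR);       /* a byte is waiting */
    return -1;                      /* nothing received yet */
}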
Here's what setserial does to set low latency on a file descriptor of a port:
#include <linux/serial.h>           /* struct serial_struct, ASYNC_LOW_LATENCY */
struct serial_struct serial;

ioctl(fd, TIOCGSERIAL, &serial);
serial.flags |= ASYNC_LOW_LATENCY;
ioctl(fd, TIOCSSERIAL, &serial);
In short: Use a USB adapter and ASYNC_LOW_LATENCY.
I've used an FT232RL-based USB adapter on Modbus at 115.2 kbps.
I get about 5 transactions (to 4 devices) in about 20 ms total with ASYNC_LOW_LATENCY. This includes two transactions to a slow-poke device (4 ms response time).
Without ASYNC_LOW_LATENCY the total time is about 60 ms.
With FTDI USB adapters, ASYNC_LOW_LATENCY sets the inter-character timer on the chip itself to 1 ms (instead of the default 16 ms).
I'm currently using a home-brewed USB adapter and I can set the latency for the adapter itself to whatever value I want. Setting it to 200 µs shaves another ms off that 20 ms.
None of those system calls have an effect on latency. If you want to read and write one byte as fast as possible from userspace, you really aren't going to do better than a simple read()/write() pair. Try replacing the serial stream with a socket from another userspace process and see if the latencies improve. If they don't, then your problems are CPU speed and hardware limitations.
Are you sure your hardware can do this at all? It's not uncommon to find UARTs with a buffer design that introduces many bytes worth of latency.
At those line speeds you should not be seeing latencies that large, regardless of how you check for readiness.
You need to make sure the serial port is in raw mode (so you do "noncanonical reads") and that VMIN and VTIME are set correctly. You want to make sure that VTIME is zero so that an inter-character timer never kicks in. I would probably start with setting VMIN to 1 and tune from there.
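A minimal sketch of those settings, assuming fd is an already opened serial port descriptor:
#include <termios.h>

int make_raw_min1(int fd)
{
    struct termios tty;

    if (tcgetattr(fd, &tty) != 0)
        return -1;

    cfmakeraw(&tty);                /* noncanonical ("raw") reads */
    tty.c_cc[VMIN]  = 1;            /* return as soon as one byte arrives */
    tty.c_cc[VTIME] = 0;            /* never start an inter-character timer */

    return tcsetattr(fd, TCSANOW, &tty);
}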
The syscall overhead is nothing compared to the time on the wire, so select() vs. poll(), etc. is unlikely to make a difference.

UART initialisation: Prevent UART from pulling RTS high

I'm writing a RS485 driver for an ARM AT91SAM9260 board on Linux.
When I initialise the UART, the RTS signal line goes high (1). I guess this would and should be the standard behaviour in RS232 operation mode. In RS485 mode, however, this is not wanted.
I'm using the standard functions provided by the arm-arch section to initialise the UART. Therefore the significant steps are:
at91_register_uart(AT91SAM9260_ID_US2, 3, ATMEL_UART_CTS | ATMEL_UART_RTS);
// consisting of:
// >> configure/mux the pins
at91_set_A_periph(AT91_PIN_PB10, 1);        /* TXD */
at91_set_A_periph(AT91_PIN_PB11, 0);        /* RXD */
if (pins & ATMEL_UART_RTS)
    at91_set_B_periph(AT91_PIN_PC8, 0);     /* RTS */
if (pins & ATMEL_UART_CTS)
    at91_set_B_periph(AT91_PIN_PC10, 0);    /* CTS */
// >> associate the clock
at91_clock_associate("usart3_clk", &pdev->dev, "usart");
// >> et voilà
As you can see with
at91_set_B_periph(AT91_PIN_PC8, 0);
the pull-up on the RTS pin isn't activated.
Why does the UART set RTS high? Just because this would be the standard behaviour in RS232 mode? Wouldn't it be a better standard for the UART to keep silent until the operation mode is explicitly set?
A high RTS signal after initialisation seems to be the standard behaviour on many platforms. It mainly depends on which serial operation mode the start-up routines anticipate for the interface.
To prevent RTS from going high on an ATMEL AT91SAM9260 board running Linux, you have to put the UART into the right mode BEFORE muxing the pins (at91_set_X_periph()) and registering the device.
Since Linux kernel version 2.6.35, the ATMEL serial driver supports RS485 mode. In this driver the UART is properly configured before the pins (GPIO) are set to their role.
For my embedded device, which runs an older Linux version, I solved the problem with the following lines of code:
/* write control flags */
control |= ATMEL_US_RTSEN;
mode    |= ATMEL_US_USMODE_RS485;

UART_PUT(uartbaseaddr, ATMEL_US_CR, control);
UART_PUT(uartbaseaddr, ATMEL_US_MR, mode);
Now the pins can be muxed to their role:
at91_set_X_periph(RTS_PIN, 0);
