From this I copied the example for serial port configuration:
tcgetattr(serialfd, &tty);                     /* get current attributes */
cfsetospeed(&tty, B115200);                    /* output baud rate 115200 */
cfsetispeed(&tty, B115200);                    /* input baud rate 115200 */
tty.c_cflag = (tty.c_cflag & ~CSIZE) | CS8;    /* 8-bit characters */
tty.c_iflag &= ~IGNBRK;                        /* don't ignore break */
tty.c_lflag = 0;                               /* raw input: no canonical mode, echo or signals */
tty.c_oflag = 0;                               /* raw output: no remapping or delays */
tty.c_cc[VMIN] = 0;                            /* read does not wait for a minimum byte count */
tty.c_cc[VTIME] = 5;                           /* 0.5 s read timeout */
tty.c_iflag &= ~(IXON | IXOFF | IXANY);        /* no software flow control */
tty.c_cflag |= (CLOCAL | CREAD);               /* ignore modem control lines, enable receiver */
tty.c_cflag &= ~(PARENB | PARODD);             /* no parity */
tty.c_cflag |= 0;                              /* no-op (parity already cleared above) */
tty.c_cflag &= ~CSTOPB;                        /* one stop bit */
tty.c_cflag &= ~CRTSCTS;                       /* no hardware flow control */
My actual code is like this:
char buf[100];
write(serialfd, "PING", strlen("PING"));
fsync(serialfd);
while (1)
{
read(serialfd, buf, sizeof(buf));
printf("length: %d\n", strlen(buf));
}
In this case it prints length: 6 forever without stopping. When I change tty.c_cc[VMIN] = 1 and tty.c_cc[VTIME] = 0 it doesn't read anything (it blocks in read()).
I'm using Debian 6.0.5 with a USB-to-serial converter. I open the serial port like this:
serialfd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY | O_SYNC);
Look at your code
while (1)
{
read(serialfd, buf, sizeof(buf));
printf("length: %d\n", strlen(buf));
}
You write a packet prior to this loop, then on the first iteration you read the available data into your buffer. You need to either memset your buffer to zeros each time, or zero-terminate it using the byte count returned by read. You then loop forever, reading again each time, but subsequent reads copy no more data because there is none left to read. The buffer is therefore left unchanged by those calls, and the printed output stays the same on every iteration because the buffer stays the same on every iteration.
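For example (a minimal sketch, not your exact code: it assumes the port is already opened and configured as above, and only adds the zero-termination and some error handling around read):

ssize_t n;
while (1)
{
    n = read(serialfd, buf, sizeof(buf) - 1);   /* leave room for the terminator */
    if (n < 0) {
        perror("read");                         /* real error */
        break;
    }
    if (n == 0)
        continue;                               /* timeout, nothing received this time */
    buf[n] = '\0';                              /* terminate using the byte count */
    printf("read %zd bytes: %s\n", n, buf);
}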
As for the blocking aspect, you should read the following guide (which has been recommended on SO before and is very good as an introduction to serial port programming)
http://www.easysw.com/~mike/serial/serial.html
This section describes the behaviour you get when setting VMIN and VTIME to various values. In particular the last paragraph explains the blocking behaviour you see.
VMIN specifies the minimum number of characters to read. If it is set
to 0, then the VTIME value specifies the time to wait for every
character read. Note that this does not mean that a read call for N
bytes will wait for N characters to come in. Rather, the timeout will
apply to the first character and the read call will return the number
of characters immediately available (up to the number you request).
If VMIN is non-zero, VTIME specifies the time to wait for the first
character read. If a character is read within the time given, any read
will block (wait) until all VMIN characters are read. That is, once
the first character is read, the serial interface driver expects to
receive an entire packet of characters (VMIN bytes total). If no
character is read within the time allowed, then the call to read
returns 0. This method allows you to tell the serial driver you need
exactly N bytes and any read call will return 0 or N bytes. However,
the timeout only applies to the first character read, so if for some
reason the driver misses one character inside the N byte packet then
the read call could block forever waiting for additional input
characters.
VTIME specifies the amount of time to wait for incoming characters in
tenths of seconds. If VTIME is set to 0 (the default), reads will
block (wait) indefinitely unless the NDELAY option is set on the port
with open or fcntl.
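In other words, your two settings select two different modes. A minimal sketch of the difference, assuming the rest of your configuration stays as it is (remember the values only take effect once tcsetattr is called):

/* Alternative A: timed read -- read() returns whatever is available,
 * or 0 if nothing arrives within 0.5 s (this is your original setting). */
tty.c_cc[VMIN]  = 0;
tty.c_cc[VTIME] = 5;                  /* VTIME is in tenths of a second */

/* Alternative B: blocking read -- read() waits until at least one byte
 * has arrived (this is the blocking behaviour you saw with VMIN = 1). */
/*
tty.c_cc[VMIN]  = 1;
tty.c_cc[VTIME] = 0;
*/

tcsetattr(serialfd, TCSANOW, &tty);   /* apply the chosen settings */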
I have a situation where one process is writing 512 bytes of data to a pipe every 100 ms, and another process is continuously reading from the same pipe. It takes three read operations to read the complete 512 bytes of data from the pipe. The first read returns only a portion of the data (e.g. 100 bytes), the next read returns zero bytes without any error being set, and the last read returns the remaining data left in the pipe (e.g. 412 bytes).
I assume that the write operation is atomic, since 512 bytes is less than PIPE_BUF. If so, is it possible for a read operation to return partial data? In particular, it took three reads to get the 512 bytes, and the second read returned zero bytes.
// Create the named pipe
if (mkfifo("/tmp/my_pipe", 0666) == -1) {
perror("mkfifo");
return 1;
}
// Open the named pipe for reading and writing with non-
// blocking behavior
if ((fd = open("/tmp/my_pipe", O_RDWR | O_NONBLOCK)) == -1) {
perror("open");
return 1;
}
// Open the named pipe for reading with non-blocking behavior
if ((fd = open("/tmp/my_pipe", O_RDONLY | O_NONBLOCK)) == -1) {
perror("open");
return 1;
}
The man page for pipe (i.e., man 7 pipe) says
The communication channel provided by a pipe is a byte stream: there
is no concept of message boundaries.
This terse statement implies that reads can see what you call 'partial data' from the pipe; or rather, that there is no such thing as partial data at all, because it is a byte stream.
For what it's worth, TCP has the same feature.
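If the reader knows it expects a fixed-size record (512 bytes here), the usual approach is to loop until that many bytes have accumulated. A minimal sketch, assuming a blocking descriptor and a helper name (read_full) that is ours, not from the question:

#include <errno.h>
#include <unistd.h>

/* Read exactly 'want' bytes, looping over short reads. Returns 0 on success,
 * -1 on error or if the stream ends before 'want' bytes arrive. */
static int read_full(int fd, void *buf, size_t want)
{
    char *p = buf;
    size_t got = 0;
    while (got < want) {
        ssize_t n = read(fd, p + got, want - got);
        if (n < 0) {
            if (errno == EINTR || errno == EAGAIN)
                continue;      /* interrupted or no data yet: try again
                                  (a real program would poll/select here) */
            return -1;         /* real error */
        }
        if (n == 0)
            return -1;         /* writer closed the pipe before we got everything */
        got += (size_t)n;
    }
    return 0;
}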
I am testing a serial communication protocol based on printable characters only.
The setup has a PC connected to an Arduino board by USB. The PC USB serial is operated in canonical mode, with no echo, no flow control, 9600 baud.
Since a read timeout is requested, pselect is called before the serial read. The Arduino board simply echoes back every received character without any processing. The PC OS is Linux Neon with kernel 5.13.0-40-generic.
When lines of a specific length are transmitted from the PC and echoed back by the Arduino, they are received correctly except that the final newline is missing.
A further read returns an empty line (the previously missing NL).
Lines with different lengths are transmitted and received correctly, including the trailing NL.
This behavior is fully repeatable and stable. The following code reproduces the problem for a line transmitted with a length of 65 characters (including NL) and received with a length of 64 (NL missing). Other line lengths work fine.
Thanks for any hints.
/* remote serial loop test 20220626 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/select.h>
#include <string.h>
#include <termios.h>
#include <time.h>
#include <unistd.h>
#define TX_MINLEN 63
#define TX_MAXLEN 66
#define DATA_MAXLEN 128
#define LINK_DEVICE "/dev/ttyUSB0"
#define LINK_SPEED B9600
#define RECEIVE_TIMEOUT 2000
int main()
{
    int wlen;
    int retval;
    int msglen;
    uint8_t tx_data[257] = {'0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F'};
    for (int i=16; i < 256; i++) tx_data[i] = tx_data[i & 0xf];
    uint8_t rx_data[257];

    /* serial interface */
    char * device;
    int speed;
    int fd;
    fd_set fdset;
    struct timespec receive_timeout;
    struct timespec *p_receive_timeout = &receive_timeout;
    struct termios tty;

    /* open serial device in blocking mode */
    fd = open(LINK_DEVICE, O_RDWR | O_NOCTTY);
    if (fd < 0) {
        printf("Error opening %s: %s\n",LINK_DEVICE,strerror(errno));
        return -1;
    }

    /* prepare serial read by select to have read timeout */
    FD_ZERO(&(fdset));
    FD_SET(fd,&(fdset));
    if (RECEIVE_TIMEOUT >= 0) {
        p_receive_timeout->tv_sec = RECEIVE_TIMEOUT / 1000;
        p_receive_timeout->tv_nsec = RECEIVE_TIMEOUT % 1000 * 1000000;
    }
    else
        p_receive_timeout = NULL;

    /* get termios structure */
    if (tcgetattr(fd, &tty) < 0) {
        printf("Error from tcgetattr: %s\n", strerror(errno));
        return -1;
    }

    /* set tx and rx baudrate */
    cfsetospeed(&tty, (speed_t)LINK_SPEED);
    cfsetispeed(&tty, (speed_t)LINK_SPEED);

    /* set no modem ctrl, 8 bit, no parity, 1 stop */
    tty.c_cflag |= (CLOCAL | CREAD);    /* ignore modem controls */
    tty.c_cflag &= ~CSIZE;
    tty.c_cflag |= CS8;                 /* 8-bit characters */
    tty.c_cflag &= ~PARENB;             /* no parity bit */
    tty.c_cflag &= ~CSTOPB;             /* only need 1 stop bit */
    tty.c_cflag &= ~CRTSCTS;            /* no hardware flowcontrol */

    /* canonical mode: one line at a time (\n is line terminator) */
    tty.c_lflag |= ICANON | ISIG;
    tty.c_lflag &= ~(ECHO | ECHOE | ECHONL | IEXTEN);

    /* input control */
    tty.c_iflag &= ~IGNCR;              /* preserve carriage return */
    tty.c_iflag &= ~INPCK;              /* no parity checking */
    tty.c_iflag &= ~INLCR;              /* no NL to CR translation */
    tty.c_iflag &= ~ICRNL;              /* no CR to NL translation */
    tty.c_iflag &= ~IUCLC;              /* no upper to lower case mapping */
    tty.c_iflag &= ~IMAXBEL;            /* no ring bell at rx buffer full */
    tty.c_iflag &= ~(IXON | IXOFF | IXANY);  /* no SW flowcontrol */

    /* no output remapping, no char dependent delays */
    tty.c_oflag = 0;

    /* no additional EOL chars, confirm EOF to be 0x04 */
    tty.c_cc[VEOL] = 0x00;
    tty.c_cc[VEOL2] = 0x00;
    tty.c_cc[VEOF] = 0x04;

    /* set changed attributes really */
    if (tcsetattr(fd, TCSANOW, &tty) != 0) {
        printf("Error from tcsetattr: %s\n", strerror(errno));
        return -1;
    }

    /* wait for serial link hardware to settle, required by arduino reset
     * triggered by serial control lines */
    sleep(2);

    /* empty serial buffers, both tx and rx */
    tcflush(fd,TCIOFLUSH);

    /* repeat transmit and receive, each time reducing data length by 1 char */
    for (int l=TX_MAXLEN; l > TX_MINLEN - 1; l--) {

        /* prepare data: set EOL and null terminator for current length */
        tx_data[l] = '\n';
        tx_data[l+1] = 0;

        /* send data */
        int sent = write(fd,tx_data,l+1);

        /* receive data */
        /* wait for received data or for timeout */
        retval = pselect(fd+1,&(fdset),NULL,NULL,p_receive_timeout,NULL);

        /* check for error or timeout */
        if (retval < 0)
            printf("pselect error: %d - %s\n",retval,strerror(errno));
        else if (retval == 0)
            printf("serial read timeout\n");

        /* there is enough data for a non block read: do read */
        msglen = read(fd,&rx_data,DATA_MAXLEN);

        /* check rx data length */
        if (msglen != l+1)
            printf("******** RX ERROR: sent %d, received %d\n",l+1,msglen);
        else
            continue;

        /* check received data, including new line if present */
        for (int i=0; i < msglen; i++) {
            if (tx_data[i] == rx_data[i])
                continue;
            else {
                printf("different rx data:|%s|\n",rx_data);
                break;
            }
        }

        /* clear RX buffer */
        for (int i=0; i < msglen + 1; i++) rx_data[i] = 0;
    }
}
When performing stream-based communication, be it via pipes, serial ports, or TCP sockets, you can never rely on reads to always return a full "unit of transmission" (in this case a line, but could also be a fixed-size block). The reason is that stream-based communication can always be split by any part of the transmission stack (even potentially the sender) into multiple blocks, and there is never a guarantee that a single read will always read a full block.
For example, you could always run into the race condition that your microcontroller is still sending parts of the message when you call read(), so not all characters are read exactly. In your case, that's not what you're seeing, because that would be more of a stochastic phenomenon (that would be worse with an increased interrupt load on the computer) and not so easily reproducible. Instead, because you're talking about the number 64 here, you're running into the static buffer size used in the kernel's tty driver that will only ever return at most 64 bytes at once, regardless of what the specified read size actually is. However, in other cases it could still be that you'll see additional failures by the kernel returning only the first couple of characters of a line in the first read(), and the rest in the second, depending on precise timing details -- you've probably not seen that yet, but it's bound to happen at some point.
The only reliable way to properly implement communication protocols in streaming situations (serial port, pipes, TCP sockets, etc.) is to consider the following:
For fixed-size data (e.g. communication units that are always N bytes in size), loop around the read() call until you have read exactly the right number of bytes (reads that follow an incomplete read obviously ask only for the bytes that are still missing).
For variable-size data (for example communication units that are separated by a line-end character) you have two options: either you read only one character at a time until you reach the end-of-line character (inefficient, lots of syscalls), or you keep track of the communication state via a large enough buffer that you keep filling with read() operations until it contains a line-end character, at which point you remove that line from the buffer (but keep the rest) and process it; a sketch of this second approach is shown below.
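For illustration only, a minimal sketch of the buffered line approach (names such as read_line and the buffer sizes are ours, not part of your program, and error handling is reduced to the bare minimum):

#include <string.h>
#include <unistd.h>

static char linebuf[512];      /* accumulates raw bytes between calls */
static size_t linelen = 0;     /* number of valid bytes currently in linebuf */

/* Fill 'out' with the next complete '\n'-terminated line (newline stripped).
 * Returns 1 when a line is available, 0 on EOF/error before a full line. */
static int read_line(int fd, char *out, size_t outsize)
{
    for (;;) {
        /* do we already hold a complete line? */
        char *nl = memchr(linebuf, '\n', linelen);
        if (nl != NULL) {
            size_t n = (size_t)(nl - linebuf);
            if (n >= outsize)
                n = outsize - 1;                  /* truncate over-long lines */
            memcpy(out, linebuf, n);
            out[n] = '\0';
            /* drop the consumed line (and its '\n') but keep the rest */
            linelen -= (size_t)(nl - linebuf) + 1;
            memmove(linebuf, nl + 1, linelen);
            return 1;
        }
        /* no complete line yet: read more bytes, however many arrive */
        ssize_t r = read(fd, linebuf + linelen, sizeof(linebuf) - linelen);
        if (r <= 0)
            return 0;
        linelen += (size_t)r;
    }
}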
As a complete aside, if you're doing anything with serial communication, I can very much recommend the excellent libserialport library (LGPLv3 license) that makes working with serial ports a lot easier -- and has the benefit of being cross-platform. (Doesn't help with your issue, just thought that I'd mention it.)
Upgrading from Linux kernel version 5.13.0-40-generic to 5.13.0-51-generic solved the problem.
I need to understand the code in the "Real Time Clock" interrupt handler rtc_interrupt. The code is:
rtc_irq_data += 0x100;
rtc_irq_data &= ~0xff;
rtc_irq_data |= (CMOS_READ(RTC_INTR_FLAGS) & 0xF0);
I am unable to understand why it is += 0x100, nor the rest of the code.
In the book "Linux Kernel Development" by Robert Love, that snippet of code has the following comment:
/*
* Can be an alarm interrupt, update complete interrupt,
* or a periodic interrupt. We store the status in the
* low byte and the number of interrupts received since
* the last read in the remainder of rtc_irq_data.
*/
As for rtc_irq_data += 0x100;: from the comment we know the number of interrupts received is counted in everything above the low byte. Adding 0x100 adds 1 to that counter while leaving the low (status) byte alone; viewed as a hexadecimal number, it is the second byte that gets incremented, i.e. one more interrupt on the counter.
As for the second line, rtc_irq_data &= ~0xff;: rtc_irq_data is bitwise ANDed with the complement of 0xff, i.e. a mask with every bit set except the low byte. The high part of the value (the counter) is kept and the low byte (the old status) is cleared. So, supposing this was the first interrupt, the value would now be guaranteed to be 0x0100.
The last part, rtc_irq_data |= (CMOS_READ(RTC_INTR_FLAGS) & 0xF0);, ORs the current RTC status into the low byte (which is now 0x00). Hence the comment "We store the status in the low byte".
As for the bitwise AND with 0xF0 in (CMOS_READ(RTC_INTR_FLAGS) & 0xF0): consulting the original AT-compatible RTC datasheet, RTC_INTR_FLAGS is Register C, a register byte in which only the upper 4 bits are used: b7 = IRQF, b6 = PF, b5 = AF, b4 = UF. For b3 to b0 the datasheet says:
The unused bits of Status Register 1 are read as "0s". They cannot be written.
(From the RTC datasheet.)
Hence, as good standard coding practice, the AND with 0xF0 makes sure the lower 4 bits are ignored.
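Putting it together, the value packs a counter and a status byte into one integer. A small illustrative sketch of the packing and unpacking (standalone C, not the actual kernel code; the Register C value 0x90 is an invented example meaning IRQF and UF set):

#include <stdio.h>

int main(void)
{
    unsigned long rtc_irq_data = 0;
    unsigned char reg_c = 0x90;       /* pretend Register C read back IRQF | UF */

    /* what rtc_interrupt does on each interrupt */
    rtc_irq_data += 0x100;            /* count one more interrupt (above the low byte) */
    rtc_irq_data &= ~0xff;            /* clear the old status in the low byte */
    rtc_irq_data |= (reg_c & 0xF0);   /* store the new status, upper 4 bits only */

    /* how a reader can split the value back apart */
    unsigned long count  = rtc_irq_data >> 8;    /* interrupts since last read */
    unsigned char status = rtc_irq_data & 0xff;  /* last status byte */
    printf("count=%lu status=0x%02x\n", count, status);
    return 0;
}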
I'm writing an embedded Linux application that (1) opens a serial connection to another device, (2) sends a known command, (3) checks the port for incoming characters (the response) until an expected response phrase or character is detected, (4) repeats steps 2 and 3 until a series of commands have been sent and responses received, and (5) then closes the port.
My app goes through several cycles of the above sequence, and every now and then, while it is waiting for a response (reading), the communication suddenly stops and my software faults out because of my built-in timeout logic.
Is there anything in my port configuration that would cause the port to become blocked because of a specific byte being received (possibly due to electrical noise)?
Here is how I'm opening my ports (showing configurations via termios.h):
struct termios options;
fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY | O_NONBLOCK);
if (fd == -1) {
debug() << "Port open failed!";
return FAIL;
}
debug() << "Port Opened Successful";
fcntl(fd, F_SETFL, 0); // This setting interacts with VMIN and VTIME below
// Get options
tcgetattr(fd, &options);
// Adjust Com port options
options.c_cflag |= (CLOCAL | CREAD); // Program will not "own" port, enable reading on port
options.c_lflag &= ~(ICANON | ECHO | ECHOE | ISIG); // Sets RAW input mode (does not treat input as a line of text with CR/LF ending)
options.c_oflag &= ~OPOST; // Sets RAW output mode (avoids newline mapping to CR+LF characters)
options.c_iflag &= ~(IXON | IXOFF | IXANY); // Turns off SW flow c
options.c_cc[VMIN] = 0;
options.c_cc[VTIME] = 10;
// Set options
tcsetattr(fd, TCSANOW, &options);
//return fd;
return SUCCEED;
I can't figure out why the communication all of a sudden just freezes up, and why it only recovers when I cycle power to my device. Thanks all!
More info - here are my read and write functions:
int Comm::Receive(unsigned char* rBuf)
{
int bytes;
ioctl(fd, FIONREAD, &bytes);
if (bytes >= 1)
{
bytes = read(fd, rBuf, 1);
if (bytes < 0)
return READ_ERR;
return SUCCEED;
}
else
return NO_DATA_AVAILABLE;
}
int Comm::Send(int xCt, unsigned char* xBuf)
{
int bytes;
if (fd == -1)
return FAIL;
bytes = write(fd, xBuf, xCt);
if (bytes != xCt)
return FAIL;
else
return SUCCEED;
}
Welcome to the joys of serial ports...
Thought 1: wrap your read calls with a select() (see the sketch after these thoughts)
Thought 2: Unset the ICANON flag in tcsetattr, and set a VTIME attribute for a deliberate timeout (and then, obviously, handle it)
Thought 3: Nothing about serial comms ever works perfectly.
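For Thought 1, a minimal sketch of what wrapping the read in select() could look like (illustrative only; fd and rBuf come from your existing code, and the 1-second timeout is an arbitrary choice):

fd_set readfds;
struct timeval tv;
FD_ZERO(&readfds);
FD_SET(fd, &readfds);
tv.tv_sec = 1;                 /* give up after 1 second of silence */
tv.tv_usec = 0;

int rv = select(fd + 1, &readfds, NULL, NULL, &tv);
if (rv < 0) {
    perror("select");          /* real error */
} else if (rv == 0) {
    /* timeout: no data arrived -- report/retry instead of hanging forever */
} else {
    ssize_t n = read(fd, rBuf, 1);   /* data is available, read won't block */
    /* process the n bytes in rBuf ... */
}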
I also had a similar problem with sending commands to a device and reading its responses. Please refer to the SO post below; its answer worked for my problem.
In these cases, we have to take care about the protocol used for device communication (send and receive). If we can send commands successfully but receive no response, or only noise, from the device, it implies that there is something wrong in the data packet that was sent to the device. First of all, check the protocol specification, then create a byte array for a simple command (like making a beep sound) and send it.
Send data to a barcode scanner over RS232 serial port
I can help further if you post your complete source code with the output.
Enjoy the code. Thanks.
We've been bashing our heads off of this one all morning. We've got some serial lines setup between an embedded linux device and an Ubuntu box. Our reads are getting screwed up because our code usually returns two (sometimes more, sometimes exactly one) message reads instead of one message read per actual message sent.
Here is the code that opens the serial port. InterCharTime is set to 4.
void COMClass::openPort()
{
    struct termios tio;
    this->fd = -1;
    int tempFD;

    tempFD = open( port, O_RDWR | O_NOCTTY);
    if (tempFD < 0)
    {
        cerr << "the port is not opened" << port << "\n";
        portOpen = 0;
        return;
    }

    tio.c_cflag = BaudRate | CS8 | CLOCAL | CREAD;
    tio.c_oflag = 0;
    tio.c_iflag = IGNPAR;
    newtio.c_cc[VTIME] = InterCharTime;
    newtio.c_cc[VMIN] = readBufferSize;
    newtio.c_lflag = 0;

    tcflush(tempFD, TCIFLUSH);
    tcsetattr(tempFD, TCSANOW, &tio);

    this->fd = tempFD;
    portOpen = true;
}
The other end is configured similarly for communication, and has one small section of particular interest:
while (1)
{
    sprintf(out, "\r\nHello world %lu", ++ulCount);
    puts(out);
    WritePort((BYTE *)out, strlen(out)+1);
    sleep(2);
} //while
Now, when I run a read thread on the receiving machine, "hello world" is usually broken up over a couple of messages. Here is some sample output:
1: Hello
2: world 1
3: Hello
4: world 2
5: Hello
6: world 3
where a number followed by a colon marks one received message. Can you see any error we are making?
Thank you.
Edit:
For clarity, please view section 3.2 of the Linux Serial Programming HOWTO. To my understanding, with a VTIME of a couple of seconds (meaning vtime is set anywhere between 10 and 50, by trial and error) and a VMIN of 1, there should be no reason that the message is broken up over two separate messages.
I don't see why you are surprised.
You are asking for at least one byte. If your read() is asking for more, which seems probable since you are surprised you aren't getting the whole string in a single read, it can get whatever data is available up to the read() size. But all the data isn't available in a single read so your string is chopped up between reads.
In this scenario the timer doesn't really matter. The timer won't be set until at least one byte is available. But you have set the minimum at 1. So it just returns whatever number of bytes ( >= 1) are available up to read() size bytes.
If you are still experiencing this problem (realizing the question is old), and your code is accurate, you are setting your VTIME and VMIN in the newtio struct, and the rest of the parameters in the tio struct, which means the VMIN/VTIME values you think you are setting never actually reach the port. A sketch of the corrected setup follows below.
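For illustration, a minimal sketch using a single struct throughout (same member names as the code above; it also starts from the port's current attributes with tcgetattr rather than an uninitialized struct, which is generally safer):

struct termios tio;
tcgetattr(tempFD, &tio);                 /* start from the port's current settings */

tio.c_cflag = BaudRate | CS8 | CLOCAL | CREAD;
tio.c_oflag = 0;
tio.c_iflag = IGNPAR;
tio.c_lflag = 0;                         /* raw (non-canonical) input */
tio.c_cc[VTIME] = InterCharTime;         /* same struct as everything else */
tio.c_cc[VMIN]  = readBufferSize;

tcflush(tempFD, TCIFLUSH);
tcsetattr(tempFD, TCSANOW, &tio);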