I'm trying to receive messages from a device that uses mark parity for an address byte and space parity for the message body. The device is the "master" of a multi-drop serial bus. Based on the termios man page, I am using CMSPAR, PARENB, ~PARODD, INPCK, ~IGNPAR, and PARMRK. I expect to get a 3-byte sequence for each address byte: '\377' '\0' <address>. It doesn't happen... I always get the address byte (and the body bytes) but no leading '\377' '\0' chars.
I tried to get PARMRK to work with odd and even parity setups just in case CMSPAR was not supported. Still no 3-byte sequences in the data stream. I'm using Ubuntu 12.04 LTS.
n_tty.c: n_tty_receive_parity_error() has the logic that implements PARMRK, and 8250_core.c has the logic that flags parity errors. dmesg | grep ttyS0 shows serial8250: ... is a 16550A. Hmmm... a subsequent message shows 00:0a: ... is a 16550A. Perhaps the 8250 driver is not actually servicing ttyS0?
Any ideas? Even if you don't see what I've done wrong but have gotten PARMRK to work, comments about your situation might help me.
UPDATE:
My Linux is running in a VMware VM, so I tried a non-VM configuration and now it works! In case someone knows, I'd still like to understand why parity errors are not detected in a VM.
Here is my configuration code:
struct termios tio;
bzero(&tio, sizeof(tio));
tcgetattr(fd, &tio);
// Frame bus runs at 38,400 BAUD
const int BAUD_Rate = B38400;
cfsetispeed(&tio, BAUD_Rate);
cfsetospeed(&tio, BAUD_Rate);
// Initialize to raw mode. PARMRK and PARENB will be over-ridden before calling tcsetattr()
cfmakeraw(&tio);
// Ignore modem lines and enable receiver
tio.c_cflag |= (CLOCAL | CREAD);
// No flow control
tio.c_cflag &= ~CRTSCTS; // No HW flow control
tio.c_iflag &= ~(IXON | IXOFF); // Set the input flags to disable in-band flow control
// Set bits per byte
tio.c_cflag &= ~CSIZE;
tio.c_cflag |= CS8;
// Use space parity to get 3-byte sequence (0xff 0x00 <address>) on address byte
tio.c_cflag |= CMSPAR; // Set "stick" parity (either mark or space)
tio.c_cflag &= ~PARODD; // Select space parity so that only address byte causes error
// NOTE: The following block overrides PARMRK and PARENB bits cleared by cfmakeraw.
tio.c_cflag |= PARENB; // Enable parity generation
tio.c_iflag |= INPCK; // Enable parity checking
tio.c_iflag |= PARMRK; // Enable in-band marking
tio.c_iflag &= ~IGNPAR; // Make sure input parity errors are not ignored
// Set it up now
if (tcsetattr(fd, TCSANOW, &tio) == -1)
{
    cout << "Failed to setup the port: " << errno << endl;
    return -1;
}
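For reference, once PARMRK is honored the input stream needs un-escaping: a byte received with a parity error arrives as the three-byte sequence '\377' '\0' <byte>, and a literal 0xFF data byte arrives doubled as '\377' '\377' (because ISTRIP is clear). Here is a minimal decoding sketch; parmrk_decode is a hypothetical helper, and it assumes each escape sequence arrives complete within one buffer:

#include <stddef.h>

// Decode a PARMRK-escaped input buffer. Returns the number of bytes
// written to out[]; is_marked[i] is 1 if out[i] arrived with a parity
// error (here: the mark-parity address byte).
size_t parmrk_decode(const unsigned char *in, size_t n,
                     unsigned char *out, unsigned char *is_marked)
{
    size_t i = 0, o = 0;
    while (i < n) {
        if (in[i] == 0377 && i + 1 < n && in[i + 1] == 0377) {
            out[o] = 0377;        // escaped literal 0xFF data byte
            is_marked[o++] = 0;
            i += 2;
        } else if (in[i] == 0377 && i + 2 < n && in[i + 1] == 0) {
            out[o] = in[i + 2];   // byte that was received with a parity error
            is_marked[o++] = 1;
            i += 3;
        } else {
            out[o] = in[i];       // ordinary data byte
            is_marked[o++] = 0;
            i += 1;
        }
    }
    return o;
}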
I was having a similar issue (but from the opposite side):
The master of a serial protocol should send the 1st byte of a frame with mark parity and all the rest with space parity, while the slave would respond only with space parity.
Many serial drivers silently ignore the CMSPAR bit without returning an error, so you may think you've set up mark/space parity while you've actually selected odd/even parity instead.
I had to use a protocol analyser to realise that.
So I ended up checking the data of each byte before sending it and switching between odd/even parity in order to simulate the Mark/Space parity that I needed.
Most USB to Serial adapters will need a similar approach because they don't support parity mark/space.
For example, let's say we want to send the following data:
01 03 07 0F 1F
The 1st byte should be sent with mark parity and the rest with space parity.
We could do the following:
Send 01 with even parity (parity bit = 1)
Send 03 with even parity (parity bit = 0)
Send 07 with odd parity (parity bit = 0)
Send 0F with even parity (parity bit = 0)
Send 1F with odd parity (parity bit = 0)
That way we can simulate the needed result.
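A minimal sketch of that per-byte selection logic under standard termios, where the even-parity bit equals the XOR of the data bits and odd parity inverts it (even_parity_bit and send_with_parity_bit are hypothetical helpers, not part of any API):

#include <termios.h>
#include <unistd.h>

// Parity bit produced by even parity = XOR of the data bits.
static int even_parity_bit(unsigned char b)
{
    b ^= b >> 4;
    b ^= b >> 2;
    b ^= b >> 1;
    return b & 1;
}

// Send one byte so that its parity bit equals 'wanted' (1 = mark, 0 = space).
static int send_with_parity_bit(int fd, unsigned char byte, int wanted)
{
    struct termios tio;
    if (tcgetattr(fd, &tio) == -1)
        return -1;
    tio.c_cflag |= PARENB;
    if (even_parity_bit(byte) == wanted)
        tio.c_cflag &= ~PARODD;   // even parity already yields the wanted bit
    else
        tio.c_cflag |= PARODD;    // otherwise odd parity inverts it
    if (tcsetattr(fd, TCSADRAIN, &tio) == -1)
        return -1;
    return write(fd, &byte, 1) == 1 ? 0 : -1;
}

TCSADRAIN matters here: it lets any previously queued byte finish transmitting before the parity setting changes.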
The catch here is that every parity switch makes the driver perform a number of time-consuming checks, which can reduce the effective data-transfer rate.
I was using a hacked version of the serial driver on an embedded device that could switch parity very quickly by omitting some checks that were unnecessary for the application (baud-rate changes, for example).
If your inter-character delay is critical, you may need a different solution.
The following code configures UART port.
const char *UART2_path = "/dev/ttymxc2";
int UART2;
struct termios ttyurt; // termios state used by UART2_open() (declaration missing from the original snippet)

void UART2_open(const char *UART2_path)
{
    int flags = O_RDWR | O_NOCTTY;
    UART2 = open(UART2_path, flags);
    tcgetattr(UART2, &ttyurt);          // Get the current attributes of the serial port
    // Set baud rate (input and output)
    cfsetispeed(&ttyurt, B115200);
    cfsetospeed(&ttyurt, B115200);
    ttyurt.c_cflag &= ~PARENB;          // Disable the Parity Enable bit (PARENB)
    ttyurt.c_cflag &= ~CSTOPB;          // Clear CSTOPB, configuring 1 stop bit
    ttyurt.c_cflag &= ~CSIZE;           // Use the mask to clear the data-size setting
    ttyurt.c_cflag |= CS8;              // Set 8 data bits
    ttyurt.c_cflag &= ~CRTSCTS;         // Disable hardware flow control
    tcsetattr(UART2, TCSANOW, &ttyurt); // Write the configuration to the port
    tcflush(UART2, TCIFLUSH);
}
---------
unsigned char buffer[8] = {0x1f, 0x0a, 0x1a, 0x89, 0x85, 0xbf, 0x36, 0x40};
write(UART2, buffer, sizeof(buffer)); // sending on uart; strlen() is wrong for binary data
expected output==>1f0a8985bf3640
actual output ==>1f0d0a8985bf3640
I'm able to send data, but for some reason 0x0A bytes are received as 0x0D 0x0A. I'm fairly sure something in this port configuration is doing this.
Extra byte 0D before 0A?
Writing 0D 0A instead of 0A when I try to write to the UART.
That appears to be caused by a termios (mis)configuration that is inappropriate for your situation. The termios layer is capable of translating/expanding each occurrence of \n to \r\n on output (i.e. the ONLCR attribute, which is typically enabled by default).
"The following code configures UART port."
Your program accesses a serial terminal (i.e. /dev/tty...) rather than a "UART port". There are several layers of processing between your program and the UART hardware. See Linux serial drivers.
Your initialization code is properly implemented (i.e. per Setting Terminal Modes Properly), but it is only the bare minimum: it sets the serial line parameters to 115200 8N1 with no HW flow control. Absolutely no other termios attributes are specified, which means your program runs with whatever previous (possibly random) settings are in effect, such as the ONLCR attribute, and may occasionally misbehave.
The most important consideration when using a serial terminal and termios configuration is determining whether the data should be handled in canonical (as lines of text) or non-canonical (aka raw or binary) mode. Canonical mode provides additional processing to facilitate the reading/writing of text as lines, delimited by End-of-Line characters. Otherwise syscalls are performed for an arbitrary number of bytes. See this answer for more details.
Your output data appears not to be (ASCII) text, so presumably you want to use non-canonical (aka raw) mode. For raw output, your program should specify:
ttyurt.c_oflag &= ~OPOST;
This will inhibit any data conversion on output by termios.
But your termios initialization is also incomplete for reading.
For a proper and concise termios initialization for non-canonical mode, see this answer.
If instead you need canonical mode, then refer to this answer.
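As a rough sketch of the usual non-canonical ingredients (not the linked answer verbatim; uart2_set_raw is a hypothetical helper):

#include <termios.h>

// Full non-canonical ("raw") setup: 115200 8N1, no flow control,
// no input translation, no output processing, blocking 1-byte reads.
static int uart2_set_raw(int fd)
{
    struct termios t;
    if (tcgetattr(fd, &t) < 0)
        return -1;
    cfsetispeed(&t, B115200);
    cfsetospeed(&t, B115200);
    t.c_cflag &= ~(PARENB | CSTOPB | CSIZE | CRTSCTS);
    t.c_cflag |= CS8 | CREAD | CLOCAL;                      // 8N1, receiver on
    t.c_lflag &= ~(ICANON | ECHO | ECHOE | ISIG | IEXTEN);  // no line editing/echo/signals
    t.c_iflag &= ~(IXON | IXOFF | IXANY);                   // no software flow control
    t.c_iflag &= ~(ICRNL | INLCR | IGNCR | ISTRIP | BRKINT | PARMRK | INPCK);
    t.c_oflag &= ~OPOST;          // no output processing: stops the \n -> \r\n expansion
    t.c_cc[VMIN]  = 1;            // block until at least 1 byte is available
    t.c_cc[VTIME] = 0;            // no inter-byte timer
    return tcsetattr(fd, TCSANOW, &t);
}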
You seem to be another victim of UNIX/Linux versus Windows "newline"/"line feed" handling: UNIX/Linux ends a line with the single character 0A (line feed), and classic Mac OS used 0D (carriage return), while Windows uses the combination 0D 0A. So most probably some program in the chain is converting your UNIX-style data into Windows-style data.
That conversion can happen far from where you expect: I once had UNIX files sent to a Windows computer, where the user inspected them with a Windows file viewer, and it was the file viewer itself that was doing the conversion. Therefore I advise you to check all intermediate programs.
Currently, I'm testing flow control between 2 RS485 UART ports (just connecting RX to RX and TX to TX; RTS/CTS is not connected).
Flag setting (between get and set attribute)
HW Flow control:
tty.c_cflag |= CRTSCTS; // RTS/CTS
tty.c_iflag &= ~(IXOFF|IXON|IXANY);
SW Flow control:
tty.c_cflag &= ~CRTSCTS;
tty.c_iflag |= (IXOFF|IXON|IXANY);
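For reference, here are those flags wrapped into a complete get/modify/set sequence (set_flow_control is a hypothetical helper):

#include <termios.h>

// Select hardware (RTS/CTS) or software (XON/XOFF) flow control.
static int set_flow_control(int fd, int hw)
{
    struct termios tty;
    if (tcgetattr(fd, &tty) == -1)
        return -1;
    if (hw) {
        tty.c_cflag |= CRTSCTS;                 // RTS/CTS handshaking
        tty.c_iflag &= ~(IXOFF | IXON | IXANY);
    } else {
        tty.c_cflag &= ~CRTSCTS;
        tty.c_iflag |= (IXOFF | IXON | IXANY);  // in-band XON/XOFF
    }
    return tcsetattr(fd, TCSANOW, &tty);
}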
I assume that if I set both UART1 and UART2 to hardware flow control and the baud rate is high (e.g. 460800 bps), or if I write() to UART1 at a higher baud rate and read() from UART2 at a lower one, the FIFO (currently 64 bytes) will overflow and the kernel will send some notification.
But in fact, write() and read() always succeed. Could anyone suggest how to observe a buffer overflow?
Sorry if my question is a little dumb; I'm a new Linux learner.
Thanks so much.
There is no hardware flow control in the RS485 standard.
Since the API is shared with the RS232C standard, the calls can be made, but they will not work effectively.
Also, the 64-byte FIFO you mention is a hardware (interface chip) buffer; the device driver has a software buffer as well, typically kilobytes in size.
So it is no wonder that sending and receiving short data completes normally, even at high speed.
Detect conditions such as overflow by checking the format of the received data and by checking the balance and sequence of commands and responses.
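If you want to observe overruns directly, one option on Linux (assuming the driver implements it) is the TIOCGICOUNT ioctl, which reports the driver's cumulative event counters, including hardware overruns and tty-buffer overruns. A small sketch:

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/serial.h>   // struct serial_icounter_struct

// Print the driver's error counters for an open serial fd.
// Not every driver implements TIOCGICOUNT; check the return value.
static void print_error_counters(int fd)
{
    struct serial_icounter_struct ic;
    if (ioctl(fd, TIOCGICOUNT, &ic) == -1) {
        perror("TIOCGICOUNT");
        return;
    }
    printf("rx=%d tx=%d frame=%d parity=%d overrun=%d buf_overrun=%d\n",
           ic.rx, ic.tx, ic.frame, ic.parity, ic.overrun, ic.buf_overrun);
}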
I have a C++ program on a Raspberry Pi Model B that receives data from a remote sensor via an Xbee and then writes a message back via the Xbee. When I connect the Xbee via a Sparkfun XBee Explorer USB, it works perfectly every time. But if I run the exact same code using the RPi serial port, the incoming message is always received, but the output message is written from the serial port to the Xbee only a few times after reboot and then never again. I know there's no output from the serial port to the Xbee because I have a logic probe connected to the GND, TXD and RXD pins and I can see the incoming and outgoing packets. Also, the RPi program writes debug messages for incoming and outgoing packets, and both happen when they should. I'm connecting just the 3.3V, GND, TXD, and RXD pins on the RPi GPIO to the corresponding Xbee pins. The RPi is running the 2013-09-10-wheezy-raspbian release. Baud rate is 38400.
This is the serial port initialization:
fcntl(fd, F_SETFL, 0); // Clear the file status flags
struct termios options;
tcgetattr(fd, &options); // Get the current options for the port
options.c_iflag &= ~(IGNBRK | BRKINT | ICRNL | INLCR | PARMRK | INPCK | ISTRIP | IXON);
options.c_oflag &= ~(OCRNL | ONLCR | ONLRET | ONOCR | OFILL | OLCUC | OPOST);
options.c_lflag &= ~(ECHO | ECHONL | ICANON | IEXTEN | ISIG);
options.c_cflag |= (CLOCAL | CREAD);
options.c_cflag &= ~PARENB;
options.c_cflag &= ~CSTOPB;
options.c_cflag &= ~CSIZE;
options.c_cflag |= CS8;
if (cfsetispeed(&options, B38400) < 0 || cfsetospeed(&options, B38400) < 0) {
throw runtime_error("Unable to set baud rate");
}
if (tcsetattr(fd, TCSAFLUSH, &options) < 0) {
throw runtime_error("Could not set tty options");
}
fd_set set;             // declaration missing from the original snippet
FD_ZERO(&set);          // clear the set
FD_SET(fd, &set);       // add file descriptor to the set
sleep(1);
Looking closely at the logic analyzer trace, I can see what's happening. The RPi TxD line (GPIO pin 8) suddenly goes low and stays low. Then no further output is possible without a reboot. The RxD line continues to work perfectly. I have two RPis and this happens on both of them, anywhere between a minute and half an hour in. Can anybody please give me a clue as to why this might happen and, more importantly, what I can do about it? I'm desperate. This is driving me crazy after way too many days of testing everything I can think of.
Problem solved! It's not the problem I thought it was, and of course it's my own fault. First off, I was looking in the wrong place. There was nothing wrong with the C++ code. My project is built using Ruby on Rails and I use the Wiring Pi (http://wiringpi.com) GPIO interface library for the Raspberry Pi to control external relays. When I coded the pins to initially turn off, I accidentally (stupidly) used the header pin I wanted (15) as the Wiring Pi pin number. Wiring Pi pin 15 is actually header pin 8 (BCM GPIO 14), which is the UART transmit (TxD) pin. So the result was that the Raspberry Pi did exactly what I told it to do: force TxD LOW. Apparently once you do that, it doesn't matter what you tell the serial driver to do; the pin stays low.
So many hours of work to find such a dumb mistake. Thanks to all those who took the time to look at my question.
I'm writing a console application under Ubuntu that uses the serial port. It needs to read and write from the serial port at 60 Hz.
I find that the call to read() is often, but not always, slow. I have set O_NDELAY, so it often returns immediately (great). But sometimes it takes up to 50 ms to finish, and that is too slow for my application. Before calling read(), I check the number of characters available, so it should not be waiting for data.
What is read() doing that takes so long? How can I speed it up?
Options on the port are:
options.c_cflag |= (CLOCAL | CREAD);
options.c_cflag &= ~PARENB;
options.c_cflag &= ~CSTOPB;
options.c_cflag &= ~CSIZE;
options.c_cflag |= CS8;
options.c_lflag &= ~(ICANON | ECHO | ECHOE | ISIG);
options.c_iflag &= ~IXON;
options.c_oflag = 0;
edit: I'd been using select() earlier but it turned out to be orthogonal to the question. Updated with my latest information.
The solution is to set the low_latency flag on the serial port.
See High delay in RS232 communication on a PXA270 and http://osdir.com/ml/serial/2003-11/msg00020.html
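For reference, a sketch of setting that flag programmatically via the TIOCGSERIAL/TIOCSSERIAL ioctls (assuming the driver honors ASYNC_LOW_LATENCY; set_low_latency is a hypothetical helper):

#include <sys/ioctl.h>
#include <linux/serial.h>   // struct serial_struct, ASYNC_LOW_LATENCY

// Ask the driver to push received bytes up immediately instead of batching them.
static int set_low_latency(int fd)
{
    struct serial_struct ser;
    if (ioctl(fd, TIOCGSERIAL, &ser) == -1)
        return -1;
    ser.flags |= ASYNC_LOW_LATENCY;
    return ioctl(fd, TIOCSSERIAL, &ser);
}

On kernels of that era the same flag can also be toggled from the shell with setserial /dev/ttyS0 low_latency.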
It's not what select() is doing, it's what the system is doing. Your thread eventually uses up its timeslice and the system allows other code to run. If you use a sensible timeout rather than trying to return immediately, the system should treat your process as interactive and the delays should go away.
If there's a point to selecting on a single descriptor with a 0 timeout, I can't figure out what it is. Why not just try the operation and see if you get an EWOULDBLOCK error?
Why not use a sensible timeout so the system lets other processes run when you have nothing to do?
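For example, a sketch of such a timeout sized for a 60 Hz loop (wait_readable is a hypothetical helper; one period is roughly 16 ms):

#include <sys/select.h>

// Wait up to ~16 ms (one 60 Hz period) for data instead of spinning.
static int wait_readable(int fd)
{
    fd_set rfds;
    struct timeval tv = { 0, 16000 };   // 0 s, 16000 us
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    return select(fd + 1, &rfds, NULL, NULL, &tv);  // >0 means data is ready
}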
A bit of an exotic question :D
I'm programming C++ on Ubuntu 10 and I need to code the MDB (multi-drop bus) protocol, which uses 9 data bits in serial communication (YES, 9 data bits :D).
Some drivers do support 9 data bits on some UART chips, but most do not.
To explain briefly:
MDB uses 8 data bits for data and the 9th data bit as a mode bit.
So when the master sends the first byte, it sets the mode (9th) bit to 1, which interrupts all devices on the bus; they then examine this first byte, which holds the address of a device.
If a listening device (one of many) finds its address in this first byte, it knows that the following bytes will be data bytes for it. Data bytes have the mode bit set to 0.
Example in bits: 000001011 000000010 000000100 000000110 (1 address byte and 3 data bytes)
In the return direction (slave -> master), the mode bit is used to flag end of transmission:
the master reads from the serial port until it finds a 9-bit packet with the 9th bit = 1; usually the last 9-bit sequence is a checksum byte with mode = 1.
So finally my question:
I know how to use the CMSPAR flag in termios to drive the parity bit as the mode bit, i.e. setting it to either MARK (1) or SPACE (0).
An example for all who don't know how:
First check whether this is defined; if not, there is probably no support in termios:
# define CMSPAR 010000000000 /* mark or space (stick) parity */
And the code for sending with mark or space parity, i.e. simulating the 9th data bit:
struct termios tio;
bzero(&tio, sizeof(tio));
tcgetattr(portFileDescriptor, &tio);
if (useMarkParity)
{
    // Send with mark parity: stick parity with PARODD set means parity bit = 1
    tio.c_cflag |= PARENB | CMSPAR | PARODD;
    tcsetattr(portFileDescriptor, TCSADRAIN, &tio);
}
else
{
    // Send with space parity: stick parity with PARODD clear means parity bit = 0
    tio.c_cflag |= PARENB | CMSPAR;
    tio.c_cflag &= ~PARODD;
    tcsetattr(portFileDescriptor, TCSADRAIN, &tio);
}
write(portFileDescriptor, DATA, DATALEN);
Now what I don't know is HOW to set up parity checking on receive; I have tried almost all combinations and I cannot get the parity-error byte sequence.
Can anyone help me set up parity checking on receive so that it does not ignore parity and does not strip bytes, but adds DEL before each "bad" received byte?
As it says in the POSIX serial help:
"INPCK and PARMRK: If IGNPAR is enabled, a NUL character (000 octal) is sent to your program before every character with a parity error. Otherwise, a DEL (177 octal) and NUL character is sent along with the bad character."
So how do I correctly set PARMRK and INPCK so that mode bit = 1 is detected as a parity error and DEL (177 octal) is inserted into the input stream?
Thank you :D
It sounds to me like you want to set space parity on the receiver and leave IGNPAR disabled. That way, when a byte with mark parity is received, it should generate the parity error along with the DEL marker.
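A receive-side sketch of that suggestion (set_space_parity_rx is a hypothetical helper); with these flags, a byte that arrives with the mark bit set should be delivered with a marker prefix, and per the note below the actual prefix is '\377' '\0' rather than '\177':

#include <termios.h>

// Expect space parity so a mark (9th) bit = 1 shows up as a parity error,
// delivered in-band by PARMRK. CMSPAR may not be defined on all systems.
static int set_space_parity_rx(int fd)
{
    struct termios tio;
    if (tcgetattr(fd, &tio) == -1)
        return -1;
    tio.c_cflag |= PARENB | CMSPAR;     // enable "stick" parity checking
    tio.c_cflag &= ~PARODD;             // expect space parity (mode bit = 0)
    tio.c_iflag |= INPCK | PARMRK;      // check parity, mark errors in-band
    tio.c_iflag &= ~(IGNPAR | ISTRIP);  // don't ignore errors, don't strip bit 8
    return tcsetattr(fd, TCSANOW, &tio);
}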
I was having the same problem when running Linux as a guest OS. Running the same program on another machine with Linux as the host OS works. I suspect that the virtual serial port does not pass on the parity error. See "PARMRK termios behavior not working on Linux". It is still possible that the VM is not the problem, because it was a completely different computer. However, I was able to get parity errors using Realterm on Windows (the host OS on the computer where Linux was the guest).
Also, note the code in n_tty.c shows it inserts '\377' '\0' rather than '\177' '\0'. This was also verified on the working configuration.