I have a C++ program on a Raspberry Pi Model B that receives data from a remote sensor via an XBee and then writes a message back via the XBee. When I connect the XBee via a SparkFun XBee Explorer USB, it works perfectly every time. But if I run the exact same code using the RPi serial port, the incoming message is always received, while the outgoing message is written from the serial port to the XBee only a few times after a reboot and then never again. I know there's no output from the serial port to the XBee because I have a logic probe connected to the GND, TXD and RXD pins and I can see the incoming and outgoing packets. Also, the RPi program writes debug messages for incoming and outgoing packets, and both happen when they should. I'm connecting just the 3.3V, GND, TXD, and RXD pins on the RPi GPIO header to the corresponding XBee pins. The RPi is running the 2013-09-10-wheezy-raspbian release. The baud rate is 38400.
This is the serial port initialization:
fcntl(fd, F_SETFL, 0); // Clear the file status flags
struct termios options;
tcgetattr(fd, &options); // Get the current options for the port
options.c_iflag &= ~(IGNBRK | BRKINT | ICRNL | INLCR | PARMRK | INPCK | ISTRIP | IXON);
options.c_oflag &= ~(OCRNL | ONLCR | ONLRET | ONOCR | OFILL | OLCUC | OPOST);
options.c_lflag &= ~(ECHO | ECHONL | ICANON | IEXTEN | ISIG);
options.c_cflag |= (CLOCAL | CREAD);
options.c_cflag &= ~PARENB;
options.c_cflag &= ~CSTOPB;
options.c_cflag &= ~CSIZE;
options.c_cflag |= CS8;
if (cfsetispeed(&options, B38400) < 0 || cfsetospeed(&options, B38400) < 0) {
throw runtime_error("Unable to set baud rate");
}
if (tcsetattr(fd, TCSAFLUSH, &options) < 0) {
throw runtime_error("Could not set tty options");
}
FD_ZERO(&set); // clear the set
FD_SET(fd, &set); // add file descriptor to the set
sleep(1);
Looking closely at the logic analyzer capture, I can see what's happening. The RPi TxD line (header pin 8) suddenly goes low and stays low. Then no further output is possible without a reboot. The RxD line continues to work perfectly. I have two RPis and this happens on both of them, anywhere between a minute and half an hour after boot. Can anybody please give me a clue as to why this might happen and, more importantly, what I can do about it? I'm desperate; this is driving me crazy after way too many days of testing everything I can think of.
Problem solved! It's not the problem I thought it was, and of course it's my own fault. First off, I was looking in the wrong place: there was nothing wrong with the C++ code. My project is built using Ruby on Rails, and I use the WiringPi (http://wiringpi.com) GPIO interface library for the Raspberry Pi to control external relays. When I coded the pins to initially turn off, I accidentally (stupidly) used the header pin I wanted (15) as the WiringPi pin number. WiringPi pin 15 is actually header pin 8 (BCM GPIO 14), and header pin 8 is the UART transmit (TxD) pin. So the result was that the Raspberry Pi did exactly what I told it to do: force TxD LOW. Apparently once you do that, it doesn't matter what you tell the serial driver to do; the pin stays low.
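For anyone else chasing something similar, here's a stripped-down illustration of the mistake (the relay initialization itself is hypothetical; the point is the pin numbering):

#include <wiringPi.h>

int main(void)
{
    wiringPiSetup();         // uses WiringPi pin numbering, not physical header numbering
    // BUG: 15 here is WiringPi pin 15, which is BCM GPIO 14 / physical header pin 8, the UART TxD line.
    pinMode(15, OUTPUT);
    digitalWrite(15, LOW);   // this is what silently killed the serial output
    // For physical header pin 15 (BCM GPIO 22) the WiringPi number is 3:
    // pinMode(3, OUTPUT); digitalWrite(3, LOW);
    return 0;
}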
So many hours of work to find such a dumb mistake. Thanks to all those who took the time to look at my question.
Currently, I'm testing flow control between two RS485 UART ports (just Rx connected to Rx and Tx to Tx; RTS/CTS are not connected).
Flag settings (applied between tcgetattr() and tcsetattr()):
HW Flow control:
tty.c_cflag |= CRTSCTS; // RTS/CTS
tty.c_iflag &= ~(IXOFF|IXON|IXANY);
SW Flow control:
tty.c_cflag &= ~CRTSCTS;
tty.c_iflag |= (IXOFF|IXON|IXANY);
I assumed that if I set both UART1 and UART2 to hardware flow control with a high baud rate (e.g. 460800 bps), or wrote to UART1 at a higher baud rate than I read from UART2, the FIFO (currently 64 bytes) would overflow and the kernel would send some notification.
But in practice, write() and read() always succeed. Could anyone suggest how to observe a buffer overflow?
Sorry if my question is a little dumb; I'm a new Linux learner.
Thanks so much.
The RS485 standard has no hardware flow control.
Since the API is shared with RS232C, the calls can be made, but they will not work effectively.
Also, the 64-byte FIFO you mention is a hardware (interface chip) buffer; the device driver also has a software buffer, which is often kilobytes in size.
So it is no wonder that, even at high speed, transmission and reception of small amounts of data completes normally.
Detect conditions such as overflow at the application level by checking the format of the received data and by checking the balance and sequence of commands and responses.
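If you do want to see overruns at the driver level, one option is the TIOCGICOUNT ioctl (a sketch; it assumes a Linux serial driver that actually implements this ioctl, which not all do):

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/serial.h>

/* Print the driver's cumulative error counters for an open serial fd.
 * overrun     = hardware (UART FIFO) overruns
 * buf_overrun = software (tty buffer) overruns */
void print_error_counters(int fd)
{
    struct serial_icounter_struct ic;
    if (ioctl(fd, TIOCGICOUNT, &ic) == 0)
        printf("overrun=%d buf_overrun=%d frame=%d parity=%d\n",
               ic.overrun, ic.buf_overrun, ic.frame, ic.parity);
    else
        perror("TIOCGICOUNT");
}

Comparing the counters before and after a test run shows whether anything was dropped.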
Can anyone tell me why the characters are not getting printed properly in the Arduino serial monitor? I am pasting the Arduino code below.
#include <SoftwareSerial.h>
#include <LiquidCrystal.h>
// initialize the library with the numbers of the interface pins
LiquidCrystal lcd(12,11,5,4,3,2);
int bluetoothTx = 15;
int bluetoothRx = 14;
SoftwareSerial bluetooth(bluetoothTx, bluetoothRx);
int incomingByte;
void setup() {
  pinMode(53, OUTPUT);
  Serial.begin(9600);
  lcd.begin(16, 2);
  lcd.clear();
  bluetooth.begin(115200);       // The Bluetooth Mate defaults to 115200bps
  delay(320);                    // IMPORTANT DELAY! (Minimum ~276ms)
  bluetooth.print("$$$");        // Enter command mode
  delay(15);                     // IMPORTANT DELAY! (Minimum ~10ms)
  bluetooth.println("U,9600,N"); // Temporarily change the baud rate to 9600, no parity
  bluetooth.begin(9600);         // Start bluetooth serial at 9600
  lcd.print("done setup");
}
void loop()
{
  lcd.clear();
  Serial.print("in loop");
  // Read from bluetooth and write to USB serial
  if (bluetooth.available()) {
    Serial.print("BT here");
    char toSend = (char)bluetooth.read();
    Serial.print(toSend);
    lcd.print(toSend);
    delay(3000);
  }
  delay(3000);
}
Can anyone take a look at it? It does not print the character that I provide; instead it prints something else, like 'ÿ' (a 'y' with two dots on top). I have tried almost all the available solutions.
Your issue could be one of a couple of things. First and easiest to check is COMMON GROUND. Did you connect just the RX and TX pins, or also the GND (ground) pin? Make sure that the ground from the BT Mate is connected to the Arduino ground.
If you have done that, then your issue is with the baud rate. I'm pretty sure that SoftwareSerial can't read at baud rates beyond 57600. The Arduino.cc docs say it can read at 115200, but other sources say it will only write, not read, at 115200.
To test this, you will either need to change the baud rate setting on the Bluetooth Mate, or use a Mega or Leonardo, which has a hardware serial port (other than the one used for USB) that you should be able to configure for 115200.
If you try it with hardware serial, either on a Mega or just using an FTDI adapter or something, and the messages still look garbled, then perhaps the Bluetooth Mate is not actually configured to talk at 115200 as it claims. Try reading the docs or testing other baud rates.
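For example, a minimal test sketch on a Mega (this assumes the Bluetooth Mate is wired to the Serial1 pins, TX1/RX1, which is my assumption, not something from your post):

// Simple bridge: whatever arrives on one port is echoed to the other.
void setup() {
  Serial.begin(9600);     // USB serial monitor
  Serial1.begin(115200);  // hardware UART connected to the Bluetooth Mate
}

void loop() {
  if (Serial1.available()) Serial.write(Serial1.read());
  if (Serial.available())  Serial1.write(Serial.read());
}

If the characters come through cleanly here but not with SoftwareSerial, the baud rate limit is your problem.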
Check whether the error is due to one of the following reasons:
1) You haven't given any command to exit command mode. After setting the baud rate to 9600, you switch straight into loop() without ever sending the command that leaves command mode.
2) I had the same problem when I was using an RN171 Wi-Fi module. In my case the cause was that I was sending data to the Wi-Fi module in integer format instead of as uint8_t, while reading from the module serially with an Arduino Mega as characters.
Remember that on the Arduino, int is a signed 16-bit integer. So when sending data to your Bluetooth module, send it as uint8_t or as the ASCII values of the characters you want to send, and read it back in the same format you sent it.
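For example (illustrative values, using the bluetooth object from your sketch):

// print() sends the value as human-readable text; write() sends the raw byte.
bluetooth.print(65);            // transmits the two characters '6' and '5'
bluetooth.write((uint8_t)65);   // transmits the single byte 0x41 ('A')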
3) If neither of these is the error then, as calumb said, there can be an error in putting the Bluetooth module into command mode. You haven't checked the reply from the Bluetooth module to see whether it is really in command mode or not. You must read the CMD reply from the module, and after every command read the acknowledgement, to confirm that it has really done what you wanted it to do.
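A small helper along these lines can do that check (a sketch; "CMD" is the reply the Roving Networks modules normally send when entering command mode, so adjust it for your module):

// Wait up to timeoutMs for an expected reply string from the module.
bool waitForReply(SoftwareSerial &bt, const char *expected, unsigned long timeoutMs) {
  String reply;
  unsigned long start = millis();
  while (millis() - start < timeoutMs) {
    if (bt.available()) reply += (char)bt.read();
    if (reply.indexOf(expected) >= 0) return true;
  }
  return false;
}

// In setup():
//   bluetooth.print("$$$");
//   if (!waitForReply(bluetooth, "CMD", 1000)) { /* module is not in command mode */ }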
This may be because the Bluetooth module is parsing data simultaneously; when two different pieces of data are sent at the same time, this can happen. Try to control your data flow.
I'm trying to receive messages from a device that uses mark parity for an address byte and space parity for the message body. The device is the "master" of a multi-drop serial bus. Based on the termios man page, I am using CMSPAR, PARENB, ~PARODD, INPCK, ~IGNPAR, and PARMRK. I expect to get a 3-byte sequence for each address byte: '\377' '\0' <address>. It doesn't happen... I always get the address byte (and the body bytes) but no leading '\377' '\0' chars.
I tried to get PARMRK to work with odd and even parity setups just in case CMSPAR was not supported. Still no 3-byte sequences in the data stream. I'm using Ubuntu 12.04 LTS.
n_tty.c: n_tty_receive_parity_error() has the logic that implements PARMRK. 8250_core.c has the logic to flag parity errors. dmesg | grep ttyS0 shows serial8250: ... is a 16550A. Hmmm... a subsequent message shows 00:0a: ... is a 16550A. Perhaps the 8250 driver is not actually processing ttyS0?
Any ideas? Even if you don't see what I've done wrong but have gotten PARMRK to work, comments about your situation might help me.
UPDATE:
My Linux is running in a VMware VM, so I tried a non-VM configuration and now it works! In case someone knows, I'd still like to know why parity errors are not detected in a VM.
Here is my configuration code:
struct termios tio;
bzero(&tio, sizeof(tio));
tcgetattr(fd, &tio);
// Frame bus runs at 38,400 BAUD
const int BAUD_Rate = B38400;
cfsetispeed(&tio, BAUD_Rate);
cfsetospeed(&tio, BAUD_Rate);
// Initialize to raw mode. PARMRK and PARENB will be over-ridden before calling tcsetattr()
cfmakeraw(&tio);
// Ignore modem lines and enable receiver
tio.c_cflag |= (CLOCAL | CREAD);
// No flow control
tio.c_cflag &= ~CRTSCTS; // No HW flow control
tio.c_iflag &= ~(IXON | IXOFF); // Set the input flags to disable in-band flow control
// Set bits per byte
tio.c_cflag &= ~CSIZE;
tio.c_cflag |= CS8;
// Use space parity to get 3-byte sequence (0xff 0x00 <address>) on address byte
tio.c_cflag |= CMSPAR; // Set "stick" parity (either mark or space)
tio.c_cflag &= ~PARODD; // Select space parity so that only address byte causes error
// NOTE: The following block overrides PARMRK and PARENB bits cleared by cfmakeraw.
tio.c_cflag |= PARENB; // Enable parity generation
tio.c_iflag |= INPCK; // Enable parity checking
tio.c_iflag |= PARMRK; // Enable in-band marking
tio.c_iflag &= ~IGNPAR; // Make sure input parity errors are not ignored
// Set it up now
if (tcsetattr(fd, TCSANOW, &tio) == -1)
{
    cout << "Failed to setup the port: " << errno << endl;
    return -1;
}
I was having a similar issue (but from the opposite side):
The master of a serial protocol should be sending the 1st byte of a frame with parity mark and all the rest with parity space, while the slave would respond only with parity space.
Many serial comms drivers will ignore the CMSPAR bit without returning an error, so you may think you've set up mark/space parity when you've actually selected odd/even parity instead.
I had to use a protocol analyser to realise that.
So I ended up checking the data of each byte before sending it and switching between odd/even parity in order to simulate the Mark/Space parity that I needed.
Most USB to Serial adapters will need a similar approach because they don't support parity mark/space.
For example, let's say we want to send the following data:
01 03 07 0F 1F
The 1st byte should be sent with parity Mark and the rest with parity space
We could do the following:
Send 01 with even parity (parity bit = 1)
Send 03 with even parity (parity bit = 0)
Send 07 with odd parity (parity bit = 0)
Send 0F with even parity (parity bit = 0)
Send 1F with odd parity (parity bit = 0)
That way we can simulate the needed result.
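A minimal sketch of that per-byte switching with termios (this is an illustration under assumptions, not my actual driver hack; it assumes the port is already open with PARENB set, 8 data bits, and no CMSPAR support):

#include <termios.h>
#include <unistd.h>

/* Send one byte with its parity bit forced to mark (1) or space (0)
 * by choosing odd or even parity based on the byte's 1-bit count. */
static int send_with_forced_parity(int fd, unsigned char byte, int mark)
{
    struct termios tio;
    if (tcgetattr(fd, &tio) < 0)
        return -1;

    int odd_ones = __builtin_popcount(byte) & 1;  /* 1 if the data has an odd number of 1s */
    /* Even parity yields bit=1 for an odd count of 1s; odd parity yields bit=1 for an even count. */
    int use_odd = (mark && !odd_ones) || (!mark && odd_ones);
    if (use_odd)
        tio.c_cflag |= PARODD;
    else
        tio.c_cflag &= ~PARODD;

    if (tcsetattr(fd, TCSADRAIN, &tio) < 0)       /* drain pending output, then switch parity */
        return -1;
    return (write(fd, &byte, 1) == 1) ? 0 : -1;
}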
The catch here is that when you switch parity, the driver performs a lot of time-consuming checks, and this can affect the final data transfer rate.
I was using a hacked version of the serial comms driver on an embedded device that could switch parity very fast by omitting some of the unnecessary checks for the application (like baud rate changes for example).
If your inter-character delay is critical, you may need a different solution.
I'll try to describe my problem from the beginning. I'm a newbie in Linux driver development, so please point out any problems with my approach to this project.
I'm developing a Linux driver for a modem, but not a typical one: I want it to work as a network adapter rather than a modem. It is connected to the computer over a serial port, and because of the nature of my problem I have to use a USB-serial converter. Most of the answers suggest using a user-space driver instead, but since I want to provide a networking interface like eth0, I have to do it in the kernel. Besides opening/reading/writing the ttyUSB0 file, I don't have any other idea for solving this problem.
I'm currently using code like this:
struct file *f;
mm_segment_t oldfs;
struct tty_struct *tty;
struct termios term;
unsigned char buffer[255];
f = filp_open("/dev/ttyS0", O_RDWR | O_NDELAY, 0);
oldfs = get_fs();
set_fs(KERNEL_DS);
f->f_pos = 0;
tty = (struct tty_struct *)f->private_data;
tty->termios->c_cflag = B9600 | CRTSCTS | CS8 | CLOCAL | CREAD;
At this point, tty->termios is NULL, so I cannot do the last step. What I do instead is:
struct ktermios term;
// set up the termios here
tty->termios = &term;
And the result is that the setting is not applied to the serial port. Even if I change it to an absolutely wrong baud rate, I still receive fragments; some of them are correct, while some are not. What is the problem, and what should I do?
I'm writing a console application under Ubuntu that uses the serial port. It needs to read and write from the serial port at 60 Hz.
I find that the call to read() is often, but not always, slow. I have set O_NDELAY, so often it returns immediately (great). Sometimes it takes up to 50 ms to finish, and that is too slow for my application. Before calling read(), I check the number of chars available, so it should not be waiting for data.
What is read() doing that takes so long? How can I speed it up?
Options on the port are:
options.c_cflag |= (CLOCAL | CREAD);
options.c_cflag &= ~PARENB;
options.c_cflag &= ~CSTOPB;
options.c_cflag &= ~CSIZE;
options.c_cflag |= CS8;
options.c_lflag &= ~(ICANON | ECHO | ECHOE | ISIG);
options.c_iflag &= ~IXON;
options.c_oflag = 0;
edit: I'd been using select() earlier but it turned out to be orthogonal to the question. Updated with my latest information.
The solution is to set the low_latency flag on the serial port.
See High delay in RS232 communication on a PXA270 and http://osdir.com/ml/serial/2003-11/msg00020.html
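For reference, this is roughly what setting that flag from code looks like (a sketch; it assumes a Linux driver that honours ASYNC_LOW_LATENCY, and the same thing can be done from the shell with setserial /dev/ttyS0 low_latency):

#include <sys/ioctl.h>
#include <linux/serial.h>

/* Ask the driver to push received characters to the line discipline immediately. */
int set_low_latency(int fd)
{
    struct serial_struct ser;
    if (ioctl(fd, TIOCGSERIAL, &ser) < 0)
        return -1;
    ser.flags |= ASYNC_LOW_LATENCY;
    return ioctl(fd, TIOCSSERIAL, &ser);
}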
It's not what select is doing, it's what the system is doing. Your thread eventually uses up its timeslice and the system allows other code to run. If you use a sensible timeout, rather than trying to return immediately, the system should treat your process as interactive and the delays should go away.
If there's a point to selecting on a single descriptor with a zero timeout, I can't figure out what it is. Why not just try the operation and see if you get an EWOULDBLOCK error?
Why not use a sensible timeout so the system lets other processes run when you have nothing to do?
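For example, something along these lines (a sketch; the 15 ms value is just my approximation of one 60 Hz period):

#include <sys/select.h>
#include <unistd.h>

/* Block for up to ~15 ms waiting for data instead of polling with a zero timeout. */
ssize_t read_with_timeout(int fd, char *buf, size_t len)
{
    fd_set readfds;
    struct timeval tv = { 0, 15000 };  /* 15 ms */

    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);
    if (select(fd + 1, &readfds, NULL, NULL, &tv) > 0)
        return read(fd, buf, len);
    return 0;  /* nothing arrived within the timeout */
}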