Flushing on UART doesn't work as expected - linux

I need to write a sequence of values (buffer, ~10bytes) via UART.
This sequence needs to start with a BREAK delimiter, and in my case I need to decrease the baud rate to a lower value.
Details about my environment:
Development board: BeagleBone Black.
Linux Kernel version: 3.8.13-bone70.
Serial driver used by the tty discipline: omap-serial.
What I have ended up with is something like this:
UARTIOHandler->setBaudRate(B9600);
unsigned char breakChar[] = { 0 };
UARTIOHandler->write(breakChar, 1);
UARTIOHandler->setBaudRate(B19200);
UARTIOHandler->write({1, 2, 3, 4, 5, 6, 7, 8, 9, 10});
The write method is implemented this way:
int UARTIOHandler::write(const std::initializer_list<uchar8> &data) {
    uchar8 buffer[data.size()];
    int counter = 0;
    for(auto i : data) {
        buffer[counter++] = i;
    }
    auto output = ::write(this->fd_write, buffer, data.size());
    this->flush();
    return output;
}
And finally the flush() method:
void UARTIOHandler::flush() {
    tcflush(this->fd_write, TCIOFLUSH);
}
The problem with this code is that the flushing doesn't always work as expected: sometimes the distance between the BREAK and the first byte of data (observed on a scope) is ~500 µs, which is fine for my application, and sometimes it is up to ~3 ms.
EDIT: This is the actual behavior:
For the first five seconds everything works fine (the distance between the BREAK and the rest of the message doesn't exceed ~1 ms); after that, some frames exceed this inter-byte timing by up to ~3 ms.
The code I posted is always the code that executes, so there is no way I am somehow forgetting to flush the buffers.
Why do these variations happen?
I have searched for relevant problems and found this; one workaround described there is to add a delay before the tcflush(...) call. I can't use that approach in my application because it would affect its functionality.
Another comment in that topic suggests that this was a bug in the Linux kernel; could that also be the case here?
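One thing worth noting about the termios API: tcflush() discards data that has been written but not yet transmitted (or received but not read); it does not wait for transmission to finish. The call that blocks until queued output has actually been sent is tcdrain(), and tcsetattr() with TCSADRAIN applies new settings only after the output has drained. A minimal sketch of the latter (assuming the same descriptor as fd_write above):
#include <termios.h>

/* Sketch: switch the baud rate only after the pending output (e.g. the
 * BREAK byte) has actually left the UART, instead of discarding it. */
static int set_baud_after_drain(int fd, speed_t baud)
{
    struct termios tio;

    if (tcgetattr(fd, &tio) < 0)
        return -1;
    cfsetispeed(&tio, baud);
    cfsetospeed(&tio, baud);
    /* TCSADRAIN: wait until all queued output has been transmitted,
     * then apply the new settings. */
    return tcsetattr(fd, TCSADRAIN, &tio);
}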

How does BPF calculate the number of CPUs for a PERCPU_ARRAY?

I have encountered an interesting issue where a PERCPU_ARRAY created on one system with 2 processors has 2 per-CPU elements, while on another system, also with 2 processors, it has 128 per-CPU elements. The latter was rather unexpected to me!
The way I discovered this behavior is that a program that allocated an array sized for the number of CPUs (using get_nprocs_conf(3)) and then read the PERCPU_ARRAY into it (using bpf_map_lookup_elem()) ended up writing past the end of the array and crashing.
I would like to find out the proper way for a program that reads BPF maps to determine the number of elements in a PERCPU_ARRAY used on a system.
Failing that, I think the second-best approach is to pick a read buffer that is "large enough". Here the problem is similar: what is that number, and is there a way to learn it at runtime?
The question comes from reading the source of bpftool, which figures this out:
unsigned int get_possible_cpus(void)
{
    int cpus = libbpf_num_possible_cpus();

    if (cpus < 0) {
        p_err("Can't get # of possible cpus: %s", strerror(-cpus));
        exit(-1);
    }
    return cpus;
}

int libbpf_num_possible_cpus(void)
{
    static const char *fcpu = "/sys/devices/system/cpu/possible";
    static int cpus;
    int err, n, i, tmp_cpus;
    bool *mask;
    /* ---8<--- snip */
}
So that's how they do it: the element count is the number of possible CPUs, taken from /sys/devices/system/cpu/possible.
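Following the same approach on the reader side, here is a minimal sketch (assuming libbpf is available and a PERCPU_ARRAY whose value type is __u64; user-space lookups on per-CPU maps fill one value slot per possible CPU):
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <linux/types.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

/* Sketch: size the lookup buffer by the number of *possible* CPUs
 * (as bpftool does), not by get_nprocs_conf(). */
static int dump_percpu_slot(int map_fd, __u32 key)
{
    int ncpus = libbpf_num_possible_cpus();
    __u64 *values;
    int i, err;

    if (ncpus < 0)
        return ncpus;

    values = calloc(ncpus, sizeof(*values));
    if (!values)
        return -ENOMEM;

    err = bpf_map_lookup_elem(map_fd, &key, values);
    if (!err)
        for (i = 0; i < ncpus; i++)
            printf("cpu %d: %llu\n", i, (unsigned long long)values[i]);

    free(values);
    return err;
}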

How to trim unknown first characters of a string in CodeVision

I have set up an ATmega16 (an 8-bit AVR microcontroller) to receive data on its serial port, which is connected to an HC-05 Bluetooth module, so that it can receive a number sent by my Android app. The Android application sends the number as a string array whose maximum length is equal to 10 digits. The problem arises while receiving: one or two unknown characters (?) appear at the beginning of the received string, and I have to remove them from the beginning of the string whenever they are present.
This problem occurs only with the HC-05; I had no problem when the numbers were sent by another microcontroller instead of the Android application.
Here is what I send from the mobile:
"430102030405060\r"
and what is received in the serial port of microcontroller:
"??430102030405060\r"
or
"?430102030405060\r"
Here is the USART receiver interrupt code:
//-------------------------------------------------------------------------
// USART Receiver interrupt service routine
interrupt [USART_RXC] void usart_rx_isr(void)
{
    char status, data;
    status = UCSRA;
    data = UDR;

    if (data == 0x0D)
    {
        puts(ss);
        printf("\r");
        a = 0;
        memset(ss, '\0', sizeof(ss));
    }
    else
    {
        ss[a] = data;
        a += 1;
    }

    if ((status & (FRAMING_ERROR | PARITY_ERROR | DATA_OVERRUN)) == 0)
    {
        rx_buffer[rx_wr_index++] = data;
#if RX_BUFFER_SIZE == 256
        // special case for receiver buffer size=256
        if (++rx_counter == 0) rx_buffer_overflow = 1;
#else
        if (rx_wr_index == RX_BUFFER_SIZE) rx_wr_index = 0;
        if (++rx_counter == RX_BUFFER_SIZE)
        {
            rx_counter = 0;
            rx_buffer_overflow = 1;
        }
#endif
    }
}
//-------------------------------------------------------------------------
How can I remove the extra characters (?) from the beginning of the received data in CodeVision?
You do not need to remove them, just do not pass them on to your processing.
You can either test each data character before putting it into your line buffer (ss), or, after the complete line has been received, look for the first relevant character and pass only the string starting from that position to your processing functions.
Var 1:
BOOL isGarbage(char c)
{
    return c < '0' || c > '9';
}

if (data == 0x0D)
{
    puts(ss);
    printf("\r");
    a = 0;
    memset(ss, '\0', sizeof(ss));
}
else
{
    if (!isGarbage(data))
    {
        ss[a] = data;
        a += 1;
    }
}
Var 2:
if (data == 0x0D)
{
    const char *actualString = ss;
    while (isGarbage(*actualString)) {
        actualString++;
    }
    puts(actualString);
    printf("\r");
    a = 0;
    memset(ss, '\0', sizeof(ss));
}
else
{
    ss[a] = data;
    a += 1;
}
However: maybe you should try to solve the underlying issue rather than just suppressing the symptom (the '?' characters).
What is the exact value of the questionable characters? I suspect that '?' is only used to represent non-printable data.
Maybe your interface configuration is wrong and the sender uses software flow control on the line, so the suspicious characters are XON/XOFF bytes.
One additional note:
You may run into trouble if you use more complex functions, or even peripheral devices, from your interrupt service routine (ISR).
I would strongly suggest only filling buffers in the ISR and doing everything else in the main loop, triggered by volatile flags and data buffers.
Also, I do not see why you are using an additional buffer (ss) in the ISR, since there already is an RX buffer. The implementation looks like a solid RX receive-buffer implementation that should provide functions to fetch the buffer contents from the main loop, so you should not need to add your own code to the ISR.
Additional notes:
"string array whose maximum length is equal to 10 digits."
I count more than that; I hope your ss array is larger than that. You should also consider that something may go wrong during transmission and you receive many more characters before the next '\r'. Currently that would overwrite your RAM.
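Putting those notes together, a minimal sketch (CodeVision-style, untested; ss, a, isGarbage() and the 0x0D terminator come from the code above, while SS_SIZE and line_ready are assumed names):
volatile unsigned char line_ready = 0;   /* set by the ISR, cleared by the main loop */

interrupt [USART_RXC] void usart_rx_isr(void)
{
    char data = UDR;

    if (data == 0x0D)
    {
        ss[a] = '\0';            /* terminate the line                  */
        a = 0;
        line_ready = 1;          /* only signal the main loop           */
    }
    else if (!isGarbage(data) && a < SS_SIZE - 1)
    {
        ss[a++] = data;          /* bounded write, garbage dropped      */
    }
}

/* in main(): */
while (1)
{
    if (line_ready)
    {
        line_ready = 0;
        puts(ss);                /* the heavy work stays out of the ISR */
        printf("\r");
    }
}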

Passing params from alsa application to kernel driver

I am trying to follow the path of a parameter setting from Linux user space (arecord/aplay) down to the kernel driver. Let's take arecord's --period-size as an example.
It all starts in the set_params function in aplay.c:
if (period_time > 0)
    err = snd_pcm_hw_params_set_period_time_near(handle, params, &period_time, 0);
else
    err = snd_pcm_hw_params_set_period_size_near(handle, params, &period_frames, 0);
The function snd_pcm_hw_params_set_period_size_near() is defined in alsa-lib at pcm.c:5186 (https://github.com/alsa-project/alsa-lib/blob/master/src/pcm/pcm.c#L5186), and here my headache starts... This function starts a chain of calls to other functions which don't make much sense to me and don't seem to lead to any call into the driver.
There is an _end label, so I skipped all calls like snd_pcm_hw_param_set_min() or snd_pcm_hw_param_set_max() and went to snd_pcm_hw_param_set_last(), hoping for some driver invocation like:
drv->hw_params_set(...);
but instead I found an end call to:
MASK_INLINE unsigned int snd_mask_min(const snd_mask_t *mask)
{
    int i;
    assert(!snd_mask_empty(mask));
    for (i = 0; i < MASK_SIZE; i++) {
        if (mask->bits[i])
            return ffs(mask->bits[i]) - 1 + (i << 5);
    }
    return 0;
}
where the return value is supposed to be the parameter that was set.
So, to summarize, I find alsa-lib very difficult to read and understand; maybe I am lacking some background, though. My question is simple: how is a user-space parameter passed down to the kernel driver? Can you provide the software path, showing the interfaces that are called?
Thanks.
The hw_params structure contains a configuration space, which is a description of all possible configurations that the device can support. Numeric parameters are described as intervals (i.e., min and max), access and format as bitmasks.
When you change one parameter, the library calls the kernel driver (SNDRV_PCM_IOCTL_HW_REFINE) to adjust all the other parameters in the hw_params structure that depend on the changed parameter.
After you have reduced the configuration space to the configuration you actually want, you call snd_pcm_hw_params() (→ SNDRV_PCM_IOCTL_HW_PARAMS) to actually configure the device for those parameters. (If some parameter has not been reduced to a single value, snd_pcm_hw_params() will choose a random one.)
snd_pcm_hw_params_set_xxx_near() is more complex because there is no SET_NEAR ioctl. This function tries to adjust the interval so that either its maximum or its minimum is the desired value, and then checks whether the actual maximum or minimum is nearer.
For example, assume a device that supports period sizes of 1024, 2048, 4096, and 8192 frames. Initially, the interval is described as [1024, 8192]. When you call snd_pcm_hw_params_set_period_size_near(4000), the snd_pcm_hw_param_set_near() helper function calls set_min(4000) and set_max(4000) (on separate copies of the hw_params structure), so the intervals are [1024, 4000] and [4000, 8192]; after refining, the driver returns the intervals [1024, 2048] and [4096, 8192]. snd_pcm_hw_param_set_near() then sees that 4096 is nearest to the desired value, so it calls set_first on the second interval, which results in [4096, 4096].
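For reference, the typical user-space sequence looks roughly like this (a sketch using the public alsa-lib API; the rate, channel count, and period size are illustrative). The set_*_near() calls trigger SNDRV_PCM_IOCTL_HW_REFINE refinements, and the final snd_pcm_hw_params() issues SNDRV_PCM_IOCTL_HW_PARAMS:
#include <alsa/asoundlib.h>

/* Sketch: shrink the configuration space step by step, then commit it. */
int configure_capture(snd_pcm_t *pcm)
{
    snd_pcm_hw_params_t *params;
    unsigned int rate = 48000;
    snd_pcm_uframes_t period = 1024;

    snd_pcm_hw_params_alloca(&params);
    snd_pcm_hw_params_any(pcm, params);                  /* full configuration space */

    snd_pcm_hw_params_set_access(pcm, params, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(pcm, params, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_channels(pcm, params, 2);
    snd_pcm_hw_params_set_rate_near(pcm, params, &rate, 0);        /* refined via HW_REFINE */
    snd_pcm_hw_params_set_period_size_near(pcm, params, &period, 0);

    /* Commit the reduced space: this is the SNDRV_PCM_IOCTL_HW_PARAMS call. */
    return snd_pcm_hw_params(pcm, params);
}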

Serial data acquisition program reading from buffer

I have developed an application in Visual C++ 2008 to read data periodically (every 50 ms) from a COM port. In order to read the data periodically, I placed the read call in an OnTimer function, and, because I didn't want the rest of the GUI to hang, I start this timer from within a thread. I have placed the code below.
The application runs fine, but it shows the following unexpected behaviour: after the data source (a hardware device or even a data emulator) stops sending data, my application continues to receive data for a period of time that is proportional to how long the data had been flowing (EDIT: this excess period is in the same ballpark as the length of time the data was being sent). So if I start and stop the data flow immediately, this is reflected on my GUI, but if I start the data flow and stop it ten seconds later, my GUI continues to show data for about ten more seconds (EDITED).
I have made the following observations after exhausting all my attempts at debugging:
As mentioned above, this excess period of operation is proportional to how long the hardware has been sending data.
The incoming data arrives every 50 ms, so to show 10 extra seconds of data my GUI must be receiving around 200 extra data packets.
The only buffer I have declared is abBuffer, which is just a byte array of fixed size. I don't think this can grow, so the data must be stored somewhere else.
If I change something in the data packet, this change, understandably, shows up on the GUI after a delay (because of the points above). But this implies that the data received at the COM port is stored in some variable-sized buffer from which my read function pulls data.
I have timed the read and the processing. The latter is instantaneous, while the former very rarely (3 times in 1000 reads, following no discernible pattern) takes 16 ms. This is well within the 50 ms window the GUI has for each read.
The following is my thread and timer code:
UINT CMyCOMDlg::StartThread(LPVOID param)
{
    THREADSTRUCT *ts = (THREADSTRUCT*)param;
    ts->_this->SetTimer(1,50,0);
    return 0;
}

//Timer function that is called at regular intervals
void CMyCOMDlg::OnTimer(UINT_PTR nIDEvent)
{
    if(m_bCount==true)
    {
        DWORD NoBytesRead;
        BYTE abBuffer[45];
        if(ReadFile((m_hComm),&abBuffer,45,&NoBytesRead,0))
        {
            if(NoBytesRead==45)
            {
                if(abBuffer[0]==0x10&&abBuffer[1]==0x10||abBuffer[0]==0x80&&abBuffer[1]==0x80)
                {
                    fnSetData(abBuffer);
                }
                else
                {
                    CString value;
                    value.Append("Header match failed");
                    SetDlgItemText(IDC_RXRAW,value);
                }
            }
            else
            {
                CString value;
                value.Append(LPCTSTR(abBuffer),NoBytesRead);
                value.Append("\r\nInvalid Packet Size");
                SetDlgItemText(IDC_RXRAW,value);
            }
        }
        else
        {
            DWORD dwError2 = GetLastError();
            CString error2;
            error2.Format(_T("%d"),dwError2);
            SetDlgItemText(IDC_RXRAW,error2);
        }
        fnClear();
    }
    else
    {
        KillTimer(1);
    }
    CDialog::OnTimer(nIDEvent);
}
m_bCount is just a flag I use to kill the timer, the ReadFile function is the standard Windows API call, and ts is a structure that contains a pointer to the main dialog class, i.e., this.
Can anyone think of a reason this could be happening? I have tried a lot of things, and my code does so little that I cannot figure out where this unexpected behaviour comes from.
EDIT:
I am adding the COM port settings and timeouts used below:
dcb.BaudRate = CBR_115200;
dcb.ByteSize = 8;
dcb.StopBits = ONESTOPBIT;
dcb.Parity = NOPARITY;
SetCommState(m_hComm, &dcb);
_param->_this=this;
COMMTIMEOUTS timeouts;
timeouts.ReadIntervalTimeout=1;
timeouts.ReadTotalTimeoutMultiplier = 0;
timeouts.ReadTotalTimeoutConstant = 10;
timeouts.WriteTotalTimeoutMultiplier = 1;
timeouts.WriteTotalTimeoutConstant = 1;
SetCommTimeouts(m_hComm, &timeouts);
You are processing at most one message per OnTimer() call. The data source keeps sending a message every 50 milliseconds, so whenever a tick falls behind, unread data accumulates in the driver's receive buffer, and your application keeps reading that backlog long after the source has stopped.
You can add a while loop as follows:
while(true)
{
    if(::ReadFile(m_hComm, &abBuffer, sizeof(abBuffer), &NoBytesRead, 0))
    {
        if(NoBytesRead == sizeof(abBuffer))
        {
            ...
        }
        else
        {
            ...
            break;
        }
    }
    else
    {
        ...
        break;
    }
}
But there is another problem in your code. If your software checks for a message while the data source is still in the middle of sending one, NoBytesRead can be less than 45. You may want to accumulate the data in a message buffer such as a CString or a std::queue<unsigned char>.
If the message doesn't contain a NUL terminator at the end, passing it to a CString object is not safe.
Also, if the first byte is 0x80, CString may treat it as part of a multi-byte string, which can cause errors. If the message is not literal text, consider using another container such as std::vector<unsigned char>.
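A sketch of that idea (plain Win32; the 45-byte packet size and m_hComm come from the question, the other names are illustrative, and re-synchronising on the header bytes is left out):
#include <windows.h>

#define PACKET_SIZE 45

static BYTE  packet[PACKET_SIZE];
static DWORD filled = 0;

/* Drain whatever the driver has buffered, assembling complete packets even
 * when a ReadFile() happens to return a partial one. */
void DrainPort(HANDLE m_hComm)
{
    BYTE  chunk[256];
    DWORD got = 0;

    while (ReadFile(m_hComm, chunk, sizeof(chunk), &got, NULL) && got > 0)
    {
        for (DWORD i = 0; i < got; i++)
        {
            packet[filled++] = chunk[i];
            if (filled == PACKET_SIZE)
            {
                /* ProcessPacket(packet);  -- hypothetical handler */
                filled = 0;
            }
        }
    }
}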
By the way, you don't need to call SetTimer() from a separate thread; starting a timer takes no time. I also recommend calling KillTimer() somewhere outside of the OnTimer() function so that the code is more intuitive.
If the data source keeps sending data continuously, you may also need to call PurgeComm() when you open/close the COM port.
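For the latter, a minimal sketch (m_hComm is the handle from the question):
#include <windows.h>

/* Discard anything already queued by the driver so a new session does not
 * start by reading stale data left over from the previous one. */
void ResetPortBuffers(HANDLE m_hComm)
{
    PurgeComm(m_hComm, PURGE_RXCLEAR | PURGE_TXCLEAR);
}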

Linux termios VTIME not working?

We've been bashing our heads against this one all morning. We've got some serial lines set up between an embedded Linux device and an Ubuntu box. Our reads are getting messed up because our code usually returns two reads per message sent (sometimes more, sometimes exactly one) instead of one read per message.
Here is the code that opens the serial port. InterCharTime is set to 4.
void COMClass::openPort()
{
    struct termios tio;
    this->fd = -1;
    int tmpFD;

    tempFD = open( port, O_RDWR | O_NOCTTY);
    if (tempFD < 0)
    {
        cerr << "the port is not opened" << port << "\n";
        portOpen = 0;
        return;
    }

    tio.c_cflag = BaudRate | CS8 | CLOCAL | CREAD;
    tio.c_oflag = 0;
    tio.c_iflag = IGNPAR;

    newtio.c_cc[VTIME] = InterCharTime;
    newtio.c_cc[VMIN] = readBufferSize;
    newtio.c_lflag = 0;

    tcflush(tempFD, TCIFLUSH);
    tcsetattr(tempFD, TCSANOW, &tio);

    this->fd = tempFD;
    portOpen = true;
}
The other end is configured similarly for communication, and has one small section of particular interest:
while (1)
{
    sprintf(out, "\r\nHello world %lu", ++ulCount);
    puts(out);
    WritePort((BYTE *)out, strlen(out)+1);
    sleep(2);
} //while
Now, when I run a read thread on the receiving machine, "Hello world" is usually broken up across a couple of messages. Here is some sample output:
1: Hello
2: world 1
3: Hello
4: world 2
5: Hello
6: world 3
where a number followed by a colon is one message received. Can you see any error we are making?
Thank you.
Edit:
For clarity, please see section 3.2 of the Linux Serial Programming HOWTO. To my understanding, with a VTIME of a couple of seconds (meaning VTIME is set anywhere between 10 and 50, by trial and error) and a VMIN of 1, there should be no reason for the message to be broken up across two separate reads.
I don't see why you are surprised.
You are asking for at least one byte. If your read() is asking for more, which seems probable since you are surprised you aren't getting the whole string in a single read, it can return whatever data is available, up to the read() size. But all of the data isn't available in a single read, so your string gets chopped up across reads.
In this scenario the timer doesn't really matter. The timer isn't started until at least one byte is available, and since you have set the minimum to 1, that first byte already satisfies the read; it simply returns whatever number of bytes (>= 1) are available, up to the read() size.
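To make read() wait for a whole message rather than return on the first byte, VMIN has to be larger than 1, with VTIME acting as an inter-character timeout that only starts once the first byte has arrived. A sketch with illustrative values (not taken from the question):
#include <termios.h>

/* Sketch: read() returns once 32 bytes have arrived, or once the line has
 * been idle for 0.4 s after the first byte, whichever happens first. */
static int configure_packet_read(int fd)
{
    struct termios tio;

    if (tcgetattr(fd, &tio) < 0)
        return -1;

    cfmakeraw(&tio);
    tio.c_cc[VMIN]  = 32;   /* wait for up to 32 bytes...                  */
    tio.c_cc[VTIME] = 4;    /* ...or 0.4 s of silence after the first byte */

    return tcsetattr(fd, TCSANOW, &tio);
}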
If you are still experiencing this problem (realizing the question is old), and your code is posted accurately: you are setting VTIME and VMIN in the newtio struct, but all the other parameters in the tio struct.
