Serial data acquisition program reading from buffer - Visual C++

I have developed an application in Visual C++ 2008 to read data periodically (every 50 ms) from a COM port. To read the data periodically, I placed the read call in an OnTimer function, and to keep the rest of the GUI from hanging, I start this timer from within a thread. I have placed the code below.
The application runs fine, but it shows the following unexpected behaviour: after the data source (a hardware device, or even a data emulator) stops sending data, my application continues to receive data for a period proportional to how long the read function had been running (EDIT: this excess period is in the same ballpark as the period for which data was sent). So if I start and stop the data flow immediately, this is reflected on my GUI almost at once, but if I start the data flow and stop it ten seconds later, my GUI continues to show data for about ten more seconds (EDITED).
I have made the following observations after exhausting all my attempts at debugging:
As mentioned above, this excess period of operation is proportional to how long the hardware has been sending data.
Data arrives every 50 ms, so receiving 10 seconds' worth of excess data means my GUI is receiving around 200 extra packets.
The only buffer I have declared is abBuffer, which is just a byte array of fixed size. I don't think this can grow, so the data must be stored somewhere else.
If I change something in the data packet, the change, understandably, shows up on the GUI after a delay (because of the above points). This implies that the data received at the COM port is stored in some variable-sized buffer from which my read function reads.
I have timed the read and the processing. The latter is instantaneous, while the former very rarely (3 times in 1000 reads, following no discernible pattern) takes 16 ms. That is well within the 50 ms window the GUI has for each read.
The following is my thread and timer code:
UINT CMyCOMDlg::StartThread(LPVOID param)
{
    THREADSTRUCT *ts = (THREADSTRUCT *)param;
    ts->_this->SetTimer(1, 50, 0);   // 50 ms periodic timer
    return 0;
}

// Timer function that is called at regular intervals
void CMyCOMDlg::OnTimer(UINT_PTR nIDEvent)
{
    if (m_bCount == true)
    {
        DWORD NoBytesRead;
        BYTE abBuffer[45];
        if (ReadFile(m_hComm, abBuffer, 45, &NoBytesRead, 0))
        {
            if (NoBytesRead == 45)
            {
                // accept packets whose two-byte header is 0x10 0x10 or 0x80 0x80
                if ((abBuffer[0] == 0x10 && abBuffer[1] == 0x10) ||
                    (abBuffer[0] == 0x80 && abBuffer[1] == 0x80))
                {
                    fnSetData(abBuffer);
                }
                else
                {
                    CString value;
                    value.Append("Header match failed");
                    SetDlgItemText(IDC_RXRAW, value);
                }
            }
            else
            {
                CString value;
                value.Append(LPCTSTR(abBuffer), NoBytesRead);
                value.Append("\r\nInvalid Packet Size");
                SetDlgItemText(IDC_RXRAW, value);
            }
        }
        else
        {
            DWORD dwError2 = GetLastError();
            CString error2;
            error2.Format(_T("%d"), dwError2);
            SetDlgItemText(IDC_RXRAW, error2);
        }
        fnClear();
    }
    else
    {
        KillTimer(1);
    }
    CDialog::OnTimer(nIDEvent);
}
m_bCount is just a flag I use to kill the timer; ReadFile is the standard Windows API call. ts is a structure that contains a pointer to the main dialog class, i.e., this.
Can anyone think of a reason this could be happening? I have tried a lot of things, and my code does so little that I cannot figure out where this unexpected behaviour comes from.
EDIT:
I am adding the COM port settings and timeouts used below:
dcb.BaudRate = CBR_115200;
dcb.ByteSize = 8;
dcb.StopBits = ONESTOPBIT;
dcb.Parity = NOPARITY;
SetCommState(m_hComm, &dcb);

_param->_this = this;

COMMTIMEOUTS timeouts;
timeouts.ReadIntervalTimeout = 1;
timeouts.ReadTotalTimeoutMultiplier = 0;
timeouts.ReadTotalTimeoutConstant = 10;
timeouts.WriteTotalTimeoutMultiplier = 1;
timeouts.WriteTotalTimeoutConstant = 1;
SetCommTimeouts(m_hComm, &timeouts);

You are processing at most one 45-byte message per OnTimer() call. WM_TIMER is a low-priority, coalesced message with roughly 15 ms granularity, so the timer will often fire less frequently than the requested 50 ms while the data source keeps sending a message every 50 ms. Your application therefore cannot drain the messages as fast as they arrive; a backlog builds up in the driver's receive buffer, and every later read returns old data from that backlog, which is why the GUI keeps showing data after the source has stopped.
You can add a while loop as follows, so that each timer tick drains everything that has accumulated:
while (true)
{
    if (::ReadFile(m_hComm, abBuffer, sizeof(abBuffer), &NoBytesRead, 0))
    {
        if (NoBytesRead == sizeof(abBuffer))
        {
            ...        // full packet: check the header and process it as before
        }
        else
        {
            ...        // short read: buffer is drained (or a partial packet arrived)
            break;
        }
    }
    else
    {
        ...            // ReadFile failed: report GetLastError()
        break;
    }
}
But there is another problem in your code. If your software checks for a message while the data source is still sending it, NoBytesRead can be less than 45. You may want to accumulate the incoming bytes in a message buffer such as a std::queue<unsigned char> (CString works too, but see the caveats below).
If the message doesn't end with a NUL terminator, passing it to a CString object is not safe.
Also, if the first byte is 0x80, CString may treat it as part of a multi-byte string, which can cause errors. If the message is not literal text, consider another container such as std::vector<unsigned char>.
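To illustrate, here is a minimal sketch of that accumulation (it assumes the 45-byte framing and the fnSetData() call from the question; rxAccum and chunk are illustrative names, and rxAccum would in practice be a class member so it survives between OnTimer() calls):
// Sketch only - requires <vector>. Accumulate partial reads, then peel
// complete 45-byte packets off the front of the accumulated bytes.
std::vector<unsigned char> rxAccum;   // in practice a class member

BYTE chunk[64];
DWORD got = 0;
while (ReadFile(m_hComm, chunk, sizeof(chunk), &got, 0) && got > 0)
{
    rxAccum.insert(rxAccum.end(), chunk, chunk + got);   // append new bytes
}

while (rxAccum.size() >= 45)
{
    // resynchronise: a valid packet starts with 0x10 0x10 or 0x80 0x80
    if (!((rxAccum[0] == 0x10 && rxAccum[1] == 0x10) ||
          (rxAccum[0] == 0x80 && rxAccum[1] == 0x80)))
    {
        rxAccum.erase(rxAccum.begin());                  // skip one byte, retry
        continue;
    }
    fnSetData(&rxAccum[0]);                              // one complete packet
    rxAccum.erase(rxAccum.begin(), rxAccum.begin() + 45);
}
This way a short read never loses data; the bytes simply wait in rxAccum until the rest of the packet arrives.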
By the way, you don't need to call SetTimer() from a separate thread; starting a timer takes no time. I also recommend calling KillTimer() somewhere outside of the OnTimer() function so that the code is more intuitive.
If the data source keeps sending data continuously, you may also want to call PurgeComm() when you open/close the COM port.
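A minimal sketch of that purge, using the m_hComm handle from the question: PurgeComm() discards whatever is already sitting in the driver's buffers, so a new session starts clean instead of replaying stale data.
// discard stale bytes buffered by the driver before starting a session
PurgeComm(m_hComm, PURGE_RXCLEAR | PURGE_TXCLEAR |
                   PURGE_RXABORT | PURGE_TXABORT);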

Related

how to trim unknown first characters of string in CodeVision

I set up an ATmega16 (an 8-bit AVR microcontroller) to receive data from the serial
port, which is connected to a Bluetooth module (HC-05), in order to obtain a number
sent by my Android app. The app sends the number as a string array whose maximum
length is equal to 10 digits. The problem arises while receiving data: one or two
unknown characters (?) appear at the beginning of the received string, and I have
to remove these unknown characters when they are present.
This problem occurs only with the HC-05; I had no problem when the numbers were
sent by another microcontroller instead of the Android application.
Here is what I send from the phone:
"430102030405060\r"
and this is what is received at the microcontroller's serial port:
"??430102030405060\r"
or
"?430102030405060\r"
Here is the USART receiver interrupt code:
//-------------------------------------------------------------------------
// USART Receiver interrupt service routine
interrupt [USART_RXC] void usart_rx_isr(void)
{
    char status, data;
    status = UCSRA;
    data = UDR;
    if (data == 0x0D)                 // carriage return terminates the number
    {
        puts(ss);
        printf("\r");
        a = 0;
        memset(ss, '\0', sizeof(ss));
    }
    else
    {
        ss[a] = data;
        a += 1;
    }
    if ((status & (FRAMING_ERROR | PARITY_ERROR | DATA_OVERRUN)) == 0)
    {
        rx_buffer[rx_wr_index++] = data;
#if RX_BUFFER_SIZE == 256
        // special case for receiver buffer size=256
        if (++rx_counter == 0) rx_buffer_overflow = 1;
#else
        if (rx_wr_index == RX_BUFFER_SIZE) rx_wr_index = 0;
        if (++rx_counter == RX_BUFFER_SIZE)
        {
            rx_counter = 0;
            rx_buffer_overflow = 1;
        }
#endif
    }
}
//-------------------------------------------------------------------------
//-------------------------------------------------------------------------
How can I remove the extra characters (?) from the beginning of the received data in CodeVision?
You do not need to remove them; just do not pass them on to your processing.
You may either test each data character before putting it into your line buffer (ss), or, after the complete line has been received, look for the first relevant character and pass only the string starting from that position to your processing functions. Both variants are sketched below.
Var 1:
BOOL isGarbage(char c)
{
    return c < '0' || c > '9';
}

if (data == 0x0D)
{
    puts(ss);
    printf("\r");
    a = 0;
    memset(ss, '\0', sizeof(ss));
}
else
{
    if (!isGarbage(data))        // drop anything that is not a digit
    {
        ss[a] = data;
        a += 1;
    }
}
Var 2:
if (data == 0x0D)
{
    const char *actualString = ss;
    while (isGarbage(*actualString))   // skip leading garbage bytes
    {
        actualString++;
    }
    puts(actualString);
    printf("\r");
    a = 0;
    memset(ss, '\0', sizeof(ss));
}
else
{
    ss[a] = data;
    a += 1;
}
However:
Maybe you should try to solve the underlying issue rather than just suppressing the symptoms (the '?' characters).
What is the exact value of the questionable characters? I suspect '?' is only used to represent non-printable data.
Maybe your interface configuration is wrong, the sender uses software flow control on the line, and the suspicious characters are XON/XOFF bytes.
One additional note:
You may run into trouble if you use more complex functions, or even peripheral devices, from your interrupt service routine (ISR).
I would strongly suggest only filling buffers there and doing all other work in the main loop, triggered by some volatile flags and data buffers; a sketch of this split follows below.
Also, I do not see why you are using an additional buffer (ss) in the ISR, since there already is an RX buffer. The implementation looks like a solid RX receive-buffer implementation that should offer functions to read the buffer contents from the main loop, so you should not need to add your own code to the ISR.
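A minimal sketch of that split (line_ready and process_line() are illustrative names, not part of the original code): the ISR only stores bytes and raises a flag when a full line has arrived; the main loop does the printing and parsing.
// line_ready / process_line() are hypothetical names for illustration
volatile unsigned char line_ready = 0;   // set by ISR, cleared by main loop

interrupt [USART_RXC] void usart_rx_isr(void)
{
    char data = UDR;
    if (data == 0x0D)
    {
        ss[a] = '\0';
        line_ready = 1;              // a complete line is waiting
    }
    else if (a < sizeof(ss) - 1)     // bounds check protects RAM
    {
        ss[a++] = data;
    }
}

// main loop
while (1)
{
    if (line_ready)
    {
        process_line(ss);            // all heavy work happens outside the ISR
        a = 0;
        line_ready = 0;
    }
}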
Two further notes:
string array whose maximum length is equal to 10 digits.
I count more than that in your example, so I hope your ss array is larger than that. You should also consider that something may go wrong in transmission and you receive many more characters before the next '\r'; currently such a burst would overwrite arbitrary RAM beyond ss (the sketch above guards against this with a bounds check).

linux socket: lifetime of ancillary data for sendmsg

I use a cmsg to activate timestamping on Linux socket TX.
ssize_t sendWithOptions
    (int sd, std::vector<uint8_t> &payload, uint32_t destIP, int flags)
{
    msghdr msg { };
    ....                                 // standard filling of msg
    std::array<uint8_t, CMSG_LEN(sizeof(__u32))> buf;
    msg.msg_control = buf.data();
    msg.msg_controllen = buf.size();
    auto cmsg { CMSG_FIRSTHDR(&msg) };
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SO_TIMESTAMPING;
    cmsg->cmsg_len = buf.size();
    *reinterpret_cast<__u32 *>(CMSG_DATA(cmsg)) = static_cast<__u32>(flags);
    return sendmsg(sd, &msg, MSG_DONTWAIT);
}
Leaving the function, buf is automatically destroyed, but does sendmsg need this buffer to live longer?
Do I have a guarantee that the kernel does not need this buffer once the call has returned the number of bytes sent?
Except for specific interfaces, it is generally the case that operating-system calls do not rely on user-space data structures after they return. The exceptions are spelled out in the manual pages.
With sendmsg in particular, you can rely on the call completing immediately, whether successful or not. It is therefore fine to use an automatic (stack) buffer as you are doing, and to destroy it immediately after the call.
As an example of one exception, aio_write(2) is specifically intended to let user-space queue a write operation that completes asynchronously. For that call the data is not consumed until it can actually be written, so you must not modify the data structures passed in until you have confirmed the operation is complete. The caveat is called out in the NOTES section of the manual page:
... The control block must not be changed while the write operation is in progress. The buffer area being written out must not be accessed during the operation or undefined results may occur. The memory areas involved must remain valid.
In summary: check the manual page for the system call. But most of the time, you don't need to worry about it.

Why, to print a string with interrupt-driven IO, does only the first character need to be copied?

Almost all materials I found online reference the code below, from Tanenbaum's OS book. However, I don't really understand why this prints the whole string instead of only the first character.
Is it because the interrupts are generated recursively? But wouldn't that cost a lot of resources? Or did I miss something?
I'm really confused. Any help would be appreciated.
Code executed when the print system call is made:
copy_from_user(buffer, p, count);
enable_interrupts();
while (*printer_status_reg != READY);
*printer_data_register = p[0];
scheduler();
Interrupt handler:
if (count == 0) {
    unblock_user();
} else {
    *printer_data_register = p[i];
    count = count - 1;
    i++;
}
acknowledge_interrupt();
return_from_interrupt();
You write the first character to the printer's data register, which starts the transmission.
When that character has been printed, a transmit-complete interrupt is generated.
Your interrupt handler then checks whether there are any more bytes to transfer (the else branch). If so, it writes the next byte to the data register, decrements the number of bytes left to transmit, and increments the buffer index.
This process repeats: each completed character raises the interrupt that sends the next one. When the number of bytes left reaches zero, the handler does not start another transfer, so the interrupts stop. For example, to print "abc", the system call writes 'a', the first interrupt writes 'b', the second writes 'c', and the third finds nothing left and unblocks the user.
So by transferring the first byte you start the process, and the remaining bytes are transferred by the interrupt handler. You have to make sure count is correct.
You can guess what happens if count is too small or too large!
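The same pattern, sketched for a hypothetical memory-mapped UART (UART_DATA, UART_STATUS, UART_READY and uart_tx_isr() are invented names for illustration, not from the book):
/* interrupt-driven transmit: the caller primes the first byte, the ISR
   sends the rest, one byte per transmit-complete interrupt */
static const char  *tx_buf;      /* bytes still to send */
static volatile int tx_left;     /* how many remain     */

void print_string(const char *s, int n)
{
    tx_buf  = s + 1;                         /* ISR continues from byte 2   */
    tx_left = n - 1;
    while (!(UART_STATUS & UART_READY));     /* wait until transmitter idle */
    UART_DATA = s[0];                        /* first byte primes the pump  */
}

void uart_tx_isr(void)                       /* one interrupt per sent byte */
{
    if (tx_left > 0) {
        UART_DATA = *tx_buf++;               /* next byte keeps chain going */
        tx_left--;
    }
    /* else: transmission finished - wake/unblock the caller here */
}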

incomplete data when reading from serial port

I want to read data from 6 sensors from an Arduino into VC++ using a handshaking method: I send "1" to the Arduino, and the device then sends the data to the PC.
My data format is:
&data0,data1,data2,data3,data4,data5%
But when I read it with VC++ the data is always incomplete, even though the buffer is large enough for all of it.
Here is the relevant part of my VC++ program; it runs in a timer event:
DWORD nbytes;
char buffer[24];

// Read sensors
if (!WriteFile(hnd_serial, "1", 1, &nbytes, NULL))
{
    KillTimer(cTimer1);
    MessageBox(L"Write Com Port fail!");
    return;
}
Sleep(5);
if (!ReadFile(hnd_serial, buffer, 23, &nbytes, NULL))
{
    KillTimer(cTimer1);
    MessageBox(L"Read Com Port fail!");
    return;
}
Sleep(50);
I have changed the baud rate, but the result is still the same.
However, if I drop one value, e.g. data5 (so only 5 sensors), the data arrives complete.
Am I doing something wrong in my program?
You can put your ReadFile() call inside a do-while loop that keeps reading until the whole message has arrived:
DWORD total = 0;
do {
    if (!ReadFile(hnd_serial, buffer + total, 23 - total, &nbytes, NULL))
    {
        // process the error
        break;
    }
    total += nbytes;        // accumulate what has arrived so far
    if (total >= 23)
    {
        // complete message received - set your flag / process it here
        break;
    }
} while (nbytes);
ReadFile() returns whatever is currently in the receive buffer; it does not wait until all of your expected data has arrived.
You don't say what baud rate you're working with.
You also don't say what receive timeouts, if any, have been set.
Let's assume 9600 baud: the port then transfers about 960 bytes per second, i.e. just over 1 millisecond per byte.
If your Sleep(5) is supposed to cover both sending the "1" AND receiving the data back, it ought to wait long enough for all of your bytes to be transferred; 23 bytes take roughly 24 ms to arrive. Assuming the timeouts are set so that ReadFile returns immediately with whatever bytes are in the RX buffer, there's a good chance you're reading too early.
Try a bigger delay.
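Alternatively, instead of guessing a delay, you can let ReadFile itself wait for the data by configuring receive timeouts. A minimal sketch, assuming hnd_serial is the open port handle (the values shown are illustrative):
// make ReadFile block until the requested bytes arrive or the timeout expires
COMMTIMEOUTS to = { 0 };
to.ReadIntervalTimeout        = 50;    // max gap between two bytes (ms)
to.ReadTotalTimeoutMultiplier = 2;     // per-byte allowance (~1 ms/byte at 9600)
to.ReadTotalTimeoutConstant   = 100;   // fixed extra allowance (ms)
SetCommTimeouts(hnd_serial, &to);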

recv with flags MSG_DONTWAIT | MSG_PEEK on TCP socket

I have a TCP stream connection used to exchange messages. This is inside the Linux kernel. The consumer thread keeps processing incoming messages; after consuming one message, I want to check whether more messages are pending, in which case I would process them too. My code to achieve this looks like the snippet below. krecv is a wrapper for sock_recvmsg(), passing the flags value through unmodified (krecv is from the ksocket kernel module).
With MSG_DONTWAIT, I expect the call not to block, but apparently it blocks. With MSG_PEEK, if there is no data to be read, it should just return zero. Is this understanding correct? Is there a better way to achieve what I need here? I would guess this is a common requirement, since message passing across nodes is used so frequently.
int recvd = 0;
do {
    recvd += krecv(*sockp, (uchar *)msg + recvd, sizeof(my_msg) - recvd, 0);
    printk("recvd = %d / %lu\n", recvd, sizeof(my_msg));
} while (recvd < sizeof(my_msg));
BUG_ON(recvd != sizeof(my_msg));

/* For some reason, the line below _blocks_ even with non-blocking flags */
recvd = krecv(*sockp, (uchar *)tempbuf, sizeof(tempbuf), MSG_PEEK | MSG_DONTWAIT);
if (recvd) {
    printk("more data waiting to be read");
    more_to_process = true;
} else {
    printk("NO more data waiting to be read");
}
You might check the receive buffer's length first:
int bytesAv = 0;
ioctl(m_Socket, FIONREAD, &bytesAv);   // m_Socket is the client socket's fd
If there is data in it, a recv with MSG_PEEK will not block.
If there is no data at all, there is no need to MSG_PEEK.
That might be what you are looking for.
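A small user-space sketch of that check (sock_fd is a placeholder descriptor; inside the kernel the same information would come from the socket's receive queue rather than from ioctl()):
/* peek only when FIONREAD reports queued data; the recv cannot block then */
int avail = 0;
char peekbuf[64];
if (ioctl(sock_fd, FIONREAD, &avail) == 0 && avail > 0) {
    ssize_t n = recv(sock_fd, peekbuf, sizeof(peekbuf), MSG_PEEK);
    if (n > 0)
        printf("%zd bytes waiting to be read\n", n);
}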
This is a very old question, but:
1. the problem persists, and
2. I ran into it myself.
At least for me (Ubuntu 19.04 with Python 2.7), MSG_DONTWAIT has no effect; however, if I set the timeout to zero (with the settimeout function), it works nicely.
The equivalent can be configured in C with the setsockopt function.
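For reference, a common C-side way to get the same non-blocking behaviour is to set O_NONBLOCK on the descriptor with fcntl() (a sketch; this is an alternative to the setsockopt() route mentioned above):
/* non-blocking mode: recv() then returns -1 with errno EAGAIN/EWOULDBLOCK
   instead of blocking when no data is available */
#include <fcntl.h>

int flags = fcntl(sock_fd, F_GETFL, 0);
fcntl(sock_fd, F_SETFL, flags | O_NONBLOCK);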
