How to implement a string priority queue that checks message priority - priority-queue

A messenger is used to send and receive text
messages. When the receiver is offline, the messenger
keeps the messages in a buffer and delivers them
once the receiver comes online.
Delivery normally follows the timestamp: the message
that entered the buffer earlier is delivered first, and a
message that arrived later is delivered after it.
Sometimes a message in the buffer has a higher
priority, and it should then be delivered ahead of the
others. Messages that must be delivered on a
particular day or date sit in the same buffer.
Your task is to select a suitable
data structure (a heap or a priority queue) and
implement the requirements mentioned above.
You need to implement a program that
shows a user as offline, displays the buffered messages,
and, on a click or a key stroke, brings the user online
and delivers/displays the messages according to the
criteria above.
I don't understand how to check the priority.

Sounds to me like your code that orders the heap has to check two things when assigning priority. It checks the priority flag (the one that signals that a message must be delivered sooner than normal), and the timestamp.
So your comparison function looks something like:
struct Message
{
    bool priority;        // true if the message must be delivered sooner than normal
    long long timestamp;  // smaller means the message entered the buffer earlier
};

// Returns -1, 0, or 1 to indicate whether msg1 should be delivered
// before, at the same time as, or after msg2.
int compare(const Message &msg1, const Message &msg2)
{
    if (msg1.priority)
    {
        if (!msg2.priority)
            return -1; // msg1 has the priority flag set and msg2 doesn't
    }
    else if (msg2.priority)
    {
        return 1;      // msg2 has the priority flag set and msg1 doesn't
    }

    // At this point we know the priority flag is the same for both
    // messages, so compare timestamps.
    if (msg1.timestamp < msg2.timestamp)
        return -1;
    if (msg1.timestamp == msg2.timestamp)
        return 0;
    return 1;
}
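If you end up using std::priority_queue for the buffer, the same two-step rule can be expressed as a comparator. This is only a minimal sketch under the assumption that each message carries a boolean priority flag and a numeric timestamp; the Message and DeliverLater names are illustrative, not part of any required API:

#include <queue>
#include <vector>

struct Message
{
    bool priority;        // delivered-sooner-than-normal flag
    long long timestamp;  // smaller means the message arrived earlier
    // ... message text, scheduled delivery date, etc.
};

// std::priority_queue pops its "largest" element first, so the comparator
// must return true when a should be delivered AFTER b.
struct DeliverLater
{
    bool operator()(const Message &a, const Message &b) const
    {
        if (a.priority != b.priority)
            return b.priority;            // the flagged message wins
        return a.timestamp > b.timestamp; // otherwise the earlier message wins
    }
};

std::priority_queue<Message, std::vector<Message>, DeliverLater> buffer;
// buffer.top() is then always the next message to deliver.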

Related

Serial data acquisition program reading from buffer

I have developed an application in Visual C++ 2008 to read data periodically (50ms) from a COM Port. In order to periodically read the data, I placed the read function in an OnTimer function, and because I didn't want the rest of the GUI to hang, I called this timer function from within a thread. I have placed the code below.
The application runs fine, but it shows the following unexpected behaviour: after the data source (a hardware device or even a data emulator) stops sending data, my application continues to receive data for a period of time that is proportional to how long the read function has been running (EDIT: this excess period is in the same ballpark as the period of time the data was sent for). So if I start and stop the data flow immediately, this is reflected on my GUI, but if I start the data flow and stop it ten seconds later, my GUI continues to show data for 10 more seconds (EDITED).
I have made the following observations after exhausting all my attempts at debugging:
As mentioned above, this excess period of operation is proportional to how long the hardware has been sending data.
The frequency of incoming data is 50 ms, so to receive 10 seconds' worth of data my GUI must be receiving around 200 extra data packets.
The only buffer I have declared is abBuffer, which is just a byte array of fixed size. I don't think this can increase in size, so the data must be stored somewhere else.
If I change something in the data packet, this change, understandably, shows up on the GUI after a delay (because of the above points). This implies that the data received at the COM port is stored in some variable-sized buffer from which my read function reads.
I have timed the read and processing periods. The latter is instantaneous, while the former very rarely (3 times in 1000 reads, following no discernible pattern) takes 16 ms. This is well within the 50 ms window the GUI has for each read.
The following is my thread and timer code:
UINT CMyCOMDlg::StartThread(LPVOID param)
{
    THREADSTRUCT *ts = (THREADSTRUCT*)param;
    ts->_this->SetTimer(1,50,0);
    return 0;
}

//Timer function that is called at regular intervals
void CMyCOMDlg::OnTimer(UINT_PTR nIDEvent)
{
    if(m_bCount==true)
    {
        DWORD NoBytesRead;
        BYTE abBuffer[45];
        if(ReadFile((m_hComm),&abBuffer,45,&NoBytesRead,0))
        {
            if(NoBytesRead==45)
            {
                if(abBuffer[0]==0x10&&abBuffer[1]==0x10||abBuffer[0]==0x80&&abBuffer[1]==0x80)
                {
                    fnSetData(abBuffer);
                }
                else
                {
                    CString value;
                    value.Append("Header match failed");
                    SetDlgItemText(IDC_RXRAW,value);
                }
            }
            else
            {
                CString value;
                value.Append(LPCTSTR(abBuffer),NoBytesRead);
                value.Append("\r\nInvalid Packet Size");
                SetDlgItemText(IDC_RXRAW,value);
            }
        }
        else
        {
            DWORD dwError2 = GetLastError();
            CString error2;
            error2.Format(_T("%d"),dwError2);
            SetDlgItemText(IDC_RXRAW,error2);
        }
        fnClear();
    }
    else
    {
        KillTimer(1);
    }
    CDialog::OnTimer(nIDEvent);
}
m_bCount is just a flag I use to kill the timer and the ReadFile function is a standard Windows API call. ts is a structure that contains a pointer to the main dialog class, i.e., this.
Can anyone think of a reason this could be happening? I have tried a lot of things, and also my code does so little I cannot figure out where this unexpected behaviour is happening.
EDIT:
I am adding the COM port settings and timeouts used below:
dcb.BaudRate = CBR_115200;
dcb.ByteSize = 8;
dcb.StopBits = ONESTOPBIT;
dcb.Parity = NOPARITY;
SetCommState(m_hComm, &dcb);
_param->_this=this;
COMMTIMEOUTS timeouts;
timeouts.ReadIntervalTimeout=1;
timeouts.ReadTotalTimeoutMultiplier = 0;
timeouts.ReadTotalTimeoutConstant = 10;
timeouts.WriteTotalTimeoutMultiplier = 1;
timeouts.WriteTotalTimeoutConstant = 1;
SetCommTimeouts(m_hComm, &timeouts);
You are processing at most one message per call to OnTimer(). The data source sends a packet every 50 milliseconds, so if a single ReadFile() per tick ever falls behind (a missed timer tick, a short read), the unread data accumulates in the driver's receive buffer and your application keeps reading stale packets long after the device has stopped.
You can drain the buffer with a while loop, as follows:
while(true)
{
    if(::ReadFile(m_hComm, &abBuffer, sizeof(abBuffer), &NoBytesRead, 0))
    {
        if(NoBytesRead == sizeof(abBuffer))
        {
            ...
        }
        else
        {
            ...
            break;
        }
    }
    else
    {
        ...
        break;
    }
}
But there is another problem in your code. If your software reads while the data source is still in the middle of sending a packet, NoBytesRead can be less than 45. You may want to accumulate the data in a message buffer such as a CString or a std::queue<unsigned char> and only process it once a complete packet is available.
If the message doesn't end with a NULL terminator, passing it to a CString is not safe.
Also, if the first byte is 0x80, CString may treat the data as a multi-byte string, which can cause errors. If the message is not literal text, consider a different container such as std::vector<unsigned char>.
By the way, you don't need to call SetTimer() from a separate thread; starting a timer takes no noticeable time. I also recommend calling KillTimer() somewhere outside of OnTimer() so the code is more intuitive.
If the data source keeps sending data continuously, you may also need to call PurgeComm() when you open/close the COM port.
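Putting those suggestions together, a possible shape for the read path is sketched below. It is only illustrative: the 45-byte packet size, the header values, and fnSetData() come from the question, while g_rxBuffer and DrainPort() are names invented for the example.

#include <windows.h>
#include <vector>

void fnSetData(BYTE *packet);          // from the question: consumes one 45-byte packet

std::vector<unsigned char> g_rxBuffer; // persistent receive buffer (a class member in real code)

void DrainPort(HANDLE hComm)
{
    BYTE chunk[256];
    DWORD bytesRead = 0;

    // Pull everything the driver currently has into the persistent buffer.
    while (ReadFile(hComm, chunk, sizeof(chunk), &bytesRead, 0) && bytesRead > 0)
        g_rxBuffer.insert(g_rxBuffer.end(), chunk, chunk + bytesRead);

    // Peel off every complete 45-byte packet; keep any partial packet for next time.
    while (g_rxBuffer.size() >= 45)
    {
        if ((g_rxBuffer[0] == 0x10 && g_rxBuffer[1] == 0x10) ||
            (g_rxBuffer[0] == 0x80 && g_rxBuffer[1] == 0x80))
        {
            fnSetData(&g_rxBuffer[0]);
            g_rxBuffer.erase(g_rxBuffer.begin(), g_rxBuffer.begin() + 45);
        }
        else
        {
            // Header mismatch: resynchronise by dropping one byte.
            g_rxBuffer.erase(g_rxBuffer.begin());
        }
    }
}

With the question's short read timeouts, ReadFile returns quickly once the driver's buffer is empty, so the drain loop should not hold up the timer tick for long.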

Why to print a string in interrupt driven IO, only the first character needs to be copied?

Almost all the materials I found online reference the code below from Tanenbaum's OS book. However, I don't really understand why this prints the whole string instead of only the first character.
Is it because the interrupts will be generated recursively? But wouldn't that cost a lot of resources? Or did I miss something?
I'm really confused. Any help would be appreciated.
Code executed when print system call is made:
copy_from_user (buffer, p, count);
enable_interrupts ();
while (*printer_status_reg != READY);
*printer_data_register = p[0];
scheduler ();
Interrupt handler:
if (count == 0) {
    unblock_user ();
} else {
    *printer_data_register = p[i];
    count = count - 1;
    i++;
}
acknowledge_interrupt ();
return_from_interrupt ();
You write the first character to the device and start the transmission.
When that transfer completes, a Tx_Complete interrupt is generated.
Your interrupt handler then checks whether there are any more bytes to transfer (the else branch). If there are, it writes the next byte to the transmit register, decrements the number of bytes left to transmit, and increments the buffer index.
This process repeats: when the number of bytes left reaches zero, no new transfer is started and the interrupts stop.
So transferring the first byte only kicks off the process; the remaining bytes are transferred by the interrupt handler, one per interrupt. You have to make sure that count is correct.
You can guess what happens if count is too small or too large!
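To make the chain of events concrete, here is a small user-space simulation of the same idea. It is only illustrative: there is no real device, the "interrupt" is modelled as a function called once per completed byte, and count is initialised to the number of bytes the handler still has to send (i to 1).

#include <stdio.h>
#include <string.h>

static const char *p;   /* buffer being printed                */
static int count;       /* bytes the handler still has to send */
static int i;           /* index of the next byte to send      */

static void printer_write(char c) { putchar(c); }  /* stands in for *printer_data_register */

/* Called once per completed byte, i.e. once per "interrupt". */
static void interrupt_handler(void)
{
    if (count == 0) {
        /* unblock_user(): nothing left to send, the chain stops here */
        return;
    }
    printer_write(p[i]);  /* hand the next byte to the device */
    count = count - 1;
    i = i + 1;
}

int main(void)
{
    const char *msg = "ABC";
    int n = (int)strlen(msg);

    p = msg;
    count = n - 1;         /* the handler sends everything except the first byte */
    i = 1;

    printer_write(p[0]);   /* the "system call" writes only the FIRST byte */

    /* Each completed transfer raises one interrupt; the handler sends one
     * more byte per interrupt until count reaches zero.                   */
    for (int k = 0; k < n; k++)
        interrupt_handler();

    putchar('\n');
    return 0;              /* prints the whole string: ABC */
}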

MPI_Recv message ordering vs MPI_Send message ordering

While trying to simulate the behaviour of a network using OpenMPI, I am experiencing an issue which can be summed up as follows:
Rank 2 sends a message (message1) to rank 0;
Rank 2 sends a message (message2) to rank 1;
Rank 2 sends a message (message3) to rank 0;
In its turn, rank 0 receives both messages from rank 2 and forwards them to rank 1 (in the correct order);
Rank 1 receives the messages in the following order: message1, message3 and message2.
This behaviour occurs only once in a while when running the program. Usually (6 times out of 7), following the same pattern, rank 1 appears to receive the messages in the expected order (i.e., message2, message1, message3).
I am only using the basic MPI_Recv and MPI_Send functions.
MPI makes no guarantee about the order in which messages sent from different processes will be received. In fact, in standard mode a send can complete before the matching receive has even started if the message is buffered: http://www.mpi-forum.org/docs/mpi-1.1/mpi-11-html/node40.html#Node40. The only order you can guarantee with standard-mode sends is that message3 will always arrive after message1, because both travel from rank 0 to rank 1 and messages between the same pair of ranks cannot overtake each other. Here is one possible (not unique) sequence that would lead to your anomalous scenario:
Rank 2 sends a message (message1) to rank 0;
Rank 0 receives message1 from rank 2;
Rank 0 sends a message (message1) to rank 1;
Rank 1 receives message1 from rank 0;
Rank 2 sends a message (message2) to rank 1;
Rank 2 sends a message (message3) to rank 0;
Rank 0 receives message3 from rank 2;
Rank 0 sends a message (message3) to rank 1;
Rank 1 receives message3 from rank 0;
Rank 1 receives message2 from rank 2;
Essentially, a standard-mode MPI_Send is allowed to behave like either MPI_Bsend (buffered) or MPI_Ssend (synchronous), and it is not up to you which behaviour you get. Your anomaly is caused by the buffered behaviour. You can guarantee that the matching receive has started before the send completes by using synchronous mode (MPI_Ssend) or ready mode (MPI_Rsend). The main difference between the two is that ready mode requires the receiver to already be waiting for the message, otherwise the send is erroneous, while synchronous mode simply waits for the receive to begin.
If you are on a Linux platform, you can play with standard mode by using the nice command to increase the priority of rank 0 and decrease that of rank 2. The anomaly should happen more consistently the larger you make the priority difference. Here is a brief tutorial on the subject: http://www.nixtutor.com/linux/changing-priority-on-linux-processes/
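If rank 1 actually needs a deterministic order, another option is to have rank 1 impose the order itself by receiving with explicit source ranks and tags instead of MPI_ANY_SOURCE. The sketch below is not the asker's code; the tag values and the int payloads are assumptions made for the example.

#include <mpi.h>

enum { TAG_MSG1 = 1, TAG_MSG2 = 2, TAG_MSG3 = 3 };

void rank1_receive_in_order(int *msg1, int *msg2, int *msg3)
{
    MPI_Status st;

    /* Consume message2 directly from rank 2 first ... */
    MPI_Recv(msg2, 1, MPI_INT, 2, TAG_MSG2, MPI_COMM_WORLD, &st);

    /* ... then the two messages forwarded by rank 0, in forwarding order.
     * Non-overtaking: two messages sent from rank 0 to rank 1 on the same
     * communicator cannot arrive out of order relative to each other.     */
    MPI_Recv(msg1, 1, MPI_INT, 0, TAG_MSG1, MPI_COMM_WORLD, &st);
    MPI_Recv(msg3, 1, MPI_INT, 0, TAG_MSG3, MPI_COMM_WORLD, &st);
}

Each receive blocks until the specific message it names arrives, so the order in which rank 1 consumes the messages no longer depends on the order in which they happen to reach it.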

recv with flags MSG_DONTWAIT | MSG_PEEK on TCP socket

I have a TCP stream connection used to exchange messages. This is inside the Linux kernel. The consumer thread keeps processing incoming messages. After consuming one message, I want to check whether there are more pending messages, in which case I would process them too. My code to achieve this looks like the snippet below. krecv is a wrapper for sock_recvmsg() that passes the flags value through without modification (krecv is from the ksocket kernel module).
With MSG_DONTWAIT, I am expecting it not to block, but apparently it blocks. With MSG_PEEK, if there is no data to be read, it should just return zero. Is this understanding correct? Is there a better way to achieve what I need here? I am guessing this should be a common requirement, as message passing across nodes is used frequently.
int recvd = 0;
do {
    recvd += krecv(*sockp, (uchar*)msg + recvd, sizeof(my_msg) - recvd, 0);
    printk("recvd = %d / %lu\n", recvd, sizeof(my_msg));
} while(recvd < sizeof(my_msg));
BUG_ON(recvd != sizeof(my_msg));

/* For some reason, below line _blocks_ even with no blocking flags */
recvd = krecv(*sockp, (uchar*)tempbuf, sizeof(tempbuf), MSG_PEEK | MSG_DONTWAIT);

if (recvd) {
    printk("more data waiting to be read");
    more_to_process = true;
} else {
    printk("NO more data waiting to be read");
}
You might check the buffer's length first:
int bytesAv = 0;
ioctl(m_Socket, FIONREAD, &bytesAv); // m_Socket is the client socket's fd
If there is data in it, then a recv with MSG_PEEK will not block.
If there is no data at all, then there is no need for MSG_PEEK in the first place,
which might be what you want to do.
This is a very, very old question, but:
1. the problem persists;
2. I ran into it myself.
At least for me (Ubuntu 19.04 with Python 2.7), MSG_DONTWAIT has no effect; however, if I set the timeout to zero (with the settimeout function), it works nicely.
This can be done in C with the setsockopt function.
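For comparison, in ordinary user space the "is more data pending?" check can be written as below. This is only an illustrative sketch (the function name more_data_pending is made up): with a non-blocking peek, "no data right now" shows up as -1 with errno set to EAGAIN/EWOULDBLOCK, whereas a return value of 0 means the peer closed the connection, not an empty buffer.

#include <errno.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

int more_data_pending(int fd)
{
    char tmp;
    ssize_t n = recv(fd, &tmp, 1, MSG_PEEK | MSG_DONTWAIT);

    if (n > 0)
        return 1;                                /* at least one byte is queued  */
    if (n == 0)
        return 0;                                /* peer closed the connection   */
    if (errno == EAGAIN || errno == EWOULDBLOCK)
        return 0;                                /* nothing queued right now     */

    perror("recv(MSG_PEEK | MSG_DONTWAIT)");     /* some other error             */
    return -1;
}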

Behavior of WaitForMultipleObjects when multiple handles signal at the same time

Given: I fill an array of handles with auto-reset events and pass it to WaitForMultipleObjects with bWaitAll = FALSE.
From MSDN:
“When bWaitAll is FALSE, this function checks the handles in the array in order starting with index 0, until one of the objects is signaled. If multiple objects become signaled, the function returns the index of the first handle in the array whose object was signaled.”
So, now if multiple objects signal I’ll get the index of the first one. Do I have to loop through my array to see if any others have signaled?
Right now I have a loop that’s along the lines of:
For ( ; ; )
{
    WaitForMultipleObjects(…)
    If (not failed)
        Process the object that signaled.
        Remove the handle that signaled from the array.
        Compact the array.
}
So, now if multiple objects signal I’ll get the index of the first one. Do I have to loop through my array to see if any others have signaled?
Why not just go back round into the Wait()? If multiple objects are signalled, they will still be signalled when you come back round. Of course, if you have a very rapidly firing first object in the wait array it will starve the others; so order the objects in the wait array by firing frequency, with the least frequent first.
BTW, where you're using an endless for(), you could use a goto. If you really are not leaving a loop, an unconditional goto most properly expresses your intent.
Yes. One alternative is to call WaitForSingleObject(handle, 0) on each remaining handle; with a zero timeout it returns immediately and tells you whether that handle is signaled.
EDIT: Here's sample pseudocode for what I mean:
ret = WaitForMultipleObjects()
if (ret >= WAIT_OBJECT_0 && ret < WAIT_OBJECT_0 + count)
{
    firstSignaled = ret - WAIT_OBJECT_0;
    // handles[firstSignaled] guaranteed signaled!!
    for (i = firstSignaled + 1; i < count; i++)
    {
        if (WaitForSingleObject(handles[i], 0) == WAIT_OBJECT_0)
        {
            // handles[i] signaled!
        }
    }
}
One other option is to use RegisterWaitForSingleObject. The idea is that you flag each event's signaled state in a secondary array from the callback function, and then signal a master event that wakes up your primary thread (which waits on the master event with WaitForSingleObject); a sketch follows below.
Obviously you'd have to take care that the secondary array is safely shared with the main thread, but it would work.
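A rough sketch of that arrangement, assuming four auto-reset events. None of these names (g_events, g_signaled, g_masterEvent, OnEventSignaled) come from the question, and the events are assumed to be created elsewhere with CreateEvent.

#include <windows.h>

const int EVENT_COUNT = 4;
HANDLE g_events[EVENT_COUNT];                 // auto-reset events, created elsewhere
volatile LONG g_signaled[EVENT_COUNT] = {};   // secondary "which ones fired" array
HANDLE g_masterEvent;                         // auto-reset event that wakes the primary thread

VOID CALLBACK OnEventSignaled(PVOID context, BOOLEAN /*timedOut*/)
{
    int index = (int)(INT_PTR)context;        // which event fired
    InterlockedExchange(&g_signaled[index], 1);
    SetEvent(g_masterEvent);                  // wake the primary thread
}

void RegisterAll()
{
    for (int i = 0; i < EVENT_COUNT; ++i)
    {
        HANDLE waitHandle;                    // keep this if you want to UnregisterWait() later
        RegisterWaitForSingleObject(&waitHandle, g_events[i], OnEventSignaled,
                                    (PVOID)(INT_PTR)i, INFINITE, WT_EXECUTEDEFAULT);
    }
}

// Primary thread: wait on the master event, then process every flagged entry.
void PrimaryLoop()
{
    for (;;)
    {
        WaitForSingleObject(g_masterEvent, INFINITE);
        for (int i = 0; i < EVENT_COUNT; ++i)
        {
            if (InterlockedExchange(&g_signaled[i], 0))
            {
                // process event i here
            }
        }
    }
}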
Only the auto-reset event that ended the wait (the one whose index is returned) will be reset. If the wait times out, no events are reset.
cf. https://blogs.msdn.microsoft.com/oldnewthing/20150409-00/?p=44273
