MS Visio diagram showing interaction with a message queue

Sorry for a silly question.
I need to draw a diagram in MS Visio showing how a small piece of C++ code works. There are two threads: the main GUI thread, which renders some data (represented by an object model) with DirectX, and a data-receiver thread, which queues messages received over TCP. After each flip, the GUI thread processes all the messages in the queue (emptying it) and makes the corresponding changes to the data model.
What kind of diagram should I use to depict this process? Primarily I need to show the processing of messages from the message queue and the changes made to the data model, but not the structure of the data model or the objects and classes it consists of.
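For reference, a minimal C++ sketch of the flow being described (all names here are illustrative, not taken from the actual code):

    #include <mutex>
    #include <queue>
    #include <string>
    #include <utility>

    struct Message { std::string payload; };

    std::queue<Message> messageQueue;   // filled by the receiver thread
    std::mutex          queueMutex;

    // Called on the data-receiver thread for each message received over TCP.
    void onTcpMessage(Message msg) {
        std::lock_guard<std::mutex> lock(queueMutex);
        messageQueue.push(std::move(msg));
    }

    // Called on the GUI thread once per frame, right after the DirectX flip.
    void drainQueueAndUpdateModel(/* DataModel& model */) {
        std::lock_guard<std::mutex> lock(queueMutex);
        while (!messageQueue.empty()) {
            Message msg = std::move(messageQueue.front());
            messageQueue.pop();
            // applyToModel(model, msg);  // make the corresponding change to the data model
        }
    }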


Is my design for sending data to clients at various intervals correct?

The code should be written in C++. I'm mentioning this just in case someone suggests a solution that won't work efficiently when implemented in C++.
Objective:
A Producer running on thread t1 passes images to a Consumer running on thread t2. The Consumer has a list of clients to which it should send the images at various intervals, e.g. client1 requires images every 1 second, client2 every 5 seconds, and so on.
Suggested implementation:
There is one main queue, imagesQ, in the Consumer, to which the Producer enqueues images. In addition to the main queue, the Consumer manages a vector of queues, clientImageQs, with one queue per client. The Consumer creates a sub-consumer, running on its own thread, for each client. Each sub-consumer dequeues images from its queue in clientImageQs and sends them to its client at that client's interval.
Every time a new image arrives in imagesQ, the Consumer duplicates it and enqueues a copy to each queue in clientImageQs. Thus, each sub-consumer can send images to its client at its own frequency.
Potential problem and solution:
If the Producer enqueues images at a much higher rate than one of the sub-consumers dequeues them, that queue will grow without bound. But the Consumer can check the size of each queue in clientImageQs before enqueuing and, if needed, dequeue a few old images before enqueuing new ones.
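Roughly, the fan-out described above could look like this (a sketch only; the Image type, queue bound and names are assumptions):

    #include <cstddef>
    #include <deque>
    #include <memory>
    #include <mutex>
    #include <vector>

    struct Image { /* pixel data ... */ };

    // One bounded queue per client, each drained by its own sub-consumer thread.
    struct ClientQueue {
        std::deque<std::shared_ptr<const Image>> images;
        std::mutex mutex;
    };

    class Consumer {
    public:
        explicit Consumer(std::size_t clientCount) : clientImageQs(clientCount) {}

        // Called for every image the Producer delivers to imagesQ.
        void distribute(const std::shared_ptr<const Image>& img) {
            for (auto& q : clientImageQs) {
                std::lock_guard<std::mutex> lock(q.mutex);
                // Drop old images if this sub-consumer has fallen behind.
                while (q.images.size() >= maxQueueSize)
                    q.images.pop_front();
                q.images.push_back(img);     // "duplicate" by sharing ownership
            }
        }

    private:
        static constexpr std::size_t maxQueueSize = 32;  // assumed bound
        std::vector<ClientQueue> clientImageQs;          // one queue per client
    };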
Question
Is this a good design, or is there a better one?
You describe the problem within a set of already determined solution limitations. Your description is complex, confusing, and I dare say, confused.
Why have a Consumer that only distributes images out of a shared buffer? Why not allow each "client", as you call it, to read from the buffer as it needs to?
Why not implement the shared buffer as a single-image buffer? The producer writes at its own rate. The clients perform non-destructive reads of the buffer at their own rates. Each client is guaranteed to read the most recent image whenever it reads the buffer. The producer simply overwrites the buffer with each write.
A multi-element queue offers no benefit in this application. In fact, as you have described, it greatly complicates the solution.
See http://sworthodoxy.blogspot.com/2015/05/shared-resource-design-patterns.html and look for the heading "unconditional buffer".
The examples in the posting listed above are all implemented using Ada, but the concepts related to concurrent design patterns are applicable to all programming languages supporting concurrency.
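A minimal C++ sketch of such a single-slot buffer (the Image type and names are placeholders, not taken from the question):

    #include <memory>
    #include <mutex>

    struct Image { /* pixel data ... */ };

    // Single-slot shared buffer: the producer overwrites, clients read
    // non-destructively and always see the most recent image.
    class LatestImageBuffer {
    public:
        void write(std::shared_ptr<const Image> img) {
            std::lock_guard<std::mutex> lock(mutex_);
            latest_ = std::move(img);    // overwrite the previous image
        }

        std::shared_ptr<const Image> read() const {
            std::lock_guard<std::mutex> lock(mutex_);
            return latest_;              // non-destructive read of the latest image
        }

    private:
        mutable std::mutex mutex_;
        std::shared_ptr<const Image> latest_;
    };

With this, there are no sub-consumer queues at all: each client simply calls read() on its own timer, and the producer's write() replaces whatever was there before.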

Application design for parallel collection processing

I'm experimenting with the System.Collections.Concurrent namespace but I have a problem implementing my design.
My input queue (ConcurrentQueue) is getting populated fine from a Thread which is doing some I/O at startup to read and parse.
Next I kick off a Parallel.ForEach() on the input queue. I'm doing some I/O bound work on each item.
A log item is created for each item processed in the ForEach() and is dropped into a result queue.
What I would like to do is kick off the logging while I'm still reading the input, because I may not be able to fit all of the log items in memory. What is the best way to wait for items to land in the result queue? Are there design patterns or examples that I should be looking at?
I think the pattern you're looking for is the producer/consumer pattern. More specifically, you can have a producer/consumer implementation built around TPL and BlockingCollection.
The main concepts you want to read about are:
Task,
BlockingCollection,
TaskFactory.ContinueWhenAll (this will allow you to perform some action when a set of tasks/threads has finished running).
Bounding and Blocking in BlockingCollection. This allows you to set a maximum size for your output collection (for memory reasons) and producer thread(s) will wait for consumers to pick up elements in case the maximum size you specify is reached.
BlockingCollection.CompleteAdding and BlockingCollection.IsCompleted, which can be used to synchronize producers and consumers (a producer can signal when it's finished, and a consumer can check for that and keep running until the producer(s) are finished).
A more complete sample is in the second article I linked.
In your case I think you want the consumer to just pick up things from the result queue and dispose of them as soon as possible (write them to a logging store, or similar).
So your final collection, where you dump log items, should be a BlockingCollection, not a ConcurrentQueue.

Using Core Data with multithreading and notifications

Here's yet another question on Core Data and multithreading:
I'm writing an iPhone application that retrieves XML data from the internet, parses it in a background thread (using NSXMLParser), and saves the data in Core Data using its own NSManagedObjectContext. I have a class - let's call it DataRetriever - that does this for me.
There are different UIViewControllers that then retrieve the data to display it in their respective UITableViews, of course this happens on the main thread using NSFetchedResultsControllers and a single managed object context that is used for reading.
I've read the answer to this question, which tells me that I need to register for NSManagedObjectContextDidSave notifications on the background thread (this would be done by the DataRetriever class, I suppose) and then call mergeChangesFromContextDidSaveNotification on the reading context from that class on the main thread. This, I think, is totally thread-unsafe. I might have interpreted it the wrong way, though.
I've also read this part of Apple's documentation on the subject (Track Changes in Other Threads Using Notifications), which tells me to simply register for NSManagedObjectContextDidSave notifications coming from the reading context in the view controller on the main thread, and then to call mergeChangesFromContextDidSaveNotification to update the reading context.
I went with Apple's recommendation: my view controllers now register themselves for NSManagedObjectContextDidSave notifications on the main thread, using the reading managed object context as the source of the notifications. Doing this on the writing context probably isn't thread-safe, and Apple's documentation isn't very specific about this.
Result: No crashes, but I am not receiving any notifications either.
Side note: I've read in Apple's documentation that notifications don't automatically propagate to other threads, and I might even be listening for notifications from the wrong context, but then why is Apple telling me to do it this way?
Any help is greatly appreciated.
-- EDIT --
Just to be clear, I'm registering for notifications coming from a particular NSManagedObjectContext. Apple's documentation specifically states (here) that some system frameworks may use Core Data themselves, so I could be receiving notifications from contexts that don't concern me if I don't specify a source. The documentation I referred to earlier doesn't say anything about this, though. Any comments on this design choice are welcome.
The UI runs on the main thread, so you want any intensive processing that might bog down the UI to be done on another thread. You have the context in the main thread listen for notifications because the main-thread context is usually the only one that needs to update itself in response to changes made by contexts on other threads.
All of this is thread-safe because data won't be deleted from the persistent store as long as one or more contexts are still using it. So, if context A has an object with the data while context B deletes another object representing the same data, the object in context A remains alive until context A calls for a merge.
Basically, each context operates in its own little world until you call merge. The race conditions that normally bedevil thread-based data operations don't occur with Core Data.

Help with mixing widget painting and UDP data transfers in a multi-threaded context

Here is what I need to do.
- I receive log data through a UDP connection.
- I stack the relevant data in a QList.
- I have a timer running in the main thread that, on timeout, unstacks this data, updates some arrays, and then calls widget->update().
- The widget re-implements paintEvent and uses these arrays to draw charts.
What would be the best way to do this so as not to have any bugs?
This task is basically three processes, two of which are done in the main thread:
1- I have a QThread that asks for and receives log data through UDP packets. This thread also stacks the data in a QList.
2- I have a QTimer that, on timeout, unstacks these events, preps the chart arrays, and then calls update().
3- I have a re-implementation of the paintEvent method on that widget.
I have mutexes to synchronise and protect the data. Is this a bad way of doing it? Any suggestion for a "SAFE" way of doing it would be appreciated.
On a side note, the paintEvent is done on a custom widget inside my main window. I also have a second thread (a concurrent function) that periodically refreshes some data and then emits a signal to update label fields that are outside the custom widget but inside the main window. Could this have a bad side effect on everything?
Overall, I think you've got the foundation for a solid program. The only suggestion I might make is to move task #2 to its own thread also. You can take advantage of the fact that you can draw on a QImage outside of the main UI thread to prep the chart arrays in a different thread. This would remove the biggest potential bottleneck to UI responsiveness from your code, and wouldn't add much more to the complexity, since you already have threads in your program. The updating on QTimer could work there just as well. When a new image is ready, you could send it either via signal or via posted event to your UI, which could then copy the image and update its display.
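A rough sketch of that arrangement in Qt/C++ (class names, sizes, and intervals are made up; the real code would draw from the prepped chart arrays):

    #include <QImage>
    #include <QObject>
    #include <QPaintEvent>
    #include <QPainter>
    #include <QThread>
    #include <QTimer>
    #include <QWidget>

    // Lives in a worker thread; renders the chart to a QImage off the UI thread.
    class ChartRenderer : public QObject {
        Q_OBJECT
    public slots:
        void renderChart() {
            QImage image(800, 400, QImage::Format_ARGB32);
            image.fill(Qt::white);
            QPainter painter(&image);
            // ... unstack the queued log data and draw the chart here ...
            emit chartReady(image);          // QImage can safely cross threads
        }
    signals:
        void chartReady(const QImage &image);
    };

    // The custom widget only stores and paints the finished image.
    class ChartWidget : public QWidget {
        Q_OBJECT
    public slots:
        void setChart(const QImage &image) {
            m_chart = image;
            update();                        // schedule a repaint in the UI thread
        }
    protected:
        void paintEvent(QPaintEvent *) override {
            QPainter painter(this);
            painter.drawImage(rect(), m_chart);
        }
    private:
        QImage m_chart;
    };

    // Wiring (e.g. in the main window constructor):
    //   auto *thread   = new QThread(this);
    //   auto *renderer = new ChartRenderer;
    //   auto *timer    = new QTimer;
    //   renderer->moveToThread(thread);
    //   timer->moveToThread(thread);
    //   timer->setInterval(100);
    //   connect(thread, &QThread::started, timer, qOverload<>(&QTimer::start));
    //   connect(timer, &QTimer::timeout, renderer, &ChartRenderer::renderChart);
    //   connect(renderer, &ChartRenderer::chartReady,
    //           chartWidget, &ChartWidget::setChart);  // queued across threads
    //   thread->start();

Because the chartReady/setChart connection crosses threads, Qt delivers it as a queued connection, so the widget only ever touches the image in the UI thread.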

C++/CLI efficient multithreaded circular buffer

I have four threads in a C++/CLI GUI I'm developing:
One that collects raw data
The GUI thread itself
A background processing thread, which takes chunks of raw data and produces useful information
One that acts as a controller and joins the other three threads
I've got the raw data collector working and posting results to the controller, but the next step is to store all of those results so that the GUI and background processor have access to them.
New raw data is fed in one result at a time at regular (frequent) intervals. The GUI will access each new item as it arrives (the controller announces new data and the GUI then accesses the shared buffer). The data processor will periodically read a chunk of the buffer (a second's worth, for example) and produce a new result. So effectively there is one producer and two consumers that need access.
I've hunted around, but none of the CLI-supplied stuff sounds all that useful, so I'm considering rolling my own: a shared circular buffer which allows write locks for the collector and read locks for the GUI and data processor. This would allow multiple threads to read the data as long as those sections of the buffer are not being written to.
So my question is: Are there any simple solutions in the .net libraries which could achieve this? Am I mad for considering rolling my own? Is there a better way of doing this?
Is it possible to rephrase the problem so that:
The Collector collects a new data point ...
... which it passes to the Controller.
The Controller fires a GUI "NewDataPointEvent" ...
... and stores the data point in an array.
If the array is full (or otherwise ready for processing), the Controller sends the array to the Processor ...
... and starts a new array.
If the values passed between threads are not modified after they are shared, this might save you from needing the custom thread-safe collection class, and reduce the amount of locking required.
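A plain C++ sketch of that flow (the real code would use C++/CLI types and GUI events; all names here are illustrative):

    #include <cstddef>
    #include <functional>
    #include <memory>
    #include <vector>

    struct DataPoint { double value; };
    using Batch = std::vector<DataPoint>;

    class Controller {
    public:
        // Hypothetical callbacks standing in for the GUI event and the Processor.
        std::function<void(const DataPoint&)>              onNewDataPoint;
        std::function<void(std::shared_ptr<const Batch>)>  onBatchReady;

        // Called by the Collector for every new data point.
        // Assumes this is only ever invoked from the Collector thread.
        void addDataPoint(const DataPoint& p) {
            if (onNewDataPoint) onNewDataPoint(p);   // notify the GUI of the new point
            current_->push_back(p);                  // store it in the current array
            if (current_->size() >= batchSize) {
                // Hand the full, now-immutable batch to the Processor ...
                if (onBatchReady)
                    onBatchReady(std::shared_ptr<const Batch>(std::move(current_)));
                current_ = std::make_unique<Batch>();  // ... and start a new array
            }
        }

    private:
        static constexpr std::size_t batchSize = 100;  // "a second's worth", assumed
        std::unique_ptr<Batch> current_ = std::make_unique<Batch>();
    };

Because each batch is never modified after it has been handed off, the GUI and the Processor can read it without locks.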
