How do I prioritize code to look at 3 different queues - multithreading

We have a requirement where we will have messages coming in on 3 different queues.
I need to write code such that messages from Queue A are given higher priority than Queue B, followed by Queue C.
However, I cannot keep any of the queues waiting for too long, so there should be some dedicated receivers for each queue.
Can you please suggest any existing framework that can do this for me?
A possible solution is a higher number of dedicated receivers for queue A that also look at B and C if there are no messages in A.
A slightly lesser number of dedicated receivers for Queue B that also look at A and C if there are no messages in B.
A very few dedicated receivers for Queue C that also look at A and B if there are no messages in C.
Is it possible to implement this solution at the JMS consumer/receiver level, or do I need to write custom code for it?

JMS has no means to control the priority of message handling. I propose converting each message into a task (immediately, as it arrives) and submitting the tasks to a prioritized Executor. See Java Executors: how can I set task priority?
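As an illustration, here is a minimal sketch of that idea: wrap each incoming message in a Comparable task and hand it to a ThreadPoolExecutor backed by a PriorityBlockingQueue. The class names and priority values are mine, for illustration only:

    import java.util.concurrent.*;

    // Tasks carry a priority; the executor drains its backlog in priority
    // order, so queue-A work jumps ahead of B and C. Ordering only applies
    // to tasks waiting in the backlog, and you must use execute() rather
    // than submit(), because FutureTask is not Comparable.
    class PrioritizedTask implements Runnable, Comparable<PrioritizedTask> {
        final int priority;   // 0 = queue A (highest), 1 = B, 2 = C
        final Runnable work;

        PrioritizedTask(int priority, Runnable work) {
            this.priority = priority;
            this.work = work;
        }

        @Override public void run() { work.run(); }

        @Override public int compareTo(PrioritizedTask other) {
            return Integer.compare(this.priority, other.priority);
        }
    }

    class PriorityExecutorDemo {
        public static void main(String[] args) {
            ThreadPoolExecutor executor = new ThreadPoolExecutor(
                    4, 4, 0L, TimeUnit.MILLISECONDS,
                    new PriorityBlockingQueue<>());

            // Inside each queue's MessageListener you would wrap the
            // message like this:
            executor.execute(new PrioritizedTask(0, () -> System.out.println("from A")));
            executor.execute(new PrioritizedTask(2, () -> System.out.println("from C")));
            executor.execute(new PrioritizedTask(1, () -> System.out.println("from B")));
            executor.shutdown();
        }
    }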

If you control the queues (as in, the writing code can take a queue reference you provide), then you would use a single PriorityBlockingQueue with a comparator that sorts A before B before C.
If you cannot avoid 3 queues (as in, you only get queue references to read from), then you unfortunately have to poll each one rather than take(). However, you cannot spin at full speed and must wait, so I would poll(timeout) on the A queue for as long as your minimal response time for servicing the B and C queues allows (which would be large anyway if A always has priority). Of course, you only block on A if the B and C queues are empty (but don't rely on size() if you don't know the queue implementation; just trust the outcome of the poll() you just tried).
Of course you can also spin up 3 threads that simply take() and put everything into a single priority queue that you control. But that is a bit overkill.
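A rough sketch of the poll-each-queue loop described above, assuming three plain BlockingQueues you can only read from (the 50 ms timeout and the handle() method are placeholders):

    import java.util.concurrent.*;

    // Favour A with a timed wait; fall back to non-blocking checks of B
    // and C. The timeout bounds how long a B/C message can sit unnoticed.
    class ThreeQueuePoller implements Runnable {
        final BlockingQueue<String> a, b, c;

        ThreeQueuePoller(BlockingQueue<String> a, BlockingQueue<String> b,
                         BlockingQueue<String> c) {
            this.a = a; this.b = b; this.c = c;
        }

        @Override public void run() {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    String msg = a.poll(50, TimeUnit.MILLISECONDS);
                    if (msg != null) { handle(msg); continue; }
                    msg = b.poll();                 // non-blocking check of B
                    if (msg != null) { handle(msg); continue; }
                    msg = c.poll();                 // then C
                    if (msg != null) { handle(msg); }
                    // all three empty: loop back and block on A again
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        void handle(String msg) { System.out.println("processed " + msg); }
    }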

Use the JMSPriority property on the JMS message, dump the messages on the same queue and let the provider do the work of prioritizing.
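For example (a sketch using the standard JMS API; connection and session setup are elided, and the priority argument is whatever your application assigns, 0-9 with larger meaning more urgent):

    import javax.jms.*;

    // All producers write to ONE queue with different priorities, and the
    // provider delivers higher-priority messages first.
    class PrioritySender {
        void sendWithPriority(Session session, Queue queue, String text, int priority)
                throws JMSException {
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage(text);
            // send(message, deliveryMode, priority, timeToLive)
            producer.send(message, DeliveryMode.PERSISTENT, priority, 0);
            producer.close();
        }
    }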


Is my design for sending data to clients at various intervals correct?

The code should be written in C++. I'm mentioning this just in case someone suggests a solution that won't work efficiently when implemented in C++.
Objective:
A Producer running on thread t1 passes images to a Consumer running on thread t2. The Consumer has a list of clients that it should send the images to at various intervals, e.g. client1 requires images every 1 sec, client2 requires images every 5 sec, etc.
Suggested implementation:
There is one main queue, imagesQ, in the Consumer, to which the Producer enqueues images. In addition to the main queue, the Consumer manages a vector of queues, clientImageQs, with one queue per client. The Consumer creates a sub-consumer, which runs on its own thread, for each client. Each sub-consumer dequeues images from its queue in clientImageQs and sends them to its client at its interval.
Every time a new image arrives in imagesQ, the Consumer duplicates it and enqueues it to each queue in clientImageQs. Thus, each sub-consumer will be able to send the images to its client at its own frequency.
Potential problem and solution:
If the Producer enqueues images at a much higher rate than one of the sub-consumers dequeues them, that queue will grow without bound. But the Consumer can check the size of the queue in clientImageQs before enqueuing, and, if needed, dequeue a few old images before enqueuing new ones.
Question
Is this a good design, or is there a better one?
You describe the problem within a set of already determined solution limitations. Your description is complex, confusing, and I dare say, confused.
Why have a consumer that only distributes images out of a shared buffer? Why not allow each "client", as you call it, to read from the buffer as it needs to?
Why not implement the shared buffer as a single-image buffer? The producer writes at its own rate. The clients perform non-destructive reads of the buffer at their own rates. Each client is ensured to read the most recent image in the buffer whenever it reads the buffer. The producer simply overwrites the buffer with each write.
A multi-element queue offers no benefit in this application. In fact, as you have described, it greatly complicates the solution.
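To make the buffer concrete, here is a minimal sketch (in Java for brevity; in C++ the same idea can be built on a mutex-guarded slot or an atomic shared pointer):

    import java.util.concurrent.atomic.AtomicReference;

    // Single-image "unconditional buffer": the producer overwrites, and
    // each client non-destructively reads the latest image at its own
    // interval.
    class LatestImageBuffer<T> {
        private final AtomicReference<T> latest = new AtomicReference<>();

        // Producer: overwrite the buffer at its own rate.
        public void publish(T image) { latest.set(image); }

        // Client: read the most recent image (null until the first publish).
        public T read() { return latest.get(); }
    }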
See http://sworthodoxy.blogspot.com/2015/05/shared-resource-design-patterns.html Look for the heading "unconditional buffer".
The examples in the posting listed above are all implemented using Ada, but the concepts related to concurrent design patterns are applicable to all programming languages supporting concurrency.

How to process hundreds of JMS messages from 2 queues, with response times of 1 second and 1 minute respectively

I have a business requirement where I have to process messages with certain priorities, say priority1 and priority2.
We have decided to use 2 JMS queues where priority1 messages will be sent to priority1Queue and priority2 messages will be sent to priority2Queue.
The response-time requirement for priority1Queue messages is that the moment a message is in the queue, I need to read it, process it, and send the response back to, say, another queue within 1 second. This means I should process these messages the moment they arrive on priority1Queue, and since I will have hundreds of such messages coming in per second, I will definitely need multiple concurrent consumers on this queue so that each message is consumed and processed within 1 second.
The response-time requirement for priority2Queue messages is that I need to read, process, and send the response back within 1 minute. So the response-time requirement for priority2 messages is less strict than for priority1 messages; however, I still need to respond within a minute.
Can you suggest the best possible approach for this, so that I can concurrently read messages from both queues and give higher priority to priority1 messages, so that each priority1 message can be read and processed within 1 second?
Mainly, how can a message be read and fed to a processor so that the next message can be read, and so on?
I need to write a Java-based component that does the reading and processing.
I also need to ensure this component is highly available and doesn't run into OutOfMemoryError. I will have this component running across multiple JVMs and multiple application servers, so I can have multiple clusters running this Java component.
First off, meeting the requirement to process within 1 second is not going to depend on your messaging approach, but more on the actual processing of the message and the raw CPUs available. Picking up 100s of messages per second from a queue is child's play; the JMS provider is most likely not the issue. Depending on your deployment platform (Tomcat, Mule, JEE, whatever), there should be a way to have n listeners to scale up appropriately. Because the messages stay on the queue until you pick them up, it's doubtful you'll run out of memory. I've done these apps and processed many more messages without problems.
Second, there are a number of strategies for prioritizing messages that don't necessarily require different queues. I'm leaning towards using message priorities and message selectors, where one group of listeners takes care of the highest-priority messages and another listener filters off the lower-priority ones but makes sure it does enough to get them out within a minute.
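A sketch of that idea with one queue and two listener groups, using standard JMS message selectors on the JMSPriority header (the threshold of 4 and the method names are illustrative):

    import javax.jms.*;

    class FilteredConsumers {
        void create(Session session, Queue queue) throws JMSException {
            // Group 1: sees only high-priority messages.
            MessageConsumer urgent = session.createConsumer(queue, "JMSPriority > 4");
            urgent.setMessageListener(msg -> process(msg, "urgent"));

            // Group 2: drains the rest, sized so everything clears within a minute.
            MessageConsumer normal = session.createConsumer(queue, "JMSPriority <= 4");
            normal.setMessageListener(msg -> process(msg, "normal"));
        }

        void process(Message msg, String group) { /* business logic */ }
    }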
You could also do something where a lower-priority message gets rewritten back to the same queue with a higher priority, based on how close to the 1-minute deadline you are. I know that sounds wrong, but reading/writing from JMS has very little overhead (at least compared to the equivalent database transactions), and the listener for lower-priority messages could just keep increasing the priority until the message has to be processed.
Or, simpler, just have more listeners on the high-priority queue/messages than on the lower-priority ones; an imbalance in the number of processes per message type might be all it needs.
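A bare-bones sketch of that imbalance in plain JMS, with one session per consumer since JMS sessions are single-threaded (the counts 20 and 3 are illustrative):

    import javax.jms.*;

    class ImbalancedListeners {
        void start(Connection conn, Queue priority1Queue, Queue priority2Queue)
                throws JMSException {
            for (int i = 0; i < 20; i++) addConsumer(conn, priority1Queue);
            for (int i = 0; i < 3; i++)  addConsumer(conn, priority2Queue);
            conn.start();
        }

        void addConsumer(Connection conn, Queue queue) throws JMSException {
            // One session per consumer: sessions must not be shared across threads.
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            session.createConsumer(queue).setMessageListener(this::handle);
        }

        void handle(Message msg) { /* read, process, reply within the SLA */ }
    }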
Lots of possibilities, time for a PoC.

Application design for parallel collection processing

I'm experimenting with the System.Collections.Concurrent namespace but I have a problem implementing my design.
My input queue (ConcurrentQueue) is getting populated fine from a Thread which is doing some I/O at startup to read and parse.
Next I kick off a Parallel.ForEach() on the input queue. I'm doing some I/O bound work on each item.
A log item is created for each item processed in the ForEach() and is dropped into a result queue.
What I would like to do is kick off the logging as soon as I start reading the input, because I may not be able to fit all of the log items in memory. What is the best way to wait for items to land in the result queue? Are there design patterns or examples that I should be looking at?
I think the pattern you're looking for is the producer/consumer pattern. More specifically, you can have a producer/consumer implementation built around TPL and BlockingCollection.
The main concepts you want to read about are:
Task
BlockingCollection
TaskFactory.ContinueWhenAll (will allow you to perform some action when a set of tasks/threads has finished running)
Bounding and blocking in BlockingCollection. This allows you to set a maximum size for your output collection (for memory reasons), and producer thread(s) will wait for consumers to pick up elements when the maximum size you specify is reached.
BlockingCollection.CompleteAdding and BlockingCollection.IsCompleted, which can be used to synchronize producers and consumers (the producer can say when it's finished, and consumers can check for that and keep running until the producer(s) are finished).
A more complete sample is in the second article I linked.
In your case I think you want the consumer to just pick up things from the result queue and dispose of them as soon as possible (write them to a logging store, or similar).
So your final collection, where you dump log items should be a BlockingCollection, not a ConcurrentQueue.
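For reference, the same bounded producer/consumer shape rendered in Java: a capacity-bounded LinkedBlockingQueue plays the role of a bounded BlockingCollection, and a sentinel ("poison pill") stands in for CompleteAdding/IsCompleted:

    import java.util.concurrent.*;

    class LogDrain {
        private static final String DONE = "__DONE__";   // completion sentinel

        public static void main(String[] args) throws InterruptedException {
            // The bound keeps memory in check: put() blocks when full.
            BlockingQueue<String> results = new LinkedBlockingQueue<>(1000);

            Thread consumer = new Thread(() -> {
                try {
                    String item;
                    while (!(item = results.take()).equals(DONE)) {
                        System.out.println("log: " + item);  // write to log store
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            consumer.start();

            for (int i = 0; i < 10; i++) {
                results.put("item-" + i);   // producer blocks if queue is full
            }
            results.put(DONE);              // producer signals completion
            consumer.join();
        }
    }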

How does one determine if all messages in an Azure Queue have been processed?

I've just begun tinkering with Windows Azure and would appreciate help with a question.
How does one determine if a Windows Azure Queue is empty and that all work-items in it have been processed? If I have multiple worker processes querying a work-item queue, GetMessage(s) returns no messages if the queue is empty. But there is no guarantee that a currently invisible message will not be pushed back into the queue.
I need this functionality since follow-up behavior of my workflow depends on completion of all work-items in that particular queue. A possible way of tackling this problem would be to count the number of puts and deletes. But this will again require synchronization at a shared storage level and I would like to avoid it if possible.
Any ideas?
Take a look at the ApproximateMessageCount method. This should return the number of messages on the queue, including invisible messages (e.g. the ones being processed).
Mike Wood blogged about this subtlety, along with a tidbit about the queue's Clear method, here.
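A small sketch of that check, assuming the classic azure-storage Java SDK (method names per that SDK; newer SDKs differ):

    import com.microsoft.azure.storage.StorageException;
    import com.microsoft.azure.storage.queue.CloudQueue;

    class QueueDrainCheck {
        // The count includes invisible (in-flight) messages, so 0 means
        // nothing is pending or being processed. Still approximate by design.
        boolean isQueueDrained(CloudQueue queue) throws StorageException {
            queue.downloadAttributes();   // refresh the locally cached metadata
            return queue.getApproximateMessageCount() == 0;
        }
    }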
That said: you might want to choose a different mechanism for workflow management. Maybe a table row, where your RowKey equals some multi-queue-item transaction id and the individual properties are status flags. This allows you to track failed parts of the transaction (say, 9 out of 10 queue items process ok and the 10th fails; you can still delete the 10th queue item, but set its status flag to failed, letting you deal with this scenario accordingly). Also: let's say you use the same queue to process another 'transaction' (meaning the queue is again non-zero in length). By using a separate object like a table row, you can still determine that your 'transaction' is complete even though there are additional queue messages.
The best way is to have another queue, call it a termination-indicator queue, and put a message in that queue for every message you process from your main queue. That is how it is done in research projects too. Check this out: http://www.cs.gsu.edu/dimos/content/gis-vector-data-overlay-processing-azure-platform.html

How to approach parallel processing of messages?

I am redesigning the messaging system for my app to use Intel Threading Building Blocks and am stumped trying to decide between two possible approaches.
Basically, I have a sequence of message objects and, for each message type, a sequence of handlers. For each message object, I apply each handler registered for that message object's type.
The sequential version would be something like this (pseudocode):
for each message in message_sequence                      <- SEQUENTIAL
    for each handler in (handler_table for message.type)
        apply handler to message                          <- SEQUENTIAL
The first approach which I am considering processes the message objects in turn (sequentially) and applies the handlers concurrently.
Pros:
predictable ordering of messages (i.e., we are guaranteed FIFO processing order)
(potentially) lower latency of processing each message
Cons:
more processing resources (cores) are typically available than there are handlers for a single message type (bad parallelization)
bad use of processor cache since message objects need to be copied for each handler to use
large overhead for small handlers
The pseudocode of this approach would be as follows:
for each message in message_sequence                      <- SEQUENTIAL
    parallel_for each handler in (handler_table for message.type)
        apply handler to message                          <- PARALLEL
The second approach is to process the messages in parallel and apply the handlers to each message sequentially.
Pros:
better use of processor cache (keeps the message object local to all handlers which will use it)
small handlers don't impose as much overhead (as long as there are other handlers also to be run)
more messages are expected than there are handlers, so the potential for parallelism is greater
Cons:
Unpredictable ordering - if message A is sent before message B, they may both be processed at the same time, or B may finish processing before all of A's handlers are finished (order is non-deterministic)
The pseudocode is as follows:
parallel_for each message in message_sequence             <- PARALLEL
    for each handler in (handler_table for message.type)
        apply handler to message                          <- SEQUENTIAL
The second approach has more advantages than the first, but non-deterministic ordering is a big disadvantage.
Which approach would you choose and why? Are there any other approaches I should consider (besides the obvious third approach: parallel messages and parallel handlers, which has the disadvantages of both and no real redeeming factors as far as I can tell)?
Thanks!
EDIT:
I think what I'll do is use #2 by default, but allow a "conversation tag" to be attached to each message. Any messages with the same tag are ordered and handled sequentially relative to their conversation. Handlers are passed the conversation tag alongside the message, so they may continue the conversation if they need to. Something like this:
Conversation c = new_conversation()
send_message(a, c)
...
send_message(b, c)
...
send_message(x)
handler foo (msg, conv)
    send_message(z, c)
...
register_handler(foo, a.type)
a is handled before b, which is handled before z. x can be handled in parallel to a, b and z. Once all messages in a conversation have been handled, the conversation is destroyed.
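One way to implement that (a sketch; all names are mine): hash each conversation tag onto a fixed set of single-threaded executors, so messages in the same conversation run in FIFO order while untagged messages run fully in parallel:

    import java.util.concurrent.*;

    class ConversationDispatcher {
        private final ExecutorService[] lanes;   // one thread per lane
        private final ExecutorService untagged = Executors.newWorkStealingPool();

        ConversationDispatcher(int laneCount) {
            lanes = new ExecutorService[laneCount];
            for (int i = 0; i < laneCount; i++) {
                lanes[i] = Executors.newSingleThreadExecutor();
            }
        }

        void dispatch(Object conversationTag, Runnable handlerChain) {
            if (conversationTag == null) {
                untagged.execute(handlerChain);    // free to run in parallel
            } else {
                int lane = Math.floorMod(conversationTag.hashCode(), lanes.length);
                lanes[lane].execute(handlerChain); // FIFO within a conversation
            }
        }
    }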
I'd say do something different altogether. Don't send work to the threads. Have the threads pull work when they finish their previous work.
Maintain a fixed number of worker threads (the optimal number equals the number of CPU cores in the system) and have each of them sequentially pull the next task from the global queue after finishing the previous one. Obviously, you would need to keep track of dependencies between messages to defer handling of a message until its dependencies are fully handled.
This could be done with very small synchronization overhead - possibly only with atomic operations, no heavy primitives like mutexes or semaphores.
Also, if you pass a message to each handler by reference, instead of making a copy, having the same message handled simultaneously by different handlers on different CPU cores can actually improve cache performance, as higher levels of cache (usually from L2 upwards) are often shared between CPU cores - so when one handler reads a message into the cache, the other handler on the second core will have this message already in L2. So think carefully - do you really need to copy the messages?
If possible I would go for number two, with some tweaks. Do you really need every message to be in order? I find that to be an unusual case. Some messages we just need to handle as soon as possible, and some messages need to be processed before another message, but not before every message.
If there are some messages that have to be in order, then mark them in some way. You can mark them with a conversation code that lets the processor know that they must be processed in order relative to the other messages in that conversation. Then you can process all conversation-less messages, plus one message from each conversation, concurrently.
Give your design a good look and make sure that only messages that need to be in order are.
I suppose it comes down to whether or not the order is important. If the order is unimportant, you can go for method 2. If the order is important, you go for method 1. Depending on what your application is supposed to do, you can still go for method 2 but use a sequence number so all the messages are processed in the correct order (unless, of course, it is the processing part you are trying to optimize).
The first method also has unpredictable ordering. The processing of message 1 on thread 1 could take very long, making it possible that messages 2, 3 and 4 have long been processed before it finishes.
This would tip the balance to method 2.
Edit:
I see what you mean.
However, why in method 2 would you run the handlers sequentially? In method 1 the ordering doesn't matter, and you're fine with that.
E.g. method 3: handle both the messages and the handlers in parallel.
Of course, here also, the ordering is unguaranteed.
Given that the handlers produce some result, you might just store the results in an ordered list, restoring the ordering eventually.
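A sketch of that ordered-list idea: tag each message with a sequence number before parallel processing, park finished results in a sorted map, and emit the contiguous prefix in order:

    import java.util.concurrent.ConcurrentSkipListMap;

    class Reorderer<R> {
        private final ConcurrentSkipListMap<Long, R> pending = new ConcurrentSkipListMap<>();
        private long nextToEmit = 0;

        // Called by worker threads as each message finishes, in any order.
        synchronized void complete(long seq, R result) {
            pending.put(seq, result);
            // Drain every result now contiguous with the emitted prefix.
            while (!pending.isEmpty() && pending.firstKey() == nextToEmit) {
                emit(pending.pollFirstEntry().getValue());
                nextToEmit++;
            }
        }

        void emit(R result) { System.out.println(result); }
    }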
