I have a process that's supposed to handle two different kinds of messages and process them similarly but differently.
Naturally I would use two separate queues, one per kind of message, and call consume() twice.
The other possibility would be to use just one queue, distinguish the messages by some kind of "message type" property inside the content buffer, and handle each message in a switch case.
Which would be the more "recommended" way of doing this?
Are there any advantages/disadvantages to either of the two approaches?
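For illustration, the single-queue variant boils down to dispatching on a type field inside the message (a minimal sketch with a hypothetical Message struct, not tied to any particular messaging library):

#include <iostream>
#include <string>

// Hypothetical payload: the "message type" travels inside the content buffer.
enum class MsgType { KindA, KindB };

struct Message {
    MsgType type;
    std::string body;
};

// One consumer callback dispatching on the embedded type.
void handleMessage(const Message& msg) {
    switch (msg.type) {
        case MsgType::KindA:
            std::cout << "handling kind A: " << msg.body << "\n";
            break;
        case MsgType::KindB:
            std::cout << "handling kind B: " << msg.body << "\n";
            break;
    }
}

With two queues I would instead register two consume() callbacks, one per queue, and skip the switch entirely.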
I took the sample code from Apache here: https://activemq.apache.org/components/cms/example
(the producer section specifically) and tried to rewrite it so it doesn't create any threads for producing. Instead, my program's main thread creates a producer object and sets up the connection, session, destination, and so on, and then sends messages using a message producer. This is all done in a singleton, so my program has just one Producer object and goes to it whenever it needs to put a message on one of my queues. The example code, by contrast, creates a producer for every thread, sets everything up each time just to send a message, and then tears everything down again, every time you want to produce something from your program.
My program crashes as soon as I call send() on a message producer with any given message. After some digging I found that the send call tries to lock a mutex and enter a critical section. I guess this is for threading? I don't use threads at all in my code, so I guess that's why it crashes. Does anyone know a way to bypass this? I don't want to use multiple threads, so I won't need to worry about two threads calling send at the same time, or whatever problem the mutex is there to solve.
You don't need to create a thread to run the producer in, but internally the library is going to use a couple of threads, as that is necessary to meet the API requirements. Also, just because you don't use multiple threads doesn't mean others won't, so the mutex is an internal requirement.
You are free to modify the example to create a producer only inside the main thread of the application; the example uses two threads because it acts as both a producer and a consumer.
One likely cause of the error you are seeing is that you did not initialize the ActiveMQ-CPP library:
activemq::library::ActiveMQCPP::initializeLibrary();
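A minimal single-threaded producer then looks roughly like this (a sketch only: the broker URI tcp://localhost:61616 and the queue name MY.QUEUE are placeholders, and error handling is omitted). The important part is that initializeLibrary() runs before any CMS object is created and shutdownLibrary() runs after they have all been destroyed:

#include <activemq/library/ActiveMQCPP.h>
#include <activemq/core/ActiveMQConnectionFactory.h>
#include <cms/Connection.h>
#include <cms/Session.h>
#include <cms/Destination.h>
#include <cms/MessageProducer.h>
#include <cms/TextMessage.h>
#include <memory>

int main() {
    activemq::library::ActiveMQCPP::initializeLibrary();   // must run before any CMS call

    {
        activemq::core::ActiveMQConnectionFactory factory("tcp://localhost:61616");
        std::unique_ptr<cms::Connection> connection(factory.createConnection());
        connection->start();

        std::unique_ptr<cms::Session> session(
            connection->createSession(cms::Session::AUTO_ACKNOWLEDGE));
        std::unique_ptr<cms::Destination> destination(session->createQueue("MY.QUEUE"));
        std::unique_ptr<cms::MessageProducer> producer(
            session->createProducer(destination.get()));

        std::unique_ptr<cms::TextMessage> message(session->createTextMessage("hello"));
        producer->send(message.get());                      // fine to call from the main thread

        connection->close();
    }   // all CMS objects destroyed here

    activemq::library::ActiveMQCPP::shutdownLibrary();      // only after all CMS objects are gone
    return 0;
}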
We have a distributed architecture in which a native system needs to be called. The challenge is the capacity of that system: it is not scalable and cannot take on more request load at the same time. We have implemented Service Bus queues, with a message handler listening to the queue and making the call to the native system. The current behaviour is that whenever a message is posted to the queue, the message handler processes it immediately. However, we want to process only two requests at a time: pick two, process them, and then move on to the next two. Does the Service Bus queue provide a built-in option to control this, or do we have to do it with custom logic?
var options = new MessageHandlerOptions()
{
    MaxConcurrentCalls = 1,   // number of messages the handler pump processes concurrently
    AutoComplete = false      // we complete or abandon each message explicitly below
};

client.RegisterMessageHandler(
    async (message, cancellationToken) =>
    {
        try
        {
            // Handler to process the message
            await client.CompleteAsync(message.SystemProperties.LockToken);
        }
        catch
        {
            await client.AbandonAsync(message.SystemProperties.LockToken);
        }
    }, options);
The Message Handler API is designed for concurrency. If you'd like to process two messages at any given point in time, then the Handler API with a maximum concurrency of two is your answer. If you need to process a batch of two messages at any given point in time, this API is not what you need; instead, fall back to building your own message pump using the lower-level API outlined in the answer provided by Mikolaj.
Be careful with re-locking messages, though. Lock renewal is a client-side operation and is not guaranteed: if there is a network communication failure, the broker will reset the lock and the message will be processed again by another competing consumer if you scale out. That is why scaling out in your scenario is probably going to be a challenge.
An additional point about the lower-level MessageReceiver API when receiving more than a single message: ReceiveAsync(n) does not guarantee that n messages will be retrieved. If you absolutely have to have n messages, you'll need to loop until you have received n and no fewer.
And a last point about the management client and getting the queue's message count: I strongly suggest not doing that. The management client is not intended for frequent use at run time; it is meant for occasional calls, as these calls are very slow. Given that you might end up with a single processing endpoint constrained to only two messages at a time (not even per second), these calls will add to the overall processing time.
Off the top of my head I don't think anything like that is supported out of the box, so your best bet is to do it yourself.
I would suggest you look at the ReceiveAsync() method, which allows you to receive a specific number of messages. (NOTE: I don't think it guarantees that if you ask for two messages it will always get you two. For instance, if there's just one message in the queue it will probably return that one, even though you asked for two.)
You could potentially use the ReceiveAsync() method in combination with the PeekAsync() method, where you can also provide the number of messages you want to peek. If the peeked number of messages is 2, then you can call ReceiveAsync() with better chances of getting the desired two messages.
Another way would be to have a look at the ManagementClient and the queue's GetQueueRuntimeInfoAsync() method, which will give you the number of messages in the queue. With that information you could then call the ReceiveAsync() mentioned earlier.
However, be aware that if you have multiple receivers listening to the same queue, then there are no guarantees that anything above will work, as there's no way to determine whether those messages were received by another process or not.
It might be that you will need to go with a more sophisticated way of handling this and receive one message, then keep it alive (renew lock etc.) until you get another message and then process them together.
I don't think I helped too much but maybe at least it will give you some ideas.
I am trying to implement a messaging system between two processes with boost.interprocess and message_queue.
First problem: One queue can only be used for sending messages from process A to B, not B to A.
Thus, I am using two queues in both processes. Process A listens/receives at Queue-A and sends on Queue-B; Process B listens/receives at Queue-B and sends on Queue-A.
I am unable to get the system to work with both queues. Depending on the order in which the processes call
boost::interprocess::message_queue(boost::interprocess::open_or_create,...)
or
boost::interprocess::message_queue(boost::interprocess::open_only,...)
either one queue works, or the other, or neither.
Even if Process A creates both Queue-A and Queue-B and Process B only opens them, one direction gets stuck in the receive function inside boost::interprocess and never wakes up.
1) Is it possible to get bidirectional messaging/signalling to work with interprocess::message_queue using two queues in each process?
2) Is there a better way to get bidirectional messaging without using message_queue?
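For reference, a stripped-down version of what I am attempting (queue names, capacities and message sizes are placeholders; error handling omitted):

// Process A: creates both queues, receives on Queue-A, sends on Queue-B.
#include <boost/interprocess/ipc/message_queue.hpp>
namespace bip = boost::interprocess;

int main() {
    bip::message_queue::remove("queue_a");   // drop any stale queue left from a previous run
    bip::message_queue::remove("queue_b");
    bip::message_queue qa(bip::create_only, "queue_a", 100, sizeof(int));
    bip::message_queue qb(bip::create_only, "queue_b", 100, sizeof(int));

    int out = 1;
    qb.send(&out, sizeof(out), 0);                            // A -> B

    int in = 0;
    bip::message_queue::size_type received = 0;
    unsigned int priority = 0;
    qa.receive(&in, sizeof(in), received, priority);          // B -> A, blocks until B sends
    return 0;
}

// Process B (separate executable): opens the queues after A has created them,
// receives on Queue-B, sends on Queue-A.
#include <boost/interprocess/ipc/message_queue.hpp>
namespace bip = boost::interprocess;

int main() {
    bip::message_queue qa(bip::open_only, "queue_a");
    bip::message_queue qb(bip::open_only, "queue_b");

    int in = 0;
    bip::message_queue::size_type received = 0;
    unsigned int priority = 0;
    qb.receive(&in, sizeof(in), received, priority);          // A -> B

    int out = 2;
    qa.send(&out, sizeof(out), 0);                            // B -> A
    return 0;
}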
I did not receive any comments on this. The solution was to not use boost::interprocess::message_queue. With the help of boost/interprocess/shared_memory_object I wrote my own simple library for unidirectional interprocess messaging: https://github.com/svebert/InterprocessMsg
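For anyone curious, the core pattern behind such a shared-memory channel (a generic sketch of the idea, not the actual code of the library above) is a struct containing an interprocess mutex and condition variable that is placed into a shared_memory_object:

#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <boost/interprocess/sync/interprocess_mutex.hpp>
#include <boost/interprocess/sync/interprocess_condition.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>
#include <new>

namespace bip = boost::interprocess;

// One direction of the channel: a single slot guarded by a mutex and condition.
struct Slot {
    bip::interprocess_mutex mutex;
    bip::interprocess_condition cond;
    bool full = false;
    char data[256];
};

int main() {
    // Creating side: allocate the shared memory and construct the slot in place.
    bip::shared_memory_object::remove("demo_slot");
    bip::shared_memory_object shm(bip::create_only, "demo_slot", bip::read_write);
    shm.truncate(sizeof(Slot));
    bip::mapped_region region(shm, bip::read_write);
    Slot* slot = new (region.get_address()) Slot;

    // Receiving side of this direction: wait until the other process fills the slot.
    bip::scoped_lock<bip::interprocess_mutex> lock(slot->mutex);
    while (!slot->full)
        slot->cond.wait(lock);
    // ... read slot->data here, then mark the slot empty and notify the writer ...
    slot->full = false;
    slot->cond.notify_one();
    return 0;
}

The second process opens the same shared_memory_object with open_only, maps it, and does the mirror-image locking and notifying; a second Slot gives you the other direction.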
I am new to the event sourcing concept, so there are a couple of points I don't understand. One of them is how to handle the following scenario:
I've got two instances of a service. Both of them listen to an event queue. There are two messages: CreateUser and UpdateUser. The first instance picks up CreateUser and the second instance picks up UpdateUser. For some reason the second instance handles its command quicker, but there is no User to update yet, since it has not been created.
What am I getting wrong here?
Review Udi Dahan's essay "Race Conditions Don't Exist":
A microsecond difference in timing shouldn’t make a difference to core business behaviors.
In other words, what you want is logic such that the order of the messages doesn't change the final result, and a first writer wins policy (aka compare-and-swap), so that when you have two processes trying to update the same resource, the loser of the data race has to start over.
As a general rule, events should be understood to support multiple observers - all subscribers get to see all events. So a queue with competing consumers isn't the usual approach unless you are trying to distribute a specific subscriber across multiple processes.
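As a rough illustration of first-writer-wins, here's a toy in-memory append guarded by an expected-version check (hypothetical types, not any real event store API); if the check fails, the caller re-reads the stream and re-runs its decision logic:

#include <map>
#include <mutex>
#include <string>
#include <vector>

// Toy event store: "first writer wins" via a compare-and-swap on the stream version.
struct EventStore {
    std::mutex m;
    std::map<std::string, std::vector<std::string>> streams;

    // Returns false if another writer appended first; the caller must
    // re-read the stream and retry with the new version.
    bool append(const std::string& stream, std::size_t expectedVersion,
                const std::string& event) {
        std::lock_guard<std::mutex> lock(m);
        auto& events = streams[stream];
        if (events.size() != expectedVersion) {
            return false;                      // lost the race: start over
        }
        events.push_back(event);
        return true;
    }
};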
You do not have a concurrency issue that you can solve in your own code. This comes down entirely to either using the wrong tools or not reading the documentation.
Both of them listen to an event queue.
And that queue should support that. Examples are Azure queues, where I can listen AND TELL THE QUEUE not to show the event to anyone else for X seconds (which is enough for me to decide whether I handled it or not). If I do not answer, the event is reinserted after that time. If I kill it first, there is no concurrency.
So, you need a backend queue that can handle this.
I have a single ActorSystem, which has several subscribers to its eventStream. The application may produce thousands of messages per second, and some of the messages are more important than the rest, so they should be handled first.
I found that every ActorSystem has a single eventStream attached, so it seems I would need to register the same actor class with two (or more) ActorSystems in order to receive the important messages on a dedicated eventStream.
Is this the preferred approach, or are there tricks for this task? Maybe classifiers can also tweak message priorities somehow?
The EventStream is not a data structure that holds events; it just routes events to subscribers. Hence you should use a PriorityMailbox for the listener actors; see the documentation for how to use priority mailboxes: http://doc.akka.io/docs/akka/2.0.3/scala/dispatchers.html#Mailboxes