Spring Integration distributed parallel scatter-gather pattern

I need to implement the following architecture:
A large order must be split into smaller orders (in parallel) and sent to a downstream async REST endpoint.
The downstream ordering API publishes a message to a reply queue (Kafka/RabbitMQ) after completing each order (failed or success), with correlation IDs.
I need an aggregating listener to collect all the responses and send the final output to the caller.
I am thinking of using the Spring Integration scatter-gather pattern and other useful Spring features.
Can you show me an example of how such an architecture can be implemented with the help of Spring Integration?

A large order that must be split into smaller orders (in parallel)
This is not what scatter-gather is designed for. Its purpose is to make many requests for the same input, e.g. ask several dealerships for a car quote and then choose the best one for you.
What you are asking for is more like a splitter-aggregator.
So, you just apply a split function to your order object and produce as many items as you need into its output channel. That channel has to be an ExecutorChannel so the split items can be processed in parallel.
Since you talk about a reply to the original client, you cannot make your aggregator distributed (several instances of the same application), but you will still gain an async, parallel processing benefit just with that ExecutorChannel. Don't forget to carry the replyChannel header throughout your flow, so the aggregator at the end knows where to send the reply.
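For example, here is a minimal Java DSL sketch of such a flow. The Order/SubOrder types, the channel name, the pool size and the downstream call are placeholders for illustration, not part of any real API:

    import java.util.List;
    import java.util.concurrent.Executors;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.integration.dsl.IntegrationFlow;
    import org.springframework.integration.dsl.IntegrationFlows;
    import org.springframework.integration.dsl.MessageChannels;

    @Configuration
    public class OrderSplitConfig {

        // Hypothetical domain types, just for the sketch
        public record SubOrder(String id) {}
        public record Order(List<SubOrder> subOrders) {}

        @Bean
        public IntegrationFlow orderFlow() {
            return IntegrationFlows.from("orders.input")
                    // one message per sub-order; split() adds the correlation and
                    // sequence headers the downstream aggregator will rely on
                    .split(Order.class, Order::subOrders)
                    // ExecutorChannel: the split items are processed in parallel
                    .channel(MessageChannels.executor(Executors.newFixedThreadPool(8)))
                    // placeholder for the call to the downstream REST endpoint
                    .handle(SubOrder.class, (subOrder, headers) -> placeDownstream(subOrder))
                    // the default aggregator correlates on the headers set by split()
                    // and releases once all sub-order results have arrived
                    .aggregate()
                    .get();
        }

        private Object placeDownstream(SubOrder subOrder) {
            // e.g. a WebClient/RestTemplate call; return the downstream result
            return subOrder.id() + ":OK";
        }
    }

If this flow is fronted by a @MessagingGateway, the replyChannel header is populated for you, and the aggregator's output is routed back to the waiting caller automatically.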

Related

Calculating performance metrics using trace.json for simulation in UnetStack3

I am working on a tool to calculate different performance metrics (like average end-to-end delay, throughput, packet delivery ratio, etc.) for simulations of underwater networks in UnetStack3. I have an implementation in Python that parses trace.json and calculates end-to-end delay. However, it works only for topologies with one-hop communication, as I have relied on the MessageID of the events. I also analyzed the implementation of the VizTrace tool in Julia and tried to extend it. However, I am unable to figure out how to correlate events that occur on different nodes in order to calculate performance measures in a multi-hop topology. Please let me know what approach I should follow, with Python and with VizTrace.
Every event entry in the trace.json file contains a few useful pieces of information to help you associate events across nodes:
component: name and class of the agent, along with the node on which it is running
threadID: unique identifier within a node that associates related events together.
stimulus: contains messageID of the message that caused this event.
response: contains messageID of the message that was sent in response to this event.
For more details, see https://blog.unetstack.net/whats-new-in-UnetStack-3.3
Tracing an event through the agents in the node simply involves collating the events with the same threadID. In order to trace an event across nodes, you need to look at the messageID of the response messages, and find the equivalent stimulus message (same messageID) on the next node. Then you do the same from that node to the following one, until you reach the destination.
If you are using the HalfDuplexModem simulation model, then these messages that go across nodes (and hence across threadIDs) will be the HalfDuplexModem$TX messages. Example: https://blog.unetstack.net/assets/img/mermaid-diagram-20210408195013.svg
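As a sketch of that collation logic (shown in Java with Jackson here; the same approach maps directly to Python's json module), assuming the trace events have been flattened into a single JSON array of objects carrying component, threadID, stimulus.messageID and response.messageID; the real trace.json nests events inside groups, so adjust the parsing accordingly:

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    import java.io.File;
    import java.util.*;

    public class TraceCorrelator {
        public static void main(String[] args) throws Exception {
            // Assumed input: a flat JSON array of event objects
            JsonNode events = new ObjectMapper().readTree(new File("events.json"));

            // Intra-node correlation: group events sharing a threadID
            Map<String, List<JsonNode>> byThread = new LinkedHashMap<>();
            // Inter-node correlation: index events by their stimulus messageID
            Map<String, JsonNode> byStimulus = new HashMap<>();
            for (JsonNode e : events) {
                byThread.computeIfAbsent(e.path("threadID").asText(), k -> new ArrayList<>()).add(e);
                if (e.hasNonNull("stimulus")) {
                    byStimulus.put(e.path("stimulus").path("messageID").asText(), e);
                }
            }

            // Follow one transmission across nodes: the response messageID on one
            // node matches the stimulus messageID of an event on the next node
            JsonNode current = events.get(0);
            while (current != null && current.hasNonNull("response")) {
                String msgId = current.path("response").path("messageID").asText();
                System.out.println(current.path("component").asText() + " -> " + msgId);
                current = byStimulus.get(msgId);
            }
        }
    }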

Control Azure Service Bus Queue Message Reception

We have a distributed architecture, and there is a native system which needs to be called. The challenge is the capacity of that system: it is not scalable and cannot take on more request load at the same time. We have implemented Service Bus queues, where a message handler listens to the queue and makes a call to the native system. The current challenge is that whenever a message is posted to the queue, the message handler immediately processes the request. However, we want to process only two requests at a time: pick two, process them, and then move on to the next two. Does the Service Bus queue provide an inbuilt option to control this, or can we only do this with custom logic?
// MessageHandlerOptions requires an exception handler in its constructor
var options = new MessageHandlerOptions(args =>
{
    Console.WriteLine(args.Exception);
    return Task.CompletedTask;
})
{
    MaxConcurrentCalls = 2, // process at most two messages at a time
    AutoComplete = false    // complete/abandon explicitly below
};

client.RegisterMessageHandler(
    async (message, cancellationToken) =>
    {
        try
        {
            // ... process the message here ...
            await client.CompleteAsync(message.SystemProperties.LockToken);
        }
        catch
        {
            await client.AbandonAsync(message.SystemProperties.LockToken);
        }
    },
    options);
The Message Handler API is designed for concurrency. If you'd like to process two messages at any given point in time, then the Handler API with a maximum concurrency of two is your answer. In case you need to process a batch of two messages at a time, this API is not what you need; rather, fall back to building your own message pump using the lower-level API outlined in the answer provided by Mikolaj.
Be careful with re-locking messages, though. It's not a guaranteed operation, as it's performed client-side: if there's a network communication problem, the broker will reset the lock and the message will be processed again by another competing consumer if you scale out. That is why scaling out in your scenario is probably going to be a challenge.
An additional point about the lower-level MessageReceiver API when receiving more than a single message: ReceiveAsync(n) does not guarantee that n messages will be retrieved. If you absolutely have to have n messages, you'll need to loop until you have accumulated n, and no fewer, as sketched below.
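To illustrate that loop, here is a sketch against the Azure Service Bus Java SDK (azure-messaging-servicebus), whose synchronous receiveMessages(max, wait) carries the same caveat; the connection-string environment variable, queue name and batch size are assumptions:

    import com.azure.messaging.servicebus.*;

    import java.time.Duration;
    import java.util.ArrayList;
    import java.util.List;

    public class TwoAtATimeReceiver {
        public static void main(String[] args) {
            ServiceBusReceiverClient receiver = new ServiceBusClientBuilder()
                    .connectionString(System.getenv("SB_CONNECTION")) // assumed env var
                    .receiver()
                    .queueName("native-system-requests")              // hypothetical queue
                    .buildClient();

            int batchSize = 2;
            List<ServiceBusReceivedMessage> batch = new ArrayList<>();
            // receiveMessages may return fewer than requested, so keep looping;
            // note that messages received early are burning their lock duration
            // while we wait for the rest of the batch
            while (batch.size() < batchSize) {
                receiver.receiveMessages(batchSize - batch.size(), Duration.ofSeconds(5))
                        .forEach(batch::add);
            }
            for (ServiceBusReceivedMessage message : batch) {
                // ... call the native system ...
                receiver.complete(message);
            }
            receiver.close();
        }
    }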
And the last point, about the management client and getting the queue message count: I strongly suggest not doing that. The management client is not intended for frequent use at run-time; rather, it's meant for occasional calls, as these calls are very slow. Given that you might end up with a single processing endpoint constrained to only two messages at a time (not even per second), these calls would add to the overall processing time.
Off the top of my head I don't think anything like that is supported out of the box, so your best bet is to do it yourself.
I would suggest you look at the ReceiveAsync() method, which allows you to receive a specific number of messages. (Note: I don't think it guarantees that if you ask for two messages it will always return two. For instance, if there's just one message in the queue, it will probably return that one, even though you asked for two.)
You could potentially use ReceiveAsync() in combination with the PeekAsync() method, where you can also specify the number of messages you want to peek. If the peeked number of messages is 2, then you can call ReceiveAsync() with better chances of getting the desired two messages.
Another way would be to have a look at the ManagementClient and its GetQueueRuntimeInfoAsync() method, which will give you the number of messages in the queue. With that information you could then call ReceiveAsync() as mentioned earlier.
However, be aware that if you have multiple receivers listening to the same queue, there's no guarantee that any of the above will work, as there's no way to determine whether those messages were received by another process.
It might be that you will need a more sophisticated way of handling this: receive one message, keep it alive (renew its lock, etc.) until you get another message, and then process them together.
I don't think I helped too much, but maybe at least it will give you some ideas.

Best practices for internal API calls to external APIs with a buffer

I have different external APIs doing basically the same thing but in different ways: adding product information (ext_api).
I would like to make an adapter API that would call, behind the scenes, the different external APIs (adapter_api).
My problem is the following: the external APIs are optimised for calls with a batch of product attributes, whereas my API would be called on a product-by-product basis.
I would like to maintain a buffer of product attributes that grows as my adapter_api is called. When the number of buffered product attributes reaches a certain limit, ext_api would be called and the buffer reset, ready to receive more product attributes.
I'm wondering how to achieve that. I was thinking of making a REST API in Python that would store the buffer of product attributes. I would like this REST API to be able to scale on a Kubernetes cluster: it would need low latency, and several instances of this API would write to the buffer of products until one of them reaches the limit and makes the call to the external API.
Here is what I have in mind:
Are there any best practices concerning the buffer in this use case? To add some extra information: my main purpose here is to hide from internal business APIs (not drawn) the complexity of calling many different external APIs, each of which has its own rules and credentials.
Thank you very much for your help.
You didn't tell us your performance evaluation criteria.
You did tell us this:
don't know how to store the buffer: I would like to avoid databases or files.
which makes little sense, since there's a simple answer to this question:
Are there any best practices for this use case?
Yes. The best practice is to append requests to buffer.txt and send the batch when that file exceeds some threshold. A convenient way to implement the threshold would be to send when getsize() reports a large enough value.
If requests are of quite different sizes and the batch size really matters to you, then append a single byte to a second file, and use the size of that file to indicate how many entries are enqueued.
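A sketch of that file-based buffer (in Java here; the getsize() above refers to Python's os.path.getsize, for which Files.size is the Java counterpart). The threshold, file name and ext_api call are assumptions, and the synchronized keyword only serializes writers within one process; multiple instances would additionally need file locking:

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.*;
    import java.util.List;

    public class FileBuffer {
        private static final Path BUFFER = Paths.get("buffer.txt"); // shared buffer file
        private static final long THRESHOLD_BYTES = 64 * 1024;      // assumed flush threshold

        // Append one serialized product-attributes request; flush once the file is big enough
        public static synchronized void add(String requestLine) throws IOException {
            Files.writeString(BUFFER, requestLine + "\n", StandardCharsets.UTF_8,
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            if (Files.size(BUFFER) >= THRESHOLD_BYTES) {
                List<String> batch = Files.readAllLines(BUFFER);
                Files.delete(BUFFER);  // reset the buffer
                callExtApi(batch);     // hypothetical batched call to ext_api
            }
        }

        private static void callExtApi(List<String> batch) {
            // ... POST the accumulated batch to the external API ...
        }
    }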
requirements
The heart of your question seems to revolve around what was left unsaid:
What is the cost function for sending too many "small" batches to ext_api?
What is the cost function for the consumer of the adapter_api; what does it care about? Low-latency return, perhaps?
If ext_api permanently fails (say, a day of downtime), do we have some responsibility for quickly notifying the consumer that its updates are going into a black hole?
And why would using the filesystem be inappropriate? It seems a perfect match for your needs.
Consider using a global in-memory object, such as a list or queue, for the batch you're accumulating. You might want to protect accesses with a lock; see the sketch below.
Maybe your client doesn't really want a one-product-at-a-time API. Maybe you'd prefer to have your client accumulate items, sending only when its batch size is big enough.
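A minimal sketch of that in-memory approach (Java, with a hypothetical batch size and ext_api call): the lock protects the buffer, and the flush happens outside the lock to keep per-request latency low:

    import java.util.ArrayList;
    import java.util.List;

    public class InMemoryBatcher {
        private static final int BATCH_SIZE = 100;  // assumed limit
        private final List<String> buffer = new ArrayList<>();

        // Called by adapter_api for each incoming product; thread-safe via the lock
        public void add(String productAttributes) {
            List<String> toSend = null;
            synchronized (buffer) {
                buffer.add(productAttributes);
                if (buffer.size() >= BATCH_SIZE) {
                    toSend = new ArrayList<>(buffer);  // copy out, then reset
                    buffer.clear();
                }
            }
            if (toSend != null) {
                callExtApi(toSend);  // flush outside the lock
            }
        }

        private void callExtApi(List<String> batch) {
            // ... batched call to ext_api ...
        }
    }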

Best Practice for Batch Processing with RabbitMQ

I'm looking for the best way to perform ETL using Python.
I have a channel in RabbitMQ which sends events (possibly as often as every second).
I want to process them in batches of 1000.
The main problem is that the RabbitMQ interface (I'm using pika) raises a callback for every single message.
I looked at the Celery framework, but its batch feature was deprecated in version 3.
What is the best way to do this? I'm thinking about saving my events in a list and, when it reaches 1000, copying it to another list and performing my processing. However, how do I make it thread-safe? I don't want to lose events, and I'm afraid of losing events while synchronizing the list.
It sounds like a very simple use case; however, I didn't find any good best practice for it.
How do I make it thread-safe?
How about setting the consumer prefetch count to 1000? If a consumer's unacked messages reach its prefetch limit, RabbitMQ will not deliver any more messages to it.
Don't ACK the received messages until you have 1000 of them. Then copy them to another list and perform your processing. When your job is done, ACK the last message with the multiple flag set, and all messages before it will be ACKed by the RabbitMQ server.
But I am not sure whether a large prefetch is the best practice. A sketch of this approach follows.
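Here is that idea with the RabbitMQ Java client (the queue name and host are assumptions; with pika the equivalents are channel.basic_qos, basic_consume with auto_ack=False, and basic_ack with multiple=True):

    import com.rabbitmq.client.*;

    import java.util.ArrayList;
    import java.util.List;

    public class BatchConsumer {
        private static final int BATCH_SIZE = 1000;

        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");  // assumed broker address
            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();

            channel.basicQos(BATCH_SIZE);  // prefetch limit = batch size
            List<Delivery> batch = new ArrayList<>();

            DeliverCallback onDeliver = (consumerTag, delivery) -> {
                List<Delivery> toProcess = null;
                synchronized (batch) {
                    batch.add(delivery);
                    if (batch.size() >= BATCH_SIZE) {
                        toProcess = new ArrayList<>(batch);
                        batch.clear();
                    }
                }
                if (toProcess != null) {
                    // ... process the 1000 events ...
                    long lastTag = toProcess.get(toProcess.size() - 1)
                            .getEnvelope().getDeliveryTag();
                    channel.basicAck(lastTag, true);  // multiple=true acks all earlier tags
                }
            };
            channel.basicConsume("events", false, onDeliver, consumerTag -> { });
        }
    }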
First of all, you should not "batch" messages from RabbitMQ unless you really have to. The most efficient way to work with messaging is to process each message independently.
If you need to combine messages in a batch, I would use a separate data store to temporarily store the messages, and then process them when they reach a certain condition. Each time you add an item to the batch, you check that condition (for example, you have reached 1000 messages) and trigger the processing of the batch; see the sketch below.
This is better than keeping a list in memory, because if your service dies, the messages will still be persisted in the database.
Note: if you have a single processor per queue, this can work without any synchronization mechanism. If you have multiple processors, you will need to implement some sort of locking mechanism.
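As a sketch of that idea with a relational store (plain JDBC, assuming a pending_events table with an auto-increment id column and a LIMIT-capable dialect such as PostgreSQL; all names here are hypothetical):

    import java.sql.*;
    import java.util.ArrayList;
    import java.util.List;

    public class DbBatcher {
        private static final int BATCH_SIZE = 1000;

        // Persist each event, then process a batch once the condition is met
        public void onEvent(Connection db, String payload) throws SQLException {
            try (PreparedStatement ins =
                         db.prepareStatement("INSERT INTO pending_events(payload) VALUES (?)")) {
                ins.setString(1, payload);
                ins.executeUpdate();
            }
            try (Statement st = db.createStatement();
                 ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM pending_events")) {
                rs.next();
                if (rs.getLong(1) >= BATCH_SIZE) {
                    processBatch(db);
                }
            }
        }

        private void processBatch(Connection db) throws SQLException {
            List<String> batch = new ArrayList<>();
            long lastId = 0;
            try (Statement st = db.createStatement();
                 ResultSet rs = st.executeQuery(
                         "SELECT id, payload FROM pending_events ORDER BY id LIMIT " + BATCH_SIZE)) {
                while (rs.next()) {
                    lastId = rs.getLong("id");
                    batch.add(rs.getString("payload"));
                }
            }
            // ... process the batch ...
            try (Statement del = db.createStatement()) {
                del.executeUpdate("DELETE FROM pending_events WHERE id <= " + lastId);
            }
        }
    }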

Scala : Akka - multiple event buses for actorsystem or having prioritized events?

I have a single ActorSystem, which has several subscribers to its eventStream. The application may produce thousands of messages per second, and some of the messages are more important than the rest, so they should be handled first.
I found that every ActorSystem has a single eventStream attached, thus it seems that I need to register the same actor class with two (or more) ActorSystems in order to receive important messages in a dedicated eventStream.
Is this the preferred approach, or are there some tricks for this task? Maybe classifiers can also tweak message priorities somehow?
EventStream is not a data structure that holds events; it just routes events to subscribers. Hence you should use a PriorityMailbox for the listener actors. See the documentation for how to use priority mailboxes: http://doc.akka.io/docs/akka/2.0.3/scala/dispatchers.html#Mailboxes
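The linked docs show the Scala version; as a sketch, the same idea via Akka's classic Java API looks like this (ImportantEvent is a hypothetical marker type for your high-priority messages):

    import akka.actor.ActorSystem;
    import akka.dispatch.PriorityGenerator;
    import akka.dispatch.UnboundedPriorityMailbox;
    import com.typesafe.config.Config;

    // Hypothetical marker for high-priority messages
    interface ImportantEvent {}

    // Mailbox that dequeues important messages before everything else
    public class ImportantFirstMailbox extends UnboundedPriorityMailbox {
        public ImportantFirstMailbox(ActorSystem.Settings settings, Config config) {
            super(new PriorityGenerator() {
                @Override
                public int gen(Object message) {
                    return (message instanceof ImportantEvent) ? 0 : 1; // lower runs first
                }
            });
        }
    }

Register it in application.conf (prio-mailbox { mailbox-type = "ImportantFirstMailbox" }) and create the subscriber with Props.create(Listener.class).withMailbox("prio-mailbox"); the single eventStream can then keep fanning out to all subscribers while each listener drains important messages first.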
