Is it possible to have serial processing in Spring Integration, that is, the response of one request is passed to the next as its request? I have a requirement where only after getting a response from service-1 can I initiate the call to service-2. This was suggested because only service-1 has a rollback service implemented.
Is it possible to control which request is processed first? I want request 1 to be processed first. Is this also possible?
It really depends on what you are trying to do, but the general solution would be to use a <publish-subscribe-channel/> set the order on the first service to "1" and the second to "2".
By default, the second service will only be called if the first is successful.
If you need to aggregate the results, add an aggregator downstream of both services.
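As a sketch of that arrangement (channel and bean names here are made up for illustration):

```xml
<int:publish-subscribe-channel id="requests"/>

<!-- With no task-executor on the channel, the subscribers run sequentially
     on the calling thread in "order"; if service1 throws an exception,
     service2 is never invoked. -->
<int:service-activator input-channel="requests" ref="service1" order="1"/>
<int:service-activator input-channel="requests" ref="service2" order="2"/>
```

Since both subscribers receive the same message on the same thread, the second call only happens after the first returns successfully.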
We have our HTTP layer served by Play Framework in Scala. One of our APIs is something of the form:
POST /customer/:id
Requests are sent by our UI team which calls these APIs through a React Framework.
The issue is that, sometimes, the requests are issued in batches, successively one after the other for the same customer ID. When this happens, different threads process these requests, and so our persistence layer (MySQL) reaches an inconsistent state due to the difference in the timestamps at which these requests are handled.
Is it possible to configure some sort of thread affinity in Play Scala? What I mean by that is, can I configure Play to ensure that requests of a particular customer ID are handled by the same thread throughout the life-cycle of the application?
A batch puts several API calls into a single HTTP request. That is, a batch request is a set of commands in one HTTP request, as here: https://developers.facebook.com/docs/graph-api/making-multiple-requests/
You describe it as
The issue is that, sometimes, the requests are issued in batches, successively one after the other for the same customer ID. When this happens, different threads process these requests, and so our persistence layer (MySQL) reaches an inconsistent state due to the difference in the timestamps at which these requests are handled.
This is a set of concurrent requests. The Play Framework usually works as a stateless server, and I assume you have also organized it as stateless. Nothing binds one request to another, so you can't control the order. Well, you can, if you create a special protocol: an "open batch" request, then request #1, #2, ..., then a "close batch" request. You would need to check that every request in the batch arrived correctly, and you would also need to run some stateful threads and some queues... Akka could help with this, but I am pretty sure you won't want to do it.
This issue is not specific to the Play Framework; you would reproduce it in any server. See, for example, the general case: Is it possible to receive out-of-order responses with HTTP?
You can go in either way:
1. "Batch" the command in one request
You need to change the client so it packs the "batch" of requests into one, and change the server so it processes all the commands from the batch one after another.
Example of the requests: https://developers.facebook.com/docs/graph-api/making-multiple-requests/
2. "Pipeline" requests
You need to change the client so it sends the next request only after receiving the response to the previous one.
Example: Is it possible to receive out-of-order responses with HTTP?
"The solution to this is to pipeline Ajax requests, transmitting them serially. ... The next request is sent only after the previous one has returned successfully."
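The pipelining idea can be sketched in plain Java by chaining futures, so that the second call is only issued once the first response has arrived (the `call` method here is a stand-in for a real HTTP call; all names are illustrative):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CopyOnWriteArrayList;

public class Pipeline {
    // Records the order in which the "services" are actually invoked.
    static final List<String> callLog = new CopyOnWriteArrayList<>();

    // Stand-in for an asynchronous HTTP call.
    static CompletableFuture<String> call(String service) {
        return CompletableFuture.supplyAsync(() -> {
            callLog.add(service);
            return service + "-response";
        });
    }

    public static void main(String[] args) {
        // thenCompose guarantees the second call starts only after the
        // first response is available, i.e. a serial pipeline.
        String result = call("service-1")
                .thenCompose(r1 -> call("service-2"))
                .join();
        System.out.println(callLog);  // service-1 always precedes service-2
        System.out.println(result);
    }
}
```

The same shape applies to a browser client: chain the promise of request N+1 off the resolution of request N instead of firing both at once.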
So I'd like to do the following: every N seconds, get X messages from a session-enabled queue (peek-lock) and then send them together (in a single request) on to the next processing point. Here are the options I've come up with so far:
"Get messages from a queue" action
Seems to require hardcoding a session ID beforehand(?), which is not very handy.
"Batch receiver" logic app
It's still in preview
Custom trigger
Seems like it would work, but requires extra coding.
Any suggestions on how to effectively achieve it via Logic Apps with stuff available today?
You don't need sessions specifically to retrieve a given number of messages in a batch... just read 10 messages, then do whatever processing you need.
If you need to also retrieve the messages in order, then yes, use a Session enabled Queue where all callers use the same SessionId.
Keep in mind, the SessionId is an arbitrary application value, so you can use the same value as the queue name if you want. I don't see this as any kind of hurdle; it's just how it works.
You can use a Recurrence Trigger at whatever interval you need.
Sessions are primarily for grouping messages. The SessionId can be any arbitrary value, such as HighPriority/LowPriority, or a value determined at runtime, such as a GUID, if you're doing correlation among specific related messages. Now that I think about it, the FIFO side effect seems mostly there to support correlation scenarios.
One way to address this is to set the maximum concurrency on the logic app.
Go to the settings of the Service Bus receiving action, then enable concurrency with a degree of parallelism of 10.
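In the workflow's code view, that UI setting corresponds (to the best of my understanding) to the trigger's `runtimeConfiguration`; everything in this fragment other than `runtimeConfiguration`/`concurrency`/`runs` is a placeholder:

```json
"triggers": {
    "When_a_message_is_received_in_a_queue": {
        "type": "ApiConnection",
        "recurrence": { "frequency": "Second", "interval": 30 },
        "runtimeConfiguration": {
            "concurrency": { "runs": 10 }
        }
    }
}
```

With `runs` set to 10, up to ten trigger runs may execute in parallel before new runs are throttled.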
I would like to know if I can have persistence in my Spring Integration setup when I use an aggregator that is not backed by a MessageStore, by leveraging the persistence of AMQP (RabbitMQ) queues before and after the aggregator.
I imagine that this would use acks: the aggregator won't ack a message before it has collected all the parts and sent out the resulting message.
Additionally I would like to know if this is ever a good idea :)
I am new to working with queues, and am trying to get a good feel for patterns to use.
My business logic for this is as follows:
I receive messages on one queue.
Each message must result in two unrelated webservice calls (preferably in parallel).
The results of these two calls must be combined with details from the original message.
The combination must then be sent out as a new message on a queue.
Messages are important, so they must not be lost.
I was/am hoping to use only one 'persistent' system, namely RabbitMQ, and not have to add a database as well.
I've tried to keep the question specific, but any other suggestions on how to approach this are greatly appreciated :)
What you would like to do recalls the Scatter-Gather EI pattern.
So, you get a message from AMQP, send it into the scatter-gather endpoint, and wait for the aggregated reply. That is enough to stick with the default acknowledge mode.
Right, the scatterChannel can be a PublishSubscribeChannel with an executor to call the web services in parallel. Either way, the gatherer will wait for replies according to the release strategy and will block the original AMQP listener thread, so the message is not acked prematurely.
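A minimal XML sketch of that arrangement (channel names, executor bean, and the release strategy are assumptions for illustration):

```xml
<!-- Fan the message out to both web-service callers in parallel. -->
<int:publish-subscribe-channel id="scatterChannel" task-executor="exec"/>

<int:scatter-gather input-channel="fromAmqp" output-channel="toAmqp"
                    scatter-channel="scatterChannel" gather-timeout="30000">
    <!-- Release once both replies have arrived. -->
    <int:gatherer release-strategy-expression="size() == 2"/>
</int:scatter-gather>
```

Because the AMQP listener thread blocks inside the scatter-gather until the gatherer releases, a crash before release leaves the original message unacked and it is redelivered by RabbitMQ.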
I have an int-http:outbound-gateway in my Spring Integration config file that consumes a REST service.
I am trying to start on the error-handling part of the implementation, but first I would like to understand how retry works. I notice that when an error occurs, say a bad request, the Spring Integration framework seems to retry sending the request to the REST service, and depending on the error (HTTP status code) I would like to handle it in a different way.
How can I avoid the retry depending on the http response code?
There is no inherent retry; retry is implemented using a retry advice.
It can be customized for different exception types, but not for status codes; you would need a custom advice for that - the documentation explains how to write one.
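For reference, retry only happens if such an advice is configured on the gateway; a typical (hypothetical) configuration looks like this, and removing the advice chain removes the retry:

```xml
<int-http:outbound-gateway url="http://example.org/service"
                           request-channel="requests" reply-channel="replies">
    <int-http:request-handler-advice-chain>
        <!-- Without this bean there is no retry at all. -->
        <bean class="org.springframework.integration.handler.advice.RequestHandlerRetryAdvice"/>
    </int-http:request-handler-advice-chain>
</int-http:outbound-gateway>
```

So if you are seeing retries, look for an advice chain (or a retry template) like this in your configuration; a custom advice in the same position could inspect the exception's status code and decide whether to retry.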
In my Spring Integration application I have several stored-proc-outbound-gateway elements; I would like to log how much time each call takes. Any help would be appreciated.
I would ideally like to be able to enable/disable logging of the parameters used, the time taken, and the total rows retrieved (returning-resultset) for monitoring and performance-tuning purposes.
Thanks
You can add a ChannelInterceptor (a subclass of ChannelInterceptorAdapter) to the request channel, which will give you raw timing (preSend/postSend), but the time will include any processing downstream of the gateway (on direct channels).
Since you want to examine the results too, you could start a timer (e.g. Spring's StopWatch) in preSend of an interceptor on the request channel and stop it in an interceptor on the reply channel. If you use the same interceptor bean, you can store the timer in a ThreadLocal.
You can turn on/off collection using a boolean property on the interceptor.
Alternatively, you can add a custom advice to the gateway.
EDIT
The advice is probably the best approach, because with a ThreadLocal you would need to add code to the first interceptor to handle failures and clean up. With an around advice, the timer would just be a local method variable.
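The around-advice shape can be illustrated in plain Java (in Spring Integration you would instead extend AbstractRequestHandlerAdvice and invoke the callback inside the try block; `timed` and its arguments here are illustrative):

```java
import java.util.function.Supplier;

public class TimingAdvice {
    // Around-advice sketch: the timer is a local variable, so there is no
    // ThreadLocal to clean up, and the elapsed time is logged even when
    // the wrapped call throws.
    public static <T> T timed(String name, Supplier<T> call) {
        long start = System.nanoTime();
        try {
            return call.get();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(name + " took " + elapsedMs + " ms");
        }
    }
}
```

A boolean flag on the advice bean could guard the logging in the finally block, giving you the enable/disable switch asked for in the question.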