Spring Integration Gateway Without Reply in DSL - spring-integration

My question is very similar to this Stack Overflow question, in that I want to send to JMS and then carry on with my integration flow.
The response is totally asynchronous and is therefore handled on a separate Jms.messageDrivenChannelAdapter. So, I basically want to "fire and forget".
My code is this (Spring 5.3.14):
.enrichHeaders(h -> h
        .headerFunction("JMSCorrelationID",
                m -> m.getHeaders().get(MessageHeaders.ID)))
.handle(Jms.outboundGateway(connectionFactory)
        .requestDestination(queueName))
.handle(p -> System.err.println("Do something else with ... " + p))
.get();
And I get this:
org.springframework.integration.MessageTimeoutException: failed to receive JMS response within timeout of: 5000ms,
The referenced answer implies to me that I need to listen on a dummy queue, which I don't want to have to do. So what do I need to fix in my code above?
Edit: final code using the solution below, tested with/without "queue PUT inhibit" in order to cause an exception.
.publishSubscribeChannel(s -> s
        .subscribe(f -> f.handle(
                Jms.outboundAdapter(connectionFactory)
                        .destination(queueName)))
        .subscribe(f -> f.handle(
                p -> System.err.println("Do something else with ... " + p))))

You need to use Jms.outboundAdapter() instead. Then wrap it, together with that next handle(), into a publishSubscribeChannel() as two subscriber sub-flows. This way they are called one after the other, each within its own individual sub-flow.

Related

Spring Integration DSL: How can I set the logging.level to DEBUG for .log()

How can I change the logging.level to a level below INFO for the .log() entries? By default, it seems to log only INFO and above. I'm unable to get it to log at DEBUG or TRACE.
I have tried (in application.yml):
logging.level:
  org.springframework.integration: DEBUG
  com.my.namespace: DEBUG
but so far no success. Setting the logging.level for org.springframework.integration to DEBUG does indeed log a whole bunch of DEBUG stuff, but not my own .log() statements.
UPDATE: I am using .log() like this:
.log(DEBUG, "some category", m -> "print something using: " + m.getPayload())
But when I set the log level to DEBUG, nothing is printed. It only works if I use INFO, like this:
.log(INFO, "some category", m -> "print something using: " + m.getPayload())
The log() operator is fully based on the LoggingHandler (https://docs.spring.io/spring-integration/docs/current/reference/html/messaging-endpoints.html#logging-channel-adapter) and is implemented like this:
public B log() {
    return log(LoggingHandler.Level.INFO);
}
So it is INFO regardless of what you have in the config.
If you'd like to change the log() operator defaults, you should use one of its appropriate alternatives, such as the variant where you can provide the desired level:
public B log(LoggingHandler.Level level) {
It can be driven by external configuration, but only via a custom property and some conversion logic like LoggingHandler.Level.valueOf(level.toUpperCase()).
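For example, a minimal sketch, assuming a custom property named my.integration.log-level (the property name and the surrounding flow are illustrative, not a framework feature):
// hypothetical custom property holding the desired level as a string
@Value("${my.integration.log-level:INFO}")
private String logLevel;

@Bean
public IntegrationFlow loggingFlow() {
    // convert the externally configured string into the enum the DSL expects
    LoggingHandler.Level level = LoggingHandler.Level.valueOf(logLevel.toUpperCase());
    return f -> f.log(level, "com.my.namespace",
            m -> "print something using: " + m.getPayload());
}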
UPDATE
To make it visible in the logs, you must set org.springframework.integration: DEBUG, because that is the default category for the LoggingHandler. Or use a different log() variant:
log(LoggingHandler.Level level, String category)
For example:
log(LoggingHandler.Level.DEBUG, "my.category")
and configure that category in your logging config. Otherwise it will fall back to the default (root) level.
So, according to the code in your question, it should look like this:
logging.level:
  com.my.namespace: DEBUG
.log(DEBUG, "com.my.namespace", m -> "print something using: " + m.getPayload())

Spring Integration aggregator's release strategy based on last modified

I'm trying to implement the following scenario:
I get a bunch of files that share a common file name pattern, e.g. doc0001_page0001, doc0001_page0002, doc0001_page0003, doc0002_page0001 (where doc0001 is one document consisting of 3 pages that I need to merge, and doc0002 has only 1 page)
I want to aggregate them in a way that I will release a group only if all of the files for specific document are gathered (doc0001 after 3 files were picked up, doc0002 after 1 file)
My idea was to read the files in an alphabetical order and wait for 2 seconds after a group was last modified to release it (g.getLastModified() is smaller than the current time minus 2 seconds)
I've tried the following without success:
return IntegrationFlows.from(Files.inboundAdapter(tmpDir.getRoot())
                .patternFilter("*.json")
                .useWatchService(true)
                .watchEvents(FileReadingMessageSource.WatchEventType.CREATE,
                        FileReadingMessageSource.WatchEventType.MODIFY),
        e -> e.poller(Pollers.fixedDelay(100)
                .errorChannel("filePollingErrorChannel")))
        .enrichHeaders(h -> h.headerExpression("CORRELATION_PATTERN",
                "headers[" + FileHeaders.FILENAME + "].substring(0,7)")) // docxxxx.length()
        .aggregate(a -> a.correlationExpression("headers['CORRELATION_PATTERN']")
                .releaseStrategy(g -> g.getLastModified() < System.currentTimeMillis() - 2000))
        .channel(MessageChannels.queue("fileReadingResultChannel"))
        .get();
Changing the release strategy to the following also didn't work:
.aggregate(a -> a.correlationExpression("headers['CORRELATION_PATTERN']")
        .releaseStrategy(g -> {
            // a Stream cannot be consumed twice, so take the count from the group itself
            Long timestamp = (Long) g.getMessages()
                    .stream()
                    .skip(g.size() - 1)
                    .findFirst()
                    .get()
                    .getHeaders()
                    .get(MessageHeaders.TIMESTAMP);
            System.out.println("Timestamp: " + timestamp);
            return timestamp.longValue() < System.currentTimeMillis() - 2000;
        }))
Am I misunderstanding the release strategy concept?
Also, is it possible to print something out from the releaseStrategy block? I wanted to compare the timestamp (see System.out.println("Timestamp: " + timestamp);)
Right, since you don't know the whole sequence size for the message group, you have no choice but to use a groupTimeout. The regular releaseStrategy is only consulted when a message arrives at the aggregator. Since a single message doesn't carry enough info to release the group, the group would otherwise sit in the group store forever.
The groupTimeout option was introduced on the aggregator exactly for this kind of use case, where we definitely want to release a group that doesn't have enough messages to be released normally.
You may consider using a groupTimeoutExpression instead of the constant-based groupTimeout. The MessageGroup is the root evaluation context object for the SpEL expression, so you will be able to access the mentioned lastModified on it.
The .sendPartialResultOnExpiry(true) is the right option to use here.
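For example, a minimal sketch, assuming the group should be released 2 seconds after it was last modified (a numeric expression result is treated as a delay in milliseconds):
.aggregate(a -> a.correlationExpression("headers['CORRELATION_PATTERN']")
        // MessageGroup is the SpEL root object, so lastModified is available directly
        .groupTimeoutExpression("lastModified + 2000 - T(java.lang.System).currentTimeMillis()")
        .sendPartialResultOnExpiry(true))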
See more info in the docs: https://docs.spring.io/spring-integration/reference/html/#agg-and-group-to
I found a solution to that with a different approach. I still don't understand why the above one wasn't working.
I've also found a cleaner way of defining the correlation function.
IntegrationFlows.from(Files.inboundAdapter(tmpDir.getRoot())
                .patternFilter("*.json")
                .useWatchService(true)
                .watchEvents(FileReadingMessageSource.WatchEventType.CREATE,
                        FileReadingMessageSource.WatchEventType.MODIFY),
        e -> e.poller(Pollers.fixedDelay(100)))
        .enrichHeaders(h -> h.headerFunction(IntegrationMessageHeaderAccessor.CORRELATION_ID,
                m -> ((String) m.getHeaders().get(FileHeaders.FILENAME)).substring(0, 17)))
        .aggregate(a -> a.groupTimeout(2000)
                .sendPartialResultOnExpiry(true))
        .channel(MessageChannels.queue("fileReadingResultChannel"))
        .get();

Build spring integration release strategy using spring DSL

I am new to Spring Integration. I am trying to split a file into messages using the file splitter and then use .aggregate() to build a single message and send it to the output channel.
I have markers set to true, and hence apply-sequence is false by default now.
I have set the correlationId to a constant "1" using enrichHeaders. I have trouble setting the release strategy, as I do not have a hold on the sequence end. Here is how my code looks:
IntegrationFlows
        .from(s -> s.file(new File(fileDir))
                        .filter(getFileFilter(fileName)),
                e -> e.poller(poller))
        .split(Files.splitter(true, true)
                        .charset(StandardCharsets.US_ASCII),
                e -> e.id(beanName))
        .enrichHeaders(h -> h.header("correlationId", "1"));
IntegrationFlow integrationFlow = integrationFlowBuilder
        .<Object, Class<?>>route(Object::getClass, m -> m
                .channelMapping(FileSplitter.FileMarker.class, "markers.input")
                .channelMapping(String.class, "lines.input"))
        .get();
@Bean
public IntegrationFlow itemExcludes() {
    return flow -> flow
            .transform(new ItemExcludeRowMapper(itemExcludeRowUnmarshaller)) // maps each line to an ItemExclude object
            .aggregate(aggregator -> aggregator
                    .outputProcessor(group -> group.getMessages()
                            .stream()
                            .map(message -> ((ItemExclude) message.getPayload()).getPartNumber())
                            .collect(Collectors.joining(","))))
            .transform(Transformers.toJson())
            .channel(customSource.itemExclude());
}
@Bean
public IntegrationFlow itemExcludeMarkers() {
    return flow -> flow
            .log(LoggingHandler.Level.INFO)
            .<FileSplitter.FileMarker>filter(m -> m.getMark().equals(FileSplitter.FileMarker.Mark.END))
            .<FileHandler>handle(new FileHandler(configProps))
            .channel(NULL_CHANNEL);
}
Any help appreciated.
I would move your header enricher for the correlationId before the splitter and make it like this:
.enrichHeaders(h -> h
        .headerFunction(IntegrationMessageHeaderAccessor.CORRELATION_ID,
                m -> m.getHeaders().getId()))
A constant correlationId is definitely not good in a multi-threaded environment: different threads split different files and send their lines to the same aggregator. So, with "1" as the correlation key, you would always have exactly one group to aggregate and release. The default sequence behavior is to populate the correlationId with the original message's id. Since you are not going to rely on applySequence from the FileSplitter, I suggest this simple solution to emulate that behavior.
As Gary pointed out in his answer, you need to think about a custom ReleaseStrategy and send the FileSplitter.FileMarker messages to the aggregator as well. The FileSplitter.FileMarker.END has a lineCount property, which can be compared with the MessageGroup.size() to decide whether we are good to release the group. The MessageGroupProcessor then has to filter out the FileSplitter.FileMarker messages when building the result for output.
Use a custom release strategy that looks for the END marker in the last message and, perhaps, a custom output processor that removes the markers from the collection.
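A minimal sketch of that idea, assuming both the START and END markers reach the aggregator (so a complete group holds lineCount lines plus the two markers); the names mirror the question's flow but are otherwise illustrative:
.aggregate(a -> a
        // release once the END marker has arrived and all of its announced lines are present
        .releaseStrategy(group -> group.getMessages().stream()
                .map(Message::getPayload)
                .filter(FileSplitter.FileMarker.class::isInstance)
                .map(FileSplitter.FileMarker.class::cast)
                .anyMatch(marker -> marker.getMark() == FileSplitter.FileMarker.Mark.END
                        && group.size() >= marker.getLineCount() + 2))
        // drop the markers and keep only the real lines in the output
        .outputProcessor(group -> group.getMessages().stream()
                .map(Message::getPayload)
                .filter(p -> !(p instanceof FileSplitter.FileMarker))
                .collect(Collectors.toList())))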

How can I use a custom error handler in a Yesod route handler?

I have a project that has two parts:
Webpage
API
I'm using the errorHandler function in my Yesod instance declaration to build error pages for the webpage when something goes wrong.
However, all routes in the API create JSON responses. I'm using runInputPost to generate an input form that handles input to the API. When the API is called with missing parameters, Yesod generates the InvalidArgs exception and the HTML error page is returned.
I want to be able to handle that exception and return JSON such as:
{
  "success": false,
  "code": 101,
  "message": "The argument 'blabla' was missing"
}
How can I do that without creating a subsite with its own errorHandler?
While your solution certainly works, you could instead use the runInputPostResult function, which was actually added via PR by someone in pretty much the same situation you find yourself in.
After reading about how to catch exceptions that happen in monad stacks (here and here), I found the exceptions library, which seemed easy to use.
I looked up which type in Yesod implements the Exception type class; it turns out to be a type called HandlerContents:
data HandlerContents =
      HCContent H.Status !TypedContent
    | HCError ErrorResponse
    | HCSendFile ContentType FilePath (Maybe FilePart)
    | HCRedirect H.Status Text
    | HCCreated Text
    | HCWai W.Response
    | HCWaiApp W.Application
    deriving Typeable
I'm interested in HCError since it contains ErrorResponse (same type that errorHandler gets).
I added the exceptions library to build-depends in my cabal file. All my handlers in the API had the signature :: Handler Value, so I created a utility function called catchRouteError that I could run my handlers with:
catchRouteError :: Handler Value -> Handler Value
catchRouteError r = catch r handleError
  where
    handleError :: HandlerContents -> Handler Value
    handleError (HCError (InvalidArgs _)) = ... create specific json error
    handleError (HCError _)               = ... create generic json error
    handleError e                         = throwM e
Since HandlerContents is used for other things, such as redirection and sending files, I only match against HCError and let the default implementation handle everything else.
Now I could easily run my handlers with this function:
postAPIAppStatusR :: Handler Value
postAPIAppStatusR = catchRouteError $ do
    ...
That's a quick solution to my problem; I'm sure people with better Yesod knowledge can provide more elegant ones.

Camel custom component: perform two different actions

I just want to know if I can do the following with a custom component.
1) I created a sample component:
someComponent://foo ---> what does this foo refer to? Can I have any string there? What does it denote?
2) Consider the route below:
from("some blah")
    .to("someCustomComponent://action1")
    .to("someCustomComponent://action2");
Idea: I want to perform two different actions in the above, kind of like two different methods.
Is that possible?
The notation for your custom component in Apache Camel can be described as follows:
someComponent://instance?parm1=foo&parm2=bar
The instance part can be pretty much anything you want to uniquely identify the endpoint.
You can extend DefaultComponent and implement its methods. The signature of the createEndpoint method looks like this:
protected Endpoint createEndpoint(final String uri, String remaining,
        Map<String, Object> parameters) throws Exception
So for the endpoint someComponent://instance?parm1=foo&parm2=bar:
uri = someComponent://instance?parm1=foo&parm2=bar
remaining = instance
parameters = (Map) parm1 -> foo, parm2 -> bar
Therefore, yes! You can easily denote the action you want, for example as a parameter such as:
someComponent://instance?action=something
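A minimal sketch of such a component, where MyEndpoint and its action property are hypothetical (the DefaultComponent base class lives in org.apache.camel.impl in Camel 2.x and org.apache.camel.support in Camel 3+):
import java.util.Map;

import org.apache.camel.Endpoint;
import org.apache.camel.impl.DefaultComponent;

public class MyComponent extends DefaultComponent {

    @Override
    protected Endpoint createEndpoint(String uri, String remaining,
            Map<String, Object> parameters) throws Exception {
        MyEndpoint endpoint = new MyEndpoint(uri, this);
        // "action1", "action2", ... selects the behavior of this endpoint instance
        endpoint.setAction(getAndRemoveParameter(parameters, "action", String.class, "default"));
        setProperties(endpoint, parameters);
        return endpoint;
    }
}
With that in place, someComponent://instance?action=action1 and someComponent://instance?action=action2 resolve to two endpoint instances whose producers can branch on the configured action.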
