Spring Integration Jms InboundGateway Dynamic Reply Queue

Is it possible to have a dynamic reply queue with Jms InboundGateway via DSL?
Jms.inboundGateway(jmsListenerContainer)
        .defaultReplyQueueName("queue1 or queue2")
Working Solution using ThreadLocal and DestinationResolver:
private static final ThreadLocal<String> REPLY_QUEUE = new ThreadLocal<>();
IntegrationFlows.from(Jms.inboundGateway(listenerContainer)
        .defaultReplyQueueName("defaultQueue1")
        .destinationResolver(destinationResolver()))
    .transform(p -> {
        // on some condition, else "defaultQueue1"
        REPLY_QUEUE.set("changedToQueue2");
        return p;
    })
@Bean
public DestinationResolver destinationResolver() {
    return (session, destinationName, pubSubDomain) -> session.createQueue(REPLY_QUEUE.get());
}

It is not clear where you'd like to get that dynamic reply queue name from, but there is another option:
/**
 * @param destinationResolver the destinationResolver.
 * @return the spec.
 * @see ChannelPublishingJmsMessageListener#setDestinationResolver(DestinationResolver)
 */
public S destinationResolver(DestinationResolver destinationResolver) {
By default this one is a DynamicDestinationResolver, which does nothing more than return session.createQueue(queueName);. Here you could plug in your own logic to pick between your different names, as shown in the sketch below.
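For instance, a minimal sketch of such a custom resolver (someCondition() stands in for whatever state drives the decision, and the queue name is just an example):
import org.springframework.jms.support.destination.DestinationResolver;

// A sketch: resolve "queue2" under some condition, otherwise fall back
// to the default name the gateway passes in.
@Bean
public DestinationResolver conditionalDestinationResolver() {
    return (session, destinationName, pubSubDomain) ->
            session.createQueue(someCondition() ? "queue2" : destinationName);
}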
Another way is to have a JMSReplyTo property set in the request message from the publisher.
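On the publisher side that could be as simple as setting the header on the request message (a sketch; jmsTemplate is assumed to be available, and ActiveMQQueue is just one possible Queue implementation):
import javax.jms.Queue;
import org.apache.activemq.command.ActiveMQQueue;

// Ask the consumer to send the reply to "queue2" for this particular request.
Queue replyTo = new ActiveMQQueue("queue2");
jmsTemplate.convertAndSend("requestQueue", "some payload", message -> {
    message.setJMSReplyTo(replyTo);
    return message;
});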
UPDATE
Since you cannot rely on a default Reply-To JMS message property, I suggest looking into a ThreadLocal in your downstream flow where you can place your custom header. A custom DestinationResolver can then consult that ThreadLocal variable for the name and delegate to the aforementioned DynamicDestinationResolver.

Related

Using poller for service-activator in spring-integration, how can I pass on MDC (slf4j) context in thread pool

<service-activator ref="serviceName" input-channel="request-channel" method="methodName">
    <poller task-executor="taskExecutorCustom"/>
</service-activator>
<task:executor id="taskExecutorCustom" pool-size="5-20" queue-capacity="0"/>
Can anyone suggest how I can pass the MDC context to the method of service "serviceName"?
The answer is to decorate the Runnable which is going to be performed on that TaskExecutor.
There are many articles on the Internet on the matter:
How to use MDC with thread pools?
https://gist.github.com/pismy/117a0017bf8459772771
https://rmannibucau.metawerx.net/post/javaee-concurrency-utilities-mdc-propagation
Also Spring Security provides some solution how to propagate a SecurityContext from one thread to another: https://docs.spring.io/spring-security/site/docs/5.3.0.RELEASE/reference/html5/#concurrency
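For instance, Spring Security's variant wraps an existing Executor so that the caller's SecurityContext travels with the task (a sketch; delegateExecutor is whatever pool you already have):
import java.util.concurrent.Executor;
import org.springframework.security.concurrent.DelegatingSecurityContextExecutor;

// Tasks submitted through the wrapper run with the SecurityContext
// of the thread that submitted them.
Executor securityAwareExecutor = new DelegatingSecurityContextExecutor(delegateExecutor);
securityAwareExecutor.execute(() -> {
    // SecurityContextHolder.getContext() here sees the caller's context
});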
I would suggest you take some ideas from those links and use an existing API in the ThreadPoolTaskExecutor:
/**
 * Specify a custom {@link TaskDecorator} to be applied to any {@link Runnable}
 * about to be executed.
 * <p>Note that such a decorator is not necessarily being applied to the
 * user-supplied {@code Runnable}/{@code Callable} but rather to the actual
 * execution callback (which may be a wrapper around the user-supplied task).
 * <p>The primary use case is to set some execution context around the task's
 * invocation, or to provide some monitoring/statistics for task execution.
 * @since 4.3
 */
public void setTaskDecorator(TaskDecorator taskDecorator) {
So your decorator should just have code like this:
taskExecutor.setTaskDecorator(runnable -> {
    // capture the MDC of the thread that submits the task...
    Map<String, String> mdc = MDC.getCopyOfContextMap();
    return () -> {
        // ...and restore it on the pooled thread just before execution
        if (mdc != null) {
            MDC.setContextMap(mdc);
        }
        runnable.run();
    };
});
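Note that the <task:executor> XML element does not expose a task-decorator attribute, so to apply it you would declare the executor as a regular bean instead. A minimal sketch mirroring the pool settings from the XML above (the bean name taskExecutorCustom is kept so the poller reference still works):
import java.util.Map;
import org.slf4j.MDC;
import org.springframework.context.annotation.Bean;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Bean
public ThreadPoolTaskExecutor taskExecutorCustom() {
    ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    // pool-size="5-20", queue-capacity="0" from the XML definition
    taskExecutor.setCorePoolSize(5);
    taskExecutor.setMaxPoolSize(20);
    taskExecutor.setQueueCapacity(0);
    taskExecutor.setTaskDecorator(runnable -> {
        Map<String, String> mdc = MDC.getCopyOfContextMap();
        return () -> {
            if (mdc != null) {
                MDC.setContextMap(mdc);
            }
            runnable.run();
        };
    });
    return taskExecutor;
}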

Maximo Event Filter Java class not being picked up by Publish Channel

I have written a Java class for event filtering on one of the Publish channels, and rebuilt and deployed it. I have referenced it on the Publish channel too. However, Maximo behaves as if the class was never there.
package com.sof.iface.eventfilter;

import java.rmi.RemoteException;

import psdi.iface.mic.MaximoEventFilter;
import psdi.iface.mic.PublishInfo;
import psdi.mbo.MboRemote;
import psdi.util.MXException;
import psdi.util.logging.MXLogger;
import psdi.util.logging.MXLoggerFactory;

public class VSPPWOCOMPEventFilter extends MaximoEventFilter {

    private static final String SILMX_ATTRIBUTE_STATUS = "STATUS";

    private MXLogger log = MXLoggerFactory.getLogger("maximo.application.EVENTFILTER");

    /**
     * Constructor
     *
     * @param pubInfo Publish Channel Information
     * @throws MXException Maximo Exception
     */
    public VSPPWOCOMPEventFilter(PublishInfo pubInfo) throws MXException {
        super(pubInfo);
    } // end constructor.

    /**
     * Decide whether to filter out the event before it triggers the
     * Publish Channel or not.
     */
    public boolean filterEvent(MboRemote mbo) throws MXException, RemoteException {
        log.debug("######## com.sof.iface.eventfilter.VSPPWOCOMPEventFilter::filterEvent() - Start of Method");
        boolean filter = false;
        String status = mbo.getString(SILMX_ATTRIBUTE_STATUS);
        log.debug("######## com.sof.iface.eventfilter.VSPPWOCOMPEventFilter::filterEvent() - WO Status " + status);
        // Note: compare strings with equals(), not ==, otherwise this branch never matches
        if (mbo.isModified("STATUS") && "COMP".equals(status)) {
            log.debug("######## com.sof.iface.eventfilter.VSPPWOCOMPEventFilter::filterEvent() - Skipping MBO");
            filter = true;
        } else {
            filter = super.filterEvent(mbo);
        }
        log.debug("######## com.sof.iface.eventfilter.VSPPWOCOMPEventFilter::filterEvent() - End of Method");
        return filter;
    } // end filterEvent.

} // end class.
It looks like you need to skip the outbound message when the Work Order is completed. When the event doesn't seem to occur, make sure to check for these flags:
External System is active
Publish Channel is active
Publish Channel listener is enabled
I think you could easily achieve the same result with a SKIP action processing rule. See details here:
https://www.ibm.com/support/knowledgecenter/en/SSLKT6_7.6.0/com.ibm.mt.doc/gp_intfrmwk/c_message_proc_actions.html
Also worth mentioning: IBM added automation script support for Event Filtering in version 7.6, so no build/redeploy is required anymore.

Spring Integration: Switch routing dynamically

A Spring Integration based converter consumes messages from one system, checks and converts them, and sends them to the other one.
Should the target system be down, we stop the inbound adapters, but we would also like to persist locally or forward the currently "in-flight" converted messages. For that we would simply like to reroute the messages from the normal output channel to some "backup" channel dynamically.
In the docs I have found only the option to route messages based on their headers (so at some earlier step in the flow I would have to add those dynamically once the target system is not available), or based on the payload type, which is not really my case. Dynamically adding some header and then filtering it out down the pipe, or during de-/serialization, does not seem like the best approach to me. I would rather be able to flip a switch (on some internal event) that would then reroute those "in-flight" messages to the "backup" channel.
What would be the best SI approach to achieve this? Thanks!
The router doesn't have to be based only on the payload type or some header. You really can have a general POJO method invocation that returns a channel, its name, or some routing key which is mapped. That POJO method can indeed check some internal system state and produce this or that routing key.
So, you may have something like this in the router configuration:
.route(myRouter())
where your myRouter is something like this:
@Bean
MyRouter myRouter() {
    return new MyRouter();
}
and its internal code might be like this:
public class MyRouter {

    @Autowired
    private SystemState systemState;

    String route(Object payload) {
        return this.systemState.isActive() ? "successChannel" : "backupChannel";
    }

}
The same can be achieved with a simple lambda definition:
.<Object, Boolean>route(p -> systemState().isActive(),
        m -> m.channelMapping(true, "successChannel")
                .channelMapping(false, "backupChannel"))
Also...
private final AtomicBoolean switcher = new AtomicBoolean();

@Bean
public IntegrationFlow flow() {
    return IntegrationFlows.from(() -> "foo", e -> e.poller(Pollers.fixedDelay(Duration.ofSeconds(5))))
            .route(s -> switcher.get() ? "foo" : "bar")
            .get();
}
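Flipping that switcher can then happen from anywhere, e.g. from an application event listener next to the flow definition (a sketch; the TargetSystemDownEvent/TargetSystemUpEvent types are hypothetical):
import org.springframework.context.event.EventListener;

@EventListener
public void onTargetSystemDown(TargetSystemDownEvent event) {
    this.switcher.set(true); // subsequent messages are routed to "foo"
}

@EventListener
public void onTargetSystemUp(TargetSystemUpEvent event) {
    this.switcher.set(false); // back to "bar"
}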

Records are not consumed when adding checkpointer

I have the following configuration for the KinesisMessageDrivenChannelAdapter: when I remove the dynamoDbMetaDataStore checkpoint store, messages are received correctly, but when I add it back, the records are always empty.
I debugged the code and KinesisMessageDrivenChannelAdapter.processTask() line 776 (version 2.0.0.M2) returns empty records.
UPDATE:
@Bean
public DynamoDbMetaDataStore dynamoDbMetaDataStore() {
    String url = consumerClientProperties.getDynamoDB().getUrl();
    final AmazonDynamoDBAsync amazonDynamoDB = AmazonDynamoDBAsyncClientBuilder.standard()
            .withEndpointConfiguration(new EndpointConfiguration(
                    url,
                    Regions.fromName(awsRegion).getName()))
            .withClientConfiguration(new ClientConfiguration()
                    .withMaxErrorRetry(consumerClientProperties.getDynamoDB().getRetries())
                    .withConnectionTimeout(consumerClientProperties.getDynamoDB().getConnectionTimeout()))
            .build();
    DynamoDbMetaDataStore dynamoDbMetaDataStore = new DynamoDbMetaDataStore(amazonDynamoDB, "consumer-test");
    return dynamoDbMetaDataStore;
}
@Bean
public KinesisMessageDrivenChannelAdapter kinesisInboundChannel(
        AmazonKinesis amazonKinesis, String[] streamNames) {
    KinesisMessageDrivenChannelAdapter adapter =
            new KinesisMessageDrivenChannelAdapter(amazonKinesis, streamNames);
    adapter.setConverter(null);
    adapter.setOutputChannel(kinesisReceiveChannel());
    adapter.setCheckpointStore(dynamoDbMetaDataStore());
    adapter.setConsumerGroup(consumerClientProperties.getName());
    adapter.setCheckpointMode(CheckpointMode.manual);
    adapter.setListenerMode(ListenerMode.record);
    adapter.setStartTimeout(10000);
    adapter.setDescribeStreamRetries(1);
    adapter.setConcurrency(10);
    return adapter;
}
Thank you
I recommend testing your solution with the latest 2.0.0.BUILD-SNAPSHOT.
There is already an option like:
/**
 * Specify a {@link LockRegistry} for an exclusive access to provided streams.
 * This is not used when shards-based configuration is provided.
 * @param lockRegistry the {@link LockRegistry} to use.
 * @since 2.0
 */
public void setLockRegistry(LockRegistry lockRegistry) {
where you would need to inject a DynamoDbLockRegistry for better checkpoint management.
For that purpose you would also need to add this dependency:
compile("com.amazonaws:dynamodb-lock-client:1.0.0")
There indeed might be some issues with filtering in that M2 yet...

Spring Integration enriching payload using DSL

I am using Spring Integration to consume RSS feeds. Once I get a feed item, I need to enhance the data: using a field from the payload, I call a Java class to get some additional data and store it with the payload before writing all the data to the DB.
What is the best way to do this, a payload enricher or a service activator and how to specify this using DSL?
Finally, as the payload is a SyndEntry object, do I need to create a new payload with new fields?
Any pointers would be helpful.
Yes, you need a new payload type; you can use a simple POJO...
@Bean
public Enricher enricher() {
    return new Enricher();
}

public static class Enricher {

    public Enhanced enhance(SyndEntry entry) {
        return new Enhanced(entry, "foo", "bar");
    }

}
Then, in the DSL...
...
.handle("enricher", "enhance")
...
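The Enhanced type is not shown in the answer; a minimal sketch of such a holder could look like this (the field names are just examples, and the SyndEntry import assumes the ROME library):
import com.rometools.rome.feed.synd.SyndEntry;

// Simple immutable holder combining the original feed entry with the enriched data.
public class Enhanced {

    private final SyndEntry entry;
    private final String foo;
    private final String bar;

    public Enhanced(SyndEntry entry, String foo, String bar) {
        this.entry = entry;
        this.foo = foo;
        this.bar = bar;
    }

    public SyndEntry getEntry() {
        return this.entry;
    }

    public String getFoo() {
        return this.foo;
    }

    public String getBar() {
        return this.bar;
    }

}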
