MQ Listener Performance - spring-integration

I am building Docker containers which have a WAR file running on Jetty, and I have been altering a few settings to see if performance improves, but nothing so far. Each container has been achieving 7 TPS.
The settings are:
<bean id="cachingConnectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref="MQConnectionFactory" />
<property name="sessionCacheSize" value="10"/>
</bean>
<bean id="requestQueue" class="com.ibm.mq.jms.MQQueue">
<constructor-arg index="0" value="${queuemanager}"/>
<constructor-arg index="1" value="${incoming.queue}"/>
</bean>
<integration:poller id="poller" default="true" fixed-delay="1000" error-channel="errorChannel"/>
How can I increase the number of threads processing messages here?
Also, my connection factory details are shown below:
#Bean(name="DefaultJmsListenerContainerFactory")
public DefaultJmsListenerContainerFactory provideJmsListenerContainerFactory(PlatformTransactionManager transactionManager) {
DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
factory.setConnectionFactory(connectionFactory());
factory.setTransactionManager(transactionManager);
factory.setConcurrency(jmsConcurrency);
factory.setCacheLevel(jmsCacheLevel);
factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
factory.setSessionTransacted(true);
return factory;
}
#Bean(name = "txManager")
public PlatformTransactionManager provideTransactionManager() {
return new JmsTransactionManager(connectionFactory());
}
#Bean(name = "JmsTemplate")
public JmsTemplate provideJmsTemplate() {
JmsTemplate jmsTemplate = new JmsTemplate(connectionFactory());
jmsTemplate.setReceiveTimeout(Long.parseLong(env.getRequiredProperty(RECEIVE_TIMEOUT)));
return jmsTemplate;
}
#Bean(name="MQConnectionFactory")
public ConnectionFactory connectionFactory() {
if (factory == null) {
factory = new MQXAConnectionFactory();
try {
factory.setHostName(env.getRequiredProperty(HOST));
factory.setPort(Integer.parseInt(env.getRequiredProperty(PORT)));
factory.setQueueManager(env.getRequiredProperty(QUEUE_MANAGER));
factory.setChannel(env.getRequiredProperty(CHANNEL));
factory.setTransportType(WMQConstants.WMQ_CM_CLIENT);
} catch (JMSException e) {
throw new RuntimeException(e);
}
}
return factory;
}
The initial setting for the concurrency was '1-2' and I changed that to '10-15', which did not affect performance. The jmsCacheLevel was set to 3 (CACHE_CONSUMER), but no change there yet either.
Any help is much appreciated.
Cheers
Kris

Answering my own post here. What we found out was that the problem was actually with our database pooling, which had not been set up correctly in the first place.
But in order to increase the listener count, I had to change my Spring Integration adapter settings:
<jms:message-driven-channel-adapter id="jmsIn"
    destination="requestQueue"
    channel="inputJsonConversionChannel"
    connection-factory="cachingConnectionFactory"
    error-channel="errorChannel"
    concurrent-consumers="${jms_adapter_concurrent_consumers}" />
Only when the concurrent-consumers value is varied does the number of listeners on the queue increase.
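For reference, the concurrency knobs that the adapter exposes live on the underlying DefaultMessageListenerContainer, so the same thing can be done in Java config. A minimal sketch, assuming a hypothetical queue name and that the caching factory above is injected:

@Bean
public DefaultMessageListenerContainer mqListenerContainer(ConnectionFactory cachingConnectionFactory) {
    DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
    container.setConnectionFactory(cachingConnectionFactory);
    container.setDestinationName("INCOMING.QUEUE"); // hypothetical queue name
    container.setConcurrentConsumers(10);           // lower bound of listener threads
    container.setMaxConcurrentConsumers(15);        // upper bound for scaling under load
    return container;
}

One related caveat: when scaling consumers like this, the sessionCacheSize on the CachingConnectionFactory (10 in the config above) should generally be at least as large as the maximum number of concurrent consumers you expect, so that all the listeners benefit from session caching.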

Related

How to create a date range as a condition of a promotion in hybris?

Is it possible to create a date condition for a promotion in hybris?
I am trying to do this implementation. My first problem is how to map the parameter value:
<bean id="dateRuleParameterValueMapperDefinition" class="de.hybris.platform.ruleengineservices.rule.strategies.impl.RuleParameterValueMapperDefinition">
<property name="mapper" ref="dateRuleParameterValueMapper" />
<property name="type" value="java.util.Date" />
</bean>
With this mapping, I get an exception that the type is not supported (Caused by: de.hybris.platform.ruleengineservices.rule.strategies.RuleParameterValueMapperException:).
Can I resolve this error? Is it possible to create a date condition in the RuleConditionTranslator?
hybris version: 6.5
If you want to add any new RuleParameterValueMapperDefinition, you have to implement RuleParameterValueMapper and override the toString and fromString methods in your mapper implementation.
public class MediaTypeRuleParameterValueMapper implements RuleParameterValueMapper<MediaModel>
{
    private static final Logger LOG = LoggerFactory.getLogger(MediaTypeRuleParameterValueMapper.class);

    @Resource
    MediaService mediaService;

    @Override
    public String toString(final MediaModel mediaModel)
    {
        Preconditions.checkArgument(Objects.nonNull(mediaModel), "mediaModel must not be null!");
        return mediaModel.getCode();
    }

    @Override
    public MediaModel fromString(final String mediaCode)
    {
        try
        {
            return mediaService.getMedia(mediaCode);
        }
        catch (final UnknownIdentifierException e)
        {
            // unknown media code: log it and fall through to return null
            LOG.error(e.getMessage(), e);
        }
        return null;
    }
}
Now create a custom mapper definition (RuleParameterValueMapperDefinition)
<bean id="mediaTypeRuleParameterValueMapper" class="com.hybris.MediaTypeRuleParameterValueMapper"/>
<bean id="mediaTypeRuleParameterValueMapperDefinition" class="de.hybris.platform.ruleengineservices.rule.strategies.impl.RuleParameterValueMapperDefinition">
<property name="mapper" ref="mediaTypeRuleParameterValueMapper"/>
<property name="type" value="ItemType(Media)"/>
</bean>
Now you are ready to use it:
RuleConditionDefinitionParameter.type=ItemType(Media)
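For the java.util.Date case in the question, a mapper following the same pattern might look like the sketch below; the class name and date pattern are hypothetical, and note the next answer suggests a custom mapper may not be needed for Date at all:

public class DateRuleParameterValueMapper implements RuleParameterValueMapper<Date>
{
    // assumed wire format for the parameter value
    private static final String PATTERN = "yyyy-MM-dd'T'HH:mm:ss";

    @Override
    public String toString(final Date date)
    {
        Preconditions.checkArgument(Objects.nonNull(date), "date must not be null!");
        return new SimpleDateFormat(PATTERN).format(date);
    }

    @Override
    public Date fromString(final String value)
    {
        try
        {
            return new SimpleDateFormat(PATTERN).parse(value);
        }
        catch (final ParseException e)
        {
            // unparsable value: mirror the media example and return null
            return null;
        }
    }
}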
I don't believe that you need to map a date in that way; it should be treated the same as a String or Integer. So for reference, look at the way any of the String or Integer parameters are handled.
My basis for saying this is the following from ruleengineservices-spring-rule.xml:
<alias name="defaultRuleParameterSupportedTypes" alias="ruleParameterSupportedTypes" />
<util:set id="defaultRuleParameterSupportedTypes" value-type="java.lang.String">
<value>java.lang.Boolean</value>
<value>java.lang.Character</value>
<value>java.lang.String</value>
<value>java.lang.Byte</value>
<value>java.lang.Short</value>
<value>java.lang.Integer</value>
<value>java.lang.Long</value>
<value>java.lang.Float</value>
<value>java.lang.Double</value>
<value>java.math.BigInteger</value>
<value>java.math.BigDecimal</value>
<value>java.util.Date</value>
<value>java.lang.Enum</value>
<value>java.util.List</value>
<value>java.util.Map</value>
</util:set>

reprocess maprstream messages using spring integration kafka

This is related to the thread, and I am using spring-integration-kafka 2.0 to consume messages from MapR Stream topics.
I am facing difficulties using the KafkaConsumer features to reprocess MapR Stream messages via offsets and topic partitions.
If I can integrate the seek feature, I will be able to reprocess messages based on an offset value.
Can someone please help me integrate the KafkaConsumer features seek, seekToBeginning, and seekToEnd in Spring Integration Kafka? The current consumer configuration is below:
<int-kafka:message-driven-channel-adapter
    id="kafkaListener"
    listener-container="container1"
    auto-startup="true"
    phase="100"
    send-timeout="5000"
    channel="inputFromStream"
    error-channel="errorChannel" />

<bean id="container1" class="org.springframework.kafka.listener.KafkaMessageListenerContainer">
    <constructor-arg>
        <bean class="org.springframework.kafka.core.DefaultKafkaConsumerFactory">
            <constructor-arg>
                <map>
                    <entry key="bootstrap.servers" value="localhost:9092"/>
                    <entry key="group.id" value="siTestGroup1"/>
                    <entry key="enable.auto.commit" value="true"/>
                    <entry key="auto.commit.interval.ms" value="1000"/>
                    <entry key="auto.offset.reset" value="earliest" />
                    <entry key="max.partition.fetch.bytes" value="3145728"/>
                    <entry key="key.deserializer" value="org.apache.kafka.common.serialization.StringDeserializer"/>
                    <entry key="value.deserializer" value="org.apache.kafka.common.serialization.StringDeserializer"/>
                </map>
            </constructor-arg>
        </bean>
    </constructor-arg>
    <constructor-arg>
        <bean class="org.springframework.kafka.listener.config.ContainerProperties">
            <constructor-arg name="topics" value="${maprstream.topicname}" />
        </bean>
    </constructor-arg>
</bean>
Use a ConsumerAwareRebalanceListener - this is how Spring Cloud Stream does it...
final AtomicBoolean initialAssignment = new AtomicBoolean(true);

if (!"earliest".equals(resetTo) && !"latest".equals(resetTo)) {
    logger.warn("no (or unknown) " + ConsumerConfig.AUTO_OFFSET_RESET_CONFIG +
            " property cannot reset");
    resetOffsets = false;
}
if (groupManagement && resetOffsets) {
    containerProperties.setConsumerRebalanceListener(new ConsumerAwareRebalanceListener() {

        @Override
        public void onPartitionsRevokedBeforeCommit(Consumer<?, ?> consumer, Collection<TopicPartition> tps) {
            // no op
        }

        @Override
        public void onPartitionsRevokedAfterCommit(Consumer<?, ?> consumer, Collection<TopicPartition> tps) {
            // no op
        }

        @Override
        public void onPartitionsAssigned(Consumer<?, ?> consumer, Collection<TopicPartition> tps) {
            if (initialAssignment.getAndSet(false)) {
                if ("earliest".equals(resetTo)) {
                    consumer.seekToBeginning(tps);
                }
                else if ("latest".equals(resetTo)) {
                    consumer.seekToEnd(tps);
                }
            }
        }

    });
}
else if (resetOffsets) {
    Arrays.stream(containerProperties.getTopicPartitions())
            .map(tpio -> new TopicPartitionInitialOffset(tpio.topic(), tpio.partition(),
                    // SK GH-599 "earliest".equals(resetTo) ? SeekPosition.BEGINNING : SeekPosition.END))
                    "earliest".equals(resetTo) ? 0L : Long.MAX_VALUE))
            .collect(Collectors.toList()).toArray(containerProperties.getTopicPartitions());
}
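For the XML configuration above, one way to attach such a listener is to set it on container1's ContainerProperties before the container starts. A rough sketch, assuming a spring-kafka version that ships ConsumerAwareRebalanceListener (org.springframework.kafka.listener):

@Autowired
private KafkaMessageListenerContainer<String, String> container1;

@PostConstruct
public void addSeekOnAssignment() {
    // must run before the container is started (phase 100 above)
    container1.getContainerProperties()
            .setConsumerRebalanceListener(new ConsumerAwareRebalanceListener() {

                @Override
                public void onPartitionsAssigned(Consumer<?, ?> consumer, Collection<TopicPartition> tps) {
                    // seekToEnd(tps) or consumer.seek(partition, offset) work the same way
                    consumer.seekToBeginning(tps);
                }

            });
}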

How to copy files by SFTP sequentially (Spring integration)?

I have to copy files A and B sequentially to a remote folder. It is important that B is sent only after A has been sent, or at least at the same time, but not before.
I've read the docs, but it's not clear. My idea is to put the 2 messages into the same channel, but I don't know whether the files linked to these 2 messages will be sent sequentially.
@Component
public class JobExportExecutionsRouter {
    ...
    @Autowired
    private MessageChannel sftpIncrExportChannel;
    ...
    @Router
    public List<String> routeJobExecution(JobExecution jobExecution) {
        final List<String> routeToChannels = new ArrayList<String>();
        ...
        sftpIncrExportChannel.send(MessageBuilder.withPayload(fileA).build());
        sftpIncrExportChannel.send(MessageBuilder.withPayload(fileB).build());
        routeToChannels.add("sftpIncrExportChannel");
        return routeToChannels;
    }
}
My XML configuration contains:
<int:channel id="sftpIncrExportChannel">
<int:queue/>
</int:channel>
...
<int-sftp:outbound-channel-adapter session-factory="sftpSessionFactory" channel="sftpIncrExportChannel" charset="UTF8" remote-directory="${export.incr.sftp.dir}" />
...
<bean id="sftpSessionFactory"
class="org.springframework.integration.sftp.session.DefaultSftpSessionFactory">
<property name="host" value="${export.incr.sftp.dir}"/>
<property name="user" value="${export.incr.sftp.user}"/>
<property name="password" value="${export.incr.sftp.password}"/>
</bean>
Do you have suggestions?
If you remove the <queue/> from the channel, they will run sequentially on your calling thread.
If you use a queue channel, you need a poller; but as long as the poller does not have a task-executor, the messages will be sent sequentially on the poller thread. The next poll doesn't happen until the current one completes.
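A minimal sketch of such a single-threaded default poller in Java config, assuming annotation-based configuration is an option (the 500 ms interval is arbitrary):

@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerMetadata defaultPoller() {
    PollerMetadata pollerMetadata = new PollerMetadata();
    // no task-executor is set, so queued messages are handed to the
    // SFTP adapter one at a time on the single poller thread
    pollerMetadata.setTrigger(new PeriodicTrigger(500));
    return pollerMetadata;
}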

PropertiesPersistingMetadataStore not writing to file

I am using SftpSimplePatternFileListFilter and SftpPersistentAcceptOnceFileListFilter along with a metadata store, but I noticed that it is not flushing the entries to the file. I never see the flush() method being called on PropertiesPersistingMetadataStore, which ultimately invokes the saveMetadata() method.
Here is what my config looks like:
<bean id="compositeFilter" class="org.springframework.integration.file.filters.CompositeFileListFilter">
<constructor-arg>
<list>
<bean class="org.springframework.integration.sftp.filters.SftpSimplePatternFileListFilter">
<constructor-arg value="*.txt" />
</bean>
<bean class="org.springframework.integration.sftp.filters.SftpPersistentAcceptOnceFileListFilter">
<constructor-arg name="store" ref="metadataStore"/>
<constructor-arg value="myapp"/>
</bean>
</list>
</constructor-arg>
</bean>
<bean name="metadataStore" class="org.springframework.integration.metadata.PropertiesPersistingMetadataStore">
<property name="baseDirectory" value="/tmp/"/>
</bean>
By default PropertiesPersistingMetadataStore flushes to the file on applicationContext destroy:
@Override
public void close() throws IOException {
    flush();
}

@Override
public void flush() {
    saveMetadata();
}

@Override
public void destroy() throws Exception {
    flush();
}
Starting with 4.1.2 you can invoke flush() manually at runtime, e.g. periodically with <task:scheduled-tasks> or from some <int:outbound-channel-adapter>.
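For example, a periodic flush could look like this sketch; it assumes @EnableScheduling (or an equivalent <task:scheduled-tasks> definition) is active and that the metadataStore bean is injectable:

@Autowired
private PropertiesPersistingMetadataStore metadataStore;

// persist the accepted-file entries every minute instead of only on shutdown
@Scheduled(fixedDelay = 60000)
public void flushMetadataStore() {
    this.metadataStore.flush();
}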
Feel free to ask for more information!

Spring integration message-store rolled back before error handler

I need to handle certain error conditions within my Spring Integration flow. My flow uses a message store and sets the error channel on the poller. I had thought that if I handled the message in the error handler, the rollback would not occur, but the message store remove (delete) is being rolled back before the error flow is even executed.
Here is a pseudo-flow that duplicates my issue.
<int:channel id="rollbackTestInput" >
<int:queue message-store="messageStore"/>
</int:channel>
<int:bridge input-channel="rollbackTestInput" output-channel="createException" >
<int:poller fixed-rate="50"
error-channel="myErrorChannel">
<int:transactional />
</int:poller>
</int:bridge>
<int:transformer input-channel="createException" output-channel="infoLogger"
expression="T(GarbageToForceException).doesNotExist()" />
<int:channel id="myErrorChannel">
<int:queue/>
</int:channel>
<!-- JDBC Message Store -->
<bean id="messageStore" class="org.springframework.integration.jdbc.JdbcMessageStore">
<property name="dataSource">
<ref bean="dataSource" />
</property>
</bean>
<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager" >
<property name="dataSource" ref="dataSource"/>
This flow will result in an infinite rollback/poll loop. How can I handle the error and not roll back?
That's correct behavior. The error-channel logic is applied around the TX advice for the polling task.
The code looks like:
@Override
public void run() {
    taskExecutor.execute(new Runnable() {

        @Override
        public void run() {
            int count = 0;
            while (initialized && (maxMessagesPerPoll <= 0 || count < maxMessagesPerPoll)) {
                try {
                    if (!pollingTask.call()) {
                        break;
                    }
                    count++;
                }
                catch (Exception e) {
                    if (e instanceof RuntimeException) {
                        throw (RuntimeException) e;
                    }
                    else {
                        throw new MessageHandlingException(new ErrorMessage(e), e);
                    }
                }
            }
        }

    });
}
The TX advice is applied around pollingTask.call(), but error handling is done from the taskExecutor:
this.taskExecutor = new ErrorHandlingTaskExecutor(this.taskExecutor, this.errorHandler);
where your error-channel is configured on that errorHandler, a MessagePublishingErrorHandler.
To reach your requirements, you can try a transaction-synchronization-factory on the <poller> (referenced via its <int:transactional synchronization-factory="txSyncFactory"/> sub-element):
<int:transaction-synchronization-factory id="txSyncFactory">
    <int:after-rollback channel="myErrorChannel" />
</int:transaction-synchronization-factory>
Or supply your <int:transformer> with a <request-handler-advice-chain>; with trapException set to true, the exception is not rethrown to the poller, so the transaction commits:
<int:request-handler-advice-chain>
    <bean class="org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice">
        <property name="onFailureExpression" value="#exception" />
        <property name="failureChannel" value="myErrorChannel" />
        <property name="trapException" value="true" />
    </bean>
</int:request-handler-advice-chain>
I have found a different solution that meets my requirements and is less invasive.
Instead of using <int:transactional> on the poller, I can use an advice chain that specifies which exceptions should be rolled back and which should not. For my case, I do not want to roll back most exceptions, so I set no-rollback-for Throwable and list the specific exceptions I do want to roll back.
<int:bridge input-channel="rollbackTestInput"
output-channel="createException">
<int:poller fixed-rate="50"
error-channel="myErrorChannel">
<int:advice-chain>
<int:ref bean="txAdvice"/>
</int:advice-chain>
</int:poller>
</int:bridge>
<tx:advice id="txAdvice" transaction-manager="transactionManager">
<tx:attributes>
<tx:method name="*" rollback-for="javax.jms.JMSException"
no-rollback-for="java.lang.Throwable" />
</tx:attributes>
</tx:advice>
