Spring Integration Kafka threading config - spring-integration

I'm using spring-integration-kafka 1.1.0 with the following config. I don't quite understand the streams setting. When I increase it, does Spring automatically spawn more threads to handle the messages? For example, with streams=2, do the correlated transformer and service-activator both run on 2 threads? I feel like I'm missing some task-executor configuration, but I'm not sure how to set it up. Any hint is appreciated. Thanks.
<int:poller default="true" fixed-delay="10"/>
<int:channel id="tag.track">
</int:channel>
<int-kafka:inbound-channel-adapter id="kafkaInboundChannelAdapterForTagTrack" kafka-consumer-context-ref="consumerContextForTagTrack" auto-startup="true" channel="tag.track">
</int-kafka:inbound-channel-adapter>
<int-kafka:consumer-context id="consumerContextForTagTrack"
consumer-timeout="${kafka.consumer.timeout}" zookeeper-connect="zookeeperConnect">
<int-kafka:consumer-configurations>
<int-kafka:consumer-configuration group-id="${kafka.consumer.group.track}" max-messages="200">
<int-kafka:topic id="tag.track" streams="2" />
</int-kafka:consumer-configuration>
</int-kafka:consumer-configurations>
</int-kafka:consumer-context>
<int:channel id="tag.track.transformed">
<int:interceptors>
<int:wire-tap channel="event.logging" />
</int:interceptors>
</int:channel>
<int:transformer id="kafkaMessageTransformerForTagTrack"
ref="kafkaMessageTransformer" input-channel="tag.track" method="transform"
output-channel="tag.track.transformed" />
<int:service-activator input-channel="tag.track.transformed" ref="tagTrackMessageHandler" method="handleTagMessage">
<int:request-handler-advice-chain>
<ref bean="userTagRetryAdvice" />
</int:request-handler-advice-chain>
</int:service-activator>
I also tried the message-driven-channel-adapter, but couldn't get it to work; the following config doesn't pick up any messages. I also tried org.springframework.integration.kafka.listener.KafkaTopicOffsetManager, but it complains that the offset management topic cannot have more than one partition. Also, how do I configure the consumer group with this adapter?
Is there a detailed example of how to use the message-driven-channel-adapter? The instructions on the project page are pretty high level.
<int:channel id="tag.track">
<int:queue capacity="100"/>
</int:channel>
<bean id="kafkaConfiguration" class="org.springframework.integration.kafka.core.ZookeeperConfiguration">
<constructor-arg ref="zookeeperConnect"/>
</bean>
<bean id="connectionFactory" class="org.springframework.integration.kafka.core.DefaultConnectionFactory">
<constructor-arg ref="kafkaConfiguration"/>
</bean>
<bean id="decoder" class="org.springframework.integration.kafka.serializer.common.StringDecoder"/>
<int-kafka:message-driven-channel-adapter
id="adapter"
channel="tag.track"
connection-factory="connectionFactory"
key-decoder="decoder"
payload-decoder="decoder"
max-fetch="100"
topics="tag.track"
auto-startup="true"
/>

The streams property has nothing to do with Spring itself; it is simply passed to Kafka when ConsumerConnector.createMessageStreams() is invoked (each topic/streams entry goes into the map argument).
Refer to the Kafka documentation.
EDIT:
When using the high-level consumer, the kafka inbound channel adapter is polled, so the threads on which the downstream integration flow runs are not related to the kafka client threads; they are managed in the poller configuration.
You could consider using the message-driven channel adapter instead.
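For completeness, downstream concurrency with the polled adapter is a poller concern, not a streams concern; a hedged sketch (the executor id and pool size here are illustrative assumptions, not from the original config):

```xml
<!-- Run each poll, and the downstream transformer/service-activator, on pool threads -->
<task:executor id="kafkaFlowExecutor" pool-size="4" />

<int:poller default="true" fixed-delay="10" task-executor="kafkaFlowExecutor" />
```

With a task-executor on the poller, the whole downstream flow for each poll runs on one of the executor's threads rather than on the single default scheduler thread.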

Related

Migration of Spring Batch from 2.2 to 4.x (XML Configuration of Partition Jobs)

I am migrating Spring Batch partition jobs with XML configuration to Spring Batch 4.x. I am trying to take advantage of an improvement in MessageChannelPartitionHandler where it checks for completion of remote steps via both a reply channel and datasource polling.
When I use this configuration:
<int:channel id="partitioned.jms.requests">
<int:dispatcher task-executor="springbatch.partitioned.jms.taskExecutor"/>
</int:channel>
<int:channel id="partitioned.jms.reply" />
<bean id="partitioned.jms.handler" class="org.springframework.batch.integration.partition.MessageChannelPartitionHandler">
<property name="messagingOperations">
<bean class="org.springframework.integration.core.MessagingTemplate">
<property name="defaultChannel" ref="partitioned.jms.requests"/>
</bean>
</property>
<property name="stepName" value="process.partitioned.step"/>
<property name="gridSize" value="${process.step.partitioned.gridSize}"/>
<property name="dataSource" ref="springbatch.repositoryDataSource" />
<property name="pollInterval" value="${springbatch.partition.verification.interval}"/>
</bean>
The step completes but I see an error in the logs.
no output-channel or replyChannel header available
I looked at the class and see I can add a replyChannel property to the MessageChannelPartitionHandler class. If I add the following:
<property name="replyChannel" ref="claim.acp.process.partitioned.jms.reply"/>
I get an error back that a pollable channel is needed.
How do I create a pollable channel (assuming from the same JMS queue)?
You need to show the rest of your configuration.
If you are using DB polling for the results, set the output-channel on the jms outbound gateway to "nullChannel" and the replies received over JMS will be discarded.
Or, use an outbound channel adapter (instead of a gateway) (and an inbound-channel-adapter on the slaves). That avoids the replies being returned altogether.
You have to set pollRepositoryForResults to true.
To answer your specific question:
<int:channel id="replies">
<int:queue />
</int:channel>
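Putting those pieces together, a hedged sketch (the channel id is an assumption for illustration): back the reply channel with a queue so it is pollable, and reference it from the handler.

```xml
<!-- A queue-backed (pollable) channel for partition step results -->
<int:channel id="partition.replies">
    <int:queue capacity="100" />
</int:channel>

<!-- inside the MessageChannelPartitionHandler bean definition -->
<property name="replyChannel" ref="partition.replies" />
```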

Inconsistency Dequeue issues with Spring Integration with Oracle AQ

I am using Spring integration with Oracle AQ,configuration code is as below.
Currently with the below configuration,the service activator is not getting invoked consistently with even the dequeue was successful at oracle end, unable to trace the message after successful Dequeue with the application logs, not even single error message shown on the log. Tried Debug,Trace and info options in the jms adaptor, but no clue from the log details. I have verified with Oracle team on enque and deque messages, but the health check reports are clearing mentioning the messages are dequeued successfully.
Badly required your help to get rid of this inconsitency behaviour while invoking the service activator using spring
<int:logging-channel-adapter id="jmslogger" log-full-message="true" level="TRACE"/>
<!-- Oracle Advanced Queue Integration -->
<bean id="jdbc4NativeJdbcExtractor"
class="org.springframework.jdbc.support.nativejdbc.Jdbc4NativeJdbcExtractor"
p:connectionType="oracle.jdbc.driver.OracleConnection" />
<orcl:aq-jms-connection-factory
id="oracleAqConnectionFactory"
use-local-data-source-transaction="true"
native-jdbc-extractor="jdbc4NativeJdbcExtractor"
data-source="dataSource"/>
<!-- Siebel Atlas Service Request - Oracle Advanced Queue Integration -->
<bean id="jmsJsonMessageConverter" class="org.springframework.jms.support.converter.MappingJackson2MessageConverter"
p:typeIdPropertyName="javaDtoClass"
/>
<int:channel id="submitSiebelAtlasCreateServiceRequestOutboundRequestChannel" ></int:channel>
<int:channel id="submitSiebelAtlasCreateServiceRequestOutboundRequestEnrichedChannel" >
<int:interceptors>
<int:wire-tap channel="jmslogger"/>
</int:interceptors>
</int:channel>
<int:channel id="createSiebelAtlasCreateServiceRequestOutboundReplyChannel" />
<int:logging-channel-adapter id="createSiebelAtlasCreateServiceRequestOutboundReplyChannelLogger"
channel="createSiebelAtlasCreateServiceRequestOutboundReplyChannel" />
<int:gateway
id="submitSiebelAtlasCreateServiceRequestMessagingService"
service-interface="ServiceRequestOutboundGatewayMessagingService"
default-request-channel="submitSiebelAtlasCreateServiceRequestOutboundRequestChannel"
default-reply-channel="submitSiebelAtlasCreateServiceRequestOutboundReplyChannel" />
<int:header-enricher input-channel="submitSiebelAtlasCreateServiceRequestOutboundRequestChannel" output-channel="submitSiebelAtlasCreateServiceRequestOutboundRequestEnrichedChannel">
<int:correlation-id expression="payload.getRequestId()"/>
</int:header-enricher>
<!-- Outbound channel adapter, meaning messages are being sent to / enqueued in AQ -->
<int-jms:outbound-channel-adapter
id="siebelAtlasCreateServiceRequestJmsOutboundChannelAdapter"
destination-name="Q_NAME"
channel="submitSiebelAtlasCreateServiceRequestOutboundRequestEnrichedChannel"
connection-factory="oracleAqConnectionFactory"
message-converter="jmsJsonMessageConverter"
auto-startup="true">
</int-jms:outbound-channel-adapter>
<int:service-activator
output-channel="createSiebelAtlasCreateServiceRequestOutboundReplyChannel"
input-channel="createSiebelAtlasCreateServiceRequestInboundRequestChannel"
ref="createCustomerRelationshipsSiebelAtlasServiceRequestService"
method="create">
</int:service-activator>
<int:channel id="createSiebelAtlasCreateServiceRequestInboundRequestChannel">
<int:interceptors>
<int:wire-tap channel="jmslogger"/>
</int:interceptors>
</int:channel>
<!-- Inbound message-driven channel adapter, meaning messages are being consumed / dequeued from AQ -->
<int-jms:message-driven-channel-adapter connection-factory="oracleAqConnectionFactory"
message-converter="jmsJsonMessageConverter"
destination-name="Q_NAME"
channel="createSiebelAtlasCreateServiceRequestInboundRequestChannel"
acknowledge="transacted"
max-concurrent-consumers="5"
transaction-manager="transactionManager"
auto-startup="true"
concurrent-consumers="2" />

Spring Integration Task Executor Memory Leakage

My module is built on Spring Integration and pushes messages to RabbitMQ.
<task:executor id="bulkChannelExecutor" keep-alive="50" pool-size="50-100" queue-capacity="500"></task:executor>
<int:channel id="logIngesterRestEndpointBulk" >
<int:dispatcher task-executor="bulkChannelExecutor" failover="false" />
</int:channel>
While load testing, it is not able to handle heavy loads (100 concurrent users), causing messages or requests to be lost. If I remove pool-size, giving an unbounded pool size, it is able to handle heavy loads, but does that create memory/thread leaks?
The REST gateway receives JSON input and passes it to a filter and then to a chain, where the JSON messages are parsed, split into individual messages, and then pushed to RabbitMQ.
<task:executor id="bulkChannelExecutor" keep-alive="50" pool-size="100-500"
queue-capacity="500"
></task:executor>
<int:channel id="logIngesterRestEndpointBulk" >
<int:dispatcher task-executor="bulkChannelExecutor" failover="false" />
</int:channel>
<int-http:inbound-gateway id="logIngesterGatewayBulk" auto-startup="true"
supported-methods="POST" request-channel="logIngesterRestEndpointBulk"
path="/rest/log/bulk" error-channel="errorChannel" reply-timeout="50"
request-payload-type="java.lang.String">
</int-http:inbound-gateway>
<int:channel id="filterChannelbulk">
</int:channel>
<int:channel id="messageOutputChannel" >
</int:channel>
<int:filter input-channel="logIngesterRestEndpointBulk"
throw-exception-on-rejection="true" method="validate" ref="payloadValidation"
output-channel="filterChannelbulk">
</int:filter>
<int:chain input-channel="filterChannelbulk" output-channel="messageOutputChannel" id="chaining" >
<int:splitter id="splitter" ref="payloadSplitter" method="splitPayLoad" >
</int:splitter>
<int:transformer id="logMessageTransformerbulk" ref="logMessageHeaderTransformer"
method="transform">
</int:transformer>
</int:chain>
<int:service-activator input-channel="errorChannel"
ref="responseHandler" method="handleFailedPayLoad" >
</int:service-activator>
<!-- Start RabbitMQ Configuration -->
<int:channel id="ackchannel">
</int:channel>
<int-amqp:outbound-channel-adapter
id="amqpAdapter" channel="messageOutputChannel" amqp-template="amqpTemplate" lazy-connect="false" confirm-ack-channel="ackchannel" confirm-correlation-expression="headers['amqp_publishConfirm']"
exchange-name="dhp_exchange" routing-key-expression="headers['routingKey']" >
</int-amqp:outbound-channel-adapter>
<int:service-activator id="ackservice" input-channel="ackchannel" ref="responseHandler" method="confirmAck" />
Since you are using HTTP, you should remove the task executor and allow the web container to manage the threads. If you need more threads, do it through the web container configuration; don't use a thread handoff here; it really serves no purpose and can cause the issues you describe.
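In other words, a plain direct channel (no dispatcher) lets the servlet container's request thread run the whole flow; a minimal sketch against the channel above:

```xml
<!-- Direct channel: the HTTP request thread runs the filter/chain/AMQP send itself -->
<int:channel id="logIngesterRestEndpointBulk" />
```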

Java 7 DSL representation for spring integration "int-jms:message-driven-channel-adapter"

I have code that reads messages from IBM MQ with the Spring Integration config below. I need to convert it to the Java DSL (Java 7 compatible) using Spring Integration annotations.
<bean id="inQueue" class="com.ibm.mq.jms.MQQueue" depends-on="esbQueueConnectionFactory">
<constructor-arg value="******" />
</bean>
<int:channel id="readFromChannel">
<int:interceptors>
<!-- <int:wire-tap channel="logger" /> -->
</int:interceptors>
</int:channel>
<int-jms:message-driven-channel-adapter
id="jmsInAdapter" connection-factory="esbQueueConnectionFactory"
destination="inQueue" channel="readFromChannel" />
<bean id="msgProcesser" class="com.gap.si.service.MessageProcessService" />
<int:service-activator id="servAct"
input-channel="readFromChannel" ref="msgProcesser" method="processMessage" />
If you want to use the Java DSL, see the reference manual.
Use the Jms factory class:
IntegrationFlows.from(Jms.messageDrivenChannelAdapter(connectionFactory)
        .destination("inQueue"))
    .handle(msgProcesser, "processMessage")
    .get();
If you want to use just annotations, see the Spring Integration reference manual.
The message-driven adapter is simply a @Bean of type JmsMessageDrivenEndpoint, which gets a listener container and a ChannelPublishingJmsMessageListener.

Spring Integration Threads Parked at Service Activator

I am having issues with threads getting parked at my service activators, which leads to files hanging in the SftpGatewayChannel once the pool is depleted. I think it is related to the service activators having a void return, which is correct because they only increment metrics.
I was able to work around the issue by adding a default-reply-timeout to the SftpGateway, but this is not ideal since there is retry advice and I don't want the threads to time out if there is a connection issue. I would like a solution that returns the threads to the pool after a successful upload and the call to the "Success" service activator.
<task:executor id="Tasker" rejection-policy="CALLER_RUNS" pool-size="${MaxThreads}" />
<int:channel id="SftpGatewayChannel">
<int:dispatcher task-executor="Tasker" />
</int:channel>
<int:service-activator id="SegmentStart" input-channel="SftpGatewayChannel" ref="SftpGateway" />
<int:gateway id="SftpGateway" default-request-channel="SftpOutboundChannel" error-channel="ErrorChannel" />
<int:channel id="SftpOutboundChannel" datatype="java.lang.String,java.io.File,byte[]" />
<int-sftp:outbound-channel-adapter id="SftpOutboundAdapter"
session-factory="SftpCachingSessionFactory" channel="SftpOutboundChannel" charset="UTF-8" >
<int-sftp:request-handler-advice-chain>
<ref bean="exponentialRetryAdvice" />
<bean id="SuccessAdvice" class="org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice" >
<property name="successChannel" ref="SuccessChannel"/>
<property name="onSuccessExpression" value="true"/>
</bean>
</int-sftp:request-handler-advice-chain>
</int-sftp:outbound-channel-adapter>
<int:channel id="ErrorChannel">
<int:interceptors>
<int:wire-tap channel="FailureChannel" />
</int:interceptors>
</int:channel>
<int:channel id="AttemptChannel" />
<int:channel id="SuccessChannel" />
<int:channel id="FailureChannel" />
<int:service-activator id="AttemptMetrics" input-channel="AttemptChannel"
expression="T(MetricsCounter).addAttempt()" />
<int:service-activator id="SuccessMetrics" input-channel="SuccessChannel"
expression="T(MetricsCounter).addSuccesses(inputMessage.Headers.messages.size())" />
<int:service-activator id="FailureMetrics" input-channel="FailureChannel"
expression="T(MetricsCounter).addFailures(payload.getFailedMessage().Headers.messages.size())" />
Yes, gateways expect a reply by default. Instead of using the default RequestReplyExchanger, you could use a service-interface method with a void return: void process(Message<?> m).
Alternatively, as you have done, simply add default-reply-timeout="0" to your gateway and the thread will return immediately instead of waiting for a reply that will never come.
... but this is not ideal ...
The reply timeout clock only starts when the thread returns to the gateway, so it will have no impact on the downstream flow.
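Applied to the gateway in the question, that looks like (other attributes unchanged):

```xml
<!-- The timeout clock starts only after the flow returns to the gateway,
     so retry advice downstream is unaffected -->
<int:gateway id="SftpGateway" default-request-channel="SftpOutboundChannel"
    error-channel="ErrorChannel" default-reply-timeout="0" />
```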