While using Spring for Apache Kafka I am able to read messages from the topic based on a timestamp with the code below:
ConsumerRecords<String, String> records = consumer.poll(100);
if (flag) {
    Map<TopicPartition, Long> query = new HashMap<>();
    query.put(new TopicPartition(kafkaTopic, 0), millisecondsFromEpochToReplay);
    Map<TopicPartition, OffsetAndTimestamp> result = consumer.offsetsForTimes(query);
    if (result != null) {
        records = ConsumerRecords.empty();
    }
    result.entrySet().stream()
            .forEach(entry -> consumer.seek(entry.getKey(), entry.getValue().offset()));
    flag = false;
}
How can the same functionality be achieved using the Spring Integration DSL with a KafkaMessageDrivenChannelAdapter?
How can we set up the IntegrationFlow and read messages from the topic based on the timestamp?
Configure the adapter's listener container with a ConsumerAwareRebalanceListener and perform the lookup/seeks when the partitions are assigned.
EDIT
Using Spring Boot (but you can apply the same configuration however you create the container)...
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.group-id=so54664761
and
@SpringBootApplication
public class So54664761Application {

    public static void main(String[] args) {
        SpringApplication.run(So54664761Application.class, args);
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template) {
        return args -> template.send("so54664761", "foo");
    }

    @Bean
    public NewTopic topic() {
        return new NewTopic("so54664761", 1, (short) 1);
    }

    @Bean
    public IntegrationFlow flow(ConcurrentKafkaListenerContainerFactory<String, String> containerFactory) {
        ConcurrentMessageListenerContainer<String, String> container = container(containerFactory);
        return IntegrationFlows.from(new KafkaMessageDrivenChannelAdapter<>(container))
                .handle(System.out::println)
                .get();
    }

    @Bean
    public ConcurrentMessageListenerContainer<String, String> container(
            ConcurrentKafkaListenerContainerFactory<String, String> containerFactory) {

        ConcurrentMessageListenerContainer<String, String> container = containerFactory.createContainer("so54664761");
        container.getContainerProperties().setConsumerRebalanceListener(new ConsumerAwareRebalanceListener() {

            @Override
            public void onPartitionsAssigned(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
                System.out.println("Partitions assigned - do the lookup/seeks here");
            }

        });
        return container;
    }

}
and
Partitions assigned - do the lookup/seeks here
GenericMessage [payload=foo, headers={kafka_offset=0, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer@2f5b2297, kafka_timestampType=CREATE_TIME, kafka_receivedMessageKey=null, kafka_receivedPartitionId=0, kafka_receivedTopic=so54664761, kafka_receivedTimestamp=1550241100112}]
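Inside onPartitionsAssigned(), the timestamp lookup and seek from the question's code can be reused; a minimal sketch (assuming millisecondsFromEpochToReplay is available to the listener):

@Override
public void onPartitionsAssigned(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
    // look up the offset for the replay timestamp on each newly assigned partition
    Map<TopicPartition, Long> query = new HashMap<>();
    partitions.forEach(tp -> query.put(tp, millisecondsFromEpochToReplay));
    Map<TopicPartition, OffsetAndTimestamp> result = consumer.offsetsForTimes(query);
    result.forEach((tp, offsetAndTimestamp) -> {
        if (offsetAndTimestamp != null) { // null if no record exists at or after the timestamp
            consumer.seek(tp, offsetAndTimestamp.offset());
        }
    });
}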
We are migrating to Redis from RabbitMQ in our microservice applications.
Here is our service activator
@ServiceActivator(inputChannel = ApplicationEventChannelNames.REMOTE_CHANNEL)
public void handleApplicationEvent(@Header(value = ApplicationEventHeaders.APPLICATION_EVENT) final ApplicationEvent event,
        @Payload Object message) {
    ...
}
Initially we had a problem where we were losing the application event in the SimpleMessageConverter. We solved it by implementing a CustomRedisMessageConverter, putting the application event into the payload in the fromMessage method, and retrieving it from the payload and creating new message headers with the application event in the toMessage method.
@Override
public Object fromMessage(Message<?> message, Class<?> targetClass) {
    if (message.getHeaders().get(ApplicationEventHeaders.APPLICATION_EVENT) != null) {
        Map<String, Object> map = new HashMap<>();
        map.put("headers", ((ApplicationEvent) message.getHeaders().get(ApplicationEventHeaders.APPLICATION_EVENT)).getName());
        map.put("payload", message.getPayload());
        GenericMessage<Map<String, Object>> msg = new GenericMessage<>(map, message.getHeaders());
        return super.fromMessage(msg, targetClass);
    }
    return super.fromMessage(message, targetClass);
}

@Override
public Message<?> toMessage(Object payload, MessageHeaders headers) {
    try {
        final Map<String, ?> message = new ObjectMapper().readValue((String) payload, new TypeReference<Map<String, ?>>() {});
        if (message.get("headers") != null) {
            final Map<String, Object> messageHeaders = new HashMap<>(headers);
            messageHeaders.put(ApplicationEventHeaders.APPLICATION_EVENT, new ApplicationEvent((String) message.get("headers")));
            return super.toMessage(message.get("payload"), new MessageHeaders(messageHeaders));
        }
    } catch (JsonProcessingException exception) {
        /* Intentionally left blank */
    }
    return super.toMessage(payload, headers);
}
We are wondering if there is a better approach for doing this.
Lastly, the payload in the service activator comes as a LinkedHashMap, but we want it to be an object. With RabbitMQ this was handled.
Is there any way to do the same in Redis? Or do we use headers to keep track of the type of the payload and manually convert it into an object?
UPDATE - Redis Configuration
@Bean
public RedisInboundChannelAdapter applicationEventInboundChannelAdapter(@Value(value = "${com.xxx.xxx.xxx.integration.spring.topic}") String topic,
        MessageChannel applicationEventRemoteChannel,
        RedisConnectionFactory connectionFactory) {
    final RedisInboundChannelAdapter inboundChannelAdapter = new RedisInboundChannelAdapter(connectionFactory);
    inboundChannelAdapter.setTopics(topic);
    inboundChannelAdapter.setOutputChannel(applicationEventRemoteChannel);
    inboundChannelAdapter.setErrorChannel(errorChannel());
    inboundChannelAdapter.setMessageConverter(new CustomRedisMessageConverter());
    return inboundChannelAdapter;
}
#ServiceActivator(inputChannel = "errorChannel")
public void processError(MessageHandlingException exception) {
try {
logger.error(
"Could not process {}, got exception: {}",
exception.getFailedMessage().getPayload(),
exception.getMessage());
logger.error(
ExceptionUtils.readStackTrace(exception));
} catch (Throwable throwable) {
logger.error(
"Got {} during processing with message: {} ",
MessageHandlingException.class.getSimpleName(),
exception);
}
}
@Bean
@ServiceActivator(inputChannel = ApplicationEventChannelNames.LOCAL_CHANNEL)
public RedisPublishingMessageHandler redisPublishingMessageHandler(@Value(value = "${com.xxx.xxx.xxx.integration.spring.topic}") String topic,
        RedisConnectionFactory redisConnectionFactory) {
    final RedisPublishingMessageHandler redisPublishingMessageHandler = new RedisPublishingMessageHandler(redisConnectionFactory);
    redisPublishingMessageHandler.setTopic(topic);
    redisPublishingMessageHandler.setSerializer(new Jackson2JsonRedisSerializer<>(String.class));
    redisPublishingMessageHandler.setMessageConverter(new CustomRedisMessageConverter());
    return redisPublishingMessageHandler;
}
/*
 * MessageChannel
 */
@Bean
public MessageChannel errorChannel() {
    return new DirectChannel();
}
Redis does not support headers, so you have to embed them into the body. See EmbeddedJsonHeadersMessageMapper, which could be supplied to that org.springframework.integration.support.converter.SimpleMessageConverter on both sides.
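For example, a rough sketch (it assumes SimpleMessageConverter exposes a constructor taking the inbound and outbound message mappers; otherwise the corresponding setters would be used):

@Bean
public EmbeddedJsonHeadersMessageMapper embeddedHeadersMapper() {
    return new EmbeddedJsonHeadersMessageMapper();
}

@Bean
public SimpleMessageConverter embeddingMessageConverter(EmbeddedJsonHeadersMessageMapper mapper) {
    // the same mapper renders the headers into the JSON body when publishing
    // and restores them when a message is received
    return new SimpleMessageConverter(mapper, mapper);
}

The existing setMessageConverter(...) calls on the RedisInboundChannelAdapter and the RedisPublishingMessageHandler would then take this converter instead of the CustomRedisMessageConverter.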
I want to use EmbeddedKafkaBroker to test my flow, which involves a KafkaMessageDrivenChannelAdapter.
It looks like the consumer starts correctly and subscribes to the topic, but the handler is not triggered after pushing a message to the EmbeddedKafkaBroker.
@SpringBootTest(properties = {"...."}, classes = {....class})
@EmbeddedKafka
class IntTests {

    @BeforeAll
    static void setup() {
        embeddedKafka = new EmbeddedKafkaBroker(1, true, TOPIC);
        embeddedKafka.kafkaPorts(57412);
        embeddedKafka.afterPropertiesSet();
    }

    @Test
    void testit() throws InterruptedException {
        String ip = embeddedKafka.getBrokersAsString();
        Map<String, Object> configs = new HashMap<>(KafkaTestUtils.producerProps(embeddedKafka));
        Producer<String, String> producer = new DefaultKafkaProducerFactory<>(configs, new StringSerializer(), new StringSerializer()).createProducer();
        // Act
        producer.send(new ProducerRecord<>(TOPIC, "key", "{\"name\":\"Test\"}"));
        producer.flush();
        ....
    }

    ...
}
And the main class:
@Configuration
public class Kafka {

    @Bean
    public KafkaMessageDrivenChannelAdapter<String, String> adapter(KafkaMessageListenerContainer<String, String> container) {
        KafkaMessageDrivenChannelAdapter<String, String> kafkaMessageDrivenChannelAdapter = ..
        kafkaMessageDrivenChannelAdapter.setOutputChannelName("kafkaChannel");
        return kafkaMessageDrivenChannelAdapter;
    }

    @Bean
    public KafkaMessageListenerContainer<String, String> container() {
        ContainerProperties properties = new ContainerProperties(TOPIC);
        KafkaMessageListenerContainer<String, String> kafkaContainer = ...;
        return kafkaContainer;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:57412");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "group12");
        ...
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public PublishSubscribeChannel kafkaChannel() {
        return new PublishSubscribeChannel();
    }

    @Bean
    @ServiceActivator(inputChannel = "kafkaChannel")
    public MessageHandler handler() {
        return new MessageHandler() {

            @Override
            public void handleMessage(Message<?> message) throws MessagingException {
            }

        };
    }

    ...
}
In the log I do see:
clients.consumer.KafkaConsumer : [Consumer clientId=consumer-group12-1, groupId=group12] Subscribed to topic(s): TOPIC
ThreadPoolTaskScheduler : Initializing ExecutorService
KafkaMessageDrivenChannelAdapter : started bean 'adapter'; defined in: 'com.example.demo.demo.Kafka';
Having embeddedKafka = new EmbeddedKafkaBroker(1, true, TOPIC); and @EmbeddedKafka, you essentially start two separate Kafka clusters. See the ports option of @EmbeddedKafka if you want to change the random port for the embedded broker. But at the same time, it is better to rely on what Spring Boot provides for us with its auto-configuration.
See documentation for more info: https://docs.spring.io/spring-boot/docs/current/reference/html/spring-boot-features.html#boot-features-embedded-kafka. Pay attention to the bootstrapServersProperty = "spring.kafka.bootstrap-servers" property.
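For example, a sketch of a test class relying on the auto-configuration (the topic name here is a placeholder):

@SpringBootTest
@EmbeddedKafka(partitions = 1, topics = "someTopic",
        bootstrapServersProperty = "spring.kafka.bootstrap-servers")
class IntTests {

    // No manual EmbeddedKafkaBroker: @EmbeddedKafka starts the broker and exposes its
    // address through spring.kafka.bootstrap-servers, so Boot's auto-configured
    // consumer and producer factories connect to it instead of a hard-coded port.

}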
UPDATE
In your test you have this @SpringBootTest(classes = {Kafka.class}). When I remove that classes attribute, everything starts to work. The problem is that your config class is not auto-configuration aware, therefore you don't have Spring Integration initialized properly and the message is not consumed from the channel. There might be some other effects as well. But still: it is better to rely on the auto-configuration, so let your test see that @SpringBootApplication annotation.
Related to SO question: Spring Integration Java DSL using JMS retry/redelivery
Using a transacted poller and JmsTransactionManager on a connection factory with maximumRedeliveries set to 3 results in a doubling of the actual redelivery attempts.
How can I get this to honor the redelivery settings of the connection factory?
My connectionFactory is built as:
#Bean (name="spring-int-connection-factory")
ActiveMQConnectionFactory jmsConnectionFactory(){
return buildConnectionFactory(
brokerUrl,
DELAY_2_SECS,
MAX_REDELIVERIES,
"spring-int");
}
public static ActiveMQConnectionFactory buildConnectionFactory(String brokerUrl, Long retryDelay, Integer maxRedeliveries, String clientIdPrefix){
ActiveMQConnectionFactory amqcf = new ActiveMQConnectionFactory();
amqcf.setBrokerURL(brokerUrl);
amqcf.setClientIDPrefix(clientIdPrefix);
if (maxRedeliveries != null) {
if (retryDelay == null) {
retryDelay = 500L;
}
RedeliveryPolicy rp = new org.apache.activemq.RedeliveryPolicy();
rp.setInitialRedeliveryDelay(retryDelay);
rp.setRedeliveryDelay(retryDelay);
rp.setMaximumRedeliveries(maxRedeliveries);
}
return amqcf;
}
My flow with the poller is as follows:
@Bean
public IntegrationFlow flow2(@Qualifier("spring-int-connection-factory") ConnectionFactory connectionFactory) {
    IntegrationFlow flow = IntegrationFlows.from(
            Jms.inboundAdapter(connectionFactory)
                    .configureJmsTemplate(t -> t.receiveTimeout(1000).sessionTransacted(true))
                    .destination(INPUT_DIRECT_QUEUE),
            e -> e.poller(Pollers
                    .fixedDelay(5000)
                    .transactional()
                    .errorChannel("customErrorChannel")
                    .maxMessagesPerPoll(2))
            ).handle(this.msgHandler).get();
    return flow;
}
My errorChannel handler simply re-throws which causes JMS redelivery to happen.
When I run this with the handler set to always throw an exception, I see that the message handler actually receives the message 7 times (1 initial and 6 redeliveries).
I expected only 3 redeliveries according to my connectionFactory config.
Any ideas what is causing the doubling of attempts and how to mitigate it?
This works fine for me - stops at 4...
@SpringBootApplication
public class So51792909Application {

    private static final Logger logger = LoggerFactory.getLogger(So51792909Application.class);

    public static void main(String[] args) {
        SpringApplication.run(So51792909Application.class, args);
    }

    @Bean
    public ApplicationRunner runner(JmsTemplate template) {
        return args -> {
            for (int i = 0; i < 1; i++) {
                template.convertAndSend("foo", "test");
            }
        };
    }

    @Bean
    public IntegrationFlow flow(ConnectionFactory connectionFactory) {
        return IntegrationFlows.from(Jms.inboundAdapter(connectionFactory)
                        .destination("foo"), e -> e
                                .poller(Pollers
                                        .fixedDelay(5000)
                                        .transactional()
                                        .maxMessagesPerPoll(2)))
                .handle((p, h) -> {
                    System.out.println(h.get("JMSXDeliveryCount"));
                    try {
                        Thread.sleep(2000);
                    }
                    catch (InterruptedException e1) {
                        Thread.currentThread().interrupt();
                    }
                    throw new RuntimeException("foo");
                })
                .get();
    }

    @Bean
    public JmsTransactionManager transactionManager(ConnectionFactory cf) {
        return new JmsTransactionManager(cf);
    }

    @Bean
    public ActiveMQConnectionFactory amqCF() {
        ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
        RedeliveryPolicy rp = new RedeliveryPolicy();
        rp.setMaximumRedeliveries(3);
        cf.setRedeliveryPolicy(rp);
        return cf;
    }

    public CachingConnectionFactory connectionFactory() {
        return new CachingConnectionFactory(amqCF());
    }

    @JmsListener(destination = "ActiveMQ.DLQ")
    public void listen(String in) {
        logger.info(in);
    }

}
I am using the SFTP Source in Spring Cloud Data Flow and it works for getting files from the directory defined in sftp:remote-dir:/home/someone/source. Now I have many subfolders under the remote-dir and I want to recursively get all the files under this directory which match the pattern. I am trying to use filename-regex:, but so far it only works on one level. How do I recursively get the files I need?
The inbound channel adapter does not support recursion; use a custom source with the outbound gateway with an MGET command, with recursion (-R).
The doc is missing that option; fixed in the current docs.
I opened an issue to create a standard app starter.
EDIT
With the Java DSL...
@SpringBootApplication
@EnableBinding(Source.class)
public class So44710754Application {

    public static void main(String[] args) {
        SpringApplication.run(So44710754Application.class, args);
    }

    // should store in Redis or similar for persistence
    private final ConcurrentMap<String, Boolean> processed = new ConcurrentHashMap<>();

    @Bean
    public IntegrationFlow flow() {
        return IntegrationFlows.from(source(), e -> e.poller(Pollers.fixedDelay(30_000)))
                .handle(gateway())
                .split()
                .<File>filter(p -> this.processed.putIfAbsent(p.getAbsolutePath(), true) == null)
                .transform(Transformers.fileToByteArray())
                .channel(Source.OUTPUT)
                .get();
    }

    private MessageSource<String> source() {
        return () -> new GenericMessage<>("foo/*");
    }

    private AbstractRemoteFileOutboundGateway<LsEntry> gateway() {
        AbstractRemoteFileOutboundGateway<LsEntry> gateway = Sftp.outboundGateway(sessionFactory(), "mget", "payload")
                .localDirectory(new File("/tmp/foo"))
                .options(Option.RECURSIVE)
                .get();
        gateway.setFileExistsMode(FileExistsMode.IGNORE);
        return gateway;
    }

    private SessionFactory<LsEntry> sessionFactory() {
        DefaultSftpSessionFactory sf = new DefaultSftpSessionFactory();
        sf.setHost("10.0.0.3");
        sf.setUser("ftptest");
        sf.setPassword("ftptest");
        sf.setAllowUnknownKeys(true);
        return new CachingSessionFactory<>(sf);
    }

}
And with Java config...
@SpringBootApplication
@EnableBinding(Source.class)
public class So44710754Application {

    public static void main(String[] args) {
        SpringApplication.run(So44710754Application.class, args);
    }

    @InboundChannelAdapter(channel = "sftpGate", poller = @Poller(fixedDelay = "30000"))
    public String remoteDir() {
        return "foo/*";
    }

    @Bean
    @ServiceActivator(inputChannel = "sftpGate")
    public SftpOutboundGateway mgetGate() {
        SftpOutboundGateway sftpOutboundGateway = new SftpOutboundGateway(sessionFactory(), "mget", "payload");
        sftpOutboundGateway.setOutputChannelName("splitterChannel");
        sftpOutboundGateway.setFileExistsMode(FileExistsMode.IGNORE);
        sftpOutboundGateway.setLocalDirectory(new File("/tmp/foo"));
        sftpOutboundGateway.setOptions("-R");
        return sftpOutboundGateway;
    }

    @Bean
    @Splitter(inputChannel = "splitterChannel")
    public DefaultMessageSplitter splitter() {
        DefaultMessageSplitter splitter = new DefaultMessageSplitter();
        splitter.setOutputChannelName("filterChannel");
        return splitter;
    }

    // should store in Redis, Zookeeper, or similar for persistence
    private final ConcurrentMap<String, Boolean> processed = new ConcurrentHashMap<>();

    @Filter(inputChannel = "filterChannel", outputChannel = "toBytesChannel")
    public boolean filter(File payload) {
        return this.processed.putIfAbsent(payload.getAbsolutePath(), true) == null;
    }

    @Bean
    @Transformer(inputChannel = "toBytesChannel", outputChannel = Source.OUTPUT)
    public FileToByteArrayTransformer toBytes() {
        FileToByteArrayTransformer transformer = new FileToByteArrayTransformer();
        return transformer;
    }

    private SessionFactory<LsEntry> sessionFactory() {
        DefaultSftpSessionFactory sf = new DefaultSftpSessionFactory();
        sf.setHost("10.0.0.3");
        sf.setUser("ftptest");
        sf.setPassword("ftptest");
        sf.setAllowUnknownKeys(true);
        return new CachingSessionFactory<>(sf);
    }

}
I'm having issues using manual acknowledgements with the KafkaTopicOffsetManager. When acknowledge() is called, the topic begins to get spammed repeatedly. Kafka has log.cleaner.enable set to true and the topic is using cleanup.policy=compact. Thanks for any help.
Config:
@Bean
public ZookeeperConfiguration zookeeperConfiguration() {
    ZookeeperConfiguration zookeeperConfiguration = new ZookeeperConfiguration(kafkaConfig.getZookeeperAddress());
    zookeeperConfiguration.setClientId("clientId");
    return zookeeperConfiguration;
}

@Bean
public ConnectionFactory connectionFactory() {
    return new DefaultConnectionFactory(zookeeperConfiguration());
}

@Bean
public TestMessageHandler messageListener() {
    return new TestMessageHandler();
}

@Bean
public OffsetManager offsetManager() {
    ZookeeperConnect zookeeperConnect = new ZookeeperConnect(kafkaConfig.getZookeeperAddress());
    OffsetManager offsetManager = new KafkaTopicOffsetManager(zookeeperConnect, kafkaConfig.getTopic() + "_OFFSET");
    return offsetManager;
}

@Bean
public KafkaMessageListenerContainer kafkaMessageListenerContainer() {
    KafkaMessageListenerContainer kafkaMessageListenerContainer = new KafkaMessageListenerContainer(connectionFactory(), kafkaConfig.getTopic());
    kafkaMessageListenerContainer.setMessageListener(messageListener());
    kafkaMessageListenerContainer.setOffsetManager(offsetManager());
    return kafkaMessageListenerContainer;
}
Listener:
public class TestMessageHandler implements AcknowledgingMessageListener {

    private static final Logger logger = LoggerFactory.getLogger(TestMessageHandler.class);

    @Override
    public void onMessage(KafkaMessage message, Acknowledgment acknowledgment) {
        logger.info(message.toString());
        acknowledgment.acknowledge();
    }

}
The KafkaTopicOffsetManager needs its own topic to maintain the offset of the actual topic being consumed.
If you don't want to deal with decoding the message payload yourself (it's painful in my opinion), extend the listener from the abstract class AbstractDecodingAcknowledgingMessageListener and provide org.springframework.integration.kafka.serializer.common.StringDecoder as the decoder.
public class TestMessageHandlerDecoding extends AbstractDecodingAcknowledgingMessageListener {

    public TestMessageHandlerDecoding(Decoder keyDecoder, Decoder payloadDecoder) {
        super(keyDecoder, payloadDecoder);
    }

    @Override
    public void doOnMessage(Object key, Object payload, KafkaMessageMetadata metadata, Acknowledgment acknowledgment) {
        LOGGER.info("payload={}", payload);
    }

}