Is it possible to use KafkaMessageDrivenChannelAdapter (KafkaMessageListenerContainer) with a Poller to read from Kafka every x minutes?
I tried creating this bean:
@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerMetadata defaultPoller() {
    PollerMetadata pollerMetadata = new PollerMetadata();
    pollerMetadata.setTrigger(new PeriodicTrigger(5, TimeUnit.MINUTES));
    return pollerMetadata;
}
But it looks like it has no effect.
No; the KafkaMessageDrivenChannelAdapter is push, not pull, technology.
Use the inbound channel adapter instead.
https://docs.spring.io/spring-integration/docs/current/reference/html/kafka.html#kafka-inbound-pollable
@Bean
public IntegrationFlow flow(ConsumerFactory<String, String> cf) {
    return IntegrationFlow.from(Kafka.inboundChannelAdapter(cf, "myTopic")
                    .groupId("myDslGroupId"), e -> e.poller(Pollers.fixedDelay(5000)))
            .handle(System.out::println)
            .get();
}
or
@InboundChannelAdapter(channel = "fromKafka", poller = @Poller(fixedDelay = "5000"))
@Bean
public KafkaMessageSource<String, String> source(ConsumerFactory<String, String> cf) {
    KafkaMessageSource<String, String> source = new KafkaMessageSource<>(cf, "myTopic");
    source.setGroupId("myGroupId");
    source.setClientId("myClientId");
    return source;
}
You must ensure that the max.poll.interval.ms is larger than your poll interval, to avoid a rebalance.
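For example, a minimal sketch of a consumer factory that keeps max.poll.interval.ms above a five-minute poll interval (the bootstrap server, group id, and the exact timeout value are illustrative assumptions):
@Bean
public ConsumerFactory<String, String> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "myGroupId");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    // the poller fires every 5 minutes, so allow 6 minutes between polls before a rebalance
    props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 360_000);
    return new DefaultKafkaConsumerFactory<>(props);
}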
I want to use EmbeddedKafkaBroker to test my flow that involves KafkaMessageDrivenChannelAdapter.
It looks like the consumer starts correctly and subscribes to the topic, but the handler is not triggered after pushing a message to the EmbeddedKafkaBroker.
@SpringBootTest(properties = {"...."}, classes = {....class})
@EmbeddedKafka
class IntTests {

    @BeforeAll
    static void setup() {
        embeddedKafka = new EmbeddedKafkaBroker(1, true, TOPIC);
        embeddedKafka.kafkaPorts(57412);
        embeddedKafka.afterPropertiesSet();
    }

    @Test
    void testit() throws InterruptedException {
        String ip = embeddedKafka.getBrokersAsString();
        Map<String, Object> configs = new HashMap<>(KafkaTestUtils.producerProps(embeddedKafka));
        Producer<String, String> producer = new DefaultKafkaProducerFactory<>(configs, new StringSerializer(), new StringSerializer()).createProducer();
        // Act
        producer.send(new ProducerRecord<>(TOPIC, "key", "{\"name\":\"Test\"}"));
        producer.flush();
        ....
    }

    ...
}
And the main class:
@Configuration
public class Kafka {

    @Bean
    public KafkaMessageDrivenChannelAdapter<String, String> adapter(KafkaMessageListenerContainer<String, String> container) {
        KafkaMessageDrivenChannelAdapter<String, String> kafkaMessageDrivenChannelAdapter = ..
        kafkaMessageDrivenChannelAdapter.setOutputChannelName("kafkaChannel");
        return kafkaMessageDrivenChannelAdapter;
    }

    @Bean
    public KafkaMessageListenerContainer<String, String> container() {
        ContainerProperties properties = new ContainerProperties(TOPIC);
        KafkaMessageListenerContainer<String, String> kafkaContainer = ...;
        return kafkaContainer;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:57412");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "group12");
        ...
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public PublishSubscribeChannel kafkaChannel() {
        return new PublishSubscribeChannel();
    }

    @Bean
    @ServiceActivator(inputChannel = "kafkaChannel")
    public MessageHandler handler() {
        return new MessageHandler() {

            @Override
            public void handleMessage(Message<?> message) throws MessagingException {
            }

        };
    }

    ...
}
In the log I do see:
clients.consumer.KafkaConsumer : [Consumer clientId=consumer-group12-1, groupId=group12] Subscribed to topic(s): TOPIC
ThreadPoolTaskScheduler : Initializing ExecutorService
KafkaMessageDrivenChannelAdapter : started bean 'adapter'; defined in: 'com.example.demo.demo.Kafka';
With embeddedKafka = new EmbeddedKafkaBroker(1, true, TOPIC); and @EmbeddedKafka, you essentially start two separate Kafka clusters. See the ports option of @EmbeddedKafka if you want to change the random port of the embedded broker. But at the same time it is better to rely on what Spring Boot provides for us with its auto-configuration.
See the documentation for more info: https://docs.spring.io/spring-boot/docs/current/reference/html/spring-boot-features.html#boot-features-embedded-kafka. Pay attention to the bootstrapServersProperty = "spring.kafka.bootstrap-servers" property.
UPDATE
In your test you have this @SpringBootTest(classes = {Kafka.class}). When I remove that classes attribute, everything starts to work. The problem is that your config class is not auto-configuration aware, therefore Spring Integration is not initialized properly and the message is not consumed from the channel. There might be some other effect as well. But still: it is better to rely on the auto-configuration, so let your test see the @SpringBootApplication annotation.
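A minimal sketch of the reworked test header under that advice (assuming a @SpringBootApplication class is on the classpath; the topic name is illustrative):
@SpringBootTest // no classes attribute: the test will find the @SpringBootApplication class
@EmbeddedKafka(topics = IntTests.TOPIC, bootstrapServersProperty = "spring.kafka.bootstrap-servers")
class IntTests {

    static final String TOPIC = "myTopic"; // assumed topic name

    @Autowired
    EmbeddedKafkaBroker embeddedKafka; // injected by @EmbeddedKafka; no manual broker in @BeforeAll

    ....
}
The consumerFactory() should then take its bootstrap servers from the spring.kafka.bootstrap-servers property instead of the hard-coded 127.0.0.1:57412.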
I have an app hosted on multiple hosts listening to a single remote SFTP location. How should I make sure the same file is not picked up by one host when it has already been picked up by another? I am pretty new to Spring Integration. I would appreciate it if someone could share examples.
EDIT:
Here is my integration flow: it gets a file from SFTP, places it in a local directory, performs business logic in a transformer, and sends the resulting file to a remote SFTP directory.
@Bean
public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory() {
    LOGGER.debug(" Creating SFTP Session Factory -Start");
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
    factory.setHost(sftpHost);
    factory.setUser(sftpUser);
    factory.setPort(port);
    factory.setPassword(sftpPassword);
    factory.setAllowUnknownKeys(true);
    return new CachingSessionFactory<>(factory);
}
@Bean
public SftpInboundFileSynchronizer sftpInboundFileSynchronizer() {
    SftpInboundFileSynchronizer fileSynchronizer = new SftpInboundFileSynchronizer(sftpSessionFactory());
    fileSynchronizer.setDeleteRemoteFiles(true);
    fileSynchronizer.setRemoteDirectory(sftpInboundDirectory);
    fileSynchronizer.setFilter(new SftpPersistentAcceptOnceFileListFilter(store(), "*.json"));
    return fileSynchronizer;
}
@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerMetadata defaultPoller() {
    PollerMetadata pollerMetadata = new PollerMetadata();
    pollerMetadata.setTrigger(new PeriodicTrigger(5000));
    return pollerMetadata;
}
@Bean
@InboundChannelAdapter(channel = "fileInputChannel", poller = @Poller(fixedDelay = "5000"))
public MessageSource<File> sftpMessageSource() {
    SftpInboundFileSynchronizingMessageSource source =
            new SftpInboundFileSynchronizingMessageSource(sftpInboundFileSynchronizer());
    source.setLocalDirectory(localDirectory);
    source.setAutoCreateLocalDirectory(true);
    source.setLocalFilter(new AcceptOnceFileListFilter<File>());
    source.setMaxFetchSize(1);
    return source;
}
@Bean
IntegrationFlow integrationFlow() {
    return IntegrationFlows.from(this.sftpMessageSource())
            .channel(fileInputChannel())
            .transform(this::messageTransformer)
            .channel(fileOutputChannel())
            .handle(orderOutMessageHandler())
            .get();
}
@Bean
@ServiceActivator(inputChannel = "fileOutputChannel")
public SftpMessageHandler orderOutMessageHandler() {
    SftpMessageHandler handler = new SftpMessageHandler(sftpSessionFactory());
    LOGGER.debug(" Creating SFTP MessageHandler - Start ");
    handler.setRemoteDirectoryExpression(new LiteralExpression(sftpOutboundDirectory));
    handler.setFileNameGenerator(new FileNameGenerator() {

        @Override
        public String generateFileName(Message<?> message) {
            if (message.getPayload() instanceof File) {
                return ((File) message.getPayload()).getName();
            } else {
                throw new IllegalArgumentException("Expected Input is File.");
            }
        }

    });
    LOGGER.debug(" Creating SFTP MessageHandler - End ");
    return handler;
}
@Bean
@org.springframework.integration.annotation.Transformer(inputChannel = "fileInputChannel", outputChannel = "fileOutputChannel")
public Transformer messageTransformer() {
    return message -> {
        File file = orderTransformer.transformInboundMessage(message);
        return MessageBuilder.withPayload(file).copyHeaders(message.getHeaders()).build();
    };
}
@Bean
public ConcurrentMetadataStore store() {
    return new SimpleMetadataStore(hazelcastInstance().getMap("idempotentReceiverMetadataStore"));
}

@Bean
public HazelcastInstance hazelcastInstance() {
    return Hazelcast.newHazelcastInstance(new Config().setProperty("hazelcast.logging.type", "slf4j"));
}
See SftpPersistentAcceptOnceFileListFilter, to be injected into the SFTP inbound channel adapter. It has to be supplied with a MetadataStore backed by a shared database.
See more info in the docs:
https://docs.spring.io/spring-integration/docs/current/reference/html/system-management.html#metadata-store
https://docs.spring.io/spring-integration/docs/current/reference/html/sftp.html#sftp-inbound
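For example, a minimal sketch using a JDBC-backed store, assuming a DataSource that every host can reach (the "sftp-" key prefix is an illustrative choice):
@Bean
public ConcurrentMetadataStore metadataStore(DataSource dataSource) {
    // JdbcMetadataStore keeps the processed-file markers in a shared table
    // (INT_METADATA_STORE by default), so a file claimed by one host is skipped by the others
    return new JdbcMetadataStore(new JdbcTemplate(dataSource));
}

@Bean
public SftpPersistentAcceptOnceFileListFilter sftpFileFilter(ConcurrentMetadataStore metadataStore) {
    return new SftpPersistentAcceptOnceFileListFilter(metadataStore, "sftp-");
}
The key point is that all hosts must point at the same physical store; the prefix simply namespaces the entries in the shared table.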
I have implemented the following scenario:
A QueueChannel holding messages in the form of byte[]
A MessageHandler, polling the queue channel and uploading files over SFTP
A Transformer, listening to the errorChannel and sending the payload extracted from the failed message back to the QueueChannel (thought of as an error handler, so that nothing gets lost)
If the SFTP server is online, everything works as expected.
If the SFTP server is down, the error message that arrives at the transformer is:
org.springframework.messaging.MessagingException: Failed to obtain pooled item; nested exception is java.lang.IllegalStateException: failed to create SFTP Session
The transformer cannot do anything with this, since the payload's failedMessage is null, and it throws an exception itself. The transformer loses the message.
How can I configure my flow so that the transformer gets the right message with the corresponding payload of the unsuccessfully uploaded file?
My Configuration:
@Bean
public MessageChannel toSftpChannel() {
    final QueueChannel channel = new QueueChannel();
    channel.setLoggingEnabled(true);
    return channel;
}
@Bean
public MessageChannel toSplitter() {
    return new PublishSubscribeChannel();
}
@Bean
@ServiceActivator(inputChannel = "toSftpChannel", poller = @Poller(fixedDelay = "10000", maxMessagesPerPoll = "1"))
public MessageHandler handler() {
    final SftpMessageHandler handler = new SftpMessageHandler(sftpSessionFactory());
    handler.setRemoteDirectoryExpression(new LiteralExpression(sftpRemoteDirectory));
    handler.setFileNameGenerator(message -> {
        if (message.getPayload() instanceof byte[]) {
            return (String) message.getHeaders().get("name");
        } else {
            throw new IllegalArgumentException("byte[] expected in Payload!");
        }
    });
    return handler;
}
@Bean
public SessionFactory<LsEntry> sftpSessionFactory() {
    final DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
    final Properties jschProps = new Properties();
    jschProps.put("StrictHostKeyChecking", "no");
    jschProps.put("PreferredAuthentications", "publickey,password");
    factory.setSessionConfig(jschProps);
    factory.setHost(sftpHost);
    factory.setPort(sftpPort);
    factory.setUser(sftpUser);
    if (sftpPrivateKey != null) {
        factory.setPrivateKey(sftpPrivateKey);
        factory.setPrivateKeyPassphrase(sftpPrivateKeyPassphrase);
    } else {
        factory.setPassword(sftpPassword);
    }
    factory.setAllowUnknownKeys(true);
    return new CachingSessionFactory<>(factory);
}
@Bean
@Splitter(inputChannel = "toSplitter")
public DmsDocumentMessageSplitter splitter() {
    final DmsDocumentMessageSplitter splitter = new DmsDocumentMessageSplitter();
    splitter.setOutputChannelName("toSftpChannel");
    return splitter;
}
@Transformer(inputChannel = "errorChannel", outputChannel = "toSftpChannel")
public Message<?> errorChannelHandler(ErrorMessage errorMessage) throws RuntimeException {
    Message<?> failedMessage = ((MessagingException) errorMessage.getPayload())
            .getFailedMessage();
    return MessageBuilder.withPayload(failedMessage)
            .copyHeadersIfAbsent(failedMessage.getHeaders())
            .build();
}
@MessagingGateway
public interface UploadGateway {

    @Gateway(requestChannel = "toSplitter")
    void upload(@Payload List<byte[]> payload, @Header("header") DmsDocumentUploadRequestHeader header);

}
Thanks..
Update
@Bean(PollerMetadata.DEFAULT_POLLER)
@Transactional(propagation = Propagation.REQUIRED, isolation = Isolation.READ_COMMITTED)
PollerMetadata poller() {
    return Pollers
            .fixedRate(5000)
            .maxMessagesPerPoll(1)
            .receiveTimeout(500)
            .taskExecutor(taskExecutor())
            .transactionSynchronizationFactory(transactionSynchronizationFactory())
            .get();
}
@Bean
@ServiceActivator(inputChannel = "toMessageStore", poller = @Poller(PollerMetadata.DEFAULT_POLLER))
public BridgeHandler bridge() {
    BridgeHandler bridgeHandler = new BridgeHandler();
    bridgeHandler.setOutputChannelName("toSftpChannel");
    return bridgeHandler;
}
The null failedMessage is a bug; see INT-4421.
I would not recommend using a QueueChannel for this scenario. If you use a direct channel, you can configure a retry advice to attempt redeliveries. When the retries are exhausted (if so configured), the exception will be thrown back to the calling thread.
Add the advice to the SftpMessageHandler's adviceChain property.
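A minimal sketch of such an advice, assuming three attempts is an acceptable policy (the bean name retryAdvice is an illustrative choice):
@Bean
public RequestHandlerRetryAdvice retryAdvice() {
    RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3)); // three delivery attempts in total
    advice.setRetryTemplate(retryTemplate);
    return advice;
}
It can then be referenced from the consuming endpoint, for example @ServiceActivator(inputChannel = "toSftpChannel", adviceChain = "retryAdvice").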
EDIT
You can work around the "missing" failedMessage by inserting a bridge between the pollable channel and the sftp adapter:
@Bean
@ServiceActivator(inputChannel = "toSftpChannel", poller = @Poller(fixedDelay = "5000", maxMessagesPerPoll = "1"))
public BridgeHandler bridge() {
    BridgeHandler bridgeHandler = new BridgeHandler();
    bridgeHandler.setOutputChannelName("toRealSftpChannel");
    return bridgeHandler;
}
@Bean
@ServiceActivator(inputChannel = "toRealSftpChannel")
public MessageHandler handler() {
    final SftpMessageHandler handler = new SftpMessageHandler(sftpSessionFactory());
    handler.setRemoteDirectoryExpression(new LiteralExpression("foo"));
    handler.setFileNameGenerator(message -> {
        if (message.getPayload() instanceof byte[]) {
            return (String) message.getHeaders().get("name");
        }
        else {
            throw new IllegalArgumentException("byte[] expected in Payload!");
        }
    });
    return handler;
}
I am using the SFTP source in Spring Cloud Data Flow and it works for getting files defined in sftp:remote-dir:/home/someone/source. Now I have many subfolders under the remote-dir and I want to recursively get all the files under this directory which match the pattern. I am trying to use filename-regex:, but so far it only works on one level. How do I recursively get the files I need?
The inbound channel adapter does not support recursion; use a custom source built from the outbound gateway with an MGET command and recursion (-R).
The doc was missing that option; it has been fixed in the current docs.
I opened an issue to create a standard app starter.
EDIT
With the Java DSL...
@SpringBootApplication
@EnableBinding(Source.class)
public class So44710754Application {

    public static void main(String[] args) {
        SpringApplication.run(So44710754Application.class, args);
    }

    // should store in Redis or similar for persistence
    private final ConcurrentMap<String, Boolean> processed = new ConcurrentHashMap<>();

    @Bean
    public IntegrationFlow flow() {
        return IntegrationFlows.from(source(), e -> e.poller(Pollers.fixedDelay(30_000)))
                .handle(gateway())
                .split()
                .<File>filter(p -> this.processed.putIfAbsent(p.getAbsolutePath(), true) == null)
                .transform(Transformers.fileToByteArray())
                .channel(Source.OUTPUT)
                .get();
    }

    private MessageSource<String> source() {
        return () -> new GenericMessage<>("foo/*");
    }

    private AbstractRemoteFileOutboundGateway<LsEntry> gateway() {
        AbstractRemoteFileOutboundGateway<LsEntry> gateway = Sftp.outboundGateway(sessionFactory(), "mget", "payload")
                .localDirectory(new File("/tmp/foo"))
                .options(Option.RECURSIVE)
                .get();
        gateway.setFileExistsMode(FileExistsMode.IGNORE);
        return gateway;
    }

    private SessionFactory<LsEntry> sessionFactory() {
        DefaultSftpSessionFactory sf = new DefaultSftpSessionFactory();
        sf.setHost("10.0.0.3");
        sf.setUser("ftptest");
        sf.setPassword("ftptest");
        sf.setAllowUnknownKeys(true);
        return new CachingSessionFactory<>(sf);
    }

}
And with Java config...
@SpringBootApplication
@EnableBinding(Source.class)
public class So44710754Application {

    public static void main(String[] args) {
        SpringApplication.run(So44710754Application.class, args);
    }

    @InboundChannelAdapter(channel = "sftpGate", poller = @Poller(fixedDelay = "30000"))
    public String remoteDir() {
        return "foo/*";
    }

    @Bean
    @ServiceActivator(inputChannel = "sftpGate")
    public SftpOutboundGateway mgetGate() {
        SftpOutboundGateway sftpOutboundGateway = new SftpOutboundGateway(sessionFactory(), "mget", "payload");
        sftpOutboundGateway.setOutputChannelName("splitterChannel");
        sftpOutboundGateway.setFileExistsMode(FileExistsMode.IGNORE);
        sftpOutboundGateway.setLocalDirectory(new File("/tmp/foo"));
        sftpOutboundGateway.setOptions("-R");
        return sftpOutboundGateway;
    }

    @Bean
    @Splitter(inputChannel = "splitterChannel")
    public DefaultMessageSplitter splitter() {
        DefaultMessageSplitter splitter = new DefaultMessageSplitter();
        splitter.setOutputChannelName("filterChannel");
        return splitter;
    }

    // should store in Redis, Zookeeper, or similar for persistence
    private final ConcurrentMap<String, Boolean> processed = new ConcurrentHashMap<>();

    @Filter(inputChannel = "filterChannel", outputChannel = "toBytesChannel")
    public boolean filter(File payload) {
        return this.processed.putIfAbsent(payload.getAbsolutePath(), true) == null;
    }

    @Bean
    @Transformer(inputChannel = "toBytesChannel", outputChannel = Source.OUTPUT)
    public FileToByteArrayTransformer toBytes() {
        FileToByteArrayTransformer transformer = new FileToByteArrayTransformer();
        return transformer;
    }

    private SessionFactory<LsEntry> sessionFactory() {
        DefaultSftpSessionFactory sf = new DefaultSftpSessionFactory();
        sf.setHost("10.0.0.3");
        sf.setUser("ftptest");
        sf.setPassword("ftptest");
        sf.setAllowUnknownKeys(true);
        return new CachingSessionFactory<>(sf);
    }

}
I am creating an SFTP inbound flow in the Spring Java DSL, but when I try to test it in JUnit the message comes back null. I think the SFTP inbound adapter is not starting properly, but I am not sure, so I am not able to go further. Can anyone please provide a pointer, as I am not able to proceed?
This is my flow...
@Bean
public IntegrationFlow sftpInboundFlow() {
    return IntegrationFlows
            .from(Sftp.inboundAdapter(this.sftpSessionFactory)
                            .preserveTimestamp(true).remoteDirectory(remDir)
                            .regexFilter(".*\\.txt$")
                            .localFilenameExpression("#this.toUpperCase()")
                            .localDirectory(new File(localDir))
                            .remoteFileSeparator("/"),
                    new Consumer<SourcePollingChannelAdapterSpec>() {

                        @Override
                        public void accept(SourcePollingChannelAdapterSpec e) {
                            e.id("sftpInboundAdapter")
                                    .autoStartup(true)
                                    .poller(Pollers.fixedRate(1000)
                                            .maxMessagesPerPoll(1));
                        }

                    })
            .channel(MessageChannels.queue("sftpInboundResultChannel"))
            .get();
}
This is my session factory...
@Autowired
private DefaultSftpSessionFactory sftpSessionFactory;

@Bean
public DefaultSftpSessionFactory sftpSessionFactory() {
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
    factory.setHost("111.11.12.143");
    factory.setPort(22);
    factory.setUser("sftp");
    factory.setPassword("*******");
    return factory;
}
This is my pollable channel...
@Autowired
private PollableChannel sftpInboundResultChannel;

@Bean
public PollableChannel sftpChannel() {
    return new QueueChannel();
}
And this is my test method...
@Test
public void testSftpInboundFlow() {
    Message<?> message = ((PollableChannel) sftpInboundResultChannel).receive(1000); // Not receiving any message
    System.out.println("message====" + message);
    Object payload = message.getPayload();
    File file = (File) payload;
    message = this.sftpInboundResultChannel.receive(1000);
    file = (File) message.getPayload();
}