Numerous Kafka producer network threads generated during data publishing, NullPointerException in Spring Kafka (multithreading)

I am writing a Kafka producer using Spring Kafka 2.3.9 that is supposed to publish around 200,000 messages to a topic. For example, I have a list of 200,000 objects that I fetched from a database, and I want to publish JSON messages of those objects to a topic.
The producer I have written works fine for publishing, let's say, 1,000 messages. After that it starts producing a NullPointerException (the error output is included below).
During debugging, I found that the number of Kafka producer network threads is very high. I could not count them, but they are definitely more than 500.
I have read the thread "Kafka Producer Thread, huge amound of threads even when no message is send" and did a similar configuration by setting the producerPerConsumerPartition property to false on DefaultKafkaProducerFactory. But it still does not decrease the Kafka producer network thread count.
Below are my code snippets and the error. I can't post all of the code segments since they are from a real project.
Code segments
public DefaultKafkaProducerFactory<String, String> getProducerFactory() throws IOException, IllegalStateException {
    Map<String, Object> configProps = getProducerConfigMap();
    DefaultKafkaProducerFactory<String, String> defaultKafkaProducerFactory = new DefaultKafkaProducerFactory<>(configProps);
    //defaultKafkaProducerFactory.transactionCapable();
    defaultKafkaProducerFactory.setProducerPerConsumerPartition(false);
    defaultKafkaProducerFactory.setProducerPerThread(false);
    return defaultKafkaProducerFactory;
}
public Map<String, Object> getProducerConfigMap() throws IOException, IllegalStateException {
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getBootstrapAddress());
    configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.RETRIES_CONFIG, kafkaProperties.getKafkaRetryConfig());
    configProps.put(ProducerConfig.ACKS_CONFIG, kafkaProperties.getKafkaAcknowledgementConfig());
    configProps.put(ProducerConfig.CLIENT_ID_CONFIG, kafkaProperties.getKafkaClientId());
    configProps.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 512 * 1024 * 1024);
    configProps.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 10 * 1000);
    configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    //updateSSLConfig(configProps);
    return configProps;
}
@Bean
public KafkaTemplate<String, String> kafkaTemplate() throws IOException {
    ProducerFactory<String, String> producerFactory = getProducerFactory();
    KafkaTemplate<String, String> kt = new KafkaTemplate<>(producerFactory, true);
    kt.setCloseTimeout(java.time.Duration.ofSeconds(5));
    return kt;
}
Error
2020-12-07 18:14:19.249 INFO 26651 --- [onPool-worker-1] o.a.k.clients.producer.KafkaProducer : [Producer clientId=kafka-client-09f48ec8-7a69-4b76-a4f4-a418e96ff68e-1] Closing the Kafka producer with timeoutMillis = 0 ms.
2020-12-07 18:14:19.254 ERROR 26651 --- [onPool-worker-1] c.w.p.r.g.xxxxxxxx.xxx.KafkaPublisher : Exception happened publishing to topic. Failed to construct kafka producer
2020-12-07 18:14:19.273 INFO 26651 --- [ main] ConditionEvaluationReportLoggingListener :
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2020-12-07 18:14:19.281 ERROR 26651 --- [ main] o.s.boot.SpringApplication : Application run failed
java.lang.IllegalStateException: Failed to execute CommandLineRunner
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:787) [spring-boot-2.2.8.RELEASE.jar:2.2.8.RELEASE]
at org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:768) [spring-boot-2.2.8.RELEASE.jar:2.2.8.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:322) [spring-boot-2.2.8.RELEASE.jar:2.2.8.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1226) [spring-boot-2.2.8.RELEASE.jar:2.2.8.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1215) [spring-boot-2.2.8.RELEASE.jar:2.2.8.RELEASE]
at xxx.xxx.xxx.Application.main(Application.java:46) [classes/:na]
Caused by: java.util.concurrent.CompletionException: java.lang.NullPointerException
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273) ~[na:1.8.0_144]
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280) ~[na:1.8.0_144]
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1592) ~[na:1.8.0_144]
at java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1582) ~[na:1.8.0_144]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) ~[na:1.8.0_144]
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) ~[na:1.8.0_144]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) ~[na:1.8.0_144]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157) ~[na:1.8.0_144]
Caused by: java.lang.NullPointerException: null
at com.xxx.xxx.xxx.xxx.KafkaPublisher.publishData(KafkaPublisher.java:124) ~[classes/:na]
at com.xxx.xxx.xxx.xxx.lambda$0(Publisher.java:39) ~[classes/:na]
at java.util.ArrayList.forEach(ArrayList.java:1249) ~[na:1.8.0_144]
at com.xxx.xxx.xxx.xxx.publishData(Publisher.java:38) ~[classes/:na]
at com.xxx.xxx.xxx.xxx.Application.lambda$0(Application.java:75) [classes/:na]
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590) ~[na:1.8.0_144]
... 5 common frames omitted
Following is the code that publishes the message; line 124 is where we actually call the KafkaTemplate:
public void publishData(Object object) {
    ListenableFuture<SendResult<String, String>> future = null;
    // Convert the Object to JSON
    String json = convertObjectToJson(object);
    // Generate a unique key for the message
    String key = UUID.randomUUID().toString();
    // Post the JSON to Kafka
    try {
        future = kafkaConfig.kafkaTemplate().send(kafkaProperties.getTopicName(), key, json);
    } catch (Exception e) {
        logger.error("Exception happened publishing to topic. {}", e.getMessage());
    }
    future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
        @Override
        public void onSuccess(SendResult<String, String> result) {
            logger.info("Sent message with key=[" + key + "]");
        }

        @Override
        public void onFailure(Throwable ex) {
            logger.error("Unable to send message=[{}] due to {}", json, ex.getMessage());
        }
    });
    kafkaConfig.kafkaTemplate().flush();
}
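A side note, independent of the root cause (see the answer below): if send() throws, the catch block only logs and future stays null, so the following future.addCallback(...) call is itself a guaranteed NullPointerException. A safer shape (sketch only) keeps the callback registration inside the try:
try {
    ListenableFuture<SendResult<String, String>> future =
            kafkaConfig.kafkaTemplate().send(kafkaProperties.getTopicName(), key, json);
    future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
        @Override
        public void onSuccess(SendResult<String, String> result) {
            logger.info("Sent message with key=[{}]", key);
        }

        @Override
        public void onFailure(Throwable ex) {
            logger.error("Unable to send message=[{}] due to {}", json, ex.getMessage());
        }
    });
} catch (Exception e) {
    // future is never dereferenced when send() fails
    logger.error("Exception happened publishing to topic. {}", e.getMessage());
}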
============================
I am not sure if this error is caused by those many network threads.
After posting the data, I called the KafkaTemplate flush() method. It did not work.
I also called the ProducerFactory closeThreadBoundProducer(), reset(), and destroy() methods. None of them seems to work.
Am I missing any configuration?

The null pointer issue was not actually related to Spring Kafka. We were reading the topic name from a different location, connected over a network. That network connection was failing in a few cases, which left the topic name null and ultimately caused the error above.
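For anyone hitting the same symptom: since the topic name comes from an external source, a cheap guard before the send call turns the cryptic NPE into an actionable error. A sketch (the helper name is made up, not from the original code):
private String requireTopicName() {
    String topicName = kafkaProperties.getTopicName();
    if (topicName == null || topicName.trim().isEmpty()) {
        // fail fast with a clear message instead of an NPE deep in the send path
        throw new IllegalStateException("Kafka topic name could not be resolved from the remote configuration");
    }
    return topicName;
}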

Related

Implementing a clustered coordinated timer (runs on one node only in a Payara Micro cluster) using IScheduledExecutorService

I am trying to achieve the following behavior for clustered coordinated events:
the timer (event) is executed in only one thread/JVM in the Payara Micro cluster;
if a node goes down, the timer (event) is executed on another node in the cluster.
From the Payara Micro guide:
Persistent timers are NOT coordinated across a Payara Micro cluster.
They are always executed on an instance with the same name that
created the timers.
and
If that instance goes down, the timer will be recreated on another
instance with the same name once it joins the cluster. Until that
time, the timer becomes inactive.
So persistent timers will not work as desired in a Payara Micro cluster, by definition.
As such I am trying to use IScheduledExecutorService from Hazelcast, which seems to be a perfect match.
Basically the implementation with IScheduledExecutorService works well, except for the scenario when a new Payara Micro node is starting and joining the cluster (a cluster where some events have already been scheduled using IScheduledExecutorService). During this time the following exceptions occur:
Exception 1: java.lang.RuntimeException: ConcurrentRuntime not initialized
[2021-02-15T23:00:31.870+0800] [] [INFO] [] [fish.payara.nucleus.cluster.PayaraCluster] [tid: _ThreadID=63 _ThreadName=hz.angry_yalow.event-5] [timeMillis: 1613401231870] [levelValue: 800] [[
Data Grid Status
Payara Data Grid State: DG Version: 4 DG Name: testClusterDev DG Size: 2
Instances: {
DataGrid: testClusterDev Name: testNode0 Lite: false This: true UUID: 493b19ed-a58d-4508-b9ef-f5c58e05b859 Address: /10.41.0.7:6900
DataGrid: testClusterDev Lite: false This: false UUID: f12342bf-a37e-452a-8c67-1d36dd4dbac7 Address: /10.41.0.7:6901
}]]
[2021-02-15T23:00:32.290+0800] [] [WARNING] [] [com.hazelcast.internal.partition.operation.MigrationRequestOperation] [tid: _ThreadID=160 _ThreadName=ForkJoinPool.commonPool-worker-6] [timeMillis: 1613401232290] [levelValue: 900] [[
[10.41.0.7]:6900 [testClusterDev] [4.1] Failure while executing MigrationInfo{uuid=fc68e9ac-1081-4f9b-a70a-6fb0aae19016, partitionId=27, source=[10.41.0.7]:6900 - 493b19ed-a58d-4508-b9ef-f5c58e05b859, sourceCurrentReplicaIndex=0, sourceNewReplicaIndex=1, destination=[10.41.0.7]:6901 - f12342bf-a37e-452a-8c67-1d36dd4dbac7, destinationCurrentReplicaIndex=-1, destinationNewReplicaIndex=0, master=[10.41.0.7]:6900, initialPartitionVersion=1, partitionVersionIncrement=2, status=ACTIVE}
com.hazelcast.nio.serialization.HazelcastSerializationException: java.lang.RuntimeException: ConcurrentRuntime not initialized
at com.hazelcast.internal.serialization.impl.SerializationUtil.handleException(SerializationUtil.java:103)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.readObject(AbstractSerializationService.java:292)
at com.hazelcast.internal.serialization.impl.ByteArrayObjectDataInput.readObject(ByteArrayObjectDataInput.java:567)
at com.hazelcast.scheduledexecutor.impl.ScheduledRunnableAdapter.readData(ScheduledRunnableAdapter.java:106)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.readInternal(DataSerializableSerializer.java:160)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:106)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:51)
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:44)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.readObject(AbstractSerializationService.java:286)
at com.hazelcast.internal.serialization.impl.ByteArrayObjectDataInput.readObject(ByteArrayObjectDataInput.java:567)
at com.hazelcast.scheduledexecutor.impl.TaskDefinition.readData(TaskDefinition.java:144)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.readInternal(DataSerializableSerializer.java:160)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:106)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:51)
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:44)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.readObject(AbstractSerializationService.java:286)
at com.hazelcast.internal.serialization.impl.ByteArrayObjectDataInput.readObject(ByteArrayObjectDataInput.java:567)
at com.hazelcast.scheduledexecutor.impl.ScheduledTaskDescriptor.readData(ScheduledTaskDescriptor.java:208)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.readInternal(DataSerializableSerializer.java:160)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:106)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:51)
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:44)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.readObject(AbstractSerializationService.java:286)
at com.hazelcast.internal.serialization.impl.ByteArrayObjectDataInput.readObject(ByteArrayObjectDataInput.java:567)
at com.hazelcast.scheduledexecutor.impl.operations.ReplicationOperation.readInternal(ReplicationOperation.java:87)
at com.hazelcast.spi.impl.operationservice.Operation.readData(Operation.java:750)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.readInternal(DataSerializableSerializer.java:160)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:106)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:51)
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:44)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.readObject(AbstractSerializationService.java:286)
at com.hazelcast.internal.serialization.impl.ByteArrayObjectDataInput.readObject(ByteArrayObjectDataInput.java:567)
at com.hazelcast.internal.partition.ReplicaFragmentMigrationState.readData(ReplicaFragmentMigrationState.java:97)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.readInternal(DataSerializableSerializer.java:160)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:106)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:51)
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:44)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.readObject(AbstractSerializationService.java:286)
at com.hazelcast.internal.serialization.impl.ByteArrayObjectDataInput.readObject(ByteArrayObjectDataInput.java:567)
at com.hazelcast.internal.partition.operation.MigrationOperation.readInternal(MigrationOperation.java:249)
at com.hazelcast.spi.impl.operationservice.Operation.readData(Operation.java:750)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.readInternal(DataSerializableSerializer.java:160)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:106)
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.read(DataSerializableSerializer.java:51)
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:44)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toObject(AbstractSerializationService.java:205)
at com.hazelcast.spi.impl.NodeEngineImpl.toObject(NodeEngineImpl.java:346)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:437)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:166)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:136)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102)
Caused by: java.lang.RuntimeException: ConcurrentRuntime not initialized
at org.glassfish.concurrent.runtime.ConcurrentRuntime.getRuntime(ConcurrentRuntime.java:121)
at org.glassfish.concurrent.runtime.InvocationContext.readObject(InvocationContext.java:214)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1184)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2296)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2187)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2405)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2329)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2187)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2405)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2329)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2187)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:503)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:461)
at com.hazelcast.internal.serialization.impl.defaultserializers.JavaDefaultSerializers$JavaSerializer.read(JavaDefaultSerializers.java:83)
at com.hazelcast.internal.serialization.impl.defaultserializers.JavaDefaultSerializers$JavaSerializer.read(JavaDefaultSerializers.java:76)
at fish.payara.nucleus.hazelcast.PayaraHazelcastSerializer.read(PayaraHazelcastSerializer.java:84)
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:44)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.readObject(AbstractSerializationService.java:286)
... 50 more
]]
[2021-02-15T23:00:32.304+0800] [] [WARNING] [] [com.hazelcast.internal.partition.impl.MigrationManager] [tid: _ThreadID=160 _ThreadName=ForkJoinPool.commonPool-worker-6] [timeMillis: 1613401232304] [levelValue: 900] [10.41.0.7]:6900 [testClusterDev] [4.1] Migration failed: MigrationInfo{uuid=fc68e9ac-1081-4f9b-a70a-6fb0aae19016, partitionId=27, source=[10.41.0.7]:6900 - 493b19ed-a58d-4508-b9ef-f5c58e05b859, sourceCurrentReplicaIndex=0, sourceNewReplicaIndex=1, destination=[10.41.0.7]:6901 - f12342bf-a37e-452a-8c67-1d36dd4dbac7, destinationCurrentReplicaIndex=-1, destinationNewReplicaIndex=0, master=[10.41.0.7]:6900, initialPartitionVersion=1, partitionVersionIncrement=2, status=ACTIVE}
This seems to happen because the new node is not fully initialized (as it is just starting). This exception looks less critical compared with the next one.
Exception 2: java.lang.NullPointerException: Failed to execute java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
[2021-02-15T23:44:19.544+0800] [] [SEVERE] [] [com.hazelcast.spi.impl.executionservice.ExecutionService] [tid: _ThreadID=35 _ThreadName=hz.elated_murdock.scheduled.thread-] [timeMillis: 1613403859544] [levelValue: 1000] [[
[10.4.0.7]:6901 [testClusterDev] [4.1] Failed to execute java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask#55a27ce3
java.lang.NullPointerException
at org.glassfish.concurrent.runtime.ContextSetupProviderImpl.isApplicationEnabled(ContextSetupProviderImpl.java:326)
at org.glassfish.concurrent.runtime.ContextSetupProviderImpl.setup(ContextSetupProviderImpl.java:194)
at org.glassfish.enterprise.concurrent.internal.ContextProxyInvocationHandler.invoke(ContextProxyInvocationHandler.java:94)
at com.sun.proxy.$Proxy154.run(Unknown Source)
at com.hazelcast.scheduledexecutor.impl.ScheduledRunnableAdapter.call(ScheduledRunnableAdapter.java:56)
at com.hazelcast.scheduledexecutor.impl.TaskRunner.call(TaskRunner.java:78)
at com.hazelcast.scheduledexecutor.impl.TaskRunner.run(TaskRunner.java:104)
at com.hazelcast.spi.impl.executionservice.impl.DelegateAndSkipOnConcurrentExecutionDecorator$DelegateDecorator.run(DelegateAndSkipOnConcurrentExecutionDecorator.java:77)
at com.hazelcast.internal.util.executor.CachedExecutorServiceDelegate$Worker.run(CachedExecutorServiceDelegate.java:217)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102)
]]
This exception happens on the new node that is joining the cluster. It does not happen every time; probably Hazelcast tries to execute the event on the new node while it is still starting, and the execution fails because the environment is not fully initialized yet. The problem is that after two such failed attempts the event gets unloaded by Hazelcast.
Implementation insights:
Method which schedules the event using IScheduledExecutorService (it resides in an application-scoped bean in the main app WAR):
@Resource
ContextService _ctxService;

public void scheduleClusteredEvent() {
    // _instance is the injected HazelcastInstance (see the @Resource lookup further below)
    IScheduledExecutorService executorService = _instance.getScheduledExecutorService("default");
    ClusteredEvent ce = new ClusteredEvent(new DiagEvent(null, "TestEvent1"));
    Object ceProxy = _ctxService.createContextualProxy(ce, Runnable.class, Serializable.class);
    executorService.scheduleAtFixedRate((Runnable) ceProxy, 0, 3, TimeUnit.SECONDS);
}
The ClusteredEvent class resides in a separate JAR added to the classpath via the --addLibs param of Payara Micro. It needs to somehow inform the main app about the event to be triggered, thus BeanManager.fireEvent() is used.
public class ClusteredEvent implements Runnable, Serializable {
    private final DiagEvent _event;

    public ClusteredEvent(DiagEvent event) {
        _event = event;
    }

    @Override
    public void run() {
        // For the sake of brevity all null checks etc. were removed;
        // ic is an InitialContext obtained beforehand
        ((BeanManager) ic.lookup("java:comp/BeanManager")).fireEvent(_event);
    }
}
So my questions:
How do I solve the exceptions/issues mentioned above?
Am I heading in the right direction for achieving coordinated clustered event behavior in a Payara Micro cluster? I would expect this to be a trivial task working out of the box, but instead it requires some custom implementation, as persistent timers do not work as desired. Is there any other, more elegant way of achieving coordinated clustered event behavior with Payara Micro Cluster (>=v5.2021.1)?
Thank you so much in advance!
Update 1:
Just to recall, the main purpose of this exercise is to have coordinated timer (event) functionality available in the Payara Micro cluster, so suggestions on more elegant solutions are highly welcome.
Addressing questions/suggestions from the comments:
Q1:
why do you need to create a contextual proxy for the event object?
A1: Indeed, making the contextual proxy out of the plain ClusteredEvent() object adds the main complexity here and causes the exceptions listed above (meaning: scheduling ClusteredEvent() without making a contextual proxy out of it works fine and doesn't cause exceptions, but there is a caveat).
The reason a contextual proxy is used is that I need to somehow trigger the main app running on Payara Micro from the unmanaged thread launched by IScheduledExecutorService. So far I haven't found any other workable way of triggering a CDI/EJB bean in the main app from an unmanaged thread. Only making the event contextual allows ClusteredEvent.run() to communicate with the main app, via the BeanManager for example.
Any suggestions on how to establish communication between an unmanaged thread and CDI/EJB beans running in a separate app (both running on the same Payara Micro instance) are welcome.
Q2:
You can for example wrap the ceProxy in a Runnable that executes ceProxy.run() in a try/catch block
A2: I have tried it, and indeed it helps to handle "Exception 2" mentioned above. I am posting the implementation of the ClusteredEventWrapper class below; the try/catch inside the run() method handles "Exception 2".
Q3:
The first exception comes from Hazelcast trying to deserialize the
proxy on the new instance, which fails because the proxy needs an
initialized environment to deserialize. To solve this, you would need
to wrap the ceProxy object and customize the deserialization of the
wrapper to wait until the ContextService is initialized.
A3: Adding a custom implementation for serialization/deserialization of ClusteredEventWrapper indeed allows handling "Exception 1", but here I am still struggling with the best way of doing it. Postponing deserialization via Thread.sleep() causes new (different) exceptions. Suppressing the exceptions is something I still need to verify, but in that case I am afraid ClusteredEventWrapper will not be properly deserialized on the new (starting) node, as Hazelcast will consider the sync successful and will not try to sync it again (I may be wrong; this I still need to check). Currently it seems Hazelcast retries the sync several times until "Exception 1" is gone.
Implementation of the ClusteredEventWrapper which wraps ClusteredEvent:
public class ClusteredEventWrapper implements Runnable, Serializable {
    private static final long serialVersionUID = 5878537035999797427L;
    private static final Logger LOG = Logger.getLogger(ClusteredEventWrapper.class.getName());
    private final Runnable _clusteredEvent;

    public ClusteredEventWrapper(Runnable clusteredEvent) {
        _clusteredEvent = clusteredEvent;
    }

    @Override
    public void run() {
        try {
            _clusteredEvent.run();
        } catch (Throwable e) {
            if (e instanceof NullPointerException
                    && e.getStackTrace() != null && e.getStackTrace().length > 0
                    && "org.glassfish.concurrent.runtime.ContextSetupProviderImpl".equals(e.getStackTrace()[0].getClassName())
                    && "isApplicationEnabled".equals(e.getStackTrace()[0].getMethodName())) {
                // Means we got "Exception 2" (posted above)
                LOG.log(Level.WARNING, "Skipping scheduled event execution on this node as this node is still being initialized...");
            } else {
                LOG.log(Level.SEVERE, "Error executing scheduled event", e);
            }
        }
    }

    private void writeObject(ObjectOutputStream out) throws IOException {
        LOG.log(Level.INFO, "1_WRITE_OBJECT...");
        out.defaultWriteObject();
    }

    private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
        LOG.log(Level.INFO, "2_READ_OBJECT...");
        int retry = 0;
        // This doesn't work well; need to think of some other way of handling it
        while (readObjectInner(in) != true && retry < 5) {
            retry++;
            LOG.log(Level.INFO, "2_READ_OBJECT: retry {0}", retry);
            try {
                // We need to wait
                Thread.sleep(15000);
            } catch (InterruptedException ex) {
            }
        }
    }

    private boolean readObjectInner(ObjectInputStream in) throws IOException, ClassNotFoundException {
        try {
            in.defaultReadObject();
            return true;
        } catch (Throwable e) {
            if (e instanceof RuntimeException && "ConcurrentRuntime not initialized".equals(e.getMessage())) {
                // This means the node trying to deserialize this object is not ready yet
                return false;
            } else {
                // For all other exceptions we rethrow
                throw e;
            }
        }
    }
}
So now the event is scheduled in the following way:
@Resource
ContextService _ctxService;

public void scheduleClusteredEvent() {
    IScheduledExecutorService executorService = _instance.getScheduledExecutorService("default");
    ClusteredEvent ce = new ClusteredEvent(new DiagEvent(null, "PersistentEvent1"));
    Object ceProxy = _ctxService.createContextualProxy(ce, Runnable.class, Serializable.class);
    executorService.scheduleAtFixedRate(new ClusteredEventWrapper((Runnable) ceProxy), 0, 3, TimeUnit.SECONDS);
}
Below I am posting the implemented solution, based on the suggestions from @OndroMih in the comments:
Excerpt 1:
...a better approach to this is to avoid wrapping your object into a
contextual proxy and instead register the BeanManager into a global variable
(singleton) at application startup. In ClusteredEvent.run() you would
retrieve it from a static method, e.g. Registry.getBeanManager(). This
method would have to wait until the application starts up and saves
its BeanManager instance with Registry.setBeanManager()
And this one:
Excerpt 2:
Or maybe even better if you store a reference to the
ManagedExecutorService instead of the BeanManager, execute the run
method with that executor and just inject anything you need.
@OndroMih, please post these as a reply, and I will mark it as the accepted answer!
Before going into details of the implementation, a few words on our application packaging. It consists of:
the main WAR file, which is bundled into Payara Micro as an Uber JAR, so we do not redeploy the application WAR; we start and stop the whole Payara Micro with the WAR deployed on it;
and a tiny JAR lib with a few classes which are used mainly in Hazelcast, provided via the --addLibs arg to the Payara Micro Uber JAR to avoid ClassNotFoundExceptions when Hazelcast syncs objects in the data grid.
And now about the implementation which has given us the desired behavior for the clustered timer/events (see the first post):
I) Using the ManagedExecutorService as per the suggestion above indeed looks much more flexible, as it allows injecting any desired object into the clustered event, so I started with this approach. But for some reason I was not able to inject anything. Due to limited time I left this for future investigation and switched to the next approach. I also provide sample code for this case at the end of this post.
II) So I switched to the scenario with the BeanManager.
I implemented the Registry singleton as follows (all comments removed for the sake of brevity). This class resides in the tiny JAR added via the --addLibs arg to Payara Micro:
public final class Registry {
    private ManagedExecutorService _executorService;
    private BeanManager _beanManager;

    private Registry() {
    }

    public ManagedExecutorService getExecutorService() {
        return _executorService;
    }

    public void setExecutorService(ManagedExecutorService executorService) {
        _executorService = executorService;
    }

    public BeanManager getBeanManager() {
        return _beanManager;
    }

    public void setBeanManager(BeanManager beanManager) {
        _beanManager = beanManager;
    }

    public static Registry getInstance() {
        return InstanceHolder._instance;
    }

    private static class InstanceHolder {
        private static final Registry _instance = new Registry();
    }
}
In the main app WAR we already had an AppListener class which listens for the event fired when the app is deployed, so we added the Registry population logic to it:
public class AppListener implements SystemEventListener {
    ...
    @Resource
    private ManagedExecutorService _managedExecutorService;

    @Resource
    private BeanManager _beanManager;

    @Override
    public void processEvent(SystemEvent event) throws AbortProcessingException {
        try {
            if (event instanceof PostConstructApplicationEvent) {
                LOG.log(Level.INFO, ">> Application started");
                ...
                // Once the app is marked as started - populate the global objects in the Registry
                Registry.getInstance().setExecutorService(_managedExecutorService);
                Registry.getInstance().setBeanManager(_beanManager);
            }
            ...
        } catch (Exception e) {
            LOG.log(Level.SEVERE, ">> Error processing event: " + event, e);
        }
    }
}
The ClusteredEvent class, which is scheduled via IScheduledExecutorService.scheduleAtFixedRate(), also resides in the tiny JAR and has the following implementation:
public final class ClusteredEvent implements NamedTask, Runnable, Serializable {
    ...
    private final MultiTenantEvent _event;

    public ClusteredEvent(MultiTenantEvent event) {
        if (event == null) {
            throw new NullPointerException("Event can not be null");
        }
        _event = event;
    }

    @Override
    public void run() {
        try {
            if (Registry.getInstance().getBeanManager() == null) {
                LOG.log(Level.WARNING, "Skipping timer execution - application not initialized yet...");
                return;
            }
            Registry.getInstance().getBeanManager().fireEvent(_event);
        } catch (Throwable e) {
            LOG.log(Level.SEVERE, "Error executing timer: " + _event, e);
        }
    }

    @Override
    public final String getName() {
        return _event.getName();
    }
}
And basically that is all. Scheduling is done using the following simple steps:
@Resource(lookup = "payara/Hazelcast")
private HazelcastInstance _instance;

_instance.getScheduledExecutorService("default").scheduleAtFixedRate(new ClusteredEvent(event), initialDelaySec, invocationPeriodSec, TimeUnit.SECONDS);
All tests have gone well so far. I was worried that Registry.getBeanManager() would get 'spoiled' after some time due to some closed context somewhere (I am not sure about the nature of the BeanManager reference), but tests have shown that the BeanManager reference stays valid after one day, so hopefully it will work fine.
Another consideration (not really a concern, but a caveat to keep in mind) is that there is no way to control on which node an event is fired by IScheduledExecutorService; as such, when an event is triggered on a node which is not yet initialized (still starting), the event gets skipped. But for our usage scenario this is acceptable, so currently we can live with these considerations.
And getting back to the issue with the usage of the ManagedExecutorService: ClusteredEvent was implemented as provided below:
public class ClusteredEvent implements Runnable, Serializable {
    private final MultiTenantEvent _event;

    public ClusteredEvent(MultiTenantEvent event) {
        _event = event;
    }

    @Override
    public void run() {
        try {
            LOG.log(Level.INFO, "TIMER THREAD NAME: {0}", Thread.currentThread().getName());
            if (Registry.getInstance().getExecutorService() == null) {
                LOG.log(Level.WARNING, "Skipping timer execution - application not initialized yet...");
                return;
            }
            Registry.getInstance().getExecutorService().submit(new Callable<Boolean>() {
                @Override
                public Boolean call() throws Exception {
                    LOG.log(Level.INFO, "Timer.Run() THREAD NAME: {0}", Thread.currentThread().getName());
                    String beanManagerJndiName = "java:comp/BeanManager";
                    try {
                        Context ic = new InitialContext();
                        BeanManager beanManager = (BeanManager) ic.lookup(beanManagerJndiName);
                        beanManager.fireEvent(_event);
                        return true;
                    } catch (NullPointerException | NamingException ex) {
                        LOG.log(Level.SEVERE, "ERROR: no BeanManager resource could be located by JNDI name: " + beanManagerJndiName, ex);
                        return false;
                    }
                }
            }).get();
        } catch (Throwable e) {
            LOG.log(Level.SEVERE, "Error executing timer: " + _event, e);
        }
    }
}
Output was the following:
[2021-02-24 07:56:07] [INFO] [ua.appName.model.event.ClusteredEvent run]
TIMER THREAD NAME: hz.competent_mccarthy.cached.thread-11
[2021-02-24 07:56:07] [INFO] [ua.appName.model.event.ClusteredEvent$1 call]
Timer.Run() THREAD NAME: concurrent/__defaultManagedExecutorService-managedThreadFactory-Thread-1
[2021-02-24 07:56:07] [SEVERE] [ua.appName.model.event.ClusteredEvent$1 call]
ERROR: no BeanManager resource could be located by JNDI name: java:comp/BeanManager
javax.naming.NamingException: Lookup failed for 'java:comp/BeanManager' in SerialContext[myEnv={java.naming.factory.initial=com.sun.enterprise.naming.impl.SerialInitContextFactory, java.naming.factory.url.pkgs=com.sun.enterprise.naming, java.naming.factory.state=com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl} [Root exception is javax.naming.NamingException: Invocation exception: Got null ComponentInvocation ]
at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:496)
at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:442)
at javax.naming.InitialContext.lookup(InitialContext.java:417)
at javax.naming.InitialContext.lookup(InitialContext.java:417)
at ua.appName.model.event.ClusteredEvent$1.call(ClusteredEvent.java:70)
at ua.appName.model.event.ClusteredEvent$1.call(ClusteredEvent.java:63)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.glassfish.enterprise.concurrent.internal.ManagedFutureTask.run(ManagedFutureTask.java:143)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250)
Caused by: javax.naming.NamingException: Invocation exception: Got null ComponentInvocation
at com.sun.enterprise.naming.impl.GlassfishNamingManagerImpl.getComponentId(GlassfishNamingManagerImpl.java:870)
at com.sun.enterprise.naming.impl.GlassfishNamingManagerImpl.lookup(GlassfishNamingManagerImpl.java:737)
at com.sun.enterprise.naming.impl.JavaURLContext.lookup(JavaURLContext.java:167)
at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:476)
... 11 more
So the line Timer.Run() THREAD NAME: concurrent/__defaultManagedExecutorService-managedThreadFactory-Thread-1 confirms that the code already runs inside a managed thread, but I was still not able to inject or look up anything. I have left this investigation for the future for now.
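One untested idea I still want to try: the portable CDI.current() entry point does not depend on the java:comp naming context that fails here, so it might resolve the BeanManager where the JNDI lookup cannot (sketch only):
// Untested alternative to the JNDI lookup above: the portable CDI entry point
// (javax.enterprise.inject.spi.CDI) does not require a java:comp component context.
private boolean fireViaCdi() {
    try {
        javax.enterprise.inject.spi.BeanManager beanManager =
                javax.enterprise.inject.spi.CDI.current().getBeanManager();
        beanManager.fireEvent(_event);
        return true;
    } catch (IllegalStateException ex) {
        // CDI.current() throws IllegalStateException when no CDI container is available
        LOG.log(Level.SEVERE, "ERROR: could not obtain a BeanManager via CDI.current()", ex);
        return false;
    }
}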
Once again, many thanks to @OndroMih for your suggestions on the implementation!
Thank you!

Spring Integration Java DSL using JMS retry/redelivery

How can I effectively support JMS redelivery when message handling throws an exception?
I have a flow using JMS (ActiveMQ) with a connectionFactory that is configured to allow n redelivery attempts.
I would like any error that occurs while handling the message to cause the message to be put back for redelivery as many times as the connectionFactory config allows, and then, when the max redelivery attempts are exhausted, to be delivered to the DLQ, per usual with ActiveMQ.
An answer to a related SO question implies that I could have an errorChannel that rethrows, which should trigger redelivery: Spring Integration DSL ErrorHandling
But with the following, that isn't happening:
/***
 * Dispatch msgs from JMS queue to a handler using a rate-limit
 * @param connectionFactory
 * @return
 */
@Bean
public IntegrationFlow flow2(@Qualifier("spring-int-connection-factory") ConnectionFactory connectionFactory) {
    IntegrationFlow flow = IntegrationFlows.from(
            Jms.inboundAdapter(connectionFactory)
                    .configureJmsTemplate(t -> t.receiveTimeout(1000))
                    .destination(INPUT_DIRECT_QUEUE),
            e -> e.poller(Pollers
                    .fixedDelay(5000)
                    .errorChannel("customErrorChannel")
                    //.errorHandler(this.msgHandler)
                    .maxMessagesPerPoll(2))
    ).handle(this.msgHandler).get();
    return flow;
}

@Bean
public MessageChannel customErrorChannel() {
    return MessageChannels.direct("customErrorChannel").get();
}

@Bean
public IntegrationFlow customErrorFlow() {
    return IntegrationFlows.from(customErrorChannel())
            .handle("simpleMessageHandler", "handleError")
            .get();
}
The errorChannel method implementation:
public void handleError(Throwable t) throws Throwable {
    log.warn("got error from customErrorChannel");
    throw t;
}
When an exception is thrown from the handler in flow2, the errorChannel does get the exception, but the rethrow then causes a MessageHandlingException:
2018-08-13 09:00:34.221 WARN 98425 --- [ask-scheduler-5] c.v.m.i.jms.SimpleMessageHandler : got error from customErrorChannel
2018-08-13 09:00:34.224 WARN 98425 --- [ask-scheduler-5] o.s.i.c.MessagePublishingErrorHandler : Error message was not delivered.
org.springframework.messaging.MessageHandlingException: nested exception is org.springframework.messaging.MessageHandlingException: error occurred in message handler [simpleMessageHandler]; nested exception is java.lang.IllegalArgumentException: dont want first try, failedMessage=GenericMessage [payload=Enter some text here for the message body..., headers={jms_redelivered=false, jms_destination=queue://_dev.directQueue, jms_correlationId=, jms_type=, id=c2dbffc8-8ab0-486f-f2e5-e8d613d62b6a, priority=0, jms_timestamp=1534176031021, jms_messageId=ID:che2-39670-1533047293479-4:9:1:1:8, timestamp=1534176034205}]
at org.springframework.integration.handler.MethodInvokingMessageProcessor.processMessage(MethodInvokingMessageProcessor.java:107) ~[spring-integration-core-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at org.springframework.integration.handler.BeanNameMessageProcessor.processMessage(BeanNameMessageProcessor.java:61) ~[spring-integration-core-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at org.springframework.integration.handler.ServiceActivatingHandler.handleRequestMessage(ServiceActivatingHandler.java:93) ~[spring-integration-core-5.0.7.RELEASE.jar:5.0.7.RELEASE]
It would work with a message-driven channel adapter, but I presume that's not what you want because of this question.
Since the polled adapter uses a JmsTemplate.receive() operation, the message has already been ack'd by the time the flow is called.
You need to use a transactional poller with a JmsTransactionManager so that the exception thrown by the error flow rolls back the transaction and the message is redelivered.
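For reference, a minimal sketch of what that might look like, assuming a JmsTransactionManager bean wired to the same connection factory (untested against your exact setup); the .transactional() hook on the poller spec is what ties the receive and the downstream flow into one JMS transaction, so a rethrown exception triggers a rollback and redelivery:
@Bean
public JmsTransactionManager jmsTransactionManager(
        @Qualifier("spring-int-connection-factory") ConnectionFactory connectionFactory) {
    return new JmsTransactionManager(connectionFactory);
}

@Bean
public IntegrationFlow flow2(@Qualifier("spring-int-connection-factory") ConnectionFactory connectionFactory,
        JmsTransactionManager jmsTransactionManager) {
    return IntegrationFlows.from(
            Jms.inboundAdapter(connectionFactory)
                    .configureJmsTemplate(t -> t.receiveTimeout(1000))
                    .destination(INPUT_DIRECT_QUEUE),
            e -> e.poller(Pollers
                    .fixedDelay(5000)
                    .transactional(jmsTransactionManager) // receive() and handling now share a JMS transaction
                    .errorChannel("customErrorChannel")
                    .maxMessagesPerPoll(2)))
            .handle(this.msgHandler)
            .get();
}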

Trying to integrate Spring Batch with Spring Integration, getting an error

The class below is used to poll for a file in a directory and trigger a Spring Batch job once a file is received in the directory. I am getting an error which I am not able to figure out; please advise.
Also, if there is a sample code example that does the same, please refer me to its location.
@Configuration
class FilePollingIntegrationFlow {

    @Autowired
    private ApplicationContext applicationContext;

    // this is the integration flow that first polls for messages and then triggers the spring batch job
    @Bean
    public IntegrationFlow inboundFileIntegration(@Value("${inbound.file.poller.fixed.delay}") long period,
            @Value("${inbound.file.poller.max.messages.per.poll}") int maxMessagesPerPoll,
            TaskExecutor taskExecutor,
            MessageSource<File> fileReadingMessageSource,
            JobLaunchingGateway jobLaunchingGateway) {
        return IntegrationFlows.from(fileReadingMessageSource,
                c -> c.poller(Pollers.fixedDelay(period)
                        .taskExecutor(taskExecutor)
                        .maxMessagesPerPoll(maxMessagesPerPoll)))
                .transform(Transformers.fileToString())
                .channel(ApplicationConfiguration.INBOUND_CHANNEL)
                .handle((p, h) -> {
                    System.out.println("Testing:::::" + p);
                    return p;
                })
                .handle(fileMessageToJobRequest())
                .handle(jobLaunchingGateway(), "toRequest")
                .channel(MessageChannels.queue())
                .get();
    }

    @Bean
    public FileMessageToJobRequest fileMessageToJobRequest() {
        FileMessageToJobRequest fileMessageToJobRequest = new FileMessageToJobRequest();
        fileMessageToJobRequest.setFileParameterName("input.file.name");
        // fileMessageToJobRequest.setJob(personJob());
        System.out.println("FilePollingIntegrationFlow::fileMessageToJobRequest::::Job launched successfully!!!");
        return fileMessageToJobRequest;
    }

    @Bean
    public JobLaunchingGateway jobLaunchingGateway() {
        SimpleJobLauncher simpleJobLauncher = new SimpleJobLauncher();
        // simpleJobLauncher.setJobRepository(jobRepository);
        simpleJobLauncher.setTaskExecutor(new SyncTaskExecutor());
        JobLaunchingGateway jobLaunchingGateway = new JobLaunchingGateway(simpleJobLauncher);
        System.out.println("FilePollingIntegrationFlow::jobLaunchingGateway::::Job launched successfully!!!");
        return jobLaunchingGateway;
    }
}

//This is another class used for the batch job trigger
public class FileMessageToJobRequest {
    private Job job;
    private String fileParameterName;

    public void setFileParameterName(String fileParameterName) {
        this.fileParameterName = fileParameterName;
    }

    public void setJob(Job job) {
        this.job = job;
    }

    @Transformer
    public JobLaunchRequest toRequest(Message<File> message) {
        JobParametersBuilder jobParametersBuilder = new JobParametersBuilder();
        jobParametersBuilder.addString(fileParameterName,
                message.getPayload().getAbsolutePath());
        return new JobLaunchRequest(job, jobParametersBuilder.toJobParameters());
    }
}
I am getting the below error:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'inboundFileIntegration' defined in class path resource [com/porterhead/integration/file/FilePollingIntegrationFlow.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.integration.dsl.IntegrationFlow]: Factory method 'inboundFileIntegration' threw exception; nested exception is java.lang.IllegalArgumentException: Target object of type [class org.springframework.batch.integration.launch.JobLaunchingGateway] has no eligible methods for handling Messages.
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:599)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1123)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1018)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:510)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:482)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:772)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:839)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:538)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:766)
at org.springframework.boot.SpringApplication.createAndRefreshContext(SpringApplication.java:361)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:307)
at org.springframework.boot.builder.SpringApplicationBuilder.run(SpringApplicationBuilder.java:134)
at com.porterhead.Application.main(Application.java:23)
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.integration.dsl.IntegrationFlow]: Factory method 'inboundFileIntegration' threw exception; nested exception is java.lang.IllegalArgumentException: Target object of type [class org.springframework.batch.integration.launch.JobLaunchingGateway] has no eligible methods for handling Messages.
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:189)
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:588)
... 16 common frames omitted
Caused by: java.lang.IllegalArgumentException: Target object of type [class org.springframework.batch.integration.launch.JobLaunchingGateway] has no eligible methods for handling Messages.
at org.springframework.integration.util.MessagingMethodInvokerHelper.findHandlerMethodsForTarget(MessagingMethodInvokerHelper.java:494)
at org.springframework.integration.util.MessagingMethodInvokerHelper.<init>(MessagingMethodInvokerHelper.java:226)
at org.springframework.integration.util.MessagingMethodInvokerHelper.<init>(MessagingMethodInvokerHelper.java:135)
at org.springframework.integration.util.MessagingMethodInvokerHelper.<init>(MessagingMethodInvokerHelper.java:139)
at org.springframework.integration.handler.MethodInvokingMessageProcessor.<init>(MethodInvokingMessageProcessor.java:52)
at org.springframework.integration.handler.ServiceActivatingHandler.<init>(ServiceActivatingHandler.java:45)
at org.springframework.integration.dsl.IntegrationFlowDefinition.handle(IntegrationFlowDefinition.java:982)
at org.springframework.integration.dsl.IntegrationFlowDefinition.handle(IntegrationFlowDefinition.java:964)
at com.porterhead.integration.file.FilePollingIntegrationFlow.inboundFileIntegration(FilePollingIntegrationFlow.java:85)
at com.porterhead.integration.file.FilePollingIntegrationFlow$$EnhancerBySpringCGLIB$$c1cfa1e9.CGLIB$inboundFileIntegration$1(<generated>)
at com.porterhead.integration.file.FilePollingIntegrationFlow$$EnhancerBySpringCGLIB$$c1cfa1e9$$FastClassBySpringCGLIB$$4ce6110e.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228)
at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:356)
at com.porterhead.integration.file.FilePollingIntegrationFlow$$EnhancerBySpringCGLIB$$c1cfa1e9.inboundFileIntegration(<generated>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:162)
... 17 common frames omitted
Please advise.
Since your FileMessageToJobRequest.toRequest() is marked with @Transformer, you should consider using .transform() instead.
Also, I see that you use that toRequest method name for the JobLaunchingGateway, which is definitely wrong. So, the proper way to go is like this:
.transform(fileMessageToJobRequest())
.handle(jobLaunchingGateway())
The sample you are looking for is in the Spring Batch Reference Manual.
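For completeness, the flow bean from the question with those two changes applied might look like this (untested sketch; note that the .transform(Transformers.fileToString()) step is dropped, since toRequest() expects a Message<File> payload rather than a String):
@Bean
public IntegrationFlow inboundFileIntegration(@Value("${inbound.file.poller.fixed.delay}") long period,
        @Value("${inbound.file.poller.max.messages.per.poll}") int maxMessagesPerPoll,
        TaskExecutor taskExecutor,
        MessageSource<File> fileReadingMessageSource) {
    return IntegrationFlows.from(fileReadingMessageSource,
            c -> c.poller(Pollers.fixedDelay(period)
                    .taskExecutor(taskExecutor)
                    .maxMessagesPerPoll(maxMessagesPerPoll)))
            // toRequest() is picked up automatically as the @Transformer method
            .transform(fileMessageToJobRequest())
            // JobLaunchingGateway is itself a MessageHandler, so no method name is needed
            .handle(jobLaunchingGateway())
            .channel(MessageChannels.queue())
            .get();
}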

Kafka-Streams throwing NullPointerException when consuming

I have this problem: when I'm consuming from a topic using the Processor API and call context().forward(K, V) inside the processor, Kafka Streams throws a NullPointerException.
This is the stack trace:
Exception in thread "StreamThread-1" java.lang.NullPointerException
at org.apache.kafka.streams.processor.internals.StreamTask.forward(StreamTask.java:336)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:187)
at org.apache.kafka.streams.processor.ProcessorContext$forward.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:133)
at com.bnsf.ltf.processor.ConversionProcessor.process(ConversionProcessor.groovy:23)
at com.bnsf.ltf.processor.ConversionProcessor.process(ConversionProcessor.groovy)
at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:68)
at org.apache.kafka.streams.processor.internals.StreamTask.forward(StreamTask.java:338)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:187)
at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:64)
at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:174)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:320)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:218)
My Gradle dependencies look like this:
compile('org.codehaus.groovy:groovy-all')
compile('org.apache.kafka:kafka-streams:0.10.0.0')
Update: I tried with version 0.10.0.1 and it still throws the same error.
This is the code of the topology I'm building:
topologyBuilder.addSource('inboundTopic', stringDeserializer, stringDeserializer, conversionConfiguration.inTopic)
        .addProcessor('conversionProcess', new ProcessorSupplier() {
            @Override
            Processor get() {
                return conversionProcessor
            }
        }, 'inboundTopic')
        .addSink('outputTopic', conversionConfiguration.outTopic, stringSerializer, stringSerializer, 'conversionProcess')

stream = new KafkaStreams(topologyBuilder, streamConfig)
stream.start()
My processor looks like this:
@Override
void process(String key, String message) {
    // Call to a service; the return value of the service is stored in the
    // local variable named converted
    context().forward(key, converted)
    context().commit()
}
Provide your Processor directly, so that the supplier returns a fresh instance on every get() call instead of handing out one shared processor:
.addProcessor('conversionProcess', () -> new MyProcessor(), 'inboundTopic')
MyProcessor should, in turn, extend AbstractProcessor.
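For illustration, a minimal Java sketch of such a processor against the 0.10.x Processor API (the class name and the conversion step are placeholders, not from the original code):
import org.apache.kafka.streams.processor.AbstractProcessor;

public class MyProcessor extends AbstractProcessor<String, String> {

    @Override
    public void process(String key, String message) {
        // placeholder for the real conversion service call
        String converted = message.toUpperCase();
        // context() is initialized per task by AbstractProcessor.init(),
        // which is why each task needs its own processor instance
        context().forward(key, converted);
        context().commit();
    }
}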

Spring Integration DSL KafkaProducerContext configuration

I am trying to adapt the following example:
https://github.com/joshlong/spring-and-kafka
with latest stable versions of the following libraries:
org.apache.kafka > kafka_2.10 > 0.8.2.2
org.springframework.integration > spring-integration-kafka > 1.2.1.RELEASE
org.springframework.integration > spring-integration-java-dsl > 1.1.0.RELEASE
The integration DSL library seems to have gone through a refactoring, probably driven by the introduction of the new KafkaProducer.
Here is the code of my producer configuration:
@Bean(name = OUTBOUND_ID)
IntegrationFlow producer() {
    log.info("starting producer flow..");
    return flowDefinition -> {
        ProducerMetadata<String, String> getProducerMetadata = new ProducerMetadata<>(this.kafkaConfig.getTopic(),
                String.class, String.class, new StringSerializer(), new StringSerializer());
        KafkaProducerMessageHandler kafkaProducerMessageHandler = Kafka.outboundChannelAdapter(props ->
                props.put("timeout.ms", "35000"))
                .messageKey(m -> m.getHeaders().get(IntegrationMessageHeaderAccessor.SEQUENCE_NUMBER))
                .addProducer(getProducerMetadata, this.kafkaConfig.getBrokerAddress())
                .get();
        flowDefinition
                .handle(kafkaProducerMessageHandler);
    };
}
And the code for message generation:
@Bean
@DependsOn(OUTBOUND_ID)
CommandLineRunner kickOff(@Qualifier(OUTBOUND_ID + ".input") MessageChannel in) {
    return args -> {
        for (int i = 0; i < 1000; i++) {
            in.send(MessageBuilder.withPayload("#" + i).setHeader(KafkaHeaders.TOPIC, this.kafkaConfig.getTopic()).build());
            log.info("sending message #" + i);
        }
    };
}
That's the exception I get:
Caused by: org.springframework.messaging.MessageHandlingException: error occurred in message handler [org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler#0]; nested exception is java.lang.NullPointerException
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:84)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116)
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:101)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:97)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:287)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:245)
at jc.DemoApplication$ProducerConfiguration.lambda$kickOff$0(DemoApplication.java:104)
at org.springframework.boot.SpringApplication.runCommandLineRunners(SpringApplication.java:673)
... 10 more
Caused by: java.lang.NullPointerException
at org.springframework.integration.kafka.support.KafkaProducerContext.getTopicConfiguration(KafkaProducerContext.java:67)
at org.springframework.integration.kafka.support.KafkaProducerContext.send(KafkaProducerContext.java:201)
at org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler.handleMessageInternal(KafkaProducerMessageHandler.java:88)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:78)
... 18 more
UPDATE:
The full working source can be found in my fork:
https://github.com/magiccrafter/spring-and-kafka
Sorry for the delay.
Your problem is around the early IntegrationComponentSpec instantiation:
KafkaProducerMessageHandler kafkaProducerMessageHandler = Kafka.outboundChannelAdapter(props ->
        props.put("timeout.ms", "35000"))
        .messageKey(m -> m.getHeaders().get(IntegrationMessageHeaderAccessor.SEQUENCE_NUMBER))
        .addProducer(getProducerMetadata, this.kafkaConfig.getBrokerAddress())
        .get();
You should not call .get() yourself.
The KafkaProducerMessageHandlerSpec is a ComponentsRegistration, and only the SI Java DSL can resolve it correctly. The code there looks like this:
public Collection<Object> getComponentsToRegister() {
    this.kafkaProducerContext.setProducerConfigurations(this.producerConfigurations);
    return Collections.<Object>singleton(this.kafkaProducerContext);
}
Since this code isn't invoked, this.producerConfigurations isn't populated into this.kafkaProducerContext, although the latter must be registered as a bean anyway.
So, to fix your problem you should deal only with the IntegrationComponentSpec in the DSL definition.
Just obtain the KafkaProducerMessageHandlerSpec and use it for the .handle() below. I'm not sure there is a reason to extract this object at all when we can use Kafka.outboundChannelAdapter() directly from the .handle().
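Applied to your producer configuration, that boils down to something like this (sketch; same beans and properties as above, just without the .get() and with the spec passed straight to .handle()):
@Bean(name = OUTBOUND_ID)
IntegrationFlow producer() {
    ProducerMetadata<String, String> producerMetadata = new ProducerMetadata<>(this.kafkaConfig.getTopic(),
            String.class, String.class, new StringSerializer(), new StringSerializer());
    return flowDefinition -> flowDefinition
            .handle(Kafka.outboundChannelAdapter(props -> props.put("timeout.ms", "35000"))
                    .messageKey(m -> m.getHeaders().get(IntegrationMessageHeaderAccessor.SEQUENCE_NUMBER))
                    .addProducer(producerMetadata, this.kafkaConfig.getBrokerAddress()));
}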
