Why does a Spring Batch multithreaded step break additional repository Hibernate queries in my ItemProcessor?

I'm working on transforming a gigantic DB2 schema of about 30-40 tables into a streamlined JSON format via Spring Batch. The process works just fine with one thread, but as soon as I increase the thread pool size to make the step multithreaded, my ItemProcessor breaks down with infuriatingly cryptic errors.
I just don't understand how my processor could possibly not be thread-safe: I'm not maintaining state anywhere; I'm just making a few extra repository calls to enrich the data, since I couldn't get the reader to pull in everything I need. And one such repository call is throwing an ArrayIndexOutOfBoundsException! I even added a @Transactional annotation to my processor and called it from a plain java.util.concurrent.ExecutorService - it works just fine there too.
I just can't seem to figure out why my Spring Batch multithreaded step breaks my ItemProcessor's simple repository queries. I even get intermittent lazy-loading exceptions! Isn't an ItemProcessor supposed to be wrapped in a transaction? And intermittently I see absolutely nonsensical ClassCastExceptions, where a one-to-many mapping returns the wrong entity type! Again, this all works perfectly for the same data set on one thread. Is my configuration wrong?
java.lang.ArrayIndexOutOfBoundsException: 1765
at org.hibernate.engine.internal.EntityEntryContext.reentrantSafeEntityEntries(EntityEntryContext.java:319) ~[hibernate-core-5.3.14.Final.jar!/:5.3.14.Final]
at org.hibernate.engine.internal.StatefulPersistenceContext.reentrantSafeEntityEntries(StatefulPersistenceContext.java:1156) ~[hibernate-core-5.3.14.Final.jar!/:5.3.14.Final]
at org.hibernate.event.internal.AbstractFlushingEventListener.prepareEntityFlushes(AbstractFlushingEventListener.java:145) ~[hibernate-core-5.3.14.Final.jar!/:5.3.14.Final]
at org.hibernate.event.internal.AbstractFlushingEventListener.flushEverythingToExecutions(AbstractFlushingEventListener.java:83) ~[hibernate-core-5.3.14.Final.jar!/:5.3.14.Final]
at org.hibernate.event.internal.DefaultAutoFlushEventListener.onAutoFlush(DefaultAutoFlushEventListener.java:46) ~[hibernate-core-5.3.14.Final.jar!/:5.3.14.Final]
at org.hibernate.internal.SessionImpl.autoFlushIfRequired(SessionImpl.java:1433) ~[hibernate-core-5.3.14.Final.jar!/:5.3.14.Final]
at org.hibernate.internal.SessionImpl.list(SessionImpl.java:1519) ~[hibernate-core-5.3.14.Final.jar!/:5.3.14.Final]
at org.hibernate.query.internal.AbstractProducedQuery.doList(AbstractProducedQuery.java:1538) ~[hibernate-core-5.3.14.Final.jar!/:5.3.14.Final]
at org.hibernate.query.internal.AbstractProducedQuery.list(AbstractProducedQuery.java:1506) ~[hibernate-core-5.3.14.Final.jar!/:5.3.14.Final]
at org.hibernate.query.Query.getResultList(Query.java:132) ~[hibernate-core-5.3.14.Final.jar!/:5.3.14.Final]
at org.springframework.data.jpa.repository.query.JpaQueryExecution$CollectionExecution.doExecute(JpaQueryExecution.java:129) ~[spring-data-jpa-2.1.14.RELEASE.jar!/:2.1.14.RELEASE]
at org.springframework.data.jpa.repository.query.JpaQueryExecution.execute(JpaQueryExecution.java:91) ~[spring-data-jpa-2.1.14.RELEASE.jar!/:2.1.14.RELEASE]
at org.springframework.data.jpa.repository.query.AbstractJpaQuery.doExecute(AbstractJpaQuery.java:136) ~[spring-data-jpa-2.1.14.RELEASE.jar!/:2.1.14.RELEASE]
at org.springframework.data.jpa.repository.query.AbstractJpaQuery.execute(AbstractJpaQuery.java:125) ~[spring-data-jpa-2.1.14.RELEASE.jar!/:2.1.14.RELEASE]
at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.doInvoke(RepositoryFactorySupport.java:605) ~[spring-data-commons-2.1.14.RELEASE.jar!/:2.1.14.RELEASE]
at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.lambda$invoke$3(RepositoryFactorySupport.java:595) ~[spring-data-commons-2.1.14.RELEASE.jar!/:2.1.14.RELEASE]
at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor$$Lambda$541/246846952.get(Unknown Source) ~[na:na]
at org.springframework.data.repository.util.QueryExecutionConverters$$Lambda$540/1619129136.apply(Unknown Source) ~[na:na]
at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.invoke(RepositoryFactorySupport.java:595) ~[spring-data-commons-2.1.14.RELEASE.jar!/:2.1.14.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) [spring-aop-5.1.12.RELEASE.jar!/:5.1.12.RELEASE]
at org.springframework.data.projection.DefaultMethodInvokingMethodInterceptor.invoke(DefaultMethodInvokingMethodInterceptor.java:59) ~[spring-data-commons-2.1.14.RELEASE.jar!/:2.1.14.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) [spring-aop-5.1.12.RELEASE.jar!/:5.1.12.RELEASE]
at org.springframework.transaction.interceptor.TransactionInterceptor$$Lambda$539/1493625851.proceedWithInvocation(Unknown Source) ~[na:na]
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:295) ~[spring-tx-5.1.12.RELEASE.jar!/:5.1.12.RELEASE]
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:98) ~[spring-tx-5.1.12.RELEASE.jar!/:5.1.12.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) [spring-aop-5.1.12.RELEASE.jar!/:5.1.12.RELEASE]
at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:139) ~[spring-tx-5.1.12.RELEASE.jar!/:5.1.12.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) [spring-aop-5.1.12.RELEASE.jar!/:5.1.12.RELEASE]
at org.springframework.data.jpa.repository.support.CrudMethodMetadataPostProcessor$CrudMethodMetadataPopulatingMethodInterceptor.invoke(CrudMethodMetadataPostProcessor.java:144) ~[spring-data-jpa-2.1.14.RELEASE.jar!/:2.1.14.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) [spring-aop-5.1.12.RELEASE.jar!/:5.1.12.RELEASE]
at org.springframework.data.jpa.repository.support.CrudMethodMetadataPostProcessor$ExposeRepositoryInvocationInterceptor.invoke(CrudMethodMetadataPostProcessor.java:364) ~[spring-data-jpa-2.1.14.RELEASE.jar!/:2.1.14.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) [spring-aop-5.1.12.RELEASE.jar!/:5.1.12.RELEASE]
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:93) ~[spring-aop-5.1.12.RELEASE.jar!/:5.1.12.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) [spring-aop-5.1.12.RELEASE.jar!/:5.1.12.RELEASE]
at org.springframework.data.repository.core.support.SurroundingTransactionDetectorMethodInterceptor.invoke(SurroundingTransactionDetectorMethodInterceptor.java:61) ~[spring-data-commons-2.1.14.RELEASE.jar!/:2.1.14.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) [spring-aop-5.1.12.RELEASE.jar!/:5.1.12.RELEASE]
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212) [spring-aop-5.1.12.RELEASE.jar!/:5.1.12.RELEASE]
...
Here's my properties file:
# I don't want all of my jobs to run; I want to specify which one via an environment variable
spring.batch.job.enabled=false
spring.main.allow-bean-definition-overriding=true
spring.main.web-application-type=none
#If I set this count to 1, everything works just fine.
process.thread.count=4
process.page.size=25
hibernate.jdbc.fetch_size=100
process.chunk.size=25
process.publish.limit=200
My configuration class:
@Configuration
public class FullProductSyncBatchJobConfig {

    @Bean
    @StepScope
    public ItemStreamReader<LegacyProduct> productReader(
            LegacyProductRepository legacyProductRepository,
            @Value("#{jobParameters[pageSize]}") Integer pageSize,
            @Value("#{jobParameters[limit]}") Integer limit) {
        RepositoryItemReader<LegacyProduct> legacyProductRepositoryReader = new RepositoryItemReader<>();
        legacyProductRepositoryReader.setRepository(legacyProductRepository);
        legacyProductRepositoryReader.setMethodName("findAllRelevantProducts");
        legacyProductRepositoryReader.setSort(new HashMap<String, Sort.Direction>() {{
            put("id.guid", Sort.Direction.ASC);
            put("id.modelNumber", Sort.Direction.ASC);
        }});
        legacyProductRepositoryReader.setPageSize(pageSize);
        if (limit > 0) legacyProductRepositoryReader.setMaxItemCount(limit);
        legacyProductRepositoryReader.setSaveState(false);
        return legacyProductRepositoryReader;
    }

    @Bean
    @StepScope
    public ItemProcessor<LegacyProduct, StreamlinedProduct> productDocumentBuilder(
            DimensionRepository dimensionRepository) {
        return new StreamlinedProductBuilder(dimensionRepository);
    }

    @Bean
    @StepScope
    public ItemWriter<StreamlinedProduct> productDocumentPublisher(
            GcpPubsubPublisherService publisherService) {
        return new StreamlinedProductPublisher(publisherService);
    }

    @Bean
    public Step fullProductSync(ItemStreamReader<LegacyProduct> productReader,
                                ItemProcessor<LegacyProduct, StreamlinedProduct> productDocumentBuilder,
                                ItemWriter<StreamlinedProduct> productDocumentPublisher,
                                StepBuilderFactory stepBuilderFactory,
                                TaskExecutor syncProcessThreadPool,
                                PlatformTransactionManager jpaTransactionManager,
                                @Value("${process.chunk.size:100}") Integer chunkSize,
                                @Value("${process.publish.timeout.retry.limit:2}") int timeoutRetryLimit,
                                @Value("${process.failure.limit:20}") int maximumProcessingFailures) {
        return stepBuilderFactory.get("fullProductSync")
                .transactionManager(jpaTransactionManager)
                .<LegacyProduct, StreamlinedProduct>chunk(chunkSize)
                .reader(productReader)
                .processor(productDocumentBuilder)
                .writer(productDocumentPublisher)
                .faultTolerant()
                .retryLimit(timeoutRetryLimit)
                .retry(TimeoutException.class)
                .skipPolicy(new SyncProcessSkipPolicy(maximumProcessingFailures))
                .listener(new SyncProcessSkipListener()) // <== just logs them right now
                .taskExecutor(syncProcessThreadPool)
                .build();
    }

    @Bean
    public Job fullProductSyncJob(Step fullProductSync,
                                  JobBuilderFactory jobBuilderFactory) {
        return jobBuilderFactory.get("fullProductSync")
                .start(fullProductSync)
                .build();
    }
}
And my processor class:
@Slf4j
public class StreamlinedProductBuilder implements ItemProcessor<LegacyProduct, StreamlinedProduct> {

    private final DimensionRepository dimensionRepository;

    public StreamlinedProductBuilder(DimensionRepository dimensionRepository) {
        this.dimensionRepository = dimensionRepository;
    }

    @Override
    public StreamlinedProduct process(LegacyProduct legacyProduct) {
        StreamlinedProduct streamlinedProduct = new StreamlinedProduct();
        streamlinedProduct.setPrimarySupplierNumber(parsePrimarySupplierNumber(legacyProduct.getSuppliers()));
        attachProductDimensions(legacyProduct, streamlinedProduct);
        return streamlinedProduct;
    }

    private int parsePrimarySupplierNumber(List<Supplier> suppliers) {
        /* This intermittently throws a ClassCastException when using multiple threads,
         * saying that a Description can't be cast to a Supplier... WHAT??! HOW???! How does
         * getSuppliers() ever return a list of a completely different one-to-many entity????
         */
        for (Supplier supplier : suppliers) {
            if (supplier.isPrimary()) return supplier.getId();
        }
        return -1;
    }

    private void attachProductDimensions(LegacyProduct legacyProduct,
                                         StreamlinedProduct streamlinedProduct) {
        // The following line occasionally throws the ArrayIndexOutOfBoundsException I mentioned above. WHY?
        // Works just fine in one thread...
        List<Dimension> dimensions = dimensionRepository.findByProductIdAndModel(
                legacyProduct.getId().getGuid(), legacyProduct.getId().getModelNumber());
        Map<String, Double> dimensionsAsMap = new HashMap<>();
        for (Dimension dimension : dimensions) {
            dimensionsAsMap.put(dimension.getName(), dimension.getValue());
        }
        streamlinedProduct.setDimensions(dimensionsAsMap);
    }
}
My repository:
@Repository
public interface DimensionRepository extends PagingAndSortingRepository<ProductDimension, DimensionCompositePK> {

    @Transactional(isolation = Isolation.READ_UNCOMMITTED) // <== fails with or without this
    @Query(value = "select d.name as name, d.value as value " +
            "from {h-schema}product p left join {h-schema}dimension d " +
            "on p.guid = d.product_guid and p.model_number = d.product_model_number " +
            "where p.guid = :guid and p.model_number = :model", nativeQuery = true)
    List<Dimension> findByProductIdAndModel(@Param("guid") String guid,
                                            @Param("model") Integer model);
}

I never figured out exactly what was going on, or why I was seeing exceptions suggesting Spring was trying to fill certain one-to-many relationships with the wrong entity class from a different one-to-many relationship.
But regardless, adopting the Spring Batch driving-query pattern seems to have fixed things for me. I was under the impression that the same transaction was shared across the reader, processor, and writer for a given chunk. That assumption appears to be incorrect: each chunk of each stage gets its own transaction.
So now my reader simply pulls in all the relevant IDs of my entity class, and the processor goes out and fetches the entity and its one-to-many relationships for each ID.
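To illustrate, here is a rough sketch of that driving-query arrangement using a JpaPagingItemReader over just the IDs (the JPQL, the LegacyProductId type, and the buildStreamlinedProduct helper are placeholders rather than my exact code):
@Bean
@StepScope
public JpaPagingItemReader<LegacyProductId> productIdReader(
        EntityManagerFactory entityManagerFactory,
        @Value("#{jobParameters[pageSize]}") Integer pageSize) {
    // The shared reader only hands out composite keys; it never holds
    // entities or their one-to-many collections.
    return new JpaPagingItemReaderBuilder<LegacyProductId>()
            .name("productIdReader")
            .entityManagerFactory(entityManagerFactory)
            .queryString("select p.id from LegacyProduct p")
            .pageSize(pageSize)
            .saveState(false)
            .build();
}

@Bean
@StepScope
public ItemProcessor<LegacyProductId, StreamlinedProduct> productDocumentBuilder(
        LegacyProductRepository legacyProductRepository) {
    return id -> {
        // Each worker thread loads and traverses its own entity graph, so
        // no Hibernate session state is shared across threads.
        LegacyProduct product = legacyProductRepository.findById(id)
                .orElseThrow(IllegalStateException::new);
        return buildStreamlinedProduct(product); // the mapping shown earlier
    };
}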

Related

Numerous Kafka producer network threads generated during data publishing, NullPointerException in Spring Kafka

I am writing a Kafka producer using Spring Kafka 2.3.9 that is supposed to publish around 200,000 messages to a topic. For example, I have a list of 200,000 objects that I fetched from a database, and I want to publish JSON messages for those objects to a topic.
The producer I have written works fine for publishing, let's say, 1,000 messages. Then it starts producing a null pointer error (I have included the error below).
During debugging, I found that the number of Kafka producer network threads is very high. I could not count them, but they are definitely more than 500.
I have read the thread Kafka Producer Thread, huge amound of threads even when no message is send and applied a similar configuration by setting the producerPerConsumerPartition property to false on DefaultKafkaProducerFactory. But it is still not decreasing the Kafka producer network thread count.
Below are my code snippets, the error, and a picture of those threads. I can't post all of the code segments since they are from a real project.
Code segments
public DefaultKafkaProducerFactory<String, String> getProducerFactory() throws IOException, IllegalStateException {
    Map<String, Object> configProps = getProducerConfigMap();
    DefaultKafkaProducerFactory<String, String> defaultKafkaProducerFactory = new DefaultKafkaProducerFactory<>(configProps);
    //defaultKafkaProducerFactory.transactionCapable();
    defaultKafkaProducerFactory.setProducerPerConsumerPartition(false);
    defaultKafkaProducerFactory.setProducerPerThread(false);
    return defaultKafkaProducerFactory;
}

public Map<String, Object> getProducerConfigMap() throws IOException, IllegalStateException {
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getBootstrapAddress());
    configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.RETRIES_CONFIG, kafkaProperties.getKafkaRetryConfig());
    configProps.put(ProducerConfig.ACKS_CONFIG, kafkaProperties.getKafkaAcknowledgementConfig());
    configProps.put(ProducerConfig.CLIENT_ID_CONFIG, kafkaProperties.getKafkaClientId());
    configProps.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 512 * 1024 * 1024);
    configProps.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 10 * 1000);
    configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    //updateSSLConfig(configProps);
    return configProps;
}

@Bean
public KafkaTemplate<String, String> kafkaTemplate() throws IOException, IllegalStateException {
    ProducerFactory<String, String> producerFactory = getProducerFactory();
    KafkaTemplate<String, String> kt = new KafkaTemplate<>(producerFactory, true);
    kt.setCloseTimeout(java.time.Duration.ofSeconds(5));
    return kt;
}
Error
2020-12-07 18:14:19.249 INFO 26651 --- [onPool-worker-1] o.a.k.clients.producer.KafkaProducer : [Producer clientId=kafka-client-09f48ec8-7a69-4b76-a4f4-a418e96ff68e-1] Closing the Kafka producer with timeoutMillis = 0 ms.
2020-12-07 18:14:19.254 ERROR 26651 --- [onPool-worker-1] c.w.p.r.g.xxxxxxxx.xxx.KafkaPublisher : Exception happened publishing to topic. Failed to construct kafka producer
2020-12-07 18:14:19.273 INFO 26651 --- [ main] ConditionEvaluationReportLoggingListener :
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2020-12-07 18:14:19.281 ERROR 26651 --- [ main] o.s.boot.SpringApplication : Application run failed
java.lang.IllegalStateException: Failed to execute CommandLineRunner
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:787) [spring-boot-2.2.8.RELEASE.jar:2.2.8.RELEASE]
at org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:768) [spring-boot-2.2.8.RELEASE.jar:2.2.8.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:322) [spring-boot-2.2.8.RELEASE.jar:2.2.8.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1226) [spring-boot-2.2.8.RELEASE.jar:2.2.8.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1215) [spring-boot-2.2.8.RELEASE.jar:2.2.8.RELEASE]
at xxx.xxx.xxx.Application.main(Application.java:46) [classes/:na]
Caused by: java.util.concurrent.CompletionException: java.lang.NullPointerException
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273) ~[na:1.8.0_144]
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280) ~[na:1.8.0_144]
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1592) ~[na:1.8.0_144]
at java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1582) ~[na:1.8.0_144]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) ~[na:1.8.0_144]
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) ~[na:1.8.0_144]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) ~[na:1.8.0_144]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157) ~[na:1.8.0_144]
Caused by: java.lang.NullPointerException: null
at com.xxx.xxx.xxx.xxx.KafkaPublisher.publishData(KafkaPublisher.java:124) ~[classes/:na]
at com.xxx.xxx.xxx.xxx.lambda$0(Publisher.java:39) ~[classes/:na]
at java.util.ArrayList.forEach(ArrayList.java:1249) ~[na:1.8.0_144]
at com.xxx.xxx.xxx.xxx.publishData(Publisher.java:38) ~[classes/:na]
at com.xxx.xxx.xxx.xxx.Application.lambda$0(Application.java:75) [classes/:na]
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590) ~[na:1.8.0_144]
... 5 common frames omitted
The following is the code for publishing the message - line number 124 is where we actually call KafkaTemplate:
public void publishData(Object object) {
    ListenableFuture<SendResult<String, String>> future = null;
    // Convert the Object to JSON
    String json = convertObjectToJson(object);
    // Generate unique key for the message
    String key = UUID.randomUUID().toString();
    // Post the JSON to Kafka
    try {
        future = kafkaConfig.kafkaTemplate().send(kafkaProperties.getTopicName(), key, json);
    } catch (Exception e) {
        logger.error("Exception happened publishing to topic. {}", e.getMessage());
    }
    future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
        @Override
        public void onSuccess(SendResult<String, String> result) {
            logger.info("Sent message with key=[" + key + "]");
        }
        @Override
        public void onFailure(Throwable ex) {
            logger.error("Unable to send message=[ {} due to {}", json, ex.getMessage());
        }
    });
    kafkaConfig.kafkaTemplate().flush();
}
============================
I am not sure if this error is caused by those many network threads.
After posting the data, I called the KafkaTemplate flush method. It did not work.
I also called the ProducerFactory closeThreadBoundProducer, reset, and destroy methods. None of them seems to work.
Am I missing any configuration?
The null pointer issue was not actually related to Spring Kafka. We were reading the topic name from a different location connected over a network. That network connection was failing in a few cases, which produced a null topic name and ultimately caused the error above.
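For anyone hitting the same thing, here is a defensive sketch of the publish path (it reuses the question's field names, so treat those as assumptions): validate the topic before sending, and keep the callback inside the try block so a failed send() never dereferences a null future.
public void publishData(Object object) {
    String topic = kafkaProperties.getTopicName();
    if (topic == null || topic.isEmpty()) {
        // The remote lookup failed; bail out instead of NPE-ing inside send().
        logger.error("No topic name resolved, skipping message.");
        return;
    }
    String json = convertObjectToJson(object);
    String key = UUID.randomUUID().toString();
    try {
        kafkaConfig.kafkaTemplate()
                .send(topic, key, json)
                .addCallback(
                        result -> logger.info("Sent message with key=[{}]", key),
                        ex -> logger.error("Unable to send message=[{}] due to {}", json, ex.getMessage()));
    } catch (Exception e) {
        // send() itself failed (e.g. the producer could not be constructed),
        // so there is no future to register a callback on.
        logger.error("Exception happened publishing to topic. {}", e.getMessage());
    }
}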

Kafka-Streams throwing NullPointerException when consuming

I have this problem:
When I'm consuming from a topic using the Processor API and call context().forward(K, V) inside the processor, Kafka Streams throws a NullPointerException.
This is its stack trace:
Exception in thread "StreamThread-1" java.lang.NullPointerException
at org.apache.kafka.streams.processor.internals.StreamTask.forward(StreamTask.java:336)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:187)
at org.apache.kafka.streams.processor.ProcessorContext$forward.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:133)
at com.bnsf.ltf.processor.ConversionProcessor.process(ConversionProcessor.groovy:23)
at com.bnsf.ltf.processor.ConversionProcessor.process(ConversionProcessor.groovy)
at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:68)
at org.apache.kafka.streams.processor.internals.StreamTask.forward(StreamTask.java:338)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:187)
at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:64)
at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:174)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:320)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:218)
My Gradle dependencies look like this:
compile('org.codehaus.groovy:groovy-all')
compile('org.apache.kafka:kafka-streams:0.10.0.0')
Update: I tried with version 0.10.0.1 and it still throws the same error.
This is the code of the Topology I'm building...
topologyBuilder.addSource('inboundTopic', stringDeserializer, stringDeserializer, conversionConfiguration.inTopic)
        .addProcessor('conversionProcess', new ProcessorSupplier() {
            @Override
            Processor get() {
                return conversionProcessor
            }
        }, 'inboundTopic')
        .addSink('outputTopic', conversionConfiguration.outTopic, stringSerializer, stringSerializer, 'conversionProcess')
stream = new KafkaStreams(topologyBuilder, streamConfig)
stream.start()
My processor looks like this:
@Override
void process(String key, String message) {
    // Call to a service and the return of the service is set on the
    // local variable named converted
    context().forward(key, converted)
    context().commit()
}
Provide your Processor directly, so that a new instance is returned each time the supplier is called, instead of handing out the same shared conversionProcessor object:
.addProcessor('conversionProcess', () -> new MyProcessor(), 'inboundTopic')
MyProcessor should, in turn, extend AbstractProcessor.
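A minimal sketch of what MyProcessor could look like (in Java, with the call to the conversion service stubbed out):
import org.apache.kafka.streams.processor.AbstractProcessor;

public class MyProcessor extends AbstractProcessor<String, String> {

    @Override
    public void process(String key, String message) {
        // context() is non-null here because the framework calls init()
        // on each fresh instance the supplier hands out.
        String converted = convert(message);
        context().forward(key, converted);
        context().commit();
    }

    private String convert(String message) {
        // Placeholder for the conversion service call.
        return message;
    }
}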

Liferay model listener - onAfterUpdate handler loop

I have created a custom model listener for DLFileEntry in Liferay 6.2 GA6. However, the onAfterUpdate method is called repeatedly, even when no change has been made to the DLFileEntry.
Let me describe the situation:
The file entry is changed through the content administration in Liferay
The onAfterUpdate method is triggered (this is OK)
The onAfterUpdate method is triggered again and again - even though no update was made to this entry
I've dumped the stack trace when the (unexpected) update event happens. It looks like onAfterUpdate is triggered by the incrementViewCounter(..) method, which is invoked from the BufferedIncrementRunnable class:
java.lang.Exception: Stack trace
at java.lang.Thread.dumpStack(Thread.java:1365)
at eu.package.hook.model.listener.DLFileEntryModelListener.onAfterUpdate(DLFileEntryModelListener.java:63)
at eu.package.hook.model.listener.DLFileEntryModelListener.onAfterUpdate(DLFileEntryModelListener.java:32)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.liferay.portal.kernel.bean.ClassLoaderBeanHandler.invoke(ClassLoaderBeanHandler.java:67)
at com.sun.proxy.$Proxy865.onAfterUpdate(Unknown Source)
at com.liferay.portal.service.persistence.impl.BasePersistenceImpl.update(BasePersistenceImpl.java:340)
at com.liferay.portlet.documentlibrary.service.impl.DLFileEntryLocalServiceImpl.incrementViewCounter(DLFileEntryLocalServiceImpl.java:1450)
at sun.reflect.GeneratedMethodAccessor2034.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.liferay.portal.spring.aop.ServiceBeanMethodInvocation.proceed(ServiceBeanMethodInvocation.java:115)
at com.liferay.portal.spring.transaction.DefaultTransactionExecutor.execute(DefaultTransactionExecutor.java:62)
at com.liferay.portal.spring.transaction.TransactionInterceptor.invoke(TransactionInterceptor.java:51)
at com.liferay.portal.spring.aop.ServiceBeanMethodInvocation.proceed(ServiceBeanMethodInvocation.java:111)
at com.liferay.portal.increment.BufferedIncreasableEntry.proceed(BufferedIncreasableEntry.java:48)
at com.liferay.portal.increment.BufferedIncrementRunnable.run(BufferedIncrementRunnable.java:65)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
I have read the documentation about the buffered increment on the portal.properties docs page. It's not recommended to disable this feature.
I have also thought about checking in the model listener method whether any relevant change was actually made to the DLFileEntry object. I just wanted to ask whether there is any configuration that could be used to bypass the onAfterUpdate method when it's triggered by the incrementViewCounter method.
Any help is appreciated.
Update:
The onAfterUpdate method:
private void createMessage(DLFileEntry model, String create) {
    JSONObject jsonObject = JSONFactoryUtil.createJSONObject();
    jsonObject.put("action", create);
    jsonObject.put("id", model.getFileEntryId());
    MessageBusUtil.sendMessage(SUPIN_MESSAGE_LISTENER_DESTINATION, jsonObject);
}

@Override
public void onAfterUpdate(DLFileEntry model) throws ModelListenerException {
    if (LOG.isTraceEnabled()) {
        URL[] urls = ((URLClassLoader) (Thread.currentThread().getContextClassLoader())).getURLs();
        LOG.trace("Current thread classpath is: " + StringUtils.join(urls, ","));
    }
    LogMF.info(LOG, "File entry on update event - id {0}", new Object[]{model.getFileEntryId()});
    Thread.dumpStack();
    createMessage(model, UPDATE);
}
Here is the message listener (message bus) which performs the on after update actions:
private void createOrUpdate(DLFileEntry model, String createOrUpdate) {
    try {
        initPermissionChecker(model);
        LOG.info("Document " + model.getFileEntryId() + " " + createOrUpdate + "d in Liferay. Creating entry in Safe.");
        long documentInSafe;
        if (UPDATE.equalsIgnoreCase(createOrUpdate)) {
            documentInSafe = (long) model.getExpandoBridge().getAttribute(EXPANDO_SAFE_DOCUMENT_ID);
            if (documentInSafe > 0) {
                safeClient.updateDocumentInSafe(model);
            } else {
                documentInSafe = safeClient.createDocumentInSafe(model);
            }
        } else {
            documentInSafe = safeClient.createDocumentInSafe(model);
        }
        LOG.info("Document " + createOrUpdate + "d successfully with id " + documentInSafe);
        saveSafeIDToExpando(model, documentInSafe);
    } catch (Exception e) {
        LOG.error("Unable to save ID of document in Safe", e);
    }
}

private void saveSafeIDToExpando(DLFileEntry model, long documentInSafe) throws SystemException {
    try {
        ExpandoTable table = ExpandoTableLocalServiceUtil.getDefaultTable(model.getCompanyId(), DLFileEntry.class.getName());
        ExpandoColumn column = ExpandoColumnLocalServiceUtil.getColumn(table.getTableId(), EXPANDO_SAFE_DOCUMENT_ID);
        ExpandoValueLocalServiceUtil.addValue(model.getCompanyId(), table.getTableId(), column.getColumnId(), model.getClassPK(), String.valueOf(documentInSafe));
        LOG.info("ID of document in Safe updated in expando attribute");
    } catch (PortalException e) {
        LOG.error("Unable to save Safe document ID in expando.", e);
    }
}

private void initPermissionChecker(DLFileEntry model) throws Exception {
    User safeAdminUser = UserLocalServiceUtil.getUserByScreenName(model.getCompanyId(), SAFE_ADMIN_SCREEN_NAME);
    PermissionChecker permissionChecker = PermissionCheckerFactoryUtil.create(safeAdminUser);
    PermissionThreadLocal.setPermissionChecker(permissionChecker);
    PrincipalThreadLocal.setName(safeAdminUser.getUserId());
    CompanyThreadLocal.setCompanyId(model.getCompanyId());
    LOG.info("Permission checker successfully initialized.");
}
I suggest this, though I am not sure it resolves your case.
I've changed the body of your onAfterUpdate method:
using TransactionCommitCallbackRegistryUtil, you can detach the model update from the subsequent createMessage logic.
@Override
public void onAfterUpdate(final DLFileEntry model) throws ModelListenerException {
    TransactionCommitCallbackRegistryUtil.registerCallback(new Callable<Void>() {
        @Override
        public Void call() throws Exception {
            createMessage(model, UPDATE);
            return null;
        }
    });
}
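Alternatively, since you asked specifically about bypassing events triggered by incrementViewCounter, a crude plain-Java workaround (a sketch, not a Liferay configuration option) is to inspect the current stack before reacting:
private boolean isViewCounterUpdate() {
    // The unwanted events in the dump above all pass through
    // DLFileEntryLocalServiceImpl.incrementViewCounter, so look for that frame.
    for (StackTraceElement element : Thread.currentThread().getStackTrace()) {
        if ("incrementViewCounter".equals(element.getMethodName())) {
            return true;
        }
    }
    return false;
}
Calling this at the top of onAfterUpdate and returning early when it is true filters out the buffered increments, at the cost of a stack walk per event.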

Are spring-data-redis connections not properly released when transaction support is enabled?

In our Spring 4 project we would like to have database transactions that involve Redis and Hibernate. Whenever Hibernate fails, for example due to optimistic locking, the Redis transaction should be aborted as well.
This seems to work for
Single-threaded transaction execution.
Multi-threaded transaction execution, as long as the transaction only includes a single Redis call.
Multi-threaded transaction execution with multiple Redis calls, if Hibernate is excluded from our configuration.
As soon as a transaction includes multiple Redis calls, and Hibernate is configured to take part in the transactions, there seems to be a problem with connection binding and multithreading. Threads are stuck at RedisConnectionUtils.bindConnection(), probably since the JedisPool runs out of connections.
This can be reproduced as follows.
@Service
public class TransactionalService {

    @Autowired
    @Qualifier("redisTemplate")
    private RedisTemplate<String, Object> redisTemplate;

    @Transactional
    public void processTask(int i) {
        redisTemplate.convertAndSend("testChannel", new Message());
        redisTemplate.convertAndSend("testChannel", new Message());
    }
}
We use a ThreadPoolTaskExecutor with a core pool size of 50 to simulate multithreaded transactions.
@Service
public class TaskRunnerService {

    @Autowired
    private TaskExecutor taskExecutor;

    @Autowired
    private TransactionalService transactionalService;

    public void runTasks() {
        for (int i = 0; i < 100; i++) {
            final int j = i;
            taskExecutor.execute(new Runnable() {
                @Override
                public void run() {
                    transactionalService.processTask(j);
                }
            });
        }
    }
}
Running this results in all taskExecutor threads hanging in JedisPool.getResource():
"taskExecutor-1" - Thread t#18
java.lang.Thread.State: WAITING
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <1b83c92c> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at org.apache.commons.pool2.impl.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:524)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:438)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:361)
at redis.clients.util.Pool.getResource(Pool.java:40)
at redis.clients.jedis.JedisPool.getResource(JedisPool.java:84)
at redis.clients.jedis.JedisPool.getResource(JedisPool.java:10)
at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.fetchJedisConnector(JedisConnectionFactory.java:90)
at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.getConnection(JedisConnectionFactory.java:143)
at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.getConnection(JedisConnectionFactory.java:41)
at org.springframework.data.redis.core.RedisConnectionUtils.doGetConnection(RedisConnectionUtils.java:128)
at org.springframework.data.redis.core.RedisConnectionUtils.bindConnection(RedisConnectionUtils.java:66)
at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:175)
at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:152)
at org.springframework.data.redis.core.RedisTemplate.convertAndSend(RedisTemplate.java:675)
at test.TransactionalService.processTask(TransactionalService.java:23)
at test.TransactionalService$$FastClassBySpringCGLIB$$9b3de279.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:708)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:98)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:262)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:95)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:644)
at test.TransactionalService$$EnhancerBySpringCGLIB$$a1b3ba03.processTask(<generated>)
at test.TaskRunnerService$1.run(TaskRunnerService.java:28)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Locked ownable synchronizers:
- locked <7d528cf7> (a java.util.concurrent.ThreadPoolExecutor$Worker)
Redis Config
@Configuration
public class RedisConfig {

    @Bean
    public JedisConnectionFactory jedisConnectionFactory() {
        JedisConnectionFactory jedisConnectionFactory = new JedisConnectionFactory();
        jedisConnectionFactory.setPoolConfig(new JedisPoolConfig());
        return jedisConnectionFactory;
    }

    @Bean
    public Jackson2JsonRedisSerializer<Object> jackson2JsonRedisSerializer() {
        Jackson2JsonRedisSerializer jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer(Object.class);
        jackson2JsonRedisSerializer.setObjectMapper(objectMapper());
        return jackson2JsonRedisSerializer;
    }

    @Bean
    public StringRedisSerializer stringRedisSerializer() {
        return new StringRedisSerializer();
    }

    @Bean
    public RedisTemplate<String, Object> redisTemplate() {
        RedisTemplate<String, Object> redisTemplate = new RedisTemplate<>();
        redisTemplate.setConnectionFactory(jedisConnectionFactory());
        redisTemplate.setKeySerializer(stringRedisSerializer());
        redisTemplate.setValueSerializer(jackson2JsonRedisSerializer());
        redisTemplate.setEnableTransactionSupport(true);
        return redisTemplate;
    }

    @Bean
    public ObjectMapper objectMapper() {
        ObjectMapper objectMapper = new ObjectMapper();
        objectMapper.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        objectMapper.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL);
        return objectMapper;
    }
}
Hibernate Config
@EnableTransactionManagement
@Configuration
public class HibernateConfig {

    @Bean
    public LocalContainerEntityManagerFactoryBean admin() {
        LocalContainerEntityManagerFactoryBean entityManagerFactoryBean = new LocalContainerEntityManagerFactoryBean();
        entityManagerFactoryBean.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
        entityManagerFactoryBean.setPersistenceUnitName("test");
        return entityManagerFactoryBean;
    }

    @Bean
    public JpaTransactionManager transactionManager(
            @Qualifier("admin") LocalContainerEntityManagerFactoryBean entityManagerFactoryBean) {
        JpaTransactionManager transactionManager = new JpaTransactionManager();
        transactionManager.setEntityManagerFactory(entityManagerFactoryBean.getObject());
        transactionManager.setDataSource(entityManagerFactoryBean.getDataSource());
        return transactionManager;
    }
}
Is this a bug in spring-data-redis or is something wrong in our configuration?
I found your question (coincidentally) right before I hit the exact same issue using opsForHash and putting many keys. A thread dump confirmed it.
What I found helped get me going was to increase the connection pool size in my JedisPoolConfig. I set it to 128, as follows, and that got me on my way again.
@Bean
JedisPoolConfig jedisPoolConfig() {
    JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
    jedisPoolConfig.setMaxTotal(128);
    return jedisPoolConfig;
}
I assume the pool was too small in my case, and all the connections were in use by my transactions, so callers were waiting indefinitely. Setting maxTotal to 128 allowed me to continue. Try setting your config to a maxTotal that makes sense for your application.
I had a very similar problem, but bumping maxTotal bothered me if the connections really weren't being released. Instead, I had some code that rapidly did a get and then a set; I put both in a SessionCallback and it behaved much better. Hope that helps.
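The shape of that is roughly the following (the key and value are placeholders); both commands run on the single connection bound to the session, instead of each call borrowing its own from the pool:
List<Object> results = redisTemplate.execute(new SessionCallback<List<Object>>() {
    @Override
    public List<Object> execute(RedisOperations operations) throws DataAccessException {
        operations.multi();
        operations.opsForValue().get("some:key");
        operations.opsForValue().set("some:key", "some-value");
        // exec() returns the results of the queued commands.
        return operations.exec();
    }
});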

Rewriting a series in a JavaFX LineChart

I have a JavaFX app that uses a LineChart. I can write a chart to the app and clear it, but when I want to write a new series and have it displayed, I get an error:
java.lang.IllegalArgumentException: Children: duplicate children added:
I understand the meaning, but not how to fix it (I am very new to Java, let alone JavaFX).
Here is the relevant code from my controller (minus some class declarations):
(method called by the 'submit' button in the chart tab window)
@FXML
private void getEngDataPlot(ActionEvent event) {
    // check time inputs
    boolean start = FieldVerifier.isValidUtcString(startRange.getText());
    boolean end = FieldVerifier.isValidUtcString(endRange.getText());
    type = engData.getValue().toString();
    // Highlight errors.
    startRangeMsg.setTextFill(Color.web(start ? "#000000" : "#ff0000"));
    endRangeMsg.setTextFill(Color.web(end ? "#000000" : "#ff0000"));
    if (!start || !end) {
        return;
    }
    // Save the preferences.
    Preferences prefs = Preferences.userRoot().node(this.getClass().getName());
    prefs.put("startRange", startRange.getText());
    prefs.put("endRange", endRange.getText());
    prefs.put("engData", engData.getValue().toString());
    StringBuilder queryString = new StringBuilder();
    queryString.append(String.format("edit out",
            startRange.getText(),
            endRange.getText()));
    queryString.append(type);
    log(queryString.toString());
    // Start the query task.
    submitEngData.setDisable(true);
    // remove the old series.
    engChart.getData().clear();
    engDataProgressBar.setDisable(false);
    engDataProgressBar.setProgress(-1.0);
    //ProgressMessage.setText("Working...");
    Thread t = new Thread(new EngDataPlotTask(queryString.toString()));
    t.setDaemon(true);
    t.start();
}
(the task called by the above method:)
public EngDataPlotTask(String query) {
    this.query = query;
}

@Override
protected Void call() {
    try {
        URL url = new URL(query);
        String inputLine = null;
        BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream()));
        // while (in.readLine() != null) {
        inputLine = in.readLine(); //}
        Gson gson = new GsonBuilder().create();
        DataObject[] dbin = gson.fromJson(inputLine, DataObject[].class);
        in.close();
        for (DataObject doa : dbin) {
            series.getData().add(new XYChart.Data(doa.danTime, doa.Fvalue));
        }
        xAxis.setLabel("Dan Time (msec)");
    } catch (Exception ex) {
        log(ex.getLocalizedMessage());
    }
    Platform.runLater(new Runnable() {
        @Override
        public void run() {
            submitEngData.setDisable(false);
            // do some pretty stuff
            String typeName = typeNameToTitle.get(type);
            series.setName(typeName);
            // put this series on the chart
            engChart.getData().add(series);
            engDataProgressBar.setDisable(true);
            engDataProgressBar.setProgress(1.0);
        }
    });
    return null;
}
The chart draws the first time and clears, but then the exception occurs. The requested stack trace follows:
Exception in runnable
java.lang.IllegalArgumentException: Children: duplicate children added: parent = Group#8922394[styleClass=plot-content]
at javafx.scene.Parent$1.onProposedChange(Unknown Source)
at com.sun.javafx.collections.VetoableObservableList.add(Unknown Source)
at com.sun.javafx.collections.ObservableListWrapper.add(Unknown Source)
at javafx.scene.chart.LineChart.seriesAdded(Unknown Source)
at javafx.scene.chart.XYChart$2.onChanged(Unknown Source)
at com.sun.javafx.collections.ListListenerHelper$SingleChange.fireValueChangedEvent(Unknown Source)
at com.sun.javafx.collections.ListListenerHelper.fireValueChangedEvent(Unknown Source)
at com.sun.javafx.collections.ObservableListWrapper.callObservers(Unknown Source)
at com.sun.javafx.collections.ObservableListWrapper.add(Unknown Source)
at com.sun.javafx.collections.ObservableListWrapper.add(Unknown Source)
at edu.arizona.lpl.dan.DanQueryToolFX.QueryToolController$EngDataPlotTask$1.run(QueryToolController.java:231)
at com.sun.javafx.application.PlatformImpl$4.run(Unknown Source)
at com.sun.glass.ui.win.WinApplication._runLoop(Native Method)
at com.sun.glass.ui.win.WinApplication.access$100(Unknown Source)
at com.sun.glass.ui.win.WinApplication$2$1.run(Unknown Source)
at java.lang.Thread.run(Thread.java:722)
Any ideas what I am doing wrong? I am a RANK NEWBIE, so please take that into account if you wish to reply. Thank you!
It took a long time to find a workaround for this issue.
Please add the piece of code below and test:
engChart.getData().retainAll();
engChart.getData().add(series);
My guess about the root cause, based on your incomplete code, is this line:
engChart.getData().add(series);
You should add the series only once, in an initialize block for instance. I think that in your task thread you are adding the same, already-added series again, which produces the exception you mention. If your aim is only to refresh the series data, then just manipulate the series, getting it via engChart.getData().get(0), and delete that line from the code.
Once you add the series to the graph, all you do is edit the series; don't add it to the graph again.
The graph will follow whatever happens to the series, i.e. just change the series data and the graph will automatically reflect the changes.
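Sketched against the question's code (dbin would need to be final or otherwise visible to the anonymous class), that looks something like:
// Add the series to the chart once, e.g. when the controller initializes:
engChart.getData().add(series);

// When new results arrive, mutate the existing series instead of adding
// it to the chart a second time:
Platform.runLater(new Runnable() {
    @Override
    public void run() {
        series.getData().clear();
        for (DataObject doa : dbin) {
            series.getData().add(new XYChart.Data(doa.danTime, doa.Fvalue));
        }
        series.setName(typeNameToTitle.get(type));
    }
});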
