Spring Integration JpaInboundChannelAdapterSpec Update method - spring-integration

Why isn't there an equivalent updateSql() (as in JdbcPollingChannelAdapter) for JpaInboundChannelAdapterSpec? Should I call a manual persist through handle() to update the polled records, so that the same records are not picked up again?
UPDATE
@Bean
public JpaInboundChannelAdapterSpec dBJpaInboundSpec() {
    return Jpa.inboundAdapter(entityManagerFactory)
            .entityClass(XYZ.class)
            .jpaQuery("SELECT a FROM XYZ a WHERE a.status is null")
            // trick to update the polled record
            .deleteAfterPoll(true)
            .maxResults(maxMessagesPerPoll);
}
@Bean
public PollerSpec dBPollerSpec() {
    return Pollers.fixedDelay(Duration.ofMinutes(pollInterval))
            .maxMessagesPerPoll(maxMessagesPerPoll)
            .transactional()
            .advice(loggingAdvice);
}
2022-10-21 20:05:57,559 [scheduling-2 ] ERROR o.s.i.h.LoggingHandler - b46f904d98859f6f org.springframework.messaging.MessagingException: nested exception is org.springframework.dao.InvalidDataAccessResourceUsageException: could not execute statement; SQL [n/a]; nested exception is org.hibernate.exception.SQLGrammarException: could not execute statement, failedMessage=GenericMessage [payload=[REMOVED_FOR_OBVIOUS_REASONS], headers={REMOVED_FOR_OBVIOUS_REASONS}]
at org.springframework.integration.endpoint.AbstractPollingEndpoint.pollForMessage(AbstractPollingEndpoint.java:427)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.lambda$createPoller$4(AbstractPollingEndpoint.java:348)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.lambda$execute$0(ErrorHandlingTaskExecutor.java:57)
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.execute(ErrorHandlingTaskExecutor.java:55)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.lambda$createPoller$5(AbstractPollingEndpoint.java:341)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:95)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: org.springframework.dao.InvalidDataAccessResourceUsageException: could not execute statement; SQL [n/a]; nested exception is org.hibernate.exception.SQLGrammarException: could not execute statement
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.con

I think it is not implemented the way you'd expect just because JPA is not only about the DB, but also about persistent entity state in memory. So, if you just did an update on the entity, the downstream logic would have to deal with that updated data, not the original you polled from the persistence layer before.
Yes, in this case it is better to do such an update manually, after processing.
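A minimal sketch of that manual update, assuming the Java DSL and a hypothetical XYZ entity with a status column; the Jpa.updatingGateway() / Jpa.outboundAdapter() names are from the Spring Integration JPA DSL, everything else (process(), "DONE") is invented for illustration:

```java
// Sketch only: mark the record after processing, so the next poll no longer
// matches it. XYZ, its status values and process() are hypothetical.
@Bean
public IntegrationFlow pollingFlow(EntityManagerFactory entityManagerFactory) {
    return IntegrationFlows
            .from(Jpa.inboundAdapter(entityManagerFactory)
                            .entityClass(XYZ.class)
                            .jpaQuery("SELECT a FROM XYZ a WHERE a.status is null"),
                    e -> e.poller(p -> p.fixedDelay(Duration.ofMinutes(1)).transactional()))
            .<XYZ>handle((payload, headers) -> process(payload)) // your business logic
            // one-way JPA adapter executing an UPDATE after the record is processed
            .handle(Jpa.outboundAdapter(entityManagerFactory)
                    .jpaQuery("UPDATE XYZ a SET a.status = 'DONE' WHERE a.id = :id")
                    .parameterExpression("id", "payload.id"))
            .get();
}
```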
I also always recommend a trick with org.hibernate.annotations.SQLDelete, where you can indeed specify an UPDATE query. The JpaInboundChannelAdapterSpec has this option then:
/**
 * If set to 'true', the retrieved objects are deleted from the database upon
 * being polled. May not work in all situations, e.g. for Native SQL Queries.
 * @param deleteAfterPoll Defaults to 'false'.
 * @return the spec
 */
public JpaInboundChannelAdapterSpec deleteAfterPoll(boolean deleteAfterPoll) {
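A sketch of the @SQLDelete trick under the assumption of a hypothetical xyz table with a status column: Hibernate substitutes the annotated UPDATE for the entity's DELETE statement, so deleteAfterPoll(true) effectively marks the row as processed instead of removing it.

```java
// Sketch (hypothetical table/column names): Hibernate replaces the entity's
// DELETE with this UPDATE, so the adapter's delete-after-poll becomes an update.
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

import org.hibernate.annotations.SQLDelete;

@Entity
@Table(name = "xyz")
@SQLDelete(sql = "UPDATE xyz SET status = 'PROCESSED' WHERE id = ?")
public class XYZ {

    @Id
    private Long id;

    private String status;

    public Long getId() { return id; }
    public String getStatus() { return status; }
}
```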

Related

Micronaut + GORM does not save domain deeply

When trying to update the Execution domain, only the values from the execution table get updated.
import grails.gorm.annotation.Entity

@Entity
class Execution {
    RatingItem price
    Boolean isDone
}

@Entity
class RatingItem {
    Boolean isDone
}
Saving happens in a @Transactional, @Singleton service called by a controller:
@Transactional
@Singleton
class ExecutionService {
    boolean saveExecution(Execution execution) {
        if (execution.save(flush: true, failOnError: true)) {
            return true
        }
        return false
    }
}
Debugging:
execution.isDone is updated
execution.price.isDone is not
After the transaction ends, is_done in rating_item was not updated.
This happens frequently. There is similar code that worked. In some cases I use a DataSource to force the update, but I need to use GORM this time.
As I mentioned before, there was similar code that worked, and I noticed that in that code I used the following:
Execution execution = Execution.findById(id)
And in the current problem:
Execution execution = Execution.get(id)
When I used findById it worked.
I think it's because get doesn't fetch the domain correctly, even though all inner domains are lazy: false. I believe this happens only in Micronaut, not in Grails.

How to add a try/catch for exceptions in a Spring Integration flow to achieve nested transactions

How do I handle nested transactions in a Spring Integration flow? Basically I have a process that fetches all the orders from the database and processes them order by order; in case of an exception thrown on a single order, all the orders processed so far get rolled back.
IntegrationFlows.from("perOrder")
        .filter(Order.class, order -> order.getItems().size() > 0)
        // somehow I want to add a try/catch for this method here, so that if
        // handle() throws an exception, it is suppressed for that order and
        // only that order is marked as a failure
        .handle(orderHandler, "handle")
        .get();
public class OrderHandler {

    @Transactional(propagation = Propagation.NESTED)
    public void handle() {
        // processing code
        // throws an exception in case of any validation failure
    }
}
For this purpose we provide an adviceChain to be injected into the endpoint of that handle():
.handle((GenericHandler<?>) (p, h) -> {
throw new RuntimeException("intentional");
}, e -> e.advice(retryAdvice()))
You can inject there any available Advice implementation: https://docs.spring.io/spring-integration/docs/current/reference/html/#message-handler-advice-chain, including TransactionInterceptor: https://docs.spring.io/spring-integration/docs/current/reference/html/#tx-handle-message-advice
The best way to get try...catch semantics is with the ExpressionEvaluatingRequestHandlerAdvice. See its description in the docs and also its JavaDocs.
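A sketch of such an advice, assuming the Order/OrderHandler types from the question; the "failedOrders" channel name is hypothetical. With trapException the per-order exception is swallowed, and the failed payload is routed to the failure channel instead of rolling back the whole batch:

```java
// Sketch: per-order try/catch via ExpressionEvaluatingRequestHandlerAdvice.
// The "failedOrders" channel is an assumption; wire it to your failure handling.
@Bean
public ExpressionEvaluatingRequestHandlerAdvice perOrderAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setTrapException(true);                  // don't propagate: the flow continues
    advice.setOnFailureExpressionString("payload"); // what to send for the failed order
    advice.setFailureChannelName("failedOrders");   // mark only that order as failed there
    return advice;
}

@Bean
public IntegrationFlow perOrderFlow(OrderHandler orderHandler) {
    return IntegrationFlows.from("perOrder")
            .filter(Order.class, order -> !order.getItems().isEmpty())
            .handle(orderHandler, "handle", e -> e.advice(perOrderAdvice()))
            .get();
}
```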

How to Restrict a column of my pojo to be part of Ignite tables

I have a POJO that I am using to create Ignite caches. Now I want to add one more column (XXX) to that POJO, and I don't want that column (XXX) to be part of the Ignite cache creation.
Caused by: class org.apache.ignite.IgniteException: Failed to prepare Cassandra CQL statement: select "customer_ref", "tenant_id", "event_discount_id", "period_num", "domain_id", "event_source", "prod_group_id", "event_seq", "product_seq", "online_version_num", "total_authorised_mny", "version_num", "bonus_count", "customer_category", "recovery_status", "total_discounted_usage", "external_balance_liid", "total_online_discounted_mny", "anti_event_disc_mny", "total_partials_mny", "counter_usage", "total_partials_usage", "online_event_count", "event_count", "last_rated_dtm", "account_num", "dyn_alloc_charge_data", "anti_event_disc_usage", "total_usage", "anti_event_count", "last_online_event_dtm", "fast_cache_seq", "total_discounted_mny", "latest_event_dtm", "total_online_discounted_usage", "carried_over_boo", "total_authorised_usage", "total_otc_mny", "online_batch_info", "counter_resets", "total_bonus_award" from "smart"."custprodinvoicediscusage" where "customer_ref"=? and "tenant_id"=? and "event_discount_id"=? and "period_num"=? and "domain_id"=? and "event_source"=? and "prod_group_id"=? and "event_seq"=? and "product_seq"=?;
at org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.prepareStatement(CassandraSessionImpl.java:603)
at org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.execute(CassandraSessionImpl.java:201)
... 12 more
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Undefined column name recovery_status
at com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:50)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.AbstractSession.prepare(AbstractSession.java:104)
at org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.prepareStatement(CassandraSessionImpl.java:585)
... 13 more
Ignite uses getter and setter methods to read and write fields.
I changed the method signatures in that POJO from getXXX()/setXXX() to:
public void putRecovery_status(Integer RECOVERY_STATUS) { this.RECOVERY_STATUS = RECOVERY_STATUS; }
public Integer fetchRecovery_status() { return RECOVERY_STATUS; }
If you don't want to make a field end up in Ignite cache, you can probably mark it transient.
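An illustration of the transient idea with plain Java serialization (not Ignite itself; the Customer class and field names are invented): a transient field is skipped when the object is marshalled, so it never reaches the serialized form.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class Customer implements Serializable {

    private String name;
    private transient Integer recoveryStatus; // excluded from the serialized form

    public Customer(String name, Integer recoveryStatus) {
        this.name = name;
        this.recoveryStatus = recoveryStatus;
    }

    public String getName() { return name; }
    public Integer getRecoveryStatus() { return recoveryStatus; }

    // Serialize and deserialize to show that the transient field is dropped.
    public static Customer roundTrip(Customer c) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(c);
        }
        try (ObjectInputStream in =
                new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            return (Customer) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Customer back = roundTrip(new Customer("acme", 7));
        System.out.println(back.getName() + " / " + back.getRecoveryStatus()); // acme / null
    }
}
```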

StoredProcedureItemReader MultiThreading Exception

I am using the Spring Batch StoredProcedureItemReader to retrieve the result set and insert it into another database using the JpaItemWriter.
Below is my code configuration.
@Bean
public JdbcCursorItemReader jdbcCursorItemReader() {
    JdbcCursorItemReader jdbcCursorItemReader = new JdbcCursorItemReader();
    jdbcCursorItemReader.setSql("call myProcedure");
    jdbcCursorItemReader.setRowMapper(new MyRowMapper());
    jdbcCursorItemReader.setDataSource(myDataSource);
    jdbcCursorItemReader.setFetchSize(50);
    jdbcCursorItemReader.setVerifyCursorPosition(false);
    jdbcCursorItemReader.setSaveState(false);
    return jdbcCursorItemReader;
}
@Bean
public Step step() {
    threadPoolTaskExecutor = new ThreadPoolTaskExecutor();
    threadPoolTaskExecutor.setCorePoolSize(50);
    threadPoolTaskExecutor.setMaxPoolSize(100);
    threadPoolTaskExecutor.setThreadNamePrefix("My-TaskExecutor ");
    threadPoolTaskExecutor.setWaitForTasksToCompleteOnShutdown(Boolean.TRUE);
    threadPoolTaskExecutor.initialize();
    return stepBuilderFactory.get("myJob").transactionManager(secondaryTransactionManager)
            .chunk(50).reader(jdbcCursorItemReader())
            .writer(myJpaItemWriter())
            .taskExecutor(threadPoolTaskExecutor)
            .throttleLimit(100)
            .build();
}
The code works fine without multithreading or the ThreadPoolTaskExecutor. However, when using them I encounter the error below.
Caused by: java.sql.SQLDataException: Current position is after the last row
could not execute statement [n/a] com.microsoft.sqlserver.jdbc.SQLServerException: Violation of PRIMARY KEY constraint
I have tried using JdbcCursorItemReader; even then I am facing the same error. Any suggestions on how to make this work?
JdbcCursorItemReader is not thread-safe because it is based on a ResultSet, which is not thread-safe. The StoredProcedureItemReader is also based on a ResultSet, hence it is not thread-safe either. See https://stackoverflow.com/a/53964556/5019386
Try to use the JdbcPagingItemReader, which is thread-safe, or if you really have to use the StoredProcedureItemReader, make it thread-safe by wrapping it in a SynchronizedItemStreamReader.
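A sketch of that wrapping, assuming a hypothetical MyRow item type; SynchronizedItemStreamReader is from Spring Batch and simply serializes every read() call on the delegate:

```java
// Sketch: make the StoredProcedureItemReader safe for a multi-threaded step.
// MyRow and the delegate bean are assumptions for illustration.
@Bean
public SynchronizedItemStreamReader<MyRow> synchronizedReader(
        StoredProcedureItemReader<MyRow> delegate) {
    SynchronizedItemStreamReader<MyRow> reader = new SynchronizedItemStreamReader<>();
    reader.setDelegate(delegate); // read() calls are now synchronized across threads
    return reader;
}
```

Note that synchronizing the reader serializes reads, so the threads then only parallelize processing and writing.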
Hope this helps.

Add new measure to ActivePivot?

I'm trying to add a new measure to an existing ActivePivot cube.
I started by adding a measure to the Sandbox project. It worked fine.
I was able to see the new measure and I got the sum of all elements.
I added following lines:
EquityDerivativesCube.xml:
<measure name="test" aggregationFunctions="SUM"/>
SandboxFields.xml:
<field name="test" type="double" defaultValue="0" />
PNLCalculator#PNLCalculatorResult.java:
private Double test;

public Double getTest() {
    return test;
}

public void setTest(Double test) {
    this.test = test;
}
PNLCalculator.enrichTrade():
...
result.setTest(2.);
...
In the real-world application this approach doesn't work so well.
I added the following lines:
Into the
RealWorldApplicationCube.xml
<measure name="Test" aggregationFunctions="SUM" folder="Dev\Test" />
Into
RealWorldApplicationSchema.xml
<field name="Test" type="double" defaultValue="0" />
The data is loaded from CSV files. There is a CSV file that defines all the fields of each file that the application can handle.
...;Test=N/A;
There is also a calculator that handles all the other measures. I extended it like this:
public void computeTest(IRelationalEntry entry) {
    double price = org.apache.commons.collections.MapUtils.getDouble(entry, "price", 0.);
    double test = price * 2;
    entry.put("Test", test);
}
When I run the application I get a couple of exceptions and I cannot see the new measure.
Here is the exception:
com.quartetfs.fwk.transaction.TransactionException: [Transaction manager: ActivePivot] Prepare commit operation failed because an exception has been raised. You must now rollback the transaction.
	at com.quartetfs.fwk.transaction.impl.ATransactionManager.prepareCommit(ATransactionManager.java:130)
	at com.quartetfs.fwk.transaction.impl.ATransactionManager.commit(ATransactionManager.java:142)
	at com.quartetfs.tech.store.impl.ARelationalTransactionManager.commit(ARelationalTransactionManager.java:139)
	at com.quartetfs.tech.store.impl.ARelationalTransactionManager.commit(ARelationalTransactionManager.java:51)
	at com.real.world.application.impl.ATransactionExceptionAwareHandler.doSubmit(ATransactionExceptionAwareHandler.java:97)
	at com.real.world.application.impl.ATransactionExceptionAwareHandler.submit(ATransactionExceptionAwareHandler.java:51)
	at com.quartetfs.tech.store.impl.TransactionHandlerListener.receive(TransactionHandlerListener.java:61)
	at com.quartetfs.tech.store.csv.impl.FilteredSource.receive(FilteredSource.java:56)
	at com.quartetfs.fwk.messaging.impl.ParserContext.publishChunk(ParserContext.java:457)
	at com.quartetfs.fwk.messaging.impl.ParserContext.awaitTermination(ParserContext.java:382)
	at com.quartetfs.fwk.messaging.impl.CSVSource.process(CSVSource.java:308)
	at com.quartetfs.fwk.messaging.impl.CSVSource.onFileAction(CSVSource.java:282)
	at com.quartetfs.fwk.messaging.impl.AFileWatcher.filesAction(AFileWatcher.java:277)
	at com.quartetfs.fwk.messaging.impl.AFileWatcher.doInterval(AFileWatcher.java:267)
	at com.quartetfs.fwk.messaging.impl.AFileWatcher.startScheduling(AFileWatcher.java:125)
	at com.quartetfs.fwk.messaging.impl.AFileWatcher.start(AFileWatcher.java:107)
	at com.quartetfs.fwk.messaging.impl.CSVSource.start(CSVSource.java:182)
	at com.quartetfs.tech.store.csv.impl.ExtendedCSVDataModelFactory$1.run(ExtendedCSVDataModelFactory.java:217)
	at java.lang.Thread.run(Thread.java:662)
Caused by: com.quartetfs.fwk.transaction.TransactionException: [Transaction manager: RealWorldApplicationSchema] Prepare commit operation failed because an exception has been raised. You must now rollback the transaction.
	at com.quartetfs.fwk.transaction.impl.ATransactionManager.prepareCommit(ATransactionManager.java:130)
	at com.quartetfs.tech.store.impl.ARelationalTransactionManager.doPrepareCommit(ARelationalTransactionManager.java:113)
	at com.quartetfs.fwk.transaction.impl.ATransactionManager.prepareCommit(ATransactionManager.java:128)
	... 18 more
Caused by: com.quartetfs.fwk.transaction.TransactionException: One of the schema transaction contribution tasks failed.
	at com.quartetfs.biz.pivot.transaction.impl.ActivePivotSchemaTransaction.prepareCommit(ActivePivotSchemaTransaction.java:239)
	at com.quartetfs.biz.pivot.impl.ActivePivotSchemaTransactionManager.doPrepareCommit(ActivePivotSchemaTransactionManager.java:194)
	at com.quartetfs.fwk.transaction.impl.ATransactionManager.prepareCommit(ATransactionManager.java:128)
	... 20 more
Caused by: java.lang.RuntimeException: com.quartetfs.biz.pivot.ClassificationException: The calculator has thrown an exception during the evaluation of the object: RelationalEntry [type=ActivePivot, key=Key [ ... ,Test=null, ...]]
	at jsr166y.ForkJoinTask.completeExceptionally(ForkJoinTask.java:1116)
	at jsr166y.cancellable.impl.CancellableCountedCompleter.onCompletion(CancellableCountedCompleter.java:132)
	at jsr166y.CountedCompleter.tryComplete(CountedCompleter.java:391)
	at com.quartetfs.biz.pivot.transaction.impl.ActivePivotSchemaTransaction$ContributeAction.afterCompute(ActivePivotSchemaTransaction.java:492)
	at jsr166y.cancellable.impl.CancellableCountedCompleter.compute(CancellableCountedCompleter.java:96)
	at jsr166y.CountedCompleter.exec(CountedCompleter.java:437)
	at jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:265)
	at jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:976)
	at jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1480)
	at jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:105)
Caused by: com.quartetfs.biz.pivot.ClassificationException: The calculator has thrown an exception during the evaluation of the object: RelationalEntry [type=ActivePivot, key=Key [ ... ,Test=null, ...]]
	at com.quartetfs.biz.pivot.classification.impl.Classifier.classifyAll(Classifier.java:114)
	at com.quartetfs.biz.pivot.transaction.impl.ActivePivotSchemaTransaction$ContributeAction.compute(ActivePivotSchemaTransaction.java:446)
	at com.quartetfs.biz.pivot.transaction.impl.ActivePivotSchemaTransaction$ContributeAction.computeSafely(ActivePivotSchemaTransaction.java:487)
	at jsr166y.cancellable.impl.CancellableCountedCompleter.compute(CancellableCountedCompleter.java:91)
	... 5 more
Caused by: com.quartetfs.biz.pivot.ClassificationException: Classification failure for level "Test": the mandatory property "Test" is not found. Contribution key: "Key [ ... ,Test=null, ...]].
	at com.quartetfs.biz.pivot.classification.impl.ResultClassificationProcedure.execute(ResultClassificationProcedure.java:192)
	at com.quartetfs.biz.pivot.classification.impl.ClassificationTree$ClassificationNode.execute(ClassificationTree.java:215)
	at com.quartetfs.biz.pivot.classification.impl.ClassificationTree$ClassificationNode.execute(ClassificationTree.java:235)
	at com.quartetfs.biz.pivot.classification.impl.ClassificationTree.execute(ClassificationTree.java:57)
	at com.quartetfs.biz.pivot.classification.impl.Classifier.classify(Classifier.java:161)
	at com.quartetfs.biz.pivot.classification.impl.Classifier.classifyAll(Classifier.java:110)
	... 8 more
06.02.2013 16:09:24 com.real.world.application.impl.ATransactionExceptionAwareHandler doSubmit
What am I missing? How can I add a new measure to the cube?
What causes the error in the calculator?
While debugging, I can see the code visiting the calculator's method (computeTest()) and setting the value on the RelationalEntry.
Cheers
The steps to add a new measure in ActivePivot are no more complicated than what you describe in your post. Declare the field that holds the value to aggregate, then reference that field as a measure in your cube, together with the aggregation function(s) you want to apply to it, and optionally the post-processors you have written if the base aggregation functions are not enough.
But something else in your project seems to cause an issue, and you need support (the data, the loading of the data, another component...). Stack Overflow is not meant for issue tracking or troubleshooting support; it is more about "I would like to do this, how can I do it?". My advice is to bring your issue directly to the Quartet FS customer support (http://support.quartetfs.com).