jOOQ TransactionListener never gets called - jooq

I am trying to attach a transaction listener to my jOOQ DSL configuration, but none of its methods ever gets called.
DSL context configuration
@Bean
public Configuration dslConfiguration(DataSourceConnectionProvider connectionProvider) {
    return new DefaultConfiguration()
            .set(SQLDialect.POSTGRES)
            .set(connectionProvider)
            .set(new DefaultTransactionProvider(connectionProvider)) // <-- probably unnecessary
            .set(new TxListener()) // <-- here
            .set(executeListener());
}
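The TxListener referenced above is not shown in the question; a minimal sketch of such a listener (an assumption, simply extending jOOQ's no-op DefaultTransactionListener and logging two of the callbacks) could look like this:
public class TxListener extends DefaultTransactionListener {

    @Override
    public void beginStart(TransactionContext ctx) {
        System.out.println("transaction begin");
    }

    @Override
    public void commitEnd(TransactionContext ctx) {
        System.out.println("transaction committed");
    }
}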
Transaction manager
@Bean
public PlatformTransactionManager transactionManager(EntityManagerFactory entityManagerFactory) {
    return new JpaTransactionManager(entityManagerFactory);
}
(Just checked that my calls are done within a transaction: updated some data, threw an exception, data remains unchanged.)

Related

How to multi-thread parsing of JMS messages

In my Spring Boot project, I have two JMS listeners listening to one queue. All messages received from the queue have to be processed in the same way and persisted / updated in the database (Oracle). Currently, I have a synchronized method in a class that does the parsing of the messages. As expected, all threads read messages simultaneously, but parsing is done one by one because the method (parseMessage()) is synchronized. What I want is to parse the messages simultaneously and do the database operations as well.
How can I solve this?
I don't want to create two different classes with the same code and use @Qualifier to call a different class in each listener, as the code for parsing the message is the same.
The ideal solution, I think, is to do the database operations in a new synchronized method in a new class, but parse the message in a multi-threaded way (see the sketch after the code below). So, at any given time only one thread can persist / update; when a thread is not waiting to persist / update, it continues parsing on its own thread.
Please correct me if I am wrong or if you find the optimal solution. Let me know if any other info is needed.
JMS Controller Class
@RestController
@EnableJms
public class JMSController {

    @Autowired
    private IParseMapXml iParseMapXml;

    @JmsListener(destination = "${app.jms_destinaltion}")
    public void receiveMessage1(String recvMsg) {
        try {
            InputSource is = new InputSource(new StringReader(recvMsg.replaceAll("&", "&amp;")));
            Document doc = new SAXReader().read(is);
            iParseMapXml.parseMessage(doc);
        } catch (Exception e) {
            // parse failures are currently swallowed
        }
    }

    @JmsListener(destination = "${app.jms_destinaltion}")
    public void receiveMessage2(String recvMsg) {
        try {
            InputSource is = new InputSource(new StringReader(recvMsg.replaceAll("&", "&amp;")));
            Document doc = new SAXReader().read(is);
            iParseMapXml.parseMessage(doc);
        } catch (Exception e) {
            // parse failures are currently swallowed
        }
    }
}
Parse XML Interface
public interface IParseMapXml {
    public void parseMessage(Document doc);
}
Parsing Implementation
public class ParsingMessageClass implements IParseMapXml {

    @Override
    @Transactional
    public synchronized void parseMessage(Document doc) {
        // TODO Auto-generated method stub
        ....
        PROCESS DATA/MESSAGE
        ....
        DO DB OPERATIONS
    }
}
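The split described above might look roughly like this (a sketch only; the PersistService class and its persist() method are hypothetical names). Parsing stays unsynchronized so each listener thread can parse concurrently, and only the database write is serialized:
public class ParsingMessageClass implements IParseMapXml {

    @Autowired
    private PersistService persistService;

    @Override
    public void parseMessage(Document doc) {
        // no synchronized here: each listener thread parses its message concurrently
        // ... PROCESS DATA/MESSAGE ...
        persistService.persist(doc);
    }
}

public class PersistService {

    @Transactional
    public synchronized void persist(Document doc) {
        // only one thread at a time performs the DB operations
        // ... DO DB OPERATIONS ...
    }
}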

How to pause the Spring cloud data flow Source class from sending data to kafka?

I am working on a Spring Cloud Data Flow application. Following is the code snippet:
@Bean
@InboundChannelAdapter(channel = TbeSource.PR1, poller = @Poller(fixedDelay = "2000"))
public MessageSource<Product> getProductSource(ProductBuilder dataAccess) {
    return new MessageSource<Product>() {

        @SneakyThrows
        @Override
        public Message<Product> receive() {
            System.out.println("calling method");
            return MessageBuilder.withPayload(dataAccess.getNext()).build();
        }
    };
}
In the above code, the getNext() method gets the data from the database and returns that object, so once the data has been completely read it will return null, and we can't return null to this MessageSource.
So, are there any options available to pause and resume this source whenever we need?
Has anyone faced / overcome this scenario?
First of all, you can just have a Supplier<Product> instead of that MessageSource, and your code would be just like this:
return () -> dataAccess.getNext();
A null result is valid here: no message is emitted in that case and no error is raised, since the framework handles a null result properly.
You can still have idle functionality on that @InboundChannelAdapter when the result of the method call is null. For that you need to take a look at the SimpleActiveIdleMessageSourceAdvice. See the docs for more info: https://docs.spring.io/spring-integration/docs/5.3.4.RELEASE/reference/html/core.html#simpleactiveidlereceivemessageadvice
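For illustration, the Supplier variant as a bean might look like this (a sketch; the bean method name productSupplier is assumed, not from the original code):
@Bean
public Supplier<Product> productSupplier(ProductBuilder dataAccess) {
    // returning null here simply means no message is emitted for that poll
    return () -> dataAccess.getNext();
}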

Thread safety in executor channel

I have a message producer which produces around 15 messages/second.
The consumer is a Spring Integration project which consumes from the message queue and does a lot of processing. I have used the ExecutorChannel to process messages in parallel, and then the flow passes through some common handler classes.
Please find below the snippet of code -
baseEventFlow() - We receive the message from the EMS queue and send it to a router
router() - Based on the id of the message, a particular ExecutorChannel instance is chosen; every ExecutorChannel is configured with its own dedicated single-threaded Executor.
skwDefaultChannel(), gjsucaDefaultChannel(), rpaDefaultChannel() - All the ExecutorChannel beans are marked with @BridgeTo for the same channel which starts the common flow.
uaEventFlow() - Here each message will get processed
@Bean
public IntegrationFlow baseEventFlow() {
    return IntegrationFlows
            .from(Jms.messageDrivenChannelAdapter(Jms.container(this.emsConnectionFactory, this.emsQueue).get()))
            .wireTap(FLTAWARE_WIRE_TAP_CHNL)
            .route(router()).get();
}

public AbstractMessageRouter router() {
    return new AbstractMessageRouter() {

        @Override
        protected Collection<MessageChannel> determineTargetChannels(Message<?> message) {
            if (message.getPayload().toString().contains("\"id\":\"RPA")) {
                return Collections.singletonList(skwDefaultChannel());
            } else if (message.getPayload().toString().contains("\"id\":\"ASH")) {
                return Collections.singletonList(rpaDefaultChannel());
            } else if (message.getPayload().toString().contains("\"id\":\"GJS")
                    || message.getPayload().toString().contains("\"id\":\"UCA")) {
                return Collections.singletonList(gjsucaDefaultChannel());
            } else {
                return Collections.singletonList(new NullChannel());
            }
        }
    };
}

@Bean
@BridgeTo("uaDefaultChannel")
public MessageChannel skwDefaultChannel() {
    return MessageChannels.executor(SKW_DEFAULT_CHANNEL_NAME, Executors.newFixedThreadPool(1)).get();
}

@Bean
@BridgeTo("uaDefaultChannel")
public MessageChannel gjsucaDefaultChannel() {
    return MessageChannels.executor(GJS_UCA_DEFAULT_CHANNEL_NAME, Executors.newFixedThreadPool(1)).get();
}

@Bean
@BridgeTo("uaDefaultChannel")
public MessageChannel rpaDefaultChannel() {
    return MessageChannels.executor(RPA_DEFAULT_CHANNEL_NAME, Executors.newFixedThreadPool(1)).get();
}

@Bean
public IntegrationFlow uaEventFlow() {
    return IntegrationFlows.from("uaDefaultChannel")
            .wireTap(UA_WIRE_TAP_CHNL)
            .transform(eventHandler, "parseEvent")
            .handle(uaImpl, "process").get();
}
My concern is that in uaEventFlow() the common transformer and handler are not thread-safe, and this may cause issues. How can we ensure that a new transformer and handler are injected for every message invocation?
Should I change the scope of the transformer and handler beans to prototype?
Instead of bridging to a common flow, you should move the .transform() and .handle() to each of the upstream flows and add
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
to their @Bean definitions so each gets its own instance.
But, it's generally better to make your code thread-safe.
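A sketch of that suggestion, assuming hypothetical EventHandler and UaHandler classes behind the eventHandler and uaImpl references, and with the @BridgeTo bridge removed from the per-route channels. Each per-route flow calls prototype-scoped bean methods, so every flow gets its own transformer and handler instance:
@Bean
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public EventHandler eventHandler() {
    return new EventHandler();
}

@Bean
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public UaHandler uaImpl() {
    return new UaHandler();
}

@Bean
public IntegrationFlow skwEventFlow() {
    return IntegrationFlows.from(skwDefaultChannel())
            .wireTap(UA_WIRE_TAP_CHNL)
            .transform(eventHandler(), "parseEvent") // fresh prototype instance for this flow
            .handle(uaImpl(), "process")             // fresh prototype instance for this flow
            .get();
}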

Override declarative transactional methods with programmatic transactional code in Spring

I am trying to override the transactional behaviour for a service method (someService.updateSomething() in the example) annotated with @Transactional in Spring. To do so, from another class, I am using programmatic transactional code like the following:
@Service
public class MyServiceClass {

    private TransactionTemplate transactionTemplate;

    public MyServiceClass(PlatformTransactionManager transactionManager) {
        transactionTemplate = new TransactionTemplate(transactionManager);
    }

    @Transactional
    public void someMethod() {
        transactionTemplate.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
        transactionTemplate.execute(new TransactionCallbackWithoutResult() {
            protected void doInTransactionWithoutResult(TransactionStatus status) {
                try {
                    someService.updateSomething();
                } catch (Exception e) {
                    LOGGER.error("Error has occurred");
                }
            }
        });
    }
}
My problem is that someService.updateSomething() does not run in a new transaction and I don't understand why. So:
If I call a proxied service method with transactional behaviour like someService.updateSomething(), but in the call I create a new transaction like in the example, then when the code hits the proxied method, it will take the newly created transaction and not the transaction already running for someMethod(), right?
Thanks!

Spring Integration: reuse MessageProducer definition

I have an outbound gateway for SOAP calls (MarshallingWebServiceOutboundGateway) with an elaborate setup. I need to use that gateway definition from multiple flows.
The question spring-integration: MessageProducer may only be referenced once is somewhat similar, but this question is about the proper use of the Spring prototype bean scope for Spring Integration collaborators.
I have a separate config file which sets up the gateway and its dependencies:
@Bean
public MarshallingWebServiceOutboundGateway myServiceGateway() {
    Jaxb2Marshaller marshaller = new Jaxb2Marshaller();
    marshaller.setPackagesToScan("blah.*");
    MarshallingWebServiceOutboundGateway gateway = new MarshallingWebServiceOutboundGateway(
            serviceEndpoint, marshaller, messageFactory);
    gateway.setMessageSender(messageSender);
    gateway.setRequestCallback(messageCallback);
    return gateway;
}
This is how I initially tried to wire up the outbound gateway from two different flows in two different config files.
In one config file:
@Bean
public IntegrationFlow flow1() {
    MarshallingWebServiceOutboundGateway myServiceGateway = context.getBean("myServiceGateway", MarshallingWebServiceOutboundGateway.class);
    return IntegrationFlows
            .from(Http.inboundGateway("/res1")
                    .requestMapping(r -> r.methods(HttpMethod.GET)))
            .transform(soapRequestTransformer)
            .handle(myServiceGateway) // wrong: cannot be same bean
            .transform(widgetTransformer)
            .get();
}
In a separate config file:
@Bean
public IntegrationFlow flow2() {
    MarshallingWebServiceOutboundGateway myServiceGateway = context.getBean("myServiceGateway", MarshallingWebServiceOutboundGateway.class);
    return IntegrationFlows
            .from(Http.inboundGateway("/res2")
                    .requestMapping(r -> r.methods(HttpMethod.GET)))
            .transform(soapRequestTransformer)
            .handle(myServiceGateway) // wrong: cannot be same bean
            .transform(widgetTransformer)
            .handle(servicePojo)
            .get();
}
This is a problem because - as I understand it - myServiceGateway cannot be the same instance, since that instance has only one outbound channel and cannot belong to two different flows.
In the related question spring-integration: MessageProducer may only be referenced once, @artem-bilan advised not to create the outbound gateway in an @Bean method, but rather to use a plain method which creates a new instance for every call.
That works, but it is inconvenient in my case. I need to reuse the outbound gateway from several flows in different config files and I would have to copy the code to create the gateway into each config file. Also, the gateway dependencies inflate my Configuration file constructors, making Sonar bail.
Since the error message coming out of IntegrationFlowDefinition.checkReuse() says "A reply MessageProducer may only be referenced once (myServiceGateway) - use @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE) on @Bean definition", I wanted to give the prototype scope another try.
So I tried to make Spring Integration look up a prototype gateway from the context by name, hoping to get a different gateway instance in flow1 and flow2:
.handle(context.getBean("myServiceGateway",
MarshallingWebServiceOutboundGateway.class))
And I annotated the outbound gateway @Bean definition with
@Scope(value = ConfigurableBeanFactory.SCOPE_PROTOTYPE)
But I can see that the myServiceGateway() method is only invoked once, despite the prototype scope, and application startup still fails with the error message which advises to use the prototype scope - quite confusing, actually ;-)
Based on Mystery around Spring Integration and prototype scope I also tried:
@Scope(value = ConfigurableBeanFactory.SCOPE_PROTOTYPE, proxyMode = ScopedProxyMode.TARGET_CLASS)
The application starts, but the responses never reach the step after the gateway, the widgetTransformer. (Even stranger, it is exactly the widgetTransformer that is skipped: in flow1 the outcome is the untransformed gateway response, and in flow2 the untransformed messages hit the step after the widgetTransformer, i.e. the servicePojo.) Making a proxy out of a message producer does not seem to be a good idea.
I really want to get to the bottom of this. Is the exception message that asks for the prototype scope wrong, or am I just getting it wrong? How can I avoid repeating the bean definition for message producers if I need several such producers which are all set up the same way?
Using spring-integration 5.0.9.
I am not entirely sure why the @Scope is not working, but here is a workaround...
@SpringBootApplication
public class So52453934Application {

    public static void main(String[] args) {
        SpringApplication.run(So52453934Application.class, args);
    }

    @Autowired
    private HandlerConfig config;

    @Autowired
    private ApplicationContext context; // needed to look up the flow input channels below

    @Bean
    public IntegrationFlow flow1() {
        return f -> f.handle(this.config.myHandler())
                .handle(System.out::println);
    }

    @Bean
    public IntegrationFlow flow2() {
        return f -> f.handle(this.config.myHandler())
                .handle(System.out::println);
    }

    @Bean
    public ApplicationRunner runner() {
        return args -> {
            context.getBean("flow1.input", MessageChannel.class).send(new GenericMessage<>("foo"));
            context.getBean("flow2.input", MessageChannel.class).send(new GenericMessage<>("bar"));
        };
    }
}

@Configuration
class HandlerConfig {

    public AbstractReplyProducingMessageHandler myHandler() {
        return new AbstractReplyProducingMessageHandler() {

            @Override
            protected Object handleRequestMessage(Message<?> requestMessage) {
                return ((String) requestMessage.getPayload()).toUpperCase();
            }
        };
    }
}
i.e. do as @artem suggested, but inject the configuration bean and call its factory method.
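Applied to the gateway from the question, the workaround might look like this (a sketch; GatewayConfig is a hypothetical class name, and the gateway dependencies are assumed to be available as fields, as in the original configuration). Drop @Bean from the factory method so every call builds a fresh gateway, inject the configuration class into each flow config, and call the method from both flows:
@Configuration
public class GatewayConfig {

    // deliberately no @Bean here: each call creates a new gateway instance
    public MarshallingWebServiceOutboundGateway myServiceGateway() {
        Jaxb2Marshaller marshaller = new Jaxb2Marshaller();
        marshaller.setPackagesToScan("blah.*");
        MarshallingWebServiceOutboundGateway gateway = new MarshallingWebServiceOutboundGateway(
                serviceEndpoint, marshaller, messageFactory);
        gateway.setMessageSender(messageSender);
        gateway.setRequestCallback(messageCallback);
        return gateway;
    }
}
Each flow then uses .handle(gatewayConfig.myServiceGateway()) instead of looking the gateway up from the context, so flow1 and flow2 each get their own instance.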
