How To Handle Spring Transactions In Two Different Application Servers - spring-transactions

So far I have good knowledge of Spring transactions and have successfully implemented them with one application server, where transactions work fine. But I am wondering how to use Spring transactions when there are two different application servers. As far as I understand, Spring transactions support only one application server with one or two data sources. Please advise how to handle this situation.
Thanks in advance.

It's almost the same: in the case of two application servers you need two DataSource objects, and thereby two session factories and two transaction managers. When performing a transaction you can always choose which transaction manager you want to use.
For example, two data sources:

@Autowired
@Qualifier("firstAppServerDBConnection")
DataSource oracleDataSource;

@Autowired
@Qualifier("secondAppServerDBConnection")
DataSource mySqlDataSource;

Two session factories:

@Autowired
@Qualifier("firstAppServeroracleSessionFactory")
SessionFactory oracleSessionFactory;

@Autowired
@Qualifier("secondAppServermySqlSessionFactory")
SessionFactory mySqlSessionFactory;

Two transaction managers: the same pattern applies.

Important methods that will help you:

LocalSessionFactoryBean.setDataSource(dataSource) -- to give each SessionFactory its own DataSource
HibernateTransactionManager.setSessionFactory(sessionFactory) -- to give each transaction manager its own SessionFactory

Hope this makes some sense.
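To make this concrete, below is a minimal configuration sketch assuming Hibernate 5 on the classpath and the two DataSource beans already defined elsewhere; the bean names match the qualifiers above, while the entity package names are illustrative only:

import javax.sql.DataSource;
import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.hibernate5.HibernateTransactionManager;
import org.springframework.orm.hibernate5.LocalSessionFactoryBean;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement
public class DualDatabaseConfig {

    // One SessionFactory per DataSource.
    @Bean("firstAppServeroracleSessionFactory")
    public LocalSessionFactoryBean oracleSessionFactory(
            @Qualifier("firstAppServerDBConnection") DataSource oracleDataSource) {
        LocalSessionFactoryBean factory = new LocalSessionFactoryBean();
        factory.setDataSource(oracleDataSource);
        factory.setPackagesToScan("com.example.oracle"); // hypothetical entity package
        return factory;
    }

    @Bean("secondAppServermySqlSessionFactory")
    public LocalSessionFactoryBean mySqlSessionFactory(
            @Qualifier("secondAppServerDBConnection") DataSource mySqlDataSource) {
        LocalSessionFactoryBean factory = new LocalSessionFactoryBean();
        factory.setDataSource(mySqlDataSource);
        factory.setPackagesToScan("com.example.mysql"); // hypothetical entity package
        return factory;
    }

    // One transaction manager per SessionFactory.
    @Bean("oracleTxManager")
    public HibernateTransactionManager oracleTxManager(
            @Qualifier("firstAppServeroracleSessionFactory") SessionFactory sessionFactory) {
        return new HibernateTransactionManager(sessionFactory);
    }

    @Bean("mySqlTxManager")
    public HibernateTransactionManager mySqlTxManager(
            @Qualifier("secondAppServermySqlSessionFactory") SessionFactory sessionFactory) {
        return new HibernateTransactionManager(sessionFactory);
    }
}

A service method can then pick the manager it needs, e.g. @Transactional(transactionManager = "oracleTxManager").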

Related

Considering Axon in greenfield project

I'll be starting on a greenfield project in a few months.
The project will contain lots of business logic, spread over several subdomains. Yes, we'll be using principles of Domain-Driven Design.
The tech stack will consist of Spring, Spring Boot & Hibernate.
I was looking after some Java libs to cover infrastructural things like:
domain event publication
event store
event deduplication
resequencers on consumer side
projections
reliable publishing
reliable delivery & redelivery
...
I came across the Axon Framework. I had already heard about it, but didn't know it in detail. So I read some blog posts, a bit of documentation, and watched some broadcasts on YouTube.
It seems very promising, and I'm considering using it because I don't want to reinvent the wheel over and over again on the infrastructural side.
So I'm looking for someone to answer and clarify my questions:
Command handling
Axon uses CommandHandlers with void methods. Is it possible to make them return a value (for instance a generated business id) or objects for notification purposes concerning the business operation? It's not an issue for me that the method will be I/O blocking because of this.
Local vs remote domain events publication
I want to have a clear separation of local vs remote domain events.
Local domain events should only be visible to and consumed by the local subdomain. Is it possible to configure event consumption sync and/or async?
My local domain events can be 'fat'. They are allowed to carry more data because they won't cross the domain boundaries.
Remote domain events will be 'thin', carrying only the minimum data necessary for remote domains. This type of event always needs to be handled asynchronously.
Is it possible to convert a local (fat) domain event to a remote (thin) domain event at the edge of a domain? By 'edge', I mean the infrastructural side.
This way, the domain model doesn't need to know the distinction between local & remote domain events.
CQRS synchronously
My application will consist of 1 (maybe 2) core domains and several subdomains. Some domains contain lots of business logic and will require CQRS.
Other domains will be more CRUD-style.
Is it possible to do CQRS synchronously? I want to start this way before adding technical complexities like async handling. Is this possible with Axon?
This also means that domain events will be stored in an event store without using event sourcing. Can Axon's event store be used without event sourcing?
Same for the projection stuff: I just want to project domain events to build my read model.
Modular monolith
We'll use a modular monolith.
Not very trendy these days with all the microservices stuff. Still, I'm convinced of having a monolith where each domain is completely separated (application code & DB schema), where operations are handled with eventual consistency and domain events contain the necessary data.
Later on, and if necessary, it will be easier to migrate to a microservices architecture.
Is Axon a framework that fits in a modular monolith kind of architecture? Is there anything to take into account?
Fully separated domain model (persistence agnostic)
The domain model will be completely separated from the data model.
We need to have a repository that reads a data model (using Hibernate) and uses a data mapper to create an aggregate when it needs to be loaded.
The other way around is also needed: an aggregate needs to be converted and saved into the data model (using the data mapper).
Additionally, the aggregates's domain events need to be stored into an event store and published to local or remote event handlers.
This has some consequences:
we need to have full control of the repository implementation, which communicates with one or more DAOs (Spring Data repositories) to take the necessary data out of Hibernate entities and construct an aggregate with it. An aggregate might be modeled in 2 or even 3 relational tables, after all.
we don't need any Hibernate annotation in the domain model
Is this approach possible with Axon? I only see examples using direct JPA (domain model maps 1 to 1 to entities) or event sourcing.
This approach is really a deal breaker for us, a separated domain model gives so much more possibilities than mapping it directly to data entities.
Below an example of what I want to achieve:
Aggregate (without JPA) in some domain model package:
public class ScoringResultAggregate {
    // members, constructor, operations omitted for brevity
}
Hibernate Entity in some infrastructure package:
@Entity
@Table(name = "SOME_TABLE_NAME")
public class ScoringResultEntity {
    // members and getters & setters; no domain logic
}
Repository interface that belongs to the domain model:
public interface ScoringResultRepository {
    void save(ScoringResultAggregate scoringResultAggregate);
    ScoringResultAggregate findByApplicationNumber(ApplicationNumber applicationNumber);
}
Adapter that implements repository interface; responsible for mapping aggregate from/to data (JPA) model:
class ScoringResultAdapterRepository implements ScoringResultRepository {

    private final ScoringResultJpaRepository scoringResultJpaRepository;

    ScoringResultAdapterRepository(ScoringResultJpaRepository scoringResultJpaRepository) {
        this.scoringResultJpaRepository = scoringResultJpaRepository;
    }

    public void save(ScoringResultAggregate scoringResultAggregate) {
        // converts the aggregate to a ScoringResultEntity and saves the state into the DB
    }

    public ScoringResultAggregate findByApplicationNumber(ApplicationNumber applicationNumber) {
        // loads a ScoringResultEntity from the DB and converts it into an aggregate
    }
}
Axon Server
Axon Server looks very promising. However, is it only useful for event sourcing?
Can it be used together with a SQL DB where aggregates are stored (state persistence) while domain events get persisted in Axon Server?
Lots of questions. Hopefully someone with Axon experience can help me out :-)
I feel Jasper is saying the right things, but I also think I can emphasize them a little more:
Command handling - Yes, you can have return values on command handlers. Just be mindful that you do not abuse this to return state to the user, as that would be mixing the Command Model (your Aggregate handling the command) with your Query Model.
Local vs remote domain events publication - Jasper states this clearly, and he's right. You're hitting the desire to form bounded contexts, for which Axon Server (Enterprise) has support. If you don't use Axon, you'll have to build this infrastructure yourself.
CQRS synchronously - Axon provides handles for asynchronous and synchronous messaging just fine. The main difference is that you'll block on the result of sending your messages. The CommandGateway, for example, has a sendAndWait and a send method, thus providing you with sync and async command dispatching, as the sketch below shows. Lastly, it's perfectly fine to use Axon Server as the event store without doing Event Sourcing. Event Sourcing is a choice when using Axon, not a requirement.
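As an illustration of that sync/async distinction, here is a minimal sketch against the Axon 4 CommandGateway; CreateOrderCommand and the String result type are invented for the example:

import java.util.concurrent.CompletableFuture;
import org.axonframework.commandhandling.gateway.CommandGateway;

public class OrderService {

    private final CommandGateway commandGateway;

    public OrderService(CommandGateway commandGateway) {
        this.commandGateway = commandGateway;
    }

    // Synchronous: blocks until the command handler has returned (or thrown).
    public String createOrderBlocking(CreateOrderCommand command) { // hypothetical command type
        return commandGateway.sendAndWait(command);
    }

    // Asynchronous: returns immediately with a future of the handler's result.
    public CompletableFuture<String> createOrderAsync(CreateOrderCommand command) {
        return commandGateway.send(command);
    }
}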
Modular monolith - AxonIQ as a company actively encourages this approach to building software. So yes, you can do this, and no, I cannot think of anything you should think of prior to doing this.
Fully separated domain model (persistence agnostic) - From your Query Model you have full control over how you want to map your data model to and from the actual model you use. The Aggregate in Axon terms should be regarded as your Command Model, for which you can choose the Event Sourced storage approach or the State Stored storage approach. The state-stored implementation given by Axon Framework works based on JPA, which would thus require you to set some annotations alongside the Axon annotations in your Command Model. If you need to segregate this, I could imagine you'd create your own variant of the Repository and AggregateFactory. Then again, the Event Sourcing approach would keep your domain model clear of persistence annotations altogether, so I'd go for that route, to be honest.
Axon Server - Yes, you can use Axon Server even if you go the state-stored approach for Aggregates. Know that Axon Server, next to being an event store, is a unified routing solution for commands, events and queries. If you move from a modular monolith to a (micro)services setup, having Axon Server in place to perform all the message routing will make your life very, very easy.
I hope I can answer some of them, but I'm also not really experienced in using Axon:
Return values from command handler - Yes, that's possible. We had an example where we return the generated aggregate id (I'm not 100% sure about this answer).
Local vs remote domain events publication - Yes, Axon Server ENTERPRISE (!) supports multi-context, which is built for this purpose. https://axoniq.io/product-overview/axon-server-enterprise
CQRS synchronously - The question is not totally clear, but it's not necessary to model your complete system with CQRS. You can use CQRS for some domains and another architecture for subdomains.
Use Sagas for any kind of "transaction"-like stuff. Rollbacks have to be written by the developer; the system can't do this for you.
Modular monolith - Shouldn't be a technical problem.
Fully separated domain model (persistence agnostic) - The question is not totally clear, but store only events in Axon Server. Aggregates are built up from a sequence of events. Don't use any other data for it. The aggregates are used to do the command handling, with state checks, and to apply new events.
If a system gets a command message, Axon Framework will look at the aggregate id and re-create the aggregate by replaying all the existing events for that aggregate. Then the method annotated with @CommandHandler for that command message type is called on the aggregate, with the state of the system. Don't do this by yourself.
On the other hand, create your own custom projections (view models) by listening to the events (@EventHandler) and store the data in your own format in any kind of data model/repository; a sketch of both sides follows after this answer. You can, for example, build a REST API on top of this to use the data.
Axon Server - Use it for what it's built for: use it as an event store and not for other purposes.
See for more info and why: https://www.youtube.com/watch?v=zUSWsJteRfw
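To make the command-handling and projection points above concrete, here is a minimal hypothetical sketch in Axon 4 style; every command, event and class name is invented:

import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventhandling.EventHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import org.axonframework.modelling.command.AggregateLifecycle;
import org.axonframework.spring.stereotype.Aggregate;

// Command model: Axon re-creates this aggregate from its past events
// before invoking the @CommandHandler (when event sourcing is used).
@Aggregate
class ScoringAggregate {

    @AggregateIdentifier
    private String scoringId;

    protected ScoringAggregate() {
        // Required by Axon for reconstruction.
    }

    @CommandHandler
    public ScoringAggregate(CreateScoringCommand command) { // hypothetical command
        // Validate state here, then apply an event instead of mutating directly.
        AggregateLifecycle.apply(new ScoringCreatedEvent(command.getScoringId()));
    }

    @EventSourcingHandler
    public void on(ScoringCreatedEvent event) { // hypothetical event
        this.scoringId = event.getScoringId();
    }
}

// Query model: a projection listens to events and stores them in whatever
// shape suits the read side, e.g. behind a REST API.
class ScoringProjection {

    @EventHandler
    public void on(ScoringCreatedEvent event) {
        // Write to your own read-model repository here.
    }
}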

No transaction behaviour needed in Spring Data

I am trying to figure out how to configure a method not to run within a transaction using Spring. I have read that Spring Data repositories by default activate transactional behaviour in their methods. I don't want this transaction, because I have many "save" calls to a repository and each of them is independent from the others. I think creating a transaction for each call to a repository method can slow down the code and the performance of the app. So:
Is this possible, or does every service or DAO method have to run within a transaction?
If it does, why?
If it is possible, how do I configure a method not to run within a transaction? Just by removing the Spring transactional annotation?
Thanks
Spring service beans by default are not transactional. You can add @Transactional at a class or a method level to force it to be transactional. Here are a few links explaining in detail how transactions in Spring work:
What is the difference between defining @Transactional on class vs method
Spring - @Transactional - What happens in background?
https://docs.spring.io/spring/docs/4.2.x/spring-framework-reference/html/transaction.html#tx-decl-explained
It is also discussed in the thread below:
Is Spring @Service transactional?
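For illustration, here is a minimal sketch of both placements; the class is invented, and Propagation.NOT_SUPPORTED is one way to opt a single method out of a class-level transaction:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

// Class level: every public method runs in a transaction by default.
@Service
@Transactional
public class AccountService {

    public void transfer(String from, String to, long amount) {
        // Inherits the class-level transactional behaviour.
    }

    // Method level overrides class level, here to run without a transaction
    // (an existing transaction is suspended for the duration of the call).
    @Transactional(propagation = Propagation.NOT_SUPPORTED)
    public void auditLookup(String accountId) {
        // Non-transactional work.
    }
}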

Transaction Synchronization in Spring Kafka

I want to synchronize a Kafka transaction with a repository transaction:

@Transactional
public void syncTransaction() {
    myRepository.save(someObject);
    kafkaTemplate.send(someEvent);
}
Since the merge of https://github.com/spring-projects/spring-kafka/issues/373, and according to the docs, this is possible. Nevertheless, I have problems understanding and implementing that feature.
Looking at the example in https://docs.spring.io/spring-kafka/reference/html/#transaction-synchronization I have to create a MessageListenerContainer to listen to my own events.
Do I still have to send my events using the KafkaTemplate?
Does the MessageListenerContainer prohibit the sending to the broker?
And if I understand correctly, the kafkaTemplate and the kafkaTransactionManager have to use the same producerFactory, on which I have to enable transactions by setting a transactionIdPrefix. And in my example I have to set the transaction manager of the messageListenerContainer to the DataSourceTransactionManager. Is that correct?
From my perspective it looks weird that I send an event via kafkaTemplate, listen to my own event, and forward the event using the kafkaTemplate again.
It would really help me if I could get an example of a simple synchronization of a Kafka transaction with a repository transaction, along with an explanation.
If the listener container is provisioned with a KafkaTransactionManager, the container will create a producer which will be used by any downstream KafkaTemplate, and the container will send the offsets to the transaction for you.
If the container has some other transaction manager, the container can't send the offsets, since it doesn't have access to the producer (or template).
Another solution is to annotate your method with @Transactional (with the datasource TM) and configure the container with a Kafka TM.
That way, your DB tx will commit just before the thread returns to the container, which will then send the offsets to the Kafka transaction and commit it.
See the framework test cases for examples.
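As a rough sketch of that second arrangement (not taken from the framework tests): the listener container factory is assumed to be configured with a KafkaTransactionManager, while the listener method opens the DataSource/JPA transaction; the repository, entity, topic and bean names are invented:

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

@Component
public class OrderEventListener {

    private final OrderRepository orderRepository; // hypothetical Spring Data repository
    private final KafkaTemplate<String, String> kafkaTemplate;

    public OrderEventListener(OrderRepository orderRepository,
                              KafkaTemplate<String, String> kafkaTemplate) {
        this.orderRepository = orderRepository;
        this.kafkaTemplate = kafkaTemplate;
    }

    // The container starts a Kafka transaction; this annotation adds the DB
    // transaction, which commits just before the thread returns to the container.
    @KafkaListener(topics = "orders", containerFactory = "kafkaListenerContainerFactory")
    @Transactional("transactionManager") // the DataSource/JPA transaction manager bean
    public void onMessage(String payload) {
        orderRepository.save(new OrderEntity(payload)); // hypothetical entity
        kafkaTemplate.send("orders-processed", payload); // joins the container's Kafka tx
    }
}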
@Eike Behrends, to have a DB + Kafka transaction, you can use ChainedTransactionManager and define it this way:

@Bean
public KafkaTransactionManager kafkaTransactionManager() {
    KafkaTransactionManager ktm = new KafkaTransactionManager(producerFactory());
    ktm.setTransactionSynchronization(AbstractPlatformTransactionManager.SYNCHRONIZATION_ON_ACTUAL_TRANSACTION);
    return ktm;
}

@Bean
@Primary
public JpaTransactionManager transactionManager(EntityManagerFactory em) {
    return new JpaTransactionManager(em);
}

@Bean(name = "chainedTransactionManager")
public ChainedTransactionManager chainedTransactionManager(JpaTransactionManager jpaTransactionManager,
                                                           KafkaTransactionManager kafkaTransactionManager) {
    return new ChainedTransactionManager(kafkaTransactionManager, jpaTransactionManager);
}
You need to annotate your transactional DB + Kafka methods with @Transactional("chainedTransactionManager"), as in the sketch below.
(You can see the issue on the spring-kafka project: https://github.com/spring-projects/spring-kafka/issues/433)
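A hypothetical service method using that chained manager could look like this (repository, entity and topic names invented):

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderPublisher {

    private final OrderRepository orderRepository; // hypothetical JPA repository
    private final KafkaTemplate<String, OrderEvent> kafkaTemplate;

    public OrderPublisher(OrderRepository orderRepository,
                          KafkaTemplate<String, OrderEvent> kafkaTemplate) {
        this.orderRepository = orderRepository;
        this.kafkaTemplate = kafkaTemplate;
    }

    // Both operations run inside the chained (Kafka first, then JPA) transaction.
    @Transactional("chainedTransactionManager")
    public void saveAndPublish(OrderEntity order, OrderEvent event) {
        orderRepository.save(order);
        kafkaTemplate.send("orders", event);
    }
}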
You say :
From my perspective it looks weird that I send an event via
kafkaTemplate, listen to my own event and forward the event using the
kafkaTemplate again.
Have you tried this? If so, can you provide an example, please?
To achieve your target you should use a different "eventually consistent" approach like CDC (Change Data Capture). There are no atomic transactions between Kafka writes and any other system (e.g. a database), a.k.a. XA transactions. It is a complete paradigm shift when you have distributed services (some call them microservices) that, in your case, probably communicate by producing/consuming to/from Kafka topics.
TL;DR: just use upsert / merge.
Accidentally saw this old topic, and after so many years people still struggle.
I just want to share the simplest and most native approach to deal with systems such as Kafka.
The real issue why people come here for an answer is the old approach of distributed transactions. And most want to synchronize non-transactional Kafka (it names this functionality "transactions", but these are actually "special") with some ACID database.
If your service is working within an idempotent environment, everything downstream should be idempotent too.
Just make sure your operations on the underlying storage are idempotent; the simplest approach is upsert / merge (depending on the storage).
P.S. CDC is a thing, but it requires much more labor and is unnecessary in most typical cases.
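As one concrete illustration of the upsert idea, here is a sketch using Spring's JdbcTemplate against PostgreSQL; the table and column names are invented:

import org.springframework.jdbc.core.JdbcTemplate;

public class PaymentWriter {

    private final JdbcTemplate jdbcTemplate;

    public PaymentWriter(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Idempotent write: replaying the same Kafka record overwrites the row
    // instead of inserting a duplicate, so redeliveries are harmless.
    public void upsert(String paymentId, long amountCents) {
        jdbcTemplate.update(
            "INSERT INTO payments (payment_id, amount_cents) VALUES (?, ?) "
            + "ON CONFLICT (payment_id) DO UPDATE SET amount_cents = EXCLUDED.amount_cents",
            paymentId, amountCents);
    }
}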
MORE:
If you want to dig into why Kafka "transactions" are special, here are good starting points (explained in terms of exactly-once semantics):
for newer versions: https://www.youtube.com/watch?v=j0l_zUhQaTc
for older: https://www.youtube.com/watch?v=zm5A7z95pdE
EDIT
Very interesting why this answer got downvotes... Just check this issue, its comments, and the related issues: https://github.com/spring-projects/spring-data-commons/issues/2232 - that's why one would not want to use ChainedTransactionManager for business-critical transactions (it can't act as a real 2PC by design).

Service Fabric - A web API in a cluster whose only job is to serve data from a reliable collection

I am new to Service Fabric and currently I am struggling to find out how to access data from a reliable collection (that is defined and initialized in a stateful service context) from a Web API (that is also living in the Service Fabric cluster, as a separate application). The problem is very basic and I am sure I am missing something very obvious. So apologies to the community if this sounds lame.
I have a large XML, portions of which I want to expose via Web API endpoints as results from various queries. I searched for similar questions but couldn't find a suitable answer.
Would be happy to see how an experienced SF developer would do such task.
EDIT: I posted the solution I came up with below.
After reading around and observing others' issues and Azure's samples, I have implemented a solution. Posting here the gotchas I had, hoping that it will help other devs who are new to Azure Service Fabric (disclaimer: I am still a newbie in Service Fabric, so comments and suggestions are highly appreciated):
First, pretty simple - I ended up with a stateful service and a Web API stateless service in an Azure Service Fabric application:
DataStoreService - a stateful service that reads the large XMLs and stores them into a reliable dictionary (this happens in the RunAsync method).
The Web API provides an /api/query endpoint that filters the collection of XElements stored in the reliable dictionary and serializes it back to the requester.
3 gotchas:
1) How to get your hands on the reliable dictionary data from the stateless service, i.e. how to get an instance of the stateful service from the stateless one:

ServiceUriBuilder builder = new ServiceUriBuilder("DataStoreService");
IDataStoreService DataStoreServiceClient = ServiceProxy.Create<IDataStoreService>(builder.ToUri(), new ServicePartitionKey("Your.Partition.Name"));

The code above already gives you the instance, i.e. you need to use a service proxy. For that purpose you need to:
define an interface that your stateful service will implement, and use it when invoking the Create method of ServiceProxy (IDataStoreService)
pass the correct partition key to the Create method. This article gives a very good intro to Azure Service Bus partitions
2) Registering replica listeners - in order to avoid errors saying:
The primary or stateless instance for the partition 'a67f7afa-3370-4e6f-ae7c-15188004bfa1' has invalid address, this means that right address from the replica/instance is not registered in the system
you need to register replica listeners as stated in this post:

public DataStoreService(StatefulServiceContext context)
    : base(context)
{
    configurationPackage = Context.CodePackageActivationContext.GetConfigurationPackageObject("Config");
}
3) Service Fabric namespacing and referencing services - I took the ServiceUriBuilder class from the service-fabric-dotnet-web-reference-app. Basically you need something that generates a Uri of the form:
new Uri("fabric:/" + this.ApplicationInstance + "/" + this.ServiceInstance);
where ServiceInstance is the name of the service you want to get an instance of (DataStoreService in this case).
You can use Web API with OWIN to set up a communication listener and expose data from your reliable collections. See Build a web front end for your app for info on how to set that up. Take a look at the WordCount sample in the getting-started sample apps, which feeds a bunch of random words into a stateful service and keeps a count of the words processed. Hope that helps.

Java EE (Spring) Multithreading to divide work

I have a simple scenario where in response to a user action:
JSP ===> Controller ===> Service ===> DAO
The DAO essentially runs a number of disparate queries. Is there a way for the service to call multiple DAOs or DAO methods concurrently, have each perform its own complex query, and then have the service aggregate the results from each DAO?
I can use JMS, but how do I return the results back to the originating service?
The same goes for Spring Batch.
It appears these mechanisms are good for performing work which does not need to be 'returned' to a higher layer.
Any pointers will be greatly appreciated.
Check out the @Async annotation combined with a return type of Future:
http://static.springsource.org/spring/docs/3.0.x/reference/scheduling.html
You will still need to wait for and collect all the results before returning them to the higher layer, as in the sketch below.
If this is not good enough, you might want to look at WebSocket push or long polling in the web tier, but that makes the application more complex.
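A rough sketch of that pattern, in the Spring 3-era style of the linked docs; the DAO, query names, and the @EnableAsync (or <task:annotation-driven/>) configuration are assumptions:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Future;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.AsyncResult;
import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;

@Repository
class ReportDao {

    // Runs on Spring's async executor; the caller gets a Future immediately.
    @Async
    public Future<List<String>> runComplexQuery(String queryName) {
        List<String> rows = Collections.singletonList(queryName + " result"); // placeholder for the real query
        return new AsyncResult<>(rows);
    }
}

@Service
class ReportService {

    private final ReportDao dao;

    ReportService(ReportDao dao) {
        this.dao = dao;
    }

    public List<String> aggregate() throws Exception {
        // Kick off both queries concurrently...
        Future<List<String>> first = dao.runComplexQuery("queryA");
        Future<List<String>> second = dao.runComplexQuery("queryB");
        // ...then block, collect, and merge before returning to the web tier.
        List<String> merged = new ArrayList<>(first.get());
        merged.addAll(second.get());
        return merged;
    }
}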
