Should I use event design when handling non-idempotent invocation? - domain-driven-design

I'm working on an air booking project.
The image below shows the domain model we have developed so far.
We define a domain service (AirBookService) which encapsulates booking, ticketing and other operations. Our suppliers provide Remote Procedure Call (RPC) APIs to handle these requests, so we implement the domain service behind an anti-corruption layer (we have multiple suppliers).
This solution works fine when dealing with idempotent RPC calls such as getting a price. However, there are risks when dealing with non-idempotent RPC calls.
For example:
public class TransactionalReservationHandlingServiceImpl .... {

    @Transactional
    @Override
    public void modifyTraveler(String resId, String tktId, AirTravler traveler) {
        AirReservation res = reservationRepository.findBy(resId);
        res.modify(tktId, traveler);
        airBookService.modify(res, traveler);
        reservationRepository.store(res);
    }
}
I place airBookService.modify() after res.modify(), so the RPC call can be avoided if some local domain logic fails. But what if the RPC call succeeds and the local transaction fails? We then have a disparity between the traveler in our application and the one in the supplier's application.
Is it worth handling RPC calls and local modifications in separate transactions?
My concerns are:
a) Doing so will surely introduce extra complexity, like messaging.
b) I don't have much experience in event handling.
c) The chance of failure is very low even if we keep the RPC call inside the transaction boundary; failures are mostly caused by concurrency problems, and contention on AirReservation is relatively low in the real world.
Below is my event attempt:
@Transactional
@Override
public void modifyTraveler(String resId, String tktId, AirTravler traveler) {
    AirReservation res = reservationRepository.findBy(resId);
    ModifyTravelerEvent event = airBookService.modify(res, traveler);
    handlingEventRepository.store(event);
    events.notifyTravelerModified(event); // using messaging
}

@Transactional
@Override
public void modifyTraveler(String eventSequence) {
    ModifyTravelerEvent event = handlingEventRepository.of(eventSequence);
    // the event carries the reservation id
    AirReservation res = reservationRepository.findBy(event.getReservationId());
    event.handle(res);
    reservationRepository.store(res);
    handlingEventRepository.store(event);
}
The advantage is that local modification is separated from the RPC calls.
But this introduces:
1. A multiple-resource management issue (datasource and messaging).
2. I have to create a lot of ad-hoc events for modifying travelers, demanding tickets and every other AirBookService operation.
I'm in a dilemma: not satisfied with the current design, but quite hesitant about the new event design.
Any idea is appreciated, thanks in advance.

In your first example you mix your local modification with your remote modification. You worry that if your local modification fails after your remote modification succeeds, you cannot roll back your remote modification anymore. So unmixing your two modifications is definitely the way to go.
The simplest way to do this would be to swap the airBookService.modify call and the reservationRepository.store call:
public class TransactionalReservationHandlingServiceImpl .... {

    @Transactional
    @Override
    public void modifyTraveler(String resId, String tktId, AirTravler traveler) {
        // local modification
        AirReservation res = reservationRepository.findBy(resId);
        res.modify(tktId, traveler);
        reservationRepository.store(res); // <- remember this should not actually store anything until after the commit

        // remote modification
        airBookService.modify(res, traveler);
    }
}
Since your local modification is transactional, it will only commit after a successful remote modification. Any kind of local problem that could have occurred would probably already have occurred. Of course, there is still a minuscule chance that committing the transaction itself fails; to be truly transactional you would therefore have to be able to roll back your remote modification, and since, I take it, you are not able to do so, true transactionality is actually impossible. The above construct is therefore the safest possible way, in terms of consistency, to do a local and a remote modification at the same time, since the chance that the commit of the local modification itself fails is negligible. I say this because even if you introduced messaging, there would still be a similar slight chance that the message itself is not committed after the remote modification.
The above construct does have one big issue, however: it would probably hamper performance pretty seriously (you don't want your transactions lingering too long). Messaging is therefore a very reasonable solution to this problem. It also has other advantages, like persistence, auditing and replaying of messages. Your messaging attempt is therefore quite legitimate in this respect. Your code, however, seriously breaks the single responsibility principle, since the messaging itself is mixed with the modification within the same method calls.
If you are concerned about too much boilerplate around messaging, you should definitely check out akka.io. Spring JMS and ActiveMQ are also a pretty powerful combination if you are not looking for an entire paradigm shift, but just for a decent messaging solution. Using one of these two technologies you can create a powerful framework for your local calls paired with remote calls that avoids a lot of boilerplate.
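To illustrate, here is a minimal sketch of how the messaging could be pulled out of the modification methods using Spring JMS. The queue name, the event accessor and the service interface below are assumptions based on your code, not a definitive implementation:

import org.springframework.jms.annotation.JmsListener;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Service;

// Publishes the event id after the local transaction has stored the event.
@Service
public class TravelerModificationGateway {

    private final JmsTemplate jmsTemplate;

    public TravelerModificationGateway(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    public void notifyTravelerModified(String eventSequence) {
        // Only the event id travels over the wire; the event itself is already persisted.
        jmsTemplate.convertAndSend("air.traveler.modified", eventSequence);
    }
}

// Consumes the event in its own transaction; failed deliveries are retried by the broker.
@Service
class TravelerModificationListener {

    private final TransactionalReservationHandlingService handlingService; // your service

    TravelerModificationListener(TransactionalReservationHandlingService handlingService) {
        this.handlingService = handlingService;
    }

    @JmsListener(destination = "air.traveler.modified")
    public void onTravelerModified(String eventSequence) {
        handlingService.modifyTraveler(eventSequence); // your second @Transactional method
    }
}

This way the modification methods stay focused on domain logic, while delivery, persistence and retry of the notification become the broker's job.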
I hope this helps. Good luck!

Related

Coinex WebSocket market depth channel sometimes does not send message to remove price level

I'm writing a Node.js application which attempts to re-create the Coinex order book for ETH-USDT using their WebSocket. My application subscribes to the market depth channel and keeps an ongoing, updated local copy of the order book stored in memory.
I have a working implementation, but I'm noticing that my local copy of the orderbook always seems to eventually fall out of sync with the exchange. I've run the application for up to 2 hours without seeing a single issue, but eventually and invariably, I end up with a "stale" price level in my local copy that was removed from the exchange at some point.
I'm fairly certain that the WS message to remove the level was never sent, because I've run multiple tests where I log every WS message sent during the test period, so I can probably rule out the obvious issue of there being a bug in my own system.
Is this just an issue on the Coinex side? Is there some other way I need to be expiring messages? How can I best detect and handle this issue, assuming it's a problem with the Coinex WebSocket?
Are you making use of the 'complete result' flag in Coinex's depth.update response (i.e. - params[0])? The server occasionally sends a complete copy of the order book (up to the count limit in your request), setting this flag to true. Completely resetting your local order book state when this flag is set should keep you in sync.
Another thing it could be is how you're handling depth updates with 0 volume for a certain price (i.e. - setting the value at that price to 0 vs removing the key from your map).
Here's a Java example of how I'm handling updates (the maps' sizes stay consistent with the count limit initially sent in the request):
@AllArgsConstructor
@Data
public class LimitOrderBook {

    private double last;
    private final SortedMap<Double, Double> asks;
    private final SortedMap<Double, Double> bids;
    private int time;

    public void handleUpdate(final LimitOrderBookUpdate update) {
        if (update.isFullUpdate()) {
            asks.clear();
            bids.clear();
        }
        last = update.getLast();
        update.getAskUpdates().forEach(p -> updateMap(asks, p));
        update.getBidUpdates().forEach(p -> updateMap(bids, p));
        time = update.getTime();
    }

    private void updateMap(final SortedMap<Double, Double> map, final Pair<Double, Double> pair) {
        if (pair.getValue() == 0) {
            map.remove(pair.getKey());
        } else {
            map.put(pair.getKey(), pair.getValue());
        }
    }
}
I looked up a few resources that could be useful for tackling your problem. But first things first: rebuilding the order book from the WebSocket feed alone isn't a trustworthy solution; there are better approaches to the task. Since you asked for suggestions on dislodging stale messages, I would encourage you to look at the demo application and implementation in coinex-exchange-api here.
IMO, using setInterval() will just be a makeshift solution. You need a better directive for that.
CryptoCurrency eXchange WebSockets is another, better approach to consuming real-time data. It is a JavaScript library for connecting to the realtime public APIs of all cryptocurrency exchanges. Install it using the command below:
npm install ccxws
For more detail, visit: blocktap.io.
I had the same problem with other exchanges.
Unfortunately we can't trust WebSocket APIs 100%, for several reasons (often specific to the exchange).
The best solution I found (and the one I'm still using) is to track changes through the WebSocket API to keep an almost-aligned book on my server, and in addition (under a setInterval) to periodically download the full book to resync my copy.
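For what it's worth, here is a minimal sketch of that resync loop in Java, reusing the LimitOrderBook class from the answer above; fetchFullBook() is a hypothetical placeholder for your exchange's REST depth endpoint:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class OrderBookResyncer {

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final LimitOrderBook book;

    public OrderBookResyncer(LimitOrderBook book) {
        this.book = book;
    }

    // Periodically replaces the incrementally maintained book with a full snapshot,
    // clearing out any stale price levels the WebSocket feed failed to remove.
    public void start(long periodSeconds) {
        scheduler.scheduleAtFixedRate(() -> {
            LimitOrderBookUpdate snapshot = fetchFullBook(); // hypothetical REST call
            synchronized (book) {
                book.handleUpdate(snapshot); // isFullUpdate() == true clears and reloads
            }
        }, periodSeconds, periodSeconds, TimeUnit.SECONDS);
    }

    private LimitOrderBookUpdate fetchFullBook() {
        // Placeholder: download the full depth snapshot and mark it as a full update.
        throw new UnsupportedOperationException("wire up the exchange's REST endpoint here");
    }
}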
Hope this helps.

CQRS/EventStore - how do you manage a large tree if command should not fail?

I read that commands in CQRS are designed not to fail and should be async in nature.
In my case, I have a tree (think windows explorer) where users have folders that represent locations for video content and each child is a video/media file. Multiple users can all be working on the same branch of the tree moving folders and files around (and uploading new files and creating new folders as well as deleting files/folders).
If I were ignoring the async nature of commands, I could let the first user make their change and raise an exception for the second if, say, the folder the user is moving a video to is no longer there. It would then be the responsibility of the second user to refresh part of his tree and reapply his changes.
How would I do this with CQRS if I need instant feedback when my change has not been allowed (i.e. I try to move a video file to another folder and another user has deleted the folder or moved it elsewhere)?
Your command is supposed to be valid when you send it to your domain. Therefore, before sending it, you have to validate it on your client to know whether the folder is still there or not. This allows you to tell the client exactly what is happening, with a clear error message.
It also greatly reduces the margin of error: the timeframe in which something can fail is reduced to the time to send the command over the network plus the time to execute the command on the server.
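As a rough sketch (in Java; the read model, command bus and command class are all hypothetical names, not any specific framework's API), the client-side check could look like this:

interface FolderReadModel { boolean folderExists(String folderId); }
interface CommandBus { void send(Object command); }

class MoveVideoCommand {
    final String videoId;
    final String targetFolderId;
    MoveVideoCommand(String videoId, String targetFolderId) {
        this.videoId = videoId;
        this.targetFolderId = targetFolderId;
    }
}

public class MoveVideoAction {

    private final FolderReadModel readModel;
    private final CommandBus commandBus;

    public MoveVideoAction(FolderReadModel readModel, CommandBus commandBus) {
        this.readModel = readModel;
        this.commandBus = commandBus;
    }

    public void moveVideo(String videoId, String targetFolderId) {
        // Validate against the read model first, so the user gets a clear,
        // immediate error message instead of a failed command later.
        if (!readModel.folderExists(targetFolderId)) {
            throw new IllegalStateException(
                "Target folder was deleted or moved; please refresh your tree.");
        }
        commandBus.send(new MoveVideoCommand(videoId, targetFolderId));
    }
}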
If this risk is really low, we may only receive a failed-command answer (e.g. an enum) from the domain. For the user it might end up as a generic exception message, and we could refresh his data to show him that things have changed and that he cannot do what he intended to. Such messages should not be a huge problem if they occur only 2-3 times a year, I expect.
If this risk is really high, or if validation is not possible from the client but must occur only on the domain side, then I have no answer to give you at the moment. I am myself learning CQRS, and I cannot say more.
Hope it helped.
[Edit]
I assumed command handling not to be async. If so, I try/catch any exception during execution of the command and can return a failure notification without telling the client exactly what it was. The projection to the various read models remains async.
public void Handle(ICommand command)
{
    try
    {
        CommandService.Execute(command);
        Bus.Return(ErrorCodes.None);
    }
    catch (Exception e)
    {
        Bus.Return(ErrorCodes.Fail);
    }
}
My CommandService delegates to the right executor to do the job:
public class TheGoodExecutor
{
    protected void Execute(IUOW context, MyCommand command)
    {
        var myDomainObject = context.GetById<DomainObjecType>(command.Id);
        myDomainObject.DoStuff(command.Data);
        // Accept all the work we just did.
        context.Accept();
    }
}
If the executor errors out, then Bus.Return(ErrorCodes.Fail) is sent, and this can be received by the client either synchronously or asynchronously.
If you wish to go fully async (that's what I am trying to do, or at least a direction I would like to explore), I would try subscribing to events the client might be interested in.
For validation, I think listening to events does not make a lot of sense in most cases. But in the second case I was speaking of, it might. In other cases, aside from validation, it might too...
Everything in italics is a personal tryout; I have not read anything about it, nor do I have anything working properly along those lines. So take it with big brackets... he he!!
[/Edit]

How to handle serialization when using a worker thread from Wicket?

In our wicket application I need to start a long-running operation. It will communicate with an external device and provide a result after some time (up to a few minutes).
Java-wise, the long-running operation is started by a method where I can provide a callback.
public interface LegacyThingy {
    void startLegacyWork(WorkFinished callback);
}

public interface WorkFinished {
    public void success(Whatever ...);
    // failure never happens
}
On my Wicket page I plan to add an Ajax button to invoke startLegacyWork(...), providing an appropriate callback. For the result I'd display a panel that polls for the result using an AbstractAjaxTimerBehavior.
What boggles my mind is the following problem:
To keep state Wicket serializes the component tree along with the data, thus the data needs to be wrapped in serializable models (or detachable models).
So to keep the "connection" between the result panel and the WorkFinished callback, I'd need some way to create a link between the "we serialize everything" world of Wicket and the "Hey, I'm a Java object and nobody manages my lifetime" world of the legacy interface.
Of course I could store ongoing operations in a kind of global map and use a Wicket detachable model that looks them up by id... but that feels dirty and I don't assume that's the correct way. (It opens up a whole can of worms regarding the lifetime of such things.)
Or am I looking at long-running operations from Wicket from a completely wrong angle?
I think the approach with the global map is good. Wicket also uses something similar internally - org.apache.wicket.protocol.http.StoredResponsesMap. This is a special map that keeps the generated responses for the REDIRECT_TO_BUFFER strategy. It has logic to keep entries for at most some pre-configured duration and can also have an upper limit on the number of entries.
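If you do go with the map, a minimal sketch could look like the following. The registry and its names are illustrative (only LoadableDetachableModel is actual Wicket API), and like StoredResponsesMap you would want to evict entries after a pre-configured duration:

import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import org.apache.wicket.model.LoadableDetachableModel;

// Application-scoped registry of ongoing legacy operations, keyed by a generated id.
public class OngoingWorkRegistry {

    private static final ConcurrentMap<String, Object> RESULTS = new ConcurrentHashMap<>();

    public static String register() {
        return UUID.randomUUID().toString();
    }

    // Invoked from the WorkFinished callback when the legacy operation completes.
    public static void complete(String id, Object result) {
        RESULTS.put(id, result);
    }

    public static Object resultOf(String id) {
        return RESULTS.get(id); // null until the callback has fired
    }
}

// Detachable model: only the id is serialized with the page, never the result object.
class WorkResultModel extends LoadableDetachableModel<Object> {

    private final String workId;

    WorkResultModel(String workId) {
        this.workId = workId;
    }

    @Override
    protected Object load() {
        return OngoingWorkRegistry.resultOf(workId);
    }
}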

When to be careful about multithreading in EJBs?

I understand that the app server takes care of the threading so the developer should only concentrate on the business logic...
but consider an example: a stateless EJB has a member of type CountManager.
@WebService
@Stateless
public class StatelessEJB {

    private CountManager countManager;
    ...

    public void incrementCount() { countManager.incrementCount(); }
    public int getCount() { return countManager.getCount(); }
}
And the CountManager
public class CountManager {

    public void incrementCount() {
        // read count from database
        // increase count
        // save the new count in the database table
    }

    public int getCount() {
        // returns the count value from the database
    }
}
The developer should think about multi-threading here. If you make CountManager an EJB as well, I guess the problem won't go away.
What would be the general guideline for a developer to watch out for?
Update:
Changed the code. Assume that the methods of the EJB are exposed as a web service, so we have no control over the order in which clients call them. The transaction attribute is the default. Does this code behave correctly under a multi-threaded scenario?
The fact that EJBs are thread-safe doesn't mean that different method invocations will give you consistent results.
EJB gives you the certainty that every method of your particular EJB instance will be executed by exactly one thread at a time. This doesn't save you from multiple users accessing different instances of your EJB and the danger of inconsistent results.
Your CountManager seems to be a regular Java class, which means you hold state in a stateless EJB. This is not good, and EJB thread-safety won't protect you from anything in that case: your object can be accessed through multiple EJB instances at the same time.
Between your client's first method invocation, StatelessEJB.incrementCount() (which starts a transaction - the default TransactionAttribute), and the second invocation, StatelessEJB.getCount() (which starts a new transaction), many things might happen and the value of the count could change.
If you changed CountManager to be an EJB, I don't think you'd be any safer. If it's an SLSB then it still can't have any state. If the state is not realized as an EJB field variable but as database-fetched data, then it's definitely better, but still - the transaction is not a real help for you, because your web service client executes these two methods separately, therefore landing in two different transactions.
The simple solution would be to:
use the database (no state in the SLSB), which can be synchronized with your EJB transaction,
execute both of these methods within one transaction (e.g. an incrementAndGet(-) method for the web service client), as sketched below.
Then you can be fairly sure that the results you get are consistent.
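A minimal sketch of what that combined method could look like with JPA (the Counter entity and its mapping are illustrative, not your actual schema):

import javax.ejb.Stateless;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.LockModeType;
import javax.persistence.PersistenceContext;

@Entity
class Counter {
    @Id
    private long id;
    private int value;

    int getValue() { return value; }
    void setValue(int value) { this.value = value; }
}

@Stateless
public class CounterService {

    @PersistenceContext
    private EntityManager em;

    // Read, increment and return within ONE container-managed transaction;
    // the pessimistic lock makes concurrent callers serialize on the row.
    public int incrementAndGet(long counterId) {
        Counter counter = em.find(Counter.class, counterId, LockModeType.PESSIMISTIC_WRITE);
        counter.setValue(counter.getValue() + 1);
        return counter.getValue(); // flushed and committed when the method returns
    }
}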
Notice that this is not really a problem of synchronization or multi-threading, but of transactional behavior.
The above code, if run inside an EJB, will take care of race conditions by delegating transaction support to the database. Depending on the isolation level and transaction attributes, the database can lock the underlying tables to ensure that the information remains consistent, even in the face of concurrent access and/or modification.

Using TransactionScope in an application service

In my current application, a payment gateway, I want to perform these steps:
1. Transfer funds from PayPal.
2. Save the payment record in the database.
3. Increase the user's funds (which are kept in our database) by the amount he transferred.
I am performing all these steps in the app layer, inside a transaction scope.
Below is the code:
public void DepositFundInAdvertiser(PaymentInfo paymentInfo, RegistrationID advertiserRegistrationID)
{
    using (TransactionScope scope = new TransactionScope())
    {
        PaymentResult paymentResult = paymentService.DepositFund(paymentInfo);
        Advertiser advertiser = advertiserRepository.Find(advertiserRegistrationID);
        TransactionNumber transactionNumber = paymentRepository.NextTransactionNumber();
        Payment payment = PaymentFactory.NewPayment(advertiser.Person, transactionNumber, paymentInfo, paymentResult);
        paymentRepository.Save(payment);

        // Increasing advertiser fund
        AdvertiserBalance newBalance = new AdvertiserBalance(advertiser.Balance.Amount + paymentInfo.PaymentTotal);
        advertiser.AddFund(newBalance);
        advertiserRepository.Save(advertiser);

        scope.Complete();
    }
}
Problem: My question is, can I use TransactionScope in the app layer like this, since all these operations should be atomic?
I would say it's a good compromise between complexity and robustness. You only have to keep in mind that by doing this you couple both repositories: you won't be able to change either one without considering the effect of the change on the other and on the transaction infrastructure. As long as both repositories use the same database instance, or an RDBMS that fully supports distributed transactions, everything is fine. Sort of.
As you probably know, the pure solution is async messaging, but it introduces a lot of complexity. The thing is, the complexity connected with async messaging tends to be flat: it doesn't grow as you add more repositories, more data stores and so on. It pays off in the long run. If you have only one such use case, I would go with the solution you proposed. As I said, it's a good compromise, and making software is about good compromises.
