Using TransactionScope in an application service - domain-driven-design

In my current payment gateway application, I want to perform these steps:
Transfer funds from PayPal.
Save the payment record in the database.
Increase the user's funds (which are held in our database) by the amount they transferred.
I am performing all of these steps in the application layer, inside a transaction scope. Below is the code:
public void DepositFundInAdvertiser(PaymentInfo paymentInfo, RegistrationID advertiserRegistrationID)
{
    using (TransactionScope scope = new TransactionScope())
    {
        PaymentResult paymentResult = paymentService.DepositFund(paymentInfo);
        Advertiser advertiser = advertiserRepository.Find(advertiserRegistrationID);
        TransactionNumber transactionNumber = paymentRepository.NextTransactionNumber();
        Payment payment = PaymentFactory.NewPayment(advertiser.Person, transactionNumber, paymentInfo, paymentResult);
        paymentRepository.Save(payment);
        AdvertiserBalance newBalance = new AdvertiserBalance(advertiser.Balance.Amount + paymentInfo.PaymentTotal); // Increasing advertiser funds
        advertiser.AddFund(newBalance);
        advertiserRepository.Save(advertiser);
        scope.Complete();
    }
}
Problem: My question is, can I use TransactionScope in the application layer like this, given that all of these operations should be atomic?

I would say it's a good compromise between complexity and robustness. You only have to keep in mind that by doing this you couple both repositories. You won't be able to change either one without considering the effect of that change on the other one and on the transaction infrastructure. As long as both repositories use the same database instance, or an RDBMS that fully supports distributed transactions, everything is fine. Sort of.
As you probably know, the pure solution is async messaging, but it introduces a lot of complexity. The thing is, the complexity connected with async messaging tends to be flat -- it doesn't grow as you add more repositories, more data stores, etc. It pays off in the long run. If you have just one such use case, I would go with the solution you proposed. As I said, it's a good compromise, and making software is about good compromises.
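To make the async-messaging alternative concrete, here is a minimal sketch of an outbox-style variant, reusing the domain types from the question; the outbox, the dispatcher-driven handler and the FundsDepositedEvent type are hypothetical additions, not part of the original code. The payment is recorded together with an outbox entry in one local transaction, and the advertiser balance is increased later by a handler running in its own transaction:
// First transaction: charge PayPal, record the payment, and store an outbox
// entry atomically. A background dispatcher publishes the event afterwards.
public void DepositFundInAdvertiser(PaymentInfo paymentInfo, RegistrationID advertiserRegistrationID)
{
    using (TransactionScope scope = new TransactionScope())
    {
        PaymentResult paymentResult = paymentService.DepositFund(paymentInfo);
        Advertiser advertiser = advertiserRepository.Find(advertiserRegistrationID);
        TransactionNumber transactionNumber = paymentRepository.NextTransactionNumber();
        Payment payment = PaymentFactory.NewPayment(advertiser.Person, transactionNumber, paymentInfo, paymentResult);
        paymentRepository.Save(payment);
        // Hypothetical outbox entry, saved in the same local transaction.
        outbox.Save(new FundsDepositedEvent(advertiserRegistrationID, paymentInfo.PaymentTotal));
        scope.Complete();
    }
}
// Second transaction, run when the dispatcher delivers the event:
// increase the advertiser's funds independently of the payment transaction.
public void Handle(FundsDepositedEvent fundsDeposited)
{
    using (TransactionScope scope = new TransactionScope())
    {
        Advertiser advertiser = advertiserRepository.Find(fundsDeposited.AdvertiserRegistrationID);
        AdvertiserBalance newBalance = new AdvertiserBalance(advertiser.Balance.Amount + fundsDeposited.Amount);
        advertiser.AddFund(newBalance);
        advertiserRepository.Save(advertiser);
        scope.Complete();
    }
}
The price of this decoupling is exactly the complexity mentioned above: you now need a dispatcher, retries and idempotent handlers, which is why a single TransactionScope is a reasonable compromise for one use case.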

Related

DDD: Can aggregates get other aggregates as parameters?

Assume that I have two aggregates, Vehicles and Drivers, and I have a rule that a vehicle cannot be assigned to a driver if the driver is on vacation.
So, my implementation is:
class Vehicle {
    public void assignDriver(Driver driver) {
        if (driver.isInVacation()) {
            throw new Exception();
        }
        // ....
    }
}
Is it ok to pass an aggregate to another one as a parameter? Am I doing anything wrong here?
I'd say your design is perfectly valid and reflects the Ubiquitous Language very well. There are several examples in the Implementing Domain-Driven Design book where an AR is passed as an argument to another AR.
e.g.
Forum#moderatePost: Post is not only provided to Forum, but modified by it.
Group#addUser: User provided, but translated to GroupMember.
If you really want to decouple, you could also do something like vehicle.assignDriver(driver.id(), driver.isInVacation()), or introduce some kind of intermediary VO that holds only the state from Driver needed to make an assignment decision.
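As a minimal illustration of that intermediary VO idea (the DriverSnapshot name and the C# record syntax here are purely hypothetical, chosen for brevity):
using System;

// Holds only the facts Vehicle needs to decide on an assignment.
public record DriverSnapshot(Guid DriverId, bool IsOnVacation);

public class Vehicle
{
    private Guid? assignedDriverId;

    public void AssignDriver(DriverSnapshot driver)
    {
        if (driver.IsOnVacation)
            throw new InvalidOperationException("Driver is on vacation.");
        assignedDriverId = driver.DriverId; // only the identity is kept, never the Driver aggregate itself
    }
}
The caller builds the snapshot from the Driver aggregate (or a read model) just before calling AssignDriver, so Vehicle never holds a reference to the other aggregate.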
However, note that any decision made using external data is considered stale. For instance, what happens if the driver goes on vacation right after being assigned to a vehicle?
In such cases you may want to use exception reports (e.g. list all vehicles with an unavailable driver), flag vehicles for driver re-assignment, etc. Eventual consistency could be achieved either through batch processing or messaging (event processing).
You could also seek to make the rule strongly consistent by inverting the relationship, where Driver keeps a set of vehicleIds it drives. Then you could use a DB unique constraint to ensure the same vehicle doesn't have more than one driver assigned. You could also violate the rule of modifying only one AR per transaction and model the two-way relationship to protect both invariants in the model.
However, I'd advise you to think of the real-world scenario here. I doubt you can prevent a driver from going away. The system must reflect the real world, which is probably the book of record for that scenario, meaning the best you can do with strong consistency is probably to unassign a driver from all its vehicles while he's away. In that case, is it really important that vehicles get unassigned immediately in the same TX, or would a delay be acceptable?
In general, an aggregate should keep its own boundaries (to avoid data-load issues and transaction-scoping issues, check this page for example), and therefore only reference another aggregate by identity, e.g. assignDriver(id guid).
That means you would have to query the driver prior to invoking assignDriver, in order to perform the validation check:
class MyAppService {
    public void execute() {
        // Get driver...
        if (driver.isInVacation()) {
            throw new Exception();
        }
        // Get vehicle...
        vehicle.assignDriver(driver.id);
    }
}
Suppose you're in a micro-services architecture:
you have a 'Driver Management' service and an 'Assignation Service', and you're not sharing code between them apart from technical libraries.
You'll naturally have two classes for 'Driver':
an aggregate in 'Driver Management', which holds the operations to manage the state of a driver,
and a value object in the 'Assignation Service', which only contains the information relevant for assignment.
This separation is harder to see/achieve when you're in a monolithic codebase.
I also agree with @plalx: there's more to enforcing the rule than a check on creation, for which you could implement one of the solutions he suggested.
I encourage you to think in events: what happens when
a driver has scheduled a vacation
he's back from vacation
he changes his vacation dates
Did you explore creating an Aggregate for Assignation?

Coinex WebSocket market depth channel sometimes does not send message to remove price level

I'm writing a Node.js application which attempts to re-create the Coinex orderbook for ETH-USDT using their Websocket. My application subscribes to the market depth channel and keeps an ongoing, updated local copy of the orderbook stored in memory.
I have a working implementation, but I'm noticing that my local copy of the orderbook always seems to eventually fall out of sync with the exchange. I've run the application for up to 2 hours without seeing a single issue, but eventually and invariably, I end up with a "stale" price level in my local copy that was removed from the exchange at some point.
I'm fairly certain that the WS message to remove the level was never sent because I've run multiple tests where I log every WS message sent during the test period. I can probably rule out the obvious issue of there being some bug with my system.
Is this just an issue on the Coinex side? Is there some other way I need to be expiring messages? How can I best detect and handle this issue assuming it's a problem with the Coinex Websocket?
Are you making use of the 'complete result' flag in Coinex's depth.update response (i.e. - params[0])? The server occasionally sends a complete copy of the order book (up to the count limit in your request), setting this flag to true. Completely resetting your local order book state when this flag is set should keep you in sync.
Another thing it could be is how you're handling depth updates with 0 volume for a certain price (i.e. - setting the value at that price to 0 vs removing the key from your map).
Here's a Java example of how I'm handling updates (the maps' sizes stay consistent with the count limit initially sent in the request):
@AllArgsConstructor
@Data
public class LimitOrderBook {
    private double last;
    private final SortedMap<Double, Double> asks;
    private final SortedMap<Double, Double> bids;
    private int time;

    public void handleUpdate(final LimitOrderBookUpdate update) {
        if (update.isFullUpdate()) {
            asks.clear();
            bids.clear();
        }
        last = update.getLast();
        update.getAskUpdates().forEach(p -> updateMap(asks, p));
        update.getBidUpdates().forEach(p -> updateMap(bids, p));
        time = update.getTime();
    }

    private void updateMap(final SortedMap<Double, Double> map, final Pair<Double, Double> pair) {
        if (pair.getValue() == 0) {
            map.remove(pair.getKey());
        } else {
            map.put(pair.getKey(), pair.getValue());
        }
    }
}
I looked up a few resources that could be useful for tackling your problem. But first things first: executing the WebSocket call each time isn't a trustworthy solution; there are better approaches to the same task. As you've asked for suggestions on expiring messages, I would encourage you to look at the demo application and implementation in coinex-exchange-api.
IMO, using setInterval() is just a makeshift solution; you need a better mechanism for this.
CryptoCurrency eXchange WebSockets (ccxws) is another, better approach to consuming real-time data. It is a JavaScript library for connecting to the realtime public APIs of many cryptocurrency exchanges. Install it with the command below:
npm install ccxws
For more detail, visit: blocktap.io.
I had the same problem with other exchanges.
Unfortunately we can't 100% trust WebSocket APIs, for several reasons (often specific to the exchange).
The best solution I found (and am still using) is to keep track of changes through the WebSocket API to maintain an almost-aligned book on my server, plus (under a setInterval) periodically download the full book to resync my local copy.
Hope this helps.
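For what it's worth, here is a rough sketch of that combination, written in C# purely for illustration; the fetchFullOrderBookAsync delegate stands in for whatever REST snapshot call your exchange exposes, and the wiring from the WebSocket handler to ApplyDelta is assumed:
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Snapshot returned by the (hypothetical) REST call.
public sealed record OrderBookSnapshot(
    IReadOnlyDictionary<decimal, decimal> Bids,
    IReadOnlyDictionary<decimal, decimal> Asks);

public sealed class ResyncingOrderBook
{
    private readonly object gate = new();
    private readonly Dictionary<decimal, decimal> bids = new();
    private readonly Dictionary<decimal, decimal> asks = new();

    // Called for every incremental WebSocket update.
    public void ApplyDelta(bool isBid, decimal price, decimal size)
    {
        lock (gate)
        {
            var side = isBid ? bids : asks;
            if (size == 0) side.Remove(price);   // drop emptied price level
            else side[price] = size;             // add or replace the level
        }
    }

    // Periodically replace local state with a full snapshot to heal missed removals.
    public async Task ResyncLoopAsync(Func<Task<OrderBookSnapshot>> fetchFullOrderBookAsync,
                                      TimeSpan interval, CancellationToken ct)
    {
        using var timer = new PeriodicTimer(interval);
        while (await timer.WaitForNextTickAsync(ct))
        {
            var snapshot = await fetchFullOrderBookAsync();
            lock (gate)
            {
                bids.Clear();
                asks.Clear();
                foreach (var (price, size) in snapshot.Bids) bids[price] = size;
                foreach (var (price, size) in snapshot.Asks) asks[price] = size;
            }
        }
    }
}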

DDD: About a design decision

I have to solve a domain problem and I have some doubts about which is the better solution. Let me present the problem:
I have Applications, and each Application has many Processes. An Application also has some ProcessSettings. I have some business rules to apply when I create a Process; for example, based on the application's process settings, I have to apply some rules to certain process properties.
I have modeled Application as an aggregate root, Process as another aggregate root, and ProcessSettings as a value object inside the Application aggregate.
I have a use case to create processes, and the logic is to create a valid Process instance and persist it with ProcessRepository. I think I have two options for applying the process settings:
In the use case, get the ProcessSettings from the Application aggregate by ApplicationId through a domain service, and pass them to the Process create method.
In the use case, create the Process and, through a domain service on the Application aggregate, pass a copy of the Process (as a value object) to apply the process settings.
Which approach do you believe is more correct, or would you implement it in another way?
Thanks in advance!
Our product owner told us that if the client paid for some settings at a
given moment and created a process, those settings remain valid for that
process as long as the client does not update it. If the client stops
paying for some settings, then when the client wants to update that process
our system will not allow the update, because the current settings will no
longer fit the process data
That makes the implementation much easier, given that process settings-based validation only has to occur in process creation/update scenarios. Furthermore, I would guess that race conditions would also be irrelevant to the business, such as if settings are changed at the same time a process gets created/updated.
In light of this, we can assume that ProcessSettings and Process can be in distinct consistency boundaries. In other words, both can be part of separate aggregate roots.
Furthermore, it's important to recognize that the settings-based validations are not Process invariants, meaning the Process shouldn't be responsible for enforcing these rules itself. Since these aren't invariants, you also shouldn't strive for an always-valid strategy; use a deferred validation strategy instead.
From that point there are many good ways of modeling this use case, which will all boil down to something like:
//Application layer service
void createProcess(processId, applicationId, data) {
    application = applicationRepository.applicationOfId(applicationId);
    process = application.createProcess(processId, data);
    processRepository.add(process);
}

//Application AR
Process createProcess(processId, data) {
    process = new Process(processId, this.id, data);
    this.processSettings.ensureRespectedBy(process);
    return process;
}
If ProcessSettings are part of the Application AR then it could make sense to put a factory method on Application for creating processes, given it holds the necessary state to perform the validation, as in the example above. That removes the need for introducing a dedicated domain service for the task, such as a stand-alone factory.
If ProcessSettings can be its own aggregate root you could always do the same, but introduce a lookup domain service for settings:
//Application AR
Process createProcess(processId, data, settingsLookupService) {
    process = new Process(processId, this.id, data);
    processSettings = settingsLookupService.findByApplicationId(this.id);
    processSettings.ensureRespectedBy(process);
    return process;
}
Some might say your aggregate is not pure anymore, however, given that it's performing indirect IO by calling the settingsLookupService. If you want to avoid such a dependency, you may introduce a domain service such as a ProcessDomainService to encapsulate the creation/update logic, or you may even decide that the lookup logic is not complex enough to warrant it and put it directly in the application layer.
//Application layer service
void createProcess(processId, applicationId, data) {
    application = applicationRepository.applicationOfId(applicationId);
    processSettings = processSettingsRepository.findByApplicationId(applicationId); // assuming a dedicated settings repository
    process = application.createProcess(processId, data, processSettings);
    processRepository.add(process);
}
There's no way for us to tell which approach is better in your specific scenario; sometimes there isn't even a perfect way, and various ways can be equally good. In my experience it's a good idea to keep aggregates pure though, as it makes unit testing easier (less mocking).

How to avoid concurrency issues when scaling writes horizontally?

Assume there is a worker service that receives messages from a queue, reads the product with the specified Id from a document database, applies some manipulation logic based on the message, and finally writes the updated product back to the database (a).
This work can be safely done in parallel when dealing with different products, so we can scale horizontally (b). However, if more than one service instance works on the same product, we might end up with concurrency issues, or concurrency exceptions from the database, in which case we should apply some retry logic (and still the retry might fail again and so on).
Question: How do we avoid this? Is there a way I can ensure two instances are not working on the same product?
Example/Use case: An online store has a great sale on productA, productB and productC that ends in an hour and hundreds of customers are buying. For each purchase, a message is enqueued (productId, numberOfItems, price). Goal: How can we run three instances of our worker service and make sure that all messages for productA will end up in instanceA, productB to instanceB and productC to instanceC (resulting in no concurrency issues)?
Notes: My service is written in C#, hosted on Azure as a Worker Role, I use Azure Queues for messaging, and I'm thinking of using Mongo for storage. Also, the entity IDs are GUIDs.
It's more about the technique/design, so if you use different tools to solve the problem I'm still interested.
Any solution attempting to divide the load across different items in the same collection (like orders) is doomed to fail. The reason is that if you have a high rate of transactions flowing, you'll have to start doing one of the following things:
let nodes talk to each other ("hey guys, is anyone working on this one?")
divide the ID generation into segments (node A creates IDs 1-1000, node B 1001-1999, etc.) and then just let each node deal with its own segment
dynamically divide the collection into segments (and let each node handle a segment)
So what's wrong with those approaches?
The first approach is simply replicating transactions in a database. Unless you can spend a large amount of time optimizing the strategy, it's better to rely on transactions.
The other two options will decrease performance, as you have to dynamically route messages based on IDs and also change the strategy at run-time to include newly inserted messages. It will fail eventually.
Solutions
Here are two solutions that you can also combine.
Retry automatically
Instead, you have an entry point somewhere that reads from the message queue.
In it you have something like this:
while (true)
{
    var message = queue.Read();
    Process(message);
}
What you could do instead to get very simple fault tolerance is to retry upon failure:
while (true)
{
    for (var i = 0; i < 3; i++)
    {
        try
        {
            var message = queue.Read();
            Process(message);
            break; // exit for loop
        }
        catch (Exception ex)
        {
            // log
            // no throw = for loop runs the next attempt
        }
    }
}
You could of course catch only DB exceptions (or rather transaction failures) and replay just those messages.
Micro services
I know, microservices is a buzzword. But in this case it's a great solution. Instead of having a monolithic core which processes all messages, divide the application into smaller parts. Or, in your case, just deactivate the processing of certain types of messages on certain nodes.
If you have five nodes running your application you can make sure that node A receives messages related to orders, node B receives messages related to shipping, etc.
By doing so you can still scale your application horizontally, you get no conflicts, and it requires little effort (a few more message queues and reconfiguring each node), as sketched below.
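A rough sketch of that routing, assuming Azure Storage Queues since that's what the question uses; the queue naming scheme, the category parameter and the processing delegate are my own illustrative choices:
using System;
using System.Text.Json;
using System.Threading.Tasks;
using Azure.Storage.Queues;

public static class PartitionedQueues
{
    // Producer: route each message to the queue owned by exactly one node,
    // e.g. "work-orders" handled only by node A, "work-shipping" by node B, ...
    public static async Task EnqueueAsync(string connectionString, string category, object payload)
    {
        var queue = new QueueClient(connectionString, $"work-{category}");
        await queue.CreateIfNotExistsAsync();
        await queue.SendMessageAsync(JsonSerializer.Serialize(payload));
    }

    // Consumer: a node only polls the queues for its own categories, so two
    // nodes never process messages for the same kind of entity concurrently.
    public static async Task PollAsync(string connectionString, string category, Func<string, Task> process)
    {
        var queue = new QueueClient(connectionString, $"work-{category}");
        while (true)
        {
            var messages = (await queue.ReceiveMessagesAsync(maxMessages: 16)).Value;
            foreach (var message in messages)
            {
                await process(message.MessageText);
                await queue.DeleteMessageAsync(message.MessageId, message.PopReceipt);
            }
            if (messages.Length == 0)
                await Task.Delay(TimeSpan.FromSeconds(1)); // back off when the queue is empty
        }
    }
}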
For this kind of thing I use blob leases. Basically, I create a blob with the ID of the entity in some known storage account. When worker 1 picks up the entity, it tries to acquire a lease on the blob (and create the blob itself, if it doesn't exist). If it is successful in doing both, then I allow the processing of the message to occur, and always release the lease afterwards.
If I am not successful, I dump the message back onto the queue.
I follow the approach originally described by Steve Marx here http://blog.smarx.com/posts/managing-concurrency-in-windows-azure-with-leases, although tweaked to use the new Storage libraries.
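For illustration, a minimal sketch of that lease dance using the current Azure.Storage.Blobs client (my assumption; the linked post predates it), with the blob naming and the 30-second lease duration as arbitrary choices:
using System;
using System.Threading.Tasks;
using Azure;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

public static class EntityLock
{
    // Try to process an entity exclusively by leasing a blob named after its ID.
    public static async Task<bool> TryProcessAsync(BlobContainerClient container, Guid entityId, Func<Task> process)
    {
        var blob = container.GetBlobClient($"locks/{entityId}");
        if (!await blob.ExistsAsync())
        {
            try { await blob.UploadAsync(BinaryData.FromString(string.Empty)); }
            catch (RequestFailedException) { /* another worker created it first; that's fine */ }
        }

        var lease = blob.GetBlobLeaseClient();
        try
        {
            await lease.AcquireAsync(TimeSpan.FromSeconds(30)); // lease duration must be 15-60s or infinite
        }
        catch (RequestFailedException)
        {
            return false; // someone else holds the lease: dump the message back onto the queue
        }

        try { await process(); }
        finally { await lease.ReleaseAsync(); }
        return true;
    }
}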
Edit after comments:
If you have a potentially high rate of messages all talking to the same entity (as your comment implies), I would redesign your approach somewhere: either the entity structure or the messaging structure.
For example: consider the CQRS design pattern and store the changes from processing each message independently, whereby the product entity becomes an aggregate of all changes done to it by various workers, sequentially re-applied and rehydrated into a single object.
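A small sketch of that journaling/rehydration idea (all type names here are illustrative, not from the original post): workers only append immutable change records, so there is no contention on a single mutable document, and the current product state is rebuilt by folding over the journal:
using System;
using System.Collections.Generic;
using System.Linq;

public sealed record ProductChange(Guid ProductId, int ItemsSold, decimal Revenue, DateTime At);

public sealed class ProductView
{
    public Guid ProductId { get; private set; }
    public int TotalItemsSold { get; private set; }
    public decimal TotalRevenue { get; private set; }

    // Rehydrate the single "product" object by replaying all appended changes in order.
    public static ProductView Rehydrate(Guid productId, IEnumerable<ProductChange> journal)
    {
        var view = new ProductView { ProductId = productId };
        foreach (var change in journal.Where(c => c.ProductId == productId).OrderBy(c => c.At))
        {
            view.TotalItemsSold += change.ItemsSold;
            view.TotalRevenue += change.Revenue;
        }
        return view;
    }
}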
If you want the database to always be up to date and always consistent with the already processed units, then you have several updates on the same mutable entity.
In order to comply with this you need to serialize the updates for the same entity. You can do this either by partitioning your data at the producers, by accumulating the events for an entity on the same queue, or by locking the entity in the worker using a distributed lock or a lock at the database level.
You could use an actor model (in the Java/Scala world, using Akka) that creates a message queue for each entity, or group of entities, and processes them serially.
UPDATED
You can try an Akka port to .NET (Akka.NET).
There is also a nice tutorial with samples about using Akka in Scala.
But for the general principles you should read more about the actor model. It has drawbacks nevertheless.
In the end it comes down to partitioning your data and being able to create a unique, specialized worker (that can be reused and/or restarted in case of failure) for a specific entity.
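To make the actor idea concrete, a minimal Akka.NET sketch (type names and the processing body are illustrative assumptions): all messages for the same productId are routed to the same child actor, which processes them one at a time:
using Akka.Actor;

public sealed record ProductMessage(string ProductId, int NumberOfItems, decimal Price);

// Child actor: owns a single product, so its updates are naturally serialized.
public class ProductActor : ReceiveActor
{
    public ProductActor()
    {
        Receive<ProductMessage>(msg =>
        {
            // read the product, apply the manipulation logic, write it back...
        });
    }
}

// Parent actor: routes each message to the child dedicated to that productId.
public class ProductShardActor : ReceiveActor
{
    public ProductShardActor()
    {
        Receive<ProductMessage>(msg =>
        {
            var child = Context.Child(msg.ProductId);
            if (child.IsNobody())
                child = Context.ActorOf(Props.Create(() => new ProductActor()), msg.ProductId);
            child.Tell(msg);
        });
    }
}

// Usage:
// var system = ActorSystem.Create("workers");
// var shard = system.ActorOf(Props.Create(() => new ProductShardActor()), "products");
// shard.Tell(new ProductMessage("productA", 2, 9.99m));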
I assume you have a means to safely access the product queue across all worker services. Given that, one simple way to avoid conflicts could be to use per-product queues next to the main queue:
// Queue[X] is the queue for product X
// QueueMain is the main queue
DoWork(ProductType X)
{
    if (Queue[X].empty())
    {
        product = QueueMain.pop()
        if (product.type != X)
        {
            Queue[product.type].push(product)
            return;
        }
    }
    else
    {
        product = Queue[X].pop()
    }
    // process product...
}
Access to the queues needs to be atomic.
You should use a session-enabled Service Bus queue for ordering and concurrency.
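To sketch what that looks like with the Azure.Messaging.ServiceBus client (my assumption; the queue name and handler body are illustrative): every message carries the product ID as its SessionId, so all messages for one product are delivered in order and locked to a single consumer at a time:
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public static class ProductQueue
{
    public static async Task SendAsync(ServiceBusClient client, string productId, string payload)
    {
        var sender = client.CreateSender("product-updates");
        await sender.SendMessageAsync(new ServiceBusMessage(payload) { SessionId = productId });
    }

    public static async Task StartWorkerAsync(ServiceBusClient client, Func<string, Task> process)
    {
        var processor = client.CreateSessionProcessor("product-updates",
            new ServiceBusSessionProcessorOptions { MaxConcurrentSessions = 8 });

        processor.ProcessMessageAsync += async args =>
        {
            await process(args.Message.Body.ToString()); // messages of one product arrive here serially
            await args.CompleteMessageAsync(args.Message);
        };
        processor.ProcessErrorAsync += args => Task.CompletedTask; // log in a real worker

        await processor.StartProcessingAsync();
    }
}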
1) Every high scale data solution that I can think of has something built in to handle precisely this sort of conflict. The details will depend on your final choice for data storage. In the case of a traditional relational database, this comes baked in without any add'l work on your part. Refer to your chosen technology's documentation for appropriate detail.
2) Understand your data model and usage patterns. Design your datastore appropriately. Don't design for scale that you won't have. Optimize for your most common usage patterns.
3) Challenge your assumptions. Do you actually have to mutate the same entity very frequently from multiple roles? Sometimes the answer is yes, but often you can simply create a new, similar entity to reflect the update. That is, take a journaling/logging approach instead of a single-entity approach. Ultimately, high volumes of updates on a single entity will never scale.

Should I use event design when handling non-idempotent invocation?

I'm working on an air booking project.
The image below shows the domain model we have developed so far.
We define a domain service (AirBookService) which encapsulates booking, ticketing and other operations. Our suppliers provide Remote-Procedure-Call APIs to handle these requests, so we implement the domain service by adding an anti-corruption layer (we have multiple suppliers).
This solution works fine when dealing with idempotent RPC calls such as getting a price. However, there are risks when dealing with non-idempotent RPC calls.
For example
public class TransactionalReservationHandlingServiceImpl .... {
    @Transactional
    @Override
    public void modifyTraveler(String resId, String tktId, AirTravler traveler) {
        AirReservation res = reservationRepository.findBy(resId);
        res.modify(tktId, traveler);
        airBookService.modify(res, traveler);
        reservationRepository.store(res);
    }
}
I place airBookService.modify() after res.modify(), so the RPC call can be avoided if some local domain logic is broken. But what if the RPC call succeeds and the local transaction fails? We end up with a disparity between the traveler in our application and the one in the supplier's application.
Is it worth handling RPC calls and local modifications in separate transactions?
My concerns are:
a) Doing so will surely introduce some extra complexity, like messaging.
b) I don't have much experience in event handling.
c) The chance of failure is very low even if we keep the RPC call inside the transaction boundary; failures are mostly caused by concurrency problems, and contention on an AirReservation is relatively low in the real world.
Below is my event attempt:
@Transactional
@Override
public void modifyTraveler(String resId, String tktId, AirTravler traveler) {
    AirReservation res = reservationRepository.findBy(resId);
    ModifyTravelerEvent event = airBookService.modify(res, traveler);
    handlingEventRepository.store(event);
    events.notifyTravelerModified(event); // using messaging
}

@Transactional
@Override
public void modifyTraveler(String eventSequence) {
    ModifyTravelerEvent event = handlingEventRepository.of(eventSequence);
    AirReservation res = reservationRepository.findBy(event.getReservationId()); // assuming the event carries the reservation id
    event.handle(res);
    reservationRepository.store(res);
    handlingEventRepository.store(event);
}
The advantage is that the local modification is separated from the RPC calls.
But this introduces:
1. A multiple-resource management issue (datasource and messaging).
2. I have to create a lot of ad-hoc events for modifying a traveler, demanding a ticket, and any other AirBookService operation.
I'm in a dilemma: I'm not satisfied with the current design, but quite hesitant about the new event design.
Any idea is appreciated, thanks in advance.
In your first example you mix your local modification with your remote modification. You worry that if your local modification fails after your remote modification succeeds, you cannot roll back your remote modification anymore. So unmixing your two modifications is definitely the way to go.
The simplest way to do this would be to swap the airBookService.modify call and the reservationRepository.store call:
public class TransactionalReservationHandlingServiceImpl .... {
    @Transactional
    @Override
    public void modifyTraveler(String resId, String tktId, AirTravler traveler) {
        // local modification
        AirReservation res = reservationRepository.findBy(resId);
        res.modify(tktId, traveler);
        reservationRepository.store(res); // <- remember this should not actually store anything until after the commit

        // remote modification
        airBookService.modify(res, traveler);
    }
}
Since your local modification is transactional, it will only commit after a successful remote modification. Any kind of local problem that could have occurred would probably already have occurred. Of course, there is still a minuscule chance that committing the transaction itself fails. Therefore, to be truly transactional, you would have to be able to roll back your remote modification. Since, I take it, you are not able to do so, true transactionality is actually impossible. The above construct is therefore the safest possible way, in terms of consistency, to do local and remote modifications at the same time, since the chance that the commit of the local modification itself fails is negligible. I say this because even if you were to introduce messaging, there would still be a similar slight chance that the message itself is not committed after the remote modification.
The above construct however does have one big issue: it would probably hamper performance pretty seriously (you don't want your transactions lingering too long). Messaging is therefore a very reasonable solution to this problem. It also has other advantages, like persistence, auditing and replaying of messages. Your messaging attempt therefore is quite legit in this respect. Your code however does seriously break the single responsibility rule since the messaging itself is mixed with the modification within the same method calls.
If you are concerned about too much boilerplate around messaging, you should definitely check out akka.io. Spring JMS and ActiveMQ are also a pretty powerful combination if you are not looking for an entire paradigm shift, but just for a decent messaging solution. Using either of the two technologies I've suggested, you can create a powerful framework for local calls paired with remote calls that lets you avoid a lot of boilerplate.
I hope this helps. Good luck!
