Save/Validate Entity - domain-driven-design

I'm a novice with domain driven design and learning to apply it in my current project. I hope some of you guys have already walked the path and can help me out.
I have a question with regard to saving UI changes back to an Entity (Order).
The scenario:
a. An approver opens the Order (Aggregate root) pending approval on the Web. Makes some changes and clicks the button "Approve".
b. The UI translates the Order changes to a DTO and posts it across to a Web service for processing.
c. The service pulls the Order from OrderRepository via say call to orderRep.GetByID(ApplicationNumber)
Question
1. How do I post the UI changes available in the OrderDTO to Order?
2. What are the different things I need to take care of while hydrating the Order? (We have to ensure that the domain object (Order) doesn't end up in an invalid state due to the changes.)

Each user operation should correspond to a different command method in the application service layer. Much of the time it will correspond to exactly one call on a domain object.
You probably don't have fine-grained enough methods on your Order domain object.
Approval should probably be exposed only as an Approve() method, not through a public setter. Throw an exception inside Approve() if the call would place the Order object in an invalid state.
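To make that concrete, here is a minimal sketch in C# (the OrderStatus values, ShippingNotes, ApproveOrderDto and IOrderRepository are invented names, not from the question): the application service translates the DTO into intention-revealing calls on the aggregate, and Approve() itself guards the invariant.
using System;

public enum OrderStatus { PendingApproval, Approved }

public class Order
{
    public OrderStatus Status { get; private set; } = OrderStatus.PendingApproval;
    public string ShippingNotes { get; private set; }

    // Behaviour, not a setter: the aggregate guards its own invariants.
    public void Approve()
    {
        if (Status != OrderStatus.PendingApproval)
            throw new InvalidOperationException("Only a pending order can be approved.");
        Status = OrderStatus.Approved;
    }

    public void UpdateShippingNotes(string notes)
    {
        ShippingNotes = notes;
    }
}

public class ApproveOrderDto
{
    public int ApplicationNumber { get; set; }
    public string ShippingNotes { get; set; }
}

public interface IOrderRepository
{
    Order GetByID(int applicationNumber);
    void Save(Order order);
}

// Application service: one command method per user operation.
public class OrderApprovalService
{
    private readonly IOrderRepository _orders;

    public OrderApprovalService(IOrderRepository orders) { _orders = orders; }

    public void Approve(ApproveOrderDto dto)
    {
        var order = _orders.GetByID(dto.ApplicationNumber);
        order.UpdateShippingNotes(dto.ShippingNotes); // apply the UI edits through intention-revealing methods
        order.Approve();                              // then perform the approval itself
        _orders.Save(order);
    }
}
The point is that the DTO never "hydrates" the Order directly; every change goes through a method that can refuse it.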

Related

How to propagate consequent changes inside the same Domain entity

In a Domain Driven Design implementation with CQRS and Domain events, when a Command's handler performs a "change" on a Domain entity, and this "change" also causes more "changes" inside the same entity, should all these "changes" be performed by the same Command handler, or is it better to create and send different Commands to perform each of the consequent changes on this entity?
For example, let's say my Domain entity is a Match which has Minute and Status properties, and an UpdateMatchMinuteCommand handler updates the Match's Minute value by invoking a class method of the entity. But based on some business logic, inside the class method, this Minute change also causes a change on the Status property of the Match. Shall both of these changes (Minute and Status) take place inside the same class method mentioned earlier?
Or is it more appropriate for the class method, after updating the Minute value, to publish a MatchMinuteChangedDomainEvent, which in turn when handled will send an UpdateMatchStatusCommand, which in turn when handled will invoke another class method of the Match entity to update the value of the Status property?
Which of the above approaches would be more suitable to propagate consequent state changes inside the same Domain entity?
Thanks in advance
I have the feeling that you're going to overcomplicate the problem. If it's part of the root entity, why should you split the changes into multiple commands?
Reusing Match, Minute and Status, there are (I think...) 3 different situations:
Examining the domain, you find out that Match contains Minute and Status.
From the analysis you've got the logic that triggers the changes in both properties of the entity, and by design (DDD) you've put it into Match. Hence, inside the function that updates Minute you also code the eventual change of Status (see the sketch at the end of this answer).
From the analysis of the domain you find that Match contains Minute, but not Status, which belongs to another root entity.
This is a case, with 2 entities involved, where using multiple commands is a plausible solution. You update one, generating an event; the event is handled by a handler, which builds another command, which in turn performs the other change. Maybe you do it in a single transaction, or you use a saga; whatever fits your needs.
The domain requires the Minute property in Match, but there are no domain requirements about Status.
Here the Status property exists, but it's not part of the domain: it could be an extra property used only in the UI, or something else, such as a helper field used to perform faster queries. Either way, Status cannot be inside the entity, because it's not present in the domain.
Given that 2 and 3 are not what you've described, only 1 remains. Hence, I think that both should be changed inside the same action (or function).
A final consideration. You're going to:
build a model of the domain based on your business rules, and
build it following the DDD indications
That means that:
the domain logic should (or must...) stay inside the domain object
nothing that belongs to the domain should be 'exported' to external services, except for things like sagas or complex interactions between root entities (not the case here)
so, as I've written at the beginning of the answer:
why would you split the domain logic, implementing it across the several objects involved in these changes?
It means that whoever reads the code months later, to understand when, how and why Status changes, has to go through the change of Minute, then to the handler, checking its code as well (which, I suppose, has some logic that, if put inside Match, would require fewer 'hops'), and, finally, the code that really changes Status.
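A minimal sketch of option 1, with invented status values and thresholds, just to show the shape: the Minute update and the consequent Status change live in the same method of the Match aggregate.
using System;

public enum MatchStatus { Scheduled, InProgress, Finished }

public class Match
{
    public int Minute { get; private set; }
    public MatchStatus Status { get; private set; }

    public void UpdateMinute(int minute)
    {
        if (minute < 0)
            throw new ArgumentOutOfRangeException(nameof(minute));

        Minute = minute;

        // Consequent change: the rule that ties Status to Minute stays next to
        // the state it protects, so a reader finds it in one place, without hops.
        if (minute >= 90 && Status == MatchStatus.InProgress)
            Status = MatchStatus.Finished;
        else if (minute > 0 && Status == MatchStatus.Scheduled)
            Status = MatchStatus.InProgress;
    }
}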

DDD: where should logic go that tests the existence of an entity?

I am in the process of refactoring an application and am trying to figure out where certain logic should fit. For example, during the registration process I have to check if a user exists based upon their email address. As this requires testing if the user exists in the database it seems as if this logic should not be tied to the model as its existence is dictated by it being in the database.
However, I will have a method on the repository responsible for fetching the user by email, etc. This handles the part about retrieval of the user if they exist. From a use case perspective, registration seems to be a use case scenario and accordingly it seems there should be a UserService (application service) with a register method that would call the repository method and perform if then logic to determine if the user entity returned was null or not.
Am I on the right track with this approach, in terms of DDD? Am I viewing this scenario the wrong way and if so, how should I revise my thinking about this?
This link was provided as a possible solution, Where to check user email does not already exits?. It does help but it does not seem to close the loop on the issue. The thing I seem to be missing from this article would be who would be responsible for calling the CreateUserService, an application service or a method on the aggregate root where the CreateUserService object would be injected into the method along with any other relevant parameters?
If the answer is the application service, that seems like you are losing some encapsulation by taking the domain service out of the domain layer. On the other hand, going the other way would mean having to inject the repository into the domain service. Which of those two options would be preferable and more in line with DDD?
I think the best fit for that behaviour is a Domain Service. A domain service can access persistence, so you can check for existence or uniqueness.
Check this blog entry for more info.
For example:
public class TransferManager
{
    private readonly IEventStore _store;
    private readonly IDomainServices _svc;
    private readonly IDomainQueries _query;
    private readonly ICommandResultMediator _result;

    public TransferManager(IEventStore store, IDomainServices svc, IDomainQueries query, ICommandResultMediator result)
    {
        _store = store;
        _svc = svc;
        _query = query;
        _result = result;
    }

    public void Execute(TransferMoney cmd)
    {
        // interacting with the Infrastructure
        var accFrom = _query.GetAccountNumber(cmd.AccountFrom);

        // Setup value objects
        var debit = new Debit(cmd.Amount, accFrom);

        // invoking Domain Services
        var balance = _svc.CalculateAccountBalance(accFrom);
        if (!_svc.CanAccountBeDebitted(balance, debit))
        {
            // return some error message using a mediator
            // this approach works well inside monoliths where everything happens in the same process
            _result.AddResult(cmd.Id, new CommandResult());
            return;
        }

        // using the Aggregate and getting the business state change expressed as an event
        var evnt = Transfer.Create(/* args */);

        // storing the event
        _store.Append(evnt);

        // publish event if you want
    }
}
from http://blog.sapiensworks.com/post/2016/08/19/DDD-Application-Services-Explained
The problem that you are facing is called Set based validation. There are a lot of articles describing the possible solutions. I will give here an extract from one of them (the context is CQRS but it can be applied to some degree to any DDD architecture):
1. Locking, Transactions and Database Constraints
Locking, transactions and database constraints are tried and tested tools for maintaining data integrity, but they come at a cost. Often the code/system is difficult to scale and can be complex to write and maintain. But they have the advantage of being well understood with plenty of examples to learn from. By implication, this approach is generally done using CRUD based operations. If you want to maintain the use of event sourcing then you can try a hybrid approach.
2. Hybrid Locking Field
You can adopt a locking field approach. Create a registry or lookup table in a standard database with a unique constraint. If you are unable to insert the row then you should abandon the command. Reserve the address before issuing the command. For these sort of operations, it is best to use a data store that isn’t eventually consistent and can guarantee the constraint (uniqueness in this case). Additional complexity is a clear downside of this approach, but less obvious is the problem of knowing when the operation is complete. Read side updates are often carried out in a different thread or process or even machine to the command and there could be many different operations happening.
3. Rely on the Eventually Consistent Read Model
To some this sounds like an oxymoron; however, it is a rather neat idea. Inconsistent things happen in systems all the time. Event sourcing allows you to handle these inconsistencies: rather than throwing an exception and losing someone's work all in the name of data consistency, simply record the event and fix it later.
As an aside, how do you know a consistent database is consistent? It keeps no record of the failed operations users have tried to carry out. If I try to update a row in a table that has been updated since I read from it, then the chances are I’m going to lose that data. This gives the DBA an illusion of data consistency, but try to explain that to the exasperated user!
Accepting these things happen, and allowing the business to recover, can bring real competitive advantage. First, you can make the deliberate assumption these issues won’t occur, allowing you to deliver the system quicker/cheaper. Only if they do occur and only if it is of business value do you add features to compensate for the problem.
4. Re-examine the Domain Model
Let’s take a simplistic example to illustrate how a change in perspective may be all you need to resolve the issue. Essentially we have a problem checking for uniqueness or cardinality across aggregate roots because consistency is only enforced with the aggregate. An example could be a goalkeeper in a football team. A goalkeeper is a player. You can only have 1 goalkeeper per team on the pitch at any one time. A data-driven approach may have an ‘IsGoalKeeper’ flag on the player. If the goalkeeper is sent off and an outfield player goes in the goal, then you would need to remove the goalkeeper flag from the goalkeeper and add it to one of the outfield players. You would need constraints in place to ensure that assistant managers didn’t accidentally assign a different player resulting in 2 goalkeepers. In this scenario, we could model the IsGoalKeeper property on the Team, OutFieldPlayers or Game aggregate. This way, maintaining the cardinality becomes trivial.
You seem to be on the right track; the only thing I didn't get is what your UserService.register does.
It should take all the values to register a user as input, validate them (using the repository to check the existence of the email) and, if the input is valid, store the new User.
Problems can arise when the validation involves complex queries. In that case maybe you need to create a secondary store with special indexes suited for queries that you can't do with your domain model, so you will have to manage two different stores that can be out of sync (a user exists in one but isn't replicated in the other one yet).
This kind of problem happens when you store your aggregates in something like a key-value store, where you can search just by the id of the aggregate; but if you are using something like a SQL database that permits searching by your entities' fields, you can do a lot with simple queries.
The only thing you need to take care of is to avoid mixing query logic and command logic. In your example the lookup you need to do is easy: it's just one field and the result is a boolean. Sometimes it can be harder, like time operations, or queries spanning multiple tables and aggregating results. In these cases it is better to make your (command) service use a (query) service that offers a simple API to do the calculation, like:
interface UserReportingService {
    ComplexResult aComplexQuery(AComplexInput input);
}
You can implement it with a class that uses your repositories, or with an implementation that executes the query directly on your database (SQL, or whatever).
The difference is that if you use the repositories you "think" in terms of your domain objects, whereas if you write the query directly you think in terms of your DB abstractions (tables/sets in the case of SQL, documents in the case of Mongo, etc.). One or the other depends on the query you need to do.
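As a rough illustration of that command/query split in C# (IUserQueries, IUserRepository and the User factory are assumed names, not a prescribed API): the command-side service delegates the uniqueness lookup and contains no query logic of its own.
using System;

public interface IUserQueries
{
    // Narrow, read-only API; implement it with repositories or with a direct query.
    bool EmailExists(string email);
}

public interface IUserRepository
{
    void Add(User user);
}

public class User
{
    public string Email { get; }
    private User(string email) { Email = email; }

    // password handling omitted in this sketch
    public static User Register(string email, string password) => new User(email);
}

// Command-side service: it uses the query service but formulates no queries itself.
public class UserService
{
    private readonly IUserQueries _queries;
    private readonly IUserRepository _users;

    public UserService(IUserQueries queries, IUserRepository users)
    {
        _queries = queries;
        _users = users;
    }

    public void Register(string email, string password)
    {
        if (_queries.EmailExists(email))
            throw new InvalidOperationException("A user with this email already exists.");

        _users.Add(User.Register(email, password));
    }
}
Swapping the IUserQueries implementation from a repository-based one to a direct SQL query would not touch the command service at all.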
It is fine to inject a repository into the domain.
A repository should have a simple interface, so that domain objects can use it as a simple collection or storage. The main idea of repositories is to hide data access behind a simple and clear interface.
I don't see any problem in calling domain services from a use case. The use case is supposed to be an orchestrator, and domain services are actions. It is fine (and even unavoidable) to trigger domain actions from a use case.
To decide, you should analyze where this restriction comes from.
Is it a business rule? Or maybe the user shouldn't be part of the model at all?
Usually "User" means authorization and authentication, i.e. behaviour that, to my mind, should be placed in the use case. I prefer to create a separate entity for the domain (e.g. a buyer) and relate it to the use case's user. So when a new user is registered it is possible to trigger the creation of a new buyer.

Check command for validity with data from other aggregate

I am currently working on my first bigger DDD application. For now it works pretty well, but we are stuck with an issue since the early days that I cannot stop thinking about:
In some of our aggregates we keep references to another aggregate root that is pretty essential for the whole application (based on their IDs, so there are no hard references - also the deletion is based on events/eventual consistency). Now when we create a new Entity "Entity1" we send a new CreateEntity1Command that contains the ID of the referenced aggregate root.
Now how can I check if this referenced ID is a valid one? Right now we check it by reading from the other aggregate (without modifying anything there) - but this approach somehow feels dirty. I would like to just "trust" the commands, because the ID cannot be entered manually but must be selected. The problem is that our application is a web application and it is not really safe to trust the user input you get there (even though it is not accessible to the public).
Did I overlook any possible solutions for this problem, or should I just ignore the feeling that there needs to be a better solution?
Verifying that another referenced Aggregate exists is not the responsibility of an Aggregate. It would break the Single responsibility principle. When the CreateEntity1Command arrives at the Aggregate, it should be considered that the other referenced Aggregate is in a valid state, i.e. it exists.
Being outside the Aggregate's boundary, this check is eventually consistent. This means that, even if it initially passes, it could become invalid after that (i.e. the referenced Aggregate is deleted, unpublished or in any other invalid domain state). You need to ensure that:
the command is rejected if the referenced Aggregate does not yet exist. You do this check in the Application service that is responsible for the Use case, before dispatching the command to the Aggregate, using a Domain service (see the sketch at the end of this answer).
if the referenced Aggregate enters an invalid state afterwards, the correct actions are taken. You should do this inside a Saga/Process manager. If CQRS is used, you subscribe to the relevant events; if not, you use a cron job. What the correct action is depends on your domain, but the main idea is that it should be modeled as a process.
So, long story short, the responsibility of an Aggregate does not extend beyond its consistency boundary.
P.S. Resist the temptation to inject services (Domain or not) into Aggregates (through constructor or method arguments).
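A minimal sketch of that application-service check (all type names here are assumptions): the Domain service answers the existence question before the command reaches the Aggregate, so the Aggregate itself never sees the check.
using System;

public class CreateEntity1Command
{
    public Guid ReferencedAggregateId { get; set; }
}

public class Entity1
{
    public Guid ReferencedAggregateId { get; }
    private Entity1(Guid referencedAggregateId) { ReferencedAggregateId = referencedAggregateId; }
    public static Entity1 Create(Guid referencedAggregateId) => new Entity1(referencedAggregateId);
}

public interface IReferencedAggregateChecker   // Domain service
{
    bool Exists(Guid aggregateId);
}

public interface IEntity1Repository
{
    void Add(Entity1 entity);
}

// Application service for the use case: rejects the command before dispatching it.
public class CreateEntity1Service
{
    private readonly IReferencedAggregateChecker _checker;
    private readonly IEntity1Repository _entities;

    public CreateEntity1Service(IReferencedAggregateChecker checker, IEntity1Repository entities)
    {
        _checker = checker;
        _entities = entities;
    }

    public void Handle(CreateEntity1Command command)
    {
        if (!_checker.Exists(command.ReferencedAggregateId))
            throw new InvalidOperationException("The referenced aggregate does not exist.");

        _entities.Add(Entity1.Create(command.ReferencedAggregateId));
    }
}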
Direct Aggregate-to-Aggregate interaction is an anti-pattern in DDD. An aggregate A should not directly send a command or query to an aggregate B. Aggregates are strict consistency boundaries.
I can think of 2 solutions to your problem: Let's say you have 2 aggregate roots (AR) - A and B. Each AR has got a bunch of command handlers where each command raises 1 or more events. Your command handler in A depends on some data in B.
You can subscribe to the events raised by B and maintain the state of B in A. You can subscribe only to the events which dictate the validity.
You can have a completely independent service (S) coordinating between A and B. Instead of directly sending your request to A, send your request to S, which would be responsible for querying B (to check the validity of the referenced ID) and then forwarding the request to A. This is sometimes called a Process Manager (PM).
For example, in your case, when you are creating a new entity "Entity1", send this request to a PM whose job is to check that the data in your request is valid and then route the request to the aggregate responsible for creating "Entity1". Send a new CreateEntity1Command that contains the ID of the referenced aggregate root to this PM, which uses the ID of the referenced AR to make sure it's valid; only if it is valid will it pass your request forward (a sketch follows below).
Useful Links: http://microservices.io/patterns/data/saga.html
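A rough sketch of such a process manager (the request, bus and query interfaces are invented for illustration, not a prescribed framework):
using System;

public class CreateEntity1Request
{
    public Guid ReferencedId { get; set; }
}

public class CreateEntity1Command
{
    public Guid ReferencedAggregateId { get; set; }
}

public interface IAggregateBQueries
{
    bool Exists(Guid aggregateId);
}

public interface ICommandBus
{
    void Send(object command);
}

// Process manager: validates against B, then routes the real command to A.
public class CreateEntity1ProcessManager
{
    private readonly IAggregateBQueries _bQueries;
    private readonly ICommandBus _bus;

    public CreateEntity1ProcessManager(IAggregateBQueries bQueries, ICommandBus bus)
    {
        _bQueries = bQueries;
        _bus = bus;
    }

    public void Handle(CreateEntity1Request request)
    {
        if (!_bQueries.Exists(request.ReferencedId))
            throw new InvalidOperationException("The referenced aggregate root is not valid.");

        // Only a validated request becomes the command handled by aggregate A.
        _bus.Send(new CreateEntity1Command { ReferencedAggregateId = request.ReferencedId });
    }
}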
Did I overlook any possible solutions for this problem
You did. "Domain Services" give you a possible loop hole to play in.
Aggregates are consistency boundaries; their behaviors are constrained by
The current state of the aggregate
The arguments that they are passed.
If an aggregate needs to interact with something outside of its boundary, then you pass to the aggregate root a domain service to encapsulate that interaction. The aggregate, at its own discretion, can invoke methods provided by the domain service to achieve work.
Often, the domain service is just a wrapper around an application or infrastructure service. For instance, if the aggregate needed to know if some external data were available, then you could pass in a domain service that would support that query, checking against some cache of data.
But - here's the trick: you need to stay aware of the fact that data from outside of the aggregate boundary is necessarily stale. There might be another process changing the data even as you are querying a stale copy.
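For instance, a sketch of passing such a domain service into an aggregate method (names invented): the aggregate decides at its own discretion, but the answer it gets comes from outside the boundary and may already be out of date.
using System;

public interface IReferenceAvailability   // Domain service wrapping the external check
{
    bool IsKnown(Guid referencedId);
}

public class Entity1
{
    public Guid? ReferencedId { get; private set; }

    public void SetReference(Guid referencedId, IReferenceAvailability availability)
    {
        // The data behind the service is necessarily stale; treat the answer
        // as a best-effort check, not an absolute guarantee.
        if (!availability.IsKnown(referencedId))
            throw new InvalidOperationException("Unknown reference.");

        ReferencedId = referencedId;
    }
}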
The problem is, that our application is a web-application and it is not really safe to trust the user input you get there (even though it is not accessibly by the public).
That's true, but it's not typically a domain problem. For instance, we might specify that an endpoint in our API requires a JSON representation of some command message -- but that doesn't mean that the domain model is responsible for taking a raw byte array and creating a DOM for it. The application layer would have that responsibility; the aggregate's responsibility is the domain concerns.
It can take some careful thinking to distinguish where the boundary between the different concerns is. Is this sequence of bytes a valid identifier for an aggregate? is clearly an application concern. Is the other aggregate in a state that permits some behavior? is clearly a domain concern. Does the aggregate exist at all...? could go either way.

Domain-Driven Design: How to design relational aggregates with a dependency

My domain is about Program Management. I have a Program (Aggregate Root) that must have a Customer (Aggregate Root). So I require a CustomerID when creating a new Program, as I have read that aggregates should only hold references to other aggregates by ID.
Here are my business rules:
Customers can become active and inactive over time.
If a Customer is inactivated for some reason, all programs associated with that Customer should also be inactivated.
A Program cannot be activated if its Customer is inactive.
Rules #1 & #2 I have implemented. It's #3 that is stumping me.
I can think of 3 solutions:
Program holds reference to the Customer aggregate.
Introduce a domain service that checks if the Customer is active and pass it to Program.Activate(CustomerActiveCheckService service).
Have the application service look up the Customer and pass it to Program.Activate(Customer customer).
Which is the best solution?
Update
I see both points of view made by #ConstaninGALBENU and #plalx, and I want to suggest a compromise. Can I create a CustomerStatusChecker service? The method would have the following signature: CustomerStatus CheckStatus(CustomerID id). I could then pass Program the service like so: Program.Activate(CustomerStatusChecker service).
Are there any problems with this design?
Which is the best solution?
There isn't a best solution; there are trade offs.
But one possible solution that is consistent with requirements #2 and #3 is that your existing model is wrong -- that Program entities are not isolated aggregates, but are part of the Customer entity, and therefore should be controlled by the same aggregate root.
Hints that this might be the case: that the life cycle of a Program fits within the life cycle of a Customer; that Programs don't normally migrate from one Customer to another, that there are limits to the count of active programs per customer.
Another possibility is that the requirements are "wrong". One way of exploring this is to review whether active/inactive is a decision made by the model, or if it is a decision made somewhere else and reported to the model. Another is to examine the cost to the business if this "rule" is violated.
If the model doesn't find out about the customer right away, or it is an inexpensive problem, then you probably have some room to detect the conflict and report it to a human, rather than trying to have the model do all of the work (See: Greg Young, Stop Over Engineering).
In these cases, having the main code path take a good guess, and implementing an alternative path that operators can use to fix the mistakes, is fine.
In choosing between solution #2 and #3 (I don't like #1 at all), I encourage keeping I/O actions out of the model. So unless you already have the latest version of the Customer in memory, I'm not fond of the domain service as a choice. Passing in a copy of the customer state to the domain model keeps the I/O concerns in the application component, where they belong (see Boundaries, by Gary Bernhardt, for more on this idea).
Solution 1: it breaks the rule about not holding references to other aggregate instances. That rule ensures that only one Aggregate is modified in a transaction. If you need to modify multiple aggregates in a single transaction then your design is definitely wrong.
Solution 2: I really don't like injecting services inside aggregates. My aggregates are pure functions with no touching of the outside world (I/O, repositories or the like).
Solution 3: is somewhat equivalent to 1, even if it is a temporary reference (Program could call command methods on Customer, thus modifying Customer in the same transaction boundary as Program).
My solution: make that check inside the Application service, before the call to Program.Activate(), or pass a customerStatus to Program.Activate() and let the Program aggregate decide whether it throws an exception or emits events (sketched below).
Update:
The idea is that you should pass only read-only/immutable data to the Program AR to ensure that it does not modify other ARs in its transactional boundary. Also, we should not make Program dependent on what it does not need, like the entire Customer AR.
Also, if the architecture is event-driven, then by listening to the right events emitted by Customer you can keep the Program AR in sync: you make it "non-activatable" if it is not already activated, or you deactivate it if it is already activated, using for example a Saga.
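A minimal sketch of that approach (the repository and query interfaces are assumptions): the application service does the read, and the aggregate only receives an immutable status value.
using System;

public enum CustomerStatus { Active, Inactive }

public class Program
{
    public bool IsActive { get; private set; }

    // The aggregate receives only an immutable value, not the Customer AR or a service.
    public void Activate(CustomerStatus customerStatus)
    {
        if (customerStatus == CustomerStatus.Inactive)
            throw new InvalidOperationException("Cannot activate a program for an inactive customer.");

        IsActive = true;
    }
}

public interface IProgramRepository
{
    Program Get(Guid programId);
    void Save(Program program);
}

public interface ICustomerStatusQuery
{
    CustomerStatus GetStatus(Guid customerId);
}

// Application service: the read happens here, outside the aggregate.
public class ActivateProgramService
{
    private readonly IProgramRepository _programs;
    private readonly ICustomerStatusQuery _customerStatus;

    public ActivateProgramService(IProgramRepository programs, ICustomerStatusQuery customerStatus)
    {
        _programs = programs;
        _customerStatus = customerStatus;
    }

    public void Activate(Guid programId, Guid customerId)
    {
        var program = _programs.Get(programId);
        program.Activate(_customerStatus.GetStatus(customerId));
        _programs.Save(program);
    }
}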

Do you apply events to the domain model immediately when there is still the possibility of useless events or undo?

Every example of event sourcing that I see is for the web. It seems to click especially well with the MVC architecture where views on the client side aren't running domain code and interactivity is limited. I'm not entirely sure how to extrapolate to a rich desktop application, where a user might be editing a list or performing some other long running task.
The domain model is persistence-agnostic and presentation-agnostic and is only able to be mutated by applying domain events to an aggregate root. My specific question is should the presentation code mutate the domain model while the user makes uncommitted changes?
If the presentation code does not mutate the domain model, how do you enforce domain logic? It would be nice to have instant domain validation and domain calculations to bubble up to the presentation model as the user edits. Otherwise you have to duplicate non-trivial logic in the view model.
If the presentation code does mutate the domain model, how do you implement undo? There's no domain undelete event since the concept of undo only exists in an uncommitted editing session, and I'd be loath to add an undo version of every event. Even worse, I need the ability to undo events out-of-order. Do you just remove the event and replay everything on each undo? (An undo also happens if a text field returns to its previous state, for example.)
If the presentation code does mutate the domain model, is it better to persist every event the user performs or just condense the user's activity to the simplest set of events possible? For a simple example, imagine changing a comments field over and over before saving. Would you really persist each intermediate CommentChangedEvent on the same field during the same editing session? Or for a more complicated example, the user will be changing parameters, running an optimization calculation, adjusting parameters, rerunning the calculation, etc. until this user is satisfied with the most recent result and commits the changes. I don't think anyone would consider all the intermediate events worth storing. How would you keep this condensed?
There is complicated collaborative domain logic, which made me think DDD/ES was the way to go. I need a picture of how rich client view models and domain models interact and I'm hoping for simplicity and elegance.
I don't see desktop DDD applications as much different from MVC apps, you can basically have the same layers except they're mostly not network separated.
CQRS/ES applications work best with a task-based UI where you issue commands that reflect the user's intent. But by task we don't mean each action the user can take on the screen; it has to have a meaning and purpose in the domain. As you rightly point out in 3., there's no need to model each micro-modification as a full-fledged DDD command and the associated event. It could pollute your event stream.
So you would basically have two levels :
UI level action
These can be managed entirely in the presentation layer. They stack up to eventually be mapped to a single command, but you can undo them individually quite easily. Nothing prevents you from modelling them as micro-events that encapsulate closures for do and undo, for instance. I've never seen "cherry-pickable" undos in any UI, nor do I really see the point, but this should be feasible and user-comprehensible as long as the actions are commutative (their effect does not depend on the order of execution).
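A sketch of what such presentation-layer micro-actions could look like (invented names, not a prescribed framework): each action carries its own do/undo closures and lives in an edit session.
using System;
using System.Collections.Generic;

// A UI-level micro-action carrying its own do/undo closures.
public class UiAction
{
    private readonly Action _do;
    private readonly Action _undo;

    public UiAction(Action doAction, Action undoAction)
    {
        _do = doAction;
        _undo = undoAction;
    }

    public void Do() => _do();
    public void Undo() => _undo();
}

// Lives entirely in the presentation layer; on confirmation the stacked actions
// are condensed into a single domain command, so these micro-events never reach
// the event stream.
public class EditSession
{
    private readonly Stack<UiAction> _done = new Stack<UiAction>();

    public void Apply(UiAction action)
    {
        action.Do();
        _done.Push(action);
    }

    public void Undo()
    {
        if (_done.Count > 0)
            _done.Pop().Undo();
    }
}
Only when the user confirms does the session get mapped to one domain command; nothing here touches the domain model.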
Domain level task
Coarser-grained activity represented by a command and a corresponding event. If you need to undo these, I would rather append a new rollback event to the event stream than try to remove an existing one ("don't change the past").
Reflecting domain invariants and calculations in the UI
This is where you really have to get the distinction between the two types of tasks right, because UI actions will typically not update anything on the screen apart from a few basic validations (required fields, string and number formats, etc.). Issuing commands, on the other hand, will result in a refreshed view of the model, but you may have to materialize the action with some confirmation button.
If your UIs are mostly about displaying calculated numbers and projections to the user, this can be a problem. You could put the calculations in a separate service called by the UI, then issue a modification command with all the updated calculated values when the user saves. Or, you could just submit a command with the one parameter you change and have the domain call the same calculation service. Both are actually closer to CRUD and will probably lead to an anemic domain model though IMO.
I've wound up doing something like this, though having the repository manage transactions.
Basically my repositories all implement
public interface IEntityRepository<TEntityType, TEventType>
{
    TEntityType ApplyEvents(IEnumerable<TEventType> events);
    Task Commit();
    Task Cancel();
}
So while ApplyEvents will update and return the entity, I also keep the start version internally, until Commit is called. If Cancel is called, I just swap them back, discarding the events.
A very nice feature of this is that I only push events to the web, DB, etc. once the transaction is complete. The implementation of the repository will have a dependency on the database or web services - but all the calling code needs to know is whether to Commit or Cancel.
EDIT
To do the cancel, you store the old entity, the updated entity, and the events since, in an in-memory structure. Something like:
public class EntityTransaction<TEntityType, TEventType>
{
    public TEntityType oldVersion { get; set; }
    public TEntityType newVersion { get; set; }
    public List<TEventType> events { get; set; }
}
Then your ApplyEvents would look like this, for a user:
private Dictionary<Guid, EntityTransaction<IUser, IUserEvent>> transactions;

public IUser ApplyEvents(IEnumerable<IUserEvent> events)
{
    // get the id somehow
    var id = GetUserID(events);
    if (transactions.ContainsKey(id) == false)
    {
        var user = GetByID(id);
        transactions.Add(id, new EntityTransaction<IUser, IUserEvent>
        {
            // note: oldVersion should hold a snapshot of the entity as loaded,
            // so that Cancel can restore it untouched
            oldVersion = user,
            newVersion = user,
            events = new List<IUserEvent>()
        });
    }

    var transaction = transactions[id];
    foreach (var ev in events)
    {
        transaction.newVersion.When(ev);
        transaction.events.Add(ev);
    }

    return transaction.newVersion;
}
Then in your cancel, you simply substitute the old version for the new if you're cancelling a transaction.
Make sense?
