Doesn't CQRS/ES break the persistence agnosticism of DDD? - domain-driven-design

The Domain model in DDD should be persistence agnostic.
CQRS dictates that I fire events for everything I want to have in my read model (and, by the way, that I split my model into a write model and at least one read model).
ES dictates that I fire events for everything that changes state, and that my aggregate roots handle those events themselves.
That does not seem very persistence agnostic to me.
So how can DDD and CQRS/ES be combined without this persistence technology heavily impacting the domain model?
Is the read model part of the DDD domain model, or outside of it?
Are the CQRS/ES events the same as DDD domain events?
Edit:
What I took out of the answers is the following:
Yes, with an ORM the implementation of the domain model objects will differ from the implementation with ES.
The question is the wrong way around: first write the domain model objects, then decide how to persist them (more event-like => ES, more data-like => ORM, ...).
But I doubt you will ever be able to use ES (without big additions/changes to your domain objects) if you did not make this decision up front, and using an ORM without deciding it up front will also cause a lot of pain. :-)

Commands
Boiled down to its essence, CQRS means that you should split your reads from your writes.
Typically, a command arrives in the system and is handled by some sort of function that then returns zero, one, or many events resulting from that command:
handle : cmd:Command -> Event list
Now you have a list of events. All you have to do with them is to persist them somewhere. A function to do that could look like this:
persist : evt:Event -> unit
However, such a persist function is purely an infrastructure concern. The client will typically only see a function that takes a Command as input and returns nothing:
attempt : cmd:Command -> unit
The rest (handle, followed by persist) is handled asynchronously, so the client never sees those functions.
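To make the shape concrete, here is a minimal C# sketch of that pipeline (the Command and Event types and the wiring are hypothetical illustrations, not part of the original answer):
using System;
using System.Collections.Generic;

// Hypothetical marker types; only the shape of the pipeline matters here.
public abstract class Command { }
public abstract class Event { }

public class CommandPipeline
{
    private readonly Func<Command, IReadOnlyList<Event>> _handle; // pure decision logic
    private readonly Action<Event> _persist;                      // infrastructure concern

    public CommandPipeline(Func<Command, IReadOnlyList<Event>> handle, Action<Event> persist)
    {
        _handle = handle;
        _persist = persist;
    }

    // The only entry point a client sees: a Command goes in, nothing comes out.
    public void Attempt(Command cmd)
    {
        foreach (var evt in _handle(cmd))
            _persist(evt);
    }
}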
Queries
Given a list of events, you can replay them in order to aggregate them into the desired result. Such a function essentially looks something like this:
query : target:'a -> events:Event list -> Result
Given a list of events and a target to look for (e.g. an ID), such a function can fold the events into a result.
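In C#, such a fold might look like the sketch below; Result, its Apply method, and the AggregateId match are invented placeholders for whatever your domain needs:
// A hedged sketch: fold the events that concern one target into a result.
public static class EventQuery
{
    public static Result Query(Guid targetId, IEnumerable<Event> events)
    {
        var result = new Result();
        foreach (var evt in events)
        {
            if (evt.AggregateId == targetId) // match events belonging to the target
                result = result.Apply(evt);  // fold the event into the result
        }
        return result;
    }
}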
Persistence ignorance
Does this force you to use a particular type of persistence?
None of these functions are defined in terms of any particular persistence technology. You can implement such a system with
In-memory lists
Actors
Event Stores
Files
Blobs
Databases, even
etc.
Conceptually, it does force you to think about persistence in terms of events, but that's no different from the way ORMs force you to think about persistence in terms of entities and relationships.
The point here is that it's quite easy to decouple a CQRS+ES architecture from most implementation details. That's usually persistence-ignorant enough.
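For instance, the persist function could hide behind an interface like this sketch (my illustration, not from the answer above); any of the storage options in the list could sit behind it:
public interface IEventStore
{
    void Persist(Event evt);
    IReadOnlyList<Event> ReadAll();
}

// Trivial in-memory implementation; swap in files, blobs, or a database
// without touching handle, attempt, or query.
public class InMemoryEventStore : IEventStore
{
    private readonly List<Event> _events = new List<Event>();

    public void Persist(Event evt) { _events.Add(evt); }
    public IReadOnlyList<Event> ReadAll() { return _events; }
}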

A lot of the premises in your question present as very binary/black-and-white. I don't think DDD, CQRS, or Event Sourcing are that prescriptive--there are many possible interpretations and implementations.
That said, only one of your premises bothers me (emphasis mine):
ES dictates that I fire events for everything that changes state, and that my aggregate roots handle those events themselves.
Usually ARs emit events--they don't handle them.
In any case, CQRS and ES can be implemented to be completely persistence agnostic (and usually are). Events are stored as a stream, which can be stored in a relational database, a NoSQL database, the file system, in-memory, etc. The storing of events is usually implemented at the boundaries of the application (I think of this as infrastructure), and domain models have no knowledge of how their streams are stored.
Similarly, read models can be stored in any imaginable storage medium. You can have 10 different read models and projections, with each of them stored in a different database and different format. Projections just handle/read the event stream, and are otherwise completely decoupled from the domain.
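A projection can be as small as the following sketch (the event type and its fields are invented): it consumes the stream and maintains its own read model, knowing nothing about how the stream is stored:
// Hypothetical event and projection: the projection folds user events into
// its own read model and never learns how the stream was persisted.
public class UserRegistered
{
    public Guid UserId { get; set; }
    public string Email { get; set; }
}

public class UserListProjection
{
    private readonly Dictionary<Guid, string> _emailsByUserId = new Dictionary<Guid, string>();

    // Called for each event read from the stream.
    public void When(UserRegistered evt)
    {
        _emailsByUserId[evt.UserId] = evt.Email;
    }
}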
It does not get any more persistence agnostic than that.

Not sure how orthodox this is, but a current event-sourced entity model I have does something like this, which might illustrate the difference (C# example):
public interface IEventSourcedEntity<IEventTypeICanRespondTo> {
    void When(IEventTypeICanRespondTo ev);
}

public interface IUser {
    bool IsLoggedIn { get; }
}

public class User : IUser, IEventSourcedEntity<IUserEvent> {
    public bool IsLoggedIn { get; private set; }

    // 'event' is a reserved word in C#, so the parameter is named 'ev'.
    public virtual void When(IUserEvent ev) {
        if (ev is LoggedInEvent) {
            IsLoggedIn = true;
        }
    }
}
Very simple example - but you can see here that how (or even IF) the event is persisted is kept outside the domain object. You can easily do that through a repository. Likewise CQRS is respected, because how I read the value is separate from how I set it. So, for example, say a user has multiple devices, and I only want them considered logged in once more than two are?
public class MultiDeviceUser : IUser, IEventSourcedEntity<IUserEvent> {
    private const int MIN_NUMBER_OF_LOGINS = 2;
    private readonly List<LoggedInEvent> _logInEvents = new List<LoggedInEvent>();

    public bool IsLoggedIn {
        get { return _logInEvents.Count > MIN_NUMBER_OF_LOGINS; }
    }

    public void When(IUserEvent ev) {
        if (ev is LoggedInEvent) {
            _logInEvents.Add((LoggedInEvent)ev);
        }
    }
}
To the calling code, though, your actions are the same.
var ev = new LoggedInEvent();
user.When(ev);
if(user.IsLoggedIn) . . . .

I think you can still decouple your domain from the persistence mechanism by using a satellite POCO. Then you can implement your specific persistence mechanism around that POCO, and let your domain use it as a snapshot/memento/state.
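A minimal sketch of that idea (all names invented): the entity keeps the behaviour, the POCO carries the state, and only the POCO ever crosses into the persistence layer:
// The satellite POCO: a plain state/snapshot object that persistence understands.
public class OrderState
{
    public Guid Id { get; set; }
    public decimal Total { get; set; }
}

// The domain entity wraps the POCO and keeps the behaviour.
public class Order
{
    private readonly OrderState _state;

    public Order(OrderState state) { _state = state; }   // rehydrate from a snapshot

    public OrderState TakeSnapshot() { return _state; }  // memento for persistence

    public void AddAmount(decimal amount) { _state.Total += amount; }
}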

Related

What is the purpose of child entity in Aggregate root?

[ Follow up from this question & comments: Should entity have methods and if so how to prevent them from being called outside aggregate ]
As the title says: I am not clear on the actual/precise purpose of an entity as a child in an aggregate.
According to what I've read in many places, these are the properties of an entity that is a child of an aggregate:
It has identity local to aggregate
It cannot be accessed directly but through aggregate root only
It should have methods
It should not be exposed from aggregate
In my mind, that translates to several problems:
Entity should be private to aggregate
We need a read only copy Value-Object to expose information from an entity (at least for a repository to be able to read it in order to save to db, for example)
Methods that we have on entity are duplicated on Aggregate (or, vice versa, methods we have to have on Aggregate that handle entity are duplicated on entity)
So, why do we have an entity at all instead of Value Objects only? It seems much more convenient to have only value objects, all methods on the aggregate, and to expose value objects (which we already do when copying entity info).
P.S.
I would like to focus on a child entity in an aggregate, not collections of entities.
[UPDATE in response to Constantin Galbenu's answer & comments]
So, effectively, you would have something like this?
public class Aggregate {
    ...
    private SomeNestedEntity _someNestedEntity;

    public SomeNestedEntityImmutableState EntityState {
        get { return this._someNestedEntity.getState(); }
    }

    public void ChangeSomethingOnNestedEntity(args) {
        this._someNestedEntity.someCommandMethod(args);
    }
}
You are thinking about data. Stop that. :) Entities and value objects are not data. They are objects you can use to model your problem domain. Entities and Value Objects are simply a classification of things that naturally arise when you model a problem.
Entity should be private to aggregate
Yes. Furthermore all state in an object should be private and inaccessible from the outside.
We need a read only copy Value-Object to expose information from an entity (at least for a repository to be able to read it in order to save to db, for example)
No. We don't expose information that is already available. If the information is already available, that means somebody is already responsible for it. So ask that object to do things for you; you don't need the data! This is essentially what the Law of Demeter tells us.
"Repositories" as often implemented do need access to the data, you're right. They are a bad pattern. They are often coupled with ORM, which is even worse in this context, because you lose all control over your data.
Methods that we have on entity are duplicated on Aggregate (or, vice versa, methods we have to have on Aggregate that handle entity are duplicated on entity)
The trick is, you don't have to. Every object (class) you create is there for a reason: as described previously, to create an additional abstraction and model a part of the domain. If you do that, an "aggregate" object that exists at a higher level of abstraction will never want to offer the same methods as the objects below it. That would mean that there is no abstraction whatsoever.
This use-case only arises when creating data-oriented objects that do little more than hold data. Obviously you would wonder how you could do anything with these if you can't get the data out. That is, however, a good indicator that your design is not yet complete.
Entity should be private to aggregate
Yes. And I do not think it is a problem. Continue reading to understand why.
We need a read only copy Value-Object to expose information from an entity (at least for a repository to be able to read it in order to save to db, for example)
No. Make your aggregates return, from every method, the data that needs to be persisted and/or raised in an event.
A raw example. The real world would need a more fine-grained response, and maybe the performMove function needs to use the output of game.performMove to build proper structures for persistence and the eventPublisher:
public void performMove(String gameId, String playerId, Move move) {
    Game game = this.gameRepository.load(gameId); // Game is the AR
    List<Event> events = game.performMove(playerId, move); // do something
    // events carry the IDs of the entities involved, so the persistence layer
    // can apply each event and save the changes using those IDs and the
    // changed data that comes in the event too
    persistence.apply(events);
    // notify the rest of the system that something happened
    this.eventPublisher.publish(events);
}
Do the same with inner entities. Let the entity return the data that changed because of its method call, including its ID; capture this data in the AR and build the proper output for persistence and the eventPublisher. This way you do not even need to expose a public read-only ID property of the entity to the AR, nor does the AR expose its internal data to the application service. This is the way to get rid of getter/setter bag objects.
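For example, an inner entity might look like this sketch (invented names and a C# rendering, not from the original answer):
// The inner entity returns the change, including its own ID, instead of exposing getters.
public class BoardSquare
{
    private readonly Guid _id;
    private string _occupant;

    public BoardSquare(Guid id) { _id = id; }

    public SquareOccupied Occupy(string playerId)
    {
        _occupant = playerId;
        // The returned event carries the ID and changed data, so neither the AR
        // nor the persistence layer ever needs to read this entity's fields.
        return new SquareOccupied(_id, playerId);
    }
}

public class SquareOccupied
{
    public Guid SquareId { get; private set; }
    public string PlayerId { get; private set; }

    public SquareOccupied(Guid squareId, string playerId)
    {
        SquareId = squareId;
        PlayerId = playerId;
    }
}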
Methods that we have on entity are duplicated on Aggregate (or, vice versa, methods we have to have on Aggregate that handle entity are duplicated on entity)
Sometimes the business rules to check and apply belong exclusively to one entity and its internal state, and the AR just acts as a gateway. That is OK, but if you find this pattern occurring too much then it is a sign of wrong AR design. Maybe the inner entity should be the AR instead, maybe you need to split the AR into several ARs (one of them being the old inner entity), etc. Do not be afraid of having classes that have just one or two methods.
In response to dee zg's comments:
What does persistence.apply(events) precisely do? Does it save the whole aggregate or entities only?
Neither. Aggregates and entities are domain concepts, not persistence concepts; you can have a document store, a column store, a relational database, etc. that does not need to match your domain concepts one to one. You do not read aggregates and entities from persistence; you build aggregates and entities in memory with data read from persistence. The aggregate itself does not need to be persisted; that is just a possible implementation detail. Remember that the aggregate is just a construct to organize business rules; it is not meant to be a representation of state.
Your events have context (user intent) and the data that has changed (along with the IDs needed to identify things in persistence), so it is incredibly easy to write an apply function in the persistence layer that knows what to execute in order to apply the event and persist the changes (e.g. which SQL statement, in the case of a relational DB).
Could you please provide an example of when & why it is better (or even inevitable?) to use a child entity instead of a separate AR referenced by its ID as a value object?
Why do you design and model a class with state and behaviour?
To abstract, encapsulate, reuse, etc. Basic SOLID design. If the entity has everything needed to ensure the domain rules and invariants for an operation, then the entity is the AR for that operation. If you need extra domain rule checks that cannot be done by the entity (i.e. the entity does not have enough inner state to accomplish the check, or the check does not naturally fit the entity and what it represents), then you have to redesign; sometimes that means modelling an aggregate that does the extra domain rule checks and delegates the other checks to the inner entity, sometimes it means changing the entity to include the new things. It is too domain-context dependent, so I cannot say there is a fixed redesign strategy.
Keep in mind that you do not model aggregates and entities in your code. You just model classes with behaviour to check domain rules, plus the state needed to do those checks and to respond with the changes. These classes can act as aggregates or entities for different operations. The terms are used just to help communicate and understand the role of the class in each operation context. Of course, you can be in the situation where the operation does not fit into an entity, and you could model an aggregate with a V.O. persistence ID, and that is OK (sadly, in DDD, without knowing the domain context, almost everything is OK by default).
Do you want some more enlightenment from someone who explains things much better than me? (Not being a native English speaker is a handicap for these complex issues.) Take a look here:
https://blog.sapiensworks.com/post/2016/07/14/DDD-Aggregate-Decoded-1
http://blog.sapiensworks.com/post/2016/07/14/DDD-Aggregate-Decoded-2
http://blog.sapiensworks.com/post/2016/07/14/DDD-Aggregate-Decoded-3
It has identity local to aggregate
In a logical sense, probably, but concretely implementing this with the persistence means we have is often unnecessarily complex.
We need a read only copy Value-Object to expose information from an entity (at least for a repository to be able to read it in order to save to db, for example)
Not necessarily, you could have read-only entities for instance.
The repository part of the problem was already addressed in another question. Reads aren't an issue, and there are multiple techniques to prevent write access from the outside world but still allow the persistence layer to populate an entity directly or indirectly.
So, why do we have an entity at all instead of Value Objects only?
You might be somewhat hastily putting concerns in the same basket which really are slightly different:
Encapsulation of operations
Aggregate level invariant enforcement
Read access
Write access
Entity or VO data integrity
Just because Value Objects are best made immutable and don't enforce aggregate-level invariants (they do enforce their own data integrity though) doesn't mean Entities can't have a fine-tuned combination of some of the same characteristics.
These questions that you have do not exist in a CQRS architecture, where the Write model (the Aggregate) is different from a Read model. In a flat architecture, the Aggregate must expose read/query methods, otherwise it would be pointless.
Entity should be private to aggregate
Yes, in this way you are clearly expressing the fact that they are not for external use.
We need a read only copy Value-Object to expose information from an entity (at least for a repository to be able to read it in order to save to db, for example)
The Repositories are a special case and should not be seen in the same way as Application/Presentation code. They could be part of the same package/module; in other words, they should be able to access the nested entities.
The entities can be viewed/implemented as objects with an immutable ID and a Value Object representing their state, something like this (in pseudocode):
class SomeNestedEntity
{
    private readonly ID;
    private SomeNestedEntityImmutableState state;

    public getState() { return state; }
    public someCommandMethod() { state = state.mutateSomehow(); }
}
So you see? You could safely return the state of the nested entity, as it is immutable. There would be some friction with the Law of Demeter, but this is a decision you would have to make: if you break it by returning the state, the code is simpler to write at first, but the coupling increases.
Methods that we have on entity are duplicated on Aggregate (or, vice versa, methods we have to have on Aggregate that handle entity are duplicated on entity)
Yes, this protects the Aggregate's encapsulation and also permits the Aggregate to protect its invariants.
I won't write too much; just an example. A car and a gear. The car is the aggregate root; the gear is a child entity.
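A minimal sketch of that example (my own illustration): the gear is reachable only through the car, and the car guards the invariant:
// Child entity: local to the aggregate, no public mutators.
public class Gear
{
    public int Current { get; private set; }
    internal void Shift(int target) { Current = target; }
}

// Aggregate root: the only entry point; it protects the invariant.
public class Car
{
    private readonly Gear _gear = new Gear();

    public int CurrentGear { get { return _gear.Current; } }

    public void ShiftTo(int target)
    {
        if (target < 1 || target > 6)
            throw new InvalidOperationException("No such gear.");
        _gear.Shift(target);
    }
}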

DDD - Should optimistic concurrency property (etag or timestamp) ever be a part of domain?

Theoretically speaking, if we implement optimistic concurrency at the Aggregate Root level (changing entities in the AR changes the version on the AR), and let's say we use a timestamp for the version property (just for simplicity): should the timestamp ever be a property on the AR, or should it be part of the read model on one side and, on the other side (for example, for updates), a separate argument to the application service, like:
[pseudo]
public class AppService {
    ...
    public void UpdateSomething(UpdateModelDTO dto, int timestamp)
    {
        var model = repository.GetModel(dto.Identifier);
        model.UpdateSomething(dto.Something);
        repository.ConcurrencySafeModelUpdate(model, timestamp);
    }
}
I see pros/cons for both, but I am wondering which is the by-the-book solution.
[UPDATE]
To answer the question from @guillaume31, I expect the usual scenario:
On read, the version identifier is read and sent to the client.
On update, the client sends back the identifier, and the repository returns some kind of error if the version identifier is not the same.
I don't know if it's important, but I want to leave the responsibility for creating/updating the version identifiers themselves to my database system.
I assume (or else you wouldn't be asking this question) that you are using your domain model as the data model (i.e. as Hibernate entities). In that case you have already introduced infrastructure concerns into your domain model, so I would suggest going ahead and adding the timestamp to the AR.
There's no by-the-book solution. Sometimes you'll already have a field in your aggregate that fits the role nicely (e.g. LastUpdatedOn), sometimes you can make it non-domain data. For performance reasons, it may be a good idea to select the field as part of the same query that fetches the aggregate though.
Some ORMs provide facilities to detect concurrency conflicts. Some DBMS's can create and update a version column automatically for you. You should probably look for guidelines about optimistic concurrency in your specific stack.
The Aggregate should not care about anything else but business related concerns. It should be pure, without any infrastructure concerns.
The version column in the database/persistence is an infrastructure concern so it should not be reflected in the Aggregate class from the Domain layer.
P.S. Using timestamps for optimistic locking is odd; you would do better to use an integer that is incremented every time an Aggregate is mutated and that is expected to be (loaded version + 1) when a save is executed:
public void UpdateSomething(UpdateModelDTO dto)
{
    var (model, version) = repository.GetModel(dto.Identifier);
    model.UpdateSomething(dto.Something);
    repository.ConcurrencySafeModelUpdate(model, version + 1);
}
Update: here is an implementation example in Java.

How to keep domain model pure using ES approach?

I have flicked through a few popular Event Sourcing frameworks written in a variety of common languages, and I have got the impression that all of them affect the domain models to a really high degree. As far as I understand, ES is just an infrastructure concern - a way of persisting aggregate state. It facilitates message-driven inter-context integration, of course, but from the core domain's point of view it is negligible. I consider commands and events to be part of the domain itself, so it looks perfectly fine that an aggregate creates events (but does not publish them) or handles commands.
The problem is that all of the DDD building blocks tend to be polluted by the ES framework. Events must inherit from some base class. Aggregates at least are supposed to implement foreign interfaces. I wonder if domain models should even be aware of using an ES approach within an application. In my opinion, even the necessity of providing apply() methods indicates that another layer is shaping our domain.
How do you approach this issue in your projects?
My answer applies only when CQRS is involved (write and read models are split and they communicate using domain events).
As far as I understand, ES is just an infrastructure concern - a way of persisting aggregate state
Event sourcing is indeed an infrastructure concern, a kind of repository, but event-based Aggregates are not. I consider them an architectural style, different from the classical style.
So, the fact that an Aggregate, in reaction to a command, generates zero or more domain events that are applied onto itself in order to build the internal (private) state it uses to decide what events to generate in the future is just a different mode of thinking about and designing an Aggregate. This is a perfectly valid style, alongside the classical style (the one not using events but only objects) or a functional programming style.
Event sourcing just means that every time a command reaches an Aggregate, its entire internal state is rebuilt from the events instead of being loaded from a flat persistence. Of course there are other huge advantages (!) but they do not affect the design of an Aggregate.
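A sketch of that style (all names invented; the point is only where the events flow):
// Event-based aggregate: commands emit events, state is built only by applying them.
public class Account
{
    private decimal _balance; // private state, derived exclusively from events

    // Rebuilding from history is all that event sourcing needs; it lives outside the class.
    public static Account FromHistory(IEnumerable<object> history)
    {
        var account = new Account();
        foreach (var evt in history) account.Apply(evt);
        return account;
    }

    public object Deposit(decimal amount)
    {
        var evt = new MoneyDeposited(amount);
        Apply(evt);
        return evt; // the caller (repository/store) decides how to persist it
    }

    public object Withdraw(decimal amount)
    {
        if (amount > _balance) throw new InvalidOperationException("Insufficient funds.");
        var evt = new MoneyWithdrawn(amount);
        Apply(evt);
        return evt;
    }

    private void Apply(object evt)
    {
        if (evt is MoneyDeposited d) _balance += d.Amount;
        else if (evt is MoneyWithdrawn w) _balance -= w.Amount;
    }
}

public class MoneyDeposited
{
    public decimal Amount { get; }
    public MoneyDeposited(decimal amount) { Amount = amount; }
}

public class MoneyWithdrawn
{
    public decimal Amount { get; }
    public MoneyWithdrawn(decimal amount) { Amount = amount; }
}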
... but not publishes them ...
I like the frameworks that permit us to just return (or better, yield - an Aggregate's command methods are just generators!) the events.
Events must inherit from some base class
It's sad that some frameworks require that, but it is not strictly necessary. In general, a framework needs some means of detecting an event class; however, it can be implemented to detect events by means other than marker interfaces. For example, the client (as in YOU) could provide a filter method that rejects non-event classes.
However, there is one thing that I couldn't avoid in my framework (yes, I know, I'm guilty, I have one): the Command interface with only one method: getAggregateId.
Aggregates at least are supposed to implement foreign interfaces.
Again, as with events, this is not a necessity. A framework could be given a custom client event-applier-on-aggregates function, or a convention can be used (e.g. all event-applier methods have the form applyEventClassNameOrType).
I wonder if domain models should even be aware of using an ES approach within an application
Of ES, no; but of being event-based, YES, so the apply method must still exist.
As far as I understand, ES is just an infrastructure concern - a way of persisting aggregate state.
No, events are really core to the domain model.
Technically, you could store diffs in a domain-agnostic way. For example, you could look at an aggregate and say "here is the representation before the change, here is the representation after; we'll compute the difference and store that."
The difference between patches and events is that you switch from a domain-agnostic spelling to a domain-specific spelling. Doing that is normally going to require being intimate with the domain model itself.
The problem is that all of the DDD building blocks tend to be polluted by the ES framework.
Yup, there's a lot of crap framework code in the examples you find in the wild. Sturgeon's Law at work.
Thinking about the domain model from a functional perspective can help a lot. At its core, the most general form of the model is a function that accepts current state as input and returns a list of events as output:
List<Event> change(State current)
From there, if you want to save current state, you just wrap this function in something that knows how to do the fold:
State current = ...;
List<Event> events = change(current);
State updated = State.fold(current, events);
Similarly, you can get current state by folding over the previous history:
List<Event> savedHistory = ...;
State current = State.reduce(savedHistory);
List<Event> events = change(current);
State updated = State.fold(current, events);
Another way of saying the same thing: the "events" are already there in your (not event sourced) domain model -- they are just implicit. If there is business value in tracking those events, then you should replace the implementation of your domain model with one that makes those events explicit. Then you can decide which persisted representation to use independently of the domain model.
The core of my problem is that a domain Event inherits from a framework Event, and the aggregate implements some foreign interface (from the framework). How to avoid this?
There are a couple of possibilities.
1) Roll your own: take a close look at the framework -- what is it really buying you? If your answer is "not much", then maybe you can do without it.
From what I've seen, the "win" of these frameworks tends to be in taking a heterogeneous collection of events and managing the routing for you. That's not nothing -- but it's a bit magic, and you might be happier having that code explicit rather than relying on implicit framework magic.
2) Suck it up: if the framework is unobtrusive, then it may be more practical to accept the tradeoffs that it imposes and live with them. To some degree, event frameworks are like object-relational mappers or databases; sure, in theory you should be able to change them out freely. In practice, how often do you derive benefit from the investment in that flexibility?
3) Interfaces: if you squint a little bit, you can see that your domain behaviors don't usually depend on in-memory representations, but instead on the algebra of the domain itself.
For example, in the domain model, we deposit Money into an Account updating its Balance. We don't typically care whether those are integers, or longs, or floats, or JSON documents. We can satisfy the model with any implementation that satisfies the constraints of the algebra.
So you can use the framework to provide the implementation (which also happens to have all the hooks the framework needs); the behavior just interacts with the interface it defined itself.
In a strongly typed implementation, this can get really twisty. In Java, for instance, if you want the strong type checks you need to be comfortable with the magic of generics and type erasure.
The real answer to this is that DDD is overrated. It is not true that you have to have one model to rule them all. You may have different views on the state of your world, depending on your current needs. One part of the application has one view, another part a completely different view.
To put it another way, your model is not "what is", but "what happened so far". The actual data model of your application is the event stream itself. Everything else you derive from there.

DDD: where should logic go that tests the existence of an entity?

I am in the process of refactoring an application and am trying to figure out where certain logic should fit. For example, during the registration process I have to check whether a user exists based upon their email address. As this requires testing whether the user exists in the database, it seems this logic should not be tied to the model, since its existence is dictated by its presence in the database.
However, I will have a method on the repository responsible for fetching the user by email, etc. This handles the retrieval of the user if they exist. From a use case perspective, registration seems to be a use case scenario, and accordingly it seems there should be a UserService (application service) with a register method that calls the repository method and performs if/then logic to determine whether the returned user entity is null.
Am I on the right track with this approach, in terms of DDD? Am I viewing this scenario the wrong way and if so, how should I revise my thinking about this?
This link was provided as a possible solution: Where to check user email does not already exits?. It does help, but it does not seem to close the loop on the issue. What I seem to be missing from this article is who would be responsible for calling the CreateUserService: an application service, or a method on the aggregate root into which the CreateUserService object would be injected along with any other relevant parameters?
If the answer is the application service, it seems like you are losing some encapsulation by taking the domain service out of the domain layer. On the other hand, going the other way would mean having to inject the repository into the domain service. Which of these two options would be preferable and more in line with DDD?
I think the best fit for that behaviour is a Domain Service. A Domain Service can access persistence, so you can check for existence or uniqueness.
Check this blog entry for more info.
E.g.:
public class TransferManager
{
    private readonly IEventStore _store;
    private readonly IDomainServices _svc;
    private readonly IDomainQueries _query;
    private readonly ICommandResultMediator _result;

    public TransferManager(IEventStore store, IDomainServices svc, IDomainQueries query, ICommandResultMediator result)
    {
        _store = store;
        _svc = svc;
        _query = query;
        _result = result;
    }

    public void Execute(TransferMoney cmd)
    {
        // interacting with the infrastructure
        var accFrom = _query.GetAccountNumber(cmd.AccountFrom);

        // set up value objects
        var debit = new Debit(cmd.Amount, accFrom);

        // invoking domain services
        var balance = _svc.CalculateAccountBalance(accFrom);
        if (!_svc.CanAccountBeDebitted(balance, debit))
        {
            // return some error message using a mediator;
            // this approach works well inside monoliths where everything happens in the same process
            _result.AddResult(cmd.Id, new CommandResult());
            return;
        }

        // using the Aggregate and getting the business state change expressed as an event
        var evnt = Transfer.Create(/* args */);

        // storing the event
        _store.Append(evnt);

        // publish the event if you want
    }
}
from http://blog.sapiensworks.com/post/2016/08/19/DDD-Application-Services-Explained
The problem that you are facing is called Set based validation. There are a lot of articles describing the possible solutions. I will give here an extract from one of them (the context is CQRS but it can be applied to some degree to any DDD architecture):
1. Locking, Transactions and Database Constraints
Locking, transactions and database constraints are tried and tested tools for maintaining data integrity, but they come at a cost. Often the code/system is difficult to scale and can be complex to write and maintain. But they have the advantage of being well understood with plenty of examples to learn from. By implication, this approach is generally done using CRUD based operations. If you want to maintain the use of event sourcing then you can try a hybrid approach.
2. Hybrid Locking Field
You can adopt a locking-field approach. Create a registry or lookup table in a standard database with a unique constraint. If you are unable to insert the row, then you should abandon the command. Reserve the address before issuing the command. For this sort of operation, it is best to use a data store that isn't eventually consistent and can guarantee the constraint (uniqueness, in this case). Additional complexity is a clear downside of this approach, but less obvious is the problem of knowing when the operation is complete. Read-side updates are often carried out in a different thread or process, or even a different machine, from the command, and there could be many different operations happening.
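A hedged sketch of that reservation step in C# (the table and method names are assumptions; a real system would also inspect the provider's error code for the specific constraint violation):
// Assumes a registry table with a unique constraint, e.g.:
//   CREATE TABLE EmailRegistry (Email NVARCHAR(254) PRIMARY KEY);
public bool TryReserveEmail(IDbConnection connection, string email)
{
    using (var cmd = connection.CreateCommand())
    {
        cmd.CommandText = "INSERT INTO EmailRegistry (Email) VALUES (@email)";
        var p = cmd.CreateParameter();
        p.ParameterName = "@email";
        p.Value = email;
        cmd.Parameters.Add(p);
        try
        {
            cmd.ExecuteNonQuery();
            return true;  // reservation succeeded: safe to issue the command
        }
        catch (DbException)
        {
            return false; // unique constraint hit: abandon the command
        }
    }
}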
3. Rely on the Eventually Consistent Read Model
To some this sounds like an oxymoron; however, it is a rather neat idea. Inconsistent things happen in systems all the time. Event sourcing allows you to handle these inconsistencies. Rather than throwing an exception and losing someone's work all in the name of data consistency, simply record the event and fix it later.
As an aside, how do you know a consistent database is consistent? It keeps no record of the failed operations users have tried to carry out. If I try to update a row in a table that has been updated since I read from it, then the chances are I’m going to lose that data. This gives the DBA an illusion of data consistency, but try to explain that to the exasperated user!
Accepting these things happen, and allowing the business to recover, can bring real competitive advantage. First, you can make the deliberate assumption these issues won’t occur, allowing you to deliver the system quicker/cheaper. Only if they do occur and only if it is of business value do you add features to compensate for the problem.
4. Re-examine the Domain Model
Let’s take a simplistic example to illustrate how a change in perspective may be all you need to resolve the issue. Essentially we have a problem checking for uniqueness or cardinality across aggregate roots because consistency is only enforced with the aggregate. An example could be a goalkeeper in a football team. A goalkeeper is a player. You can only have 1 goalkeeper per team on the pitch at any one time. A data-driven approach may have an ‘IsGoalKeeper’ flag on the player. If the goalkeeper is sent off and an outfield player goes in the goal, then you would need to remove the goalkeeper flag from the goalkeeper and add it to one of the outfield players. You would need constraints in place to ensure that assistant managers didn’t accidentally assign a different player resulting in 2 goalkeepers. In this scenario, we could model the IsGoalKeeper property on the Team, OutFieldPlayers or Game aggregate. This way, maintaining the cardinality becomes trivial.
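To illustrate (my own sketch, with invented names, not from the article): once the goalkeeper is modeled on the Team aggregate, the cardinality rule is enforced by a single field:
// The Team aggregate owns the rule "at most one goalkeeper on the pitch".
public class Team
{
    private readonly List<Guid> _playersOnPitch = new List<Guid>();
    private Guid _goalKeeperId;

    public void AssignGoalKeeper(Guid playerId)
    {
        if (!_playersOnPitch.Contains(playerId))
            throw new InvalidOperationException("Player is not on the pitch.");

        // Assigning replaces the previous keeper; two goalkeepers are
        // impossible by construction, with no external constraint needed.
        _goalKeeperId = playerId;
    }
}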
You seem to be on the right track; the only thing I didn't get is what your UserService.register does.
It should take all the values needed to register a user as input, validate them (using the repository to check the existence of the email) and, if the input is valid, store the new User.
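In outline, register could look like this sketch (the repository interface and the User.Register factory are assumptions, not a canonical implementation):
public interface IUserRepository
{
    User GetByEmail(string email);
    void Add(User user);
}

public class UserService
{
    private readonly IUserRepository _repository;

    public UserService(IUserRepository repository) { _repository = repository; }

    public void Register(string email, string password)
    {
        // The existence check that needs the database is delegated to the repository.
        if (_repository.GetByEmail(email) != null)
            throw new InvalidOperationException("A user with this email already exists.");

        var user = User.Register(email, password); // domain factory enforces its own invariants
        _repository.Add(user);
    }
}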
Problems can arise when the validation involves complex queries. In that case maybe you need to create a secondary store with special indexes suited to queries that you can't do with your domain model, so you will have to manage two different stores that can be out of sync (a user exists in one but isn't replicated in the other one yet).
This kind of problem happens when you store your aggregates in something like a key-value store, where you can search only by the ID of the aggregate; if you are using something like a SQL database that permits searching by your entities' fields, you can do a lot of stuff with simple queries.
The only thing you need to take care of is to avoid mixing query logic and command logic. In your example the lookup you need is easy: it is just one field and the result is a boolean. Sometimes it can be harder - time operations, or queries spanning multiple tables and aggregating results. In those cases it is better to make your (command) service use a (query) service that offers a simple API to do the calculation, like:
interface UserReportingService {
    ComplexResult aComplexQuery(AComplexInput input);
}
You can implement it with a class that uses your repositories, or with an implementation that executes the query directly on your database (SQL, or whatever).
The difference is that if you use the repositories you "think" in terms of your domain objects, while if you write the query directly you think in terms of your DB abstractions (tables/sets in the case of SQL, documents in the case of Mongo, etc.). Which to choose depends on the query you need to do.
It is fine to inject a repository into the domain.
A repository should have a simple interface, so that domain objects can use it as a simple collection or storage. The main idea of repositories is to hide data access behind a simple and clear interface.
I don't see any problem in calling domain services from a use case. The use case is supposed to be an orchestrator, and domain services are actions; it is fine (and even unavoidable) to trigger domain actions from a use case.
To decide, you should analyze where this restriction comes from.
Is it a business rule? Or maybe the user shouldn't be a part of the model at all?
Usually "User" means authorization and authentication, i.e. behaviour that, to my mind, should be placed in the use case. I prefer to create a separate entity for the domain (e.g. Buyer) and relate it to the use case's user. So when a new user is registered, it is possible to trigger the creation of a new Buyer.

Should a domain model keep itself consistent using events?

I am working on an application where we try to use a Domain Model. The idea is to keep the business logic inside the objects in the Domain Model. A lot is now done by objects subscribing to related objects to react to changes in them. This is done through PropertyChanged and CollectionChanged. This works OK, except for the following:
Complex actions: where a lot of changes should be handled as a group (and not as individual property/collection changes). Should I, and how can I, 'build' transactions?
Persistence: I use NHibernate for persistence, and it also uses the public property setters of classes. When NHibernate hits a property, a lot of business logic runs (which seems unnecessary). Should I use custom setters for NHibernate?
Overall it seems that pushing all logic into the domain model makes the domain model rather complex. Any ideas?
Here's a 'sample' problem (sorry for the crappy tooling I use):
You can see that Project is my container, and the objects below it react to each other by subscribing. Changes to Network are done via NetworkEditor, but this editor has no knowledge of NetworkData. This data might sometimes even be defined in another assembly. The flow goes from user -> NetworkEditor -> Network -> NetworkData and all the other interested objects. This does not seem to scale.
I fear that the combination of DDD and PropertyChanged/CollectionChanged events might not be the best idea. The problem is that if you base your logic around these events, it is extremely hard to manage the complexity, as one PropertyChanged leads to another and another, and soon enough you lose control.
Another reason why PropertyChanged events and DDD don't exactly fit is that in DDD every business operation should be as explicit as possible. Keep in mind that DDD is supposed to bring technical stuff into the world of business, not the other way around. And building on PropertyChanged/CollectionChanged doesn't seem very explicit.
In DDD the main goal is to keep consistency inside the aggregate; in other words, you need to model the aggregate in such a way that whatever operation you invoke, the aggregate stays valid and consistent (if the operation succeeds, of course).
If you build your model right, there's no need to worry about 'building' transactions - an operation on an aggregate should be a transaction in itself.
I don't know what your model looks like, but you might consider moving the responsibilities one level 'up' in the aggregate tree, quite possibly adding additional logical entities in the process, instead of relying on PropertyChanged events.
Example:
Let's assume you have a collection of payments with statuses, and whenever a payment changes you want to recalculate the total balance of customer orders. Instead of subscribing to changes of the payments collection and calling a method on the customer when the collection changes, you might do something like this:
public class CustomerOrder
{
    public List<Payment> Payments { get; }
    public Balance BalanceForOrder { get; }

    public void SetPaymentAsReceived(Guid paymentId)
    {
        Payments.First(p => p.PaymentId == paymentId).Status = PaymentStatus.Received;
        RecalculateBalance();
    }
}
You might have noticed that we recalculate the balance of a single order and not the balance of the entire customer - and in most cases that's OK, as the customer belongs to another aggregate and its balance can simply be queried when needed. That is exactly the part that shows this 'consistency only within the aggregate' thing - we don't care about any other aggregate at this point, we only deal with a single order. If that's not OK for the requirements, then the domain is modeled incorrectly.
My point is that in DDD there's no single good model for every scenario - you have to understand how the business works to be successful.
If you take a look at the example above, you'll see that there is no need to 'build' the transaction - the entire transaction is located in the SetPaymentAsReceived method. In most cases, one user action should lead to one particular method on an entity within the aggregate - this method explicitly relates to a business operation (of course, it may call other methods).
As for events in DDD, there is a concept of Domain Events, however these are not directly related with PropertyChanged/CollectionChanged technical events. Domain Events indicate the business operations (transactions) that have been completed by an aggregate.
Overall it seems that pushing all logic into the domain model makes the domain model rather complex
Of course it does, as it is meant to be used for scenarios with complex business logic. However, if the domain is modeled correctly, then it is easy to manage and control this complexity - and that's one of the advantages of DDD.
Added after providing example:
OK, and what about creating an aggregate root called Project? When you build the aggregate root from the Repository, you can fill it with NetworkData, and the operation might look like this:
public class Project
{
    protected List<Network> networks;
    protected List<NetworkData> networkDatas;

    public void Mutate(string someKindOfNetworkId, object someParam)
    {
        var network = networks.First(n => n.Id == someKindOfNetworkId);
        var someResult = network.DoSomething(someParam);
        networkDatas.Where(d => d.NetworkId == someKindOfNetworkId)
                    .ToList()
                    .ForEach(d => d.DoSomething(someResult, someParam));
    }
}
NetworkEditor would not operate on Network directly, but rather through Project, using the NetworkId.
