Where to validate business rules when using event sourcing?

I implemented event-sourced entities (in Domain-Driven Design these are called aggregates). Creating a rich domain model is good practice, and Domain-Driven Design (DDD) suggests putting all business-related logic into the core entities and value objects whenever possible.
But there is an issue when combining this approach with event sourcing. In contrast to traditional approaches, in an event-sourced system events are stored first and replayed later to rebuild the entity before executing its methods.
Based on that, the big question is where to put the business logic. Usually, I would like to have a method like:
public void addNewAppointment(...)
In this case, I would expect the method to make sure that no business rules are violated; if one is, an exception is thrown.
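Spelled out, such a method would look roughly like this (the Appointment parameter and the appointments collection are just placeholders):

    // Traditional, state-based version: the entity guards its invariants,
    // then mutates its own state.
    public void addNewAppointment(Appointment appointment) {
        if (this.state == CANCELED) {
            throw new ConferenceClosed();
        }
        this.appointments.add(appointment);
    }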
But when using event sourcing I would have to create an event:
Event event = new AppointmentAddedEvent(...);
eventStore.save(event);
So far, I have explored two approaches to check business rules before storing the event.
The first is to check business rules within the application layer. In DDD, the application layer is a delegation layer; it should contain no business logic, only delegate work such as fetching core entities, calling their methods, and saving things back. In this example that rule would be violated:
List<Event> events = store.getEventsForConference(id);
// all events are applied to create the conference entity
Conference conf = factory.build(events);
if (conf.getState() == CANCELED) {
    throw new ConferenceClosed();
}
Event event = new AppointmentAddedEvent(...);
store.save(event);
Obviously, the business rule that appointments must not be added to canceled conferences has leaked into a non-core component.
The second approach I know is to add process methods of commands to core entities:
class Conference {
    // ...
    public List<Event> process(AddAppointmentCommand command) {
        if (this.state == CANCELED) {
            throw new ConferenceClosed();
        }
        return Arrays.asList(new AppointmentAddedEvent(...));
    }
    // ...
}
In this case, the benefit is that the business rules are part of the core entity. But it violates the separation of concerns principle: the entity is now responsible for creating the events that get stored in an event store. Beyond that, it feels odd for an entity to create events at all. I can argue why it is natural for an entity to process events, but creating domain events for storage, rather than for ordinary publishing, feels wrong.
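For completeness, the application layer under this second approach would shrink to pure delegation, something like the following (getConferenceId() being an assumed accessor on the command):

    // Application layer: load history, rebuild the aggregate, delegate, persist.
    public void handle(AddAppointmentCommand command) {
        List<Event> history = store.getEventsForConference(command.getConferenceId());
        Conference conf = factory.build(history);
        List<Event> newEvents = conf.process(command);  // rules enforced inside the entity
        newEvents.forEach(store::save);
    }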
Have any of you experienced similar issues? And how did you solve them?
For now, I will just go with the business rules in the application service. It is still a single place and ok-ish, but it violates some DDD principles.
I am looking forward to your ideas and experiences about DDD, event sourcing and the validation of incoming changes.
Thanks in advance

I love this question. When I first asked it myself, that was the break between merely following the patterns and challenging myself to understand what is really going on.
the big question is where to put the business logic
The usual answer is "the same place you did before" -- in methods of the domain entities. Your "second approach" is the usual idea.
But there is a violation of separation of concerns principle.
It isn't really, but it certainly looks weird.
Consider what we normally do, when saving off current state. We run some query (usually via the repository) to get the original state out of the book of record. We use that state to create an entity. We then run the command, in which the entity creates new state. We then save the object in the repository, which replaces the original state with the new state in the book of record.
In code, it looks something like
state = store.get(id)
conf = ConferenceFactory.build(state)
conf.state.appointments.add(...)
store.save(id, conf.state)
What we are really doing in event sourcing is replacing mutable state with a persistent collection of events:
history = store.get(id)
conf = ConferenceFactory.build(history)
conf.history.add(AppointmentScheduled(...))
store.save(id, conf.history)
In mature business domains, like accounting or banking, the ubiquitous language includes event histories: journal, ledger, transaction history, that sort of thing. In those cases, event histories are an inherent part of the domain.
In other domains -- like calendar scheduling -- we don't (yet?) have analogous entities in the domain language, so it feels like we are doing something weird when we change to events. But the core pattern is the same -- we pull history out of the book of record, we manipulate that history, we save the updates to the book of record.
So the business logic happens in the same place that it always did.
Which is to say that yes, the domain logic knows about events.
An exercise that may help: let go of the "object oriented" constraint, and just think in terms of functions....
static List<Event> scheduleAppointment(List<Event> history, AddAppointmentCommand addAppointment) {
    var state = state(history);
    if (state == CANCELED) {
        throw new ConferenceClosed();
    }
    return Arrays.asList(new AppointmentAddedEvent(...));
}

private static State state(List<Event> history) { ... }

Related

DDD: where should logic go that tests the existence of an entity?

I am in the process of refactoring an application and am trying to figure out where certain logic should go. For example, during the registration process I have to check whether a user exists based upon their email address. As this requires checking whether the user exists in the database, it seems this logic should not be tied to the model, since its existence is dictated by what is in the database.
However, I will have a method on the repository responsible for fetching the user by email, etc. This handles the retrieval of the user if they exist. From a use-case perspective, registration seems to be a use-case scenario, and accordingly there should be a UserService (application service) with a register method that calls the repository method and performs if/then logic to determine whether the returned user entity was null.
Am I on the right track with this approach, in terms of DDD? Am I viewing this scenario the wrong way, and if so, how should I revise my thinking about it?
This link was provided as a possible solution: Where to check user email does not already exits?. It does help, but it does not seem to close the loop on the issue. What I seem to be missing from that article is who would be responsible for calling the CreateUserService: an application service, or a method on the aggregate root into which the CreateUserService object would be injected along with any other relevant parameters?
If the answer is the application service, it seems you are losing some encapsulation by taking the domain service out of the domain layer. On the other hand, going the other way would mean injecting the repository into the domain service. Which of those two options is preferable and more in line with DDD?
I think the best fit for that behaviour is a Domain Service. A domain service can access persistence, so you can check for existence or uniqueness.
Check this blog entry for more info.
For example:
public class TransferManager
{
    private readonly IEventStore _store;
    private readonly IDomainServices _svc;
    private readonly IDomainQueries _query;
    private readonly ICommandResultMediator _result;

    public TransferManager(IEventStore store, IDomainServices svc, IDomainQueries query, ICommandResultMediator result)
    {
        _store = store;
        _svc = svc;
        _query = query;
        _result = result;
    }

    public void Execute(TransferMoney cmd)
    {
        // interacting with the infrastructure
        var accFrom = _query.GetAccountNumber(cmd.AccountFrom);

        // set up value objects
        var debit = new Debit(cmd.Amount, accFrom);

        // invoking domain services
        var balance = _svc.CalculateAccountBalance(accFrom);
        if (!_svc.CanAccountBeDebitted(balance, debit))
        {
            // return some error message using a mediator;
            // this approach works well inside monoliths where everything happens in the same process
            _result.AddResult(cmd.Id, new CommandResult());
            return;
        }

        // using the aggregate and getting the business state change expressed as an event
        var evnt = Transfer.Create(/* args */);

        // storing the event
        _store.Append(evnt);

        // publish event if you want
    }
}
from http://blog.sapiensworks.com/post/2016/08/19/DDD-Application-Services-Explained
The problem you are facing is called set-based validation. There are a lot of articles describing possible solutions. I will give an extract from one of them here (the context is CQRS, but it can be applied to some degree to any DDD architecture):
1. Locking, Transactions and Database Constraints
Locking, transactions and database constraints are tried and tested tools for maintaining data integrity, but they come at a cost. Often the code/system is difficult to scale and can be complex to write and maintain. But they have the advantage of being well understood, with plenty of examples to learn from. By implication, this approach is generally done using CRUD-based operations. If you want to maintain the use of event sourcing, you can try a hybrid approach.
2. Hybrid Locking Field
You can adopt a locking-field approach. Create a registry or lookup table in a standard database with a unique constraint; if you are unable to insert the row, you abandon the command. Reserve the value (the address, in this example) before issuing the command. For this sort of operation it is best to use a data store that isn't eventually consistent and can guarantee the constraint (uniqueness, in this case). The additional complexity is a clear downside of this approach, but a less obvious problem is knowing when the operation is complete: read-side updates are often carried out in a different thread, process, or even machine from the command, and there could be many different operations happening.
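A minimal sketch of the reservation step, assuming a JDBC connection and a reserved_emails table with a UNIQUE constraint on its email column (table, column, and method names are all invented):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.sql.SQLIntegrityConstraintViolationException;

    // Reserve the unique value before issuing the command; the UNIQUE
    // constraint on reserved_emails(email) is what actually enforces the rule.
    boolean tryReserveEmail(Connection conn, String email) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO reserved_emails (email) VALUES (?)")) {
            ps.setString(1, email);
            ps.executeUpdate();
            return true;   // reservation succeeded; safe to issue the command
        } catch (SQLIntegrityConstraintViolationException e) {
            return false;  // value already reserved: abandon the command
        }
    }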
3. Rely on the Eventually Consistent Read Model
To some this sounds like an oxymoron; however, it is a rather neat idea. Inconsistent things happen in systems all the time, and event sourcing allows you to handle these inconsistencies: rather than throwing an exception and losing someone's work in the name of data consistency, simply record the event and fix it later.
As an aside, how do you know a consistent database is consistent? It keeps no record of the failed operations users have tried to carry out. If I try to update a row in a table that has been updated since I read from it, then the chances are I’m going to lose that data. This gives the DBA an illusion of data consistency, but try to explain that to the exasperated user!
Accepting that these things happen, and allowing the business to recover, can bring real competitive advantage. First, you can make the deliberate assumption that these issues won't occur, allowing you to deliver the system quicker and cheaper. Only if they do occur, and only if it is of business value, do you add features to compensate for the problem.
4. Re-examine the Domain Model
Let's take a simplistic example to illustrate how a change in perspective may be all you need to resolve the issue. Essentially we have a problem checking for uniqueness or cardinality across aggregate roots, because consistency is only enforced within the aggregate. An example could be a goalkeeper in a football team. A goalkeeper is a player, and you can only have one goalkeeper per team on the pitch at any one time. A data-driven approach might put an 'IsGoalKeeper' flag on the player. If the goalkeeper is sent off and an outfield player goes in goal, you would need to remove the goalkeeper flag from the goalkeeper and add it to one of the outfield players, and you would need constraints in place to ensure that assistant managers didn't accidentally assign a different player, resulting in two goalkeepers. In this scenario, we could instead model the goalkeeper property on the Team, OutFieldPlayers or Game aggregate. This way, maintaining the cardinality becomes trivial.
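For instance, a sketch with the property moved onto the Team aggregate (all names are illustrative):

    import java.util.HashSet;
    import java.util.Set;

    public class Team {
        private final Set<PlayerId> playersOnPitch = new HashSet<>();
        private PlayerId goalkeeper;  // a single field: cardinality of one is structural

        public void assignGoalkeeper(PlayerId playerId) {
            if (!playersOnPitch.contains(playerId)) {
                throw new IllegalArgumentException("Player is not on the pitch");
            }
            // reassignment simply replaces the previous goalkeeper;
            // two simultaneous goalkeepers are impossible by construction
            this.goalkeeper = playerId;
        }
    }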
You seem to be on the right track; the only thing I didn't get is what your UserService.register does.
It should take all the values needed to register a user as input, validate them (using the repository to check the existence of the email) and, if the input is valid, store the new User.
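In code, that could look roughly like this (UserRepository, existsByEmail, and the exception type are assumptions):

    public class UserService {
        private final UserRepository userRepository;

        public UserService(UserRepository userRepository) {
            this.userRepository = userRepository;
        }

        public void register(String email, String password) {
            // set-based validation delegated to the repository
            if (userRepository.existsByEmail(email)) {
                throw new UserAlreadyExistsException(email);
            }
            userRepository.save(new User(email, password));
        }
    }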
Problems can arise when the validation involves complex queries. In that case you may need to create a secondary store with special indexes, suited for queries you can't express against your domain model; you will then have to manage two different stores that can get out of sync (a user exists in one but isn't replicated to the other yet).
This kind of problem happens when you store your aggregates in something like a key-value store, where you can only search by the id of the aggregate. If you are using something like a SQL database that lets you search by your entities' fields, you can do a lot with simple queries.
The one thing to take care of is to avoid mixing query logic and command logic. In your example the lookup is easy: it is a single field and the result is a boolean. Sometimes it is harder, such as time operations or queries spanning multiple tables that aggregate results. In those cases it is better to have your (command) service use a (query) service that offers a simple API for the calculation, like:
interface UserReportingService {
    ComplexResult aComplexQuery(AComplexInput input);
}
You can implement it with a class that uses your repositories, or with an implementation that executes the query directly against your database (SQL or whatever).
The difference is that with the repositories you "think" in terms of your domain objects, while writing the query directly means thinking in terms of your db abstractions (tables/sets for SQL, documents for Mongo, etc.). Which one to choose depends on the query you need to run.
It is fine to inject a repository into the domain.
A repository should have a simple interface, so that domain objects can use it as a simple collection or storage. The main idea of repositories is to hide data access behind a simple and clear interface.
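For instance, a narrow, collection-like interface the domain can safely depend on (a sketch; the names are invented):

    import java.util.Optional;

    // The domain sees the repository as little more than a collection of Users.
    public interface Users {
        Optional<User> byEmail(String email);
        void add(User user);
    }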
I don't see any problem in calling domain services from a use case. A use case is supposed to be an orchestrator, and domain services are actions; it is fine (and even unavoidable) to trigger domain actions from a use case.
To decide, you should analyze where this restriction comes from.
Is it a business rule? Or maybe the user shouldn't be part of the model at all?
Usually "User" means authorization and authentication, i.e. behaviour that, to my mind, should be placed in the use case. I prefer to create a separate entity for the domain (e.g. a buyer) and relate it to the use case's user. So when a new user is registered, it is possible to trigger the creation of a new buyer.

Can an Aggregate Root factory method return a command instead of publishing an event?

In Vaughn Vernon's Implementing Domain-Driven Design book, he describes the use of a factory method on an Aggregate Root. One example is a Forum aggregate root with a startDiscussion factory method that returns a Discussion aggregate root.
public class Forum extends Entity {
    // ...
    public Discussion startDiscussion(
            DiscussionId aDiscussionId, Author anAuthor, String aSubject) {
        if (this.isClosed()) {
            throw new IllegalStateException("Forum is closed.");
        }
        Discussion discussion = new Discussion(
                this.tenant(), this.forumId(), aDiscussionId, anAuthor, aSubject);
        DomainEventPublisher.instance().publish(new DiscussionStarted(...));
        return discussion;
    }
}
How would one implement this factory pattern in an event sourcing system, specifically in Axon?
I believe conventionally, it may be implemented in this way:
StartDiscussionCommand -> DiscussionStartedEvent -> CreateDiscussionCommand -> DiscussionCreatedEvent
We fire a StartDiscussionCommand to be handled by the Forum; the Forum then publishes a DiscussionStartedEvent. An external event handler catches the DiscussionStartedEvent, converts it, and fires a CreateDiscussionCommand. Another handler instantiates a Discussion using the CreateDiscussionCommand, and the Discussion fires a DiscussionCreatedEvent.
Alternatively, can we instead have:
StartDiscussionCommand -> CreateDiscussionCommand -> DiscussionCreatedEvent
We fire the StartDiscussionCommand, which triggers a command handler that invokes the Forum's startDiscussion() method; this returns a CreateDiscussionCommand. The handler then dispatches the CreateDiscussionCommand. Another handler receives it and uses it to instantiate the Discussion, which then fires the DiscussionCreatedEvent.
The first practice involves 4 DTOs, whilst the second one involves only 3 DTOs.
Any thoughts on which practice should be preferred? Or is there another way to do this?
The best approach to a problem like this is to consider your aggregates (in fact, the entire system) as a black box first. Just look at the API.
Given a Forum (that is not closed),
when I send a StartDiscussionCommand for that forum,
a new Discussion is started.
But also:
Given a Forum that was closed,
when I send a StartDiscussionCommand for that forum,
an exception is raised.
Note that the API you suggested is too technical. In 'real life', you don't create a discussion, you start one.
This means the state of the forum is involved in the creation of the discussion. So ideally (still looking at the black box), such a scenario would be implemented in the Forum aggregate, applying an event which represents the creation event for the Discussion aggregate. This is under the assumption that other factors require Forum and Discussion to be two distinct aggregates.
So you don't really want the command handler to return/send a command; you want that handler to decide whether or not to create an aggregate.
Unfortunately, Axon doesn't support this feature yet. At the moment, Axon cannot apply an event that belongs to another aggregate through its regular APIs.
However, there is a way to get it done. In Axon 3, you don't have to apply an event from within the aggregate; you can also publish one directly to the Event Bus (which in the case of event sourcing is an Event Store implementation). To implement this, you could directly publish a DomainEventMessage that contains a DiscussionCreatedEvent. The ID for the discussion can be any UUID, and the sequence number of the event is 0, as it is the creation event of the discussion.
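A rough sketch of that workaround in Axon 3 style (the GenericDomainEventMessage constructor arguments and the event payload are from memory and may differ across 3.x versions; treat the details as assumptions):

    // Inside the handler for StartDiscussionCommand, after verifying the Forum is open.
    // forumId, author, and subject would come from the command.
    String discussionId = UUID.randomUUID().toString();
    eventStore.publish(new GenericDomainEventMessage<>(
            "Discussion",        // type of the aggregate the event belongs to
            discussionId,        // identifier of the new aggregate
            0L,                  // sequence number 0: the creation event
            new DiscussionCreatedEvent(discussionId, forumId, author, subject)));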
Any thoughts on which practice should be preferred?
The motivation for a command is to direct the application to update the book of record. A command that you don't expect to produce an event is pretty weird.
That is, if your flow is
Forum.startDiscussion -> []
Discussion.create -> [ DiscussionCreated ]
One is bound to ask why the Forum is involved at all?
if (this.isClosed()) {
    throw new IllegalStateException("Forum is closed.");
}
This here is an illusion: we're looking at the state of the Forum at some arbitrary point in the past in order to process the Discussion command. In other words, after this check the state of the Forum could change, and our processing in Discussion would not know. So it would be just as correct to make this check when validating the command, or by checking the read model from within Discussion.
(Everything we get from the book of record is a representation of the past; it has to be, in order to already be in the book of record for us to read. The only moment we act in the present is the point where we update the book of record. More precisely, it's at the moment of the write that we discover whether the assumptions we've made about the past still hold. When we write the changes to Discussion, we are proving that Discussion hasn't changed since we read the data; but that tells us nothing of whether Forum has changed.)
What command->command looks like is an API compatibility adapter: in the old API we used a Forum.startDiscussion command; we changed the model, but continue to support the old command for backwards compatibility. It would all still be synchronous with the request.
That's a real thing (we want the design to support aggressive updates to the model without requiring that clients/consumers be constantly updated), but it's not a good fit for your process flow.

Do you apply events to the domain model immediately when there is still the possibility of useless events or undo?

Every example of event sourcing that I see is for the web. It seems to click especially well with the MVC architecture where views on the client side aren't running domain code and interactivity is limited. I'm not entirely sure how to extrapolate to a rich desktop application, where a user might be editing a list or performing some other long running task.
The domain model is persistence-agnostic and presentation-agnostic and can only be mutated by applying domain events to an aggregate root. My specific question is: should the presentation code mutate the domain model while the user makes uncommitted changes?
If the presentation code does not mutate the domain model, how do you enforce domain logic? It would be nice to have instant domain validation and domain calculations bubble up to the presentation model as the user edits. Otherwise you have to duplicate non-trivial logic in the view model.
If the presentation code does mutate the domain model, how do you implement undo? There's no domain undelete event, since the concept of undo only exists in an uncommitted editing session, and I'd be loath to add an undo version of every event. Even worse, I need the ability to undo events out of order. Do you just remove the event and replay everything on each undo? (An undo also happens if a text field returns to its previous state, for example.)
If the presentation code does mutate the domain model, is it better to persist every event the user performs or to condense the user's activity to the simplest set of events possible? For a simple example, imagine changing a comments field over and over before saving. Would you really persist each intermediate CommentChangedEvent on the same field during the same editing session? Or, for a more complicated example, the user will be changing parameters, running an optimization calculation, adjusting parameters, rerunning the calculation, and so on until satisfied with the most recent result, then committing the changes. I don't think anyone would consider all the intermediate events worth storing. How would you keep this condensed?
There is complicated collaborative domain logic, which made me think DDD/ES was the way to go. I need a picture of how rich client view models and domain models interact and I'm hoping for simplicity and elegance.
I don't see desktop DDD applications as much different from MVC apps; you can basically have the same layers, except they're mostly not network-separated.
CQRS/ES applications work best with a task-based UI where you issue commands that reflect the user's intent. But by 'task' we don't mean each action the user can take on the screen; it has to have a meaning and purpose in the domain. As you rightly point out in your third question, there is no need to model each micro-modification as a full-fledged DDD command with an associated event. That would pollute your event stream.
So you would basically have two levels :
UI level action
These can be managed entirely in the presentation layer. They stack up to eventually be mapped to a single command, but you can undo them individually quite easily. Nothing prevents you from modelling them as micro-events that encapsulate closures for do and undo, for instance; see the sketch below. I've never seen "cherry-pickable" undos in any UI, nor do I really see the point, but this should be feasible and comprehensible to the user as long as the actions are commutative (their effect does not depend on the order of execution).
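A minimal sketch of such a micro-event with do/undo closures (Java; viewModel is a hypothetical view model):

    import java.util.ArrayDeque;
    import java.util.Deque;

    // A UI-level micro-action carrying its own do/undo closures. These never
    // reach the event store; they live only inside the editing session.
    final class UiAction {
        private final Runnable doIt;
        private final Runnable undoIt;

        UiAction(Runnable doIt, Runnable undoIt) {
            this.doIt = doIt;
            this.undoIt = undoIt;
        }

        void apply() { doIt.run(); }
        void revert() { undoIt.run(); }
    }

    // Usage in the presentation layer:
    Deque<UiAction> undoStack = new ArrayDeque<>();
    UiAction edit = new UiAction(
            () -> viewModel.setComment("new text"),
            () -> viewModel.setComment("old text"));
    edit.apply();
    undoStack.push(edit);
    // ...later:
    undoStack.pop().revert();   // undo the most recent action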
Domain level task
Coarser-grained activity represented by a command and a corresponding event. If you need to undo these, I would rather append a new rollback event to the event stream than try to remove an existing one ("don't change the past").
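In code, an undo of a committed task then becomes one more event appended to the history, not a removal (the event and reason names here are hypothetical):

    // The stream keeps both events; state is simply corrected going forward.
    history.add(new AppointmentAddedEvent(appointmentId, slot));
    // ...the user later undoes the task:
    history.add(new AppointmentRemovedEvent(appointmentId, RemovalReason.UNDO));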
Reflecting domain invariants and calculations in the UI
This is where you really have to get the distinction between the two types of tasks right, because UI actions will typically not update anything on the screen apart from a few basic validations (required fields, string and number formats, etc.). Issuing commands, on the other hand, will result in a refreshed view of the model, though you may have to materialize the action with some confirmation button.
If your UIs are mostly about displaying calculated numbers and projections to the user, this can be a problem. You could put the calculations in a separate service called by the UI, then issue a modification command with all the updated calculated values when the user saves. Or you could just submit a command with the one parameter you changed and have the domain call the same calculation service. Both are actually closer to CRUD and will probably lead to an anemic domain model, though, IMO.
I've wound up doing something like this, though having the repository manage transactions.
Basically my repositories all implement
public interface IEntityRepository<TEntityType, TEventType> {
    TEntityType ApplyEvents(IEnumerable<TEventType> events);
    Task Commit();
    Task Cancel();
}
So while ApplyEvents updates and returns the entity, I also keep the starting version internally until Commit is called. If Cancel is called, I just swap them back, discarding the events.
A very nice feature of this is that I only push events to the web, DB, etc. once the transaction is complete. The implementation of the repository depends on the database or web services, but all the calling code needs to know is to Commit or Cancel.
EDIT
To implement Cancel, you store the old entity, the updated entity, and the events since, in an in-memory structure. Something like:
public class EntityTransaction<TEntityType, TEventType>
{
    public TEntityType oldVersion { get; set; }
    public TEntityType newVersion { get; set; }
    public List<TEventType> events { get; set; }
}
Then your ApplyEvents would look like this, for a user:
private Dictionary<Guid, EntityTransaction<IUser, IUserEvent>> transactions;

public IUser ApplyEvents(IEnumerable<IUserEvent> events)
{
    // get the id somehow
    var id = GetUserID(events);
    if (transactions.ContainsKey(id) == false)
    {
        var user = GetByID(id);
        // note: for Cancel to work, oldVersion should hold a snapshot/copy
        transactions.Add(id, new EntityTransaction<IUser, IUserEvent>
        {
            oldVersion = user,
            newVersion = user,
            events = new List<IUserEvent>()
        });
    }
    var transaction = transactions[id];
    foreach (var ev in events)
    {
        transaction.newVersion.When(ev);
        transaction.events.Add(ev);
    }
    return transaction.newVersion;
}
Then in your cancel, you simply substitute the old version for the new if you're cancelling a transaction.
Make sense?

Doesn't CQRS/ES break the persistence agnosticism of DDD?

The domain model in DDD should be persistence-agnostic.
CQRS dictates that I fire events for everything I want to have in my read model (and, by the way, that I split my model into a write model and at least one read model).
ES dictates that I fire events for everything that changes state, and that my aggregate roots must handle the events themselves.
This does not seem very persistence-agnostic to me.
So how can DDD and CQRS/ES be combined without the persistence technology having a heavy impact on the domain model?
Is the read model also in the DDD domain model? Or outside of it?
Are the CQRS/ES events the same as DDD domain events?
Edit:
What I took out of the answers is the following:
Yes, with an ORM the implementation of the domain model objects will differ from one using ES.
The question is the wrong way around: first write the domain model objects, then decide how to persist (more event-like => ES, more data-like => ORM, ...).
But I doubt that you will ever be able to use ES (without big additions/changes to your domain objects) if you did not make this decision up front, and using an ORM without deciding it up front will also cause a lot of pain. :-)
Commands
Boiled down to its essence, CQRS means that you should split your reads from your writes.
Typically, a command arrives in the system and is handled by some sort of function that then returns zero, one, or many events resulting from that command:
handle : cmd:Command -> Event list
Now you have a list of events. All you have to do with them is to persist them somewhere. A function to do that could look like this:
persist : evt:Event -> unit
However, such a persist function is purely an infrastructure concern. The client will typically only see a function that takes a Command as input and returns nothing:
attempt : cmd:Command -> unit
The rest (handle, followed by persist) is handled asynchronously, so the client never sees those functions.
Queries
Given a list of events, you can replay them in order to aggregate them into the desired result. Such a function essentially looks something like this:
query : target:'a -> events:Event list -> Result
Given a list of events and a target to look for (e.g. an ID), such a function can fold the events into a result.
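Translated into Java for concreteness (Result.empty and Result.apply are assumed helpers), such a query is a left fold over the history:

    import java.util.List;
    import java.util.UUID;

    // Fold the event history into the desired result, starting from an empty state.
    static Result query(UUID target, List<Event> events) {
        Result result = Result.empty(target);
        for (Event event : events) {
            result = result.apply(event);  // each event transforms the accumulated result
        }
        return result;
    }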
Persistence ignorance
Does this force you to use a particular type of persistence?
None of these functions are defined in terms of any particular persistence technology. You can implement such a system with
In-memory lists
Actors
Event Stores
Files
Blobs
Databases, even
etc.
Conceptually, it does force you to think about persistence in terms of events, but that's no different than the approach with ORMs that force you to think about persistence in terms of Entities and relationships.
The point here is that it's quite easy to decouple a CQRS+ES architecture from most implementation details. That's usually persistence-ignorant enough.
A lot of the premises in your question are very binary/black-and-white. I don't think DDD, CQRS, or Event Sourcing are that prescriptive -- there are many possible interpretations and implementations.
That said, only one of your premises bothers me (emphasis mine):
ES dictates that I fire events for everything that changes state, and that my aggregate roots must handle the events themselves.
Usually ARs emit events -- they don't handle them.
In any case, CQRS and ES can be implemented to be completely persistence agnostic (and usually are). Events are stored as a stream, which can be stored in a relational database, a NoSQL database, the file system, in-memory, etc. The storing of events is usually implemented at the boundaries of the application (I think of this as infrastructure), and domain models have no knowledge of how their streams are stored.
Similarly, read models can be stored in any imaginable storage medium. You can have 10 different read models and projections, with each of them stored in a different database and different format. Projections just handle/read the event stream, and are otherwise completely decoupled from the domain.
It does not get any more persistence agnostic than that.
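For illustration, a projection over such a stream can be as small as this sketch (the ViewStore, the event type, and all accessors are assumptions):

    // A read-model projection: it consumes events and updates a denormalized view.
    // It knows nothing about the aggregate that produced the events, nor about
    // how the event stream itself is stored.
    public class UserListProjection {
        private final ViewStore viewStore;  // any storage: SQL table, document store, cache...

        public UserListProjection(ViewStore viewStore) {
            this.viewStore = viewStore;
        }

        // invoked for each event read from the stream
        public void on(UserRegisteredEvent event) {
            viewStore.upsert(event.getUserId(), new UserListEntry(event.getEmail()));
        }
    }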
I'm not sure how orthodox this is, but a current event-sourced entity model I have does something like the following, which might illustrate the difference (C# example):
public interface IEventSourcedEntity<IEventTypeICanRespondTo> {
    void When(IEventTypeICanRespondTo ev);
}

public interface IUser {
    bool IsLoggedIn { get; }
}

public class User : IUser, IEventSourcedEntity<IUserEvent> {
    public bool IsLoggedIn { get; private set; }

    public virtual void When(IUserEvent ev) {
        if (ev is LoggedInEvent) {
            IsLoggedIn = true;
        }
    }
}
A very simple example, but you can see here that how (or even if) the event is persisted is outside the domain object. You can easily do that through a repository. Likewise CQRS is respected, because how I read the value is separate from how I set it. So, for example, say I have multiple devices for a user, and only want them logged in once there are more than two?
public class MultiDeviceUser : IUser, IEventSourcedEntity<IUserEvent> {
    private List<LoggedInEvent> _logInEvents = ...;

    public bool IsLoggedIn {
        get {
            return _logInEvents.Count > MIN_NUMBER_OF_LOGINS;
        }
    }

    public void When(IUserEvent ev) {
        if (ev is LoggedInEvent loggedIn) {
            _logInEvents.Add(loggedIn);
        }
    }
}
To the calling code, though, your actions are the same.
var ev = new LoggedInEvent();
user.When(ev);
if (user.IsLoggedIn) ...
I think you can still decouple your domain from the persistence mechanism by using a satellite POCO. You can then implement your specific persistence mechanism around that POCO, and let your domain use it as a snapshot/memento/state.
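A sketch of that idea (in Java rather than C#; all names invented):

    // Plain state object: no behaviour, trivially serializable by any persistence layer.
    public class UserState {
        public String id;
        public boolean loggedIn;
    }

    // The domain entity wraps the state and owns the behaviour.
    public class User {
        private final UserState state;

        public User(UserState state) { this.state = state; }

        public void when(LoggedInEvent event) { state.loggedIn = true; }

        public UserState snapshot() { return state; }  // handed to persistence as a memento
    }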

Should a domain model keep itself consistent using events?

I am working on an application where we try to use a domain model. The idea is to keep the business logic inside the objects in the domain model. A lot is now done by objects subscribing to related objects in order to react to changes in them, through PropertyChanged and CollectionChanged. This works OK, except in the following cases:
Complex actions: where a lot of changes should be handled as a group (and not as individual property/collection changes). Should I, and how can I, 'build' transactions?
Persistency: I use NHibernate for persistence, and it also uses the public property setters of classes. When NHibernate hits a property, a lot of business logic runs (which seems unnecessary). Should I use custom setters for NHibernate?
Overall it seems that pushing all logic into the domain model makes the domain model rather complex. Any ideas?
Here's a 'sample' problem (sorry for the crappy tooling I use):
You can see that the Project, my container, and the objects below it react to each other by subscribing. Changes to Network are made via the NetworkEditor, but this editor has no knowledge of NetworkData. This data might sometimes even be defined in another assembly. The flow goes from user -> NetworkEditor -> Network -> NetworkData and on to all other interested objects. This does not seem to scale.
I fear that the combination of DDD and PropertyChanged/CollectionChanged events might not be the best idea. The problem is that if you base your logic on these events, it is extremely hard to manage the complexity, as one PropertyChanged leads to another and another, and soon enough you lose control.
Another reason why PropertyChanged events and DDD don't exactly fit is that in DDD every business operation should be as explicit as possible. Keep in mind that DDD is supposed to bring technical stuff into the world of business, not the other way around. Basing things on PropertyChanged/CollectionChanged doesn't seem very explicit.
In DDD the main goal is to keep consistency inside the aggregate; in other words, you need to model the aggregate in such a way that whatever operation you invoke, the aggregate remains valid and consistent (if the operation succeeds, of course).
If you model it right, there is no need to worry about 'building' transactions: an operation on an aggregate should be a transaction in itself.
I don't know what your model looks like, but you might consider moving the responsibilities one level 'up' in the aggregate tree, quite possibly adding additional logical entities in the process, instead of relying on PropertyChanged events.
Example:
Let's assume you have a collection of payments with statuses, and whenever a payment changes you want to recalculate the total balance of customer orders. Instead of subscribing to changes on the payments collection and calling a method on the customer when the collection changes, you might do something like this:
public class CustomerOrder
{
    public List<Payment> Payments { get; }
    public Balance BalanceForOrder { get; }

    public void SetPaymentAsReceived(Guid paymentId)
    {
        Payments.First(p => p.PaymentId == paymentId).Status = PaymentStatus.Received;
        RecalculateBalance();
    }
}
You might have noticed that we recalculate the balance of a single order, not the balance of the entire customer. In most cases that is fine, as the customer belongs to another aggregate and its balance can simply be queried when needed. That is exactly the part that shows the 'consistency only within the aggregate' idea: we don't care about any other aggregate at this point, we only deal with the single order. If that's not acceptable for the requirements, then the domain is modeled incorrectly.
My point is that in DDD there is no single model that fits every scenario; you have to understand how the business works to be successful.
If you look at the example above, you'll see that there is no need to 'build' the transaction: the entire transaction is located in the SetPaymentAsReceived method. In most cases, one user action should lead to one particular method on an entity within the aggregate; this method explicitly corresponds to a business operation (and of course it may call other methods).
As for events in DDD, there is a concept of Domain Events; however, these are not directly related to the PropertyChanged/CollectionChanged technical events. Domain Events indicate business operations (transactions) that have been completed by an aggregate.
Overall it seems that pushing all logic into the domain model makes the domain model rather complex
Of course it does, as it is supposed to be used for scenarios with complex business logic. However, if the domain is modeled correctly, then this complexity is easy to manage and control, and that's one of the advantages of DDD.
Added after the example was provided:
OK, what about creating an aggregate root called Project? When you build the aggregate root from the Repository, you can fill it with NetworkData, and the operation might look like this:
public class Project
{
    protected List<Network> networks;
    protected List<NetworkData> networkDatas;

    public void Mutate(string someKindOfNetworkId, object someParam)
    {
        var network = networks.First(n => n.Id == someKindOfNetworkId);
        var someResult = network.DoSomething(someParam);
        networkDatas.Where(d => d.NetworkId == someKindOfNetworkId)
                    .ToList()
                    .ForEach(d => d.DoSomething(someResult, someParam));
    }
}
The NetworkEditor would not operate on Network directly, but rather through Project, using the NetworkId.
