I'm learning DDD and Hexagonal architecture, and I think I've got the basics. However, there's one thing I'm not sure how to solve: how do I show data to the user?
So, for example, I have a simple domain with a Worker entity that has some functionality (some methods cause the entity to change) and a WorkerRepository so I can persist Workers. I have an application layer with some commands and a command bus to manipulate the domain (like creating Workers and updating their work hours, persisting the changes), and an infrastructure layer which has the implementation of the WorkerRepository and a GUI application.
In this application I want to show all workers with some of their data, and be able to modify them. How do I show the data?
I could give it a reference to the implementation of WorkerRepository.
I don't think that's a good solution, because this way I could insert new Workers into the repository, skipping the command bus. I want all changes to go through the command bus.
Okay then, I'd split the WorkerRepository into WorkerQueryRepository and WorkerCommandRepository (as per CQRS), and give the GUI a reference only to the WorkerQueryRepository. That's still not a good solution, because the repo gives back Worker entities which have methods that change them, and how will those changes be persisted?
Should I create two types of Repositories? One would be used in the domain and application layers, and the other would be used only for providing data to the outside world. The second one wouldn't return full-fledged Worker entities, only WorkerDTOs containing just the data the GUI needs. That way, the GUI has no way to change Workers other than through the command bus.
Is the third approach the right way? Or am I wrong to force all changes through the command bus?
Should I create two types of Repositories? One would be used in the domain and application layers, and the other would be used only for providing data to the outside world. The second one wouldn't return full-fledged Worker entities, only WorkerDTOs containing just the data the GUI needs.
That's the CQRS approach; it works pretty well.
Greg Young (2010)
CQRS is simply the creation of two objects where there was previously only one. The separation occurs based upon whether the methods are a command or a query (the same definition that is used by Meyer in Command and Query Separation, a command is any method that mutates state and a query is any method that returns a value).
The current term for the WorkerDTO you propose is "Projection". You'll often have more than one; that is to say, you can have a separate projection for each view of a worker in the GUI. (That has the neat side effect of making the view easier -- it doesn't need to think about the data that it is given, because the data is already formatted usefully).
Another way of thinking of this, is that you have a "write-only" representation (the aggregate) and "read-only" representations (the projections). In both cases, you are reading the current state from the book of record (via the repository), and then using that state to construct the representation you need.
As the read models don't need to be saved, you are probably better off thinking factory, rather than repository, on the read side. (In 2009, Greg Young used "provider", for this same reason.)
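A minimal sketch of what that read side could look like; the names (WorkerProjection, WorkerProjectionProvider) and the schema are illustrative, not prescribed by the answer:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # immutable: the GUI cannot mutate domain state through it
class WorkerProjection:
    worker_id: str
    name: str
    weekly_hours: int

class WorkerProjectionProvider:
    """Read side: builds display-ready projections, never returns Worker entities."""

    def __init__(self, connection):
        self._connection = connection  # assumed DB-API-style connection

    def list_workers(self) -> list[WorkerProjection]:
        rows = self._connection.execute(
            "SELECT id, name, weekly_hours FROM workers"
        )
        return [WorkerProjection(*row) for row in rows]
```

Because the projection is a frozen value, the GUI literally has no operation available to it other than going back through the command bus.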
Once you've taken the first step of separating the two objects, you can start to address their different use cases independently.
For instance, if you need to scale out read performance, you have the option to replicate the book of record to a bunch of slave copies, and have your projection factory load from the slaves, instead of the master. Or to start exploring whether a different persistence store (key value store, graph database, full text indexer) is more appropriate. Udi Dahan reviews a number of these ideas in CQRS - but different (2015).
"read models don't need to be saved" Is not correct.
It is correct; but it isn't perhaps as clear and specific as it could be.
We don't need to create a durable representation of a read model, because all of the information that describes the variance between instances of the read model has already been captured by our writes.
We will often want to cache the read model (or a representation of it), so that we can amortize the work of creating the read model across many queries. And various trade offs may indicate that the cached representations should be stored durably.
But if a meteor comes along and destroys our cache of read models, we lose a work investment, but we don't lose information.
Related
I was asked to implement CQRS/Event sourcing patterns into a legacy web application, in order to prepare to migrate it from a monolithic/state oriented model to a distributed, service oriented app.
I have some questions on how I can design a Domain oriented code bundle that would connect the legacy entities strongly coupled to database, with a new Event sourced model.
The first things I did were:
writing a small "framework" for CQRS/ES, with classes like AggregateRoot, DomainEvent, Command, Handlers, Messaging, Eventstore, AggregateIds, etc.
trying to group and "migrate" the legacy Entities into some Aggregates to reconstruct all the history and states of the app into EventSourced Aggregates
plugging some Command dispatching into the old controllers, in order to let the app work as is, but also to feed the new CQRS/ES system on the side.
The context:
The legacy app contains several entities, mapped to the database, that hold the model layer. (Our domain is Human Resources / manpower.)
Let's say we have those existing entities:

- Worker, with various fields and related entities (OneToOne, OneToMany), like:
  - name
  - address (1-1)
  - competences (1-N)
- Society, in which the worker works, with various fields and related entities (OneToOne, OneToMany), like:
  - name
  - address (1-1)
  - hours
- Contract, with various fields and related entities (OneToOne, OneToMany), like:
  - address (1-1)
  - Worker (1-1)
  - Society (1-1)
  - documents (1-N)
  - days (1-N)
  - hours
  - etc.
From this legacy model, I designed a MissionAggregate that holds:
A database-independent ID, like a UUID
some Value Objects: address, days (they were entities in the legacy model; they became VOs here)
I also designed a WorkerAggregate and a SocietyAggregate, with fields and UUIDs, and in the MissionAggregate I added:
a reference to WorkerAggregate's UUID
a reference to SocietyAggregate's UUID
As I said earlier, my aim is to leave the legacy app as is, but just introduce in the CRUD controller's methods some calls to dispatch Commands to the new CQRS system.
For example:
After flushing a newly created Contract to the database, I want to dispatch a "CreateMissionCommand" to the new command bus.
It targets the appropriate Command Handler, which takes the command's data, passes it to a newly created Aggregate with a new UUID, and stores a "MissionCreatedDomainEvent" in the EventStore.
The DomainEvent is indexed with an AggregateId, a playhead, and has a payload which contains the fields necessary to be applied to and build the MissionAggregate.
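A rough sketch of that hook and handler, using the names above (CreateMissionCommand, MissionCreatedDomainEvent); the command_bus and event_store APIs are invented for illustration:

```python
import uuid
from dataclasses import dataclass

@dataclass(frozen=True)
class CreateMissionCommand:
    mission_id: str
    payload: dict  # fields needed to build the MissionAggregate

@dataclass(frozen=True)
class MissionCreatedDomainEvent:
    aggregate_id: str
    playhead: int  # position in the aggregate's event stream
    payload: dict

class MissionAggregate:
    def __init__(self, mission_id: str):
        self.id = mission_id
        self.uncommitted_events: list = []

    @classmethod
    def create(cls, mission_id: str, payload: dict) -> "MissionAggregate":
        mission = cls(mission_id)
        mission.uncommitted_events.append(
            MissionCreatedDomainEvent(mission_id, playhead=0, payload=payload)
        )
        return mission

class CreateMissionHandler:
    """Handles the command dispatched from the legacy controller after the flush."""

    def __init__(self, event_store):
        self._event_store = event_store  # assumed: append(stream_id, events)

    def handle(self, command: CreateMissionCommand) -> None:
        mission = MissionAggregate.create(command.mission_id, command.payload)
        self._event_store.append(mission.id, mission.uncommitted_events)

def after_contract_flush(contract_fields: dict, command_bus) -> None:
    # the single line added to the legacy controller after the ORM flush
    command_bus.dispatch(CreateMissionCommand(str(uuid.uuid4()), contract_fields))
```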
The newly created Contract now has its usual lifecycle in the app, with all the updates the legacy app makes to it. But I also need to reflect all those changes in the corresponding EventSourcedAggregate, so every time there is a flush to the database in the app, I dispatch a Command that translates the "CRUD-like operations" of the legacy app into a Domain-oriented/Command-oriented pattern.
To sum up the workflow is:
A legacy CRUD operation occurs and flushes some changes to the Contract Entity
With just one line of code in the controller, I dispatch a command built with the necessary fields (the AggregateId of the MissionAggregate... which I need to have stored somewhere... see the problems below) to the Domain command bus, so the impact on the existing code base is very low.
The bus passes the command to the corresponding command handler
The handler loads the aggregate and applies the changes by calling the appropriate Aggregate method
Then, after some validation, the aggregate raises and stores the appropriate event
My problems and questions (some of them at least) are:
I feel like I am rewriting big portions of the legacy app, with the same kinds of relations between the Aggregates that I have between the Entities, and with the same types of validations, checks, etc.
Having references to both the WorkerAggregate's and SocietyAggregate's UUIDs in MissionAggregate implies that I have to build those aggregates too (and hence dispatch commands from the legacy app when the Worker and Society entities are flushed). Can't I have references to just the Worker entity's id and the Society entity's id?
How can I avoid an eternally growing MissionAggregate? The Contract Entity is quite huge; it has a lot of fields that are constantly updated (hours, days, documents, etc.). If I want to store all those events, I need a large MissionAggregate to reflect all those changes, and so I need tons of CommandHandlers that react to all the add, update, etc. Commands that I am going to dispatch from the legacy app.
How "free" is an Aggregate from the Root entity it is supposed to refer to ? For example, a Contract Entity needs to relate somewhere to it's related Mission Aggregate, like for example when I want to dispatch a Command from the app, just after the legacy code having flushed something on the Entity. Where to store this relation? In the Entity itself, in a AggregateId field? in the Aggregate, should I have a ContractId field? Or should I have some kind of Mapping Table somewhere that holds the relationship between Contract ID and MissionAggregate ID?
What to do with the past? Should I migrate all the existing data through a script that generates Aggregates and events on all the historical data?
Thanks in advance for your time.
You have a huge task ahead of you, let's try to break it down.
It's best to build this new part of the system in isolation from the legacy codebase; otherwise you're going to have your hands tied at every turn.
Create a separate layer in your project for these new requirements. We're going to call it "bubble" from now on. This bubble will be like a greenfield project, with its own structure, dependencies, etc. There will be no direct communication between the bubble and the legacy; communication will happen through another dedicated translation layer, which we'll call "Anti-Corruption Layer" (ACL).
ACL
It is like an API between two systems.
It translates calls from the bubble to the legacy and vice-versa. Its purpose is to prevent one system from corrupting or influencing the other. This way you can keep building/maintaining each system independently from each other.
At the same time, the ACL allows one system to consume the other, and reuse logic, validations, rules, etc.
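A minimal sketch of such an ACL; the UpdateMissionHours command and the id-mapping lookup are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UpdateMissionHours:
    aggregate_id: str
    hours: float

class MissionAntiCorruptionLayer:
    """The only door between legacy and bubble: legacy entities stop here,
    bubble commands start here."""

    def __init__(self, command_bus, id_mapping):
        self._command_bus = command_bus
        self._id_mapping = id_mapping  # assumed: legacy Contract id -> aggregate UUID

    def contract_hours_changed(self, legacy_contract) -> None:
        # translate the legacy entity into the bubble's own vocabulary
        aggregate_id = self._id_mapping[legacy_contract.id]
        self._command_bus.dispatch(
            UpdateMissionHours(aggregate_id, hours=legacy_contract.hours)
        )
```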
To answer your questions directly:
I feel like I am rewriting big portions of the legacy app, with the same kinds of relations between the Aggregates that I have between the Entities, and with the same types of validations, checks, etc.
With the ACL, you can resort to calling validations and reuse implementations from the legacy code. This will allow you time to rewrite things as needed or as possible.
You may not need to rewrite the entire system, though. If your goal is to implement CQRS and Event Sourcing, and you can achieve this goal while keeping most or part of the legacy system, I would say do it. Unless, of course, one of the goals is to completely replace the old system. Otherwise, keep it; write as little code as possible.
Suggested workflow:
Keep the CQRS and Event Sourcing system in the bubble
Do not bring these new frameworks into legacy
Make the legacy Controller issue method calls to the ACL
The ACL will convert these calls into Commands and dispatch them
Any events will be caught by your Event Sourcing framework
Results will be persisted to the bubble's database
The bubble's database can be a different schema in the same database or can be a different database altogether. But you'll have to think about synchronization, and that's a topic of its own. To reduce complexity, I recommend a different schema in the same database.
Having references to both the WorkerAggregate's and SocietyAggregate's UUIDs in MissionAggregate implies that I have to build those aggregates too (and hence dispatch commands from the legacy app when the Worker and Society entities are flushed). Can't I have references to just the Worker entity's id and the Society entity's id?
How can I avoid an eternally growing MissionAggregate? The Contract Entity is quite huge; it has a lot of fields that are constantly updated (hours, days, documents, etc.). If I want to store all those events, I need a large MissionAggregate to reflect all those changes, and so I need tons of CommandHandlers that react to all the add, update, etc. Commands that I am going to dispatch from the legacy app.
You should aim for small aggregates. Huge aggregates are likely to degrade performance and cause concurrency problems.
If you anticipate having a huge aggregate, it is best to rethink it and try to break it down. Ask what fields/properties change together - these are possibly a different aggregate.
Also, when you speak about CQRS, you generally lean towards a task-based way of doing things in your system.
Think of a traditional web application, where you have a huge page with lots of fields that are all sent to the server in one batch when the user saves.
Now, contrast it with a modern web app where the user changes small portions of data at each step. If you think about your system this way you'll find those smaller aggregates.
P.S. You don't need to rebuild your interfaces for this. If your legacy system has those huge pages, you could have logic in the controllers to detect which fields were changed and issue the appropriate commands.
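A sketch of that controller logic, with invented, task-based command names (UpdateWorkerAddress, LogWorkedHours):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UpdateWorkerAddress:
    worker_id: str
    address: str

@dataclass(frozen=True)
class LogWorkedHours:
    worker_id: str
    hours: float

def on_big_form_submit(old: dict, new: dict, worker_id: str, command_bus) -> None:
    """Diff the submitted form and dispatch one small command per change,
    instead of a single huge UpdateWorker command."""
    if old["address"] != new["address"]:
        command_bus.dispatch(UpdateWorkerAddress(worker_id, new["address"]))
    if old["hours"] != new["hours"]:
        command_bus.dispatch(LogWorkedHours(worker_id, new["hours"] - old["hours"]))
```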
How "free" is an Aggregate from the Root entity it is supposed to refer to ? For example, a Contract Entity needs to relate somewhere to it's related Mission Aggregate, like for example when i want to dispatch a Command from the app, just after the legacy code having flushed something on the Entity. Where to store this relation ? In the Entity itself, in a AggregateId field ? in the Aggregate, should i have a ContratId field ? Or should i have some kind of Mapping Table somewhere that holds the relationship between Contract ID and MissionAggregate ID?
Aggregates represent a conceptual whole. They are like atoms, indivisible things. You should always refer to an aggregate by its Root Entity Id, and never to a Child Entity Id: looking from the outside, there are no children.
An aggregate should be loaded as a whole and persisted as a whole. One more reason to have small aggregates.
An aggregate can be comprised of a single entity. Or it can have more entities and value objects, forming a graph, but one entity will be elected as the Root and will hold references to its children. Child entities and value objects should not hold references to their parents. The dependency is not bi-directional.
If Contract is an entity inside the Mission aggregate, the Contract should not have a reference to its parent.
But, if your Contract and Mission are different aggregates, then they can reference each other by their Ids.
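A minimal sketch of the two styles of reference, assuming Mission and Worker are separate aggregates:

```python
from dataclasses import dataclass

@dataclass
class Mission:
    id: str
    worker_id: str   # reference to another aggregate: its Root id only, never the object
    society_id: str  # likewise; no Worker/Society instances live inside Mission

@dataclass
class Worker:
    id: str
    name: str
    # no back-reference to Mission: children and peers don't point back at parents
```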
What to do with the past? Should I migrate all the existing data through a script that generates Aggregates and events on all the historical data?
That's a question for the business experts. Do they need it? If they don't, then don't implement it just for the sake of doing so. Every decision you make should be geared towards satisfying a business need and generating real value for it, considering the costs and tradeoffs.
Some people say that code is a liability, not an asset, and I agree to some extent: every line of code you write needs to be tested and supported. Don't write any code that is not really necessary.
Also, have a look at this article about the Strangler Pattern, which shows how to migrate a legacy system by gradually replacing specific pieces of functionality with new applications and services.
If you have a chance, watch this course at Pluralsight (paid): Domain-Driven Design: Working with Legacy Projects. The author presents practical approaches for dealing with this kind of task.
I hope this has given you some insight.
I don't want to spoil your game. Everybody knows how cool it is to rewrite something from scratch. It's a challenge, it's fun, it's exciting. However...
migrate it from a monolithic/state oriented model to a distributed, service oriented app
CQRS/Event Sourcing won't solve any of your problems, and it won't help you distribute the app in any reasonable way. If you just generate events on the CRUD operations, you'll have a large tangled mess of dependencies between the parts. Every part that needs data will have to call a couple of "services" (i.e. tables) to get it, then push data elsewhere and generate events that some other parts react to. It will be a mess. Usually this is called a distributed monolith.
This is also the reason you already see problems with it. These problems won't go away, because you are essentially building the same system in the same way, but this time it'll be more complex.
Where to go from here
The very first thing is always: have a clear goal. You said you want a service-oriented architecture. Why? Are there parts that need different scaling or different resources? Are they managed by different teams with different life-cycles? Maybe you already have all this, I don't know, but if not, that's your first task.
Then: the parts you do want to pull out can't be just CRUD things. Those will not be independent, so whether your goal (see the point above!) is scaling or a separate team, you won't reach it! To be independent, you'll have to pull out the behavior together with the data, in such a way that the service can operate on its own.
You can't just throw buzzwords at it and hope for the best. I'd suggest to just ignore all the hype and buzzwords and think about the goal you want to reach.
For example: I need a million workers to be able to log their time within 10 minutes total. That means I need a "service" that enables workers to log their time through a web interface. So let's create that as a completely independent piece with its own database, so it can be scaled to 100 nodes when it needs to be. Export data to billing automatically every hour or so.
I've read a few posts relating to this, but I still can't quite grasp how it all works.
Let's say, for example, I was building a site like Stack Overflow, with two pages: one listing all the questions, another where you ask/edit a question. A simple, CRUD-based web application.
If I used CQRS, I would have a separate system for reads/writes, separate DBs, etc. Great.
Now, my issue comes down to how to update the read state (which is, after all, in a DB of its own).
The flow, I assume, is something like this:
WebApp => User submits question
WebApp => System raises 'Write' event
WriteSystem => 'Write' event is picked up and saves to 'WriteDb'
WriteSystem => 'UpdateState' event raised
ReadSystem => 'UpdateState' event is picked up
ReadSystem => System updates its own state ('ReadDb')
WebApp => Index page reads data from 'Read' system
Assuming this is correct, how is this significantly different from a CRUD system reading/writing from the same DB? Putting aside CQRS advantages like separate read/write system scaling, rebuilding state, separation of domain boundaries, etc., what problems are solved from a persistence standpoint? Is lock contention avoided?
I could achieve a similar advantage by either using queues to achieve single-threaded saves in a multi-threaded web app, or by simply replicating data between a read DB and a write DB, could I not?
Basically, I'm just trying to understand why, if I were building a CRUD-based web application, I would care about CQRS, from a pragmatic standpoint.
Thanks!
Assuming this is correct, how is this significantly different from a CRUD system reading/writing from the same DB? Putting aside CQRS advantages like separate read/write system scaling, rebuilding state, separation of domain boundaries, etc., what problems are solved from a persistence standpoint? Is lock contention avoided?
The problem here is:
"Putting aside CQRS advantages …"
If you take away its advantages, it's a little bit difficult to argue what problems it solves ;-)
The key to understanding CQRS is that you separate reading data from writing data. This way you can optimize each database as needed: your write database is highly normalized, and hence you can easily ensure consistency. Your read database, in contrast, is denormalized, which makes your reads extremely simple and fast: effectively, they all become SELECT * FROM ….
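A sketch of that idea in Python with SQLite-flavored SQL; the schema and the event's fields (question_list_view, event.tags, and so on) are invented for illustration:

```python
# Write side: normalized tables, reached only through the domain model.
INSERT_QUESTION = "INSERT INTO questions (id, author_id, title, body) VALUES (?, ?, ?, ?)"
INSERT_TAG_LINK = "INSERT INTO question_tags (question_id, tag_id) VALUES (?, ?)"

# Read side: one denormalized row per question, so the index page is one SELECT.
UPSERT_LIST_ROW = (
    "INSERT OR REPLACE INTO question_list_view "
    "(question_id, title, author_name, tag_names, answer_count) "
    "VALUES (?, ?, ?, ?, ?)"
)
READ_INDEX_PAGE = "SELECT * FROM question_list_view ORDER BY question_id DESC"

def on_question_created(event, read_db) -> None:
    """Projection handler: flattens a write-side event into the read table."""
    read_db.execute(
        UPSERT_LIST_ROW,
        (event.question_id, event.title, event.author_name, ",".join(event.tags), 0),
    )
```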
Under the assumption that a website such as Stack Overflow is read from far more often than written to, this makes a lot of sense, as it allows you to optimize the system for fast responses and a great user experience, without sacrificing consistency.
Additionally, if combined with event-sourcing, this approach has other benefits, but for CQRS alone, that's it.
Shameless plug: My team and I have created a comprehensive introduction to CQRS, DDD and event-sourcing, maybe this helps to improve understanding as well. See this website for details.
A good starting point would be to review Greg Young's 2010 essay, where he tries to clarify the limited scope of the CQRS pattern.
CQRS is simply the creation of two objects where there was previously only one.... This separation however enables us to do many interesting things architecturally, the largest is that it forces a break of the mental retardation that because the two use the same data they should also use the same data model.
The idea of multiple data models is key, because you can now begin to consider using data models that are fit for purpose, rather than trying to tune a single data model to every case that you need to support.
Once we have the idea that these two objects are logically separate, we can start to consider whether they are physically separate. And that opens up a world of interesting trade offs.
what problems are solved from a persistence standpoint?
The opportunity to choose fit for purpose storage. Instead of supporting all of your use cases in your single read/write persistence store, you pull documents out of the key value store, and run graph queries out of the graph database, and full text search out of the document store, events out of the event stream....
Or not! if the cost benefit analysis tells you the work won't pay off, you have the option of serving all of your cases from a single store.
It depends on your application's needs.
A good overview and links to more resources here: https://learn.microsoft.com/en-us/azure/architecture/patterns/cqrs
When to use this pattern:
Use this pattern in the following situations:
- Collaborative domains where multiple operations are performed in parallel on the same data. CQRS allows you to define commands with enough granularity to minimize merge conflicts at the domain level (any conflicts that do arise can be merged by the command), even when updating what appears to be the same type of data.
- Task-based user interfaces where users are guided through a complex process as a series of steps or with complex domain models. Also useful for teams already familiar with domain-driven design (DDD) techniques. The write model has a full command-processing stack with business logic, input validation, and business validation to ensure that everything is always consistent for each of the aggregates (each cluster of associated objects treated as a unit for data changes) in the write model. The read model has no business logic or validation stack and just returns a DTO for use in a view model. The read model is eventually consistent with the write model.
- Scenarios where performance of data reads must be fine-tuned separately from performance of data writes, especially when the read/write ratio is very high, and when horizontal scaling is required. For example, in many systems the number of read operations is many times greater than the number of write operations. To accommodate this, consider scaling out the read model, but running the write model on only one or a few instances. A small number of write-model instances also helps to minimize the occurrence of merge conflicts.
- Scenarios where one team of developers can focus on the complex domain model that is part of the write model, and another team can focus on the read model and the user interfaces.
- Scenarios where the system is expected to evolve over time and might contain multiple versions of the model, or where business rules change regularly.
- Integration with other systems, especially in combination with event sourcing, where the temporal failure of one subsystem shouldn't affect the availability of the others.

This pattern isn't recommended in the following situations:

- Where the domain or the business rules are simple.
- Where a simple CRUD-style user interface and the related data access operations are sufficient.
- For implementation across the whole system. There are specific components of an overall data management scenario where CQRS can be useful, but it can add considerable and unnecessary complexity when it isn't required.
We've all heard that injecting a repository into an aggregate is a bad idea, but almost no one says why.
I will try to list here all the disadvantages of doing this, so we can measure the rightness of this statement.
The first thing that comes to mind is the Single Responsibility Principle.
It's true that by injecting a repository into an AR we are violating the SRP, because retrieving and persisting an aggregate is not the responsibility of the aggregate itself. But that only covers the "aggregate itself", not other aggregates. So does it apply to retrieving, from a repository, aggregates referenced by id? And what about storing them?
I used to think that an aggregate shouldn't even know that there is some sort of persistence in the system, because it doesn't have to exist. Aggregates can be created just for one procedure call and then discarded.
Now when I think of it, that's not right, because an aggregate root is an entity, and an entity only makes sense if it has some unique identity. So why would we need a unique identity if not for persisting? Even if it's just persistence in memory. Maybe for comparing, but in my opinion that's not the main reason behind the identity.
OK, let's assume that we retrieve and store OTHER aggregates from inside our aggregate using injected repositories. What are the other consequences besides the SRP violation?
For sure there is the problem of having no control over the persisting of aggregates, and retrieving becomes a kind of lazy loading, which is bad for the same reason (no control).
Because of that lack of control, we can end up persisting the same aggregate several times when it could have been persisted once, or loading the same aggregate a hundred times when it could have been loaded once, so performance suffers. There might also be a problem with stale data.
These reasons practically disqualify injecting a repository into an aggregate.
Here comes my main question: why can we inject repositories into a domain service, then?
Don't the same reasons apply there? It's just like moving logic out of the aggregate into a separate function and pretending it's something different.
To be honest, when I started to write this SO question, I had no good answer. But after hours of investigating the problem and writing this question, I came to a solution. Rubber duck debugging.
I'll post this question anyway for others having the same problems. Of course with my answer below.
Here are the places from which I'd recommend fetching aggregates (i.e. calling Repository.Get...()), in order of preference:
Application Service
Domain Service
Aggregate
We don't want Aggregates to fetch other Aggregates most of the time, because this blurs the lines, giving them orchestration powers which normally belong to the Application layer. You also raise the risk of the Aggregate overstepping its jurisdiction by modifying other Aggregates, which can result in contention and performance problems, not to mention that transactions become more difficult to analyze and the code base harder to reason about.
Domain Services are IMO a good place to fetch Aggregates when determining which aggregates to modify is domain logic per se. In your game example (which might not be the ideal context for DDD by the way), which units are affected by another unit's attack might be considered domain logic, thus you may not want to place it at the Application Service level. This rarely happens in my experience though.
Finally, Application Services are the default place where I call Repository.Get(...) for uniformity's sake and because this is the natural place to get a hold of the actors of the use case (usually only one Aggregate per transaction) and orchestrate calls to them.
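A minimal sketch of that default; the repository and unit-of-work collaborators (get/save/commit) are assumed, not prescribed by the answer:

```python
class LogHoursApplicationService:
    """Application layer: fetches the aggregate, invokes it, persists it.
    The aggregate itself never sees a repository."""

    def __init__(self, worker_repository, unit_of_work):
        self._workers = worker_repository
        self._uow = unit_of_work

    def log_hours(self, worker_id: str, hours: float) -> None:
        worker = self._workers.get(worker_id)  # the fetch happens here, not inside Worker
        worker.log_hours(hours)                # pure domain behavior
        self._workers.save(worker)
        self._uow.commit()                     # usually one aggregate per transaction
```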
That doesn't mean Aggregates should never have Repositories injected; there are exceptions, but the alternatives are almost always better.
So, as I wrote in the question, I found my answer in the process of writing that very question.
The best way to show this is by example:
When we have a simple (superficially) behavior, like one unit attacking another, we can write something like this:
unit.attack_unit(other_unit)
The problem is that, to attack a unit, we have to calculate damage, and to do that we need other aggregates, like the weapon and armor, which are referenced by id inside the unit. Since we cannot inject a repository into the aggregate, we have to move that attack_unit logic into a domain service, because we can inject the repository there. Now, where is the difference between injecting it into a domain service rather than into the unit aggregate?
The answer is: there is no difference. None of the consequences I described in the question will bite us. In both cases we load both units once, the attacking unit's weapon once, and the attacked unit's armor once. There also won't be stale data, even if we mutate the weapon object during the process and store it, because the weapon is retrieved and stored in one place.
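For concreteness, a sketch of the domain-service version being discussed; the repositories and the weapon/armor fields (power, protection, take_damage) are hypothetical:

```python
class AttackService:
    """Domain service: may receive repositories, unlike the Unit aggregate."""

    def __init__(self, units, weapons, armors):
        self._units, self._weapons, self._armors = units, weapons, armors

    def attack_unit(self, attacker_id: str, defender_id: str) -> None:
        attacker = self._units.get(attacker_id)
        defender = self._units.get(defender_id)
        weapon = self._weapons.get(attacker.weapon_id)  # loaded once per attack
        armor = self._armors.get(defender.armor_id)     # loaded once per attack
        defender.take_damage(max(0, weapon.power - armor.protection))
        self._units.save(defender)
```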
The problem shows up in a different example.
Let's create a use case where a unit can attack all other units in the game in one operation.
The problem lies in how we implement it. If we use the already defined unit.attack_unit and call it on all units in the game (iterating over them), then the weapon used to compute damage will be retrieved from the unit aggregate as many times as there are units in the game! But it could be retrieved just once!
It doesn't matter whether unit.attack_unit is a method of the unit aggregate or a domain service unit_attack_unit. It will still be the same: the weapon will be loaded too many times. To fix that, we simply have to change the implementation, and probably the interface along with it.
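Extending the hypothetical AttackService from the sketch above, the fixed implementation for this use case hoists the weapon lookup out of the loop (all_except is an assumed repository query):

```python
class AttackService:
    # ...same constructor and repositories as the earlier sketch...

    def attack_all_units(self, attacker_id: str) -> None:
        attacker = self._units.get(attacker_id)
        weapon = self._weapons.get(attacker.weapon_id)  # loaded once, not once per defender
        for defender in self._units.all_except(attacker_id):
            armor = self._armors.get(defender.armor_id)
            defender.take_damage(max(0, weapon.power - armor.protection))
            self._units.save(defender)
```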
Now at least we have an answer to the question "does moving logic from an aggregate method to a domain service (because we want to access the repository there) fix the problem?". No, it doesn't change a thing.
Injecting repositories into a domain service can be as dangerous as injecting them into an aggregate if used incorrectly.
This answers my SO question, but we still don't have a solution to the real problem.
What can we do if we have two use cases, one where a unit attacks one other unit, and a second where a unit attacks all other units, without duplicating domain logic?
One way is to pass all the needed aggregates as parameters to our aggregate method:
unit.attack_unit(other_unit, weapon, armor)
But what if we need five or more aggregates there? That's not a good way. Also, the application logic will have to know that all these aggregates are needed for an attack, which is a knowledge leak. When the attack_unit implementation changes, we might also have to update the method's interface. What is the purpose of encapsulation then?
So, if we can't access a repository to get the needed aggregates, how can we smuggle them in?
We can abandon the idea of referencing aggregates by ids, or pass all the needed aggregates in from the application layer (which means a knowledge leak).
Or maybe the reason for these problems is bad modelling?
Attacking another unit is indeed a unit's responsibility, but is damage calculation its responsibility? Of course not.
Maybe we need another object, like a value object MeleeAttack(weapon, armor); yet when we add more properties that can change the result of an attack, like enchantments on the unit, it gets more complicated.
Also, I think that we are now designing objects based on performance, not on our domain.
So from domain-driven design, we get performance-driven design. Is that what we want? I don't think so.
"So why would we need unique identity if not for persisting?" - think of an account scenario, where several John Smiths exist in your system. Imagine John Smith and John Smith Jr (who didn't enter the Jr in signup) both live at the same address. How do you tell them apart? Imagine I'm trying to write a recommendation engine based upon their past purchases . . . .
Identity is a quality of equality in DDD. If you don't have an identity distinct from your fields, then you're a Value Object.
What are the consequences of using a repository inside an aggregate vs. inside a domain service?
There's a reasonably strong argument that you shouldn't do either.
Riddle: when does an aggregate need to see the state of another aggregate?
The responsibility of an aggregate is to control change. Any command that would change the state of the domain model is dispatched to the aggregate root responsible for the integrity of the state in question. By definition, all of the state required to ensure that the command is currently permitted is contained within the aggregate boundary.
So there is never any need to peek at the data outside of the aggregate when making a change to the model.
In which case, you don't ever need to load another aggregate, which makes the "where" question moot.
Two clarifications:
Queries will often combine the state of multiple aggregates, and will often need to follow a reference from one aggregate to another. The principle above is satisfied because queries treat the domain model as read-only. You need the state to answer the query, but you don't need the invariant enforcement because you aren't changing anything.
Another case is when you need state from another aggregate to process a command properly, but slightly stale data is an acceptable risk. In that case, you query the "other" aggregate to get its state. If you were to run that query within the domain model itself, the right way to do so would be via a domain service.
In most cases, though, you'll be equally well served to run the query when generating the command (ie, in the client), or when handling the command (in the application, outside the domain). It would be very unusual for a business to consider domain service latency to be acceptable but client latency to be unacceptable.
(Disconnected clients are one case where that can be especially problematic; when the command is generated and then queued for a long period of time before being dispatched to the server).
I'll admit that I am still quite a newbie with DDD and even more so with CQRS. I also realize that DDD and/or CQRS might not be the right approach to every problem. Nevertheless, I like the principles but have some questions in the context of a current project.
The solution is a simulator that generates performance data based on the current configuration. Administrators can create and modify the specifications for simulations. Testers set some environmental conditions and run the simulator. The results are captured, aggregated and reported.
The solution consists of 3 component areas, each with its own use cases, domain logic and supporting data structures. As a result, a modular design seems appealing as a way to segregate logic and separate concerns.
The first area would be the administrative aspect which allows users to create and modify the specifications. This would be a CRUD heavy 'module'.
The second area would be for executing the simulations. The domain model would be similar to the first area but optimized for executing the simulation as opposed to providing a convenient model for editing.
The third area is reporting.
From this I believe that I have three Bounded Contexts, yes? I have three clear entry points into the application, three sets of domain logic and three different data models to support the domain logic.
My first instinct is to follow these lines and create three modules (assemblies) that encapsulate the domain layer for each area. Should I also have three separate databases? Maybe more than three to support write versus read?
I gather this may be preferred for CQRS but am not sure how to go about it. It appears to me that CQRS suggests a set of back-end processes that move data around. But if that's the case, and data persistence is cross-cutting (as DDD suggests), then doesn't my data access code need awareness of all of the domain objects? If so, then is there a benefit to having separate modules?
Finally, something I failed to mention earlier is that specifications are considered 'drafts' until published, which makes them available for simulation. My PublishingService needs knowledge of the domain model for both the first and second areas, so that when it responds to the SpecificationPublishedEvent it can read the specification, translate the model and persist it for execution. This makes me think I don't have three bounded contexts after all. Or am I missing something in my analysis?
You may have a modular UI for this, but I don't necessarily see three separate domains in what you are describing.
First off, in CQRS reporting is not directly a domain model concern, it is a facet of the separated Read Model which takes on the responsibility of presenting the domain state optimized for reporting.
Second just because you have different things happening in the domain is not necessarily a reason to bound them away from each other. I'd take a read through the blue DDD book to get a bit better feel for what BCs look like.
I don't really understand your domain well enough but I'll try to give some general suggestions.
Start with where you talked about your PublishingService. I see a Specification aggregate root which takes a few commands that probably look like CreateNewSpecification, UpdateSpecification and PublishSpecification.
The events look similar and probably feel redundant: SpecificationCreated, SpecificationUpdated, SpecificationPublished. Which kind of sucks, but a CRUD-heavy model doesn't have very interesting behaviors. I'd also suggest finding an automated way to deal with model/schema changes on this aggregate, which will be tedious if you don't use code generation, or handling the changes in a dynamic way that doesn't require you to build new events each time.
Also you might just consider not using event sourcing for such an aggregate root since it is so CRUD heavy.
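A sketch of the plain (non-event-sourced) aggregate with the publish behavior named above; the draft flag and the error type are assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PublishSpecification:
    specification_id: str

@dataclass(frozen=True)
class SpecificationPublished:
    specification_id: str

class Specification:
    def __init__(self, specification_id: str, draft: bool = True):
        self.id = specification_id
        self.draft = draft          # 'drafts' become available for simulation on publish
        self.pending_events: list = []

    def publish(self) -> None:
        if not self.draft:
            raise ValueError("specification is already published")
        self.draft = False
        self.pending_events.append(SpecificationPublished(self.id))
```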
The second thing you describe seems to be about starting a simulation which will run based on a Specification and produce data during that simulation (I assume). An event driven architecture makes sense here to decouple updating the reporting data from the process that is producing the data. This has huge benefits if you are producing large amounts of data to process.
However, it doesn't sound like a Simulation is necessarily the kind of AR that would benefit from Event Sourcing either, for a couple of reasons:
Simulation really takes only one Command which is something like StartSimulation
Simulation then produces events over its lifetime which represent what is happening internally in the simulation
Simulation doesn't seem to ever receive any other Commands that could depend on the current state of the Simulation
Simulation is not interacted with by multiple clients/users simultaneously, and as we pointed out, it isn't really interacted with at all
In general, domain modeling is very specific to each individual project so it's hard to give you all the information you need to build your domain model. It will come as a result of spending a great deal of time trying to understand your user's needs and the problem they are trying to solve with the software. It likely will go through multiple refinements as you develop insights into their process.
In "DDD" what is the best patterns for handling different versions of your entities, e.g. Entities in a list vs the full object. I would like to avoid the overhead of getting properties I do not need when displaying the entities in a list
Would you have a separate entity type used in lists, or just partially fill your full entity type?
Would you use inheritance?
I understand your urge to create "views" of models in the domain, but would recommend against it. Personally, I use the entire entity inside of the domain, regardless of the situation. The entity is the entity, and anything less or more just does not feel clean. That does not mean that I can't use a reference to the entity to help focus my use of the items in the list, though.
The entity does not cross the domain boundary in my implementation. Instead, I return a type of DTO and have application services that can abstract a view from it. This allows, for example, a presenter to generate the correct view model from a DTO and provide it to the view. I don't know if you are talking about operations in the domain services or in the application services, but there are a couple of things you can do that could be applied to either (or both).
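A sketch of that boundary, with invented names (WorkerDto, display_name):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WorkerDto:
    id: str
    display_name: str

class WorkerApplicationService:
    """The Worker entity never crosses this boundary; callers get DTOs."""

    def __init__(self, repository):
        self._repository = repository

    def get_worker(self, worker_id: str) -> WorkerDto:
        worker = self._repository.get(worker_id)  # full entity inside the boundary
        return WorkerDto(id=worker.id, display_name=worker.name)  # DTO crosses out
```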
You can do certain things to reduce the performance penalty of working with the entire entity in the domain layers as well. One thing to look at is a cache-aside implementation. When an entity is requested, check whether it is cached. If it is, return the cached version. If it isn't, pull it and then cache it before returning. When the entity is updated, evict it from the cache and do your update. I have purposely made my concrete repository implementations cache-aware to facilitate this. One other thing to consider with an approach like this is that it is beneficial to do as many fine-grained operations as possible. While that seems illogical at first, if entities are commonly "gotten" from your data store, it is easy to set up some logging to measure the ratio of cache hits to cache misses.
Coming full circle to your question: most lists I deal with are small, so I accept the penalty of loading each entity in its entirety. Assuming that most use cases will involve the user drilling into one or more of the items, those items are pre-cached thanks to the cache-aside implementation. The threshold is fluid, but I generally apply this approach to anything with fewer than twenty-five entities in a list.
For larger lists, I just use IDs. Most likely, the use case here is some sort of search result. Search results are commonly paged, for example, and this does not fit the above pattern. Instead, I use the larger list of IDs as a sliding range window over the entities I am interested in, which I then pass to a GetRangeById() method that all of my repositories have, written purposely to take a list of identifiers and load them one at a time so they are cached. In essence, this takes a large lightweight list and zeroes in on just the area I am interested in at a given point in time.
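A sketch of a cache-aware repository combining the cache-aside logic and the GetRangeById() idea described above; the store and cache interfaces (load/save, get/set/delete) are assumed:

```python
class CacheAwareWorkerRepository:
    """Cache-aside: check the cache first, fall back to the store, cache on miss."""

    def __init__(self, store, cache):
        self._store, self._cache = store, cache

    def get(self, worker_id: str):
        worker = self._cache.get(worker_id)
        if worker is None:                       # cache miss
            worker = self._store.load(worker_id)
            self._cache.set(worker_id, worker)   # populate for the next reader
        return worker

    def update(self, worker) -> None:
        self._cache.delete(worker.id)            # evict first, then persist
        self._store.save(worker)

    def get_range_by_id(self, worker_ids: list[str]) -> list:
        # loads one at a time on purpose, so each entity lands in the cache
        return [self.get(worker_id) for worker_id in worker_ids]
```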
With an approach like this, the important thing to realize is that it is highly scalable. It might not baseline as fast as a non-cached approach with small sets of data, but it will perform better with larger sets of data. There is an implied performance overhead at play here, but it degrades at a slower rate than a standard "load 'em up" pattern.
You can use the CQRS pattern to separate query processing from command processing, and you can do it even on a single database. In that case you would map your view models directly to the tables in the database (via NHibernate, for example). Commands (writes) would go through the real domain model and be persisted in the DB. Queries (like "get me a list of entities") would bypass the domain and go straight to the DB. There is no point in querying domain objects, because you don't actually invoke any business logic in them; you're just retrieving some data.
You can also extend this solution to full-featured CQRS by having separate stores for the command side and the query side. The query side would be synchronized by means of replication or pub/sub messaging.
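A Python-flavored sketch of the single-database variant: the query side below talks straight to the tables, while commands keep going through the domain model (the schema and connection API are assumptions):

```python
class QuestionListQueryService:
    """Query side: plain SQL against the same database, bypassing the domain model.
    Writes still go through the aggregates; this path is strictly read-only."""

    def __init__(self, connection):
        self._connection = connection  # assumed DB-API-style connection

    def list_questions(self, page: int, page_size: int = 20) -> list[dict]:
        rows = self._connection.execute(
            "SELECT id, title, answer_count FROM questions "
            "ORDER BY created_at DESC LIMIT ? OFFSET ?",
            (page_size, page * page_size),
        )
        # view models mapped straight from table rows, no entities involved
        return [{"id": r[0], "title": r[1], "answer_count": r[2]} for r in rows]
```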