I have been learning CQRS and DDD for a while. My question is how to manage commands. Especially commands, because commands can be more complex than queries. How can I write commands with nested DTOs?
The first thing you have to do is define your Domain Models. CQRS can be applied better when you're using DDD. Don't think about the persistence mechanism for now (e.g. your DB). That, surprisingly, will become almost an "implementation detail".
Commands should contain only the bare minimum information needed by the system for performing the operation at hand. For example, let's imagine a "Create Customer" command.
It will expose Name, EMail and the Customer ID. The Command instance has to be immutable: once initialized, it cannot be changed.
The Command Handler receives the Command instance, validates it against the business invariants and, if everything is fine, persists the data.
Command Handlers don't return data, they're "fire and forget".
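For illustration, a minimal sketch of such a command and handler might look like this (TypeScript; every name here and the repository shape are assumptions for the example, not an established API):

```typescript
// Immutable command: readonly fields, set once at construction.
class CreateCustomerCommand {
  constructor(
    public readonly customerId: string,
    public readonly name: string,
    public readonly email: string,
  ) {}
}

// Persistence port the handler writes through; its shape is an assumption here.
interface CustomerRepository {
  exists(customerId: string): Promise<boolean>;
  save(customer: { id: string; name: string; email: string }): Promise<void>;
}

// The handler validates business invariants and persists; it returns no data.
class CreateCustomerHandler {
  constructor(private readonly repository: CustomerRepository) {}

  async handle(command: CreateCustomerCommand): Promise<void> {
    if (!command.email.includes("@")) {
      throw new Error("Invalid e-mail address");
    }
    if (await this.repository.exists(command.customerId)) {
      throw new Error("Customer already exists");
    }
    await this.repository.save({
      id: command.customerId,
      name: command.name,
      email: command.email,
    });
  }
}
```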
Usually, with CQRS there are 2 distinct persistence storages, one for the Write side and another one for the Read side. Focus on your Writes first.
When reading about CQRS it is often mentioned that the write model should not depend on any read model (assuming there is one write model and up to N read models). This makes a lot of sense, especially since read models usually only become eventually consistent with the write model. Also, we should be able to change or replace read models without breaking the write model.
However, read models might contain valuable information that is aggregated across many entities of the write model. These aggregations might even contain non-trivial business rules. One can easily imagine a business policy that evaluates a piece of information that a read model possesses, and in reaction to that changes one or many entities via the write model. But where should this policy be located/implemented? Isn't this critical business logic that tightly couples information coming from one particular read model with the write model?
When I want to implement said policy without coupling the write model to the read model, I can imagine the following strategy: Include a materialized view in the write model that gets updated synchronously whenever a relevant part of the involved entities changes (when using DDD, this could be done via domain events). However, this denormalizes the write model, and is effectively a special read model embedded in the write model itself.
I can imagine that DDD purists would say that such a policy should not exist, because it represents a business invariant/rule that encompasses multiple entities (a.k.a. aggregates). I could probably agree in theory, but in practice, I often encounter such requirements anyway.
Finally, my question is simply: How do you deal with requirements that change data in reaction to certain conditions whose evaluation requires a read model?
First, any write model which validates commands is a read model (because at some point validating a command requires a read), albeit one that is optimized for the purpose of validating commands. So I'm not sure where you're seeing that a write model shouldn't depend on a read model.
Second, a domain event is implicitly a command to the consumers of the event: "process/consider/incorporate this event", in which case a write model processor can subscribe to the events arising from a different write model: from the perspective of the subscribing write model, these are just commands.
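As a rough sketch of that idea (TypeScript; the event and processor names are invented): the subscribing write model treats another model's event as an inbound command to incorporate.

```typescript
// Event published by some other write model.
interface InvoiceRaisedEvent {
  type: "InvoiceRaised";
  customerId: string;
  amount: number;
}

// From this write model's perspective, the event is just a command:
// "incorporate this invoice into the customer's balance".
class CustomerBalanceProcessor {
  private balances = new Map<string, number>();

  handle(event: InvoiceRaisedEvent): void {
    const current = this.balances.get(event.customerId) ?? 0;
    this.balances.set(event.customerId, current + event.amount);
  }
}
```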
Having read a lot about the topic and having thought hard about it myself, I attempt to answer my own question.
First, a clarification about the terms used. The write and read models themselves never have any dependency to one another. The corresponding command and query components might have instead. I will therefore call the entirety of the command component and its write model the command side, and the entirety of one particular query component and its read model a query side (of which there might be many).
So consider a command handler that is responsible for evaluating and executing a business policy. It takes a command DTO, validates it, loads part of the write model into memory, and applies changes to it in one atomic transaction. The question specifically was, whether this handler is allowed to query one of the query sides in order to inform its decision about what to do in the write model.
The answer would be a resounding NO. Here's why:
The command side would depend on one particular query side (it doesn't matter if you hide the dependency behind an interface – it is still there), so the query side cannot change independently.
Who actually guarantees that the command handler runs when it has to? The query side is certainly not the one responsible for it, and clients aren't either.
The command request is prolonged by a nested query request, which can hurt performance.
Instead, we can do the following:
Work with domain events raised by the write model and register a domain event handler in the command side that evaluates the policy (see the sketch after this list). This way it is guaranteed that the policy will be executed whenever it has to be.
If the performance allows it, this domain event handler can simply load as much of the write model as it requires to evaluate the business condition. Don't prematurely optimize – maybe the entities are small and can easily be loaded into memory.
If the performance does not allow it, denormalize the write model and maintain the required statistics using domain events. No one says that the write model cannot itself contain query-oriented data. Being a write model simply says that it is a model designed to do writes, and this necessarily must include some means to read as well.
Finally, if applying the policy is not an integral part of the domain logic itself, but rather just a use case, consider putting the responsibility of calling it into a client or another microservice, where it is totally fine to first query one of our query sides and afterwards call our command side with the appropriate parameters.
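Here is a minimal sketch of the first option, a domain event handler registered in the command side (TypeScript; the overtime policy and the repository methods are invented for illustration):

```typescript
// Domain event raised by the write model (name is illustrative).
interface HoursLoggedEvent {
  type: "HoursLogged";
  workerId: string;
  hours: number;
}

// Repository of the write model only; no query side involved.
interface WorkerWriteRepository {
  totalHoursThisWeek(workerId: string): Promise<number>;
  flagOvertime(workerId: string): Promise<void>;
}

// Registered inside the command side, so the policy is guaranteed to run
// whenever the triggering domain event is raised.
class OvertimePolicy {
  constructor(private readonly workers: WorkerWriteRepository) {}

  async on(event: HoursLoggedEvent): Promise<void> {
    const total = await this.workers.totalHoursThisWeek(event.workerId);
    if (total > 40) {
      await this.workers.flagOvertime(event.workerId);
    }
  }
}
```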
I was asked to implement CQRS/Event sourcing patterns into a legacy web application, in order to prepare to migrate it from a monolithic/state oriented model to a distributed, service oriented app.
I have some questions on how I can design a Domain-oriented code bundle that would connect the legacy entities, strongly coupled to the database, with a new event-sourced model.
The first things I did were:
writing a small "framework" for CQRS/ES, with classes like AggregateRoot, DomainEvent, Command, Handlers, Messaging, Eventstore, AggregateIds, etc.
trying to group and "migrate" the legacy Entities into some Aggregates to reconstruct all the history and states of the app into EventSourced Aggregates
plugging some Command dispatching into the old controllers in order to let the app work as is, but also to feed the new CQRS/ES system on the side.
The context:
The legacy app contains several entities, mapped to the database, that hold the model layer. Our domain is human resources (manpower).
Let's say we have those existing entities:
Worker, with various fields and related entities (OneToOne, OneToMany), like:
- name
- address (1-1)
- competences (1-N)

Society, in which the worker works, with various fields and related entities (OneToOne, OneToMany), like:
- name
- address (1-1)
- hours

Contract, with various fields and related entities (OneToOne, OneToMany), like:
- address (1-1)
- Worker (1-1)
- Society (1-1)
- documents (1-N)
- days (1-N)
- hours
- etc.
From this legacy model, I designed a MissionAggregate that holds:
A DB-independent ID, like a UUID
some Value objects: address, days (they were an entity in the legacy model, they became VOs here)
I also designed a WorkerAggregate and a SocietyAggregate, with fields and UUIDS, and in the MissionAggregate I added:
a reference to WorkerAggregate's UUID
a reference to SocietyAggregate's UUID
As I said earlier, my aim is to leave the legacy app as is, but just introduce in the CRUD controller's methods some calls to dispatch Commands to the new CQRS system.
For example:
After flushing a newly created Contract to the DB, I want to dispatch a "CreateMissionCommand" to the new command bus.
It targets the appropriate Command Handler, that handles all the command's data, passes it to a newly created Aggregate with a new UUID and stores "MissionCreatedDomainEvent" in the EventStore.
The DomainEvent is indexed with an AggregateId, a playhead, and has a payload which contains the fields necessary to be applied to and build the MissionAggregate.
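Roughly, such a stored event could be shaped like this (TypeScript; field names simplified for illustration):

```typescript
// Illustrative event envelope: the real field names may differ.
interface StoredDomainEvent {
  aggregateId: string;   // UUID of the MissionAggregate
  playhead: number;      // position of the event in the aggregate's stream
  type: string;          // e.g. "MissionCreatedDomainEvent"
  payload: {             // the fields needed to (re)build the aggregate
    missionId: string;
    workerId: string;
    societyId: string;
  };
}
```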
The newly created Contract now has its usual lifecycle in the app, with all the updates the legacy app performs on it. But I also need to reflect all those changes in the corresponding event-sourced Aggregate, so every time there is a flush to the database, I dispatch a Command that translates the "CRUD-like operations" of the legacy app into a Domain-oriented/Command-oriented pattern.
To sum up the workflow is:
A legacy CRUD operation occurs and flushes some changes to the Contract Entity
With just one line of code in the controller, I dispatch a command built from the necessary fields (the AggregateId of the MissionAggregate... which I need to have stored somewhere... see the problems below) to the Domain command bus, so the impact on the existing code base is very low.
The bus passes the command to the corresponding command handler
The handler loads the aggregate and applies the changes by calling the appropriate Aggregate method
then after some validation, the aggregate raises and stores the appropriate event
My problems and questions (some of them at least) are:
I feel like I am rewriting big portions of the legacy app, with the same kinds of relations between the Aggregates that I have between the Entities, and with the same types of validations, checks, etc.
Having references to both the WorkerAggregate and SocietyAggregate UUIDs in the MissionAggregate implies that I have to build those aggregates too (and hence dispatch commands from the legacy app when the Worker and Society entities are flushed). Can't I have references only to the Worker's entity id and the Society's entity id?
How can I avoid an eternally growing MissionAggregate? The Contract Entity is quite huge; it has a lot of fields that are constantly updated (hours, days, documents, etc.). If I want to store all those events, I need a large MissionAggregate to reflect all those changes, and so I need tons of CommandHandlers that react to all the add, update, etc. Commands I am going to dispatch from the legacy app.
How "free" is an Aggregate from the Root entity it is supposed to refer to ? For example, a Contract Entity needs to relate somewhere to it's related Mission Aggregate, like for example when I want to dispatch a Command from the app, just after the legacy code having flushed something on the Entity. Where to store this relation? In the Entity itself, in a AggregateId field? in the Aggregate, should I have a ContractId field? Or should I have some kind of Mapping Table somewhere that holds the relationship between Contract ID and MissionAggregate ID?
What to do with the past? Should I migrate all the existing data through a script that generates Aggregates and events from all the historical data?
Thanks in advance for your time.
You have a huge task ahead of you, let's try to break it down.
It's best to build this new part of the system in isolation from the legacy codebase, otherwise you're going to have your hands tied at every turn.
Create a separate layer in your project for these new requirements. We're going to call it "bubble" from now on. This bubble will be like a greenfield project, with its own structure, dependencies, etc. There will be no direct communication between the bubble and the legacy; communication will happen through another dedicated translation layer, which we'll call "Anti-Corruption Layer" (ACL).
ACL
It is like an API between two systems.
It translates calls from the bubble to the legacy and vice-versa. Its purpose is to prevent one system from corrupting or influencing the other. This way you can keep building/maintaining each system independently from each other.
At the same time, the ACL allows one system to consume the other, and reuse logic, validations, rules, etc.
To answer your questions directly:
I feel like I am rewriting big portions of the legacy app, with the same kinds of relations between the Aggregates that I have between the Entities, and with the same types of validations, checks, etc.
With the ACL, you can resort to calling validations and reuse implementations from the legacy code. This will allow you time to rewrite things as needed or as possible.
You may not need to rewrite the entire system, though. If your goal is to implement CQRS and Event Sourcing, and you can achieve this goal while keeping most or part of the legacy system, I would say do it. Unless, of course, one of the goals is to completely replace the old system. Otherwise, keep it; write as little code as possible.
Suggested workflow:
Keep the CQRS and Event Sourcing system in the bubble
Do not bring these new frameworks into legacy
Make the legacy Controller issue method calls to the ACL
The ACL will convert these calls into Commands and dispatch them
Any events will be caught by your Event Sourcing framework
Results will be persisted to the bubble's database
The bubble's database can be a different schema in the same database or can be a different database altogether. But you'll have to think about synchronization, and that's a topic of its own. To reduce complexity, I recommend a different schema in the same database.
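A hypothetical sketch of the ACL in front of the command bus (TypeScript; every name here is a placeholder, not taken from your codebase):

```typescript
import { randomUUID } from "crypto";

// Command living inside the bubble.
class CreateMissionCommand {
  constructor(
    public readonly missionId: string,
    public readonly workerId: string,
    public readonly societyId: string,
  ) {}
}

interface CommandBus {
  dispatch(command: object): void;
}

// The ACL is the only thing the legacy controller talks to. It translates the
// legacy entity into a bubble command without leaking either model into the other.
class LegacyToBubbleAcl {
  constructor(private readonly bus: CommandBus) {}

  contractCreated(legacy: { workerId: number; societyId: number }): void {
    this.bus.dispatch(
      new CreateMissionCommand(
        randomUUID(),              // bubble-side UUID, independent of the DB id
        String(legacy.workerId),
        String(legacy.societyId),
      ),
    );
  }
}
```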
Having references to both the WorkerAggregate and SocietyAggregate UUIDs in the MissionAggregate implies that I have to build those aggregates too (and hence dispatch commands from the legacy app when the Worker and Society entities are flushed). Can't I have references only to the Worker's entity id and the Society's entity id?
How can I avoid an eternally growing MissionAggregate? The Contract Entity is quite huge; it has a lot of fields that are constantly updated (hours, days, documents, etc.). If I want to store all those events, I need a large MissionAggregate to reflect all those changes, and so I need tons of CommandHandlers that react to all the add, update, etc. Commands I am going to dispatch from the legacy app.
You should aim for small aggregates. Huge aggregates are likely to degrade performance and cause concurrency problems.
If you anticipate having a huge aggregate, it is best to rethink it and try to break it down. Ask what fields/properties change together - these are possibly a different aggregate.
Also, when you speak about CQRS, you generally lean towards a task-based way of doing things in your system.
Think of a traditional web application, where you have a huge page with lots of fields that are all sent to the server in one batch when the user saves.
Now, contrast it with a modern web app where the user changes small portions of data at each step. If you think about your system this way you'll find those smaller aggregates.
PS: you don't need to rebuild your interfaces for this. If your legacy system has those huge pages, you could have logic in the controllers to detect which fields were changed and issue the appropriate commands.
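For instance, a controller-side diff could be as simple as this (TypeScript; the field and command names are made up for illustration):

```typescript
type ContractForm = { hours: number; address: string };

// Compare the form before and after the save, and emit one task-based
// command per changed field instead of one giant "update everything" command.
function commandsForChanges(
  before: ContractForm,
  after: ContractForm,
  contractId: string,
): object[] {
  const commands: object[] = [];
  if (before.hours !== after.hours) {
    commands.push({ type: "UpdateMissionHours", contractId, hours: after.hours });
  }
  if (before.address !== after.address) {
    commands.push({ type: "ChangeMissionAddress", contractId, address: after.address });
  }
  return commands;
}
```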
How "free" is an Aggregate from the Root entity it is supposed to refer to ? For example, a Contract Entity needs to relate somewhere to it's related Mission Aggregate, like for example when i want to dispatch a Command from the app, just after the legacy code having flushed something on the Entity. Where to store this relation ? In the Entity itself, in a AggregateId field ? in the Aggregate, should i have a ContratId field ? Or should i have some kind of Mapping Table somewhere that holds the relationship between Contract ID and MissionAggregate ID?
Aggregates represent a conceptual whole. They are like atoms, indivisible things. You should always refer to an aggregate by its Root Entity Id, and never by a Child Entity Id: looking from the outside, there are no children.
An aggregate should be loaded as a whole and persisted as a whole. One more reason to have small aggregates.
An aggregate can be comprised of a single entity. Or it can have more entities and value objects, forming a graph, but one entity will be elected as the Root and will hold references to its children. Child entities and value objects should not hold references to their parents. The dependency is not bi-directional.
If Contract is an entity inside the Mission aggregate, the Contract should not have a reference to its parent.
But, if your Contract and Mission are different aggregates, then they can reference each other by their Ids.
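A small illustration of those rules (TypeScript; the names are invented): the root holds its children, children hold no back-reference, and other aggregates are referenced by id only.

```typescript
// Child entity inside the aggregate; it has no reference to its parent.
class Day {
  constructor(public readonly date: string, public hours: number) {}
}

class MissionAggregate {
  private days: Day[] = [];       // children live inside the boundary

  constructor(
    public readonly id: string,   // root id: the only handle outsiders may use
    private workerId: string,     // another aggregate, referenced by id only
    private societyId: string,
  ) {}

  logDay(date: string, hours: number): void {
    this.days.push(new Day(date, hours)); // mutations go through the root
  }
}
```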
What to do with the past? Should I migrate all the existing data through a script that generates Aggregates and events from all the historical data?
That's a question for the business experts. Do they need it? If they don't, then don't implement it just for the sake of doing so. Every decision you make should be geared towards satisfying a business need and generating real value for it, considering the costs and tradeoffs.
Some people say that code is a liability, not an asset, and I agree to some extent: every line of code you write needs to be tested and supported. Don't write any code that is not really necessary.
Also, have a look at this article about the Strangler Pattern, which shows how to migrate a legacy system by gradually replacing specific pieces of functionality with new applications and services.
If you have a chance, watch this course at Pluralsight (paid): Domain-Driven Design: Working with Legacy Projects. The author presents practical approaches for dealing with this kind of task.
I hope this has given you some insight.
I don't want to spoil your game. Everybody knows how cool it is to rewrite something from scratch. It's a challenge, it's fun, it's exciting. However...
migrate it from a monolithic/state oriented model to a distributed, service oriented app
CQRS/Event Sourcing won't solve any of your problems, and it won't help you distribute the app in any reasonable way. If you just generate events from the CRUD operations, you'll have a large tangled mess of dependencies between the parts. Every part that needs data will have to call a couple of "services" (i.e. tables) to get it, then push data elsewhere and generate events that some other parts will react to. It will be a mess. Usually this is called a distributed monolith.
This is also the reason you already see problems with it. These problems won't go away, because you are essentially building the same system in the same way, but this time it'll be more complex.
Where to go from here
The very first thing is always: have a clear goal. You said you want a service-oriented architecture. Why? Are there parts that need different scaling or different resources? Are they managed by different teams with different life-cycles? Maybe you already have all this, I don't know, but if not, that's your first task.
Then: the parts you do want to pull out can't be just CRUD things. Those will not be independent, so whatever your goal is (see the point above!), scaling or separate teams, you won't reach it! To be independent, you'll have to pull out the behavior together with the data, in such a way that the service can operate on its own.
You can't just throw buzzwords at it and hope for the best. I'd suggest to just ignore all the hype and buzzwords and think about the goal you want to reach.
For example: I need a million workers to log their time in under 10 minutes total. So that means I need a "service" to enable worker to log their time with a web interface. So let's create that as a complete independent piece with its own database so it can be scaled to a 100 nodes when it needs to be. Export data to billing automatically every hour or so.
In CQRS + ES and DDD, is it a good thing to have a small read model in an aggregate to get data from another aggregate or bounded context?
For example, in order validation (in the Order aggregate) there is a business rule that validates an order only if the customer is not flagged. The flag information is put into a read model (specific to the aggregate) via synchronous domain events.
What do you think about this?
is it a good thing to have a small read model in an aggregate to get data from another aggregate or bounded context?
It's not ideal. Aggregates, due to their nature, are not good at enforcing consistency that involves state outside of themselves.
What this usually means is that the business is going to need some way to respond when two aggregates produce an unacceptable state.
You also have the option of checking for the flag before you run the placeOrder command on the aggregate. That check for the flag could be done in the command handler, or in the client -- basically, you have ways of "validating" that the command should succeed before passing it to the aggregate.
That said, if it were critical to try to consult the read model while processing the command, a way to do it would be to use a "domain service"; you pass a service provider to the aggregate as part of the command, and let the interface abstract away the fact that running the query requires looking outside of the aggregate.
That gives you some of the decoupling you need to keep the aggregate testable.
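A minimal sketch of that domain-service option (TypeScript; all names are hypothetical):

```typescript
// The interface hides the fact that answering the question
// means looking outside the aggregate (e.g. at a read model).
interface CustomerStatusService {
  isFlagged(customerId: string): boolean;
}

class Order {
  private placed = false;

  constructor(private readonly customerId: string) {}

  // The service is passed in alongside the command, which keeps the
  // aggregate testable: tests can supply a stub instead of a real read model.
  placeOrder(customerStatus: CustomerStatusService): void {
    if (customerStatus.isFlagged(this.customerId)) {
      throw new Error("Cannot place an order for a flagged customer");
    }
    this.placed = true;
  }
}
```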
It's doable, but not in the form of a read model; rather as a Value Object in the Aggregate (since we're on the Write side).
If you already have a CustomerId in Order, you just have to compose a VO with it and a Flagged member.
Of course, this remains prone to all the problems of cross-aggregate communication since the data originates from Customer. Order has to be kept in sync with the flagged status of its Customer, which can require quite a bit of work.
In any case, you should probably first determine with your domain expert whether immediate consistency is an absolute requirement (in which case you have to somehow wrap Customer + Order in a transaction) or if you can afford a small delay in Flagged freshness when enforcing that invariant.
If the latter, you can choose between duplicating Flagged in the Order aggregate or the first option given by #VoiceOfUnreason - the main difference probably being that if the data is in the aggregate, you get it for free at the Domain level should you need it on multiple occasions, instead of duplicating the check in multiple use cases/command handlers at the application level.
I'm learning DDD and Hexagonal architecture, I think I got the basics. However, there's one thing I'm not sure how to solve: how am I showing data to the user?
So, for example, I got a simple domain with a Worker entity with some functionality (some methods cause the entity to change) and a WorkerRepository so I can persist Workers. I got an application layer with some commands and command bus to manipulate the domain (like creating Workers and updating their work hours, persisting the changes), and an infrastructure layer which has the implementation of the WorkerRepository and a GUI application.
In this application I want to show all workers with some of their data, and be able to modify them. How do I show the data?
I could give it a reference to the implementation of WorkerRepository.
I think it's not a good solution because this way I could insert new Workers in the repository skipping the command bus. I want all changes going through the command bus.
Okay then, I'd split the WorkerRepository into WorkerQueryRepository and WorkerCommandRepository (as per CQRS), and hand out a reference only to the WorkerQueryRepository. It's still not a good solution, because the repo gives back Worker entities, which have methods that change them - and how will those changes be persisted?
Should I create two type of Repositories? One would be used in the domain and application layer, and the other would be used only for providing data to the outside world. The second one wouldn't return full-fledged Worker entities, only WorkerDTOs containing only the data the GUI needs. This way, the GUI has no other way to change Workers, only through the command bus.
Is the third approach the right way? Or am I wrong forcing that the changes must go through the command bus?
Should I create two type of Repositories? One would be used in the domain and application layer, and the other would be used only for providing data to the outside world. The second one wouldn't return full-fledged Worker entities, only WorkerDTOs containing only the data the GUI needs.
That's the CQRS approach; it works pretty well.
Greg Young (2010)
CQRS is simply the creation of two objects where there was previously only one. The separation occurs based upon whether the methods are a command or a query (the same definition that is used by Meyer in Command and Query Separation, a command is any method that mutates state and a query is any method that returns a value).
The current term for the WorkerDTO you propose is "Projection". You'll often have more than one; that is to say, you can have a separate projection for each view of a worker in the GUI. (That has the neat side effect of making the view easier -- it doesn't need to think about the data that it is given, because the data is already formatted usefully).
Another way of thinking of this, is that you have a "write-only" representation (the aggregate) and "read-only" representations (the projections). In both cases, you are reading the current state from the book of record (via the repository), and then using that state to construct the representation you need.
As the read models don't need to be saved, you are probably better off thinking factory, rather than repository, on the read side. (In 2009, Greg Young used "provider", for this same reason.)
Once you've taken the first step of separating the two objects, you can start to address their different use cases independently.
For instance, if you need to scale out read performance, you have the option to replicate the book of record to a bunch of slave copies, and have your projection factory load from the slaves, instead of the master. Or to start exploring whether a different persistence store (key value store, graph database, full text indexer) is more appropriate. Udi Dahan reviews a number of these ideas in CQRS - but different (2015).
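To make the factory idea concrete, here is a sketch (TypeScript; WorkerDTO, the factory, and the SQL are assumptions for illustration):

```typescript
// The projection: plain data, pre-formatted for one particular view, no behavior.
interface WorkerDTO {
  id: string;
  displayName: string;
  weeklyHours: number;
}

// "Factory, rather than repository": it builds read-only representations
// from the book of record; nothing it returns is ever saved back.
interface WorkerProjectionFactory {
  listForOverviewPage(): Promise<WorkerDTO[]>;
}

// Example implementation reading straight from SQL (the query is illustrative).
class SqlWorkerProjectionFactory implements WorkerProjectionFactory {
  constructor(private readonly query: (sql: string) => Promise<any[]>) {}

  async listForOverviewPage(): Promise<WorkerDTO[]> {
    const rows = await this.query("SELECT id, name, weekly_hours FROM workers");
    return rows.map(r => ({
      id: r.id,
      displayName: r.name,
      weeklyHours: r.weekly_hours,
    }));
  }
}
```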
"read models don't need to be saved" Is not correct.
It is correct; but it isn't perhaps as clear and specific as it could be.
We don't need to create a durable representation of a read model, because all of the information that describes the variance between instances of the read model has already been captured by our writes.
We will often want to cache the read model (or a representation of it), so that we can amortize the work of creating it across many queries. And various trade-offs may indicate that the cached representations should be stored durably.
But if a meteor comes along and destroys our cache of read models, we lose a work investment, but we don't lose information.
In an attempt to understand CQRS, I created a small application with a command executor and event sourcing. As I understand it, changes in the domain model are triggered through commands. The domain model then generates events that update the read model via a denormalizer.
But in many cases there may be updates that barely involve the domain, like a user changing his own profile picture. What is the best way to implement requirements like these?
I believe that using a command would be overkill, because the domain model as such doesn't change.
I tried to search for this question but didn't find the answer...
Don't mix CQRS and CRUD. Either the Bounded Context is suitable for CQRS or it's not. Your pet project probably isn't. But once you decide to apply the CQRS architecture style, you should stick with it.
Commands are trivial. And since you're already using Event Sourcing as well (which is not a prerequisite for CQRS, by the way), you shouldn't bypass it for individual use cases. Things rapidly become quite messy once you have multiple philosophies in place.
As far as directly writing to the Read Model goes: what if your Read Model goes out of sync, gets corrupted, or must be modified and you have to rebuild it? If there's no related event, how would the Read Model know that something happened?
There is one thing you can bypass if there's no domain behavior: you can just use a Transaction Script (PoEAA) in your command handler and publish the event from there without invoking the domain.
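A sketch of that shortcut (TypeScript; the names are placeholders): the handler validates, then appends the event directly, with no aggregate involved, so the read model can still be rebuilt from the stream.

```typescript
interface EventStore {
  append(streamId: string, event: object): Promise<void>;
}

class ChangeProfilePictureHandler {
  constructor(private readonly eventStore: EventStore) {}

  // Transaction Script style: no aggregate is loaded. The handler validates
  // trivially and records the event itself, keeping the event stream complete.
  async handle(command: { userId: string; pictureUrl: string }): Promise<void> {
    if (!command.pictureUrl) {
      throw new Error("Picture URL is required");
    }
    await this.eventStore.append(`user-${command.userId}`, {
      type: "ProfilePictureChanged",
      userId: command.userId,
      pictureUrl: command.pictureUrl,
    });
  }
}
```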
Long story short: You can happily mix styles in multiple isolated parts of your application (i.e. CQRS in one BC, CRUD in another) but inside a single BC you should stay consistent.