I've got a problem with DDD. I just started using it, so I don't have much experience.
There are two bounded contexts: Maintenance and Clients. Each Client has a list of engine parts. Maintenance stores Companies whose occupation is repairs. Clients may choose a preferred company for each part.
An administrator can suspend a company. That changes two aggregates: first it changes the company's status, and then the company should be removed from the clients who prefer it.
What is the best pattern to deal with it?
I can create two handlers in the aggregates, but how do I roll back the changes when one of the handlers throws an exception?
It looks like you need to revise the consistency boundaries of your aggregates.
But if, after revising, you still need to change two aggregates in one transaction, you can think about an eventually consistent system and use domain events (but with CQRS you already do this, don't you?).
Vaughn Vernon, in his book "Implementing Domain-Driven Design", suggests the following method for working with eventual consistency:
an aggregate publishes a domain event which is delivered to one or more subscribers. Each subscriber is executed in its own transaction (so you still change only one aggregate per transaction). If the transaction fails (the subscriber doesn't acknowledge success within a timeout), the aggregate sends the message again or executes some rollback routine.
Since you are using Event Sourcing, you can mark the "failed" event as rejected and use Fowler's Retroactive Event mechanism.
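To make that concrete, here is a minimal TypeScript sketch of the idea; all names (Company, Client, onCompanySuspended and so on) are hypothetical, not taken from your model. The first transaction touches only the Company aggregate, and a subscriber then updates each Client in its own transaction, retrying on failure instead of rolling back the first change.

    // Sketch: suspending a Company changes one aggregate per transaction;
    // a domain event propagates the change to Client aggregates.
    interface CompanySuspended {
      type: "CompanySuspended";
      companyId: string;
      occurredAt: Date;
    }

    class Company {
      private status: "Active" | "Suspended" = "Active";
      constructor(public readonly id: string) {}

      // Transaction 1: mutate this aggregate only, and record the event.
      suspend(): CompanySuspended {
        this.status = "Suspended";
        return { type: "CompanySuspended", companyId: this.id, occurredAt: new Date() };
      }
    }

    class Client {
      constructor(
        public readonly id: string,
        private preferredCompanyByPart: Map<string, string>
      ) {}

      removePreferredCompany(companyId: string): void {
        for (const [part, company] of this.preferredCompanyByPart) {
          if (company === companyId) this.preferredCompanyByPart.delete(part);
        }
      }
    }

    // Subscriber with a naive retry loop: if its own transaction fails,
    // the message is redelivered rather than rolling back the Company.
    async function onCompanySuspended(
      event: CompanySuspended,
      loadClientsPreferring: (companyId: string) => Promise<Client[]>,
      save: (client: Client) => Promise<void>,
      maxAttempts = 3
    ): Promise<void> {
      for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
          const clients = await loadClientsPreferring(event.companyId);
          for (const client of clients) {
            client.removePreferredCompany(event.companyId);
            await save(client); // one Client aggregate per transaction
          }
          return;
        } catch (err) {
          if (attempt === maxAttempts) throw err; // escalate or compensate
        }
      }
    }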
When modeling a typical chat application (with infinite chats), should each message be treated as an aggregate instance?
Aggregates should be kept small, and I cannot think of another decent, small candidate to contain user messages. But at the same time, I wonder whether I should use the aggregate concept for such a small object in the system.
should each message be treated as an aggregate instance?
This is a good question, but asked to the wrong group of people, as we don't know your business :)
Aggregate is a synonym for the boundary of transactional consistency. [...] A properly designed Aggregate is one that can be modified in the way the business needs to modify it, while keeping its business rules consistent as part of a single transaction. [...] Aggregates are mostly about consistency boundaries, and their design should not be controlled by the need to create object graphs. [...] ~ Implementing Domain-Driven Design, Vaughn Vernon
Aggregates are mostly about transactional consistency of business rules. You should ask the business whether there are any rules regarding a single chat message. In a typical chat application, probably not, but you have to ask the business.
In the simplest chat application I can imagine, my chat message would rather be a Value Object. Or I would not even use DDD, as Tseng mentioned. I can't think of any business rules I would need, and it would definitely be immutable.
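As a rough illustration of that last point, here is a minimal sketch of a chat message as an immutable Value Object; the fields and the equality rule are assumptions, and your business may need different ones.

    // Sketch: a chat message as a Value Object. No identity, equality by
    // value, immutable once created.
    class ChatMessage {
      constructor(
        public readonly author: string,
        public readonly text: string,
        public readonly sentAt: Date
      ) {
        Object.freeze(this); // no mutation after construction
      }

      equals(other: ChatMessage): boolean {
        return (
          this.author === other.author &&
          this.text === other.text &&
          this.sentAt.getTime() === other.sentAt.getTime()
        );
      }
    }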
In the Service Fabric Reliable Actors Introduction, documentation provided by Microsoft, it states that actors should not "block callers with unpredictable delays by issuing I/O operations."
I'm a bit unsure how to interpret this.
Does this imply that I/O is ok so long as the latency of the request is predictable?
or
Does this imply that the best practice is that actors should not make any I/O operations outside of Service Fabric? For example: calls to some REST API, or writes to some sort of DB, data lake, or event hub.
Technically, it is a bit of both.
Because actors are single-threaded, only a single operation can happen in an actor at the same time.
SF Actors use the Ask approach, where every call expects an answer: callers make calls and wait for answers. If the actor receives too many calls from clients, and also depends too much on external components, it will take too long to process each call; the other client calls will be queued and will probably fail at some point because they wait too long and time out.
This would be less of an issue for actors using the Tell approach, like Akka, because the caller does not wait for an answer; it just sends the message to the mailbox and receives a message back with the answer (when applicable). But the latency between request and response will still be an issue, because too many messages are pending processing by a single actor. On the other hand, it can increase complexity if one command fails after two or three subsequent events have already been triggered, before you know the answer to the first (not in scope here, but you can relate this to the example below).
Regarding the second point, the main idea of an actor is to be self-contained. If it depends too much on external dependencies, maybe you should rethink the design and evaluate whether an actor is actually the best fit for the problem.
Self-contained actors are scalable: they don't depend on an external state manager to manage their own state, they don't depend on other actors to accomplish their tasks, and they can scale independently of each other.
Example:
Actor1 (of ActorTypeA) depends on Actor2 (of ActorTypeB) to execute an operation.
To make it more human-friendly, let's say:
ActorTypeA is an Ecommerce Checkout Cart
ActorTypeB is a Stock Management
Actor1 is the Cart for user 1
Actor2 is the Stock for product A
Whenever a client (user) interacts with his checkout cart, adding or removing products, add and remove commands are sent to Actor1 to manage his own cart. In this scenario the dependency is one-to-one: when another user navigates to the website, another actor is created for him to manage his own cart. In both cases, each user has his own actor.
Let's now say that whenever a product is placed in a cart, it is reserved in stock to avoid double-selling the same product.
In this case, both actors will try to reserve the product in Actor2, and because of the single-threaded nature of actors, only the first will succeed; the second will wait for the first to complete and fail if the product is no longer in stock. Also, the second user won't be able to add or remove any products in his cart while the first operation is waiting to complete. Now increase these numbers to the thousands and see how quickly the problem grows and scalability fails.
This is just a small and simple example, so the second point is not only about external dependencies; it also applies to internal ones. Every operation outside the actor reduces its scalability.
That said, you should avoid external (outside-the-actor) dependencies as much as possible. It is not a crime to have them when needed, but you reduce scalability whenever an external dependency limits the actor's ability to scale independently.
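To see why, here is a toy sketch in plain TypeScript (this is not the Reliable Actors API; ToyActor, addToCart and reserveInStock are invented names) that simulates a single-threaded mailbox and shows how an awaited call to another actor serializes every caller:

    // A toy single-threaded "actor": each enqueued task waits for the
    // previous one, like a mailbox processed one message at a time.
    type Task<T> = () => Promise<T>;

    class ToyActor {
      private queue: Promise<unknown> = Promise.resolve();

      enqueue<T>(task: Task<T>): Promise<T> {
        const result = this.queue.then(task);
        this.queue = result.catch(() => undefined); // keep the chain alive
        return result;
      }
    }

    const stockActor = new ToyActor(); // Actor2: stock for product A
    const cartActor = new ToyActor();  // Actor1: cart for user 1

    function reserveInStock(productId: string): Promise<boolean> {
      // The cart actor blocks its own mailbox while this "Ask" is pending.
      return stockActor.enqueue(async () => {
        await new Promise((r) => setTimeout(r, 100)); // simulated I/O latency
        return true;
      });
    }

    async function addToCart(productId: string): Promise<void> {
      await cartActor.enqueue(async () => {
        const reserved = await reserveInStock(productId);
        if (!reserved) throw new Error("out of stock");
      });
    }

    // 1000 concurrent adds now take roughly 1000 * 100ms, because every
    // call funnels through two single-threaded mailboxes in sequence.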
This other SO question I've answered might also be interesting to you.
I'll explain the problem through an example, as it makes everything more concrete and will hopefully reduce ambiguity.
The architecture is pretty simple:
1 microservice <=> 1 aggregate <=> transactional boundary
Each microservice will use the CQRS/ES design pattern, which implies:
Each microservice will have its own Aggregate mapping the domain of a real-world problem
The state of the aggregate will be rebuilt from an event store
Each event will signify a state change within the aggregate and will be transmitted to any service interested in the change via a message broker
Each microservice will be transactional within its own domain
Each microservice will be eventually consistent with other domains
Each microservice will build its own view models from events emitted by other microservices
For the example, let's say we have a banking system:
a current-account microservice is responsible for mapping the Customer Current Account: withdrawals, deposits
a rewards microservice is responsible for inventory and stocktaking of any rewards offered by the bank
an air-miles microservice is responsible for monitoring all the transactions coming from the current-account and, in doing so, awarding the Customer rewards from our rewards microservice
So the problem is this: should the air-miles microservice make decisions based on its own view model, which is updated from events coming from the current-account, and similarly when picking which reward it should give out to the Customer?
Drawbacks of making decisions on local view models:
Domain logic for maintaining these views is replicated
Bugs within the view might cause the wrong rewards to be given out
State changes (i.e. events emitted) based on corrupted view models could have consequences in other services that make their own decisions based on those events
Advantages of making decisions on local view models:
The system doesn't need to constantly query the microservice owning the domain
The system should be faster and less resource-intensive
Or should it use the events coming from the service to trigger queries to the aggregate owning the domain? In doing so, we accept that view models might get corrupted, but the final decision is always checked with the aggregate owning the domain.
Please note that the above is simply my understanding of the architecture. The aim of this post is to get different views on how one might use this architecture effectively in a microservice environment, keeping each service decoupled yet avoiding cascading corruption scenarios without too much chatter between the services.
So the problem is this: should the air-miles microservice make decisions based on its own view model, which is updated from events coming from the current-account, and similarly when picking which reward it should give out to the Customer?
Yes. In fact, you should revise your architecture and even create more microservices. What I mean is that, in an event-driven architecture (and an event-sourced one at that), your microservices have two responsibilities: they need to keep two different models, the write model and the read model.
So, for each Aggregate there should be a microservice that keeps only the write model, that is, one that only processes Commands, without also building a read model.
Then, for each read/query use case, you should have a microservice that builds the ideal read model for that case. This is required if you want to keep the Aggregate microservice clean (as you should), because in general read models need data from multiple Aggregate types/bounded contexts. Read models may cross bounded-context boundaries; Aggregates may not. So you see, you don't really have a choice if you want to fully respect DDD.
Some say that domain events should be hidden, local only to the owning microservice. I disagree. In an event-driven architecture, the domain events are first-class citizens: they are allowed to reach other microservices. This gives the other microservices the chance to build their own interpretation of the system state. Otherwise, the emitting microservice would have the impossible additional responsibility of building a state that matches every possible need that any microservice could ever have(!). Say a microservice wants to look up a deleted remote entity's title: how could it do that if the emitting microservice keeps only the list of not-yet-deleted entities? You may say: then it will keep all the entities, deleted or not. But maybe someone needs the date an entity was deleted; you may say: then I also keep the deletedDate. You see what you are doing? You break the Open/Closed Principle: every time you create a microservice, you need to modify the emitting microservice.
There is also the resilience of the microservices to consider. In The Art of Scalability, the authors speak about swim lanes: a strategy for separating the components of a system into lanes of failure. A failure in one lane does not propagate to other lanes. Our microservices are lanes; components in one lane are not allowed to access any component from another lane. One microservice going down should not bring the others down. It's not a matter of speed/optimisation, it's a matter of resilience. Domain events are the perfect means of keeping two remote systems synchronized. They also emphasize the fact that the data is eventually consistent: the events travel at a limited speed (from nanoseconds to even days). When a system is designed with that in mind, no other microservice can bring it down.
Yes, there will be some code duplication. And yes, although I said that you don't have a choice, you do. In order to reduce the code duplication, at the cost of lower resilience, you can have some Canonical read models that build a normal, flat state that other microservices can query. This is dangerous in most cases, as it breaks the swim-lane concept: should a Canonical microservice go down, all dependent microservices go down with it. Canonical microservices work best for CRUD-like bounded contexts.
There are, however, valid cases where you have some internal events that you don't want to expose. In other words, you are not required to publish all domain events.
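As a rough sketch of what this could look like (all event and class names here are invented for illustration), the air-miles microservice consumes the published events and keeps only the interpretation it needs:

    // Events published by the current-account microservice (hypothetical).
    interface MoneyDeposited {
      type: "MoneyDeposited";
      accountId: string;
      amount: number;
    }

    interface MoneyWithdrawn {
      type: "MoneyWithdrawn";
      accountId: string;
      amount: number;
    }

    type CurrentAccountEvent = MoneyDeposited | MoneyWithdrawn;

    // The air-miles read model: only the facts *this* service cares about.
    class AirMilesProjection {
      private totalSpentByAccount = new Map<string, number>();

      apply(event: CurrentAccountEvent): void {
        if (event.type === "MoneyWithdrawn") {
          const current = this.totalSpentByAccount.get(event.accountId) ?? 0;
          this.totalSpentByAccount.set(event.accountId, current + event.amount);
        }
        // Deposits are irrelevant for awarding miles, so they are ignored;
        // the emitting service never needs to know this decision was made.
      }

      milesEarned(accountId: string): number {
        // Hypothetical business rule: 1 mile per 10 units spent.
        return Math.floor((this.totalSpentByAccount.get(accountId) ?? 0) / 10);
      }
    }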
So the problem is this: should the air-miles microservice make decisions based on its own view model, which is updated from events coming from the current-account, and similarly when picking which reward it should give out to the Customer?
Each consumer uses a local replica of a representation computed by the producer.
So if air-miles needs information from current-account, it should look at a local replica of a view calculated by the current-account service.
The key idea is this: microservices are supposed to be isolated from one another; you should be able to redesign and deploy one without impacting the others.
So try this thought experiment: suppose we had these three microservices, but all saving snapshots of current state rather than events. Everything works; then imagine that the current-account maintainer discovers that an event-sourced implementation would better serve the business.
Should that change to the current-account require a matching change in the air-miles service? If so, can we really claim that these services are isolated from one another?
Advantages of making decisions on local view models
I don't particularly like these "advantages". First, they are dominated by the performance axis (please recall that the second rule of performance optimization is "not yet"). Second, they assume that the service boundaries are correctly drawn; maybe the performance issue is evidence that the separation of responsibilities needs review.
In a CQRS / Domain-Driven Design system, the FAQ (http://cqrs.nu) says that a saga should not query the read side. However, a saga listens to events in order to execute commands, and because it executes commands it is essentially a "client", so why can't a saga query the read models?
Sagas should not query the read side (projections) for the information they need to fulfil their task. The reason is that you cannot be sure the read side is up to date. In an eventually consistent system, you do not know when a projection will be updated, so you cannot rely on its state.
That does not mean that sagas should not hold state. Sagas do in many cases need to keep track of state, but then the saga should be responsible for creating that state. As I see it, this can be done in two ways.
It can build up its state by reading the events from the event store. When it receives an event that it should trigger on, it reads all the events it needs from the store and builds up its state in a similar manner to an aggregate (a sketch follows below). This can be made performant in Event Store by creating new streams.
The other way is to continuously listen to events from the event store, build up state, and store it in some data storage the way projections do. Just be careful with this approach: you cannot replay sagas in the same way you replay projections. If you need to change the way you store state and want to rebuild it, make sure that you do not re-execute commands that you have already executed.
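Here is a minimal sketch of the first option, with all types and rules invented for illustration: the saga rebuilds its state from the event stream, much like an aggregate does, before deciding which commands to dispatch.

    // A minimal event store abstraction (hypothetical interface).
    interface StoredEvent {
      streamId: string;
      type: string;
      data: Record<string, unknown>;
    }

    interface EventStore {
      readStream(streamId: string): Promise<StoredEvent[]>;
    }

    class OrderFulfilmentSaga {
      private paid = false;
      private shipped = false;

      private apply(event: StoredEvent): void {
        if (event.type === "OrderPaid") this.paid = true;
        if (event.type === "OrderShipped") this.shipped = true;
      }

      // Rebuild state from the store, then decide; never query a projection.
      async handle(trigger: StoredEvent, store: EventStore): Promise<string[]> {
        const history = await store.readStream(trigger.streamId);
        for (const event of history) this.apply(event);

        const commands: string[] = [];
        if (this.paid && !this.shipped) commands.push("ShipOrder");
        return commands; // dispatched to the command side, not executed here
      }
    }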
Sagas use the command model to update the state of the system. The command model contains business rules and is able to ensure that changes are valid within a given domain. To do that, the command model has available all the information it needs.
The read model, on the other hand, has an entirely different purpose: It structures data so that it is suitable to provide information, e.g. to display on a web page.
Since the saga has all the information it needs through the command model, it doesn't need the read model. Worse, using the read model from a saga would introduce additional coupling and increase the overall complexity of the system considerably.
This does not mean that you absolutely cannot use the read model. But if you do, be sure you understand the consequences. For me, that bar is quite high, and so far I have always found a different solution.
It's primarily about separation of concerns. Process managers (sagas) are state machines responsible for coordinating activities. If the process manager wants to effect change, it dispatches commands (asynchronously).
Also: what is the read model? It's a projection of a bunch of events that already happened. So if the process manager cared about those events... shouldn't it have been subscribing to them all along? There's a modeling smell here.
Possible issues:
The process manager should have been listening to earlier messages in the stream, so that it would already be in the right state when this message arrived (see the sketch after this list).
The current event should be richer (so that the data the process manager "needs" is already present).
... a variation: the command handler should instead be listening for a different event, and THAT one should be richer.
The query that you want should really be a command to an aggregate that already knows the answer.
And, failing all else:
Send a command to a service that runs the query and dispatches events in response. This sounds weird, but it's already common practice to have a process manager dispatch a message to a scheduling service, to be "woken up" after some fixed amount of time passes.
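A small sketch of the first fix (hypothetical names throughout): the process manager subscribes to the earlier events too, so that by the time the interesting message arrives it already holds the data it would otherwise have queried from a read model.

    type ProcessEvent =
      | { type: "CustomerRegistered"; customerId: string; email: string }
      | { type: "OrderPlaced"; customerId: string; orderId: string };

    type Command = { type: "SendConfirmationEmail"; to: string; orderId: string };

    class OrderNotificationProcess {
      private emailByCustomer = new Map<string, string>();

      handle(event: ProcessEvent): Command[] {
        switch (event.type) {
          case "CustomerRegistered":
            // Listening "all along": remember what we will need later,
            // instead of querying a read model when OrderPlaced arrives.
            this.emailByCustomer.set(event.customerId, event.email);
            return [];
          case "OrderPlaced": {
            const to = this.emailByCustomer.get(event.customerId);
            return to
              ? [{ type: "SendConfirmationEmail", to, orderId: event.orderId }]
              : [];
          }
        }
      }
    }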
In CQRS, do the Commands and Queries belong in the Domain?
Do the Events also belong in the Domain?
If that is the case, are the Command/Query Handlers just implementations in the infrastructure?
Right now I have it laid out like this:
Application.Common
Application.Domain
- Model
- Aggregate
- Commands
- Queries
Application.Infrastructure
- Command/Query Handlers
- ...
Application.WebApi
- Controllers that utilize Commands and Queries
Another question, where do you raise events from? The Command Handler or the Domain Aggregate?
Commands and events can be of very different concerns: technical concerns, integration concerns, domain concerns...
I assume that, since you ask about the domain, you're implementing a domain model (maybe even with Domain-Driven Design).
If this is the case, I'll try to give you a really simplified answer, so you have a starting point:
Command: a business intention, something you want the system to do. Keep the definitions of commands in the domain. Technically, a command is just a pure DTO. The name of a command should always be imperative: "PlaceOrder", "ApplyDiscount". A command is handled by exactly one command handler, and it can be discarded if it is not valid (however, you should do all possible validation before sending the command to your domain, so that it cannot fail).
Event: something that has happened in the past. For the business, it is an immutable fact that cannot be changed. Keep the definitions of domain events in the domain. Technically, an event is also a DTO. However, the name of an event should always be in the past tense: "OrderPlaced", "DiscountApplied". Events are generally pub/sub: one publisher, many handlers.
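A minimal sketch of those two definitions as plain DTOs (the names and fields are just examples):

    // Command: an imperative intention, handled by exactly one handler,
    // and possibly rejected.
    interface PlaceOrder {
      readonly type: "PlaceOrder";
      readonly orderId: string;
      readonly customerId: string;
    }

    // Event: a past-tense, immutable fact, published to any number of
    // subscribers.
    interface OrderPlaced {
      readonly type: "OrderPlaced";
      readonly orderId: string;
      readonly customerId: string;
      readonly occurredAt: Date;
    }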
If that is the case, are the Command/Query Handlers just implementations in the infrastructure?
Command handlers are semantically similar to the application service layer. Generally, the application service layer is responsible for orchestrating the domain. It is often built around business use cases, for example "placing an order". These use cases invoke business logic (which should always be encapsulated in the domain) through aggregate roots, querying, etc. It is also a good place to handle cross-cutting concerns like transactions, validation, and security.
However, an application layer is not mandatory. It depends on the functional and technical requirements and on the architectural choices that have been made.
Your layering seems correct. I would rather keep command handlers at the boundary of the system. If there is no proper application layer, a command handler can play the role of the use-case orchestrator, as the sketch below illustrates. If you place it in the Domain, you won't be able to handle cross-cutting concerns very easily. It's a trade-off; you should be aware of the pros and cons of your solution. It may work in one case and not in another.
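Here is a rough sketch of what such a command handler at the boundary could look like; the repository and unit-of-work interfaces are assumptions, not a prescribed API. Business rules stay in the aggregate, while the cross-cutting concerns live in the handler:

    interface PlaceOrderCommand { orderId: string; customerId: string; }

    interface Order { place(customerId: string): void; }
    interface OrderRepository {
      byId(id: string): Promise<Order>;
      save(order: Order): Promise<void>;
    }
    interface UnitOfWork { commit(): Promise<void>; rollback(): Promise<void>; }

    class PlaceOrderHandler {
      constructor(private orders: OrderRepository, private uow: UnitOfWork) {}

      async handle(command: PlaceOrderCommand): Promise<void> {
        // Cross-cutting concerns (transaction, validation, security) here...
        try {
          const order = await this.orders.byId(command.orderId);
          order.place(command.customerId); // ...business logic in the domain
          await this.orders.save(order);
          await this.uow.commit();
        } catch (err) {
          await this.uow.rollback();
          throw err;
        }
      }
    }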
As for the event handlers, I generally handle them in:
the Application layer, if the event triggers a modification of another Aggregate in the same bounded context, or if the event triggers some infrastructure service;
the Infrastructure layer, if the event needs to be dispatched to multiple consumers or integrates another bounded context.
Anyway, you should not blindly follow the rules. There are always trade-offs, and different approaches can be found.
Another question, where do you raise events from? The Command Handler or the Domain Aggregate?
I raise them from the domain aggregate root, because the domain is responsible for raising events.
As there is the invariable technical rule that you should not publish events if there was a problem persisting the changes to the aggregate (and vice versa), I took the pragmatic approach used in Event Sourcing. My aggregate root has a collection of unpublished events. In the implementation of my repository, I inspect the collection of unpublished events and pass them to the middleware responsible for publishing events. This makes it easy to guarantee that if there is an exception while persisting an aggregate root, its events are not published. Some say this is not the responsibility of the repository, and I agree, but what is the alternative? Awkward event-publishing code that creeps into your domain along with all the infrastructure concerns (transactions, exception handling, etc.), or being pragmatic and handling it all in the Infrastructure layer? I've done both and, believe me, I prefer to be pragmatic.
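A minimal sketch of that approach (the types are invented): the aggregate collects unpublished events, and the repository publishes them only after persisting succeeds.

    interface DomainEvent { type: string; occurredAt: Date; }

    abstract class AggregateRoot {
      private unpublishedEvents: DomainEvent[] = [];

      protected raise(event: DomainEvent): void {
        this.unpublishedEvents.push(event);
      }

      // Drain the collection: the repository reads it exactly once.
      pullUnpublishedEvents(): DomainEvent[] {
        const events = this.unpublishedEvents;
        this.unpublishedEvents = [];
        return events;
      }
    }

    class Repository<T extends AggregateRoot> {
      constructor(
        private persist: (aggregate: T) => Promise<void>,
        private publish: (event: DomainEvent) => Promise<void>
      ) {}

      async save(aggregate: T): Promise<void> {
        await this.persist(aggregate); // if this throws, nothing is published
        for (const event of aggregate.pullUnpublishedEvents()) {
          await this.publish(event);
        }
      }
    }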
To sum up, there is no single way of doing things. Always know your business needs and technical requirements (scalability, performance, etc.), then make your choices based on them. I've described what I've generally done in most cases, and it has worked. It's just my opinion.
In some implementations, Commands and handlers are in the Application layer. In others, they belong in the domain. I've often seen the former in OO systems, and the latter more in functional implementations, which is also what I do myself, but YMMV.
If by events you mean Domain Events, well... yes, I recommend defining them in the Domain layer and emitting them from domain objects. Domain events are an essential part of your ubiquitous language and may even be directly coined by domain experts if you practise Event Storming, for instance, so it definitely makes sense to put them there.
What I think you should keep in mind, though, is that no rule about these technical details deserves to be set in stone. There are countless questions about DDD template projects, layering, and code "topology" on SO, but frankly I don't think these issues are decisive in making a robust, performant, and maintainable application, especially since they are so context-dependent. You most likely won't organize the code for a trading system with millions of aggregate changes per minute the same way you would a blog publishing platform used by 50 people, even if both are designed with a DDD approach. Sometimes you have to try things for yourself based on your context and learn along the way.
Commands and events are DTOs. You can have command handlers and queries in any layer/component. An event is just a notification that something has changed. You can have all types of events: Domain, Application, etc.
Events can be generated by either the handler or the aggregate; it's up to you. However, regardless of where they are generated, the command handler should use a service bus to publish them. I prefer to generate domain events inside the aggregate root.
From a DDD strategic point of view, there are just business concepts and use cases. Domain events, commands, and handlers are technical details. However, all domain use cases are usually implemented as command handlers, therefore command handlers should be part of the domain, as should the query handlers implementing queries used by the domain. Queries used by the UI can be part of the UI, and so on.
The point of CQRS is to have at least two models, and the command model should be the domain model itself. You can also have a query model specialised for domain usage, but it is still a (simplified) read model. Consider the command model as being used only for updates and the read model only for queries. But you can have multiple read models (each used by a specific layer or component) or just one generic model used for every query.