Question
Can an AR issue its own commands, or is it better to issue them through a processor that listens to the events emitted by the outer command?
BTW: if you consider that this question could lead to "primarily opinion-based" answers, I would still like to know whether or not it is considered a good practice, and why.
PHP code sample
class PatchableComponent extends EventSourcedAggregateRoot
    implements Entity, ReconstitutableEventSourcedAggregateRoot
{
    ...

    public static function importFromCore(...): PatchableComponent
    {
        $patchableComponent = new self;
        $patchableComponent->applyPatchableComponentWasImportedFromCore(
            new PatchableComponentWasImportedFromCore(...)
        );
        // Here, the AR issues its own startLookingForPatches() command.
        $patchableComponent->startLookingForPatches();

        return $patchableComponent;
    }

    public function startLookingForPatches(): void
    {
        $this->applyPatchableComponentStartedLookingForPatches(
            new PatchableComponentStartedLookingForPatches(...)
        );
    }

    ...
}
Can an AR issue its own commands, or is it better to issue them through a processor that listens to the events emitted by the outer command?
An aggregate can certainly call its own methods; adding extra layers of indirection is not normally necessary or desirable.
I know there is an accepted answer for this question, but wanted to chip in my own 2 cents.
When you say that an Aggregate is issuing a Command, note that your code sample doesn't actually do that: your example is that of an Aggregate performing certain behavior. The concept of a "Command" is that of a message that encapsulates a user's intent (a Use Case). A Command would typically (and hopefully) be handled by a CommandHandler, which would then call methods on the Aggregate to perform the work. The Aggregate doesn't really know about Use Cases.
If you separate the concept of the Command from the concept of the Aggregate, then you are free to implement Behavior in a way that keeps your domain flexible. You are able to add new Use Cases (Commands) and new Behaviors (in your Aggregate) independently of each other.
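To make that split concrete, here is a minimal Java sketch; the repository interface and all names are illustrative, not taken from the question's codebase. The Command is a dumb message, the CommandHandler owns the use case, and the aggregate only exposes behavior.

// A hypothetical aggregate exposing behavior only; it knows nothing of use cases.
class PatchableComponent {
    void startLookingForPatches() {
        // apply/record the corresponding domain event here
    }
}

// An illustrative abstraction for loading and saving the aggregate.
interface PatchableComponentRepository {
    PatchableComponent load(String componentId);
    void save(PatchableComponent component);
}

// The Command: a plain message capturing the user's intent.
final class StartLookingForPatches {
    final String componentId;
    StartLookingForPatches(String componentId) {
        this.componentId = componentId;
    }
}

// The CommandHandler: the only place that knows about this use case.
final class StartLookingForPatchesHandler {
    private final PatchableComponentRepository repository;

    StartLookingForPatchesHandler(PatchableComponentRepository repository) {
        this.repository = repository;
    }

    void handle(StartLookingForPatches command) {
        PatchableComponent component = repository.load(command.componentId);
        component.startLookingForPatches(); // behavior stays on the aggregate
        repository.save(component);
    }
}

With this split, adding a new use case means adding a new command and handler; the aggregate changes only when the behavior itself changes.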
Related
I want to implement the Single Responsibility Principle in my project's Domain layer (Clean MVVM).
I have approximately 200 different use cases, which are becoming very hectic to manage. Now I'm thinking of creating one UseCaseManager which can provide the required UseCase based on its input and output objects.
I've tried an approach, but it's not looking very good. I'm including some sample code below; please help me with how I can aggregate all the UseCases into one UseCaseManager.
UseCase1:
public class ActualUseCase1 extends AsyncUseCase<Object3, Object4> {
    public ActualUseCase1(SchedulerProvider schedulerProvider) {
        super(schedulerProvider);
    }

    @Override
    public Flowable<Object4> buildUseCaseFlowable(Object3 input) {
        return Flowable.just(new Object4());
    }
}
UseCase2:
public class ActualUseCase2 extends AsyncUseCase<Object1, Object2> {
    public ActualUseCase2(SchedulerProvider schedulerProvider) {
        super(schedulerProvider);
    }

    @Override
    public Flowable<Object2> buildUseCaseFlowable(Object1 input) {
        return Flowable.just(new Object2());
    }
}
UseCaseManager:
public interface UseCaseManager<In, Out> {
    <T> T getUseCase(In input, Out output);
}
T can be a different UseCase with different In and Out objects.
UseCaseManagerImpl:
public class UseCaseManagerImpl implements UseCaseManager {
    @Override
    public Object getUseCase(Object object1, Object object2) {
        return null;
    }
}
Now this is the main problem I'm not able to solve: how can I implement the getUseCase method?
I think you're re-inventing the abstract factory pattern. Google will provide you with lots of content on that subject...
The tricky bit is how you decide which subtype to instantiate and return; that can be as simple as a switch statement, or involve lookup tables, etc. The key point is that you isolate that logic into a single place, where you can unit test it.
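For illustration, a registry-based variant might look like this in Java; the string keys and the simplified UseCase interface are assumptions, standing in for the question's AsyncUseCase hierarchy.

import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Simplified stand-in for the question's use-case base type.
interface UseCase {
    Object execute(Object input);
}

// The factory isolates "which subtype to instantiate" in one testable place.
class UseCaseFactory {
    private final Map<String, Supplier<UseCase>> registry = new HashMap<>();

    void register(String key, Supplier<UseCase> supplier) {
        registry.put(key, supplier);
    }

    UseCase create(String key) {
        Supplier<UseCase> supplier = registry.get(key);
        if (supplier == null) {
            throw new IllegalArgumentException("No use case registered for: " + key);
        }
        return supplier.get();
    }
}

Registration then reads like factory.register("useCase1", () -> new ActualUseCase1(schedulerProvider)), and the switch statement collapses into a map lookup.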
A bigger question is - how do you end up with 200 subclasses?
Okay, I am getting the idea that you want a sort of system wherein, for a given input, you get some output. And you can have 200 such inputs for which 200 corresponding outputs are possible. And you want to make all of that manageable.
I will try to explain the solution I have in mind. I am a beginner in Java and hence won’t be able to produce much code.
You can implement this using the Chain of Responsibility pattern. In this design pattern, you have a job runner (the UseCaseManager in your case) and several jobs (UseCases) to run, which will be run in sequence until one of them returns an output.
You can create a RequestPipeline class which will be the job runner. In the UseCaseManager, you instantiate the pipeline once and add all the use cases you want, builder-style, like so:
requestPipeline.add(new UseCase1());
requestPipeline.add(new UseCase2());
...
When an input comes in, you trigger the RequestPipeline which will run all the jobs added to it, one after the other in sequence. If a UseCase returns null, the job runner will call the next UseCase in line until it finds a UseCase which can manage the input and produce an answer.
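A minimal sketch of such a job runner (the UseCase interface and the null-means-not-handled convention are assumptions made for illustration):

import java.util.ArrayList;
import java.util.List;

// A job either produces an output or returns null to pass the input along.
interface UseCase {
    Object handle(Object input);
}

class RequestPipeline {
    private final List<UseCase> useCases = new ArrayList<>();

    // Builder-style registration, so setup reads declaratively.
    RequestPipeline add(UseCase useCase) {
        useCases.add(useCase);
        return this;
    }

    // Runs the jobs in sequence until one of them handles the input.
    Object run(Object input) {
        for (UseCase useCase : useCases) {
            Object output = useCase.handle(input);
            if (output != null) {
                return output;
            }
        }
        throw new IllegalStateException("No use case could handle the input");
    }
}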
The advantages of this design pattern are:
Abstraction: the RequestPipeline is responsible for running the jobs in line but does not know anything about the jobs it is running. A UseCase, on the other hand, only knows about processing its own use case; it is a unit in itself. Hence the Single Responsibility Principle is satisfied: both are independent of each other and are reusable whenever we have a similar design requirement later.
Easily extensible: if you have to add 10 other use cases, you can easily add them to the RequestPipeline and you are done.
No switch case and no if-else: this in itself is a big achievement. I love Chain of Responsibility for this very reason.
Declarative programming: we simply declare what we need to do and leave the details of how to do it to the separate units. The design of the code is easily understandable by a new developer.
More control: the RequestPipeline can dynamically decide which job to run at run-time.
Reference: https://www.google.co.in/amp/s/www.geeksforgeeks.org/chain-responsibility-design-pattern/amp/
Some Java code is provided there for you to check whether it satisfies your use case.
Hope this helps. Please let me know if you have any doubts in the comment section.
What you are trying to do is NOT the Single Responsibility Principle; it's the opposite.
Single responsibility means
There should be one reason to change
See The Single Responsibility Principle
The UseCaseManager you are trying to implement will handle all of your 200 use cases. Thus it will change whenever a use case changes. That is mixing concerns, not separating them.
Usually use cases are invoked by controllers and usually a controller also has a single responsibility. Thus the controller knows which use case it must invoke. Thus I don't see the need for a UseCaseManager.
I guess there is another problem in your design that leads to the problem you have. But since I don't have your full source code I can't give you any further advice.
If using CQRS and creating an entity, where the values of some of its properties are generated as part of its constructor (e.g. a default active value for the status property, or the current datetime for createdAt), how do you include those values as part of your response if your command handlers can't return values?
You would need to create the GUID before creating the entity, then use this GUID to query it. This way your command handlers always return void.
[HttpPost]
public ActionResult Add(string name)
{
    Guid guid = Guid.NewGuid();
    _bus.Send(new CreateInventoryItem(guid, name));
    return RedirectToAction("Item", new { id = guid });
}

public ActionResult Item(Guid id)
{
    ViewData.Model = _readmodel.GetInventoryItemDetailsByGuid(id);
    return View();
}
Strictly speaking, I don't believe CQRS has a precise hard and fast rule about command handlers not returning values. Greg Young even mentions Martin Fowler's stack.pop() anecdote as a valid counter-example to the rule.
CQS - Command Query Separation, upon which CQRS is based - by Bertrand Meyer does have that rule, but it takes place in a different context and has exceptions, one of which can be interesting for the question at hand.
CQS reasons about objects and the kinds of instructions the routine (execution context) can give them. When issuing a command, there's no need for a return value because the routine already has a reference to the object and can query it whenever it likes as a follow-up to the command.
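As a toy illustration of that point (not an example from Meyer's book), the routine can follow its command with a query on the same object:

// CQS in miniature: the command returns nothing, the query changes nothing.
class Counter {
    private int value;

    void increment() { value++; }  // command
    int value() { return value; }  // query
}

class Demo {
    public static void main(String[] args) {
        Counter counter = new Counter();
        counter.increment();                  // issue the command, no return value...
        System.out.println(counter.value());  // ...then query the object for the outcome
    }
}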
Still, an important distinction made by Meyer in CQS is the one between a command sent to a known existing object and an instruction that creates an object and returns it.
Functions that create objects

A technical point needs to be clarified before we examine further consequences of the Command-Query Separation principle: should we treat object creation as a side effect?

The answer is yes, as we have seen, if the target of the creation is an attribute a: in this case, the instruction !! a changes the value of an object's field. The answer is no if the target is a local entity of the routine. But what if the target is the result of the function itself, as in !! Result or the more general form !! Result.make (...)?

Such a creation instruction need not be considered a side effect. It does not change any existing object and so does not endanger referential transparency (at least if we assume that there is enough memory to allocate all the objects we need). From a mathematical perspective we may pretend that all of the objects of interest, for all times past, present and future, are already inscribed in the Great Book of Objects; a creation instruction is just a way to obtain one of them, but it does not by itself change anything in the environment. It is common, and legitimate, for a function to create, initialize and return such an object.

(in Object Oriented Software Construction, p. 754)
In other places in the book, Meyer defines this kind of functions as creator functions.
As CQRS is an extension of CQS and maintains the viewpoint that [Commands and Queries] should be pure, I would tend to say that exceptions that hold for CQS are also true in CQRS.
In addition, one of the main differences between CQS and CQRS is the reification of the Command and Query into objects of their own.
In CQRS, there's an additional level of indirection, the "routine" doesn't have a direct reference to the domain object. Object lookup and modification are delegated to the command handler. It weakens, IMO, one of the original reasons that made the "Commands return nothing" precept possible, because the context now can't check the outcome of the operation on its own - it's basically left high and dry until some other object lets it know about the result.
Some ideas:
Let your command handlers return values. This is the simplest option: just return what was created inside the entity (see the sketch after this list). There is some disagreement about whether this is 'allowed' in CQRS, though.
The preferred approach is to create your defaults (e.g. the id) up front and pass them into your command. For example, in https://github.com/gregoryyoung/m-r/blob/master/CQRSGui/Controllers/HomeController.cs, the Add method creates a Guid and passes it into the CreateInventoryItem command; this value could then be returned in the response. This can get quite ugly if you have lots of things to pass in, though.
If you can't do 1 or 2, you could try some async way of handling this, but you haven't said what your use case is, so it's difficult to advise. If you're using some sort of socket technology, you could go sync-over-async: return immediately, then push the value back to the client once the entity has been created. You could also have some sort of workflow where you accept the command and then poll / query the thing being created.
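As a rough Java sketch of option 1 (all names are illustrative): the handler simply returns the values the entity's constructor generated.

import java.time.Instant;
import java.util.UUID;

// The entity generates its id and createdAt in the constructor.
class Order {
    private final UUID id = UUID.randomUUID();
    private final Instant createdAt = Instant.now();

    UUID id() { return id; }
    Instant createdAt() { return createdAt; }
}

// The command carries only the caller's intent; fields omitted for brevity.
final class CreateOrder {
}

// A small result type carrying the generated values back to the caller.
final class CreateOrderResult {
    final UUID id;
    final Instant createdAt;

    CreateOrderResult(UUID id, Instant createdAt) {
        this.id = id;
        this.createdAt = createdAt;
    }
}

class CreateOrderHandler {
    CreateOrderResult handle(CreateOrder command) {
        Order order = new Order();
        // persist the order here, then hand the generated values back
        return new CreateOrderResult(order.id(), order.createdAt());
    }
}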
According to my understanding of CQRS, you cannot query the aggregate, and the command handler cannot return any value. The only permitted way of interrogating the aggregate is by listening to the raised events. You can do that by simply querying the read model, provided the changes are reflected synchronously from the events to the read model.
In the case where the changes to the read model are asynchronous, things get complicated, but solutions exist.
Note: the "command handler" in my answer is the method on the Aggregate, not some Application layer service.
What I ended up doing is creating a third type: CommandQuery. Everything gets pushed into either a Command or a Query whenever possible, but when you have a scenario where running the command produces data you need, as above, simply turn to the CommandQuery. That way, you know these are special circumstances -- like needing an auto-generated id from a create, or something back from a stack pop -- and you have a clear, easy way to deal with them without the overhead that creating some random dummy GUID or relying on events (difficult when you are in a web request context) would cause. In a business setting, you could then discuss as a team whether a CommandQuery is really warranted for the situation.
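Sketched as marker interfaces in Java (the names are mine and purely illustrative):

import java.util.UUID;

// Three explicit message categories: Commands mutate and return nothing,
// Queries return a result without side effects, and CommandQuery is the
// deliberate, visible exception that does both.
interface Command {}
interface Query<R> {}
interface CommandQuery<R> {}

// Example: a creation that must hand back its generated id.
final class CreateInvoice implements CommandQuery<UUID> {
    final String customerId;

    CreateInvoice(String customerId) {
        this.customerId = customerId;
    }
}

Because CommandQuery is a distinct type, each usage stands out in code review, which supports the team discussion described above.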
In Vaughn Vernon's Implementing Domain-Driven Design book, he describes the use of a factory method on an Aggregate Root. One example is that of a Forum aggregate root which has a startDiscussion factory method returning a Discussion aggregate root.
public class Forum extends Entity {
    ...

    public Discussion startDiscussion(
            DiscussionId aDiscussionId, Author anAuthor, String aSubject) {
        if (this.isClosed()) {
            throw new IllegalStateException("Forum is closed.");
        }
        Discussion discussion = new Discussion(
            this.tenant(), this.forumId(), aDiscussionId, anAuthor, aSubject);
        DomainEventPublisher.instance().publish(new DiscussionStarted(...));
        return discussion;
    }
}
How would one implement this factory pattern in an event sourcing system, specifically in Axon?
I believe conventionally, it may be implemented in this way:
StartDiscussionCommand -> DiscussionStartedEvent -> CreateDiscussionCommand -> DiscussionCreatedEvent
We fire a StartDiscussionCommand to be handled by the Forum; the Forum then publishes a DiscussionStartedEvent. An external event handler would catch the DiscussionStartedEvent, convert it, and fire a CreateDiscussionCommand. Another handler would instantiate a Discussion using the CreateDiscussionCommand, and Discussion would fire a DiscussionCreatedEvent.
Alternatively, can we instead have:
StartDiscussionCommand -> CreateDiscussionCommand -> DiscussionCreatedEvent
We fire a StartDiscussionCommand, which would trigger a command handler and invoke Forum's startDiscussion() method, which in turn returns the CreateDiscussionCommand. The handler then dispatches this CreateDiscussionCommand. Another handler receives the command and uses it to instantiate Discussion. Discussion would then fire the DiscussionCreatedEvent.
The first practice involves 4 DTOs, whilst the second one involves only 3 DTOs.
Any thoughts on which practice should be preferred? Or is there another way to do this?
The best approach to a problem like this, is to consider your aggregates (in fact, the entire system) as a black box first. Just look at the API.
Given a Forum (that is not closed),
When I send a StartDiscussionCommand for that forum,
A new Discussion is started.
But also
Given a Forum that was closed
When I send a CreateDiscussionCommand for that forum,
An exception is raised
Note that the API you suggested is too technical. In 'real life', you don't create a discussion, you start one.
This means state of the forum is involved in the creation of the discussion. So ideally (when looking into the black box), such a scenario would be implemented in the Forum aggregate, and apply an event which represents the creation event for a Discussion aggregate. This is under the assumption that other factors require the Forum and Discussion to be two distinct aggregates.
So you don't really want the Command handler to return/send a command, you want that handler to make a decision on whether to create an aggregate or not.
Unfortunately, Axon doesn't support this feature yet. At the moment, Axon cannot apply an event that belongs to another aggregate through its regular APIs.
However, there is a way to get it done. In Axon 3, you don't have to apply an Event, you can also publish one directly to the Event Bus (which in the case of Event Sourcing would be an Event Store implementation). So to implement this, you could directly publish a DomainEventMessage that contains a DiscussionCreatedEvent. The ID for the discussion can be any UUID and the sequence number of the event is 0, as it is the creation event of the discussion.
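A rough sketch of that workaround, assuming Axon 3's GenericDomainEventMessage and EventBus types; exact package locations and signatures vary across Axon versions, and the payload class and identifiers here are illustrative:

import java.util.UUID;

import org.axonframework.eventhandling.EventBus;
import org.axonframework.eventsourcing.DomainEventMessage;
import org.axonframework.eventsourcing.GenericDomainEventMessage;

// Illustrative event payload for the newly created Discussion.
class DiscussionCreatedEvent {
    final String discussionId;

    DiscussionCreatedEvent(String discussionId) {
        this.discussionId = discussionId;
    }
}

class StartDiscussionHandler {
    private final EventBus eventBus;

    StartDiscussionHandler(EventBus eventBus) {
        this.eventBus = eventBus;
    }

    void startDiscussion() {
        String discussionId = UUID.randomUUID().toString();
        DomainEventMessage<DiscussionCreatedEvent> message =
                new GenericDomainEventMessage<>(
                        "Discussion",    // type of the target aggregate
                        discussionId,    // identifier of the new aggregate
                        0L,              // creation event, so sequence number 0
                        new DiscussionCreatedEvent(discussionId));
        eventBus.publish(message);
    }
}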
Any thoughts on which practice should be preferred?
The motivation for a command is to direct the application to update the book of record. A command that you don't expect to produce an event is pretty weird.
That is, if your flow is
Forum.startDiscussion -> []
Discussion.create -> [ DiscussionCreated ]
One is bound to ask: why is the Forum involved at all?
if (this.isClosed()) {
    throw new IllegalStateException("Forum is closed.");
}
This here is an illusion -- we're looking at the state of the Forum at some arbitrary point in the past to process the Discussion command. In other words, after this check, the state of Forum could change, and our processing in Discussion would not know. So it would be just as correct to make this check when validating the command, or by checking the read model from within Discussion.
(Everything we get from the book of record is a representation of the past; it has to be, in order to already be in the book of record for us to read. The only moment that we act in the present is the point where we update the book of record. More precisely, it's at the moment of the write that we discover whether the assumptions we've made about the past still hold. When we write the changes to Discussion, we are proving that Discussion hasn't changed since we read the data; but that tells us nothing of whether Forum has changed.)
What command->command looks like is an API compatibility adapter: in the old API, we used a Forum.startDiscussion command. We changed the model but continue to support the old command for backwards compatibility. It would all still be synchronous with the request.
That's a real thing (we want the design to support aggressive updates to the model without requiring that clients/consumers be constantly updating), but it's not a good fit for your process flow.
I'm used to developing software by creating "domain entities" -- these entities depend only on other entities inside the domain.
Let's say I have this interface
package domain;

import domain.beans.City;

public interface CitiesRepository {
    City get(String cityName);
}
As you can see, the City I am returning is again a domain object. Implementations of this CitiesRepository can be found outside the domain, and can rely upon a database, an http client, a cached decorator etc.
I am now working with a reactive framework -- vert.x -- and I am trying to understand how I can keep working with such a model. I don't want a vert.x-specific answer, only to understand whether there is any pattern/best practice for achieving this.
In reactive programming there is almost never a return value, but always some callback/handler that will consume the event after it happens. Should I rewrite my interfaces to be "reactive"?
package domain;

import domain.beans.City;

public interface CitiesRepository {
    void get(String cityName, DomainHandler<City> cityHandler);
}
Just writing this example gave me some spaghetti-headache when thinking about the implementations, where I would have to deal with the Handler of the "reactive framework" to "fill" my domain handler.
Should I stop thinking in this kind of design when working with the reactive model? Should I prefer an Observable/Promise approach?
Any hint would be really appreciated
In the reactive systems I've been involved with there has been an event handler that would then use the repository:
public class SomeEventHandler : IHandle<SomeEvent> {
    private readonly CityRepository _cityRepository;
    public SomeEventHandler(CityRepository repo) {
        _cityRepository = repo;
    }
}
You would then use your existing repository inside the handler code:
public void When(SomeEvent evt) {
    var city = _cityRepository.Get(evt.CityName);
    // do something with city
}
In the CompositionRoot of the application, the handler would be registered to handle the event through whatever messaging bus / reactive stream etc. will be receiving / producing the event.
So I wouldn't look to make the repository reactive, rather add in an event handler to use it.
With reactive design you add a layer of indirection in how the API is invoked, and you specify that in addition to your original vanilla spec. The reason is that in async design it matters a lot how you invoke stuff, and it is not always one-size-fits-all, so it is better not to make big decisions early, or to bind "what it is/does" to "how it does it".
There are three common tools for making things asynchronous:
future/promise
callback
message passing
Future/promise is the most binding of the three in terms of the whole design and is usually the hairiest on the implementation side: you need to do a lot of work to prevent ABA bugs in your design and to track futures which are still running something whose results no one needs anymore. Yes, they abstract away concurrency, are monadic, etc., but they make you their hostage the moment you add the first one, and they are quite hard to get rid of.
Callbacks are the fastest in a single process, but to make them work with an actor-based system or across the wire you will inevitably end up using messages. Moreover, the moment you need your first state machine, you need an event queue and messages right away. So, to be most future-proof, the safest path is to just go with messages. Moving between messages and callbacks is very straightforward (when possible at all), given how simple both mechanisms are.
A protocol to lookup city by key could be something like this:
protocol Requests
message GetCityRequest(name): Requests
protocol Responses
message GetCityResponse(cityMaybe): Responses
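In Java, those messages could be plain immutable classes (mirroring the pseudocode above; City stands in for the domain bean from the question):

import java.util.Optional;

// Stand-in for the domain.beans.City entity from the question.
class City {
    final String name;

    City(String name) {
        this.name = name;
    }
}

// Request and response are plain immutable messages; the response carries
// a "maybe", since the requested city may not exist.
final class GetCityRequest {
    final String name;

    GetCityRequest(String name) {
        this.name = name;
    }
}

final class GetCityResponse {
    final Optional<City> cityMaybe;

    GetCityResponse(Optional<City> cityMaybe) {
        this.cityMaybe = cityMaybe;
    }
}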
But knowing this topic really well, I'd say: invest in the "state replication pattern" in its generic form and use it for both simple static lookups and dynamic subscriptions. It is not hard to get right, and it will be your main workhorse for most of your system's needs.
I'm looking for some advice on how concerned I should be about avoiding the anemic domain model. We are just starting with DDD and are struggling with analysis paralysis over simple design decisions. The latest point we are stuck on is where certain business logic belongs. For example, we have an Order object, which has properties like Status, etc. Now say I have to perform a command like UndoLastStatus because someone made a mistake with an order; this is not as simple as just changing the Status, as other information has to be logged and properties changed. In the real world this is a pure administration task. So the way I see it, I have two options:
Option 1: Add the method to Order, so something like Order.UndoLastStatus(). While this kind of makes sense, it doesn't really reflect the domain. Also, Order is the primary object in the system, and if everything involving the order is placed in the Order class, things could get out of hand.
Option 2: Create a Shop object and, with it, have different services which represent different roles. So I might have Shop.AdminService, Shop.DispatchService, and Shop.InventoryService. In this case I would have Shop.AdminService.UndoLastStatus(Order).
With the second option we have something which reflects the domain much more closely, and which would allow developers to talk to business experts about roles that actually exist. But it is also heading toward an anemic model. Which would be the better way to go in general?
Option 2 would lead to procedural code for sure.
Might be easier to develop, but much harder to maintain.
Now in the real world this is a pure administration task
"Administration" tasks should be private and invoked through public, fully "domain`ish" actions. Preferably - still written in easy to understand code that is driven from domain.
As I see it - problem is that UndoLastStatus makes little sense to domain expert.
More likely they are talking about making, canceling and filling orders.
Something along these lines might fit better:
class Order{
    void CancelOrder(){
        Status=Status.Canceled;
    }
    void FillOrder(){
        if(Status==Status.Canceled)
            throw new Exception();
        Status=Status.Filled;
    }
    static Order Make(){
        return new Order();
    }
    Order(){
        Status=Status.Pending;
    }
}
I personally dislike usage of "statuses", they are automatically shared to everything that uses them - i see that as unnecessary coupling.
So I would have something like this:
class Order{
    void CancelOrder(){
        IsCanceled=true;
    }
    void FillOrder(){
        if(IsCanceled) throw new Exception();
        IsFilled=true;
    }
    static Order Make(){
        return new Order();
    }
    Order(){
        IsPending=true;
    }
}
For changing related things when the order state changes, the best bet is to use so-called domain events.
My code would look along these lines:
class Order{
    void CancelOrder(){
        IsCanceled=true;
        Raise(new Canceled(this));
    }
    //usage of nested classes for events is my homemade convention
    class Canceled:Event<Order>{
        Canceled(Order order):base(order){}
    }
}
class Customer{
    private void BeHappy(){
        Console.WriteLine("hooraay!");
    }
    //nb: nested class can see privates of Customer
    class OnOrderCanceled:IEventHandler<Order.Canceled>{
        void Handle(Order.Canceled e){
            //caveat: this approach needs order->customer association
            var order=e.Source;
            order.Customer.BeHappy();
        }
    }
}
If Order grows too huge, You might want to check out what bounded contexts are (as Eric Evans says, if he had the chance to write his book again, he would move bounded contexts to the very beginning).
In short - it's a form of decomposition driven by domain.
The idea is relatively simple: it is OK to have multiple Orders from different viewpoints, aka contexts.
E.g. an Order from the Shopping context and an Order from the Accounting context.
namespace Shopping{
    class Order{
        //association with shopping cart
        //might be vital for shopping but completely irrelevant for accounting
        ShoppingCart Cart;
    }
}

namespace Accounting{
    class Order{
        //something specific only to accounting
    }
}
But usually the domain itself avoids complexity and is easily decomposable if You listen to it closely enough. E.g. You might hear from experts terms like OrderLifeCycle, OrderHistory, or OrderDescription that You can leverage as anchors for decomposition.
NB: Keep in mind - I got zero understanding about Your domain.
It's quite likely that those verbs I'm using are completely strange to it.
I would be guided by the GRASP principles. Apply the Information Expert design principle; that is, assign the responsibility to the class that naturally has the most information required to fulfill the change.
In this case, since changing the order status involves other entities, I would make each of these low-level domain objects support a method to apply the change with respect to itself. Then also use a domain service layer as you describe in option 2, that abstracts the whole operation, spanning multiple domain objects as needed.
Also see the Facade pattern.
I think having a method like UndoLastStatus on the Order class feels a bit wrong because the reasons for its existence are in a sense outside of the scope of an order. On the other hand, having a method which is responsible for changing the status of an order, Order.ChangeStatus, fits nicely as a domain model. The status of an order is a proper domain concept and changing that status should be done through the Order class, since it owns the data associated with an order status - it is the responsibility of the Order class to keep itself consistent and in a proper state.
Another way to think of it is that the Order object is what's persisted to the database and it is the 'last stop' for all changes applied to an Order. It is easier to reason about what a valid state for an order might be from the perspective of an Order rather than from the perspective of an external component. This is what DDD and OOP are all about, making it easier for humans to reason about code. Furthermore, access to private or protected members may be required to execute a state change, in which case having the method be on the order class is a better option. This is one of the reasons why anemic domain models are frowned upon - they shift the responsibility of keeping state consistent away from the owning class, thereby breaking encapsulation among other things.
One way to implement a more specific operation such as UndoLastStatus would be to create an OrderService which exposes the domain and is how external components operate upon the domain. Then you can create a simple command object like this:
class UndoLastStatusCommand {
    public Guid OrderId { get; set; }
}
And the OrderService would have a method to process that command:
public void Process(UndoLastStatusCommand command) {
    using (var unitOfWork = UowManager.Start()) {
        var order = this.orderRepository.Get(command.OrderId);
        if (order == null)
            throw new InvalidOperationException("Order not found.");
        // operate on domain to undo last status
        unitOfWork.Commit();
    }
}
So now the domain model for Order exposes all of the data and behavior that correspond to an Order, while the OrderService, and the service layer in general, declares the different kinds of operations that are performed on an order and exposes the domain for use by external components, such as the presentation layer.
Also consider looking into the concept of domain events, which addresses anemic domain models and ways of improving them.
It sounds like you are not driving this domain from tests. Take a look at the work of Rob Vens, especially his work on exploratory modeling, time inversion and active-passive.