I'm used to developing software by creating "domain entities" -- these entities depend only on other entities inside the domain.
Let's say I have this interface
package domain;
import domain.beans.City;
public interface CitiesRepository {
City get(String cityName);
}
As you can see, the City I am returning is again a domain object. Implementations of this CitiesRepository can be found outside the domain and can rely upon a database, an HTTP client, a caching decorator, etc.
I am now working with a reactive framework -- vert.x -- and I am trying to understand how I can keep working with such a model. I don't want a Vert.x-specific answer, only to understand whether there is any pattern/best practice for achieving this.
In reactive programming there is almost never a return value; instead there is some callback/handler that consumes the event after it happens. Should I rewrite my interfaces to be "reactive"?
package domain;
import domain.beans.City;
public interface CitiesRepository {
void get(String cityName, DomainHandler<City> cityHandler);
}
Just writing this example gave me a spaghetti-headache when thinking about the implementations, where I have to deal with the reactive framework's Handler to "fill" my domain handler.
Should I stop thinking in this kind of design when working with reactive model? Should I prefer an Observable/Promise approach?
Any hint would be really appreciated.
In the reactive systems I've been involved with there has been an event handler that would then use the repository:
public class SomeEventHandler : IHandle<SomeEvent> {
private readonly CityRepository _cityRepository;
public SomeEventHandler(CityRepository repo) { _cityRepository = repo; }
}
You would then use your existing repository inside the handler code:
public void When(SomeEvent event) {
var city = _cityRepository.Get(event.CityName);
// do something with city
}
In the CompositionRoot of the application, the handler would be registered to handle the event through whatever messaging bus / reactive stream etc. will be receiving / producing the event.
So I wouldn't look to make the repository reactive, rather add in an event handler to use it.
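To make the wiring concrete, here is a minimal, framework-agnostic sketch of that composition root in Java. The EventBus, SomeEvent, and the map-backed "repository" are illustrative stand-ins I made up, not types from Vert.x or any messaging library:

```java
import java.util.*;
import java.util.function.Consumer;

// Sketch: an in-memory event bus wired up in a composition root.
public class CompositionRootSketch {
    record SomeEvent(String cityName) {}

    static class EventBus {
        private final Map<Class<?>, List<Consumer<Object>>> handlers = new HashMap<>();

        <E> void subscribe(Class<E> type, Consumer<E> handler) {
            handlers.computeIfAbsent(type, k -> new ArrayList<>())
                    .add(e -> handler.accept(type.cast(e)));
        }

        void publish(Object event) {
            handlers.getOrDefault(event.getClass(), List.of())
                    .forEach(h -> h.accept(event));
        }
    }

    static List<String> demo() {
        Map<String, String> repo = Map.of("Milan", "IT"); // stands in for CityRepository
        List<String> handled = new ArrayList<>();
        EventBus bus = new EventBus();
        // Composition root: the handler receives its repository dependency
        // here and is registered against the event type it handles.
        bus.subscribe(SomeEvent.class,
                e -> handled.add(e.cityName() + " -> " + repo.get(e.cityName())));
        bus.publish(new SomeEvent("Milan"));
        return handled;
    }

    public static void main(String[] args) {
        assert demo().equals(List.of("Milan -> IT"));
    }
}
```

The repository interface itself stays synchronous and framework-free; only the bus and the handler registration know anything about the reactive machinery.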
With reactive design you add a layer of indirection in how the API is invoked, and you specify that in addition to your original vanilla spec. The reason is that in async design it matters a lot how you invoke things, and it is not one-size-fits-all, so it is better not to make big decisions early, or to bind "what it is/does" to "how it does it".
There are three common tools for making things asynchronous:
future/promise
callback
message passing
Future/promise is the most binding of the three in terms of the whole design and is usually the hairiest on the implementation side: you need to do a lot of work to prevent ABA bugs in your design and to track futures which are still running something whose results no one needs anymore. Yes, they abstract away concurrency, are monadic, etc., but they make you their hostage the moment you add the first one, and they are quite hard to get rid of.
Callbacks are the fastest within a single process, but to make them work with an actor-based system or across the wire you will inevitably end up using messages. Moreover, the moment you need your first state machine you need an event queue and messages right away. So, to be most future-proof, the safest path is to just go with messages. Moving between messages and callbacks is very straightforward (when possible at all) because both mechanisms are so simple.
A protocol to lookup city by key could be something like this:
protocol Requests
message GetCityRequest(name): Requests
protocol Responses
message GetCityResponse(cityMaybe): Responses
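Rendered as plain Java, that protocol might look like the sketch below. The message names mirror the pseudocode above; the Optional-based response and the direct handle call are illustrative -- in a real system the request would travel through a mailbox or queue rather than a method call:

```java
import java.util.*;

// Sketch of the GetCityRequest/GetCityResponse protocol as immutable messages.
public class CityProtocol {
    record GetCityRequest(String name) {}
    record GetCityResponse(Optional<String> cityMaybe) {}

    // The receiving side: service a request from some lookup table.
    static GetCityResponse handle(GetCityRequest req, Map<String, String> cities) {
        return new GetCityResponse(Optional.ofNullable(cities.get(req.name())));
    }

    static boolean demo() {
        Map<String, String> cities = Map.of("Milan", "Italy");
        // A hit carries the city; a miss carries an empty Optional,
        // so "not found" is part of the protocol, not an exception.
        return handle(new GetCityRequest("Milan"), cities).cityMaybe().isPresent()
            && handle(new GetCityRequest("Atlantis"), cities).cityMaybe().isEmpty();
    }

    public static void main(String[] args) {
        assert demo();
    }
}
```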
But knowing this topic really well, I'd say invest in the "state replication pattern" in generic form and use it for both simple static lookups and dynamic subscriptions. It is not hard to get right, and it will be the main workhorse for most of your system's needs.
I'm implementing a project in DDD using hexagonal architecture.
I'm trying to follow all the best practices, avoiding an anemic domain model, but I came across this problem.
I have an entity Client that has a property priority; to calculate this priority I use a REST call to an external API.
I'd like to avoid a setPriority method on the entity, so I came up with this solution, but I'm not sure it is actually "DDD compliant".
Here in pseudo code:
//DOMAIN
interface PriorityService {
public int exec(Client client);
}
class Client {
....
public void calculatePriority(PriorityService service) {
this.priority = service.exec(this);
}
}
//INFRASTRUCTURE
class ThirdPartyPriorityService implements PriorityService {
...
public int exec(Client client) {
int priority = this.externalApi(...);
return priority;
}
}
I find this approach really simple and effective for integrating a third-party service: the entity is solely responsible for the coherence of its internal state, yet I can integrate different systems with ease.
Is there a better way to do it (preferably without the usage of a setter)?
Is there a better way to do it (preferably without the usage of a setter)?
Maybe. Consider:
Design is what we do to get more of what we want than we would get by just doing it -- Ruth Malan
Whether there is a "better way to do it" is going to depend on the "what we want" in your context.
When Eric Evans described ddd in 2003, one of the things he wanted was to separate the domain code from "the plumbing" -- we should be able to describe the business without our code being polluted by database strings, and transactions, and so on.
public void calculatePriority(PriorityService service) {
this.priority = service.exec(this);
}
Riddle: how, in this design, do you deal with the fact that the network is unreliable? If your solution is going to be resilient, then you are going to need logic somewhere that ensures that the system behaves gracefully when the network is unavailable, the remote server is offline for maintenance, credentials go stale, and so on....
Expressing that same idea a different way: fetching an HTTP response from a web server is I/O, and I/O "contaminates" everything that touches it. Therefore ThirdPartyPriorityService is I/O, therefore Client::calculatePriority is I/O.
Same idea again: our HTTP client necessarily lives in the "imperative shell" (Bernhardt 2012), and by injecting the client into our domain code, we are in effect making our domain code part of the imperative shell.
An alternative approach is to recognize that ThirdPartyPriorityService is retrieving information from some remote location, and maybe that concern really belongs in the application rather than in the domain -- your domain code cares about what the information is, but not where it came from.
So you could, for example, change your implementation so that the application invokes the priority service, then passes the priority to the domain model. See Cory Benfield's 2016 talk on protocol libraries.
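A sketch of what that restructuring might look like. PriorityService, assignPriority, and recalculatePriority are illustrative names I've chosen, not from the original code:

```java
// Sketch: the application service does the I/O; the entity only receives
// the computed value and enforces its own invariants.
public class PrioritySketch {
    interface PriorityService {          // implemented in infrastructure
        int priorityFor(String clientId);
    }

    static class Client {
        private final String id;
        private int priority;
        Client(String id) { this.id = id; }
        // The domain cares what the priority is, not where it came from.
        void assignPriority(int priority) {
            if (priority < 0) throw new IllegalArgumentException("negative priority");
            this.priority = priority;
        }
        int priority() { return priority; }
        String id() { return id; }
    }

    // Application layer: orchestrate the I/O, then talk to the domain.
    static void recalculatePriority(Client client, PriorityService service) {
        int p = service.priorityFor(client.id());   // I/O happens out here
        client.assignPriority(p);                   // pure domain logic in here
    }

    static int demo() {
        Client c = new Client("42");
        recalculatePriority(c, id -> 7);  // stub service instead of a REST call
        return c.priority();
    }

    public static void main(String[] args) {
        assert demo() == 7;
    }
}
```

Note how the entity can still reject nonsense values, so it remains more than an anemic setter, while all network concerns stay in the imperative shell.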
Designing your code this way isn't better or worse than the other; it's a different set of tradeoffs.
if we failed to make our program align with what we then understood to be the proper way to think about our financial objects, then we were gonna continually stumble over that disagreement and that would slow us down -- Ward Cunningham, 2009
If the code actually works the way you think it works when you aren't looking at it, your estimates about changing it are likely to be better. That might be a valuable "what we want".
I implemented event-sourced entities (in domain-driven design these are called aggregates). It's good practice to create a rich domain model, and domain-driven design (DDD) suggests putting all business-related things, where possible, into core entities and value objects.
But there is an issue when combining this approach with event sourcing. In contrast to traditional approaches, in an event-sourced system events are stored first, and later all events are applied when rebuilding the entity in order to execute some method.
Based upon that, the big question is where to put the business logic. Usually, I would like to have a method like:
public void addNewAppointment(...)
In this case, I would expect that the method makes sure that no business rules are violated. If this is the case an exception would be thrown.
But when using event sourcing I would have to create an event:
Event event = new AppointmentAddedEvent(...);
eventStore.save(event);
Right now, I have explored two approaches to checking business rules before storing the event.
First, check business rules within the application layer. The application layer in DDD is a delegation layer; it should contain no business logic. It should only delegate things like fetching core entities, calling methods on them, and saving things back. In this example that rule would be violated:
List<Event> events = store.getEventsForConference(id);
// all events are applied to create the conference entity
Conference conf = factory.build(events);
if(conf.getState() == CANCELED) {
throw new ConferenceClosed();
}
Event event = new AppointmentAddedEvent(...);
eventStore.save(event);
Obviously, the business rule (adding appointments to canceled conferences should not be possible) has leaked into a non-core component.
The second approach I know of is to add command-processing methods to core entities:
class Conference {
// ...
public List<Event> process(AddAppointmentCommand command) {
if(this.state == CANCELED) {
throw new ConferenceClosed();
}
return Arrays.asList(new AppointmentAddedEvent(...));
}
// ...
}
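To illustrate how that second approach hangs together, here is a runnable sketch of the application layer driving such an entity, with a plain list standing in for the event store. The class and event names are simplified stand-ins for those above:

```java
import java.util.*;

// Sketch: rebuild the aggregate from history, let it decide, append its events.
public class ProcessCommandSketch {
    enum State { OPEN, CANCELED }
    interface Event {}
    static class AppointmentAdded implements Event {}
    static class ConferenceCanceled implements Event {}
    static class ConferenceClosed extends RuntimeException {}

    static class Conference {
        private State state = State.OPEN;

        static Conference from(List<Event> history) {
            Conference c = new Conference();
            history.forEach(c::apply);
            return c;
        }

        private void apply(Event e) {
            if (e instanceof ConferenceCanceled) state = State.CANCELED;
        }

        // The business rule lives inside the entity, as in the second approach.
        List<Event> process(/* AddAppointmentCommand */) {
            if (state == State.CANCELED) throw new ConferenceClosed();
            return List.of(new AppointmentAdded());
        }
    }

    static int demo() {
        List<Event> history = new ArrayList<>();      // the "event store"
        Conference conf = Conference.from(history);
        history.addAll(conf.process());               // rule enforced by the entity
        return history.size();
    }

    static boolean demoCanceledThrows() {
        List<Event> canceled = List.of(new ConferenceCanceled());
        try {
            Conference.from(canceled).process();
            return false;
        } catch (ConferenceClosed expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        assert demo() == 1;
        assert demoCanceledThrows();
    }
}
```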
In this case, the benefit is that the business rules are part of the core entity. But there is a violation of the separation-of-concerns principle: the entity is now responsible for creating events that are stored in an event store. Besides that, it feels weird that an entity is responsible for creating events. I can see arguments for why it's natural that an entity processes events, but creating domain events for storage, rather than for natural publishing, feels wrong.
Did anyone of you experience similar issues? And how did you solve these?
For now, I will just go with the business-rules-within-the-application-service solution. It is still in one place and OK-ish, but it violates some DDD principles.
I am looking forward to your ideas and experiences about DDD, event sourcing and the validation of incoming changes.
Thanks in advance
I love this question. When I first asked it, that was the break between just following the patterns and challenging myself to understand what is really going on.
the big question is where to put the business logic
The usual answer is "the same place you did before" -- in methods of the domain entities. Your "second approach" is the usual idea.
But there is a violation of separation of concerns principle.
It isn't really, but it certainly looks weird.
Consider what we normally do, when saving off current state. We run some query (usually via the repository) to get the original state out of the book of record. We use that state to create an entity. We then run the command, in which the entity creates new state. We then save the object in the repository, which replaces the original state with the new state in the book of record.
In code, it looks something like
state = store.get(id)
conf = ConferenceFactory.build(state)
conf.state.appointments.add(...)
store.save(id, conf.state)
What we are really doing in event sourcing is replacing a mutable state with a persistent collection of events
history = store.get(id)
conf = ConferenceFactory.build(history)
conf.history.add(AppointmentScheduled(...))
store.save(id, conf.history)
In mature business domains, like accounting or banking, the ubiquitous language includes event histories: journal, ledger, transaction history... that sort of thing. In those cases, event histories are an inherent part of the domain.
In other domains -- like calendar scheduling -- we don't (yet?) have analogous entities in the domain language, so it feels like we are doing something weird when we change to events. But the core pattern is the same -- we pull history out of the book of record, we manipulate that history, we save the updates to the book of record.
So the business logic happens in the same place that it always did.
Which is to say that yes, the domain logic knows about events.
An exercise that may help: let go of the "object oriented" constraint, and just think in terms of functions....
static final List<Event> scheduleAppointment(List<Event> history, AddAppointmentCommand addAppointment) {
var state = state(history);
if (state == CANCELED) {
throw new ConferenceClosed();
}
return Arrays.asList(new AppointmentAddedEvent(...));
}
private static final State state(List<Event> history) {...}
I have flicked through a few popular event-sourcing frameworks written in a variety of common languages. I got the impression that all of them affect the domain model to a really high degree. As far as I understand, ES is just an infrastructure concern -- a way of persisting aggregate state. Of course, it facilitates message-driven inter-context integration, but from the core domain's point of view it is negligible. I consider commands and events to be part of the domain itself, so it looks perfectly fine that an aggregate creates events (but does not publish them) or handles commands.
The problem is that all of the DDD building blocks tend to be polluted by the ES framework. Events must inherit from some base class. Aggregates, at the least, are supposed to implement foreign interfaces. I wonder whether domain models should even be aware that the application uses an ES approach. In my opinion, even the necessity of providing apply() methods indicates that another layer is shaping our domain.
How do you approach this issue in your projects?
My answer applies only when CQRS is involved (write and read models are split and they communicate using domain events).
As far as I understand ES is just an infrastructure concern - a way of persisting aggregate state
Event sourcing is indeed an infrastructure concern, a kind of repository, but event-based aggregates are not. I consider them to be an architectural style, different from the classical style.
So, the fact that an aggregate, in reaction to a command, generates zero or more domain events that are applied onto itself in order to build the internal (private) state used to decide what events to generate in the future is just a different mode of thinking about and designing an aggregate. This is a perfectly valid style, alongside the classical style (the one using only objects, not events) or a functional programming style.
Event sourcing just means that every time a command reaches an aggregate, its entire internal state is rebuilt instead of being loaded from flat persistence. Of course there are other huge advantages (!) but they do not affect the design of an aggregate.
... but not publishes them ...
I like the frameworks that permit us to just return (or better, yield -- an aggregate's command methods are just generators!) the events.
Events must inherit from some base class
It's sad that some frameworks require that, but it is not necessary. In general, a framework needs some means of detecting an event class; however, it can be implemented to detect events by other means than marker interfaces. For example, the client (as in YOU) could provide a filter method that rejects non-event classes.
However, there is one thing that I couldn't avoid in my framework (yes, I know, I'm guilty, I have one): the Command interface with only one method: getAggregateId.
Aggregates at least are supposed to implement foreign interfaces.
Again, as with events, this is not a necessity. A framework could be given a custom, client-supplied event-applier-on-aggregates function, or a convention can be used (i.e. all event-applier methods have the form applyEventClassNameOrType).
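A sketch of that convention-based route, using reflection to find apply-methods by name. All names here (dispatch, Account, NameChanged) are illustrative, not from any particular framework:

```java
import java.lang.reflect.Method;

// Sketch: route events to "applyXxx" methods by naming convention,
// so the aggregate needs no marker interface from a framework.
public class ConventionApplier {
    static class NameChanged {
        final String name;
        NameChanged(String n) { name = n; }
    }

    static class Account {
        String name = "";
        // Convention: "apply" + the event class's simple name.
        void applyNameChanged(NameChanged e) { this.name = e.name; }
    }

    // The "framework" side: find and invoke the matching applier.
    static void dispatch(Object aggregate, Object event) {
        String target = "apply" + event.getClass().getSimpleName();
        for (Method m : aggregate.getClass().getDeclaredMethods()) {
            if (m.getName().equals(target)) {
                try {
                    m.invoke(aggregate, event);
                    return;
                } catch (ReflectiveOperationException ex) {
                    throw new RuntimeException(ex);
                }
            }
        }
        throw new IllegalArgumentException("no applier for " + target);
    }

    static String demo() {
        Account a = new Account();
        dispatch(a, new NameChanged("alice"));
        return a.name;
    }

    public static void main(String[] args) {
        assert demo().equals("alice");
    }
}
```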
I wonder if domain models should be even aware of using ES approach within an application
Not of ES, but event-based yes, so the apply method must still exist.
As far as I understand ES is just an infrastructure concern - a way of persisting aggregate state.
No, events are really core to the domain model.
Technically, you could store diffs in a domain-agnostic way. For example, you could look at an aggregate and say "here is the representation before the change, here is the representation after; we'll compute the difference and store that".
The difference between patches and events is that you switch from a domain-agnostic spelling to a domain-specific spelling. Doing that normally requires being intimate with the domain model itself.
The problem is that all of DDD building blocks tend to be polluted by ES framework.
Yup, there's a lot of crap framework in the examples you find in the wild. Sturgeon's Law at work.
Thinking about the domain model from a functional perspective can help a lot. At its core, the most general form of the model is a function that accepts current state as input and returns a list of events as output.
List<Event> change(State current)
From there, if you want to save current state, you just wrap this function in something that knows how to do the fold
State current = ...
List<Event> events = change(current)
State updated = State.fold(current, events)
Similarly, you can get current state by folding over the previous history
List<Event> savedHistory = ...
State current = State.reduce(savedHistory)
List<Event> events = change(current)
State updated = State.fold(current, events)
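The pseudocode above can be made runnable with a toy domain; in this sketch a simple counter stands in for the conference, and Incremented, change, and the fold/reduce helpers are my illustrative names:

```java
import java.util.*;

// Sketch: the fold/reduce pattern over an event history, with a toy domain.
public class FoldSketch {
    record Incremented(int by) {}

    record State(int total) {
        // Rebuild current state by folding over saved history.
        static State reduce(List<Incremented> history) {
            return fold(new State(0), history);
        }
        // Apply new events on top of a known state.
        static State fold(State current, List<Incremented> events) {
            State s = current;
            for (Incremented e : events) s = new State(s.total() + e.by());
            return s;
        }
    }

    // The "change" function: current state in, list of events out.
    static List<Incremented> change(State current) {
        return current.total() < 10 ? List.of(new Incremented(1)) : List.of();
    }

    static int demoUpdatedTotal() {
        List<Incremented> savedHistory = List.of(new Incremented(2), new Incremented(3));
        State current = State.reduce(savedHistory);   // rebuild from history
        List<Incremented> events = change(current);   // decide
        State updated = State.fold(current, events);  // apply
        return updated.total();
    }

    public static void main(String[] args) {
        assert demoUpdatedTotal() == 6;  // 2 + 3 from history, + 1 new event
    }
}
```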
Another way of saying the same thing; the "events" are already there in your (not event sourced) domain model -- they are just implicit. If there is business value in tracking those events, then you should replace the implementation of your domain model with one that makes those events explicit. Then you can decide which persisted representation to use independent of the domain model.
Core of my problem is that domain Event inherits from framework Event and aggregate implements some foreign interface (from framework). How to avoid this?
There are a couple of possibilities.
1) Roll your own: take a close look at the framework -- what is it really buying you? If your answer is "not much", then maybe you can do without it.
From what I've seen, the "win" of these frameworks tends to be in taking a heterogeneous collection of events and managing the routing for you. That's not nothing -- but it's a bit magic, and you might be happier having that code explicit, rather than relying on implicit framework magic.
2) Suck it up: if the framework is unobtrusive, then it may be more practical to accept the tradeoffs it imposes and live with them. To some degree, event frameworks are like object-relational mappers or databases: sure, in theory you should be able to swap them out freely. In practice? How often do you derive benefit from the investment in that flexibility?
3) Interfaces: if you squint a little bit, you can see that your domain behaviors don't usually depend on in memory representations, but instead on the algebra of the domain itself.
For example, in the domain model, we deposit Money into an Account updating its Balance. We don't typically care whether those are integers, or longs, or floats, or JSON documents. We can satisfy the model with any implementation that satisfies the constraints of the algebra.
So you can use the framework to provide the implementation (which also happens to have all the hooks the framework needs); the behavior just interacts with the interface it defined itself.
In a strongly typed implementation, this can get really twisty. In Java, for instance, if you want the strong type checks you need to be comfortable with the magic of generics and type erasure.
The real answer to this is that DDD is overrated. It is not true that you have to have one model to rule them all. You may have different views on the state of your world depending on your current needs: one part of the application has one view, another part a completely different view.
To put it another way, your model is not "what is", but "what happened so far". The actual data model of your application is the event stream itself. Everything else you derive from there.
I am building a bounded context using the CQRS pattern. I have prepared some queries and a handler for each. Now, in the presentation layer (an HTTP REST controller), I have to use them. The question is: should I inject each handler into this controller (there are 4 queries and handlers), or use a command bus and configure the right handler for each query?
Well, either way seems like it makes sense; that's probably a hint that you should be passing in an interface that expresses the contract that the Controller needs satisfied, and punt the implementation details down a level.
My guess is that you ultimately won't want the controller directly wired into the query handlers, because that will restrict your options in a number of use cases (example: congestion control and back pressure). But you won't know until much further down the road, so keep things loose.
In the past we've injected some sort of Query Executor into the controller and passed queries into that. eg,
public SomeController(IQueryExecutor queryExecutor) { ...
and then
var results = queryExecutor.Query(new GetSomeThings(args..));
The handler for GetSomeThings is wired up by the supporting infrastructure so you don't need to inject those. Does that make sense?
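For what it's worth, the same idea might look like this minimal sketch in Java. QueryExecutor, Query, and GetSomeThings are illustrative, not from a specific library:

```java
import java.util.*;
import java.util.function.Function;

// Sketch: the controller depends on one QueryExecutor; handlers are
// registered per query type by the supporting infrastructure.
public class QueryExecutorSketch {
    interface Query<R> {}
    record GetSomeThings(String filter) implements Query<List<String>> {}

    static class QueryExecutor {
        private final Map<Class<?>, Function<Object, Object>> handlers = new HashMap<>();

        <Q extends Query<R>, R> void register(Class<Q> type, Function<Q, R> handler) {
            handlers.put(type, q -> handler.apply(type.cast(q)));
        }

        @SuppressWarnings("unchecked")
        <R> R query(Query<R> q) {
            Function<Object, Object> h = handlers.get(q.getClass());
            if (h == null) throw new IllegalArgumentException("no handler for " + q.getClass());
            return (R) h.apply(q);
        }
    }

    static List<String> demo() {
        QueryExecutor executor = new QueryExecutor();
        // Wired up at startup by the infrastructure, not by the controller:
        executor.register(GetSomeThings.class, q -> List.of(q.filter() + "-thing"));
        // The controller only ever sees the executor:
        return executor.query(new GetSomeThings("red"));
    }

    public static void main(String[] args) {
        assert demo().equals(List.of("red-thing"));
    }
}
```

The controller takes one constructor dependency regardless of how many queries exist, which keeps the wiring loose in the way the answer above recommends.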
A word of warning about CQRS/CQS:
If you find yourself writing queries and handlers that are used at most once or twice, or commands that are dedicated to the controller that uses them, then perhaps CQRS/CQS is an abstraction you don't actually need to pay the extra complexity cost for.
My teams and I have found this to be true on a number of projects. Often CQRS/CQS is just another unnecessary abstraction, like a Repository which dispatches to an ORM library, or a 'Service' which has one- or two-line dispatches to a Repository, which...
Hopefully you get the point.
I like the Rule of Three -- don't try for reusability until you have more than three usages. And even then, don't jump deep into something heavy and prescriptive like CQRS/CQS if you don't need it. DDD is very cool, but you can pick and choose which elements of it make sense for you. Often much of it won't make sense when you apply pragmatic reasoning.
Just my 2 c.
We use event sourcing in our app and also have a strict need to track the user who initiated changes to many of our objects. Currently we have code like this
class Order {
void setNameBy(String newName, User user) {
applyChange(new OrderRenamed(user.id, newName));
}
// ...
}
Since most of our methods are like this and all of them are called like this
setNameBy("a new name", SessionContext.currentUser)
we were contemplating accessing the SessionContext inside the domain object, i.e.:
void setNameBy(String newName, User user) {
applyChange(new OrderRenamed(user.id, newName));
}
becomes
void setName(String newName) {
applyChange(new OrderRenamed(SessionContext.currentUser.id, newName));
}
I personally prefer the latter method signature, as it seems more natural; on the other hand, it feels a bit messy to access the SessionContext inside the domain object.
So how do you best handle session data like this in DDD/CQRS apps? Is it OK to access the SessionContext in the domain objects, or should I use other methods, like event enrichment, to add this information to the events emitted from the domain?
If tracking the user who initiated the change occurs frequently, then the SessionContext becomes an inherent part of the solution and is therefore, IMO, the path of least resistance (a good-enough solution). Perhaps rewording it to UserContext would make it sound less like a "dirty" technical coupling? :)
I often use a thread-bound context in my applications (both event-sourced and not), and if your SessionContext.currentUser throws an exception when no SessionContext has been bound to the thread, it can also help spot bugs during testing (at least it has for me).
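A minimal sketch of such a thread-bound context in Java, failing fast when unbound. The SessionContext class here is an illustrative stand-in, not a framework type:

```java
// Sketch: a ThreadLocal-backed session context that throws when unbound,
// so a missing binding surfaces immediately in tests.
public class SessionContextSketch {
    static final class SessionContext {
        private static final ThreadLocal<String> CURRENT_USER = new ThreadLocal<>();

        static void bind(String userId) { CURRENT_USER.set(userId); }
        static void unbind() { CURRENT_USER.remove(); }

        static String currentUser() {
            String user = CURRENT_USER.get();
            if (user == null) throw new IllegalStateException("no session bound to this thread");
            return user;
        }
    }

    static boolean demo() {
        SessionContext.bind("user-7");
        boolean bound = SessionContext.currentUser().equals("user-7");
        SessionContext.unbind();
        try {
            SessionContext.currentUser();   // should fail fast
            return false;
        } catch (IllegalStateException expected) {
            return bound;
        }
    }

    public static void main(String[] args) {
        assert demo();
    }
}
```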
The alternative would be to mark the event as requiring user tracking (e.g. using an interface) and then enrich the event afterwards. This just feels a bit more cumbersome to me and can make troubleshooting harder, since an unbound-SessionContext exception will occur outside the business function requiring the user information.
Both solutions are IMO good enough solutions, so it's mostly a matter of where you want the coupling to the SessionContext.
I prefer to keep my domain models completely ignorant of outside details. If your domain objects require the user id to enforce business rules, I would use your current approach and pass the User as an argument. If you only need the user id for tracking/auditing purposes, you can enrich the event.
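A sketch of that enrichment option: the domain emits the event without user data, and the infrastructure stamps the current user on before persisting. The wrapper types and store are illustrative names, not from the original code:

```java
import java.util.*;

// Sketch: enrich events with the user id at the persistence boundary,
// keeping the domain event itself free of session concerns.
public class EnrichmentSketch {
    record OrderRenamed(String newName) {}
    record StoredEvent(Object payload, String userId) {}

    static class AuditingEventStore {
        final List<StoredEvent> log = new ArrayList<>();
        final String currentUser;
        AuditingEventStore(String currentUser) { this.currentUser = currentUser; }

        // Enrich at the boundary: the domain never sees the session.
        void save(Object domainEvent) {
            log.add(new StoredEvent(domainEvent, currentUser));
        }
    }

    static String demo() {
        AuditingEventStore store = new AuditingEventStore("user-42");
        store.save(new OrderRenamed("a new name"));  // domain event, no user inside
        return store.log.get(0).userId();
    }

    public static void main(String[] args) {
        assert demo().equals("user-42");
    }
}
```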