I'm learning about microservices.
On one hand, the literature recommends using asynchronous event-publishing for microservices that need to collaborate on sagas or take action on events published by other services.
On the other hand, the same literature recommends not using a shared library to define common events because that couples the microservices through that event library.
Am I taking crazy pills? Aren't those microservices coupled by those events anyway if they rely on them? If so, what is the advantage of coding the exact same events with the same definition in two (or even more) different places? Isn't that a total violation of the DRY principle?
I'm starting to smell a code smell that starts with the initials BS. Will someone help me drink the rest of this Kool-Aid? Or did I just see the emperor with his clothes off for a second?
If so, what is the advantage of coding the exact same events with the same definition in two (or even more) different places?
There could be a number of advantages -- the microservices might be implemented in different languages. Or in the same language, but with different in-memory representations of the data to suit their specific needs. Or even with the "same" in-memory representations, but different versions, because they are on different deployment schedules.
There's nothing inherently wrong with sharing the labor of preparing a messaging library among the implementations of your services. But that should be an opt-in, rather than being a requirement. In particular, a team always has the option of replacing the library if the shared implementation is getting in the way.
Two services that agree that the messages are going to use UTF-8 encoded JSON documents should not be required to use the same parser -- the choice of parser is an implementation detail. The coupling is to the schema (the agreement about the semantics of the bytes in the message), not to the implementation.
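To make that concrete, here's a minimal sketch (the OrderPlaced event, its fields, and the choice of Jackson versus Gson are my assumptions, not anything from the question) of two services that agree only on the bytes, each with its own local type and its own parser:

// Shared agreement (the schema): a UTF-8 JSON document such as
//   {"orderId":"42","amount":100}
// No code is shared between the two services below.

// --- Service A, in its own codebase, parsing with Jackson ---
import com.fasterxml.jackson.databind.ObjectMapper;

public class ServiceA {
    public static class OrderPlaced {   // A's own in-memory representation
        public String orderId;
        public int amount;
    }

    OrderPlaced handle(String json) throws Exception {
        return new ObjectMapper().readValue(json, OrderPlaced.class);
    }
}

// --- Service B, in its own codebase, parsing with Gson ---
import com.google.gson.Gson;

public class ServiceB {
    static class OrderPlaced {          // B's own, independent representation
        String orderId;
        int amount;
    }

    OrderPlaced handle(String json) {
        return new Gson().fromJson(json, OrderPlaced.class);
    }
}

Either side could swap its parser or reshape its local type tomorrow without the other side noticing, because the only coupling is to the schema.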
If you treat events as plain data objects, you don't need a library to deal with them - other than generic messaging and serialization/deserialization code.
The whole point of microservices is to have independent development cycles, so as soon as you introduce a common library, you are starting to build a "distributed monolith". Any change in this library will cause a redeployment of all microservices.
Without an event-specific library, the only dependency you introduce is knowledge of the structure of particular events from another microservice. Well, that is a necessary evil.
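And that knowledge can be kept minimal. As a hedged sketch (the event name, fields, and the use of Jackson are my own choices for illustration): a consumer can declare only the part of the event structure it actually needs and ignore the rest.

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.ObjectMapper;

// The producer might publish {"subjectId":"s-1","name":"Alice", ...}.
// This consumer only needs subjectId, so that is the only structure it knows.
@JsonIgnoreProperties(ignoreUnknown = true)
class SubjectRegisteredView {
    public String subjectId;
}

class Consumer {
    SubjectRegisteredView read(String json) throws Exception {
        return new ObjectMapper().readValue(json, SubjectRegisteredView.class);
    }
}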
I have flicked through a few popular Event Sourcing frameworks written in a variety of common languages. I got the impression that all of them affect the domain model to a really high degree. As far as I understand, ES is just an infrastructure concern - a way of persisting aggregate state. Of course, it facilitates message-driven inter-context integration, but from the core domain's point of view it is negligible. I consider commands and events to be part of the domain itself, so it looks perfectly fine that an aggregate creates events (but does not publish them) or handles commands.
The problem is that all of the DDD building blocks tend to be polluted by the ES framework. Events must inherit from some base class. Aggregates at least are supposed to implement foreign interfaces. I wonder if domain models should even be aware of using an ES approach within an application. In my opinion, even the necessity of providing apply() methods indicates that another layer is shaping our domain.
How do you approach this issue in your projects?
My answer applies only when CQRS is involved (write and read models are split and they communicate using domain events).
As far as I understand, ES is just an infrastructure concern - a way of persisting aggregate state
Event sourcing is indeed an infrastructure concern - a kind of repository, but event-based - whereas Aggregates are not. I consider them to be an architectural style, different from the classical style.
So the fact that an Aggregate, in reaction to a command, generates zero or more domain events that are applied onto itself in order to build its internal (private) state - the state used to decide what events to generate in the future - is just a different way of thinking about and designing an Aggregate. This is a perfectly valid style, alongside the classical style (the one not using events, only objects) or a functional programming style.
Event sourcing just means that every time a command reaches an Aggregate, its entire internal state is rebuilt from the events instead of being loaded from flat persistence. Of course there are other huge advantages (!), but they do not affect the design of an Aggregate.
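To illustrate the style (this is just a sketch; the Subject aggregate, its command method, and its event are names I made up):

import java.util.List;

// A domain event: a plain data object.
record SubjectRegistered(String subjectId, String name) {}

// An event-based aggregate: command methods generate events; apply methods
// build the internal (private) state from those events.
class Subject {
    private String subjectId; // internal state, built only from events

    // Command method: uses current state to decide which events to generate.
    List<SubjectRegistered> register(String id, String name) {
        if (subjectId != null) {
            throw new IllegalStateException("already registered");
        }
        return List.of(new SubjectRegistered(id, name));
    }

    // Apply method: the only place where state changes.
    void apply(SubjectRegistered event) {
        this.subjectId = event.subjectId();
    }

    // Event sourcing: rebuild the entire state by replaying history.
    static Subject from(List<SubjectRegistered> history) {
        Subject subject = new Subject();
        history.forEach(subject::apply);
        return subject;
    }
}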
... but does not publish them ...
I like the frameworks that permit us to just return the events (or, better, yield them - an Aggregate's command methods are just generators!).
Events must inherit from some base class
It's sad that some frameworks require that, but it is not necessary. In general, a framework needs some means of detecting an event class. However, it can be implemented to detect an event by other means instead of using marker interfaces. For example, the client (as in YOU) could provide a filter method that rejects non-event classes.
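A sketch of that idea (the EventDispatcher type and its API are invented for illustration; no particular framework works exactly this way):

import java.util.function.Predicate;

// Instead of requiring a marker interface, the framework accepts a
// client-supplied filter that decides what counts as an event class.
class EventDispatcher {
    private final Predicate<Class<?>> isEvent;

    EventDispatcher(Predicate<Class<?>> isEvent) {
        this.isEvent = isEvent;
    }

    void publish(Object message) {
        if (!isEvent.test(message.getClass())) {
            return; // reject non-event classes
        }
        // ... route the event to its subscribers ...
    }
}

class Client {
    public static void main(String[] args) {
        // The client (as in YOU) supplies the filter, e.g. a naming convention:
        EventDispatcher dispatcher =
                new EventDispatcher(c -> c.getSimpleName().endsWith("Event"));
        dispatcher.publish(new Object()); // silently rejected: not an event
    }
}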
However, there is one thing that I couldn't avoid in my framework (yes, I know, I'm guilty, I have one): the Command interface with only one method: getAggregateId.
Aggregates at least are supposed to implement foreign interfaces.
Again, as with events, this is not a necessity. A framework could be given a custom, client-provided event-applier-on-aggregates function, or a convention can be used (e.g. all event-applier methods have the form applyEventClassNameOrType).
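A minimal sketch of the convention-based variant, matching appliers by parameter type via reflection (the class and method names are my own):

import java.lang.reflect.Method;

// Applies an event to an aggregate by convention: any method named "apply"
// with a single parameter assignable from the event's type. The aggregate
// implements no framework interface at all.
class ConventionEventApplier {
    void apply(Object aggregate, Object event) {
        for (Method method : aggregate.getClass().getDeclaredMethods()) {
            if (method.getName().equals("apply")
                    && method.getParameterCount() == 1
                    && method.getParameterTypes()[0].isAssignableFrom(event.getClass())) {
                try {
                    method.setAccessible(true);
                    method.invoke(aggregate, event);
                    return;
                } catch (ReflectiveOperationException e) {
                    throw new IllegalStateException(e);
                }
            }
        }
        // No matching applier found: ignore or fail fast, per framework policy.
    }
}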
I wonder if domain models should even be aware of using an ES approach within an application
Of ES, no, but of being event-based, YES, so the apply method must still exist.
As far as I understand, ES is just an infrastructure concern - a way of persisting aggregate state.
No, events are really core to the domain model.
Technically, you could store diffs in a domain-agnostic way. For example, you could look at an aggregate and say "here is the representation before the change, here is the representation after; we'll compute the difference and store that."
The difference between patches and events is the fact that you switch from a domain agnostic spelling to a domain specific spelling. Doing that is normally going to require being intimate with the domain model itself.
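To illustrate the two spellings side by side (the account example and the diff format are assumptions of mine):

// Domain-agnostic spelling: a diff computed from before/after snapshots.
// It records *what* changed, but not why.
//   { "path": "/balance", "before": 100, "after": 90 }

// Domain-specific spelling: an event named in the language of the domain.
record CashWithdrawn(String accountId, int amount) {}
// new CashWithdrawn("acct-7", 10) describes the same state change, but it
// captures the intent -- which is exactly what requires intimacy with the
// domain model.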
The problem is that all of the DDD building blocks tend to be polluted by the ES framework.
Yup, there's a lot of crap framework code in the examples you find in the wild. Sturgeon's Law at work.
Thinking about the domain model from a functional perspective can help a lot. At its core, the most general form of the model is a function that accepts current state as an input and returns a list of events as the output.
List<Event> change(State current)
From there, if you want to save current state, you just wrap this function in something that knows how to do the fold
State current = ...;                          // load or construct current state
List<Event> events = change(current);         // decide what happened
State updated = State.fold(current, events);  // apply the new events
Similarly, you can get current state by folding over the previous history
List<Event> savedHistory = ...;               // events loaded from the store
State current = State.reduce(savedHistory);   // fold over the previous history
List<Event> events = change(current);         // decide what happened
State updated = State.fold(current, events);  // apply the new events
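In case it helps, here is a sketch of what fold and reduce could look like (State and Event are the placeholders from the snippets above; the bodies are my assumptions):

import java.util.List;

interface Event {}

class State {
    // Placeholder: a real model would return an updated copy here.
    State apply(Event event) {
        return this;
    }

    // fold: start from a known state and apply each new event in order.
    static State fold(State current, List<Event> events) {
        State s = current;
        for (Event e : events) {
            s = s.apply(e);
        }
        return s;
    }

    // reduce: fold over the entire history, starting from the initial state.
    static State reduce(List<Event> history) {
        return fold(new State(), history);
    }
}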
Another way of saying the same thing: the "events" are already there in your (not event sourced) domain model -- they are just implicit. If there is business value in tracking those events, then you should replace the implementation of your domain model with one that makes those events explicit. Then you can decide which persisted representation to use independent of the domain model.
The core of my problem is that my domain Event inherits from the framework's Event, and my aggregate implements some foreign interface (from the framework). How do I avoid this?
There are a couple of possibilities.
1) Roll your own: take a close look at the framework -- what is it really buying you? If your answer is "not much", then maybe you can do without it.
From what I've seen, the "win" of these frameworks tends to be in taking a heterogeneous collection of events and managing the routing for you. That's not nothing -- but it's a bit magic, and you might be happier having that code explicit, rather than relying on implicit framework magic.
2) Suck it up: if the framework is unobtrusive, then it may be more practical to accept the tradeoffs that it imposes and live with them. To some degree, event frameworks are like object-relational mappers or databases; sure, in theory you should be able to swap them out freely. In practice, how often do you derive benefit from the investment in that flexibility?
3) Interfaces: if you squint a little bit, you can see that your domain behaviors don't usually depend on in-memory representations, but instead on the algebra of the domain itself.
For example, in the domain model, we deposit Money into an Account updating its Balance. We don't typically care whether those are integers, or longs, or floats, or JSON documents. We can satisfy the model with any implementation that satisfies the constraints of the algebra.
So you can use the framework to provide the implementation (which also happens to have all the hooks the framework needs); the behavior just interacts with the interface the domain defined for itself.
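A rough sketch of that arrangement (Account, Money, Balance, and the framework class in the final comment are all hypothetical):

// The domain defines the algebra it needs; no framework types appear here.
interface Money {}
interface Balance {}

interface Account {
    Balance deposit(Money amount);
    Balance balance();
}

// Domain behavior interacts only with the interface.
class DepositService {
    Balance handle(Account account, Money amount) {
        return account.deposit(amount);
    }
}

// Elsewhere, a framework-backed class satisfies the algebra (and carries
// whatever base classes and hooks the framework insists on), e.g.:
// class FrameworkAccount extends SomeFrameworkAggregate implements Account { ... }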
In a strongly typed implementation, this can get really twisty. In Java, for instance, if you want the strong type checks you need to be comfortable with the magic of generics and type erasure.
The real answer to this is that DDD is overrated. It is not true that you have to have one model to rule them all. You may have different views on the state of your world, depending on your current needs. One part of the application has one view; another part, a completely different view.
To put it another way, your model is not "what is", but "what happened so far". The actual data model of your application is the event stream itself. Everything else you derive from there.
I'm breaking my system into (at least) two bounded contexts: study-design and survey-planning.
There's a concept named "subject" (potential subject for interviewing) in the study-design context. We also maintain associations between subjects and populations in that domain.
Now, in survey-planning, we also need (some) information about the subject (for example: for planning a visit, or even for the anticipated selection of a questionnaire, in case the population the subject belongs to is known beforehand).
So, I need that "subject" in both contexts.
What approach should I pick? Having a shared kernel, as explained in Eric Evans' DDD book? I don't mind (at least for now) having the two contexts share the same database.
Or... should I go pure microservice? Meaning: those two can't / shouldn't share a database, and in that case I might have to go the mirroring / duplicating route through event passing: https://www.infoq.com/news/2014/11/sharing-data-bounded-contexts
Any thoughts on which one is better, for the above situation?
Thanks!
The context for microservices is distributed systems. In any other situation they would probably be overkill. A shared kernel will eventually split. That is usually the case. You may start from it. Nothing wrong with that. However, it will not stay there.
I recommend choosing an event-driven solution, but not necessarily using microservices. You could build an event-driven monolith in order to spend much less time synchronizing the two models. When the application grows too big, you then split the monolith into microservices. You could use CQRS to split the models even further into write and read. If you use event sourcing, things get even more interesting.
In my experience with a shared kernel, the models become god objects, one-size-fits-all kinds of objects.
In my opinion, you have three entities:
study
survey
person
It is pretty intuitive to see that each of these is its own aggregate root. So then we are talking about inter-root relationships. In my experience, those are meaningful entities in their own right, and the cleanest and most future-proof approach by far is to treat those relationships as independent aggregate roots.
The relationship between a study and a person is perhaps called TestSubject, and the relationship between a person and a survey could be called Interviewee or something similar. In another context, the person could be an employee of a company, and then Employee would be its own aggregate root. Information that relates only to the relationship, and not to the person or the study, say, should be limited to this relationship-specific aggregate root. This could for instance be the start date at which the subject started to take part in the test, and the end date (if he or she dropped out prematurely, etc.).
As for storage, all aggregate roots should define their own separate repositories as interfaces and know only those interfaces; the implementations of those interfaces are free to use the same database or different ones, or even different kinds, local or distributed, etc. So this holds for these 'relational' aggregate roots as well (see the sketch below). But you should almost force yourself to use different databases, and preferably even different technologies (e.g. one Entity Framework, the other MongoDB), when you start with this, to force yourself to make sure your interfaces are properly defined and independent of the implementation.
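A minimal sketch of such a relationship aggregate root and its repository interface (all names and fields are hypothetical, extrapolated from the TestSubject example above):

import java.time.LocalDate;
import java.util.Optional;
import java.util.UUID;

// The relationship itself is an aggregate root: it holds only data that
// belongs to the relationship, not to the person or the study.
class TestSubject {
    private final UUID id;
    private final UUID studyId;     // reference to the Study root, by identity
    private final UUID personId;    // reference to the Person root, by identity
    private LocalDate startedOn;
    private LocalDate droppedOutOn; // null unless the subject dropped out

    TestSubject(UUID id, UUID studyId, UUID personId, LocalDate startedOn) {
        this.id = id;
        this.studyId = studyId;
        this.personId = personId;
        this.startedOn = startedOn;
    }

    void dropOut(LocalDate date) {
        this.droppedOutOn = date;
    }
}

// The root knows only this interface; the implementation is free to use
// any database or technology.
interface TestSubjectRepository {
    Optional<TestSubject> findById(UUID id);
    void save(TestSubject subject);
}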
And yes, I'm a big fan of CQRS here as well, and of Event/Command Sourcing too. There are many lightweight implementations possible that allow you to scale up, but are very easy to get into and afford you almost completely linear (= predictable) complexity.
You can start with microservices that share a monolithic data source, but only use partial domain entities and value objects.
I'm new to UML and I have come across sequence diagrams, and realized that there are two types: distributed and centralized. Can anyone explain the differences to me?
Centralized control, with one participant doing most of the processing and the other participants there to supply data.
Example: (sequence diagram not reproduced here)
Distributed control, in which the processing is split among many participants, each one doing a little bit of the algorithm.
Example: (sequence diagram not reproduced here)
Both styles have their strengths and weaknesses. Most people, particularly those new to objects, are more used to centralized control. In many ways, it’s simpler, as all the processing is in one place; with distributed control, in contrast, you have the sensation of chasing around the objects, trying to find the program.
Despite this, object bigots like me strongly prefer distributed control. One of the main goals of good design is to localize the effects of change. Data and behavior that accesses that data often change together. So putting the data and the behavior that uses it together in one place is the first rule of object-oriented design.
Furthermore, by distributing control, you create more opportunities for using polymorphism rather than using conditional logic. If the algorithms for product pricing are different for different types of product, the distributed control mechanism allows us to use subclasses of product to handle these variations.
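For instance, a sketch of that pricing example (the Product subclasses and their rules are invented for illustration):

// With distributed control, each product type carries its own pricing
// logic, so adding a new type means adding a subclass, not another branch
// in a central conditional.
abstract class Product {
    abstract double price();
}

class StandardProduct extends Product {
    private final double basePrice;
    StandardProduct(double basePrice) { this.basePrice = basePrice; }
    double price() { return basePrice; }
}

class DiscountedProduct extends Product {
    private final double basePrice;
    DiscountedProduct(double basePrice) { this.basePrice = basePrice; }
    double price() { return basePrice * 0.9; } // 10% off, for illustration
}

// Centralized control would instead switch on a product-type field here.
class PriceCalculator {
    double total(java.util.List<Product> products) {
        return products.stream().mapToDouble(Product::price).sum();
    }
}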
I would like to have one or more libraries of reusable classes that are basically value objects, such as Address, PhoneNumber, EmailAddress, containing mostly properties and a few supporting methods. How can my Domain Layer use these without breaking the rule that the Domain Layer should not contain external references, and without defining them as interfaces/abstract classes in the Domain Layer?
... without breaking the rule that the Domain Layer should not contain external references
I think your definition of 'external references' requires some reevaluation. It is hard to imagine a domain layer that does not reference anything. In C# and Java you will reference at least basic numeric types, dates, and strings. I also don't see any harm in referencing external libraries like Noda Time/Joda-Time. On the other hand, you of course would not want to reference any heavy technical libraries for persistence, communication, UI, etc.
So I would say that you can build your own reusable library referenced from the domain, but it requires very careful consideration, and it is often not worth the coupling that it will create. I would use the following criteria for every type (a sketch of a type that passes them follows the list):
Should be context-independent. EmailAddress, for example, is relatively independent of the context it is used in. Address, on the other hand, may have a different meaning depending on the Bounded Context.
Should be stable (does not change often).
Should not hide any out-of-process communication (db, network etc)
Should not have any dependencies of its own (other than standard Java/C#)
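Here is a minimal sketch of a value object that passes these criteria (the validation shown is deliberately simplistic and my own assumption):

// Context-independent, stable, no out-of-process calls, no dependencies
// beyond the standard library.
final class EmailAddress {
    private final String value;

    EmailAddress(String value) {
        // Simplistic structural check, for illustration only.
        if (value == null || !value.contains("@")) {
            throw new IllegalArgumentException("not an email address: " + value);
        }
        this.value = value;
    }

    String value() { return value; }

    @Override
    public boolean equals(Object o) {
        return o instanceof EmailAddress other && value.equals(other.value);
    }

    @Override
    public int hashCode() { return value.hashCode(); }
}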
I think that what you're referring to is a shared kernel.
Shared Kernel – This is where two teams share some subset of the domain model. This shouldn’t be changed without the other team being consulted.
While this looks great at first, since we are drilled not to repeat ourselves, be aware of the pitfalls:
The concepts should have the same meaning in every context. Some of these concepts hold subtle nuances depending on the context. Ask your domain expert.
Changes are more expensive; it might be cheaper to duplicate these few classes so that you can change them on your own than to have to consult multiple teams when something changes.
Stability cuts both ways. If you pull an entity out into each domain, then any changes have to be executed across multiple projects. If you don't, then changes have to be coordinated across multiple domains. The logistics of the former are easier than the latter, but the work involved in the latter can be greater. Either way, you have to test the changes on each platform.
And unless the entity is mature, with relatively well-defined semantics, my experience is that almost everything changes. So stability is nice, but might be a bit of a red herring.
That being said, I like (and +1) @Dmitry's answer.
I'll admit that I am still quite a newbie with DDD and even more so with CQRS. I also realize that DDD and/or CQRS might not be the right approach to every problem. Nevertheless, I like the principles but have some questions in the context of a current project.
The solution is a simulator that generates performance data based on the current configuration. Administrators can create and modify the specifications for simulations. Testers set some environmental conditions and run the simulator. The results are captured, aggregated and reported.
The solution consists of 3 component areas, each with its own use cases, domain logic, and supporting data structures. As a result, a modular design seems appealing as a way to segregate logic and separate concerns.
The first area would be the administrative aspect which allows users to create and modify the specifications. This would be a CRUD heavy 'module'.
The second area would be for executing the simulations. The domain model would be similar to the first area but optimized for executing the simulation as opposed to providing a convenient model for editing.
The third area is reporting.
From this I believe that I have three Bounded Contexts, yes? I have three clear entry points into the application, three sets of domain logic, and three different data models to support the domain logic.
My first instinct is to follow these lines and create three modules (assemblies) that encapsulate the domain layer for each area. Should I also have three separate databases? Maybe more than three to support write versus read?
I gather this may be preferred for CQRS but am not sure how to go about it. It appears to me that CQRS suggests a set of back-end processes that move data around. But if that's the case, and data persistence is cross-cutting (as DDD suggests), then doesn't my data access code need awareness of all of the domain objects? If so, then is there a benefit to having separate modules?
Finally, something I failed to mention earlier is that specifications are considered 'drafts' until published, which makes them available for simulation. My PublishingService needs to have knowledge of the domain model for both the first and second areas so that, when it responds to the SpecificationPublishedEvent, it can read the specification, translate the model, and persist it for execution. This makes me think I don't have three bounded contexts after all. Or am I missing something in my analysis?
You may have a modular UI for this, but I don't necessarily see three separate domains in what you are describing.
First off, in CQRS, reporting is not directly a domain model concern; it is a facet of the separated Read Model, which takes on the responsibility of presenting the domain state optimized for reporting.
Second, just because you have different things happening in the domain is not necessarily a reason to bound them away from each other. I'd take a read through the blue DDD book to get a bit better feel for what BCs look like.
I don't really understand your domain well enough but I'll try to give some general suggestions.
Start with where you talked about your PublishingService. I see a Specification aggregate root which takes a few commands that probably look like CreateNewSpecification, UpdateSpecification and PublishSpecification.
The events look similar and probably feel redundant: SpecificationCreated, SpecificationUpdated, SpecificationPublished. Which kind of sucks, but a CRUD-heavy model doesn't have very interesting behaviors. I'd also suggest finding an automated way to deal with model/schema changes on this aggregate, which will be tedious if you don't use code generation, or handling the changes in a dynamic way that doesn't require you to build new events each time.
Also, you might just consider not using event sourcing for such an aggregate root, since it is so CRUD-heavy.
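For illustration, a rough sketch of that command/event shape (only the command and event names come from the discussion above; the fields and behavior are my assumptions):

import java.util.List;

// Command (name from the discussion above; fields are assumptions).
record PublishSpecification(String specificationId) {}

// Event.
record SpecificationPublished(String specificationId) {}

// A CRUD-heavy aggregate: the behavior is thin, which is the hint that
// event sourcing may not buy much here.
class Specification {
    private final String id;
    private boolean published;

    Specification(String id) { this.id = id; }

    List<Object> handle(PublishSpecification cmd) {
        if (published) {
            return List.of(); // already published; nothing new happened
        }
        published = true;
        return List.of(new SpecificationPublished(id));
    }
}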
The second thing you describe seems to be about starting a simulation which will run based on a Specification and produce data during that simulation (I assume). An event-driven architecture makes sense here, to decouple updating the reporting data from the process that is producing the data. This has huge benefits if you are producing large amounts of data to process.
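A hedged sketch of that decoupling (the event type, the in-process queue, and the read-model update are all invented for illustration; in production this would more likely be a message broker):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// An event produced while the simulation runs (hypothetical shape).
record DataPointProduced(String simulationId, double value) {}

class SimulationEventPipeline {
    // The simulation writes events here; the reporting side drains them.
    private final BlockingQueue<DataPointProduced> queue = new LinkedBlockingQueue<>();

    void publish(DataPointProduced event) {
        queue.add(event);
    }

    // Runs independently of the simulation, keeping the reporting data
    // (the read model) up to date without slowing the producer.
    void runReportingUpdater() throws InterruptedException {
        while (true) {
            DataPointProduced e = queue.take();
            // ... aggregate e.value() into the reporting store ...
        }
    }
}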
However, it doesn't sound like a Simulation is necessarily the kind of AR that would benefit from Event Sourcing either, for a couple of reasons:
Simulation really takes only one Command which is something like StartSimulation
Simulation then produces events over its lifetime which represent what is happening internally with the simulation
Simulation doesn't seem to ever receive any other Commands that could depend on the current state of the Simulation
Simulation is not interacted with by multiple clients/users simultaneously, and, as we pointed out, it isn't really interacted with at all
In general, domain modeling is very specific to each individual project so it's hard to give you all the information you need to build your domain model. It will come as a result of spending a great deal of time trying to understand your user's needs and the problem they are trying to solve with the software. It likely will go through multiple refinements as you develop insights into their process.