Relationship between upstream context and downstream context in DDD

Recently I have been learning about DDD, and it says that the relationship between two related bounded contexts is upstream/downstream.
But is it possible that in one situation A is upstream and B is downstream, while in another situation B is upstream and A is downstream?
If that is possible, I think the two bounded contexts are highly coupled. They are not independent business logic. So when that happens, does it mean that we did not divide the domain into bounded contexts correctly?
Or do we allow some degree of communication between the two bounded contexts, and if they call so many of each other's APIs, are they actually one bounded context that we failed to divide correctly?

An upstream context will influence its downstream counterpart, while the opposite might not be true.
For instance, imagine two microservices as bounded contexts: MoneyTransferService and NotificationService. When money is transferred, NotificationService should send the customer an email that includes some information about the transaction. So
MoneyTransferService is upstream
NotificationService is downstream
DDD describes several organizational patterns that help us describe and/or manage the way different contexts interact. The most suitable pattern here is called Anti-Corruption Layer (ACL).
To follow this pattern in my example, the two microservices could communicate through a Repository layer, or, better, by publishing messages and consuming them with a tool like RabbitMQ. With RabbitMQ, each service depends only on the message type and needs to know nothing else about the other service.
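To make that concrete, here is a minimal sketch in C# using the RabbitMQ.Client package; the MoneyTransferred record, the money.transferred exchange, and the helper class are invented for illustration, not taken from the question:

    using System;
    using System.Text.Json;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    // The only contract shared between the two services is the message shape.
    public record MoneyTransferred(Guid TransactionId, decimal Amount, string CustomerEmail);

    public static class Messaging
    {
        // Publisher side: MoneyTransferService emits an event once a transfer completes.
        public static void Publish(IModel channel, MoneyTransferred evt)
        {
            channel.ExchangeDeclare(exchange: "money.transferred", type: ExchangeType.Fanout);
            var body = JsonSerializer.SerializeToUtf8Bytes(evt);
            channel.BasicPublish(exchange: "money.transferred", routingKey: "", basicProperties: null, body: body);
        }

        // Consumer side: NotificationService subscribes and reacts by sending the email.
        public static void Subscribe(IModel channel)
        {
            channel.ExchangeDeclare(exchange: "money.transferred", type: ExchangeType.Fanout);
            var queue = channel.QueueDeclare().QueueName;
            channel.QueueBind(queue: queue, exchange: "money.transferred", routingKey: "");

            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (_, ea) =>
            {
                var evt = JsonSerializer.Deserialize<MoneyTransferred>(ea.Body.Span);
                // compose and send the email from evt here; nothing else about
                // MoneyTransferService is known to this service
            };
            channel.BasicConsume(queue: queue, autoAck: true, consumer: consumer);
        }
    }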
As far as dependency is concerned, interaction between bounded contexts doesn't mean there is a hard dependency between them, and you don't necessarily need to redesign them as a single bounded context.
Your goal should be to get to the most meaningful separation guided by your domain knowledge. The emphasis isn't on the size, but instead on business capabilities. In addition, if there's clear cohesion needed for a certain area of the application based on a high number of dependencies, that indicates the need for a single bounded context, too.
There is nothing wrong with communications between bounded contexts which need each other to complete their business operations.

Related

Which project in a solution to add a Domain Service that spans two aggregates?

I currently have two projects within one Visual Studio solution. Each project represents a different aggregate. I need to add a domain service that interacts with the two aggregate roots. Which project should I add it to? Does it matter?
If your aggregate roots both belong to the same bounded context then they should probably be in the same project; otherwise the domain service could go in another project that references the two aggregate-root projects, but that is going to become unwieldy quite quickly. A domain project per bounded context should suffice.
However, if the two aggregate roots are in separate bounded contexts then the "easiest" approach would be to use some form of messaging and have a process manager in an orchestration layer handle the interaction between the various bounded-context endpoints. For this I usually have BC-specific orchestration endpoints and BC-specific "functional" endpoints, where a functional endpoint handles BC-specific functions. A BC-specific orchestration endpoint, however, contains the BC-specific process managers and typically interacts with the functional endpoints of whichever BC it requires a service from.
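As an illustration of that last idea, here is a minimal process-manager sketch in C# (the message and class names are hypothetical): it reacts to an event from one BC by sending a command to another, so the two bounded contexts never reference each other directly.

    using System;

    // Hypothetical messages, each owned by its bounded context.
    public record OrderSubmitted(Guid OrderId, Guid CustomerId); // event from the Ordering BC
    public record ReserveStock(Guid OrderId);                    // command for the Inventory BC

    // Lives in the orchestration endpoint; it owns the cross-BC workflow.
    public class OrderFulfilmentProcess
    {
        private readonly Action<object> _send; // stand-in for your message bus

        public OrderFulfilmentProcess(Action<object> send) => _send = send;

        public void Handle(OrderSubmitted evt)
        {
            // react to the Ordering BC's event by instructing the Inventory BC
            _send(new ReserveStock(evt.OrderId));
        }
    }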
Imagine you're building an e-commerce app. In your case, when you create a product, the data should be decomposed by UI micro-controllers that belong to their bounded contexts (shipping, invoicing); like hyenas, they each tear off the information they are interested in and send it to their own bounded context's data store, for use in fulfilling their capabilities later. As the isolated shipping module, in order to calculate shipping cost I need the product weight, hence I should tear away that piece of information when the user inputs the data on the UI.
https://www.youtube.com/watch?v=hev65ozmYPI - Check out this link.

DDD, CQRS/ES & Microservices: should decisions be taken on a microservice's views or aggregates?

So I'll explain the problem through the use of an example as it makes everything more concrete and hopefully will reduce ambiguity.
The architecture is pretty simple:
1 microservice <=> 1 Aggregate <=> transactional boundary
Each microservice will be using the CQRS/ES design pattern, which implies:
Each microservice will have its own Aggregate mapping the domain of a real-world problem
The state of the aggregate will be rebuilt from an event store (see the sketch after this list)
Each event will signify a state change within the aggregate and will be transmitted to any service interested in the change via a message broker
Each microservice will be transactional within its own domain
Each microservice will be eventually consistent with other domains
Each microservice will build its own view models from events emitted by other microservices
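As a minimal illustration of the event-sourced rebuild mentioned above, a sketch in C# (the event types and account aggregate are invented for the banking example below, not taken from the poster's system):

    using System.Collections.Generic;

    public record Deposited(decimal Amount);
    public record Withdrawn(decimal Amount);

    public class CurrentAccount
    {
        public decimal Balance { get; private set; }

        // Replaying history: current state is a left-fold over the event stream.
        public static CurrentAccount Rebuild(IEnumerable<object> history)
        {
            var account = new CurrentAccount();
            foreach (var e in history) account.Apply(e);
            return account;
        }

        private void Apply(object e)
        {
            switch (e)
            {
                case Deposited d: Balance += d.Amount; break;
                case Withdrawn w: Balance -= w.Amount; break;
            }
        }
    }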
So for the example, let's say we have a banking system:
The current-account microservice is responsible for mapping the Customer's Current Account ... withdrawals, deposits
The rewards microservice will be responsible for inventory and stock-taking of any rewards offered by the bank
The air-miles microservice will be responsible for monitoring all the transactions coming from the current-account and, in doing so, awarding the Customer rewards from our rewards microservice
So the problem is this: should the air-miles microservice take decisions based on its own view model, which is being updated from events coming from the current-account, and similarly when picking which reward it should give out to the Customer?
Drawbacks of taking decisions on local view models:
Replicating domain logic on how to maintain these views
Bugs within the view might cause the wrong rewards to be given out
State changes (aka events emitted) on corrupted view models could have consequences in other services which are taking their own decisions on these events
Advantages of taking a decision on local view models:
The system doesn't need to constantly query the microservice owning the domain
The system should be faster and less resource intense
Or should it use the events coming from the service to trigger queries to the Aggregate owning the domain? In doing so we accept that view models might get corrupted, but the final decision would always be checked against the aggregate owning the domain.
Please note that the above is simply my understanding of the architecture, and the aim of this post is to get different views on how one might use this architecture effectively in a microservice environment, keeping each service decoupled yet avoiding cascading-corruption scenarios without too much chatter between the services.
So the problem is this: should the air-miles microservice take decisions based on its own view model, which is being updated from events coming from the current-account, and similarly when picking which reward it should give out to the Customer?
Yes. In fact, you should revise your architecture and even create more microservices. What I mean is that, in an event-driven architecture (and an event-sourced one at that), your microservices have two responsibilities: they need to keep two different models, the write model and the read model.
So for each Aggregate there should be a microservice that keeps only the write model, that is, one that only processes Commands, without also building a read model.
Then, for each read/query use case, you should have a microservice that builds the perfect read model. This is required if you want to keep the Aggregate microservice clean (as you should), because in general the read models need data from multiple Aggregate types/bounded contexts. Read models may cross bounded-context boundaries; Aggregates may not. So you see, you don't really have a choice if you want to fully respect DDD.
Some say that domain events should be hidden, local only to the owning microservice. I disagree. In an event-driven architecture the domain events are first-class citizens: they are allowed to reach other microservices. This gives the other microservices the chance to build their own interpretation of the system state. Otherwise, the emitting microservice would have the impossible additional responsibility of building a state that matches every possible need of every microservice that will ever exist(!). Say a microservice wants to look up a deleted remote entity's title; how could it do that if the emitting microservice keeps only the list of not-yet-deleted entities? You may say: then it will keep all the entities, deleted or not. But maybe someone needs the date an entity was deleted; you may say: then I'll also keep the deletedDate. You see what you are doing? You break the Open/Closed Principle: every time you create a microservice, you have to modify the emitting microservice.
There is also the resilience of the microservices to consider. In The Art of Scalability, the authors describe swim lanes: a strategy of separating the components of a system into lanes of failure. A failure in one lane does not propagate to other lanes. Our microservices are lanes; components in one lane are not allowed to access any component in another lane. One microservice going down should not bring the others down. It's not a matter of speed or optimisation; it's a matter of resilience. Domain events are the perfect way of keeping two remote systems synchronized. They also emphasize the fact that the data is eventually consistent: the events travel at a finite speed (from nanoseconds to even days). When a system is designed with that in mind, no other microservice can bring it down.
Yes, there will be some code duplication. And yes, although I said that you don't have a choice, you do have one: in order to reduce the code duplication at the cost of lower resilience, you can have some canonical read models that build a normal, flat state that other microservices can query. This is dangerous in most cases, as it breaks the swim-lane concept: should the canonical microservice go down, all the dependent microservices go down with it. Canonical microservices work best for CRUD-like bounded contexts.
There are, however, valid cases where you may have some internal events that you don't want to expose. In other words, you are not required to publish all domain events.
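A minimal sketch of such a read-model microservice in C# (the event and view types are invented for the banking example): the air-miles service keeps its own interpretation of the current-account events instead of querying the owning service.

    using System;
    using System.Collections.Generic;

    // Event published by the current-account microservice; its shape is the contract.
    public record TransactionPosted(Guid AccountId, decimal Amount, DateTime OccurredAt);

    // View model owned entirely by the air-miles microservice.
    public class AccountActivityView
    {
        public Guid AccountId { get; set; }
        public decimal TotalSpent { get; set; }
    }

    // Projection: the air-miles service's own interpretation of the system state,
    // built from the event stream with no synchronous call back upstream.
    public class AccountActivityProjection
    {
        private readonly Dictionary<Guid, AccountActivityView> _views = new();

        public void When(TransactionPosted e)
        {
            if (!_views.TryGetValue(e.AccountId, out var view))
                _views[e.AccountId] = view = new AccountActivityView { AccountId = e.AccountId };

            if (e.Amount < 0)
                view.TotalSpent -= e.Amount; // negative amounts are spending
        }
    }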
So the problem is this: should the air-miles microservice take decisions based on its own view model, which is being updated from events coming from the current-account, and similarly when picking which reward it should give out to the Customer?
Each consumer uses a local replica of a representation computed by the producer.
So if air-miles needs information from current-account it should be looking at a local replica of a view calculated by the current-account service.
The key idea is this: micro services are supposed to be isolated from one another; you should be able to redesign and deploy one without impacting the others.
So try this thought experiment: suppose we had these three microservices, but all saving snapshots of current state rather than events. Everything works. Then imagine that the current-account maintainers discover that an event-sourced implementation would serve the business better.
Should that change to the current-account require a matching change in the air-miles service? If so, can we really claim that these services are isolated from one another?
Advantages of taking a decision on local view models
I don't particularly like these "advantages". First, they are dominated by the performance axis (recall that the second rule of performance optimization is "not yet"). Second, they assume that the service boundaries are drawn correctly; maybe the performance issue is evidence that the separation of responsibilities needs review.

How many event stores should we use across multiple bounded contexts?

I am currently reading about DDD and I have not managed to find an answer to this question. If we have a large application with multiple bounded contexts, then as far as I know we should implement each BC as if it were a separate application. Thus it is logical to conclude that each BC has its own UI and event store. I previously thought that we had only a single event store, because according to some articles (about CQRS) it is the single source of truth. The only problem with these statements is that they lack context. So is an event store the single source of truth within a single bounded context, or within the entire application?
"Is an ES the single source of truth in a bounded context or in entire application?"
I guess you meant system, because Bounded Context is an application in the simplest explanation.
"If we have a large application with multiple bounded contexts"
You can't have multiple bounded contexts within the same model. Bounded context limits model. So you should change term bounded context for subdomain and it would be correct.
Anyway answering your question. It depends.
Single Event Store for whole system
Pros
One place to manage
It is easy to see related events by CorrelationID
In some systems, no need for service discovery: all services (applications) can integrate via the single ES (I am talking about a true ES, not mere data storage)
Less CPU/memory needed
Cons
Single point of failure (of course you can scale it to avoid such a situation)
You couple the services together (breaking a microservices rule)
You are obliged not to change the ES during the system's lifetime
One Event Store per application
Pros
No single point of failure
Deployed with application
No coupling between services. More autonomy
If an application is disabled, its ES can be disabled with it
New services can work with new versions or even a different ES
Cons
Additional databases to take care of and monitor
More CPU/RAM consumed
Harder to manage correlation IDs, because they are split across multiple ES
Some service discovery is needed, for subscribing to multiple ES, or an extra message queue is required

CQRS Commands and Queries - Do they belong in the domain?

In CQRS, do the Commands and Queries belong in the Domain?
Do the Events also belong in the Domain?
If that is the case, are the Command/Query Handlers just implementations in the infrastructure?
Right now I have it laid out like this:
Application.Common
Application.Domain
- Model
- Aggregate
- Commands
- Queries
Application.Infrastructure
- Command/Query Handlers
- ...
Application.WebApi
- Controllers that utilize Commands and Queries
Another question, where do you raise events from? The Command Handler or the Domain Aggregate?
Commands and Events can concern very different things: technical concerns, integration concerns, domain concerns...
I assume that if you ask about domain, you're implementing a domain model (maybe even with Domain Driven Design).
If this is the case I'll try to give you a really simplified response, so you can have a starting point:
Command: a business intention, something you want the system to do. Keep the definition of commands in the domain. Technically it is just a pure DTO. The name of a command should always be imperative: "PlaceOrder", "ApplyDiscount". A command is handled by exactly one command handler, and it can be discarded if it is not valid (however, you should do all possible validation before sending the command to your domain, so that it cannot fail).
Event: something that has happened in the past. For the business it is an immutable fact that cannot be changed. Keep the definition of domain events in the domain. Technically it's also a DTO. However, the name of an event should always be in the past tense: "OrderPlaced", "DiscountApplied". Events are generally pub/sub: one publisher, many handlers.
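In code, both are plain DTOs. A minimal sketch in C# (the properties are invented; only the names come from the answer above):

    using System;
    using System.Collections.Generic;

    // Command: imperative name, expresses an intention, handled by exactly one handler.
    public record PlaceOrder(Guid OrderId, Guid CustomerId, IReadOnlyList<Guid> ProductIds);

    // Domain event: past-tense name, an immutable fact, published to any number of subscribers.
    public record OrderPlaced(Guid OrderId, Guid CustomerId, DateTime PlacedAt);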
If that is the case are the Command/Query Handlers just implementations in the infrastructure?
Command Handlers are semantically similar to the application service layer. Generally, the application service layer is responsible for orchestrating the domain. It's often built around business use cases, for example "placing an order". Those use cases invoke business logic (which should always be encapsulated in the domain) through aggregate roots, querying, etc. It's also a good place to handle cross-cutting concerns like transactions, validation, security, etc.
However, an application layer is not mandatory. It depends on the functional and technical requirements and on the architectural choices that have been made.
Your layering seems correct. I would rather keep command handlers at the boundary of the system. If there is no proper application layer, a command handler can play the role of the use-case orchestrator. If you place it in the Domain, you won't be able to handle cross-cutting concerns very easily. It's a tradeoff: you should be aware of the pros and cons of your solution. It may work in one case and not in another.
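A sketch of a command handler in that use-case-orchestrator role, reusing the hypothetical PlaceOrder command from the earlier sketch (the repository and aggregate are stubs invented for illustration):

    using System;
    using System.Collections.Generic;

    public interface IOrderRepository { void Save(Order order); }

    public class Order
    {
        public Guid Id { get; }
        private Order(Guid id) => Id = id;

        // The business rules live here, in the domain, not in the handler.
        public static Order Place(Guid id, Guid customerId, IReadOnlyList<Guid> productIds)
            => new Order(id);
    }

    // Sits at the boundary of the system: a natural seam for cross-cutting
    // concerns (transactions, security, validation) around the domain call.
    public class PlaceOrderHandler
    {
        private readonly IOrderRepository _orders;

        public PlaceOrderHandler(IOrderRepository orders) => _orders = orders;

        public void Handle(PlaceOrder command)
        {
            var order = Order.Place(command.OrderId, command.CustomerId, command.ProductIds);
            _orders.Save(order);
        }
    }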
As for event handlers, I generally handle them in:
the Application layer, if the event triggers the modification of another Aggregate in the same bounded context, or if the event triggers some infrastructure service;
the Infrastructure layer, if the event needs to be dispatched to multiple consumers or to integrate with another bounded context.
Anyway, you should not blindly follow the rules. There are always tradeoffs, and different approaches can be found.
Another question, where do you raise events from? The Command Handler or the Domain Aggregate?
I do it from the domain aggregate root, because the domain is responsible for raising events.
There is a standing technical rule that you should not publish events if there was a problem persisting the changes to the aggregate, and vice versa. So I took the pragmatic approach used in Event Sourcing: my aggregate root has a collection of unpublished events. In the implementation of my repository I inspect that collection and pass the events to the middleware responsible for publishing them. This makes it easy to ensure that if an exception occurs while persisting an aggregate root, its events are not published. Some say that this is not the responsibility of the repository, and I agree, but what's the alternative? Awkward event-publishing code that creeps into your domain with all its infrastructure concerns (transactions, exception handling, etc.), or being pragmatic and handling it all in the Infrastructure layer? I've done both and, believe me, I prefer to be pragmatic.
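A condensed sketch of that pragmatic approach in C# (type names invented; OrderPlaced is the hypothetical event from the earlier sketch):

    using System;
    using System.Collections.Generic;

    public abstract class AggregateRoot
    {
        private readonly List<object> _unpublished = new();
        public IReadOnlyList<object> UnpublishedEvents => _unpublished;
        protected void Raise(object domainEvent) => _unpublished.Add(domainEvent);
        public void ClearUnpublishedEvents() => _unpublished.Clear();
    }

    public class OrderAggregate : AggregateRoot
    {
        public void Place(Guid orderId, Guid customerId)
        {
            // ...enforce invariants and mutate state, then let the domain raise the event
            Raise(new OrderPlaced(orderId, customerId, DateTime.UtcNow));
        }
    }

    public interface IEventPublisher { void Publish(object domainEvent); }

    public class OrderRepository
    {
        private readonly IEventPublisher _publisher; // infrastructure concern

        public OrderRepository(IEventPublisher publisher) => _publisher = publisher;

        public void Save(OrderAggregate aggregate)
        {
            Persist(aggregate); // if this throws, nothing gets published

            foreach (var e in aggregate.UnpublishedEvents)
                _publisher.Publish(e);
            aggregate.ClearUnpublishedEvents();
        }

        private void Persist(OrderAggregate aggregate) { /* ORM or event store call */ }
    }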
To sum up, there is no single way of doing things. Always know your business needs and technical requirements (scalability, performance, etc.), then make your choices based on them. I've described what I have generally done in most cases, and it has worked. It's just my opinion.
In some implementations, Commands and handlers are in the Application layer. In others, they belong in the domain. I've often seen the former in OO systems, and the latter more in functional implementations, which is also what I do myself, but YMMV.
If by events you mean Domain Events, well... yes I recommend to define them in the Domain layer and emit them from domain objects. Domain events are an essential part of your ubiquitous language and will even be directly coined by domain experts if you practise Event Storming for instance, so it definitely makes sense to put them there.
What I think you should keep in mind though is that no rule about these technical details deserves to be set in stone. There are countless questions about DDD template projects and layering and code "topology" on SO, but frankly I don't think these issues are decisive in making a robust, performant and maintainable application, especially since they are so context dependent. You most likely won't organize the code for a trading system with millions of aggregate changes per minute in the same way that you would a blog publishing platform used by 50 people, even if both are designed with a DDD approach. Sometimes you have to try things for yourself based on your context and learn along the way.
Commands and events are DTOs. You can have command handlers and queries in any layer or component. An event is just a notification that something changed. You can have all types of events: Domain, Application, etc.
Events can be generated by either the handler or the aggregate; it's up to you. However, regardless of where they are generated, the command handler should use a service bus to publish them. I prefer to generate domain events inside the aggregate root.
From a DDD strategic point of view, there are just business concepts and use cases. Domain events, commands, and handlers are technical details. However, all domain use cases are usually implemented as command handlers, therefore command handlers should be part of the domain, as should the query handlers implementing the queries used by the domain. Queries used by the UI can be part of the UI, and so on.
The point of CQRS is to have at least two models, and the command model should be the domain model itself. You can also have a query model specialised for domain usage, but it's still a (simplified) read model. Consider the command model as being used only for updates and the read model only for queries. You can have multiple read models (each used by a specific layer or component), or just one generic model used for every query.

Is a bounded context a full application?

I've been reading about DDD and bounded contexts and I think I'm getting the idea wrong. At first I liked the idea of subdomains and bounded contexts, and I understood it like this: there's a piece of software to be developed, but attacking it all at once is too much, so we break it into logical pieces and develop each one at a time. Another problem we solve this way is ambiguity in the ubiquitous language.
This led me to think of bounded contexts as basically just folders where I group and bound code related to some specific piece of the application. I believed this code to be made up of things like:
The domain model of that bounded context, including abstractions for repositories and services
Infrastructure layer for that bounded context, implementations of repositories and so on
Of course, the domain model and infrastructure would be properly separated within the bounded context.
Reading further, however, it seems that each bounded context is an entire application in its own right. It seems, sometimes, that each bounded context has its own application layer, for instance.
This confused me, because I don't want to end up developing tons of applications; I want to develop just one. The bounded-context division of the application was supposed to produce one app, not many apps to be integrated.
I've seen this question where @MikeSW says both approaches presented by the OP are valid. What I'm asking about is a third structure:
<bc 1>
|_ domain
|_ infrastructure
<bc 2>
|_ domain
|_ infrastructure
|_ application
|_ presentation
At least for all the applications I've seen, this makes much more sense. I want one app, not several apps with several presentations, but I still want to be able to break up the domain and benefit from things like "bounding the ubiquitous language".
So, is a bounded context a full application? Or can a bounded context be used the way I understood it, which feels more useful to me? Are there any problems with my approach?
The domain layer is usually the most complex part of your program and can also change often due to business requirements and refactoring, so you generally don't want to expose it directly to your presentation layer or to other bounded contexts. If you feel that you can expose it, it may be that your application logic or use-case methods are mixed into your domain layer, or that your program is not large or complex enough to require multiple BCs to begin with. Otherwise, I would include an application layer in each BC to protect the domain model's integrity and expose only the commands that need to be called from a use-case perspective.
I want one app, not several apps with several presentations, but I still want to be able to break up the domain and benefit from things like "bounding the ubiquitous language".
You can have a thin application layer for each bounded context and still have a single presentation layer. This is sometimes called a "composite UI", which should be considered a separate BC in itself. If you need to handle common logic such as authentication, create another application service or facade in the composite UI and have it handle the authentication before calling the application service of the outside BC.
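A sketch of such a composite-UI facade in C# (all the types are invented): it handles the shared authentication concern once, then delegates to the application service of an outside BC.

    using System;

    // Application service exposed by the (hypothetical) Sales bounded context.
    public interface ISalesApplicationService
    {
        void PlaceOrder(Guid customerId, Guid productId);
    }

    public interface IAuthenticator { bool IsValid(string token); }

    // Facade living in the composite UI, itself treated as a small BC.
    public class StorefrontFacade
    {
        private readonly IAuthenticator _auth;
        private readonly ISalesApplicationService _sales;

        public StorefrontFacade(IAuthenticator auth, ISalesApplicationService sales)
            => (_auth, _sales) = (auth, sales);

        public void PlaceOrder(string token, Guid customerId, Guid productId)
        {
            // the common concern is handled once, before crossing into the BC
            if (!_auth.IsValid(token))
                throw new UnauthorizedAccessException();

            _sales.PlaceOrder(customerId, productId);
        }
    }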
I think most of the examples you see in books and on the web are over-simplified in that they have one BC per physical running application (and perform some kind of network communication between them), whereas in the real world you might have a complex application that you need to split into separate logical units but not run as separate processes unless the need arises.
At the end of the day, the answer is both. The important thing to take away from bounded contexts is not how you structure your app, but that you have different spaces where you model specific behavior relating to some context. How you define the boundaries between these contexts depends on the problem you need to solve.
There is nothing wrong with using namespaces (folders) to define bounded contexts. As you said, most of the time you are simply writing one application. You can also define your bounded contexts by having a separate project for each context; in that case your presentation layer will reference the projects it needs.
There are many right ways to code DDD. You should ask yourself, "Am I following the core principle by doing it this way?"
The bounded context describes a subset of the complete solution, and everything within that context serves that context. So, IMO, each context has its own domain, so it could be a separate application or just a subsystem of the same project. The point of the "context" is that the ubiquitous language applies directly to that context. For example, a User in the Account context might mean something completely different from a User in the Sales context. Each "User" will have different capabilities and follow different rules in each context. Each context needs to be isolated from every other context, and contexts are not allowed to share references (unless it's via a 'Shared' context); any communication should be mediated through a service that sits on top of the context. A context doesn't even have to follow DDD to be "DDD compliant", since each context can follow its own approach (e.g. domain-driven, data-driven, etc.). Contexts are simply silos that outline a logical section of the business.
Whatever you need to do to prevent direct references across contexts is fine, whether that means different namespaces, different assemblies within a solution, or different projects altogether.
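To illustrate that User example, a minimal sketch in C# (both types invented): the same business word maps to different models in different context namespaces, and neither references the other.

    using System;

    namespace Accounts
    {
        // In the Account context, a User is about credentials and access.
        public class User
        {
            public Guid Id { get; init; }
            public string Login { get; init; } = "";
            public bool CanResetPassword() => true;
        }
    }

    namespace Sales
    {
        // In the Sales context, a User is a buyer with purchase history.
        public class User
        {
            public Guid Id { get; init; }
            public decimal LifetimeSpend { get; private set; }
            public void RecordPurchase(decimal amount) => LifetimeSpend += amount;
        }
    }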
The bounded context is the scope within which the code operates. It relies on a domain model, which may or may not be supported by an ORM. It implements different kinds of services (domain services and application services), but its aim is to expose only domain services to its environment. DDD is a service-oriented architecture, meant to work as offline as possible and in a loosely coupled way. You may decide to consume your services in different ways. The solution implements different kinds of components, layers, and projects. I believe the most critical attention must go to the model, which should not be distributed across components. Solution design and domain model are orthogonal concerns.
