Do microservices break the bounded context? - domain-driven-design

I am a bit confused. I am working at a young banking company, and we decided to implement a DDD architecture to break down complexity.
So, here is my question (it follows a design suggestion made by someone on the team). Let's say we have 3 different domains, D1, D2 and D3, that expose domain (web) services. Each domain manipulates strongly typed business entities that rely on the same tables. In front of these domains, we want a microservice to guarantee that the data persisted in the tables is consistent, in a centralized manner. D1, D2 and D3 ask the microservice to persist data conforming to specific rules. We want the microservice to act as a CRUD proxy to the tables. The microservice provides specific DTOs to the D1, D2 and D3 domains, hiding the tables from D1, D2 and D3.
Does this approach sound good? Would you consider using microservices in a DDD architecture to manage CRUD and data consistency for one or more domains? Does "CRUDing" and validating data through a microservice break the bounded context? What are the best practices for dealing with microservices in a DDD architecture, if any?
Many thanks for your contribution,
[EDIT]
The following article helped me to refine my thoughts : http://martinfowler.com/bliki/MicroservicePremium.html
Microservices are useful in complex situations where a monolithic system has failed at being maintainable. They are not good candidates for an upfront design. DDD, on the other hand, tries to tackle complexity at the very beginning of a project, so a successful DDD effort should not need to turn to a microservices implementation.

Things like validation, calculation, and persistence (CRUD) are all functions, not services. By centralizing these tasks in one service you tightly couple all the other services to it, and you lose the main benefit of SOA. Making your services loosely coupled while retaining high cohesion should be your primary goal, and I feel this design violates that.
You want to design your services as vertical slices of related business functions, rather than as centralized generic services and layers. A service should be composed of components that are deployed throughout your enterprise, from the database all the way to the UI. Each service should also have its own database, or at least its own schema if that's not practical. As soon as services share a database, they become tightly coupled again.
On a final note, if one of your services needs to do a simple CRUD insert, then that's all it should be. No need to map it to a domain model first. Save DDD for the business processes that have real invariants that need to be enforced. It doesn't have to be an all or nothing approach. Use the tool that makes sense for each use case.
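To make the "use the tool that fits each use case" advice concrete, here is a minimal, hypothetical C# sketch (CurrencyDto, CurrentAccount and the overdraft rule are invented names): a trivial reference-data insert stays plain CRUD, while an aggregate is reserved for a rule with a real invariant to protect.

```csharp
using System;

// CRUD path: a trivial reference-data insert, no domain model in between.
public sealed record CurrencyDto(string IsoCode, string Name);

public interface ICurrencyTable
{
    void Insert(CurrencyDto row); // thin data access, no business rules to enforce
}

public sealed class AddCurrencyHandler
{
    private readonly ICurrencyTable _table;
    public AddCurrencyHandler(ICurrencyTable table) => _table = table;

    public void Handle(CurrencyDto dto) => _table.Insert(dto); // nothing to model, so don't
}

// DDD path: an aggregate that exists because it has a real invariant to protect.
public sealed class CurrentAccount
{
    public decimal Balance { get; private set; }
    public decimal OverdraftLimit { get; }

    public CurrentAccount(decimal overdraftLimit) => OverdraftLimit = overdraftLimit;

    public void Withdraw(decimal amount)
    {
        if (amount <= 0) throw new ArgumentOutOfRangeException(nameof(amount));
        if (Balance - amount < -OverdraftLimit)
            throw new InvalidOperationException("Withdrawal would exceed the overdraft limit.");
        Balance -= amount;
    }
}
```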

Microservices are not supposed to "break" Bounded Contexts, they are complementary since there's a natural correlation between microservice and BC boundaries.
we want a microservice to guarantee that the data persisted in the tables is consistent, in a centralized manner
Microservices are not there to act as facades, and they are all about decentralization, not centralization. They are modular units organized around business capabilities. They will usually act as gateways to your Bounded Contexts, in front of the Domain, not as proxies to a persistent store behind the Domain.
It seems you're trying to apply the wrong solution to your problem -- using microservices as convergence points to a data-centric monolith, which defeats their whole purpose.
Maybe you should elaborate on what you mean by "guarantee data persisted is consistent" and "persist data conforming to specific rules". It might just be implemented in the persistence layer or in services that are not microservices.
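As a rough illustration of "gateway in front of the Domain, not proxy behind it", here is a hedged C# sketch; the Lending context, LoanApplication and its repository are invented names. Each bounded context owns its model and its persistence contract, and its service exposes domain behaviour instead of CRUD on shared tables.

```csharp
using System;

namespace Lending // one bounded context, e.g. D1; invented for illustration
{
    public sealed class LoanApplication
    {
        public Guid Id { get; } = Guid.NewGuid();
        public decimal Amount { get; }
        public bool Approved { get; private set; }

        public LoanApplication(decimal amount)
        {
            if (amount <= 0) throw new ArgumentOutOfRangeException(nameof(amount));
            Amount = amount;
        }

        public void Approve() => Approved = true; // the rule lives in the Domain, not in a shared CRUD service
    }

    // Persistence contract owned by this context, implemented against its own schema;
    // it is not shared with the other bounded contexts (D2, D3).
    public interface ILoanApplicationRepository
    {
        LoanApplication Get(Guid id);
        void Save(LoanApplication application);
    }

    // The service sits in front of the Domain as a gateway; callers never reach the tables directly.
    public sealed class LoanApplicationService
    {
        private readonly ILoanApplicationRepository _repository;
        public LoanApplicationService(ILoanApplicationRepository repository) => _repository = repository;

        public Guid Submit(decimal amount)
        {
            var application = new LoanApplication(amount);
            _repository.Save(application);
            return application.Id;
        }
    }
}
```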

Related

How to classify services in microservices?

I am new to microservices. I come from a monolithic background; in my current environment I have different kinds of services for different purposes, like search, file, email and notification. I have taken many courses, but in them the instructor separates out each entity, gives it its own database and creates an API for it (like a separate shopping cart entity and a product entity), and it makes no sense to me. I don't understand what the real-world use of microservices is, or how to split out a component so that it becomes its own microservice.
Can anyone give a real project example?
Thanks in advance
Read this and this. Also look here and here. I don't think anyone will give a link to a real working project, so you can try this.
I don't understand what the real-world use of microservices is
Mostly, as you have heard in all of those tutorials, the microservices architecture offers these advantages:
smaller services are easier to maintain and develop.
you can scale specific services rather than the whole project (monolith). For example, you scale service-1 to 4 instances so that request traffic is split across those 4 instances (load balancing), scale service-2 to 2 instances, and so on. These services may also be distributed across different servers and locations.
if one service fails, it does not take down the whole system, since the services are independent.
services can be reused for other scenarios or features.
a small team can work on each service, which makes both the project and the development flow easier to manage.
It also suffers from these disadvantages:
the services are simple and small, but the system as a whole is complex, so the design work is critical.
performance can suffer, and you need extra work to improve it (different types of caching at different levels).
transactions are complex and costly to develop. Imagine that a simple update has to be projected to other services when required; you then have to consider failures and a rollback strategy (SAGA); a minimal sketch follows this list.
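Here is a very reduced, hypothetical C# sketch of such a saga (the order/payment services and their methods are invented, not from the answer above): one step per service, with a compensating action when a later step fails.

```csharp
using System;
using System.Threading.Tasks;

public interface IOrderService
{
    Task ReserveAsync(Guid orderId);
    Task CancelReservationAsync(Guid orderId);
}

public interface IPaymentService
{
    Task ChargeAsync(Guid orderId);
}

public sealed class PlaceOrderSaga
{
    private readonly IOrderService _orders;
    private readonly IPaymentService _payments;

    public PlaceOrderSaga(IOrderService orders, IPaymentService payments)
    {
        _orders = orders;
        _payments = payments;
    }

    public async Task ExecuteAsync(Guid orderId)
    {
        await _orders.ReserveAsync(orderId);               // step 1, in the order service
        try
        {
            await _payments.ChargeAsync(orderId);          // step 2, in another service
        }
        catch
        {
            await _orders.CancelReservationAsync(orderId); // compensate step 1 on failure
            throw;
        }
    }
}
```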
how to split out a component so that it becomes its own microservice
This is the most challenging part of microservices. You need to study Domain-Driven Design (DDD) in depth, in particular:
Decompose by subdomain
Decompose by Business Capabilities
Can anyone give a real project example?
There are many projects that build microservices with different patterns. I think you have to start your own and get your hands dirty.

DDD, CQRS/ES & Microservices: Should decisions be taken on a microservice's views or aggregates?

I'll explain the problem through an example, as it makes everything more concrete and will hopefully reduce ambiguity.
The Architecture is pretty simple
1 Microservice <=> 1 Aggregate <=> Transactional Boundary
Each microservice will be using the CQRS/ES design pattern, which implies:
Each microservice will have its own Aggregate mapping the domain of a real-world problem
The state of the aggregate will be rebuilt from an event store (see the sketch after this list)
Each event will signify a state change within the aggregate and will be transmitted to any service interested in the change via a message broker
Each microservice will be transactional within its own domain
Each microservice will be eventually consistent with other domains
Each microservice will build its own view models from events emitted by other microservices
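As a minimal illustration of the "state rebuilt from an event store" item above, here is a hypothetical C# sketch (the event types, CurrentAccountAggregate and the withdrawal rule are invented): the aggregate's state is a fold over its past events, and a command checks its invariant before emitting a new event.

```csharp
using System;
using System.Collections.Generic;

public abstract record AccountEvent(Guid AccountId);
public sealed record MoneyDeposited(Guid AccountId, decimal Amount) : AccountEvent(AccountId);
public sealed record MoneyWithdrawn(Guid AccountId, decimal Amount) : AccountEvent(AccountId);

public sealed class CurrentAccountAggregate
{
    public Guid Id { get; private set; }
    public decimal Balance { get; private set; }

    // Replay history from the event store to rebuild state before handling a new command.
    public static CurrentAccountAggregate Rehydrate(Guid id, IEnumerable<AccountEvent> history)
    {
        var aggregate = new CurrentAccountAggregate { Id = id };
        foreach (var e in history) aggregate.Apply(e);
        return aggregate;
    }

    // Command: check the invariant, then emit a new event (persisted by the caller).
    public AccountEvent Withdraw(decimal amount)
    {
        if (amount <= 0 || amount > Balance)
            throw new InvalidOperationException("Withdrawal rejected by the aggregate's invariant.");
        var e = new MoneyWithdrawn(Id, amount);
        Apply(e);
        return e;
    }

    private void Apply(AccountEvent e)
    {
        Balance = e switch
        {
            MoneyDeposited d => Balance + d.Amount,
            MoneyWithdrawn w => Balance - w.Amount,
            _ => Balance
        };
    }
}
```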
So, for the example, let's say we have a banking system:
the current-account microservice is responsible for mapping the Customer's Current Account ... Withdrawals, Deposits
the rewards microservice will be responsible for the inventory and stocktaking of any rewards offered by the bank
the air-miles microservice will be responsible for monitoring all the transactions coming from the current-account and, in doing so, awarding the Customer rewards from our rewards microservice
So the problem is this: should the air-miles microservice make decisions based on its own view model, which is updated from events coming from the current-account, and, similarly, use it to pick which reward to give out to the Customer?
Drawbacks of taking decisions on local view models:
Replicating domain logic on how to maintain these views
Bugs within the view might cause the wrong rewards to be given out
State changes (aka events emitted) based on corrupted view models could have consequences in other services which are taking their own decisions on those events
Advantages of taking a decision on local view models:
The system doesn't need to constantly query the microservice owning the domain
The system should be faster and less resource-intensive
Or should it use the events coming from the service to trigger queries to the aggregate owning the domain? In doing so we accept that view models might get corrupted, but the final decision is always checked against the aggregate owning the domain.
Please note that the above is simply my understanding of the architecture, and the aim of this post is to get different views on how one might use this architecture effectively in a microservice environment, keeping each service decoupled yet avoiding cascading corruption scenarios without too much chatter between the services.
So the problem is this: should the air-miles microservice make decisions based on its own view model, which is updated from events coming from the current-account, and, similarly, use it to pick which reward to give out to the Customer?
Yes. In fact, you should revise your architecture and even create more microservices. What I mean is that, this being an event-driven architecture (and an event-sourced one), your microservices have two responsibilities: they need to keep two different models, the write model and the read model.
So, for each Aggregate there should be a microservice that keeps only the write model; that is, it only processes Commands, without also building a read model.
Then, for each read/query use case you should have a microservice that builds the perfect read model for that use case. This is required if you want to keep the Aggregate microservice clean (as you should), because in general the read models need data from multiple Aggregate types/bounded contexts. Read models may cross bounded-context boundaries; Aggregates may not. So you see, you don't really have a choice if you want to fully respect DDD.
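Here is a hedged C# sketch of such a read-model microservice, reusing the air-miles example from the question (the TransactionPosted event, AirMilesView and the miles rule are invented; the message-broker subscription is left out): it consumes events published by the current-account service and maintains its own local view.

```csharp
using System;
using System.Collections.Generic;

// An event published by the current-account service (shape invented for illustration).
public sealed record TransactionPosted(Guid AccountId, decimal Amount, DateTime OccurredAt);

// The read model owned by the air-miles service: its own interpretation of another
// context's events, kept as a local, denormalized view.
public sealed class AirMilesView
{
    private readonly Dictionary<Guid, int> _milesByAccount = new();

    public int MilesFor(Guid accountId) =>
        _milesByAccount.TryGetValue(accountId, out var miles) ? miles : 0;

    // Called by whatever message-broker subscription delivers the event (left abstract here).
    public void When(TransactionPosted e)
    {
        var earned = (int)(Math.Abs(e.Amount) / 10m); // invented rule: 1 mile per 10 spent
        _milesByAccount[e.AccountId] = MilesFor(e.AccountId) + earned;
    }
}
```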
Some say that domain events should be hidden, local only to the owning microservice. I disagree. In an event-driven architecture the domain events are first-class citizens: they are allowed to reach other microservices. This gives the other microservices the chance to build their own interpretation of the system state. Otherwise, the emitting microservice would have the impossible additional responsibility of building a state that matches every possible need that any other microservice could ever have. For example, maybe a microservice wants to look up a deleted remote entity's title; how could it do that if the emitting microservice keeps only the list of not-yet-deleted entities? You may say: then it will keep all the entities, deleted or not. But maybe someone needs the date an entity was deleted; you may say: then I keep the deletedDate as well. Do you see what you are doing? You break the Open/Closed Principle: every time you create a microservice, you need to modify the emitting microservice.
There is also the resilience of the microservices. In The Art of Scalability, the authors speak about swim lanes: a strategy for separating the components of a system into lanes of failure, so that a failure in one lane does not propagate to other lanes. Our microservices are lanes. Components in one lane are not allowed to access any component from another lane. One microservice going down should not bring the others down. It's not a matter of speed or optimisation; it's a matter of resilience. The domain events are the perfect way of keeping two remote systems synchronized. They also emphasize the fact that the data is eventually consistent; the events travel at a limited speed (from nanoseconds to even days). When a system is designed with that in mind, no other microservice can bring it down.
Yes, there will be some code duplication. And yes, although I said that you don't have a choice, you do. In order to reduce the code duplication, at the cost of lower resilience, you can have some canonical read models that build a normal flat state and that other microservices can query. In most cases this is dangerous, as it breaks the swim-lane concept: should a canonical microservice go down, all dependent microservices go down with it. Canonical microservices work best for CRUD-like bounded contexts.
There are however valid cases when you may have some internal events that you don't want to expose. In other words, you are not required to publish all domain events.
So the problem is this: should the air-miles microservice make decisions based on its own view model, which is updated from events coming from the current-account, and, similarly, use it to pick which reward to give out to the Customer?
Each consumer uses a local replica of a representation computed by the producer.
So if air-miles needs information from current-account it should be looking at a local replica of a view calculated by the current-account service.
The key idea is this: microservices are supposed to be isolated from one another; you should be able to redesign and deploy one without impacting the others.
So try this thought experiment: suppose we had these three microservices, but all saving snapshots of current state rather than events. Everything works; then imagine that the current-account maintainer discovers that an event-sourced implementation would better serve the business.
Should the change to the current-account require a matching change in the air-miles service? If so, can we really claim that these services are isolated from one another?
Advantages of taking a decision on local view models
I don't particularly like these "advantages"; first, they are dominated by the performance axis (please recall that the second rule of performance optimization is "not yet"). And second, they assume that the service boundaries are correctly drawn; maybe the performance issue is evidence that the separation of responsibilities needs review.

Can you suggest DDD best practices

Probably similar questions have been asked many times, but I think that every response helps to make the understanding of DDD better and better. I would like to describe how I perceive certain aspects of DDD. I have some basic uncertainties around it and would appreciate it if someone could give a solid and practical answer. Please note, these questions assume a 'classic' approach to DDD. This means using ORMs etc. Approaches like CQRS and event sourcing are not considered here.
Aggregates and entities are the primary objects that implement domain logic. They have state and identity. In this context, I perceive domain logic as the set of all commands that mutate that state. Does that make sense? Why is domain logic related exclusively to state? Is it legal to model domain objects that have no identity or no state? Why can't a domain object be implemented as a transaction script? Example: Consider an object that recommends you a partner for a dating site. That object has no real state, yet it performs quite a lot of domain logic. Putting it into the service layer implies that the domain model cannot cover all logic.
Access to other domain objects. Can aggregates have access to a repository? Example: When a (stateful) domain object needs to have access to all 'users' of the system to perform its work, it would need to access them via the repository. As a consequence, an ORM would need to inject the repository when loading the object (which might be technically more challenging). If objects can't have access to repositories, where would you put the domain logic for this example? In the service layer? Isn't the service layer supposed to have no logic?
Aggregates and entities should not talk to the outside world; they are only concerned with their bounded context. We should not inject external dependencies (like IPaymentGateway or IEmailService) into a domain object, as this would force the domain to handle exceptions that come from outside. Solution: an event-based approach. How do you send events then? You still need to inject the correct 'listeners' every time you instantiate a domain object. ORMs are about restoring 'data' and are not primarily intended to inject dependencies. Do we need a DI/ORM mix?
Domain objects and DTOs. When you query an aggregate root for its state, does it return a projection of its state (a DTO) or the domain objects themselves? In most models that I see, clients have full access to the domain data model, introducing a deep coupling to the actual structure of the domain. I perceive the 'object graph' behind an aggregate to be its own business. That's encapsulation, right? So for me, an aggregate root should return only DTOs. DTOs are often defined in the service layer, but my approach is to model them in the domain itself. The service layer might still add another level of abstraction, but that's a different choice. Is that good advice?
Repositories handle all CRUD operations at the aggregate root level. What about other queries? Queries return DTOs and not domain objects. For that to work, the repository must be aware of the data structure of the domain, which introduces a coupling. My advice is similar to before: use events to populate views. Thus, the internal structure is not made public; only the events carry the data necessary to build up the view.
Unit of work. A controller at the system boundary will instantiate commands and pass them to a service layer which in turn loads the appropriate aggregates and forwards the commands. The controller might use multiple commands and pass them to multiple services. This is all controlled by the unit of work pattern. This means, repositories, entities, services - all participate in the same transaction. Do you agree?
Business logic is not domain logic. From a business perspective, the realization of a use case might involve many steps: registering a customer, sending an email, creating a storage account, etc. This overall process cannot possibly fit in a domain aggregate root; the domain object would need access to all kinds of infrastructure. Solution: workflows or sagas (or a transaction script). Is that good advice?
Thank you
The first best practice I can suggest is to read Evans' book. Twice.
Too many "DDD projects" fail because developers pretend that DDD is simply OOP done right.
Then, you should really understand that DDD is for applications that have to handle very complex business rules correctly. In a nutshell: if you don't need to pay a domain expert to understand the business, you don't need DDD. The core concept of DDD, indeed, is the ubiquitous language that the coders, the experts and the users all share in order to understand each other.
Furthermore, you should understand what aggregates are (consistency boundaries) by reading Effective Aggregate Design by Vernon.
Finally, you might find the modeling patterns documented here useful.
Despite my comment above, I took a stab at your points. (note: I'm not Eric Evans or Jimmy Nilsson so take my "advice" with a grain of salt).
Your example "Consider an object that recommends you a partner for a dating site.", belongs in a Domain service (not an infrastructure service). See this article here - http://lostechies.com/jimmybogard/2008/08/21/services-in-domain-driven-design/
Aggregates do not access repositories directly, but they can create a unit of work which combines operations from multiple domain objects into one.
Not sure on this one. This should really be a question by itself.
That's debatable; in theory, the domain entities would not be directly available outside the aggregate root, but that is not always practical. I consider this decision on a case-by-case basis.
I'm not sure what you mean exactly by "queries". If modeling all possible "reading" scenarios in your domain does not seem practical or does not provide sufficient performance, that suggests a CQRS solution is probably best.
Yes, I agree. UOW is a tool in your toolbox that you can use in various layers.
This statement is fundamentally wrong: "Business logic is not domain logic". The domain IS the representation of the business logic, which is one reason for using the ubiquitous language.
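As promised for point 1 above, here is a minimal, hypothetical C# sketch of such a domain service: stateless, without identity or infrastructure concerns, holding nothing but domain logic. The Profile type and the matching rule are invented for illustration.

```csharp
using System.Collections.Generic;
using System.Linq;

public sealed record Profile(string Name, int Age, IReadOnlyCollection<string> Interests);

// A domain service: pure domain logic, no identity, no persisted state, no infrastructure.
public sealed class PartnerRecommendationService
{
    public Profile Recommend(Profile seeker, IEnumerable<Profile> candidates) =>
        candidates
            .Where(candidate => candidate != seeker)
            .OrderByDescending(candidate => candidate.Interests.Intersect(seeker.Interests).Count())
            .FirstOrDefault(); // the candidate sharing the most interests, or null if none
}
```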

Where to put business logic in DDD

I'm trying to figure out the best way to build an easily maintainable and testable architecture. Having gone through several projects, I've seen some pretty bad architectures and I want to avoid making future mistakes on my own projects.
Let's say I'm building a fairly complex three layer application and I want to use DDD. My question is, where should I place my business logic? Some people say it should be placed in services (service layer) and that does make sense. Having a number of services which adhere to Single Responsibility Principle makes sense.
However, some people said that this is an anti pattern and that business logic shouldn't be implemented in the service layer. Why is this?
Let's say we have IAuthenticationService which has a method with bool UsernameAvailable(string username) signature. The method would implement all required logic to check whether the username is available or not.
What is the problem here according to the "this is an antipattern" crowd?
If you put all your business logic in an (implicitly stateless) service layer you're writing procedural code. By decoupling behavior from data, you're giving up on writing object-oriented code.
That's not always bad: it's simple, and if you have simple business logic there's no reason to invest in a full-fledged object-oriented domain model.
The more complex the business logic (and the larger the domain), the faster procedural code turns into spaghetti code: procedures start calling each other with different pre- and post-conditions (in incompatible order) and they begin to require ever-growing state objects.
Martin Fowler's article on Anemic Domain Models is probably the best starting point for understanding why (and under what conditions) people object to putting business logic in a service layer.
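To make the contrast concrete, here is a deliberately tiny, hypothetical C# sketch (the Order example and its "already shipped" rule are invented): the procedural version keeps the data in one place and the behaviour in a stateless service, while the object-oriented version keeps the rule next to the state it protects.

```csharp
using System;

// Procedural / "anemic" style: data here, behaviour over there in a stateless service.
public class OrderData
{
    public decimal Total;
    public bool Shipped;
}

public static class OrderShippingService
{
    public static void Ship(OrderData order)
    {
        if (order.Shipped) throw new InvalidOperationException("Already shipped.");
        order.Shipped = true;
    }
}

// Object-oriented / rich domain style: the rule lives with the data it protects.
public sealed class Order
{
    public decimal Total { get; private set; }
    public bool Shipped { get; private set; }

    public void Ship()
    {
        if (Shipped) throw new InvalidOperationException("Already shipped.");
        Shipped = true;
    }
}
```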
A service layer in itself is not an anti-pattern, it is a very reasonable place to put certain elements of your business logic. However, you do need to apply discretion to the design of the service layer, ensuring that you aren't stealing business logic from your domain model and the objects that comprise it.
By doing that you can end up with a true anti-pattern, an anaemic domain model.
This is discussed in-depth by Martin Fowler here.
Your example of an IAuthenticationService isn't perhaps the best for discussing the problem - much of the logic around authentication can be seen as living in a service and not really associated with domain objects. A better example might be if you had some sort of IUserValidationService to validate a user, or even worse a service that does something like process orders - the validation service is stripping logic out of the user object and the order processing service is taking logic away from your order objects, and possibly also from objects representing customers, delivery notices etc...
You have to have 4 layers with DDD: Presentation, Application, Domain, and Infrastructure.
The Presentation layer presents information to the user and interprets user commands.
All use-case-dependent logic (application entities, application workflow components, e.g. DTOs, application services) goes into the Application layer (application logic). This layer doesn't contain any business logic and does not hold the state of business objects, but it can keep the state of an application task's progress.
All logic that is invariant across use cases (business entities, business workflow components, e.g. the Domain model, Domain services) goes into the Domain layer (domain logic). This layer is responsible for the concepts of the business domain and the business rules.
The Infrastructure layer may have IoC, Cache, Repositories, ORM, Cryptography, Logging, Search engine, etc.
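A rough C# sketch of that split, assuming an invented withdrawal use case (Account, IAccountRepository and WithdrawMoneyUseCase are not from the answer above): the Application layer only coordinates the use case, while the business rule stays in the Domain layer.

```csharp
using System;

// Domain layer: the business rule lives here.
public sealed class Account
{
    public decimal Balance { get; private set; }

    public void Withdraw(decimal amount)
    {
        if (amount <= 0 || amount > Balance)
            throw new InvalidOperationException("Business rule violated.");
        Balance -= amount;
    }
}

// Contract implemented in the Infrastructure layer.
public interface IAccountRepository
{
    Account Load(Guid id);
    void Save(Account account);
}

// Application layer: use-case workflow only (load, delegate, persist), no business rules.
public sealed class WithdrawMoneyUseCase
{
    private readonly IAccountRepository _accounts;
    public WithdrawMoneyUseCase(IAccountRepository accounts) => _accounts = accounts;

    public void Execute(Guid accountId, decimal amount)
    {
        var account = _accounts.Load(accountId);
        account.Withdraw(amount);  // the Domain layer decides
        _accounts.Save(account);   // the Application layer coordinates
    }
}
```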

DDD vs N-Tier (3-Tier) Architecture

I have been practicing DDD for a while now with the 4 distinct layers: Domain, Presentation, Application, and Infrastructure. Recently, I introduced a friend of mine to the DDD concept and he thought it introduced an unnecessary layer of complexity (specifically targeting interfaces and IoC). Usually, it's at this point that I explain the benefits of DDD, especially its modularity. All the heavy lifting and under-the-hood stuff is in the Infrastructure, and if I wanted to completely change the underlying data-access method, I could do so by touching only the Infrastructure layer repository.
My friend's argument is that he could build a three tiered application in the same way:
Business
Data
Presentation
He would create business models (like domain models) and have the repositories in the Data layer return those business models. Then he would call the business layer, which called the data layer. I told him the problem with that approach is that it is not testable. Sure, you can write integration tests, but you can't write true unit tests. Can you see any other problems with his proposed 3-tiered approach? (I know there are, because why would DDD exist otherwise?)
EDIT: He is not using IoC. Each layer in his example is dependent on one another.
I think you're comparing apples and oranges. Nothing about N-Tier prohibits it from utilizing interfaces & DI in order to be easily unit-tested. Likewise, DDD can be done with static classes and hard dependencies.
Furthermore, if he's implementing business objects and using Repositories, it sounds like he IS doing DDD, and you are quibbling over little more than semantics.
Are you sure the issue isn't simply over using DI/IoC or not?
I think you are mixing a few methodologies up. DDD is Domain-Driven Design and is about making the business domain a part of your code. What you are describing sounds more like the Onion Architecture (link) versus a 'normal' 3-layered approach. There is nothing wrong with using a 3-layered architecture with DDD. DDD depends on TDD (Test-Driven Development). Interfaces help with TDD, as it is easier to test each class in isolation; if you use Dependency Injection (and IoC), it becomes easier still.
The Onion Architecture is about making the Domain (i.e. the business rules) independent of everything else: it's the core of the application, with everything depending on the business objects and rules, while things related to infrastructure, the UI and so on sit in the outer layers. The idea is that the closer a module is to the 'shell of the onion', the easier it is to exchange for a new implementation.
Hope this clears it up a bit - now with a minor edit!
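A minimal, hypothetical C# sketch of that dependency direction (Customer and ICustomerRepository are invented names): the Domain defines the contract it needs, and an outer Infrastructure layer implements it, so dependencies point inward toward the business objects and rules.

```csharp
using System;
using System.Collections.Generic;

// Core (Domain): knows nothing about infrastructure.
public sealed class Customer
{
    public Guid Id { get; init; }
    public string Name { get; init; } = "";
}

// The Domain owns the contract it needs...
public interface ICustomerRepository
{
    Customer FindById(Guid id);
}

// ...and an outer (Infrastructure) layer supplies an implementation, which can be
// swapped for an ORM- or web-service-backed one without touching the core.
public sealed class InMemoryCustomerRepository : ICustomerRepository
{
    private readonly Dictionary<Guid, Customer> _store = new();

    public void Add(Customer customer) => _store[customer.Id] = customer;

    public Customer FindById(Guid id) =>
        _store.TryGetValue(id, out var customer)
            ? customer
            : throw new KeyNotFoundException($"No customer with id {id}.");
}
```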
Read "Fundamentals of Software Architecture: An Engineering Approach", Chapter 8, Page 100 to 107.
The top-level partitioning is of particular interest to architects because it defines the fundamental architecture style and way of partitioning code. It is one of the first decisions an architect must make. These two styles (DDD & Layered) represent different ways to top-level partition the architecture. So, you are not comparing apples and oranges here.
Architects using technical partitioning organize the components of the system by technical capabilities: presentation, business rules, persistence, and so on.
Domain partitioning is inspired by Eric Evans' book Domain-Driven Design, a modeling technique for decomposing complex software systems. In DDD, the architect identifies domains or workflows that are independent and decoupled from each other.
The domain partitioning (DDD) may use a persistence library and have a separate layer for business rules, but the top-level partitioning revolves around domains. Each component in the domain partitioning may have subcomponents, including layers, but the top-level partitioning focuses on domains, which better reflects the kinds of changes that most often occur on projects.
So you can implement layers on each component of DDD (your friend is doing the opposite, which is interesting and we might try that out as well).
However, please note that ("Fundamentals of Software Architecture: An Engineering Approach", Page 135)
The layered architecture is a technically partitioned architecture (as opposed to a domain-partitioned architecture). Groups of components, rather than being grouped by domain (such as customer), are grouped by their technical role in the architecture (such as presentation or business). As a result, any particular business domain is spread throughout all of the layers of the architecture. For example, the domain of “customer” is contained in the presentation layer, business layer, rules layer, services layer, and database layer, making it difficult to apply changes to that domain. As a result, a domain-driven design approach does not work as well with the layered architecture style.
Everything in architecture is a trade-off, which is why the famous answer to every architecture question in the universe is "it depends." That being said, the disadvantage of your friend's approach is that it has higher coupling at the data level. Moreover, it creates difficulties in untangling the data relationships if the architects later want to migrate this architecture to a distributed system (e.g. microservices).
An N-tier, in this case 3-tier, architecture works great with unit tests.
All you have to do is apply IoC (inversion of control) using dependency injection and the repository pattern.
The business layer can validate and prepare the data returned to the presentation / web API layer by returning exactly the data that is required.
You can then use mocks in your unit tests all the way through the layers.
All your business logic can live in the BL layer, and the DAL layer will contain repositories injected from a higher level.
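As a hedged C# sketch of that setup, reusing the UsernameAvailable example from the earlier question (the repository interface, the fake and the test body are invented): the business-layer class receives its repository through the constructor, so a unit test can substitute an in-memory fake, or a mock from a mocking library, for the real DAL.

```csharp
using System.Collections.Generic;

// Data layer abstraction, injected into the business layer.
public interface IUserRepository
{
    bool Exists(string username);
}

// Business layer: depends only on the abstraction, so it can be tested in isolation.
public sealed class AuthenticationService
{
    private readonly IUserRepository _users;
    public AuthenticationService(IUserRepository users) => _users = users;

    public bool UsernameAvailable(string username) =>
        !string.IsNullOrWhiteSpace(username) && !_users.Exists(username);
}

// Hand-rolled test double standing in for the DAL (a mocking library would also do).
public sealed class FakeUserRepository : IUserRepository
{
    private readonly HashSet<string> _taken;
    public FakeUserRepository(IEnumerable<string> taken) => _taken = new HashSet<string>(taken);
    public bool Exists(string username) => _taken.Contains(username);
}

public static class AuthenticationServiceTests
{
    // No database, no web server: just the business rule under test.
    public static void UsernameAvailable_ChecksTheRepository()
    {
        var service = new AuthenticationService(new FakeUserRepository(new[] { "alice" }));

        if (service.UsernameAvailable("alice")) throw new System.Exception("'alice' should be taken.");
        if (!service.UsernameAvailable("bob")) throw new System.Exception("'bob' should be available.");
    }
}
```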
