Vaughn Vernon and Vlad Khononov both describe an 'Interface' or 'Interchange' Bounded Context: a separate context for the system's interface layer. This Interface Bounded Context has its own integration model and language (Open-Host Service and Published Language) and is decoupled from the Domain Model Bounded Context and its Ubiquitous Language. This decoupling lets the Domain Model change freely while the interface layer presents a stable, customer-friendly API.
Since we are discussing two Bounded Contexts here (Interface and Domain), the question arises: what is the integration pattern between the two? Is 'Shared Kernel' the appropriate pattern, given that there is overlap and mapping between the two? Furthermore, both contexts are maintained by a single team. What do you think?
Related
I'm starting with DDD and I have a question about applying DDD to a web project.
Suppose I have a Bounded Context for every section of a web project, for example "Catalog" and "Shopping Cart" in an e-commerce project. Where should the code live that implements the frontend for the whole site and presents concepts from many Bounded Contexts?
I have thought about creating a "Web" Bounded Context, but this Bounded Context wouldn't represent a specific Ubiquitous Language, because it would use concepts from many Bounded Contexts and Subdomains.
What do you think about this?
Thanks.
Where this code goes depends on the structure of your application.
DDD is a set of patterns and rules that helps you model your business. This model should be ubiquitous, meaning different applications should share the same business logic. The main rule of DDD is: whatever describes the business goes in the domain; everything else does not. DDD does not state anything about how you should structure your application; it can be applied to any architecture.
What you describe is called presentation logic, and it does not describe your business logic. It describes how your system interacts with clients, which are external actors, and it is specific to your application: if you make a web and a mobile version of your app, chances are you will have the same domain implementation but your presentation logic will differ slightly. So there is no DDD answer to where presentation logic goes, besides: not in the domain.
If you make a traditional 3-layered application, this logic goes in the presentation layer.
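As a rough sketch (all names here are hypothetical), the presentation layer of a web application might compose one page from several bounded contexts without any of that composition leaking into the domain:

```java
import java.util.List;

// Presentation layer: composes one page from several bounded contexts.
// CatalogService and ShoppingCartService stand in for the application services
// of the "Catalog" and "Shopping Cart" contexts; none of this belongs in the domain.
class StorefrontController {

    private final CatalogService catalog;
    private final ShoppingCartService cart;

    StorefrontController(CatalogService catalog, ShoppingCartService cart) {
        this.catalog = catalog;
        this.cart = cart;
    }

    // Builds a page-specific view model; this is pure presentation logic.
    StorefrontView homePage(String customerId) {
        return new StorefrontView(catalog.featuredProductNames(),
                                  cart.itemCountFor(customerId));
    }
}

// A DTO shaped for the page, not for any domain model.
record StorefrontView(List<String> featuredProductNames, int cartItemCount) { }

interface CatalogService { List<String> featuredProductNames(); }
interface ShoppingCartService { int itemCountFor(String customerId); }
```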
The published language of the open-host service (OHS) can be seen as an integration-oriented model that simplifies the OHS's public interface for its consumers. The published language is integration-optimized: it exposes data in a model that is more convenient for consumers and specifically designed for integration needs.
Given that each bounded context has its own internal (canonical data) model, the OHS in effect decouples the bounded context's internal model from the model used for integration with other bounded contexts. The internal model can therefore evolve without impacting the consumers of the OHS.
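To make that concrete, here is a minimal Java sketch (all type names are hypothetical): the OHS is the only component that knows both models, translating the rich internal model into a flat, integration-optimized published language:

```java
import java.util.List;

// Internal model of the bounded context: rich, and free to evolve.
record PersonName(String first, String last) {
    String fullName() { return first + " " + last; }
}
record Customer(String id, PersonName name, List<String> phones) { }

// Published language: a flat, integration-optimized contract.
record CustomerDto(String id, String displayName, String primaryPhone) { }

// Open-host service: the only component that knows both models.
class CustomerOpenHostService {
    CustomerDto toPublishedLanguage(Customer c) {
        return new CustomerDto(
                c.id(),
                c.name().fullName(),
                c.phones().isEmpty() ? null : c.phones().get(0));
    }
}
```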
Say we want to design the integration model of an OHS, or even of several OHSs. Don't we end up with the old-school canonical data model we used for integration in the ESB/SOA era? One could even argue that designing integration models for public OHSs leads naturally to the interchange context: a separate bounded context mainly in charge of transforming models for more convenient consumption by other components.
So the question is: with the interchange context, are we not going back to the canonical data model of the SOA/ESB era? If not, what is the difference?
While studying DDD I'm wondering why the Domain model needs to define interfaces for the Infrastructure layer.
From my reading I gathered that a high level (domain model, application services, domain services) defines interfaces that need to be implemented by a lower layer (infrastructure). Simple.
From this idea, it makes sense to me that the application level (a high level) defines interfaces for a lower one (infrastructure), because the application level will use infrastructure components (a repository being the usual example) but doesn't want to be tied to any particular implementation.
However, it confuses me when some books show the domain level defining infrastructure interfaces, because the domain model should never use a repository directly; we want our domain model to stay "pure".
Am I missing something?
While studying DDD I'm wondering why the Domain model needs to define interfaces for the Infrastructure layer.
It doesn't really -- that's part of the point.
The domain model defines the interfaces / contracts that it needs to do work, with the promise of happily working with any implementation that conforms to the contract.
So you can choose to implement the interface in your application component, or in the infrastructure component, or wherever makes sense.
Note the shift in language from "layer" to "component". Layers may be too simplistic to work -- see Udi Dahan 2007.
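A tiny sketch of that idea (the names are made up): the domain declares only the contract it needs, and any component that conforms can supply the implementation:

```java
// Domain component: declares the contract it needs to do its work, nothing more.
interface OrderNotifier {
    void orderPlaced(String orderId);
}

// Domain component: happily works with any implementation that conforms.
class PlaceOrderPolicy {
    private final OrderNotifier notifier;

    PlaceOrderPolicy(OrderNotifier notifier) {
        this.notifier = notifier;
    }

    void placeOrder(String orderId) {
        // ... domain logic would live here ...
        notifier.orderPlaced(orderId);
    }
}

// Application (or infrastructure) component: one conforming implementation.
// It could just as well publish to a message broker; the domain never knows.
class ConsoleOrderNotifier implements OrderNotifier {
    @Override
    public void orderPlaced(String orderId) {
        System.out.println("Order placed: " + orderId);
    }
}
```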
I came across the same question myself. As I understand it, the reason is that you want the application to consume only objects/interfaces defined in the Domain Model. It also helps to keep one repository per aggregate, since they sit next to each other in the same layer.
The fact that the Application layer has a reference to the Infrastructure layer is only for the purpose of dependency injection. Your Application layer services should call the interfaces exposed in the Domain layer, get Domain Objects (POCOs) back, do their work, and possibly call interfaces again, for example to commit a transaction. This is why, for example, the Unit of Work pattern exposes its actions through a Domain layer interface.
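For illustration (hypothetical names throughout), that flow might look like this, with the application service touching only interfaces declared in the Domain layer:

```java
// Domain layer: the contracts the application service consumes.
interface OrderRepository {
    Order findById(String orderId);
}
interface UnitOfWork {
    void commit();
}

// Domain layer: a plain domain object with behaviour.
class Order {
    private boolean shipped;
    void markShipped() { this.shipped = true; }
}

// Application layer: orchestrates, but only through domain-declared interfaces.
// Concrete implementations are wired in via dependency injection.
class ShipOrderService {
    private final OrderRepository orders;
    private final UnitOfWork unitOfWork;

    ShipOrderService(OrderRepository orders, UnitOfWork unitOfWork) {
        this.orders = orders;
        this.unitOfWork = unitOfWork;
    }

    void shipOrder(String orderId) {
        Order order = orders.findById(orderId); // a domain object comes back
        order.markShipped();                    // domain behaviour
        unitOfWork.commit();                    // commit via a domain-layer interface
    }
}
```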
I have an important question about implementing Domain-Driven Design.
In the layered-architecture diagram in Evans's book, the Domain layer references the Infrastructure layer, Infrastructure being the lowest layer. However, in every implementation I see on the Internet it is the other way around: the infrastructure layer references the domain layer, perhaps because the repository pattern is implemented with an ORM. What do you think about this? Does anyone have an example implemented exactly as in Evans's book?
The examples you see where interfaces live in the domain (e.g. UserRepository) and their implementations live in the infrastructure (e.g. HibernateUserRepository) are applying the Dependency Inversion Principle (DIP) to the traditional Layered Architecture.
In the traditional Layered Architecture, high-level modules depend on low-level modules. If we look at the standard layer order, we have Domain -> Infrastructure.
Do we really want our domain to depend on infrastructure details? By applying DIP, we invert the dependency and make Infrastructure depend on the Domain layer; moreover, it does not depend on concretions, but on abstractions.
Here's what DIP states:
A. High-level modules should not depend on low-level modules. Both should depend on abstractions.
B. Abstractions should not depend on details. Details should depend on abstractions.
Source: http://en.wikipedia.org/wiki/Dependency_inversion_principle
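Sketched in Java using the names above (a sketch only, assuming Hibernate 5.2+ and a mapped User entity; error handling elided):

```java
import org.hibernate.Session;
import org.hibernate.SessionFactory;

// Domain layer (its own file/component): the abstraction.
// The domain compiles with no Hibernate on its classpath.
public interface UserRepository {
    User findById(long id);
    void save(User user);
}

// Infrastructure layer (its own file/component): the detail,
// depending on the domain's abstraction rather than the other way around.
public class HibernateUserRepository implements UserRepository {

    private final SessionFactory sessionFactory;

    public HibernateUserRepository(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    @Override
    public User findById(long id) {
        try (Session session = sessionFactory.openSession()) {
            return session.get(User.class, id);
        }
    }

    @Override
    public void save(User user) {
        try (Session session = sessionFactory.openSession()) {
            session.beginTransaction();
            session.persist(user);
            session.getTransaction().commit();
        }
    }
}
```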
Are you sure you are looking at dependencies correctly? Repositories are not exactly infrastructure; they sit between your domain and your data access layer.
As stated in Fowler's book, a repository
Mediates between the domain and data mapping layers using a collection-like interface for accessing domain objects.
If you are using a database or, as you noted, an ORM, your repository implementations will reference the ORM. Quite often a repository is implemented as a generic class; the only way it then "references" the domain as such is by using the base "entity" class as the generic constraint.
Very often, "by this" and "by that" data retrieval queries are implemented as methods on repository classes, but this is not a very good practice. That work belongs to queries, or specifications, which a repository should be able to execute without knowing much about their details.
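One lightweight way to express that idea (a sketch with invented names; a real repository implementation would translate the specification into SQL or ORM criteria rather than filtering in memory):

```java
import java.util.List;
import java.util.function.Predicate;

// A specification the domain defines; the repository only needs to evaluate it.
interface Specification<T> extends Predicate<T> { }

// Repository contract: one generic query method instead of findByThis/findByThat.
interface Repository<T> {
    List<T> matching(Specification<T> spec);
}

// Minimal domain class for the example.
class Invoice {
    private final boolean overdue;
    Invoice(boolean overdue) { this.overdue = overdue; }
    boolean isOverdue() { return overdue; }
}

// A domain-defined specification; the repository knows nothing of its details.
class OverdueInvoices implements Specification<Invoice> {
    @Override
    public boolean test(Invoice invoice) {
        return invoice.isOverdue();
    }
}
```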
I have been practicing DDD for a while now with the four distinct layers: Domain, Presentation, Application, and Infrastructure. Recently, I introduced a friend of mine to the DDD concept and he thought it introduced an unnecessary layer of complexity (specifically targeting interfaces and IoC). Usually, it's at this point that I explain the benefits of DDD, especially its modularity. All the heavy lifting and under-the-hood stuff lives in the Infrastructure, and if I wanted to completely change the underlying data-access method, I could do so by touching only the Infrastructure layer repository.
My friend's argument is that he could build a three tiered application in the same way:
Business
Data
Presentation
He would create business models (like domain models) and have the repositories in the Data layer return those business models. Then he would call the business layer, which would call the data layer. I told him the problem with that approach is that it is not testable. Sure, you can write integration tests, but you can't write true unit tests. Can you see any other problems with his proposed 3-tiered approach? (I know there are, because why would DDD exist otherwise?)
EDIT: He is not using IoC. The layers in his example depend directly on one another.
I think you're comparing apples and oranges. Nothing about N-Tier prohibits it from utilizing interfaces & DI in order to be easily unit-tested. Likewise, DDD can be done with static classes and hard dependencies.
Furthermore, if he's implementing business objects and using Repositories, it sounds like he IS doing DDD, and you are quibbling over little more than semantics.
Are you sure the issue isn't simply over using DI/IoC or not?
I think you are mixing a few methodologies up. DDD is Domain-Driven Design and is about making the business domain a part of your code. What you are describing sounds more like the Onion Architecture versus a 'normal' 3-layered approach. There is nothing wrong with using a 3-layered architecture with DDD. DDD pairs naturally with TDD (Test-Driven Development), and interfaces help with TDD, since they make it easier to test each class in isolation. If you use Dependency Injection (and IoC), testing becomes even easier.
The Onion Architecture is about making the Domain (a.k.a. the business rules) independent of everything else, i.e. it is the core of the application, with everything depending on the business objects and rules, while things related to infrastructure, UI and so on live in the outer layers. The idea is that the closer a module is to the 'shell of the onion', the easier it is to exchange it for a new implementation.
Hope this clears it up a bit - now with a minor edit!
Read "Fundamentals of Software Architecture: An Engineering Approach", Chapter 8, Page 100 to 107.
The top-level partitioning is of particular interest to architects because it defines the fundamental architecture style and way of partitioning code. It is one of the first decisions an architect must make. These two styles (DDD & Layered) represent different ways to top-level partition the architecture. So, you are not comparing apples and oranges here.
Architects using technical partitioning organize the components of the system by technical capabilities: presentation, business rules, persistence, and so on.
Domain partitioning, inspired by the Eric Evan book Domain-Driven Design, which is a modeling technique for decomposing complex software systems. In DDD, the architect identifies domains or workflows independent and decoupled from each other.
The domain partitioning (DDD) may use a persistence library and have a separate layer for business rules, but the top-level partitioning revolves around domains. Each component in the domain partitioning may have subcomponents, including layers, but the top-level partitioning focuses on domains, which better reflects the kinds of changes that most often occur on projects.
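To illustrate the two top-level structures side by side (package names are invented):

```java
// Domain-partitioned top level: each domain owns its own layers.
//   com.example.catalog.presentation
//   com.example.catalog.business
//   com.example.catalog.persistence
//   com.example.checkout.presentation
//   com.example.checkout.business
//   com.example.checkout.persistence
//
// Technically partitioned top level: each layer holds a slice of every domain.
//   com.example.presentation.catalog
//   com.example.presentation.checkout
//   com.example.business.catalog
//   com.example.business.checkout
//   com.example.persistence.catalog
//   com.example.persistence.checkout
```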
So you can implement layers inside each component of a domain-partitioned design (your friend is doing the opposite, which is interesting, and we might try that out as well).
However, please note that ("Fundamentals of Software Architecture: An Engineering Approach", Page 135)
The layered architecture is a technically partitioned architecture (as opposed to a domain-partitioned architecture). Groups of components, rather than being grouped by domain (such as customer), are grouped by their technical role in the architecture (such as presentation or business). As a result, any particular business domain is spread throughout all of the layers of the architecture. For example, the domain of “customer” is contained in the presentation layer, business layer, rules layer, services layer, and database layer, making it difficult to apply changes to that domain. As a result, a domain-driven design approach does not work as well with the layered architecture style.
Everything in architecture is a trade-off, which is why the famous answer to every architecture question in the universe is “it depends.” That being said, the disadvantage of your friend's approach is that it has higher coupling at the data level. Moreover, it will create difficulties in untangling the data relationships if the architects later want to migrate this architecture to a distributed system (e.g. microservices).
N-tier, or in this case 3-tier, architecture works great with unit tests.
All you have to do is apply IoC (inversion of control) using dependency injection and the repository pattern.
The business layer can validate and prepare the returned data for the presentation / web API layer, returning exactly the data that is required.
You can then use mocks in your unit tests all the way through the layers.
All your business logic can live in the BL layer, and the DAL layer will contain repositories injected from the higher level.
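A minimal sketch with JUnit 5 and Mockito (the service and repository names are hypothetical), showing the business layer unit-tested against a mocked DAL repository:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.List;
import org.junit.jupiter.api.Test;

// DAL contract injected into the business layer.
interface ProductRepository {
    List<String> findAllNames();
}

// Business layer: validates and shapes data for the presentation / web API layer.
class ProductService {
    private final ProductRepository repository;

    ProductService(ProductRepository repository) {
        this.repository = repository;
    }

    List<String> productNamesForDisplay() {
        return repository.findAllNames().stream()
                .filter(name -> !name.isBlank()) // trivial "business" validation
                .toList();
    }
}

// Unit test: the business layer is exercised with the DAL mocked out entirely.
class ProductServiceTest {
    @Test
    void filtersBlankNames() {
        ProductRepository mockRepo = mock(ProductRepository.class);
        when(mockRepo.findAllNames()).thenReturn(List.of("Keyboard", " ", "Mouse"));

        ProductService service = new ProductService(mockRepo);

        assertEquals(List.of("Keyboard", "Mouse"), service.productNamesForDisplay());
    }
}
```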