The published language of an open-host service (OHS) can be seen as an integration-oriented model that simplifies the public interface offered to the OHS's consumers. The published language is integration-optimized: it exposes data in a model that is more convenient for consumers and designed specifically for integration needs.
Since we promote the idea that each bounded context has its own internal (canonical data) model, the OHS in effect decouples the bounded context's internal model from the model used for integration with other bounded contexts. The bounded context's internal model can therefore evolve without impacting the consumers of the OHS.
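As a rough illustration (all type names here are invented, not taken from any particular system), the published language can be a set of integration-oriented DTOs onto which the OHS maps its internal model, leaving the internal model free to change:

using System;

// Internal model of the bounded context -- free to evolve.
public class Shipment
{
    public Guid Id { get; set; }
    public string CarrierCode { get; set; }
    public DateTime? DispatchedAtUtc { get; set; }
}

// Published language: a stable, integration-oriented contract for consumers.
public class ShipmentStatusDto
{
    public Guid ShipmentId { get; set; }
    public string Status { get; set; }
}

// The open-host service translates the internal model into the published language.
public class ShipmentOpenHostService
{
    public ShipmentStatusDto GetStatus(Shipment shipment)
    {
        return new ShipmentStatusDto
        {
            ShipmentId = shipment.Id,
            Status = shipment.DispatchedAtUtc.HasValue ? "Dispatched" : "Pending"
        };
    }
}

Renaming CarrierCode or adding fields to Shipment would not affect consumers, as long as the mapping keeps producing the same ShipmentStatusDto.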
Now say we want to design the integration model of the OHS, or even of multiple OHSs. Don't we arrive back at the old-school canonical data model we used for integration in the ESB/SOA era? One could even argue that designing the integration models for the public OHSs leads to the concept of the interchange context: a separate bounded context mainly in charge of transforming models for more convenient consumption by other components.
So the question is: aren't we going back to the 'old school' canonical data model of the SOA/ESB era, with the interchange context as a separate bounded context in charge of transforming models for more convenient consumption by other components? If not, what is the difference?
Related
Vaughn Vernon and Vlad Khononov speak about the 'Interface or Interchange Bounded Context' as a separate context for the system's interface layer. This Interface Bounded Context has its own integration model and language (Open-Host Service and Published Language) and is decoupled from the Domain Model Bounded Context and its Ubiquitous Language. This decoupling allows the Domain Model to change freely while keeping a stable, customer-friendly API in the interface layer.
Since we are discussing two Bounded Contexts here (Interface and Domain), the question arises: what is the integration pattern between the two? Is the 'Shared Kernel' pattern the appropriate one, since there is overlap and mapping between the two? Furthermore, these two contexts are maintained by one team. What do you think?
While studying DDD I'm wondering why the Domain model needs to define interfaces for the Infrastructure layer.
From my reading I gathered that a higher level (domain model, application service, domain service) defines interfaces that need to be implemented by a lower layer (infrastructure). Simple.
From this idea I think it makes sense that the application level (a higher level) defines interfaces for a lower one (infrastructure), because the application level will use infrastructure components (a repository is a typical example) but doesn't want to be tied to any particular implementation.
However, it confuses me when some books have the domain level define infrastructure interfaces, because a domain model will never use a repository directly; we want to keep our domain model "pure".
Am I missing something?
While studying DDD I'm wondering why the Domain model needs to define interfaces for the Infrastructure layer.
It doesn't really -- that's part of the point.
The domain model defines the interfaces/contracts that it needs to do its work, with the promise of happily working with any implementation that conforms to the contract.
So you can choose to implement the interface in your application component, or in the infrastructure component, or wherever makes sense.
Note the shift in language from "layer" to "component". Layers may be too simplistic to work -- see Udi Dahan 2007.
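A minimal sketch of that idea (the interface and class names are made up for illustration): the domain declares the contract it needs and happily works with anything that fulfils it, while the implementation can live in whichever component makes sense:

// Declared in the domain model: the contract the domain needs to do its work.
public interface IExchangeRateProvider
{
    decimal GetRate(string fromCurrency, string toCurrency);
}

// Domain logic depends only on the contract.
public class PriceCalculator
{
    private readonly IExchangeRateProvider _rates;

    public PriceCalculator(IExchangeRateProvider rates) => _rates = rates;

    public decimal Convert(decimal amount, string from, string to)
        => amount * _rates.GetRate(from, to);
}

// Implemented wherever it makes sense -- here, in the infrastructure component.
public class FixedExchangeRateProvider : IExchangeRateProvider
{
    public decimal GetRate(string fromCurrency, string toCurrency)
    {
        // A real implementation might call an external service or read a cache.
        return 1.0m;
    }
}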
I came across the same question myself. From my understanding, the reason behind this is that you want the application to only consume objects/interfaces defined in the Domain Model. It also helps to keep one repository per aggregate since they sit next to each other in the same layer.
The fact that the Application layer has a reference to the Infrastructure layer is only for the purpose of dependency injection. Your Application layer services should call the interfaces exposed in the Domain layer, get domain objects (POCOs) back, do their work, and possibly call interfaces again, for example to commit a transaction. This is why, for example, the Unit of Work pattern exposes its actions through a Domain layer interface.
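For example, under the assumptions above (all names are hypothetical), the application service works only against interfaces declared in the Domain layer, including the unit of work, and the concrete implementations are injected from the Infrastructure layer:

using System;

// Both interfaces are declared in the Domain layer.
public interface IOrderRepository
{
    Order GetById(Guid id);
}

public interface IUnitOfWork
{
    void Commit();
}

// A plain domain object (POCO) with behavior.
public class Order
{
    public Guid Id { get; private set; }
    public bool Shipped { get; private set; }

    public void Ship() => Shipped = true;
}

// Application layer service: orchestrates domain objects through domain-declared interfaces.
public class ShipOrderService
{
    private readonly IOrderRepository _orders;
    private readonly IUnitOfWork _unitOfWork;

    public ShipOrderService(IOrderRepository orders, IUnitOfWork unitOfWork)
    {
        _orders = orders;
        _unitOfWork = unitOfWork;
    }

    public void Ship(Guid orderId)
    {
        var order = _orders.GetById(orderId); // domain object comes back as a POCO
        order.Ship();                         // domain behavior
        _unitOfWork.Commit();                 // infrastructure performs the actual commit
    }
}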
When the application uses a Repository (which is an Infrastructure service), the Repository interface is defined within the Domain layer while its implementation is defined within the Infrastructure layer, so that the Domain model has no outgoing dependencies.
a) If we're 100% certain that a particular (non-Repository) Infrastructure service InfServ won't ever be called by any code within the Domain layer, then I assume InfServ's interface doesn't need to be defined within the Domain layer?
Assuming InfServ won't be called by code within the Domain layer (and as such we don't need to define its interface within the Domain layer):
a) If InfServ will be called by the Application layer's services, I assume the Application layer should have a dependency on the Infrastructure layer and not the other way around? In other words, both InfServ's interface and its implementation should be defined within the Infrastructure layer?
b) And the reason why it is better for the Application layer to depend on the Infrastructure layer, and not the other way around, is that the Infrastructure layer can be reused by many applications, while the Application layer is in most cases specific to a single application and usually can't be reused by other applications?
c) If we are 100% certain that no code within the Domain layer will ever use a Repository, then we wouldn't need to define its interface within the Domain layer?
UPDATE:
1)
Yes, however, the definition of domain layer can include application
services which act as a facade over the domain. The application
service usually references a repository and other infrastructural
services. It orchestrates domain objects and said services to implement
use cases.
a)
... which act as a facade over the domain
Couldn't we argue that "regular" Application Services (those I was referring to in my question) also act as a facade over the domain?
b)
The application service usually references a repository and other
infrastructural services.
From your reply it almost sounds as if you're suggesting (though you're probably not) that "regular" Application services don't normally reference Infrastructure services, which, as far as I know, they in fact do?!
2)
a)
I usually merge application services, as described above, into the
domain layer.
"Regular" Application layer is also part of a BLL layer, so couldn't we argue that it is merged with Domain layer ( it actually sits on top of Domain layer )?!
b)
I tend to adhere to the Hexagonal architecture style ...
It appears that Hexagonal architecture doesn't have an explicit concept of an Application layer (i.e. of Application Services)?
c)
Part of the benefit of declaring a repository interface in the domain
layer is that it specifies the data access requirements of the domain.
So we should include the Repository interface in the Domain layer even if our domain code won't use it?
SECOND UPDATE:
2a)
If what you call a "regular" application service interacts with the
domain, I think it is acceptable to make it part of the domain
"layer". However, the domain should not directly depend on the
surrounding application service and so it is possible to split them up
into separate modules if desired.
I'm not sure whether you're implying that the design of the Application layer in a traditional layered architecture is any different from when you merge the Application layer with the Domain layer using, for example, the Onion architecture?
I'd say that, at least as far as modules are concerned, the two shouldn't be different, since in both cases we can separate the Application and Domain layer modules? (Though I must confess I skipped the chapter on Modules in Evans' book, since I didn't think I'd need that knowledge so early into learning DDD :O)
2b)
Yes because it can be contrasted with a layered architecture. A strict
layered architecture does not gel with the idea of declaring
repository interfaces in domain layer and implementing in
infrastructure layer. This is because, on the one hand, you have a
dependency in one direction, but in terms of deployment the dependency
is in the other. Hexagonal addresses these concerns by placing the
domain at the center. Take a look at the onion architecture - it is
essentially hexagonal but may be easier to grasp.
I don't yet know the MVC pattern or ASP.NET MVC, but regardless, from reading the first three parts of the series (which confused me to the point that I stopped reading it), it appears:
a) From the Onion article:
Each layer is coupled to the layers below it, and each layer is often
coupled to various infrastructure concerns.
The author is implying that in a traditional layered architecture (TLA) the Domain layer is coupled to the Infrastructure layer, which certainly isn't true, since we normally define Infrastructure interfaces (such as the Repository interface) within the Domain layer?!
b) And if, when using TLA, we decide to define Infrastructure interfaces in the Application layer, then the Application layer also isn't coupled to the Infrastructure layer?!
c) With the Onion architecture, isn't the Infrastructure layer coupled to the Application Core (which includes the Application layer), since Infrastructure interfaces are defined in the Application Core?
d) If yes to c), isn't it better to couple the Application layer to the Infrastructure layer (for the reasons I gave in my original question; here I'm assuming the Domain layer won't be calling Infrastructure services)?
4)
From the Onion article:
The first layer around the Domain Model is typically where we would
find interfaces that provide object saving and retrieving behavior,
called repository interfaces.
From the Onion article:
The controller only depends on interfaces, which are defined in the
application core. Remember that all dependencies are toward the
center.
It appears the author is implying that, since dependencies point only inwards and Infrastructure interfaces are defined around the Domain Model, code within the Domain Model shouldn't reference these interfaces. In other words, we shouldn't pass a Repository reference as an argument to a method of a Domain entity (which, as you yourself have said, is allowed):
class Foo
{
    // ...

    public int DoSomething(IRepository repo)
    {
        // ...
        var info = repo.Get...;
        // ...
    }
}
For the reasons stated above, I must confess that I fail to see the benefits of using the Onion architecture, or even how it differs from TLA (assuming all Infrastructure interfaces are defined within the Domain layer). In other words, couldn't we depict TLA using the Onion architecture diagram?!
FINAL UPDATE:
2)
b)
Yes. In TLA the domain layer would communicate directly with
infrastructure in terms of classes declared by infrastructure layer
not the other way around.
Just to be sure – you're saying that with TLA the Infrastructure interfaces would be defined in the Infrastructure layer (I know that with the Hexagonal/Onion architecture they are defined in the application core)?
d)
The coupling goes both ways. The application core depends on
implementations in infrastructure and infrastructure depend on
interfaces declared in application core.
My point is that since with the Onion architecture the InfServ interface is declared within the Application layer (the assumption here is that InfServ is never called by the Domain layer, and as such we decide not to define the InfServ interface within the Domain layer – see the original question 1a), the Application layer controls the InfServ interface. But I'd argue it would be better if the Infrastructure layer controlled the InfServ interface, for the reasons stated in the original question 2b?!
4)
It appears the author is implying that since dependencies are only inwards
and since Infrastructure interfaces are defined around Domain Model,
that code within Domain Model shouldn't reference these interfaces.
In my opinion, your code sample is appropriate code.
So I was correct in saying that the Onion architecture doesn't "allow" the Domain Model to reference Infrastructure interfaces, since they are defined in the layer surrounding the DM (call it InDe; InDe of course also resides within the application core), and referencing them from the DM would mean that dependencies go outwards, from the DM to InDe?
DDD is typically presented in a hexagonal/onion architectural style
which is why there may be some confusion. I think what you've probably
been doing is already hexagonal which is why it seems like it is the
same thing.
Yeah, I got that impression too. Though I do plan to look a bit deeper into the Hexagonal architecture (especially since the new DDD book will base some examples on it) after I finish reading Evans' book.
FOURTH UPDATE:
2)
d)
If infrastructure owned interface and implementation then the domain
or application layer would be responsible for implementing persistence
for itself.
I assume the Application or Domain layer would need to implement it for itself because referencing interfaces defined in the Infrastructure layer would violate Onion's rule that inner layers must not have dependencies on outer layers (the Infrastructure layer being an outer layer)?
4)
Domain model can reference infrastructure interfaces, such as a
repository, because they are declared together. If application layer
is split from the domain, as it is in the Onion diagram, then the domain layer
can avoid referencing interfaces because they can be defined in the
application layer.
But according to that article, Infrastructure interfaces are declared in a layer surrounding the Domain layer, which would mean that layer is closer to the outer edge of the application core than the Domain layer is – and, as the article pointed out, inner layers shouldn't have dependencies on outer layers!
thank you
1a) Yes, however the definition of the domain layer can include application services which act as a facade over the domain. The application service usually references a repository and other infrastructural services. It orchestrates domain objects and said services to implement use cases.
2a,b)
I usually merge application services, as described above, into the domain layer. I tend to adhere to the Hexagonal architecture style, also called ports and adapters. The domain is at the center and is encapsulated by the application service and all external connections, including repositories, UI, and services act as ports.
2c) Part of the benefit of declaring a repository interface in the domain layer is that it specifies the data access requirements of the domain.
UPDATED
1a) I didn't intend for the application services that I mention to be distinguished from "regular" application services. If it is a facade over the domain then the rules apply.
1b) There may be services that can still be called application services, but aren't directly related to the domain and I wanted to exclude those.
2a) If what you call a "regular" application service interacts with the domain, I think it is acceptable to make it part of the domain "layer". However, the domain should not directly depend on the surrounding application service and so it is possible to split them up into separate modules if desired.
2b) Yes because it can be contrasted with a layered architecture. A strict layered architecture does not gel with the idea of declaring repository interfaces in domain layer and implementing in infrastructure layer. This is because on the one hand you have a dependency in one direction, but in terms of deployment the dependency is in the other. Hexagonal addresses these concerns by placing the domain at the center. Take a look at the onion architecture - it is essentially hexagonal but may be easier to grasp.
2c) Yes, but usually the repository interface would be referenced by an application service.
UPDATE 2
2a) They could be implemented differently, but the general responsibilities are the same. The main difference is in the dependency graph. While you can separate application services into their own modules with either architecture, the onion/hexagonal architecture emphasizes the use of interfaces to declare dependencies on infrastructure.
2ba) Yes, and this is in fact a characteristic of the onion architecture.
b) Yes. In TLA the domain layer would communicate directly with infrastructure, in terms of classes declared by the infrastructure layer, not the other way around.
c) Yes, infrastructure implements interfaces declared by domain layer.
d) The coupling goes both ways. The application core depends on implementations in infrastructure and infrastructure depends on interfaces declared in application core.
4) In my opinion, your code sample is appropriate code. There are cases where an entity needs access to a repository to perform some action. However, to improve the coupling characteristics of this code, it would be better to define a specific interface that declares only the functionality required by the entity, or, if lambdas are available, even better to pass a lambda. The repository could then implement that interface, and the application service would pass the repository to the entity when invoking the given behavior. This way the entity does not depend on a general repository interface; instead it depends on a very specific role.
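A sketch of that suggestion, with invented names: the entity depends on a narrow, role-specific interface (or a delegate) rather than on a general repository interface, and the repository in infrastructure implements that role:

using System;

// A narrow role interface declaring only what the entity actually needs.
public interface IDiscountRateLookup
{
    decimal GetDiscountRate(Guid customerId);
}

public class Order
{
    public Guid CustomerId { get; set; }
    public decimal Subtotal { get; set; }

    // The entity depends on the specific role, not on a general repository interface.
    public decimal CalculateTotal(IDiscountRateLookup discounts)
        => Subtotal * (1 - discounts.GetDiscountRate(CustomerId));

    // The lambda variant mentioned above: no interface needed at all.
    public decimal CalculateTotal(Func<Guid, decimal> getDiscountRate)
        => Subtotal * (1 - getDiscountRate(CustomerId));
}

// The repository (in infrastructure) can implement the role interface;
// the application service passes it to the entity when invoking the behavior.
public class CustomerRepository : IDiscountRateLookup
{
    public decimal GetDiscountRate(Guid customerId)
    {
        // A real implementation would query the database; 0.1m is a placeholder.
        return 0.1m;
    }
}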
DDD is typically presented in a hexagonal/onion architectural style which is why there may be some confusion. I think what you've probably been doing is already hexagonal which is why it seems like it is the same thing.
UPDATE 3
2b) In TLA there would be no interfaces, or not the same kind. The domain would communicate directly with infrastructure, such as a persistence framework, and so it would be responsible for persistence.
2d) If infrastructure owned the interface and the implementation, then the domain or application layer would be responsible for implementing persistence for itself. In hexagonal/onion, the implementation of persistence is part of infrastructure - it "adapts" the abstract domain to the database.
4) The domain model can reference infrastructure interfaces, such as a repository, because they are declared together. If the application layer is split from the domain, as it is in the Onion diagram, then the domain layer can avoid referencing interfaces because they can be defined in the application layer.
UPDATE 4
2d) No, that statement applies to a layered architecture with a hierarchy like UI -> Business Logic -> Data Access. The business logic layer depends on the data access layer and has to implement its data access based on the object model of the data access framework. The data access framework itself doesn't know anything about the business layer.
4) The article specifies only one possible architecture. There are acceptable variations.
The only benefit I see in placing repository interfaces in the Application layer instead of the Domain is if you want to reuse the Domain DLL in another application where you would completely redefine the way you query your collections of domain entities - in other words, where you need completely different repositories.
However, there are two major obstacles to that approach:
Domain layer Services. They represent processes that are at the crossroads of many entities, and thus need multiple references to do their job. It's a common and convenient thing for them to get these references by asking Repositories.
You can find an example of this in the RoutingService here.
Also, there are special cases where some entities could need the help of a repository to find a particular other entity. For instance, if you have a hierarchy of aggregate roots with parent/child relationships between them, one particular instance could say "I need the list of all my ancestors/descendants that satisfy these criteria". A repository seems like the best fit for that kind of query.
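A rough sketch of that last case (all names are invented for illustration): an aggregate in a parent/child hierarchy asks a repository, handed to it by the caller, for its descendants and filters them against its own criteria:

using System;
using System.Collections.Generic;

// Repository interface declared in the Domain layer.
public interface ICategoryRepository
{
    IEnumerable<Category> FindDescendantsOf(Guid categoryId);
}

// An aggregate root in a parent/child hierarchy that needs a repository
// to answer "give me all my descendants that satisfy these criteria".
public class Category
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public bool IsActive { get; set; }

    public IEnumerable<Category> FindActiveDescendants(ICategoryRepository repository)
    {
        foreach (var descendant in repository.FindDescendantsOf(Id))
        {
            if (descendant.IsActive)
                yield return descendant;
        }
    }
}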
I'm trying to figure out the best way to build an easily maintainable and testable architecture. Having gone through several projects, I've seen some pretty bad architectures and I want to avoid making future mistakes on my own projects.
Let's say I'm building a fairly complex three-layer application and I want to use DDD. My question is, where should I place my business logic? Some people say it should be placed in services (the service layer), and that does make sense. Having a number of services that adhere to the Single Responsibility Principle makes sense.
However, some people say that this is an anti-pattern and that business logic shouldn't be implemented in the service layer. Why is this?
Let's say we have IAuthenticationService which has a method with bool UsernameAvailable(string username) signature. The method would implement all required logic to check whether the username is available or not.
What is the problem here according to the "this is an antipattern" crowd?
If you put all your business logic in an (implicitly stateless) service layer you're writing procedural code. By decoupling behavior from data, you're giving up on writing object-oriented code.
That's not always bad: it's simple, and if you have simple business logic there's no reason to invest in a full-fledged object-oriented domain model.
The more complex the business logic (and the larger the domain), the faster procedural code turns into spaghetti code: procedures start calling each other with different pre- and post-conditions (in incompatible order) and they begin to require ever-growing state objects.
Martin Fowler's article on Anemic Domain Models is probably the best starting point for understanding why (and under what conditions) people object to putting business logic in a service layer.
A service layer in itself is not an anti-pattern, it is a very reasonable place to put certain elements of your business logic. However, you do need to apply discretion to the design of the service layer, ensuring that you aren't stealing business logic from your domain model and the objects that comprise it.
By doing that you can end up with a true anti-pattern, an anaemic domain model.
This is discussed in-depth by Martin Fowler here.
Your example of an IAuthenticationService isn't perhaps the best for discussing the problem - much of the logic around authentication can be seen as living in a service and not really associated with domain objects. A better example might be if you had some sort of IUserValidationService to validate a user, or even worse a service that does something like process orders - the validation service is stripping logic out of the user object and the order processing service is taking logic away from your order objects, and possibly also from objects representing customers, delivery notices etc...
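To make the contrast concrete (the types and the rule are invented for illustration): a validation service pulls the rule out of the user object, whereas a richer model keeps the invariant inside it:

using System;

// Anemic style: the user is a bag of data and the rule lives in a separate service.
public class AnemicUser
{
    public string Email { get; set; }
}

public class UserValidationService
{
    public bool IsValid(AnemicUser user)
        => !string.IsNullOrWhiteSpace(user.Email) && user.Email.Contains("@");
}

// Richer style: the User enforces its own invariant, so it can never exist in an invalid state.
public class User
{
    public string Email { get; private set; }

    public User(string email)
    {
        if (string.IsNullOrWhiteSpace(email) || !email.Contains("@"))
            throw new ArgumentException("A valid email address is required.", nameof(email));
        Email = email;
    }
}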
With DDD you typically have four layers: Presentation, Application, Domain, and Infrastructure.
The Presentation layer presents information to the user and interprets user commands.
All use-case-dependent logic (application entities, application workflow components, e.g. DTOs and application services) goes into the Application layer (application logic). This layer doesn't contain any business logic and doesn't hold the state of business objects, though it can keep the state of an application task's progress.
All logic that is invariant across use cases (business entities, business workflow components, e.g. the Domain model and Domain services) goes into the Domain layer (domain logic). This layer is responsible for the concepts of the business domain and the business rules.
The Infrastructure layer may contain IoC, caching, repositories, ORM, cryptography, logging, a search engine, etc.
I have been practicing DDD for a while now with the four distinct layers: Domain, Presentation, Application, and Infrastructure. Recently, I introduced a friend of mine to the DDD concept and he thought it introduced an unnecessary layer of complexity (specifically targeting interfaces and IoC). Usually it's at this point that I explain the benefits of DDD – especially its modularity. All the heavy lifting and under-the-hood stuff is in the Infrastructure layer, and if I wanted to completely change the underlying data-access method, I could do so by touching only the Infrastructure layer's repositories.
My friend's argument is that he could build a three-tiered application in the same way:
Business
Data
Presentation
He would create business models (like domain models) and have the repositories in the Data layer return those business models. Then he would call the business layer, which calls the data layer. I told him the problem with that approach is that it is not testable. Sure, you can write integration tests, but you can't write true unit tests. Can you see any other problems with his proposed 3-tiered approach? (I assume there are, because why would DDD exist otherwise?)
EDIT: He is not using IoC. The layers in his example depend directly on one another.
I think you're comparing apples and oranges. Nothing about N-Tier prohibits it from utilizing interfaces & DI in order to be easily unit-tested. Likewise, DDD can be done with static classes and hard dependencies.
Furthermore, if he's implementing business objects and using Repositories, it sounds like he IS doing DDD, and you are quibbling over little more than semantics.
Are you sure the issue isn't simply over using DI/IoC or not?
I think you are mixing a few methodologies up. DDD is Domain-Driven Design and is about making the business domain a part of your code. What you are describing sounds more like the Onion Architecture (link) versus a 'normal' 3-layered approach. There is nothing wrong with using a 3-layered architecture with DDD. DDD goes hand in hand with TDD (Test-Driven Development). Interfaces help with TDD since they make it easier to test each class in isolation; using Dependency Injection (and IoC) makes it easier still.
The Onion Architecture is about making the Domain (a.k.a. the business rules) independent of everything else - i.e. it's the core of the application, with everything depending on the business objects and rules, while things related to infrastructure, UI and so on are in the outer layers. The idea is that the closer a module is to the 'shell of the onion', the easier it is to exchange for a new implementation.
Hope this clears it up a bit - now with a minor edit!
Read "Fundamentals of Software Architecture: An Engineering Approach", Chapter 8, Page 100 to 107.
The top-level partitioning is of particular interest to architects because it defines the fundamental architecture style and way of partitioning code. It is one of the first decisions an architect must make. These two styles (DDD & Layered) represent different ways to top-level partition the architecture. So, you are not comparing apples and oranges here.
Architects using technical partitioning organize the components of the system by technical capabilities: presentation, business rules, persistence, and so on.
Domain partitioning is inspired by Eric Evans' book Domain-Driven Design, which describes a modeling technique for decomposing complex software systems. In DDD, the architect identifies domains or workflows that are independent and decoupled from each other.
The domain-partitioned (DDD) system may use a persistence library and have a separate layer for business rules, but the top-level partitioning revolves around domains. Each component in the domain partitioning may have subcomponents, including layers, but the top-level partitioning focuses on domains, which better reflects the kinds of changes that most often occur on projects.
So you can implement layers inside each component of a DDD partitioning (your friend is doing the opposite, which is interesting, and we might try that out as well).
However, please note that ("Fundamentals of Software Architecture: An Engineering Approach", Page 135)
The layered architecture is a technically partitioned architecture (as
opposed to a domain-partitioned architecture). Groups of components,
rather than being grouped by domain (such as customer), are grouped by
their technical role in the architecture (such as presentation or
business). As a result, any particular business domain is spread
throughout all of the layers of the architecture. For example, the
domain of “customer” is contained in the presentation layer, business
layer, rules layer, services layer, and database layer, making it
difficult to apply changes to that domain. As a result, a
domain-driven design approach does not work as well with the layered
architecture style.
Everything in architecture is a trade-off, which is why the famous answer to every architecture question in the universe is "it depends." That being said, the disadvantage of your friend's approach is that it has higher coupling at the data level. Moreover, it will create difficulties in untangling the data relationships if the architects later want to migrate this architecture to a distributed system (e.g. microservices).
N-tier, or in this case 3-tier, architecture works great with unit tests.
All you have to do is apply IoC (inversion of control) using dependency injection and the repository pattern.
The business layer can validate and prepare the data returned to the presentation / Web API layer by returning exactly the data that is required.
You can then use mocks in your unit tests all the way through the layers.
All your business logic can live in the BL layer, and the DAL layer will contain repositories injected from a higher level.
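A minimal sketch of that setup (all names are made up, and the test uses a hand-rolled fake rather than a particular mocking library): the BL service depends on a repository interface, so a unit test can swap in a fake and never touch the database:

using System.Collections.Generic;
using System.Linq;

// DAL contract, injected into the business layer.
public interface ICustomerRepository
{
    IEnumerable<string> GetActiveCustomerNames();
}

// Business layer: shapes the data returned to the presentation / Web API layer.
public class CustomerService
{
    private readonly ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository) => _repository = repository;

    public IReadOnlyList<string> GetActiveCustomerNamesSorted()
        => _repository.GetActiveCustomerNames().OrderBy(n => n).ToList();
}

// Unit test with a hand-rolled fake -- no database needed.
public class FakeCustomerRepository : ICustomerRepository
{
    public IEnumerable<string> GetActiveCustomerNames() => new[] { "Zoe", "Adam" };
}

public static class CustomerServiceTests
{
    public static void SortedNamesAreReturned()
    {
        var service = new CustomerService(new FakeCustomerRepository());
        var names = service.GetActiveCustomerNamesSorted();

        if (names[0] != "Adam" || names[1] != "Zoe")
            throw new System.Exception("Expected names to be sorted alphabetically.");
    }
}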