DDD vs N-Tier (3-Tier) Architecture - domain-driven-design

I have been practicing DDD for a while now with the four distinct layers: Domain, Presentation, Application, and Infrastructure. Recently, I introduced a friend of mine to the DDD concept and he thought it introduced an unnecessary layer of complexity (specifically targeting interfaces and IoC). Usually it's at this point that I explain the benefits of DDD -- especially its modularity. All the heavy lifting and under-the-hood stuff is in the Infrastructure layer, and if I wanted to completely change the underlying data-access method, I could do so by touching only the Infrastructure layer repository.
My friend's argument is that he could build a three tiered application in the same way:
Business
Data
Presentation
He would create business models (like domain models) and have the repositories in the Data layer return those business models. Then he would call the business layer, which called the data layer. I told him the problem with that approach is that it is not testable. Sure, you can write integration tests, but you can't write true unit tests. Can you see any other problems with his proposed 3-tiered approach? (I know there are, because why would DDD exist otherwise?)
EDIT: He is not using IoC. The layers in his example depend directly on one another.

I think you're comparing apples and oranges. Nothing about N-Tier prohibits it from utilizing interfaces & DI in order to be easily unit-tested. Likewise, DDD can be done with static classes and hard dependencies.
Furthermore, if he's implementing business objects and using Repositories, it sounds like he IS doing DDD, and you are quibbling over little more than semantics.
Are you sure the issue isn't simply over using DI/IoC or not?
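To illustrate: the following sketch is equally at home in an "N-Tier" or a "DDD" codebase -- the business class only sees an abstraction, so it can be unit-tested either way. All names here are hypothetical, not taken from your project:

// Hypothetical C# sketch -- names are illustrative only.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The abstraction the business code depends on.
public interface ICustomerRepository
{
    Customer GetById(int id);
}

// Business/domain service: no reference to any concrete data-access technology,
// so it can be unit-tested with a fake or mock repository.
public class CustomerService
{
    private readonly ICustomerRepository _customers;

    public CustomerService(ICustomerRepository customers)
    {
        _customers = customers;
    }

    public string GetDisplayName(int id)
    {
        var customer = _customers.GetById(id);
        return customer != null ? customer.Name : "(unknown)";
    }
}

Whether you call the project containing ICustomerRepository "Domain" or "Business" is exactly the semantic distinction I mean.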

I think you are mixing a few methodologies up. DDD is Domain-Driven Design and is about making the business domain a part of your code. What you are describing sounds more like the Onion Architecture versus a "normal" 3-layered approach. There is nothing wrong with using a 3-layered architecture with DDD. DDD goes hand in hand with TDD (Test-Driven Development). Interfaces help with TDD because they make it easier to test each class in isolation; if you use Dependency Injection (and IoC) it becomes easier still.
The Onion Architecture is about making the Domain (a.k.a. business rules) independent of everything else -- i.e. it's the core of the application, with everything depending on the business objects and rules, while things related to infrastructure, UI and so on are in the outer layers. The idea is that the closer a module is to the "shell of the onion", the easier it is to exchange for a new implementation.
Hope this clears it up a bit - now with a minor edit!

Read "Fundamentals of Software Architecture: An Engineering Approach", Chapter 8, Page 100 to 107.
The top-level partitioning is of particular interest to architects because it defines the fundamental architecture style and way of partitioning code. It is one of the first decisions an architect must make. These two styles (DDD & Layered) represent different ways to top-level partition the architecture. So, you are not comparing apples and oranges here.
Architects using technical partitioning organize the components of the system by technical capabilities: presentation, business rules, persistence, and so on.
Domain partitioning is inspired by Eric Evans's book Domain-Driven Design, which is a modeling technique for decomposing complex software systems. In DDD, the architect identifies domains or workflows that are independent of and decoupled from each other.
A domain-partitioned (DDD) system may still use a persistence library and have a separate layer for business rules, but its top-level partitioning revolves around domains. Each domain component may have subcomponents, including layers, yet the top level is organized by domain, which better reflects the kinds of changes that most often occur on projects.
So you can implement layers on each component of DDD (your friend is doing the opposite, which is interesting and we might try that out as well).
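To make the two top-level partitioning styles concrete, here is an illustrative (and deliberately simplified) project layout; the folder names are made up for the example:

Top-level technical partitioning (layered):
  src/Presentation/
  src/BusinessRules/
  src/Services/
  src/Persistence/

Top-level domain partitioning (DDD-style), with layers as subcomponents:
  src/CustomerOnboarding/Presentation/
  src/CustomerOnboarding/Domain/
  src/CustomerOnboarding/Persistence/
  src/Payments/Presentation/
  src/Payments/Domain/
  src/Payments/Persistence/

A change to "Payments" in the second layout touches one top-level folder; in the first layout it is spread across all of them.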
However, please note ("Fundamentals of Software Architecture: An Engineering Approach", page 135):
The layered architecture is a technically partitioned architecture (as opposed to a domain-partitioned architecture). Groups of components, rather than being grouped by domain (such as customer), are grouped by their technical role in the architecture (such as presentation or business). As a result, any particular business domain is spread throughout all of the layers of the architecture. For example, the domain of "customer" is contained in the presentation layer, business layer, rules layer, services layer, and database layer, making it difficult to apply changes to that domain. As a result, a domain-driven design approach does not work as well with the layered architecture style.
Everything in architecture is a trade-off, which is why the famous answer to every architecture question in the universe is "it depends." That being said, the disadvantage of your friend's approach is that it has higher coupling at the data level. Moreover, it will create difficulties in untangling the data relationships if the architects later want to migrate this architecture to a distributed system (e.g. microservices).

N-tier -- in this case 3-tier -- architecture works great with unit tests.
All you have to do is apply IoC (inversion of control) using dependency injection and the repository pattern.
The business layer can validate and prepare the returned data for the presentation / web API layer by returning exactly the data that is required.
You can then use mocks in your unit tests all the way through the layers, as in the sketch below.
All your business logic can live in the BL (business logic) layer, and the DAL (data access layer) will contain repositories injected from the higher level.
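As a rough sketch of such a test -- Moq is used here purely as an example mocking library, and the Customer / ICustomerRepository / CustomerService types are hypothetical stand-ins for your BL and DAL contracts:

using Moq;
using Xunit;

// Assumed (hypothetical) shapes of the business-layer types under test.
public class Customer { public int Id { get; set; } public string Name { get; set; } }

public interface ICustomerRepository { Customer GetById(int id); }

public class CustomerService
{
    private readonly ICustomerRepository _customers;
    public CustomerService(ICustomerRepository customers) => _customers = customers;
    public string GetDisplayName(int id) => _customers.GetById(id)?.Name ?? "(unknown)";
}

public class CustomerServiceTests
{
    [Fact]
    public void GetDisplayName_returns_the_customer_name()
    {
        // Arrange: mock the repository so no database is touched.
        var repository = new Mock<ICustomerRepository>();
        repository.Setup(r => r.GetById(42))
                  .Returns(new Customer { Id = 42, Name = "Ada" });

        var service = new CustomerService(repository.Object);

        // Act + Assert
        Assert.Equal("Ada", service.GetDisplayName(42));
        repository.Verify(r => r.GetById(42), Times.Once);
    }
}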

Related

Correct Implementation Domain Driven Design

I have an important question about implementing Domain-Driven Design.
In the representation of the layered architecture in Evans's book, the Domain layer references the Infrastructure layer, the latter being the lowest layer. However, in all the implementations I see on the Internet it is the contrary: the infrastructure layer references the domain layer, maybe because the repository pattern is implemented using an ORM. What do you guys think about this? Does someone have an example implemented exactly as in Evans's book?
The examples that you see where interfaces lives in the domain (e.g. UserRepository) and their implementation lives in the infrastructure (e.g. HibernateUserRepository) are applying the Dependency Inversion Principle (DIP) to the traditional Layered Architecture.
In the traditional Layered Architecture, high-level modules depend on low-level modules. If we look at the standard layer order, we would have Domain -> Infrastructure.
Do we really want our domain to depend on infrastructure details? By applying the DIP principle, we invert the dependency and make Infrastructure depend on the Domain layer; however, it doesn't depend on concretions, but on abstractions.
Here's what the DIP principle states:
A. High-level modules should not depend on low-level modules. Both should depend on abstractions.
B. Abstractions should not depend on details. Details should depend on abstractions.
Source: http://en.wikipedia.org/wiki/Dependency_inversion_principle
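A minimal C# sketch of that inversion -- the UserRepository / HibernateUserRepository pair mentioned above is the same idea; all names below are illustrative, not from any particular codebase:

using System;
using System.Collections.Generic;

// Domain assembly: the abstraction lives with the domain.
namespace MyApp.Domain
{
    public class User
    {
        public Guid Id { get; set; }
        public string Email { get; set; }
    }

    public interface IUserRepository
    {
        User FindById(Guid id);
        void Add(User user);
    }
}

// Infrastructure assembly: references the Domain assembly, never the other way around.
namespace MyApp.Infrastructure
{
    using MyApp.Domain;

    // Illustrative implementation; a real one would wrap NHibernate or another ORM.
    public class InMemoryUserRepository : IUserRepository
    {
        private readonly Dictionary<Guid, User> _store = new Dictionary<Guid, User>();

        public User FindById(Guid id) => _store.TryGetValue(id, out var user) ? user : null;

        public void Add(User user) => _store[user.Id] = user;
    }
}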
Are you sure you are looking at dependencies correctly? Repositories are not exactly infrastructure; they sit between your domain and your data access layer.
As stated in Fowler's book, a repository
Mediates between the domain and data mapping layers using a collection-like interface for accessing domain objects.
If you are using a database or, as you noted, an ORM, your repository implementations will reference the ORM. Quite often a repository is implemented as a generic class; the only way it then "references" the domain is by using the base "entity" class as the generic constraint.
Very often data-retrieval queries "by this" and "by that" are implemented directly in repository classes, but this is not a very good practice. That work belongs to queries, or specifications, which a repository should be able to execute without knowing much about their details.
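To illustrate, here is a rough sketch of a generic repository whose only link to the domain is the entity base class used as the generic constraint, plus a specification it can execute without knowing the query's details. Every type here is made up for the example:

using System;
using System.Collections.Generic;
using System.Linq;

// Base class used purely as the generic constraint.
public abstract class Entity
{
    public Guid Id { get; protected set; }
}

// The specification carries the query criteria; the repository stays generic.
public interface ISpecification<T> where T : Entity
{
    bool IsSatisfiedBy(T candidate);
}

public interface IRepository<T> where T : Entity
{
    void Add(T item);
    IEnumerable<T> FindAll(ISpecification<T> specification);
}

// Simplified in-memory implementation, just to show the shape.
public class InMemoryRepository<T> : IRepository<T> where T : Entity
{
    private readonly List<T> _items = new List<T>();

    public void Add(T item) => _items.Add(item);

    public IEnumerable<T> FindAll(ISpecification<T> specification) =>
        _items.Where(specification.IsSatisfiedBy).ToList();
}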

Do microservices break the bounded context?

I am a bit confused. I am working at a young banking company and we decided to implement a DDD architecture to break down complexity.
So, here is my question (it follows a design suggestion made by someone on the team). Let's say we have 3 different domains, D1, D2 and D3, that expose domain (web) services. Each domain manipulates strongly typed business entities that rely on the same tables. In front of these domains, we want a microservice to guarantee, in a centralized manner, that the data persisted in the tables is consistent. D1, D2 and D3 ask the microservice to persist data conforming to specific rules. We want the microservice to act as a CRUD proxy to the tables, providing specific DTOs to the D1, D2 and D3 domains and hiding the tables from them.
Does this approach sound good? Would you consider using microservices in a DDD architecture to manage CRUD and data consistency for one or more domains? Does "CRUDing" and validating data with a microservice break the bounded contexts? What are the best practices for dealing with microservices in a DDD architecture, if any?
Many thanks for your contribution,
[EDIT]
The following article helped me to refine my thoughts : http://martinfowler.com/bliki/MicroservicePremium.html
Microservices are useful in complex situations where monolithic systems have failed at staying maintainable. They are not good candidates for up-front design. On the other hand, DDD tries to tackle complexity at the very beginning of a project, so a successful DDD effort should rarely need to start out as a microservices implementation.
Things like validation, calculation, and persistence (CRUD) are all functions and not services. By centralizing these tasks to one service you are tightly coupling all the other services to it, and lose the main benefit of SOA. Making your services loosely coupled while retaining high cohesion should be your primary goal. I feel this design violates that.
You want to design your services as vertical slices of related business functions, rather than with centralized generic services and layers. A service should be composed of components that are deployed throughout your enterprise, from the database all the way to the UI. Also, each service should have its own database, or at least its own schema if that's not practical. As soon as you have services sharing a database, you become tightly coupled again.
On a final note, if one of your services needs to do a simple CRUD insert, then that's all it should be. No need to map it to a domain model first. Save DDD for the business processes that have real invariants that need to be enforced. It doesn't have to be an all or nothing approach. Use the tool that makes sense for each use case.
Microservices are not supposed to "break" Bounded Contexts, they are complementary since there's a natural correlation between microservice and BC boundaries.
we want a microservice to guarantee that data persisted in tables is consistent, in a centralized manner
Microservices are not meant for facading purposes, and they are all about decentralization, not centralization. They are modular units organized around business capabilities. They will usually act as gateways to your Bounded Contexts, in front of the Domain, not as proxies to a persistent store behind the Domain.
It seems you're trying to apply the wrong solution to your problem -- using microservices as convergence points to a data-centric monolith, which defeats their whole purpose.
Maybe you should elaborate on what you mean by "guarantee data persisted is consistent" and "persist data conforming to specific rules". It might just be implemented in the persistence layer or in services that are not microservices.

Understanding onion architecture

Onion Architecture Mockups
Above are two images that depict my understanding of Onion Architecture.
They are slightly different from the drawings found online, because they address an agenda that I cannot find an answer to.
Infrastructure, as far as I can tell, covers things like persistence, logging, etc. I have written examples of them in italics. However, a lot of the time infrastructure components, as well as the UI, need to communicate with one another: the UI might want to audit or log something, the persistence project may need to log something. Logging is one of the harder items to fit into onion architecture, and my understanding is that people have different opinions on where you should and shouldn't log.
In my first drawing, I have put an Infrastructure Interfaces layer in the diagram to allow cross communication without any one component knowing the implementation of another component. This is something that I have seen in a few examples.
The second drawing is my preference. It uses a mediator to cross-communicate between infrastructure and UI; it's basically a way to allow the core services to communicate with infrastructure indirectly (assume Service Interfaces is called Core Services in the right diagram). The logger would subscribe itself to certain events, as would the database, etc.
The first diagram allows only POCOs and interfaces in all layers except the outer layer (excluding the dependency resolver). The second allows domain and business logic in the core service layer and lets the infrastructure layers do their jobs in isolation.
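For concreteness, this is roughly the kind of mediator/event bus I have in mind for the second drawing -- a sketch with invented names, not production code:

using System;
using System.Collections.Generic;

// Core-owned abstraction: the core publishes events without knowing the subscribers.
public interface IEventBus
{
    void Subscribe<TEvent>(Action<TEvent> handler);
    void Publish<TEvent>(TEvent evt);
}

public class SimpleEventBus : IEventBus
{
    private readonly Dictionary<Type, List<Delegate>> _handlers = new Dictionary<Type, List<Delegate>>();

    public void Subscribe<TEvent>(Action<TEvent> handler)
    {
        if (!_handlers.TryGetValue(typeof(TEvent), out var list))
            _handlers[typeof(TEvent)] = list = new List<Delegate>();
        list.Add(handler);
    }

    public void Publish<TEvent>(TEvent evt)
    {
        if (_handlers.TryGetValue(typeof(TEvent), out var list))
            foreach (Action<TEvent> handler in list)
                handler(evt);
    }
}

// An infrastructure component (logging) subscribing without the core referencing it.
public class OrderPlaced { public string OrderNumber { get; set; } }

public class LoggingSubscriber
{
    public LoggingSubscriber(IEventBus bus) =>
        bus.Subscribe<OrderPlaced>(e => Console.WriteLine($"Order placed: {e.OrderNumber}"));
}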
I justified the infrastructure components by ensuring that each had an output of some sort: auditing and logging would usually use a DB of some sort, cache would usually store in memory, and "db" should probably have been called persistence. However, there is a library called AutoMapper. I have seen it wrapped in some instances so that its interface can go in the Core and be used by pretty much any infrastructure component, but that seems like over-abstraction to me. AutoMapper is kind of like the Events object in that all infrastructure components use it to translate between themselves and the domain, but I'm not sure it fits in that layer since it is not a service.
Question: Which of the two is closest to the definition of onion architecture, where would you fit a tool like AutoMapper, and do you think trying to wrap something like that is over-abstraction?
Thanks!
I've used AutoMapper and the Onion Architecture. We configured AutoMapper in the MVC Global.asax file, which typically calls a Config method in an AutoMapperConfig class in the App_Start directory.
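Roughly like this -- a sketch from memory, assuming the older static AutoMapper API (4.x-8.x) and made-up model/viewmodel types:

// Made-up types just for the example.
public class Customer { public string Name { get; set; } }
public class CustomerViewModel { public string Name { get; set; } }

// App_Start/AutoMapperConfig.cs
public static class AutoMapperConfig
{
    public static void Configure()
    {
        // Older static API; newer AutoMapper versions use MapperConfiguration instead.
        AutoMapper.Mapper.Initialize(cfg =>
        {
            cfg.CreateMap<Customer, CustomerViewModel>();
        });
    }
}

// Global.asax.cs
public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        AutoMapperConfig.Configure();
    }
}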
Regarding your graphics, it appears one of them has a separate layer for the Mediator and Observer patterns. They're not necessarily needed; it entirely depends on your approach, just as you can use the Model-View-Controller pattern in the Onion Architecture, or Model-View-Presenter, or Model-View-ViewModel. They're just separate patterns coupled in to incorporate some added benefit.
Here's where I first came across the Onion Architecture: Jeffrey Palermo. Have a look if you want to see a purer graphical representation.

DDD and data export system

I'm a beginner in DDD and am facing a little problem with architecture.
Our system must be able to export business data in various formats (Excel, Word, PDF and other more exotic formats).
In your opinion, which layer should be responsible for the overall process of retrieving the source data, exporting it in the target format and preparing the final result for the user? I'm unsure how to split the responsibility between the domain and application layers.
And regarding the export subsystem, should the implementations and their common interface contract belong to the infrastructure layer?
Neither the Application layer nor the Domain layer, nor any other "layer". DDD is not a layered architecture. Search for onion architecture or the ports and adapters pattern for more on this subject.
Now to the core of your problem. The issue you are facing is a separate bounded context and should go into a separate component of your system. Let's call it Reporting. And as it's just a presentation problem with no domain logic, DDD is not suitable for it. Just make some SQL views, read them using NHibernate, LINQ2SQL, EF or even plain DataReaders, and build your Word/whatever documents using a Builder pattern. No Aggregates, Repositories, Services or any other DDD building blocks.
You may want to go a bit further and have all data presentation in your application handled by a separate component. That's CQRS.
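For illustration, such a Reporting component could be as plain as the sketch below: ADO.NET against a hypothetical SQL view, feeding a flat read model. Every name here is invented, and the Word/Excel/PDF building would sit in a separate builder on top of these rows:

using System.Collections.Generic;
using System.Data.SqlClient;

// Flat read model shaped for the report -- not a domain object.
public class SalesRow
{
    public string Product { get; set; }
    public decimal Total { get; set; }
}

public class SalesReportReader
{
    private readonly string _connectionString;

    public SalesReportReader(string connectionString) => _connectionString = connectionString;

    public IEnumerable<SalesRow> ReadMonthlySales()
    {
        // Hypothetical SQL view prepared specifically for reporting.
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand("SELECT Product, Total FROM vw_MonthlySales", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    yield return new SalesRow
                    {
                        Product = reader.GetString(0),
                        Total = reader.GetDecimal(1)
                    };
            }
        }
    }
}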
I usually take the simple approach where possible.
The domain code implements the language of your domain - The nouns, verbs etc.
Application code creates poetry using this language.
So in your example the construction of the output is an application concern, while the construction of the bits the application talks about (which you don't mention) would be the domain concern.
E.g., if the report consists of sales data, then things like dates, bills and orders would be modeled abstractly in the domain, while the application would be concerned with producing documents from them.

DDD and implementing persistence

I am getting my feet wet with DDD (in .Net) for the first time, as I am re-architecting some core components of a legacy enterprise application.
Something I want to clear up is, how do we implement persistence in a proper DDD architecture?
I realize that the domain model itself is persistence-ignorant and should be designed using the "ubiquitous language", and certainly not forced into the constraints of the DAC of the month or even the physical database.
Am I correct that the Repository Interfaces live within the Domain assembly, but the Repository Implementations exist within the persistence layer? The persistence layer contains a reference to the Domain layer, never vice versa?
Where are my actual repository methods (CRUD) being called from?
Am I correct that the Repository Interfaces live within the Domain assembly, but the Repository Implementations exist within the persistence layer? The persistence layer contains a reference to the Domain layer, never vice versa?
Yes, this is a very good approach.
Where are my actual repository methods (CRUD) being called from?
It might be a good idea not to think in CRUD terms, because that is too data-centric and may lead you into the Generic Repository trap. A repository helps to manage the middle and end of life of domain objects; factories are often responsible for the beginning. Keep in mind that when an object is restored from the database it is in its mid-life stage from the DDD perspective. This is what the code can look like:
// beginning
Customer preferredCustomer = CustomerFactory.CreatePreferred();
customersRepository.Add(preferredCustomer);

// middle life
IList<Customer> valuedCustomers = customersRepository.FindPreferred();

// end of life
customersRepository.Archive(preferredCustomer);
You can call this code directly from your application. It may be worth downloading and looking at Evans's DDD sample. The Unit of Work pattern is usually employed to deal with transactions and to abstract away your ORM of choice.
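For the transaction side, a minimal Unit of Work abstraction might look like this sketch; the concrete implementation would wrap your ORM's session or context, and all names are illustrative:

using System;

// Abstraction the application layer sees; implementations wrap an ORM session,
// a DbContext, or a plain database transaction.
public interface IUnitOfWork : IDisposable
{
    void Commit();
    void Rollback();
}

// Typical usage from application code (customersRepository as in the snippet above;
// unitOfWorkFactory is a hypothetical factory):
// using (IUnitOfWork unitOfWork = unitOfWorkFactory.Create())
// {
//     Customer preferredCustomer = CustomerFactory.CreatePreferred();
//     customersRepository.Add(preferredCustomer);
//     unitOfWork.Commit();
// }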
Check out what Steve Bohlen has to say on the subject. The code for the presentation can be found here.
I was at the presentation and found the information on how to model repositories useful.
Am I correct that the Repository Interfaces live within the Domain assembly, but the Repository Implementations exist within the persistence layer? The persistence layer contains a reference to the Domain layer, never vice versa?
I disagree here. Let's say a system is composed of the following layers:
Presentation Layer (WinForms, WebForms, ASP.NET MVC, WPF, PHP, Qt, Java, iOS, Android, etc.)
Business Layer (sometimes called managers or services, logic goes here)
Resource Access Layer (manually or ORM)
Resource/Storage (RDBMS, NoSQL, etc.)
The assumption here is that the higher you go, the more volatile the layer is (the highest being presentation and the lowest being resource/storage). It is because of this that you don't want the resource access layer referencing the business layer; it is the other way around! The business layer references the resource access layer -- you call DOWN, not UP!
You put the interfaces/contracts in their own assembly instead; they have no purpose in the business layer at all.
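A small sketch of that arrangement -- contracts in their own assembly, with the business layer calling down through them. All names are invented:

// Contracts assembly: referenced by both Business and Resource Access.
namespace MyApp.Contracts
{
    public interface ICustomerStore
    {
        string GetCustomerName(int id);
    }
}

// Resource Access assembly: implements the contract and knows the storage details.
namespace MyApp.ResourceAccess
{
    using MyApp.Contracts;

    public class SqlCustomerStore : ICustomerStore
    {
        public string GetCustomerName(int id)
        {
            // A real implementation would query the database here.
            return "customer-" + id;
        }
    }
}

// Business assembly: references Contracts and calls DOWN into resource access through it.
namespace MyApp.Business
{
    using MyApp.Contracts;

    public class CustomerManager
    {
        private readonly ICustomerStore _store;

        public CustomerManager(ICustomerStore store) => _store = store;

        public string GetGreeting(int id) => "Hello, " + _store.GetCustomerName(id);
    }
}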
