Where logging should go in onion architecture with DDD - domain-driven-design

I am developing a console application using onion architecture and domain-driven design. I have two domains where I need to implement logging, and I am confused about where to place the logging component. Can I place it in the respective infrastructure of the two domains? Or in a shared kernel that can be referenced by both domains? If it belongs in the shared kernel, what structure should I follow, i.e. core, infrastructure?

Logging is a cross-cutting concern. Aspect-oriented programming aims to encapsulate cross-cutting concerns into aspects, which allows for clean isolation and reuse of the code addressing the concern.
You need to create a library for your logging classes, something like "MyProject.CrossCutting.Logging", and use aspect-oriented approaches to log events through this library.
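The aspect-oriented idea can be approximated without an AOP framework by wrapping a service in a logging decorator. A minimal sketch, assuming hypothetical `ILogger` and `ITransferService` types (none of these names come from the question):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical logging abstraction, e.g. in MyProject.CrossCutting.Logging.
public interface ILogger
{
    void Info(string message);
}

// In-memory implementation; a real one would write to a file, console, etc.
public sealed class InMemoryLogger : ILogger
{
    public List<string> Messages { get; } = new List<string>();
    public void Info(string message) => Messages.Add(message);
}

// Hypothetical domain service and its interface.
public interface ITransferService
{
    void Transfer(decimal amount);
}

public sealed class TransferService : ITransferService
{
    public void Transfer(decimal amount) { /* domain logic, no logging here */ }
}

// The "aspect": a decorator that logs around the real call.
public sealed class LoggingTransferService : ITransferService
{
    private readonly ITransferService _inner;
    private readonly ILogger _log;

    public LoggingTransferService(ITransferService inner, ILogger log)
    {
        _inner = inner;
        _log = log;
    }

    public void Transfer(decimal amount)
    {
        _log.Info($"Transfer started: {amount}");
        _inner.Transfer(amount);
        _log.Info("Transfer finished");
    }
}
```

The domain service stays free of logging code; the composition root decides whether to wrap it in the decorator.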

Logging is cross-cutting across all of your applications; it should be part of your framework. All of the layers of all of your application projects can depend on your framework, the same way they depend on the .NET Framework, Spring, etc. Your framework must have abstractions for cross-cutting concerns that you can easily rely on, and then the implementation just has to be referenced in the composition root of the application, which is in the infrastructure.

If you're following DDD and the Onion Architecture, it doesn't matter how many domains you have. Each domain can implement its own version of a logger if needed. More than likely, you will create a logging interface, and possibly a static implementation, kept in a common layer that can be called by any of the layers that need it. In the image that was shared previously, it would be kept in the cross-cutting layer. As previously mentioned, logging is a concern of all layers.


Can a Core project in Clean Architecture depend on a NuGet package?

I have a Core project where I need to do some cryptographic operations, e.g. verification of SHA256. What can I do if it's the Core project, which shouldn't depend on anything? Do I have to write my own cryptographic functions that are resistant to e.g. side-channel attacks? That causes security problems.
So what should I do? Can my Core project depend on a NuGet package if I use Clean Architecture?
The guideline regarding dependencies is to keep the core project as simple as possible so that most of its logic is about solving the business problem.
By keeping it simple, it's much easier to express which part of the business domain the classes solve. It's also easy to write focused tests that prove that the code can solve the correct part of the business problem.
To me, preventing attacks is not a part of that. It's something that should be done on inbound API calls before the domain is called. I would put that logic in application services. Those services can, of course, live in the Core project but not in any of the bounded contexts.
In Clean Architecture we try to keep the domain and application logic as independent from external libraries and frameworks as possible so that we do not depend on their future development.
Nevertheless the application logic will have to interact with external libraries, services and other IO which is achieved via "dependency inversion": the application logic defines an interface which is implemented by the outer layers (infrastructure).
This way the application logic remains "clean" and can focus on decision making, while you can still reuse external libraries and services.
A more detailed discussion of this topic can be found here: http://www.plainionist.net/Implementing-Clean-Architecture-Frameworks/
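A sketch of that inversion for the SHA-256 case, using hypothetical names: the Core project owns only the interface, while the infrastructure implementation delegates to the BCL's crypto primitives, including a constant-time comparison, which speaks to the side-channel worry:

```csharp
using System.Security.Cryptography;
using System.Text;

// Defined in Core: no dependency on any crypto library.
public interface IChecksumVerifier
{
    bool Matches(byte[] payload, byte[] expectedSha256);
}

// Defined in Infrastructure: delegates to the platform's implementation.
public sealed class Sha256ChecksumVerifier : IChecksumVerifier
{
    public bool Matches(byte[] payload, byte[] expectedSha256)
    {
        byte[] actual = SHA256.HashData(payload);
        // Constant-time comparison to avoid timing side channels.
        return CryptographicOperations.FixedTimeEquals(actual, expectedSha256);
    }
}
```

Core code receives an `IChecksumVerifier` through its constructor and never references `System.Security.Cryptography` itself.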

Do I need to test the domain services in Domain driven design?

I am developing a console application using domain-driven design. I tried to keep the domain logic as much as possible in the domain entities, but somehow some logic leaked into domain services. So now, do I need to test the domain services, and if so, how?
Yes, if they contain logic, they have to be tested.
If the service was properly decoupled, it should be possible to test it with unit tests and dependency injection.
First always make sure that your domain services are stateless.
The usual roles of domain services is stuff like validation and persistence. In those cases simply create mocks/stubs/dummies of infrastructure services that they use, pass them in, inside a unit test and assert certain behavior on those mocks. Business as usual. Sometimes the domain services need entities. Mock and pass those in as well. Assert as usual.
Inevitably someone will chime in with the venerable statement: "but domain services aren't about persistence". If a domain service deals with or uses some persistence mechanism (repository/gateway) to accomplish some responsibility, then it's perfectly reasonable English to state 'its usual role is stuff like persistence'.
With that out of the way: DDD does not set decoupling goals. Good DDD is allowing ALL your business logic to happen in the domain. Making domain services stateless accomplishes that. Like value objects, it makes domain services safe to pass around from outside layers. Keeping the API of a domain service consistent with your ubiquitous language ensures it remains a coherent unit of organization within your domain.
"But domain services are not about persistence"... only if the ubiquitous language is not, and it often isn't, so in those cases the domain service API shouldn't reflect a persistence mechanism. But it can certainly have internal methods that use plenty of persistence, and you will need mocks/stubs/dummies to get at those in unit tests. Domain services aren't architectural scaffolding for keeping your layers separate. They are a unit of organization for higher-level domain logic.
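A concrete version of "mock the infrastructure service and assert", with hypothetical names throughout and a hand-rolled stub so the sketch stays self-contained (Moq or a similar library would work the same way):

```csharp
// Abstraction the domain service depends on (implemented in infrastructure).
public interface IAccountRepository
{
    decimal GetBalance(string accountId);
}

// Stateless domain service holding higher-level domain logic.
public sealed class OverdraftService
{
    private readonly IAccountRepository _accounts;

    public OverdraftService(IAccountRepository accounts) => _accounts = accounts;

    public bool IsOverdrawn(string accountId) =>
        _accounts.GetBalance(accountId) < 0m;
}

// Hand-rolled test double standing in for the real repository.
public sealed class StubAccountRepository : IAccountRepository
{
    public decimal Balance { get; set; }
    public decimal GetBalance(string accountId) => Balance;
}
```

The unit test constructs the service with the stub, sets up the balance, and asserts on the returned decision, with no database in sight.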
Domain Services are all about domain logic, so they should definitely be tested.
It's all the simpler to do since they have very few dependencies, and in particular usually no coupling to infrastructure classes doing slow I/O.
The domain layer is at the center of your application and isn't tightly coupled to any other layer, most calls will stay inside its boundaries. If Domain Services want to communicate to the outside world, they'll often use events. Therefore, you should need little to no mocks or stubs when testing them.
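One way to sketch that event-based communication, with hypothetical names (a minimal in-process publisher rather than any particular library):

```csharp
using System;
using System.Collections.Generic;

// A domain event describing something that happened.
public sealed class OrderShipped
{
    public string OrderId { get; }
    public OrderShipped(string orderId) => OrderId = orderId;
}

// Minimal in-process publisher; infrastructure subscribes, the domain raises.
public static class DomainEvents
{
    private static readonly List<Action<OrderShipped>> Handlers =
        new List<Action<OrderShipped>>();

    public static void Subscribe(Action<OrderShipped> handler) => Handlers.Add(handler);

    public static void Raise(OrderShipped evt)
    {
        foreach (var handler in Handlers) handler(evt);
    }
}

// The domain service raises an event instead of calling infrastructure.
public sealed class ShippingService
{
    public void Ship(string orderId)
    {
        // ...domain logic would go here...
        DomainEvents.Raise(new OrderShipped(orderId));
    }
}
```

In a test, subscribing a recording handler takes the place of mocking an email or logging service.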
It depends on the framework you use. Using C#, and depending on project complexity, I would take advantage of DI and factories (if any), or implement some functional tests (retrospectively) with SpecFlow and Moq, given the interface contracts you should have written when implementing your domain services. The starting point would consist of installing SpecFlow, and then creating a dedicated test project...
http://www.specflow.org/getting-started/

Understanding onion architecture

Onion Architecture Mockups
Above are two images that depict my understanding of Onion Architecture.
They are slightly different from the drawings found online, because they address an agenda that I cannot find an answer to.
Infrastructure, as far as I can tell, covers things like persistence, logging, etc. I have written examples of them in italics. However, a lot of the time infrastructure components, as well as the UI, need to communicate with one another. The UI might want to audit or log something; the persistence project may need to log something. Logging is one of the harder items to fit into onion architecture, and my understanding is that a lot of people have different opinions on where you should and shouldn't log.
In my first drawing, I have put an Infrastructure Interfaces layer in the diagram to allow cross-communication without any one component knowing the implementation of another. This is something I have seen in a few examples.
The second drawing is my preference. It uses a mediator for cross-communication between infrastructure and UI, and is basically a way to let the core services communicate with infrastructure indirectly (assume Service Interfaces is called Core Services in the right diagram). The logger would subscribe itself to certain events, as would the database, etc.
The first diagram allows only POCOs and interfaces in all layers except the outer layer (excluding the dependency resolver). The second allows domain and business logic in the core service layer and lets the infrastructure layers do their jobs in isolation.
I justified the infrastructure components by ensuring that each had an output of some sort. Auditing and logging would usually use a DB of some sort, cache would usually store in memory, and DB should probably have been called persistence. However, there is a library called AutoMapper. I have seen it wrapped in some instances so that its interface can go in the Core to be used by pretty much any infrastructure component, but it seems like over-abstraction to me. AutoMapper is kind of like the Events object in that all infrastructure components use it to translate between themselves and the domain, but I'm not sure it fits in that layer since it is not a service.
Question: Which of the two is closest to the definition of onion architecture, and where would you fit a tool like AutoMapper? Do you think trying to wrap something like that is over-abstraction?
Thanks!
I've used AutoMapper and the Onion Architecture. We configured AutoMapper in the MVC Global.asax file, which typically calls a Config method in an AutoMapperConfig class in the App_Start directory.
Regarding your graphics, it appears one of them has a separate layer for the Mediator and Observer patterns. They're not necessarily needed; it entirely depends on your approach, just as you can use the Model-View-Controller pattern in the Onion Architecture, or Model-View-Presenter, or Model-View-ViewModel. They're just separate patterns coupled in to provide some added benefit.
Here's where I first came across the Onion Architecture: Jeffrey Palermo, if you want to see a purer graphical representation.
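If you do decide to wrap the mapper, the Core only needs to see a small abstraction; the sketch below uses a hypothetical `IObjectMapper` name, and the trivial reflection implementation merely stands in for an AutoMapper-backed one that would live in infrastructure. Whether this indirection pays for itself is, as the question suggests, debatable:

```csharp
using System;

// Abstraction the Core layers would see.
public interface IObjectMapper
{
    TDestination Map<TDestination>(object source) where TDestination : new();
}

// Stand-in implementation; a real one would delegate to AutoMapper.
// This one copies properties with matching names via reflection.
public sealed class ReflectionMapper : IObjectMapper
{
    public TDestination Map<TDestination>(object source) where TDestination : new()
    {
        var dest = new TDestination();
        foreach (var destProp in typeof(TDestination).GetProperties())
        {
            var srcProp = source.GetType().GetProperty(destProp.Name);
            if (srcProp != null && destProp.CanWrite)
                destProp.SetValue(dest, srcProp.GetValue(source));
        }
        return dest;
    }
}

// Hypothetical pair of types being translated between layers.
public sealed class CustomerEntity { public string Name { get; set; } }
public sealed class CustomerDto { public string Name { get; set; } }
```

Swapping the implementation for AutoMapper then touches only the outer layer, which is the argument usually made for wrapping it.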

DDD vs N-Tier (3-Tier) Architecture

I have been practicing DDD for a while now, with the four distinct layers: Domain, Presentation, Application, and Infrastructure. Recently, I introduced a friend of mine to the DDD concept and he thought it introduced an unnecessary layer of complexity (specifically targeting interfaces and IoC). Usually, it's at this point that I explain the benefits of DDD, especially its modularity. All the heavy lifting and under-the-hood stuff is in the Infrastructure, and if I wanted to completely change the underlying data-access method, I could do so by touching only the Infrastructure layer repository.
My friend's argument is that he could build a three tiered application in the same way:
Business
Data
Presentation
He would create business models (like domain models) and have the repositories in the Data layer return those Business models. Then he would call the business layer which called the data layer. I told him the problem with that approach is that it is not testable. Sure, you can write integration tests, but you can't write true unit tests. Can you see any other problems with his proposed 3-tiered approach (I know there is, because why would DDD exist otherwise?).
EDIT: He is not using IoC. Each layer in his example is dependent on one another.
I think you're comparing apples and oranges. Nothing about N-Tier prohibits it from utilizing interfaces & DI in order to be easily unit-tested. Likewise, DDD can be done with static classes and hard dependencies.
Furthermore, if he's implementing business objects and using Repositories, it sounds like he IS doing DDD, and you are quibbling over little more than semantics.
Are you sure the issue isn't simply over using DI/IoC or not?
I think you are mixing a few methodologies up. DDD is Domain-Driven Design and is about making the business domain a part of your code. What you are describing sounds more like the Onion Architecture (link) versus a 'normal' 3-layered approach. There is nothing wrong with using a 3-layered architecture with DDD. DDD depends on TDD (Test-Driven Development). Interfaces help with TDD, as it is easier to test each class in isolation. If you use Dependency Injection (and IoC) it is further mitigated.
The Onion Architecture is about making the Domain (a.k.a. business rules) independent of everything else, i.e. it's the core of the application, with everything depending on the business objects and rules, while things related to infrastructure, UI and so on are in the outer layers. The idea is that the closer to the 'shell of the onion' a module is, the easier it is to exchange for a new implementation.
Hope this clears it a bit up - now with a minor edit!
Read "Fundamentals of Software Architecture: An Engineering Approach", Chapter 8, Page 100 to 107.
The top-level partitioning is of particular interest to architects because it defines the fundamental architecture style and way of partitioning code. It is one of the first decisions an architect must make. These two styles (DDD & Layered) represent different ways to top-level partition the architecture. So, you are not comparing apples and oranges here.
Architects using technical partitioning organize the components of the system by technical capabilities: presentation, business rules, persistence, and so on.
Domain partitioning, inspired by Eric Evans' book Domain-Driven Design, is a modeling technique for decomposing complex software systems. In DDD, the architect identifies domains or workflows that are independent and decoupled from each other.
The domain partitioning (DDD) may use a persistence library and have a separate layer for business rules, but the top-level partitioning revolves around domains. Each component in the domain partitioning may have subcomponents, including layers, but the top-level partitioning focuses on domains, which better reflects the kinds of changes that most often occur on projects.
So you can implement layers on each component of DDD (your friend is doing the opposite, which is interesting and we might try that out as well).
However, please note that ("Fundamentals of Software Architecture: An Engineering Approach", Page 135)
The layered architecture is a technically partitioned architecture (as opposed to a domain-partitioned architecture). Groups of components, rather than being grouped by domain (such as customer), are grouped by their technical role in the architecture (such as presentation or business). As a result, any particular business domain is spread throughout all of the layers of the architecture. For example, the domain of “customer” is contained in the presentation layer, business layer, rules layer, services layer, and database layer, making it difficult to apply changes to that domain. As a result, a domain-driven design approach does not work as well with the layered architecture style.
Everything in architecture is a trade-off, which is why the famous answer to every architecture question in the universe is “it depends.” That being said, the disadvantage of your friend's approach is that it has higher coupling at the data level. Moreover, it creates difficulties in untangling the data relationships if the architects later want to migrate the architecture to a distributed system (e.g. microservices).
N-tier, or in this case 3-tier, architecture works great with unit tests.
All you have to do is apply IoC (inversion of control) using dependency injection and the repository pattern.
The business layer can validate and prepare the returned data for the presentation/web API layer by returning exactly the data that is required.
You can then use mocks in your unit tests all the way through the layers.
All your business logic can live in the BL layer, and the DAL layer will contain repositories injected from a higher level.
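A minimal slice of that arrangement, with hypothetical names: the BL class takes its repository through the constructor, so a test can inject a fake in place of the real DAL:

```csharp
using System;

// Contract between the business layer and the DAL.
public interface ICustomerRepository
{
    string GetName(int id);
}

// Business layer: depends only on the abstraction.
public sealed class CustomerService
{
    private readonly ICustomerRepository _repo;

    public CustomerService(ICustomerRepository repo) => _repo = repo;

    public string Greet(int id) => $"Hello, {_repo.GetName(id)}!";
}

// Test double replacing the real DAL repository in unit tests.
public sealed class FakeCustomerRepository : ICustomerRepository
{
    public string GetName(int id) => "Ada";
}
```

The same constructor is what an IoC container would use in production, wiring in the real, database-backed repository instead.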

Questions regarding Domain driven Design

After reading Eric Evans' Domain-Driven Design I have a few questions. I searched but nowhere could I find satisfying answers. Please let me know if any of you have a clear understanding of the questions below.
My concerns are:
A repository is for getting already-existing aggregates from the DB, a web service, etc.
If yes, can the repository also have transactional calls on this entity (i.e. transfer amount, send account details, etc.)?
Can an entity have methods containing business logic that call infrastructure-layer services directly, for sending emails, logging, etc.?
Repository implementations and factory classes will reside in the infrastructure layer. Is that a correct statement?
Can the UI layer (controller) call repository methods directly, or should they be called from the application layer?
There is still a lot of confusion in my mind... please guide me.
Books I am using: Eric Evans' Domain-Driven Design and
.NET Domain-Driven Design with C#
There is a lot of debate about whether Repositories should be read-only or allow transactions. DDD doesn't dictate any of these views. You can do both. Proponents of read-only Repositories prefer Unit of Work for all CUD operations.
Most people (self included) consider it good practice that Entities are Persistent-Ignorant. Extending that principle a bit would indicate that they should be self-contained and free of all infrastructure layer services - even in abstract form. So I would say that calls to infrastructure services belong in Service classes that operate on Entities.
It sounds correct that Repository implementations and Factories (if any) should reside in the infrastructure layer. Their interfaces, however, must reside in the Domain Layer so that the domain services can interact with them without having dependencies on the infrastructure layer.
DDD doesn't really dictate whether you can skip layers or not. Late in the book, Evans talks a bit about layering and calls it Relaxed Layering when you allow this, so I guess he just sees it as one option among several. Personally I'd prefer to prevent layer skipping, because it makes it easier to inject some behavior at a future time if calls already go through the correct layers.
Personally, in my latest DDD project, I use a Unit of Work that holds an NHibernate session. The UoW is constructor-injected into the repositories, giving them the single responsibility of Add, Remove and Find.
Evans has stated that one piece of the puzzle that's missing in the DDD book is «Domain Events». Using something like Udi Dahan's DomainEvents will give you a totally decoupled architecture (the domain object simply raises an event). Personally, I use a modified version of Domain Events and StructureMap for the wiring. It works great for my needs.
I recommend, based on other recommendations, that the Repository interfaces be a part of the model, and their implementations be a part of the infrastructure.
Yes! I've personally worked on three DDD web projects where services and repositories were injected to the presenters/controllers (ASP.NET/ASP.NET MVC) and it made a lot of sense in our context.
The repository should only be for locating and saving entities; there should not be any business logic in that layer. For example:
repository.TransferAmount(amount, toAccount); // this is bad
Entities can do things like send emails as long as they depend on abstractions defined in your domain. The implementation should be in your infrastructure layer.
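A sketch of that, with hypothetical names: the domain defines the abstraction, the entity depends only on it, and the SMTP implementation would live in the infrastructure layer:

```csharp
using System;

// Abstraction defined in the domain layer.
public interface INotificationSender
{
    void Send(string to, string message);
}

// Entity with behaviour; it never sees a concrete email implementation.
public sealed class Account
{
    public string OwnerEmail { get; }
    public decimal Balance { get; private set; }

    public Account(string ownerEmail, decimal openingBalance)
    {
        OwnerEmail = ownerEmail;
        Balance = openingBalance;
    }

    public void Withdraw(decimal amount, INotificationSender notifier)
    {
        Balance -= amount;
        if (Balance < 0m)
            notifier.Send(OwnerEmail, "Your account is overdrawn.");
    }
}

// Recording implementation, handy for unit tests.
public sealed class RecordingSender : INotificationSender
{
    public int SentCount { get; private set; }
    public void Send(string to, string message) => SentCount++;
}
```

In production the composition root supplies an SMTP-backed `INotificationSender`; in tests the recording one above is enough to assert the behaviour.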
Yes, you put your repository implementation in your infrastructure layer.
Can the UI layer (controller) call repository methods directly, or should they be called from the application layer?
Yes, I try to follow this pattern for the most part:
[UnitOfWork]
public ActionResult MyControllerAction(int id)
{
    var entity = repository.FindById(id);
    entity.DoSomeBusinessLogic();
    repository.Update(entity);
    return View(entity);
}
