Understanding onion architecture - AutoMapper

Onion Architecture Mockups
Above are two images that depict my understanding of Onion Architecture.
They are slightly different from the drawings found online, because they address a concern I cannot find an answer to.
Infrastructure, as far as I can tell, covers things like persistence, logging, etc. I have written examples of them in italics. However, infrastructure components, as well as the UI, often need to communicate with one another: the UI might want to audit or log something, and the persistence project may need to log something. Logging is one of the harder items to fit into onion architecture, and my understanding is that people have many different opinions on where you should and shouldn't log.
In my first drawing, I have put an Infrastructure Interfaces layer in the diagram to allow cross communication without any one component knowing the implementation of another component. This is something that I have seen in a few examples.
The second drawing is my preference. It uses a mediator to cross-communicate between infrastructure and UI, and it's basically a way to allow the core services to communicate with infrastructure indirectly (assume Service Interfaces is called Core Services on the right diagram). The logger would subscribe itself to certain events, as would the database, etc.
The first diagram allows only POCOs and interfaces in all layers except the outer layer (excluding the dependency resolver). The second allows domain and business logic in the core service layer and lets the infrastructure layers do their jobs in isolation.
I justified the infrastructure components by ensuring that each had an output of some sort: auditing and logging would usually use a database of some sort, cache would usually store in memory, and db should probably have been called persistence. However, there is a library called AutoMapper. I have seen it wrapped in some instances, so that its interface can go in the Core to be used by pretty much any infrastructure, but that seems like over-abstraction to me. AutoMapper is kind of like the Events object in that all infrastructure components use it to translate between themselves and the domain, but I'm not sure it fits in that layer since it is not a service.
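To make that concrete, here is a rough sketch of the kind of wrapping I mean (IObjectMapper and AutoMapperAdapter are names I made up for illustration):

    // Core layer: only the abstraction, no reference to AutoMapper.
    public interface IObjectMapper
    {
        TDestination Map<TDestination>(object source);
    }

    // Infrastructure layer: a thin adapter around AutoMapper's IMapper.
    public class AutoMapperAdapter : IObjectMapper
    {
        private readonly AutoMapper.IMapper _mapper;

        public AutoMapperAdapter(AutoMapper.IMapper mapper)
        {
            _mapper = mapper;
        }

        public TDestination Map<TDestination>(object source)
        {
            return _mapper.Map<TDestination>(source);
        }
    }

The Core only ever sees IObjectMapper; whether that extra indirection is worth it is part of what I'm asking.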
Question: Which of the two is closest to the definition of onion architecture, where would you fit a tool like AutoMapper, and do you think trying to wrap something like that is over-abstraction?
Thanks!

I've used AutoMapper with the Onion Architecture. We configured AutoMapper in the MVC Global.asax file, which typically calls a configuration method on an AutoMapperConfig class in the App_Start directory.
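For reference, a minimal sketch of that kind of wiring, assuming a classic ASP.NET MVC project; the OrderEntity/OrderDto types are placeholders and the exact configuration API depends on your AutoMapper version:

    using AutoMapper;

    // Placeholder source/destination types, not from the original project.
    public class OrderEntity { public int Id { get; set; } public decimal Total { get; set; } }
    public class OrderDto    { public int Id { get; set; } public decimal Total { get; set; } }

    // App_Start/AutoMapperConfig.cs
    public static class AutoMapperConfig
    {
        public static IMapper Configure()
        {
            // Register all mappings once at startup; profiles could be scanned here instead.
            var config = new MapperConfiguration(cfg =>
            {
                cfg.CreateMap<OrderEntity, OrderDto>();
            });
            return config.CreateMapper();
        }
    }

    // Global.asax.cs
    public class MvcApplication : System.Web.HttpApplication
    {
        public static IMapper Mapper { get; private set; }

        protected void Application_Start()
        {
            // Keep a static instance like this, or register IMapper with your IoC container.
            Mapper = AutoMapperConfig.Configure();
        }
    }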
Regarding your graphics, it appears one of them has a separate layer for the Mediator and Observer patterns. They're not strictly needed; it entirely depends on your approach, just as you can use the Model-View-Controller, Model-View-Presenter, or Model-View-ViewModel pattern with the Onion Architecture. They're just separate patterns coupled in to provide some added benefit.
Here's where I first came across the Onion Architecture: Jeffrey Palermo's original posts, if you want to see a purer graphical representation.

Related

A bounded context is a full application?

I've been reading about DDD and bounded contexts and I think I'm getting the idea wrong. At first, I liked the idea of subdomains and bounded contexts, and I understood it like this: there's a piece of software to be developed, but attacking it all at once is too much, so we break it into logical pieces and develop each one at a time. Another problem we solve is ambiguity in the ubiquitous language.
This led me to think about bounded contexts as basically just folders where I group and bound code related to some specific piece of the application. I believed this code to be made up of things like:
The domain model of that bounded context, including abstractions for repositories and services
Infrastructure layer for that bounded context, implementations of repositories and so on
Of course, the domain model and infrastructure are properly separated within the bounded context.
Reading further, however, it seems that each bounded context is an entire application in its own right. It seems, sometimes, that each bounded context has its own application layer, for instance.
This made me confused, because I don't want to end up developing tons of applications; I just want to develop one. The bounded-context division of the application was supposed to build one app, not many apps to be integrated.
I've seen this question where #MikeSW says both approaches presented by the OP are valid. What I'm asking about is a third structure:
<bc 1>
  |_ domain
  |_ infrastructure
<bc 2>
  |_ domain
  |_ infrastructure
|_ application
|_ presentation
At least for all the applications I've seen, this makes much more sense. I want one app, not several apps with several presentations, but I still want to be able to break up the domain and benefit from things like "bounding the ubiquitous language".
So, is a bounded context a full application? Or can a bounded context be used the way I understood it, which feels more useful? Are there any problems with my approach?
The domain layer is usually the most complex part of your program, and it can also change often due to business requirements and refactoring. So you generally don't want to expose it directly to your presentation layer or to other bounded contexts. If you feel that you can expose it, it might be the case that your application logic or use case methods are mixed into your domain layer, or that your program is not large or complex enough to require multiple BCs to begin with. Otherwise, I would go with including the application layer in each BC to protect the domain model's integrity and expose only the commands that need to be called from a use case perspective.
I want one app, not several apps with several presentations, but I still want to be able to break up the domain and benefit from things like "bounding the ubiquitous language".
You can have a thin application layer for each bounded context, and still have a single presentation layer. This is sometimes called a "composite UI", which should be considered a separate BC in itself. If you need to handle common logic such as authentication, create another application service or facade in the composite UI and have it handle the authentication before in turn calling the application service of an outside BC.
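A rough sketch of what such a facade could look like; every type name here is hypothetical and only meant to show the shape:

    using System;

    // Application services exposed by two bounded contexts (thin application layers).
    public interface IOrderingAppService { void PlaceOrder(Guid userId, string productId); }
    public interface IBillingAppService  { void ChargeCustomer(Guid userId, decimal amount); }

    // Cross-cutting concern handled once in the composite UI.
    public interface IAuthService { bool IsAuthorized(Guid userId, string operation); }

    // Facade living in the composite UI: checks authentication, then delegates
    // to the application services of the outside bounded contexts.
    public class StoreFacade
    {
        private readonly IAuthService _auth;
        private readonly IOrderingAppService _ordering;
        private readonly IBillingAppService _billing;

        public StoreFacade(IAuthService auth, IOrderingAppService ordering, IBillingAppService billing)
        {
            _auth = auth;
            _ordering = ordering;
            _billing = billing;
        }

        public void PlaceOrder(Guid userId, string productId, decimal amount)
        {
            if (!_auth.IsAuthorized(userId, "PlaceOrder"))
                throw new UnauthorizedAccessException();

            _ordering.PlaceOrder(userId, productId);
            _billing.ChargeCustomer(userId, amount);
        }
    }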
I think most of the examples you see in books and on the web are over-simplified in that they have 1 BC per physical running application (and perform some kind of network communication between them), whereas in the real world you might have a complex application that you need to split into separate logical units, but not run them as separate processes unless the need arises.
At the end of the day, the answer is both. The important thing to take away from bounded contexts is not how you structure your app, but that you have different spaces where you model specific behavior relating to some context. How you define the boundaries between these contexts depends on the problem you need to solve.
There is nothing wrong with using namespaces (folders) to define bounded contexts. Like you said, most of the time you are simply writing one application. You can also define your bounded contexts by having a separate project for each context. In this case your presentation layer will reference the projects it needs.
There are many right ways to code DDD. You should ask yourself, "Am I following the core principle by doing it this way?"
The bounded context describes a subset of the complete solution, and everything within that context serves that context. So, IMO, each context has its own domain, so it could be a separate application or just a subsystem of the same project. The point of the "context" is that the ubiquitous language applies directly TO that context. For example, a User in the Account context might mean something completely different than a User in the Sales context. Each "User" will have different capabilities and follow different rules in each context. Each context needs to be isolated from any other context, and contexts are not allowed to share references (unless it's via a 'Shared' context); any communication should be mediated through a service that sits on top of that context. A context doesn't even have to follow DDD to be "DDD compliant", since each context can follow its own approach (e.g. domain driven, data driven, etc.). Contexts are simply silos that outline a logical section of the business.
Whatever you need to do to prevent direct references across contexts is fine whether that means different namespaces, different assemblies within a solution, or different projects altogether.
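As a small illustration of that isolation (namespaces and members are invented for the example), the same word "User" can model completely different behaviour in each context, and neither context references the other's type:

    namespace MyApp.Accounts
    {
        // In the Account context a User is about credentials and access.
        public class User
        {
            public string Email { get; set; }
            public bool IsLockedOut { get; private set; }
            public void LockOut() => IsLockedOut = true;
        }
    }

    namespace MyApp.Sales
    {
        // In the Sales context a User is a buyer with an order history.
        public class User
        {
            public string CustomerNumber { get; set; }
            public decimal LifetimeSpend { get; private set; }
            public void RecordPurchase(decimal amount) => LifetimeSpend += amount;
        }
    }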
The bounded context is the scope within which the code operates. It relies on a domain model, which can be supported by an ORM (or not). It implements different kinds of services (domain services and application services), but its aim is to expose only domain services to its environment. DDD is a service-oriented architecture, meant to work as offline as possible and in a loosely coupled way. You may decide to consume your services in different ways. The solution implements different kinds of components, different kinds of layers, different kinds of projects. I believe the most critical attention must go to the model, which should not be distributed across components. Solution design and domain model are orthogonal purposes.

Where logging should go in onion architecture with DDD

I am developing a console application using onion architecture and domain-driven design. I have two domains where I need to implement logging, and I am confused about where to place the logging component. Can I place it in the respective infrastructure of the two domains? Or in a shared kernel that can be referenced by both domains? If I need to place it in the shared kernel, what structure should I follow? I mean like core, infrastructure.
Logging is a cross-cutting concern. Aspect-oriented programming aims to encapsulate cross-cutting concerns into aspects. This allows for the clean isolation and reuse of code addressing the cross-cutting concern.
You need to create a library and implement your logging classes there, something like "MyProject.CrossCutting.Logging", and use aspect-oriented approaches to log events using this library.
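A minimal sketch of what such a library could look like, leaving out the aspect weaving itself; the namespaces and the ConsoleLogger are assumptions, not a prescribed layout:

    using System;

    namespace MyProject.CrossCutting.Logging
    {
        // Abstraction that any layer of any application project may depend on.
        public interface ILogger
        {
            void Info(string message);
            void Error(string message, Exception ex);
        }
    }

    namespace MyProject.Infrastructure.Logging
    {
        using MyProject.CrossCutting.Logging;

        // Concrete implementation, referenced only where the application is composed.
        public class ConsoleLogger : ILogger
        {
            public void Info(string message) => Console.WriteLine($"INFO  {message}");
            public void Error(string message, Exception ex) => Console.WriteLine($"ERROR {message}: {ex}");
        }
    }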
Logging is cross-cutting across all of your applications, so it should be part of your framework. All of the layers of all of your application projects can have a dependency on your framework, the same way they have a dependency on the .NET Framework, Spring, etc. Your framework must have abstractions for cross-cutting concerns that you can easily rely on, and then the implementation just has to be referenced in the composition root of the application, which is in the infrastructure.
If you're following DDD and the Onion Architecture, it doesn't matter how many domains you have. Each domain can implement its own version of a logger if needed. More than likely, you will create a logging interface and possibly a static implementation kept in a Common layer that can be called by any of the layers that need it. In the image that was shared previously, it would be kept in the Cross-Cutting layer. As previously mentioned, logging is a concern of all layers.

Is it common practice to have a shared and dedicated assembly or library for messages in a distributed system?

I'm taking a crack at some of the concepts behind distributed domain-driven design and I'm building a proof of concept. I have three C# solutions that have specific responsibilities within the overall system.
The solutions I have are:
The write model (receives commands from a client and creates and sends events)
The read model (receives events from write model, creates a database and exposes DTO services to the client, could potentially be 2 separate solutions)
The client (calls services to get needed data and sends commands to the write model)
All three solutions use messaging (commands, events) through a service bus (MassTransit in my case).
My main question is: Is it common practice to create an assembly with the messages and have each solution reference that assembly?
Extra credit: Is there anything I'm doing that seems weird or problematic in this POC? Any additional info I should be aware of when creating this type of a system?
Is it common practice to create an assembly with the messages and have each solution reference that assembly?
Yes. This is a common practice with messaging systems in general. For example, many NServiceBus samples employ this approach. Think of this assembly as representing your contract. In systems built upon different platforms this representation would come in the form of an XSD schema or some other schema definition mechanism.
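As a rough sketch, the shared assembly usually contains nothing but plain message types; the names below (MyShop.Contracts, PlaceOrder, OrderPlaced) are invented for illustration:

    using System;

    // MyShop.Contracts: the only assembly referenced by the write model, the read model and the client.
    namespace MyShop.Contracts
    {
        // Command sent from the client to the write model.
        public class PlaceOrder
        {
            public Guid OrderId { get; set; }
            public string ProductId { get; set; }
            public int Quantity { get; set; }
        }

        // Event published by the write model and consumed by the read model.
        public class OrderPlaced
        {
            public Guid OrderId { get; set; }
            public DateTime PlacedAtUtc { get; set; }
        }
    }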
Is there anything I'm doing that seems weird or problematic in this POC? Any additional info I should be aware of when creating this type of a system?
Everything seems to be well fitted to CQRS so far. To be fair, I should mention that it can be easy to get carried away with CQRS as a silver bullet and structure systems around it. It is often a wise decision to forgo CQRS altogether. Keep the focus on the business domain and use CQRS as an architectural style to implement your system, not to guide its model.

DDD and data export system

I'm a beginner in DDD and am facing a little problem with architecture.
Our system must be able to export business data in various formats (Excel, Word, PDF and other more exotic formats).
In your opinion, which layer should be responsible for the overall process of retrieving the source data, exporting it to the target format, and preparing the final result for the user? I'm confused about the split between domain and application responsibilities.
And regarding the export subsystem, should the implementations and their common interface contract belong to the infrastructure layer?
Neither the Application nor the Domain layer, nor any other 'layer'. DDD is not a layered architecture. Search for onion architecture or the ports and adapters pattern for more on this subject.
Now to the core of your problem. The issue you are facing is a separate bounded context and should go in a separate component of your system. Let's call it Reporting. And as it's just a presentation problem with no domain logic, DDD is not suitable for it. Just make some SQL views, read them using NHibernate, LINQ2SQL, EF or even plain DataReaders, and build your Word/whatever documents using a Builder pattern. No Aggregates, Repositories, Services or any other DDD building blocks.
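As an illustration, one possible shape for that Reporting component, with one Builder per output format (all names are made up; a real Excel/Word/PDF builder would wrap the corresponding library):

    using System.Collections.Generic;
    using System.Text;

    // A row straight from a SQL view / read model, not a domain aggregate.
    public class SalesRow
    {
        public string Product { get; set; }
        public decimal Total { get; set; }
    }

    public interface ISalesReportBuilder
    {
        void AddHeader(string title);
        void AddRow(SalesRow row);
        byte[] Build();
    }

    // One concrete builder per target format; CSV shown for brevity.
    public class CsvSalesReportBuilder : ISalesReportBuilder
    {
        private readonly StringBuilder _buffer = new StringBuilder();
        public void AddHeader(string title) => _buffer.AppendLine(title);
        public void AddRow(SalesRow row) => _buffer.AppendLine($"{row.Product};{row.Total}");
        public byte[] Build() => Encoding.UTF8.GetBytes(_buffer.ToString());
    }

    public class SalesReportExporter
    {
        public byte[] Export(IEnumerable<SalesRow> rows, ISalesReportBuilder builder)
        {
            builder.AddHeader("Product;Total");
            foreach (var row in rows)
                builder.AddRow(row);
            return builder.Build();
        }
    }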
You may want to go a bit further and have all data presentation in your application handled by a separate component. That's CQRS.
I usually take the simple approach where possible.
The domain code implements the language of your domain - The nouns, verbs etc.
Application code creates poetry using this language.
So in your example the construction of the output is an application concern, while the construction of the things the application will talk about (since you don't mention them) would be the domain concern.
E.g. if the report consists of sales data, then things like dates, bills, orders, etc. would be abstractly constructed in the domain, while the application would be concerned with producing documents using them.

DDD vs N-Tier (3-Tier) Architecture

I have been practicing DDD for a while now, with the 4 distinct layers: Domain, Presentation, Application, and Infrastructure. Recently, I introduced a friend of mine to the DDD concept and he thought it introduced an unnecessary layer of complexity (specifically targeting interfaces and IoC). Usually, it's at this point that I explain the benefits of DDD, especially its modularity. All the heavy lifting and under-the-hood stuff is in the Infrastructure, and if I wanted to completely change the underlying data-access method, I could do so by touching only the Infrastructure layer repository.
My friend's argument is that he could build a three tiered application in the same way:
Business
Data
Presentation
He would create business models (like domain models) and have the repositories in the Data layer return those business models. Then he would call the business layer, which would call the data layer. I told him the problem with that approach is that it is not testable. Sure, you can write integration tests, but you can't write true unit tests. Can you see any other problems with his proposed 3-tiered approach? (I know there are, because why would DDD exist otherwise?)
EDIT: He is not using IoC. Each layer in his example is dependent on one another.
I think you're comparing apples and oranges. Nothing about N-Tier prohibits it from utilizing interfaces & DI in order to be easily unit-tested. Likewise, DDD can be done with static classes and hard dependencies.
Furthermore, if he's implementing business objects and using Repositories, it sounds like he IS doing DDD, and you are quibbling over little more than semantics.
Are you sure the issue isn't simply over using DI/IoC or not?
I think you are mixing a few methodologies up. DDD is Domain-Driven Design and is about making the business domain a part of your code. What you are describing sounds more like the Onion Architecture versus a 'normal' 3-layered approach. There is nothing wrong with using a 3-layered architecture with DDD. DDD depends on TDD (Test-Driven Development). Interfaces help with TDD, as it is easier to test each class in isolation. If you use Dependency Injection (and IoC), it is further mitigated.
The Onion Architecture is about making the Domain (a.k.a. business rules) independent of everything else - i.e. it's the core of the application, with everything depending on the business objects and rules, while things related to infrastructure, UI and so on are in the outer layers. The idea is that the closer to the 'shell of the onion' a module is, the easier it is to exchange for a new implementation.
Hope this clears it up a bit - now with a minor edit!
Read "Fundamentals of Software Architecture: An Engineering Approach", Chapter 8, Page 100 to 107.
Top-level partitioning is of particular interest to architects because it defines the fundamental architecture style and the way code is partitioned. It is one of the first decisions an architect must make. These two styles (DDD and layered) represent different ways to partition the architecture at the top level. So, you are not comparing apples and oranges here.
Architects using technical partitioning organize the components of the system by technical capabilities: presentation, business rules, persistence, and so on.
Domain partitioning is inspired by Eric Evans' book Domain-Driven Design, a modeling technique for decomposing complex software systems. In DDD, the architect identifies domains or workflows that are independent and decoupled from each other.
The domain partitioning (DDD) may use a persistence library and have a separate layer for business rules, but the top-level partitioning revolves around domains. Each component in the domain partitioning may have subcomponents, including layers, but the top-level partitioning focuses on domains, which better reflects the kinds of changes that most often occur on projects.
So you can implement layers on each component of DDD (your friend is doing the opposite, which is interesting and we might try that out as well).
However, please note that ("Fundamentals of Software Architecture: An Engineering Approach", Page 135):
The layered architecture is a technically partitioned architecture (as opposed to a domain-partitioned architecture). Groups of components, rather than being grouped by domain (such as customer), are grouped by their technical role in the architecture (such as presentation or business). As a result, any particular business domain is spread throughout all of the layers of the architecture. For example, the domain of “customer” is contained in the presentation layer, business layer, rules layer, services layer, and database layer, making it difficult to apply changes to that domain. As a result, a domain-driven design approach does not work as well with the layered architecture style.
Everything in architecture is a trade-off, which is why the famous answer to every architecture question in the universe is "it depends." That being said, the disadvantage of your friend's approach is that it has higher coupling at the data level. Moreover, it will create difficulties in untangling the data relationships if the architects later want to migrate this architecture to a distributed system (e.g. microservices).
N-tier, or in this case 3-tier, architecture works great with unit tests.
All you have to do is apply IoC (inversion of control) using dependency injection and the repository pattern.
The business layer can validate and prepare the returned data for the presentation / Web API layer by returning exactly the data that is required.
You can then use mocks in your unit tests all the way through the layers.
All your business logic can live in the BL layer, and the DAL layer will contain repositories injected from a higher level.
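A minimal sketch of that setup, with a hand-rolled fake standing in for the DAL in a unit test (all names are illustrative; a mocking library such as Moq would work the same way):

    using System.Collections.Generic;
    using System.Linq;

    public class Customer
    {
        public int Id { get; set; }
        public bool IsActive { get; set; }
    }

    // DAL contract, injected into the BL from a higher level (the composition root / IoC container).
    public interface ICustomerRepository
    {
        IEnumerable<Customer> GetAll();
    }

    // Business layer: shapes the data for the presentation / Web API layer.
    public class CustomerService
    {
        private readonly ICustomerRepository _repository;
        public CustomerService(ICustomerRepository repository) => _repository = repository;

        public IReadOnlyList<Customer> GetActiveCustomers() =>
            _repository.GetAll().Where(c => c.IsActive).ToList();
    }

    // Unit test with a fake repository instead of the real DAL.
    public class FakeCustomerRepository : ICustomerRepository
    {
        public IEnumerable<Customer> GetAll() => new[]
        {
            new Customer { Id = 1, IsActive = true },
            new Customer { Id = 2, IsActive = false }
        };
    }

    public static class CustomerServiceTests
    {
        public static void ActiveCustomersAreFiltered()
        {
            var service = new CustomerService(new FakeCustomerRepository());
            System.Diagnostics.Debug.Assert(service.GetActiveCustomers().Count == 1);
        }
    }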
