Does it make sense to group all the interfaces of your Domain Layer (Modules, Models, Entities, Domain Services, etc.) within the Infrastructure layer? If not, does it make sense to create a "shared" project/component that groups all of these into a shared library? After all, the definition of "Infrastructure Layer" includes "shared libraries for Domain, Application, and UI layers".
I am thinking of designing my codebase around the DDD layers: UI, Application, Domain, Infrastructure. This would create four projects, respectively. My point is, you reference the Infrastructure Layer from the Domain Layer. But if you define the interfaces in the Domain Layer project, say for IPost, then you'll have a circular reference when you have to reference the Domain Layer project from the Infrastructure project to define the IPostRepository.Save(IPost post) method. Hence the idea of "define all Interfaces in the Shared library".
Perhaps the repositories should not expect an object to save (IPostRepository.Save(IPost post)), but instead expect the params of the object (though that could be a long set of params in Save()). Granted, this could be an ideal situation that shows when an object is getting overly complex, and additional Value Objects should be looked into for it.
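To make the cycle concrete, here is a sketch using the names above:

// Domain project: the entity contract, plus domain code that wants
// to call IPostRepository (so Domain must reference the project
// that declares it)
public interface IPost { /* ... */ }

// Infrastructure project: must reference the Domain project to see IPost
public interface IPostRepository
{
    void Save(IPost post);
}

// Domain -> Infrastructure and Infrastructure -> Domain:
// a circular project reference.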
Thoughts?
Concerning where to put the repositories, personally I always put the repositories in a dedicated infrastructure layer (e.g. MyApp.Data.Oracle) but declare the interfaces to which the repositories have to conform in the domain layer.
In my projects the Application Layer has to access the Domain and Infrastructure layers because it's responsible for configuring them.
The application layer is responsible for injecting the proper infrastructure into the domain. The domain doesn't know which infrastructure it's talking to; it only knows how to call it. Of course I use IoC containers like StructureMap to inject the dependencies into the domain.
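For illustration, a minimal StructureMap registration in the application layer might look like this sketch (IPostRepository and PostRepository are placeholder names, not from a particular project):

using StructureMap;

// Composition root: wire the domain's repository interface
// to a concrete infrastructure implementation.
var container = new Container(x =>
{
    x.For<IPostRepository>().Use<PostRepository>();
});

// Domain code only ever sees IPostRepository.
var repository = container.GetInstance<IPostRepository>();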
Again, I do not claim that this is the way DDD recommends structuring your projects; it's just the way I architect my apps.
Cheers.
I'm quite new to DDD, so don't hesitate to comment if you disagree; like you, I'm here to learn.
Personally, I don't understand why you should reference the infrastructure layer from your domain. In my opinion the domain shouldn't be dependent on the infrastructure. The domain objects should be completely ignorant of which database they are running on or which type of mail server is used to send mails. By abstracting the domain from the infrastructure it is easier to reuse, because the domain doesn't know which infrastructure it's running on.
What I do in my code is reference the domain layer from my infrastructure layer (but not the opposite). Repositories know the domain objects because their role is to preserve state for the domain. My repositories contain the basic CRUD operations for my root aggregates (Get(id), GetAll(), Save(object), Delete(object)) and are called from within my controllers.
What I did on my last project (my approach isn't purely DDD, but it worked pretty well) was to abstract my repositories with interfaces.
The root aggregate had to be instantiated through a repository, using the repository's Get(id) or Create() method. The concrete repository constructing the object passed itself in, so that the aggregate could preserve its state and the state of its child objects without knowing anything about the concrete implementation of the repository.
e.g.:
public class PostRepository : IPostRepository
{
    ...
    public Post Create()
    {
        Post post = new Post(this);
        dbContext.PostTable.Insert(post);
        return post;
    }

    public void Save(Post post)
    {
        Post existingPost = GetPost(post.ID);
        // map the changed state from post onto existingPost here,
        // then persist the pending changes
        dbContext.SubmitChanges();
    }
}

public class Post
{
    private IPostRepository _repository;

    internal Post(IPostRepository repository)
    {
        _repository = repository;
    }
    ...
    public void Save()
    {
        _repository.Save(this);
    }
}
I would advise you to consider Onion architecture. It fits with DDD very nicely. The idea is that your repository interfaces sit in a layer just outside Domain and reference Entities directly:
IPostRepository.Save(Post post)
Domain does not need to know about repositories at all.
The Infrastructure layer is not referenced by the Domain, or anybody else, and contains concrete implementations of repositories, among other I/O-related stuff. The common library with various helpers is called Application Core in this case, and it can be referenced by anyone.
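For example, the dependency directions might look like this sketch (names are illustrative):

// Domain (innermost): just the entity, referencing nothing
public class Post
{
    public int Id { get; set; }
    public string Title { get; set; }
}

// Layer just outside Domain (Application Core): the repository
// interface, referencing the entity directly
public interface IPostRepository
{
    void Save(Post post);
}

// Infrastructure (outermost): the concrete implementation,
// referenced by nobody and wired up at runtime
public class SqlPostRepository : IPostRepository
{
    public void Save(Post post)
    {
        // persistence code goes here
    }
}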
Related
I have this classic DDD problem: I have a Domain Service "DetectPriority" that does some stuff.
The PM asked me to create two different services: one INTERNAL (which is FULL of business rules and involves many other Domain Models) and one EXTERNAL (a simple API call).
There is an interface "DetectPriorityInterface" within the Domain.
Both implementations MUST be active at the same time; a kind of "mixer" has to select one or the other in real time.
The problem is: where should these two implementations live, in the Domain Layer or the Infrastructure Layer?
The internal implementation is full of business rules and should reside in the Domain Layer.
The external implementation is a simple CALL and should live in Infrastructure.
Should we put both in the Infrastructure layer?
Thanks
EDIT
Actually we have one interface, "DetectPriority", and three implementations, ALL in our Domain layer (a temporary "wrong" solution):
InternalDetector (with business rules)
ExternalDetector (with the external API call)
MixerDetector (gets both implementations and handles the choice)
Clients use the interface, so for the Application Layer all of this is transparent; later on, we will remove the Internal (or External) detector and the Mixer and use only one implementation. (The idea behind all this is to understand which performs better; it is an A/B test.)
Our internal debate is this: because InternalDetector has some domain rules that belong ONLY to that detector, some argue it should live in the Infrastructure layer, since they are not general domain rules. Some of us disagree, because InternalDetector contains only business rules, and we don't see those belonging in the Infra layer.
Probably the correct way would be to add Internal to Domain and External to Infra, but it seems a bit confusing...
Putting them all together in the Infra layer would be more readable for devs...
We didn't find this covered in books, because usually you have a single implementation of a domain service...
The short answer is that you should implement an internal service in the domain layer and an external one in the infrastructure layer, exactly as you said in your question. This way, everything will be in its place.
An additional important thing to consider is that the code that decides which service to call should be in the domain layer too. As I understand from your question, you decide which detector to use based on some business rule. The fact that one detector is implemented in your application and another one is not is just an implementation detail of your system. In fact, you just decide to use one set of business rules or another according to some condition. It is a business decision.
DDD is pretty often about difficult compromises. But when you are looking for a good compromise, I would advise never moving domain logic outside of the domain layer, and never adding references from the domain layer to other layers.
This is essential.
And here is an example of how you can solve this task without violating these rules:
// Names in this code should be changed to something with business
// meaning. For example `externalDetector` could be `governmentDetector`
// and `internalDetector` could be `corporateDetector`.

// Declare the service interface and the detector interface
// in the domain layer.
public interface DetectPriority
{
    Priority Detect();
}

public interface IDetector
{
    Priority Detect();
}

// The domain service. Your dependency injection code should inject
// an internal implementation and an external one (the latter
// implemented in the infrastructure layer). So your DI code knows
// about the different implementations, but the domain service
// doesn't: for it, they are just two implementations of the
// domain interface IDetector.
public class DetectPriorityService : DetectPriority
{
    private readonly IDetector _externalDetector;
    private readonly IDetector _internalDetector;

    public DetectPriorityService(IDetector externalDetector, IDetector internalDetector)
    {
        _externalDetector = externalDetector;
        _internalDetector = internalDetector;
    }

    // Implement the method of the domain service like this:
    public Priority Detect()
    {
        // weShouldUseExternalSetOfRules stands for the business
        // condition that selects the set of rules to apply.
        if (weShouldUseExternalSetOfRules)
        {
            return _externalDetector.Detect(); // implemented in your infrastructure layer
        }
        else
        {
            return _internalDetector.Detect(); // implemented in your domain
        }
    }
}
In this solution you can see that:
All domain logic (the implementation of the internal detector and the decision about which set of rules to use) is placed in the domain layer.
We don't have references to the infrastructure layer from our domain. The domain service has a reference only to the IDetector interface, but this interface is declared in the domain layer.
There is no infrastructure code in the domain layer. In this case, infrastructure code means something like "call that GET method of that REST service using this set of parameters in the query string". Obviously, this code will be in the externalDetector implementation.
To confirm this is a good approach, you can take a look at this repository with a sample DDD application from Eric Evans' famous book. There you can find a service interface declared in the domain layer and the service itself implemented in the infrastructure layer. Unfortunately, there are no examples of using this service interface inside the domain layer in this application. But it's declared inside the domain layer to make such usage possible.
And you can find the same approach with a good explanation in this great article.
EDIT
According to the new information in the question: if it is about A/B testing, then choosing a detector is an application-level decision. All other things remain the same. So:
MixerDetector should be in the application layer
DetectPriority interface - in the domain layer
InternalDetector in the domain layer
ExternalDetector in the infrastructure layer
And you don't need "business" names for your detectors then, because they are literally InternalDetector and ExternalDetector.
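For illustration, the application-level mixer might be as simple as this sketch (reusing the IDetector interface from above; the 50/50 split is an assumed bucketing rule, a real A/B test would use your experiment framework):

using System;

// Application layer: chooses a detector per call for the A/B test.
public class MixerDetector : IDetector
{
    private readonly IDetector _internalDetector;  // lives in the domain layer
    private readonly IDetector _externalDetector;  // lives in the infrastructure layer
    private readonly Random _random = new Random();

    public MixerDetector(IDetector internalDetector, IDetector externalDetector)
    {
        _internalDetector = internalDetector;
        _externalDetector = externalDetector;
    }

    public Priority Detect()
    {
        // Hypothetical 50/50 split; record which arm was used
        // so the results of the experiment can be compared.
        return _random.Next(2) == 0
            ? _internalDetector.Detect()
            : _externalDetector.Detect();
    }
}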
Should we put both in the Infrastructure layer?
Not usually, no. Among other things, that's going to make a mess of your dependency graphs. We don't want domain code to have a dependency on infrastructure code (one of the motivations for having a domain model is so that you can implement the logic of the domain without being tightly coupled to the context that runs the domain model -- introducing infrastructure dependencies is contrary to that goal).
That doesn't necessarily mean that the infrastructure code is "far away" - see package by feature vs package by layer. They are different responsibilities (in the single responsibility principle sense), so there will usually be some separation between the two.
One aspect in which the two are very different: failure modes -- robust code that communicates across the network needs to respect the fact that the network is unreliable, but that's not a domain concern, so we don't usually want to pollute our domain code with network contingency logic.
But if you like, ignore all that -- the real heuristic is simple: we want the arrangement of code that is easiest to maintain over its entire lifetime. If in your context that means putting domain code into the infrastructure layer, then that's what you should do.
The guidelines of DDD and other styles are primarily there to help you avoid the trap of increasing the lifetime maintenance cost by deciding to do what is easy "right now".
I tend to keep domain service implementations that are free of infrastructure dependencies in the domain layer. Implementations of a domain service interface that require infrastructure dependencies should reside in the infrastructure layer.
What you also need to consider in your case is that the code which instantiates the concrete implementation of your DetectPriorityInterface at runtime has to reside in the infrastructure layer as well, since it has a direct dependency on the external domain service.
I suggest you have a factory for that job which decides to create one or the other domain service based on some kind of parameter. But you can still use a factory interface which you can put in your domain layer; let's call it PriorityDetectorFactoryInterface or similar. Only the concrete implementation of that factory, let's call it PriorityDetectorFactory, would reside in the infrastructure layer.
If you have some application service which is responsible for handling the use case where priority detection comes into play, you would pass the PriorityDetectorFactoryInterface into this application service. At runtime the concrete implementation of the factory interface (i.e. PriorityDetectorFactory) will be injected into the application service. That way you can also keep the application layer, where you usually only define the workflows for orchestrating your use cases, free of infrastructure dependencies.
With that you would have:
DetectPriorityInterface in your domain layer
InternalPriorityDetector (implementing DetectPriorityInterface) in your domain layer
ExternalPriorityDetector (implementing DetectPriorityInterface) in your infrastructure layer
PriorityDetectorFactoryInterface in your domain layer as well
PriorityDetectorFactory (implementing PriorityDetectorFactoryInterface) in your infrastructure layer
...and the mentioned application service handling your use case in your application layer
Note: this is all based on the assumption that your internal domain service implementation is really free of dependencies other than stuff from the domain layer itself.
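To make the factory arrangement concrete, here is a rough sketch under that same assumption (the bool parameter that drives the choice is made up for illustration, and the Priority type is assumed from the question):

// Domain layer: the abstractions only
public interface DetectPriorityInterface
{
    Priority Detect();
}

public interface PriorityDetectorFactoryInterface
{
    DetectPriorityInterface Create(bool useExternal);
}

// Infrastructure layer: the concrete factory knows both implementations.
// InternalPriorityDetector (domain layer) and ExternalPriorityDetector
// (infrastructure layer) implement DetectPriorityInterface as listed above.
public class PriorityDetectorFactory : PriorityDetectorFactoryInterface
{
    public DetectPriorityInterface Create(bool useExternal)
    {
        return useExternal
            ? (DetectPriorityInterface)new ExternalPriorityDetector()
            : new InternalPriorityDetector();
    }
}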
Is the following good design, and allowable in onion architecture and domain-driven design?
Say you have an "Order" domain class like so:
class Order
{
    private INotificationService _notificationService;
    private ICartRepository _cartRepository;

    void Checkout(Cart cart, bool notifyCustomer)
    {
        _cartRepository.Save(cart);
        if (notifyCustomer)
        {
            _notificationService.SendNotification();
        }
    }
}
Is it good or bad design to have the domain model call interfaces of the infrastructure? (In this case, INotificationService and ICartRepository.)
Your design will be OK only if both the INotificationService and ICartRepository interfaces are defined within your Domain (Core) layer, and if they're bound at runtime to the right implementations by your Dependency Resolution layer (the outermost layer of your Onion architecture).
Remember that in an Onion architecture, your Domain layer cannot reference any outer layer's libraries.
The ICartRepository implementation will obviously be done in your Infrastructure layer, as it will surely be tied to your Data Access layer technology.
If your INotificationService implementation needs to talk to an external service, then it goes into Infrastructure as well. But if it is part of your business, then its implementation could be in the Domain layer.
I think INotificationService is a domain concept rather than an infrastructure service. We could model this as a domain event telling the customer that the cart was saved.
What about moving INotificationService to the domain layer and renaming it "CartDomainEvents"?
CartDomainEvents.Raise(new CartSavedEvent(...));
In the general case, calling infrastructure components introduces bidirectional dependencies, which is usually not a good design.
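A rough sketch of that event-based idea (the static-event style shown here is just one simple way to wire domain events):

using System;

// Domain layer: the event and a simple raiser
public class CartSavedEvent
{
    public int CartId { get; }
    public CartSavedEvent(int cartId) { CartId = cartId; }
}

public static class CartDomainEvents
{
    public static event Action<CartSavedEvent> CartSaved;

    public static void Raise(CartSavedEvent e) => CartSaved?.Invoke(e);
}

// Infrastructure layer: subscribes and performs the actual I/O,
// e.g. sending the customer notification:
// CartDomainEvents.CartSaved += e => notificationGateway.NotifyCartSaved(e.CartId);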
A repository exists for each aggregate of the domain; the repository sits on top of the domain. You can declare base repository interfaces in your domain while their implementations live in infrastructure, but you should not see repository implementations in your model (that would not be correct).
There are some basic interfaces, such as UnitOfWork, Repository, Specification, authentication and so on, in your infrastructure, accessible from all layers. I recommend taking a look at the Agatha store front written in .NET to see a real implementation: https://github.com/elbandit/Asp-Net-Design-Patterns-CQRS
Repositories belong to the infrastructure layer.
Also, it would be better to use an ICart interface:
void Checkout(ICart cart, bool notifyCustomer)
to reduce coupling.
So, I've looked at some posts about the Specification Pattern here, and haven't found an answer to this one yet.
My question is, in an n-layered architecture, where exactly should my Specifications get "newed" up?
I could put them in my Service Layer (aka the Application Layer, as it's sometimes called... basically, something an .aspx code-behind would talk to), but I feel like by doing that I'm letting business rules leak out of the Domain. If the Domain objects are accessed some other way (besides the Service Layer), the Domain objects cannot enforce their own business rules.
I could inject the Specification into my Model class via constructor injection. But again, this feels "wrong". I feel like the only things that should be injected into Model classes are "services", like caching, logging, dirty-flag tracking, etc. And, if you can avoid it, to use aspects instead of littering the constructors of the Model classes with tons of service interfaces.
I could inject the Specification via method injection (sometimes referred to as "double dispatch"?), and explicitly have that method encapsulate the injected Specification to enforce its business rule.
Create a "Domain Services" class, which would take a Specification(s) via constructor injection, and then let the Service Layer use the Domain Service to coordinate the Domain object. This seems OK to me, as the rule enforced by the Specification is still in the "Domain", and the Domain Service class can be named very much like the Domain object it's coordinating. The thing here is I feel like I'm writing a LOT of classes and code, just to "properly" implement the Specification pattern.
Add to this that the Specification in question requires a Repository in order to determine whether it's "satisfied" or not.
This could potentially cause performance problems, especially if I use constructor injection, because consuming code could call a property that wraps the Specification, and that, in turn, calls the database.
So any ideas/thoughts/links to articles?
Where is the best place to new up and use Specifications?
Short answer:
You use Specifications mainly in your Service Layer, so there.
Long answer:
First of all, there are two questions here:
Where should your specs live, and where should they be new'd up?
Just like your repository interfaces, your specs should live in the domain layer, as they are, after all, domain-specific. There's a question on SO that discusses this for repository interfaces.
Where should they be new'd up, though? Well, I use LinqSpecs on my repositories and mostly only have three methods on my repository:
public interface ILinqSpecsRepository<T>
{
IEnumerable<T> FindAll(Specification<T> specification);
IEnumerable<T> FindAll<TRelated>(Specification<T> specification, Expression<Func<T, TRelated>> fetchExpression);
T FindOne(Specification<T> specification);
}
The rest of my queries are constructed in my service layer. That keeps the repositories from getting bloated with methods like GetUserByEmail, GetUserById, GetUserByStatus etc.
In my service, I new up my specs and pass them to the FindAll or FindOne methods of my repository. For example:
public User GetUserByEmail(string email)
{
var withEmail = new UserByEmail(email); // the specification
return userRepository.FindOne(withEmail);
}
and here is the Specification:
public class UserByEmail : Specification<User>
{
private readonly string email;
public UserByEmail(string email)
{
this.email = email;
}
#region Overrides of Specification<User>
public override Expression<Func<User, bool>> IsSatisfiedBy()
{
return x => x.Email == email;
}
#endregion
}
So to answer your question, specs are new'd up in the service layer (in my book).
I feel like the only thing that should be injected into Model classes are "services"
IMO you should not be injecting anything into domain entities.
Add to this, that the Specification in question requires a Repository in order to determine whether it's "satisfied" or not.
That's a code smell. I would review your code there. A Specification should definitely not require a repository.
A specification is an implementation check of a business rule. It has to exist in the domain layer, full stop.
It's hard to give specifics on how to do this, as every codebase is different, but in my opinion any business logic needs to be in the domain layer and nowhere else. This business logic needs to be completely testable and loosely coupled from the UI, database, external services and other non-domain dependencies. So I would definitely rule out 1, 2 and 3 above.
4 is an option; at least the specification will live in your domain layer. However, the newing up of specifications really depends on the implementation again. We usually use dependency injection, so the newing up of pretty much all of our objects is performed via an IoC container and corresponding bootstrapping code (i.e. we usually wire the application fluently). However, we would never directly link business logic to, e.g., UI model classes and the like. We usually have contours/boundaries between things such as the UI and the domain. We usually define domain service contracts, which can then be used by outside layers such as the UI.
Lastly, my answer assumes that the system you are working on is at least somewhat complex. If it is a very simple system, domain-driven design as a concept is probably over the top. However, some concepts such as testability, readability, SoC, etc. should be respected regardless of the codebase, in my opinion.
I have been tasked with the undertaking of translating an application from SharePoint to .NET. I am concerned with doing it the right way, so I got an architecture book to read up on patterns and practices.
I've tried to model everything out using Domain-Driven Design. I have a Model that represents my world, a Repository to store it in the database, and a Service layer to interact with the UI (which is WebForms, because I have zero experience in MVC and am already barely treading water with this undertaking).
I am having difficulty grasping the correct way for the layers to interact. My understanding is that the Model should be the base of everything. It references nothing, other layers reference it.
Question 1: Is that right?
I'm getting really concerned with the Service Layer. I'm noticing that I'm developing a very anemic Model, for two reasons: 1, my Model doesn't reference the Repository, so I can't store anything via the Model; 2, I am trying to do things as they happen (i.e. I add an object to a list of objects, so I store it in the DB one at a time instead of all at once when the user is done adding the objects). So a lot of the work is being done between the Service and Repository layers, with the Model just being there and looking nice.
I'm starting to worry--I'm early in the development, but I am the one being looked at to be the architect of all this. I don't want a maintenance nightmare (I expect this application will be used for years). As always, time is a concern, and I have not been able to prepare/learn effectively. Any help would be swell. :-)
Model should be the base of everything. It references nothing, other layers reference it.
Question 1: Is that right?
The typical way to enforce separation between the domain model and the persistence system is to define repositories. Persistence, however, is not part of the domain model.
Your models should call methods defined by the repository.
For example, consider this totally fake repository:
// Repository
public class SharePointRepository
{
    public void SaveWidget()
    {
        // Implement: persist the widget
    }
}
So the repository is only concerned with loading and saving data; it doesn't contain any of your domain logic at all.
However if your model is tightly bound to the repository, you've got a separation of concerns issue.
So in this case, dependency injection becomes useful. With the prior example, your model would have to explicitly instantiate SharePointRepository and invoke methods. A cleaner way of doing this, so that your model doesn't care, is to inject the repository dependency at runtime.
namespace YourApp.Domain.Abstract
{
public interface ISharePointRepository
{
void SaveWidget();
}
}
Based on this interface, you could build a concrete implementation and inject the dependency on the concrete implementation at run time.
namespace YourApp.Domain.Concrete
{
    public class SqlSharePointRepository : ISharePointRepository
    {
        public void SaveWidget()
        {
            // Code that saves the widget using the managed SQL provider
        }
    }
}
So on your web form:
When you are collecting data, your model will be hydrated with data from the form and will invoke a repository method; however, the repository itself will have been injected into the ASP.NET application at runtime, so you could swap SqlSharePointRepository for RavenDbRepository without breaking your app.
To see how to bind your repository in WebForms, see this SO post: How can I implement Ninject or DI on asp.net Web Forms?
With MVC, the controller is responsible for the gap you think you are experiencing. But as to your questions, and based on your current design: the model should invoke repository operations, while the repository itself stays loosely coupled.
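As a sketch, the actual binding with Ninject (the approach from the linked post) can be a couple of lines, using the names from the example above:

using Ninject;

// Composition root: bind the domain interface to the concrete
// infrastructure implementation.
var kernel = new StandardKernel();
kernel.Bind<ISharePointRepository>().To<SqlSharePointRepository>();

// Swapping persistence later means changing only this binding, e.g.
// kernel.Bind<ISharePointRepository>().To<RavenDbRepository>();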
I've seen lots of books and articles saying to put validation code in your Service Layer. Keep the Domain Objects "dumb" (aka pure POCOs) and handle all validation that a Domain Object might do in the Service Layer.
The Service Layer is responsible for so much it seems (or at least it can be); user authentication, role authentication, scripting dependency objects for IoC's (loggers, error-handlers, etc...), scripting Domain Objects, scripting repositories and passing Domain Objects to and from the repository... whew!
Doesn't creating all these rules in the Service Layer pose a substantial threat to your Domain Objects? For instance, what happens if some programmer decides to write consuming code directly against your Domain Objects and just bypasses the Service Layer altogether? That would be bad, but it's a believable situation.
If you are going to put a lot of the responsibilities in the Service Layer, including all Domain Object validation, is there a way to "protect" your Domain Objects if someone tries to script them directly? For instance, maybe some way your Domain Objects know they're not being used by a certain client (in this case, the Service Layer)?
Good design makes me think the Domain Objects should know nothing about who's calling them and how they're being called.
If there is no way to "lock down" the Domain Objects, then why do so many articles, books, etc. suggest that putting Domain Object validation in the Service Layer is the way to go? I would imagine that, taking a defensive programming position, you should build your Domain Objects to be bullet-proof, and rely on your Service Layer as a simple layer of code for forwarding and receiving requests between the UI and the BAL/DAL.
Has anyone had some real-life project experiences with "abuse" of their Domain Objects from people that have bypassed their Service Layer?
I think you may misunderstand the purpose of a POCO. A POCO, as I understand it, is not an anemic domain object with only properties and attributes. Rather, a POCO simply is not tied to a framework or a complicated inheritance model. The object is flexible and only concerned with its role in the domain.
They are two different design philosophies: Rich Domain Model vs. Anemic Domain Model.
The short answer is yes, you can prevent direct access to your domain objects.
You can do so with a number of techniques:
1) You can make all public-facing domain objects immutable (i.e. you can't change the data) by making getters the only public methods. All methods that modify your objects can be protected or package-private, so only the correctly packaged services can access them (in Java, at least).
2) You can expose only separate classes to your external developers. So if you have a Person domain class, you can have a PersonInfo class that you pass up, which does nothing but contain info.
3) You should expose a coherent API to your app consumers. You basically prevent them from bypassing your Service layer.
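As an illustration of technique 1 in C# (where internal plays the role of Java's package-private above; the Person and Rename names are made up for the sketch):

using System;

// Domain object: the public surface is read-only.
public class Person
{
    public string Name { get; private set; }

    public Person(string name)
    {
        Name = name;
    }

    // internal: only code in the same assembly (your domain plus its
    // services) can mutate the object, so external consumers cannot
    // bypass validation or the service layer.
    internal void Rename(string newName)
    {
        if (string.IsNullOrWhiteSpace(newName))
            throw new ArgumentException("Name is required", nameof(newName));
        Name = newName;
    }
}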