Downsides of a rich domain model - domain-driven-design

I was reading about how the anemic domain model is an antipattern, and I have some questions about it.
I have a database used by three clients, and each of them has different business rules for inserting a product into the database.
So, if I use a rich domain model, my code will be something like this:
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class Product : IValidatableObject
{
    public int Id;
    public Client Client;
    public int ClientId;

    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        if (ClientId == 1)
            return DoValidationForClientOne();
        else if (ClientId == 2)
            return DoValidationForClientTwo();
        else if (ClientId == 3)
            return DoValidationForClientThree();
        return new List<ValidationResult>();
    }
}
Well, it's horrible, isn't it?
Now, if I have an anemic domain model, I could simply create three service-layer classes, each containing the validation for one specific client. Isn't that better?
My second argument is: if I have a desktop and a web application using the same rich domain model, how can I know when to throw an HttpException and when to throw some desktop exception? Wouldn't it be better to separate them?
So, finally, why is an anemic domain model an antipattern in a situation like this in my project?

An anaemic domain model has its place: https://softwareengineering.stackexchange.com/questions/160782/are-factors-such-as-intellisense-support-and-strong-typing-enough-to-justify-the
Your domain model should not be throwing exceptions that are specific to a presentation platform. Let your presentation code sort that out. You should aim to make your domain model agnostic of its presentation.

As already stated - you showed just a DTO, not a domain entity.
In DDD, you would have some constant rules directly in the Product entity and a number of ProductPolicies to encapsulate what can differ in handling products in different contexts. Horrible? No. Beautiful and powerful. But only if your domain is complex enough. If it's not - use an anemic model.
Your domain should not depend on anything. It should not know anything about the web platform, the desktop platform, the ORM being used, or the DI container being used. So if you throw an exception, it should be a custom domain exception. Read about the onion architecture or hexagonal architecture for a more detailed explanation: http://jeffreypalermo.com/blog/the-onion-architecture-part-1/
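For example, a minimal sketch of that separation (ProductValidationException and the productService field are illustrative names, not from the question):

// Domain layer: a custom exception with no platform dependencies.
public class ProductValidationException : Exception
{
    public ProductValidationException(string message) : base(message) { }
}

// Web layer: translate the domain exception at the boundary.
public ActionResult Save(Product product)
{
    try
    {
        productService.Save(product); // domain service throws domain exceptions only
        return RedirectToAction("Index");
    }
    catch (ProductValidationException ex)
    {
        throw new HttpException(400, ex.Message); // web-specific reaction
    }
}

A desktop front end would catch the same ProductValidationException and react in its own way, so the domain stays agnostic of both.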

I would recommend the following:
Define an IProductValidator interface, and provide three implementations:
interface IProductValidator {
    void validateProduct(Product product);
}
Change the Client class, and add the following methods to it:
class Client {
    void validateProduct(Product product) {
        getProductValidator().validateProduct(product);
    }
    IProductValidator getProductValidator() {
        // This method returns the validator; it is better to make it
        // abstract and implement it in subclasses according to their type.
    }
}
And change the Product class to:
public class Product : IValidatableObject {
    public int Id;
    public Client client;

    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext) {
        client.validateProduct(this); // delegate to the client-specific validator
        return new List<ValidationResult>(); // sketch: the validator throws on failure
    }
}
Now the client-specific rules live in their own IProductValidator implementations, and Product no longer needs any if/else chains.
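For illustration, one of the three implementations might look like this (the actual rule shown is invented):

class ClientOneProductValidator : IProductValidator {
    public void validateProduct(Product product) {
        // Client one's rule - purely an example; throw a custom domain
        // exception (see the previous answer) rather than a web-specific one.
        if (product.Id <= 0)
            throw new ProductValidationException("Client one requires a positive product id.");
    }
}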

Related

C# MVC Entity Framework with testability

I'm thinking a bit about "best practice" regarding testability and the best way to define a particular action.
In the SportsStore application (from Pro ASP.NET MVC 4), for the AdminController, we have the following two methods in the AdminController.cs file:
IProductRepository
namespace SportsStore.Domain.Abstract {
    public interface IProductRepository {
        IQueryable<Product> Products { get; }
        void SaveProduct(Product product);   // Defined in EFProductRepository
        void DeleteProduct(Product product); // Defined in EFProductRepository
    }
}
AdminController:
private IProductRepository repository;

public ViewResult Edit(int productId) {
    Product product = repository.Products.FirstOrDefault(p => p.ProductID == productId);
    ...
}

[HttpPost]
public ActionResult Delete(int productId) {
    Product prod = repository.Products.FirstOrDefault(p => p.ProductID == productId);
    ...
}
As I noticed, we are repeating the same bit of logic in both actions: finding a product by its ID. If that lookup ever changes, we need to change it in two spots. It is easily testable, though, since the controller itself is making the LINQ call.
I was thinking I could push this into the equivalent of EFProductRepository (the database implementation of the IProductRepository interface), but that ties the logic to some kind of database state. I'd like to avoid this in my unit tests, as it increases testing complexity a fair amount.
Is there a better place to put this FindOrDefault logic, rather than in the controller, yet keep a good amount of testability?
Edit1: Adding the definition for the repository, which points to an interface
The responses under my question discuss the possibilities. Both the book and @Maess agree on keeping the logic for this particular part in the controller. The comments under my question are worth looking at, as both @SOfanatic and @Maess provided fantastic input.
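That said, if you wanted a single place for the lookup without tying it to EF, one lightweight option is an extension method over the abstraction (a sketch; FindProduct is an invented name):

using System.Linq;

public static class ProductRepositoryExtensions
{
    // Centralizes the lookup: if the key logic changes, it changes here only.
    // Works against any IProductRepository, including in-memory test fakes.
    public static Product FindProduct(this IProductRepository repository, int productId)
    {
        return repository.Products.FirstOrDefault(p => p.ProductID == productId);
    }
}

Both actions then become one-liners, e.g. Product product = repository.FindProduct(productId);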

UnitOfWork / working with multiple databases in a DDD application

We have an application which stores its data in two different databases. At some point in the future we may only be storing our data in one database, so we want it to be as painless as possible to make this kind of change. For this reason, we wrap our DbContexts in a single MyDataContext which gets injected into our UnitOfWork and Repository classes.
class MyDataContext : IDataContext {
    internal Database1Context Database1;
    internal Database2Context Database2;
}

class UnitOfWork : IUnitOfWork {
    MyDataContext myDataContext;

    public UnitOfWork(MyDataContext myDataContext) {
        this.myDataContext = myDataContext;
    }

    public void Save() {
        //todo: add transaction/commit/rollback logic
        this.myDataContext.Database1.SaveChanges();
        this.myDataContext.Database2.SaveChanges();
    }
}

class Database1Context : DbContext {
    public DbSet<Customer> Customers { get; set; }
}

class Database2Context : DbContext {
    public DbSet<Customer> CustomerProfile { get; set; }
}

class CustomerRepository : ICustomerRepository {
    MyDataContext myDataContext;

    public CustomerRepository(MyDataContext myDataContext) {
        this.myDataContext = myDataContext;
    }

    public Customer GetCustomerById(int id) {
        return this.myDataContext.Database1.Customers.Single(...);
    }
}
My first question is: am I doing this right? I've been doing a lot of reading, but admittedly DDD is a little bit overwhelming at this point.
My second question is: which layer of the application do the IUnitOfWork and IDataContext interfaces reside in? I know that the interfaces for repositories live in the Core/Domain layer/assembly of the application, but I'm not sure about these two. Should these two even have interfaces?
My first question is, am I doing it right?
You can do that, but first reconsider why you're storing data in two different places to begin with. Are distinct aggregates at play? Furthermore, if you wish to commit changes to two different databases within a single transaction, you will need to use two-phase commit, which is best avoided. If you have different aggregates, perhaps you can save them separately?
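For reference, making the two SaveChanges calls atomic would look something like this sketch, which is precisely where the two-phase commit comes in (TransactionScope escalates to a distributed transaction when two separate connections enlist):

using System.Transactions;

public void Save()
{
    using (var scope = new TransactionScope())
    {
        // Both contexts commit or roll back together, at the cost of a
        // distributed (2PC) transaction across the two databases.
        this.myDataContext.Database1.SaveChanges();
        this.myDataContext.Database2.SaveChanges();
        scope.Complete();
    }
}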
My second question is which layer of the application do the IUnitOfWork and IDataContext interfaces reside in?
These can be placed in the application layer.
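A sketch of that layering (namespace names purely illustrative):

// Core/Domain layer: entities and repository contracts.
namespace MyApp.Domain
{
    public class Customer { public int Id { get; set; } }

    public interface ICustomerRepository
    {
        Customer GetCustomerById(int id);
    }
}

// Application layer: IUnitOfWork and IDataContext can live here.
namespace MyApp.Application
{
    public interface IDataContext { }

    public interface IUnitOfWork
    {
        void Save();
    }
}

// Infrastructure layer: the EF-specific MyDataContext, UnitOfWork and
// CustomerRepository from the question implement these interfaces.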

Message-based domain object collaboration

Recently, I was thinking about how to use messages to implement domain object collaboration, and now I have some thoughts:
Each domain object implements an interface if it wants to respond to a message;
Each domain object does not depend on any other domain object, meaning we will not have the Model.OtherModel relation;
Each domain object only does things that modify itself;
Each domain object can send a message, and that message will be received by whichever other domain objects care about it;
In short, the only way domain objects collaborate is through messages; a domain object can send or receive whatever messages it needs.
When I read Evans's DDD, I saw that he defines the aggregate concept in the domain. I think the aggregate is static and not suitable for object interactions; it focuses only on the static structure of objects and the relationships between them. In the real world, objects interact using messages, not by referencing each other or aggregating other objects. In my opinion, all objects are equal, meaning they do not depend on any other objects.
As for implementing the sending and receiving of messages, I think we can create an EventBus framework dedicated to domain object collaboration. We can map each event type to its subscriber types in a dictionary: the key is the event type, the value is a list of subscriber types. When an event is raised, we find the corresponding subscriber types, load the subscriber domain objects from data persistence, and call the corresponding handle method on each subscriber. A sketch of such a bus follows the example below.
For example:
public class EventA : IEvent { }
public class EventB : IEvent { }
public class EventC : IEvent { }

public class ExampleDomainObject : Entity<Guid> {
    public void MethodToRaiseAnExampleEvent()
    {
        RaiseEvent(new EventC());
    }
}

public class A : Entity<Guid>, IEventHandler<EventB>, IEventHandler<EventC> {
    public void Handle(EventB evnt)
    {
        //Response for EventB.
    }
    public void Handle(EventC evnt)
    {
        //Response for EventC.
    }
}

public class B : IEventHandler<EventA>, IEventHandler<EventC> {
    public void Handle(EventA evnt)
    {
        //Response for EventA.
    }
    public void Handle(EventC evnt)
    {
        //Response for EventC.
    }
}
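And a minimal sketch of the dictionary-based EventBus itself (synchronous dispatch; loading subscribers from persistence is stubbed out as plain instantiation here):

using System;
using System.Collections.Generic;

// The handler interface the example above assumes.
public interface IEventHandler<TEvent> where TEvent : IEvent
{
    void Handle(TEvent evnt);
}

public static class EventBus
{
    // Maps an event type to the subscriber types that handle it.
    private static readonly Dictionary<Type, List<Type>> subscribers =
        new Dictionary<Type, List<Type>>();

    public static void Subscribe<TEvent>(Type subscriberType) where TEvent : IEvent
    {
        if (!subscribers.ContainsKey(typeof(TEvent)))
            subscribers[typeof(TEvent)] = new List<Type>();
        subscribers[typeof(TEvent)].Add(subscriberType);
    }

    public static void Raise<TEvent>(TEvent evnt) where TEvent : IEvent
    {
        if (!subscribers.ContainsKey(typeof(TEvent)))
            return;
        foreach (Type subscriberType in subscribers[typeof(TEvent)])
        {
            // In the described design each subscriber would be loaded from
            // data persistence; plain instantiation stands in for that here.
            var handler = (IEventHandler<TEvent>)Activator.CreateInstance(subscriberType);
            handler.Handle(evnt);
        }
    }
}

// Usage: EventBus.Subscribe<EventC>(typeof(A)); EventBus.Raise(new EventC());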
Those are my thoughts. I hope to hear yours.
Have you ever heard of event sourcing or CQRS?
It sounds like that's the direction your thoughts are heading.
There's a lot of great info out there, and many good blog posts about CQRS, domain events, and messaging-based domains.
Some example implementations are available, and there's a helpful and active community where implementation details can be discussed.

Proper way to secure domain objects?

If I have an entity Entity, a service EntityService, and an EntityServiceFacade with the following interfaces:
interface EntityService {
    Entity getEntity(Long id);
}

interface EntityServiceFacade {
    EntityDTO getEntity(Long id);
}
I can easily secure the read access to an entity by controlling access to the getEntity method at the service level. But once the facade has a reference to an entity, how can I control write access to it? If I have a saveEntity method and control access at the service (not facade) level like this (with Spring security annotations here):
class EntityServiceImpl implements EntityService {
    ...
    @PreAuthorize("hasPermission(#entity, 'write')")
    public void saveEntity(Entity entity) {
        repository.store(entity);
    }
}

class EntityServiceFacadeImpl implements EntityServiceFacade {
    ...
    @Transactional
    public void saveEntity(EntityDTO dto) {
        Entity entity = service.getEntity(dto.id);
        entity.setName(dto.name);
        service.save(entity);
    }
}
The problem here is that the access control check happens only after I have already changed the name of the entity, so that does not suffice.
How do you guys do it? Do you secure the domain object methods instead?
Thanks
Edit:
If you secure your domain objects, for example with annotations like:
@PreAuthorize("hasPermission(this, 'write')")
public void setName(String name) { this.name = name; }
Am I then breaking the domain model (according to DDD)?
Edit2
I found a thesis on the subject. The conclusion of that thesis says that a good way IS to annotate the domain object methods to secure them. Any thoughts on this?
I wouldn't worry about securing individual entity methods or properties from being modified.
Preventing a user from changing an entity in memory is not always necessary if you can control persistence.
The big gotcha here is UX: you want to inform a user as early as possible that she will probably be unable to persist changes made to that entity. The decision you will need to make is whether it is acceptable to delay the security check until persistence time, or whether you need to inform the user beforehand (e.g. by deactivating UI elements).
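As a sketch of the persistence-time option (all names here are hypothetical):

using System;

public interface IPermissionChecker { bool CanWrite(object entity); }
public interface IEntityStore { void Store(object entity); }

public class SecuredEntityRepository
{
    private readonly IPermissionChecker permissions;
    private readonly IEntityStore store;

    public SecuredEntityRepository(IPermissionChecker permissions, IEntityStore store)
    {
        this.permissions = permissions;
        this.store = store;
    }

    public void Save(object entity)
    {
        // The entity may have been modified freely in memory; the security
        // boundary sits here, just before the change would be persisted.
        if (!permissions.CanWrite(entity))
            throw new UnauthorizedAccessException("No write permission for this entity.");
        store.Store(entity);
    }
}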
If Entity is an interface, can't you just membrane it?
So if Entity looks like this:
interface Entity {
    int getFoo();
    void setFoo(int newFoo);
}
create a membrane like:
final class ReadOnlyEntity implements Entity {
    private final Entity underlying;

    ReadOnlyEntity(Entity underlying) { this.underlying = underlying; }

    public int getFoo() { return underlying.getFoo(); } // Read methods work.

    // But deny mutators.
    public void setFoo(int newFoo) { throw new UnsupportedOperationException(); }
}
If you annotate read methods, you can use Proxy classes to automatically create membranes that cross multiple classes (so that a get method on a readonly Entity that returns an EntityPart returns a readonly EntityPart).
See deep attenuation in http://en.wikipedia.org/wiki/Object-capability_model for more details on this approach.
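For what it's worth, the same trick carries over to .NET, where DispatchProxy can generate such a membrane for any interface automatically. A rough sketch (matching on method names is a crude stand-in for real read-annotations, and deep attenuation is omitted):

using System;
using System.Reflection;

public class ReadOnlyProxy<T> : DispatchProxy where T : class
{
    private T underlying;

    // Wraps any interface T in a membrane that denies mutators.
    public static T Wrap(T underlying)
    {
        T proxy = Create<T, ReadOnlyProxy<T>>(); // T must be an interface type
        ((ReadOnlyProxy<T>)(object)proxy).underlying = underlying;
        return proxy;
    }

    protected override object Invoke(MethodInfo targetMethod, object[] args)
    {
        // Property getters arrive as methods named "get_Xxx"; treat those
        // and Get* methods as reads, and deny everything else.
        if (targetMethod.Name.StartsWith("get_") || targetMethod.Name.StartsWith("Get"))
            return targetMethod.Invoke(underlying, args);
        throw new InvalidOperationException(targetMethod.Name + " is denied by the membrane.");
    }
}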

Are we all looking for the same IRepository?

I've been trying to come up with a way to write generic repositories that work against various data stores:
public interface IRepository
{
    IQueryable<T> GetAll<T>();
    void Save<T>(T item);
    void Delete<T>(T item);
}
public class MemoryRepository : IRepository {...}
public class SqlRepository : IRepository {...}
I'd like to work against the same POCO domain classes in each. I'm also considering a similar approach, where each domain class has its own repository:
public interface IRepository<T>
{
    IQueryable<T> GetAll();
    void Save(T item);
    void Delete(T item);
}

public class MemoryCustomerRepository : IRepository<Customer> {...}
public class SqlCustomerRepository : IRepository<Customer> {...}
My questions: 1) Is the first approach even feasible? 2) Is there any advantage to the second approach?
The first approach is feasible. I did something similar in the past when I wrote my own mapping framework that targeted RDBMSes and XmlWriter/XmlReader. You can use this sort of approach to ease unit testing, though I think we now have superior OSS tools for doing just that.
The second approach is what I currently use with IBATIS.NET mappers. Every mapper has an interface, and every mapper [could] provide your basic CRUD operations. The advantage is that each mapper for a domain class also has specific functions (such as SelectByLastName or DeleteFromParent) that are expressed by an interface and defined in the concrete mapper. Because of this, there's no need for me to implement separate repositories as you're suggesting - our concrete mappers target the database. To perform unit tests, I use StructureMap and Moq to create in-memory repositories that operate as your Memory*Repository does. It's fewer classes to implement and manage, and less work overall, for a very testable approach. For data shared across unit tests, I use a builder pattern for each domain class, with WithXXX methods and AsSomeProfile methods (AsSomeProfile just returns a builder instance with preconfigured test data).
Here's an example of what I usually end up with in my unit tests:
// Moq mocking the concrete PersonMapper through the IPersonMapper interface
var personMock = new Mock<IPersonMapper>(MockBehavior.Strict);
personMock.Expect(pm => pm.Select(It.IsAny<int>())).Returns(
    new PersonBuilder().AsMike().Build()
);

// StructureMap's ObjectFactory
ObjectFactory.Inject(personMock.Object);

// Now anywhere in my actual code where an IPersonMapper instance is requested
// from ObjectFactory, Moq will satisfy the requirement and return a Person
// instance set with the PersonBuilder's Mike profile unit test data.
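The builder referenced above might look like this (a sketch; Person's fields are invented for the example):

// Hypothetical Person with two fields, just to make the sketch concrete.
public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public class PersonBuilder
{
    private readonly Person person = new Person();

    // WithXXX methods set one field each and chain.
    public PersonBuilder WithName(string name) { person.Name = name; return this; }
    public PersonBuilder WithAge(int age) { person.Age = age; return this; }

    // AsSomeProfile methods preload the builder with known test data.
    public PersonBuilder AsMike() { return WithName("Mike").WithAge(42); }

    public Person Build() { return person; }
}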
Actually, there is a general consensus now that domain repositories should not be generic. Your repository should express what you can do when persisting or retrieving your entities.
Some repositories are read-only, some are insert-only (no update, no delete), some have only specific lookups...
With a GetAll returning IQueryable, your query logic will leak into your code, possibly into the application layer.
But it's still interesting to use the kind of interface you propose to encapsulate LINQ Table<T> objects, so that you can replace them with an in-memory implementation for test purposes.
So I suggest calling it ITable<T>, giving it the same interface as the LINQ Table<T> object, and using it inside your specific domain repositories (not instead of them).
You can then run your specific repositories in memory by using an in-memory ITable<T> implementation.
The simplest way to implement ITable<T> in memory is to use a List<T> and get an IQueryable<T> interface using the .AsQueryable() extension method.
public class InMemoryTable<T> : ITable<T>
{
    private List<T> list;
    private IQueryable<T> queryable;

    public InMemoryTable(List<T> list)
    {
        this.list = list;
        this.queryable = list.AsQueryable();
    }

    public void Add(T entity) { list.Add(entity); }
    public void Remove(T entity) { list.Remove(entity); }

    public IEnumerator<T> GetEnumerator() { return list.GetEnumerator(); }

    public Type ElementType { get { return queryable.ElementType; } }
    public IQueryProvider Provider { get { return queryable.Provider; } }
    ...
}
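For reference, the ITable<T> interface assumed above would be something like (a sketch):

using System.Linq;

// Mirrors the surface of LINQ's Table<T> that the repositories rely on.
public interface ITable<T> : IQueryable<T>
{
    void Add(T entity);
    void Remove(T entity);
}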
You can work in isolation of the database for testing, but with true specific repositories that give more domain insight.
This is a bit late... but take a look at the IRepository implementation at CommonLibrary.NET on codeplex. It's got a pretty good feature set.
Regarding your problem, I see a lot of people using methods like GetAllProducts() or GetAllEmployees() in their repository implementations. This is redundant and doesn't allow your repository to be generic. All you need is GetAll() or All(). The solution provided above does solve the naming problem, though.
This is taken from CommonLibrary.NET documentation online:
0.9.4 Beta 2 has a powerful Repository implementation.
* Supports all CRUD methods (Create, Retrieve, Update, Delete)
* Supports aggregate methods Min, Max, Sum, Avg, Count
* Supports Find methods using ICriteria<T>
* Supports Distinct and GroupBy
* Supports the IRepository<T> interface, so you can use an in-memory table for unit testing
* Supports versioning of your entities
* Supports paging, e.g. Get(page, pageSize)
* Supports audit fields (CreateUser, CreatedDate, UpdateDate, etc.)
* Supports the use of Mapper<T>, so you can map any table record to some entity
* Supports creating entities only if they aren't already there, by checking for field values.
