Domain Modelling: Neither an Entity nor a Value Object

In DDD, the domain model consists of entities and value objects, but what do we do when we need something in the model which is neither of these?
For example, I have introduced the following ScheduledItems<T> implementation in order to encapsulate scheduling specifics:
using System;
using System.Collections.Generic;
using System.Linq;

public class ScheduledItems<T>
{
    private SortedDictionary<DateTime, T> scheduledItems;

    public ScheduledItems()
    {
        scheduledItems = new SortedDictionary<DateTime, T>();
    }

    public void ScheduleItem(DateTime scheduledDate, T item)
    {
        scheduledItems.Add(scheduledDate, item);
    }

    public void RemoveItem(T item)
    {
        // Materialize the keys first so we don't mutate the
        // dictionary while enumerating it.
        scheduledItems
            .Where(x => x.Value.Equals(item))
            .Select(x => x.Key)
            .ToList()
            .ForEach(k => scheduledItems.Remove(k));
    }
}
This class will be used by a couple of entities for scheduling purposes.
At this point, this is neither an Entity (it has no identity) nor a Value Object (it is not immutable).
One solution is to turn it into a Value Object by making it immutable ('adding' or 'removing' items would return a new instance of ScheduledItems).
But is this really necessary for something which is not really associated with the domain? This class could be just like any other .NET collection.
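For concreteness, here is one possible sketch of the immutable Value Object variant just described; the copy-on-write approach shown is an assumption, not the only way to do it:

using System;
using System.Collections.Generic;
using System.Linq;

// A hypothetical immutable variant: 'mutators' return new instances.
public sealed class ScheduledItems<T>
{
    private readonly SortedDictionary<DateTime, T> scheduledItems;

    public ScheduledItems() : this(new SortedDictionary<DateTime, T>()) { }

    private ScheduledItems(SortedDictionary<DateTime, T> items)
    {
        scheduledItems = items;
    }

    public ScheduledItems<T> ScheduleItem(DateTime scheduledDate, T item)
    {
        // Copy, modify the copy, and return it; the original instance is untouched.
        var copy = new SortedDictionary<DateTime, T>(scheduledItems);
        copy.Add(scheduledDate, item);
        return new ScheduledItems<T>(copy);
    }

    public ScheduledItems<T> RemoveItem(T item)
    {
        var remaining = scheduledItems
            .Where(x => !x.Value.Equals(item))
            .ToDictionary(x => x.Key, x => x.Value);
        return new ScheduledItems<T>(new SortedDictionary<DateTime, T>(remaining));
    }
}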

That class looks like a repository for ScheduledItems. So ScheduledItem is the Entity and ScheduledItems is the Repository with Add() and Remove() methods.

I guess it depends on why the items are sorted.
If they need to be sorted because of certain business rules then this should be part of your domain.
If they need to be sorted to be properly shown in the UI, then this most likely is just a bit of view logic that should not be part of the domain.
If none of the above, I would consider this a collection-like helper class that could live in the infrastructure layer and be used across the other layers.


DDD Entity and EntityType reference

I'm learning DDD and here is a problem I faced. I have two Aggregates (simplified):
class NoteType : AggregateRoot {
int noteTypeId
string name
string fields[]
... code omitted ...
}
class Note : AggregateRoot {
int noteId
int noteTypeId
map<str, str> fieldValues
setFieldValue(fieldName, fieldValue) {
// I want to check that fieldName is present in Notes.fields
// and later fieldValues[field.name] = fieldValue
}
... code omitted ...
}
I've heard that aggregates should reference each other by IDs only. In this case I can't access NoteType.fields. I found several ways to do so, but I'm not sure which one is better:
Pass NoteType instance into the Note model via constructor (do not reference by ID)
Use repository in setFieldValue to load NoteType
Use a service which will do the check (this may cause all the Note logic to be implemented in this service, since Note is highly dependent on NoteType)
What do you suggest?
Pass the information that the aggregate needs to the aggregate when it needs it.
setFieldValue(fieldName, fieldValue, noteType) {
    // Now you have the data that you need to verify against noteType.fields
}
Sometimes, if you can't tell from outside the aggregate what information you need, you instead pass in the capability to look up that information:
setFieldValue(fieldName, fieldValue, notes) {
    // Use the provided capability to get what you need
    noteType = notes.get(this.noteTypeId)
    // then do the useful work
    this.setFieldValue(fieldName, fieldValue, noteType)
}
Of course, if the only thing you need is the fields, then you might prefer to work only with that property:
setFieldValue(fieldName, fieldValue, fields)
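For illustration, a minimal C# sketch of the first option (passing the NoteType in); the Fields collection and all names here are hypothetical:

using System;
using System.Collections.Generic;

// Hypothetical shape: NoteType.Fields is assumed to be a simple collection of names.
public class NoteType
{
    public ICollection<string> Fields { get; } = new List<string>();
}

public class Note
{
    private readonly Dictionary<string, string> fieldValues = new Dictionary<string, string>();

    public void SetFieldValue(string fieldName, string fieldValue, NoteType noteType)
    {
        // The aggregate enforces its invariant using data handed in by the caller.
        if (!noteType.Fields.Contains(fieldName))
            throw new InvalidOperationException("Unknown field: " + fieldName);

        fieldValues[fieldName] = fieldValue;
    }
}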
Design is what we do, when we want to get more of what we want than we'd get by just doing it. -- Ruth Malan
In Domain Driven Design, a common "what we want" is to have the "business logic", meaning our implementation of the policies of information change that are important to our business, separated from the "plumbing" that describes how to read and store that information.

Avoiding storage concerns in entities, with a complex database

The project I'm working on deals with quite complex business rules, so I'm trying to apply DDD. Unfortunately, I have to work with a legacy database I cannot get rid of, and I'm having trouble keeping a clean Domain Design.
Let's say some Entity has some ValueType as its primary key, which is required. This could be designed in DDD like the following:
public class Entity
{
    public Entity(ValueType key)
    {
        Key = key;
    }

    public ValueType Key { get; }
}
Now, let's say this key is actually stored as a string representation, which can be parsed to construct the ValueType. I could do something like this to make it work with Entity Framework:
public class Entity
{
    private Entity()
    {
        // Private empty ctor for EF
    }

    public Entity(ValueType key)
    {
        StoredKey = key.ToString();
    }

    public ValueType Key => ValueType.Parse(StoredKey);

    // DB representation of the key, setter for EF
    private string StoredKey { get; set; }
}
This way, I feel I'm kind of polluting my Domain Design with storage concerns. As far as the Domain cares, the Entity could be persisted just in memory, so this internal string representation feels weird.
This is a very simple scenario to show an example, but things can actually get much worse. I would like to know if there is any way to achieve persistence ignorance in the model with this simple example, so I can start thinking later about how to design more complex scenarios.
The domain model doesn't need to follow the Entity Framework structure. What you can do is create two kinds of models: a pure domain model, which the repository transforms into an Entity Framework model when persisting it. When fetching, the repository does the inverse transformation.
You can achieve persistence ignorance in this instance. Your instincts are right: get rid of all persistence concerns from your domain model and move them entirely into your DAL where they belong.
DB.sql:
create table entity (
    id nvarchar(50) not null primary key,
    fields nvarchar(max) /* Look mum, NoSql inside sql! (please don't do this) */
);
Domain.dll:
public class Entity
{
    /* Optional - you are going to need some way of 'restoring' a persisted
       domain entity - how you do this is up to your own conventions */
    public Entity(ValueType key, ValueObjects.EntityAttributes attributes)
    {
        Key = key;
        Attributes = attributes;
    }

    public ValueType Key { get; }
    public ValueObjects.EntityAttributes Attributes { get; }

    /* domain functions below */
}

public interface IEntityRepository
{
    void Update(Domain.Entity entity);
    Domain.Entity Fetch(ValueType key);
}
Now ALL persistence work can go in your DAL, including the translation. I haven't done EF in a while, so treat the below as pseudocode only.
DAL (EF):
/* This class lives in your DAL and can be private; no other project needs to know about this class */
class Entity
{
    public string Id { get; set; }
    public string Fields { get; set; }
}
class EntityRepository : BaseRepository, Domain.IEntityRepository
{
    public EntityRepository(DBContext context)
    {
        base.Context = context;
    }

    public Domain.Entity Fetch(ValueType key)
    {
        string id = key.ToString();
        var efEntity = base.Context.Entities.SingleOrDefault(e => e.Id == id);
        return MapToDomain(efEntity);
    }

    /* Note: Handle mapping as you want, this is for example only */
    private Domain.Entity MapToDomain(EF.Entity efEntity)
    {
        if (efEntity == null) return null;
        return new Domain.Entity(
            ValueType.Parse(efEntity.Id),
            SomeSerializer.Deserialize<ValueObjects.EntityAttributes>(efEntity.Fields) /* every time you do this, a puppy hurts its paw */
        );
    }

    public void Update(Domain.Entity domainEntity)
    {
        var efEntity = MapToEf(domainEntity);
        base.Context.Entities.Attach(efEntity);
        base.Context.Entry(efEntity).State = EntityState.Modified;
        base.Context.SaveChanges();
    }

    private EF.Entity MapToEf(Domain.Entity domainEntity)
    {
        return new EF.Entity
        {
            Id = domainEntity.Key.ToString(),
            Fields = SomeSerializer.Serialize(domainEntity.Attributes) /* stahp! */
        };
    }
}
The takeaway here is that you are going to need to do mapping of some sort. This is all but unavoidable unless your domain is really simple and your ORM is super fancy, but even then I would recommend keeping your ORM models separate from your Domain models, because they solve two different problems (ORMs provide a code version of your database model; DDD provides a code version of your Business Models). If you are compromising your Domain Model (i.e. making property setters public) to cater for your DAL, then step back and re-evaluate. Obviously compromise where appropriate, but realise this means you are introducing (implied) dependencies across your application layers.
Your next question, in relation to performance ("but mapping is so slow"), was answered by Constantin Galbenu: have separate 'read' models and repositories for lists and searches. Do you really need to pull back thousands of business models just to populate a search result list (and then have the temptation to add properties of no concern to the business model because 'the search page needs this one bit of data for the finance people')? You should only be pulling out your domain model when you are doing some sort of business action; otherwise some nice anemic read-only views are your friend.
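A minimal sketch of what such a separate read-side model might look like; all names here are hypothetical:

using System.Collections.Generic;

// Hypothetical read-side DTO: flat, anemic, shaped for the search page.
public class EntitySearchRow
{
    public string Id { get; set; }
    public string DisplayName { get; set; }
}

public interface IEntitySearchQueries
{
    // Returns plain rows straight from the database; no domain model involved.
    IReadOnlyList<EntitySearchRow> Search(string term);
}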
As many suggested in the comments, CQRS is a good choice for complex business rules. It has the great advantage that you can have different models for each side (write/command and read/query). In this way you separate the concerns. This is also very good because the business logic for the write side differs from the read side's, but enough about the advantages of CQRS.
...Unfortunately, I have to work with a legacy database I cannot get rid of...
Your new Write model, the Aggregate, will be responsible for handling commands. This means that the legacy model will be relieved of this responsibility; it will be used only for queries. And to keep it up-to-date you can create a LegacyReadModelUpdater that is subscribed to all Domain events generated by the new Aggregate and it will project them to the old model in an eventually consistent manner.
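A rough sketch of that projection idea; the event type, context, and serializer names are hypothetical stand-ins:

// Hypothetical projector: subscribes to domain events raised by the new Aggregate
// and keeps the legacy tables up to date, in an eventually consistent manner.
public class LegacyReadModelUpdater
{
    private readonly LegacyDbContext legacyDb;

    public LegacyReadModelUpdater(LegacyDbContext legacyDb)
    {
        this.legacyDb = legacyDb;
    }

    // Called by whatever event-dispatch mechanism you use.
    public void When(EntityAttributesChanged e)
    {
        var row = legacyDb.Entities.Find(e.EntityId);
        row.Fields = SomeSerializer.Serialize(e.NewAttributes);
        legacyDb.SaveChanges();
    }
}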

DDD: Instantiate Value objects inside Aggregate or pass it as parameter?

When creating aggregates, should we create value objects inside the aggregate, or should we pass already created value objects to the ctor or factory?
public Booking(DateTime arrivalDate, DateTime departureDate)
{
    this.ArrivalAndDepartureInformation = new ArrivalAndDepartureInfo(arrivalDate, departureDate);
}
or
public Booking(ArrivalAndDepartureInfo arrivalAndDepartureInfo)
{
    this.ArrivalAndDepartureInformation = arrivalAndDepartureInfo;
}
Instantiate Value objects inside Aggregate or pass it as parameter?
If we speak about passing parameters into a constructor, it depends on how it is used. There might be some infrastructure limitations that require the use of primitive types.
If we speak about passing parameters into methods, then Value Objects are 100% my choice.
In general, I'd say it is better to pass value objects into your aggregates.
Value Objects can:
make the language of your model more expressive
bring type safety
encapsulate validation rules
own behavior
The general guideline I would recommend is this:
Inside the domain model, use value objects as much as possible.
Convert primitives into value objects at the boundary of the domain model (controllers, application services).
For example, instead of this:
public void Process(string oldEmail, string newEmail)
{
    Result<Email> oldEmailResult = Email.Create(oldEmail);
    Result<Email> newEmailResult = Email.Create(newEmail);

    if (oldEmailResult.Failure || newEmailResult.Failure)
        return;

    Email oldEmailValue = oldEmailResult.Value;
    Customer customer = GetCustomerByEmail(oldEmailValue);
    customer.Email = newEmailResult.Value;
}
Do this:
public void Process(Email oldEmail, Email newEmail)
{
    Customer customer = GetCustomerByEmail(oldEmail);
    customer.Email = newEmail;
}
The domain model should speak domain, not implementation primitives.
Your application component normally owns the responsibility of taking raw data and expressing it in the model's language.
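For completeness, a minimal sketch of what such an Email value object might look like; the Result type and the validation rule are placeholders rather than a prescribed implementation:

// Hypothetical value object: primitives are validated once, at the boundary.
public sealed class Email
{
    public string Value { get; }

    private Email(string value)
    {
        Value = value;
    }

    public static Result<Email> Create(string candidate)
    {
        // Placeholder validation rule; a real one would be stricter.
        if (string.IsNullOrWhiteSpace(candidate) || !candidate.Contains("@"))
            return Result.Fail<Email>("Invalid email");
        return Result.Ok(new Email(candidate.Trim()));
    }
}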

EF 5.0 new object: assign foreign key property does not set the foreign key id, or add to the collection

EF 5.0, using code-first on existing database workflow.
Database has your basic SalesOrder and SalesOrderLine tables with a required foreign key on the SalesOrderLine, as follows:
public class SalesOrder
{
    public SalesOrder()
    {
        this.SalesOrderLines = new List<SalesOrderLine>();
    }

    public int SalesOrderID { get; set; }
    public int CustomerID { get; set; }
    public virtual Customer Customer { get; set; }
    public virtual ICollection<SalesOrderLine> SalesOrderLines { get; set; }
}

public class SalesOrderLine
{
    public SalesOrderLine()
    {
    }

    public int SalesOrderLineID { get; set; }
    public int SalesOrderID { get; set; }
    public virtual SalesOrder SalesOrder { get; set; }
}
public class SalesOrderLineMap : EntityTypeConfiguration<SalesOrderLine>
{
    public SalesOrderLineMap()
    {
        // Primary Key
        this.HasKey(t => t.SalesOrderLineID);

        // Table & Column Mappings
        this.ToTable("SalesOrderLine");
        this.Property(t => t.SalesOrderLineID).HasColumnName("SalesOrderLineID");
        this.Property(t => t.SalesOrderID).HasColumnName("SalesOrderID");

        // Relationships
        this.HasRequired(t => t.SalesOrder)
            .WithMany(t => t.SalesOrderLines)
            .HasForeignKey(d => d.SalesOrderID);
    }
}
Now according to this page:
http://msdn.microsoft.com/en-us/data/jj713564
...we are told that:
The following code removes a relationship by setting the foreign key
to null. Note, that the foreign key property must be nullable.
course.DepartmentID = null;
Note: If the reference is in the added state (in this example, the
course object), the reference navigation property will not be
synchronized with the key values of a new object until SaveChanges is
called. Synchronization does not occur because the object context does
not contain permanent keys for added objects until they are saved. If
you must have new objects fully synchronized as soon as you set the
relationship, use one of the following methods.
By assigning a new object to a navigation property. The following code
creates a relationship between a course and a department. If the
objects are attached to the context, the course is also added to the
department.Courses collection, and the corresponding foreign key
property on the course object is set to the key property value of the
department.
course.Department = department;
...sounds good to me!
Now my problem:
I have the following code, and yet both of the Asserts fail - why?
using (MyContext db = new MyContext())
{
    SalesOrder so = db.SalesOrders.First();
    SalesOrderLine sol = db.SalesOrderLines.Create();
    sol.SalesOrder = so;
    Trace.Assert(sol.SalesOrderID == so.SalesOrderID);
    Trace.Assert(so.SalesOrderLines.Contains(sol));
}
Both objects are attached to the context - are they not? Do I need to do a SaveChanges() before this will work? If so, that seems a little goofy and it's rather annoying that I need to set all of the references on the objects by hand when a new object is added to a foreign-key collection.
-- UPDATE --
I should mark Gert's answer as correct, but I'm not very happy about it, so I'll wait a day or two. ...and here's why:
The following code does not work either:
SalesOrder so = db.SalesOrders.First();
SalesOrderLine sol = db.SalesOrderLines.Create();
db.SalesOrderLines.Add(sol);
sol.SalesOrder = so;
Trace.Assert(so.SalesOrderLines.Contains(sol));
The only code that does work is this:
SalesOrder so = db.SalesOrders.First();
SalesOrderLine sol = db.SalesOrderLines.Create();
sol.SalesOrder = so;
db.SalesOrderLines.Add(sol);
Trace.Assert(so.SalesOrderLines.Contains(sol));
...in other words, you have to set all of your foreign key relationships first, and then call TYPE.Add(newObjectOfTYPE)
before any of the relationships and foreign-key fields are wired up. This means that from the time the Create is done until the time you do the Add(), the object is basically in a half-baked state. I had (mistakenly) thought that since I used Create(), and since Create() returns a sub-classed dynamic object (as opposed to using "new", which returns a POCO object), the relationship wire-ups would be handled for me. It's also odd to me that you can call Add() on an object created with the new operator and it will work, even though the object is not a sub-classed type...
In other words, this will work:
SalesOrder so = db.SalesOrders.First();
SalesOrderLine sol = new SalesOrderLine();
sol.SalesOrder = so;
db.SalesOrderLines.Add(sol);
Trace.Assert(sol.SalesOrderID == so.SalesOrderID);
Trace.Assert(so.SalesOrderLines.Contains(sol));
...I mean, that's cool and all, but it makes me wonder; what's the point of using "Create()" instead of new, if you're always going to have to Add() the object in either case if you want it properly attached?
Most annoying to me is that the following fails:
SalesOrder so = db.SalesOrders.OrderBy(p => p.SalesOrderID).First();
SalesOrderLine sol = db.SalesOrderLines.Create();
sol.SalesOrder = so;
db.SalesOrderLines.Add(sol);
// NOTE: at this point in time, the SalesOrderId field has indeed been set to the SalesOrderId of the SalesOrder, and the Asserts will pass...
Trace.Assert(sol.SalesOrderID == so.SalesOrderID);
Trace.Assert(so.SalesOrderLines.Contains(sol));
sol.SalesOrder = db.SalesOrders.OrderBy(p => p.SalesOrderID).Skip(5).First();
// NOTE: at this point in time, the SalesOrderId field is ***STILL*** set to the SalesOrderId of the original SO, so the relationships are not being maintained!
// The Exception will be thrown!
if (so.SalesOrderID == sol.SalesOrderID)
throw new Exception("salesorderid not changed");
...that seems like total crap to me, and makes me feel like the EntityFramework, even in version 5, is like a minefield on a rice-paper bridge. Why would the above code not be able to sync the SalesOrderId on the second assignment of the SalesOrder property? What essential trick am I missing here?
I've found what I was looking for! (and learned quite a bit along the way)
What I thought EF was generating in its dynamic proxies were "Change-Tracking Proxies". These proxy classes behave more like the old EntityObject-derived partial classes from the ADO.Net Entity Data Model.
By doing some reflection on the dynamically generated proxy classes (thanks to the information I found in this post: http://davedewinter.com/2010/04/08/viewing-generated-proxy-code-in-the-entity-framework/ ), I saw that the "get" of my relationship properties was being overridden to do Lazy Loading, but the "set" was not being overridden at all, so of course nothing was happening until DetectChanges was called, and DetectChanges was using the "compare to snapshot" method of detecting changes.
Further digging ultimately lead me to this pair of very informative posts, and I recommend them for anyone using EF:
http://blog.oneunicorn.com/2011/12/05/entity-types-supported-by-the-entity-framework/
http://blog.oneunicorn.com/2011/12/05/should-you-use-entity-framework-change-tracking-proxies/
Unfortunately, in order for EF to generate Change-Tracking Proxies, the following must occur (quoted from the above):
The rules that your classes must follow to enable change-tracking
proxies are quite strict and restrictive. This limits how you can
define your entities and prevents the use of things like private
properties or even private setters. The rules are:
The class must be public and not sealed.
All properties must have public/protected virtual getters and setters.
Collection navigation properties must be declared as ICollection<T>. They cannot be IList<T>, List<T>, HashSet<T>, and so on.
Because the rules are so restrictive it's easy to get something wrong and the result is you won't get a change-tracking proxy. For example, missing a virtual, or making a setter internal.
...he goes on to mention other things about Change-Tracking proxies and why they may show better or worse performance.
In my opinion, the change-tracking proxy classes would be nice as I'm coming from the ADO.Net Entity Model world, and I'm used to things working that way, but I've also got some rather rich classes and I'm not sure if I will be able to meet all of the criteria. Additionally, that second bullet point makes me rather nervous (although I suppose I could just create a unit test that loops through all of my entities, does a Create() on each, and then tests the resulting object for the IEntityWithChangeTracker interface).
By setting all of my properties to virtual in my original example I did indeed get IEntityWithChangeTracker typed proxy classes, but I felt a little ... I don't know... "dirty" ...for using them, so I think I will just have to suck it up and remember to always set both sides of my relationships when doing assignments.
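For reference, a minimal sketch (with hypothetical Widget/WidgetPart classes) of an entity shaped to satisfy those change-tracking proxy rules:

using System.Collections.Generic;

// Hypothetical entity meeting the rules quoted above: public, not sealed,
// every property public virtual, collection navigations typed as ICollection<T>.
public class Widget
{
    public virtual int WidgetId { get; set; }
    public virtual string Name { get; set; }
    public virtual ICollection<WidgetPart> Parts { get; set; }
}

public class WidgetPart
{
    public virtual int WidgetPartId { get; set; }
    public virtual Widget Widget { get; set; }
}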
Anyway, thanks for the help!
Cheers,
Chris
No, SalesOrderLine sol is not attached to the context (although it is created by a DbSet). You must do
db.SalesOrderLines.Add(sol);
to have it attached to the context in a way that the ChangeTracker executes DetectChanges() (DbSet.Add() is one of the methods that triggers this) and thus also executes relationship fixup, which sets sol.SalesOrderID and ensures that so.SalesOrderLines contains the new object.
So, no, you don't need to execute SaveChanges(), but the object must be added to the context and relationship fixup must have been triggered.
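Building on that, a sketch (using the question's classes) of how a manual DetectChanges() call can re-sync the foreign key after re-pointing the navigation property; this reflects my understanding of EF snapshot change tracking rather than a guaranteed recipe:

using (MyContext db = new MyContext())
{
    SalesOrder so = db.SalesOrders.OrderBy(p => p.SalesOrderID).First();
    SalesOrderLine sol = db.SalesOrderLines.Create();
    sol.SalesOrder = so;
    db.SalesOrderLines.Add(sol);   // Add() runs DetectChanges(), so fixup happens here

    // Re-pointing the navigation property later: the FK is not synced until
    // the next DetectChanges() pass, which you can also trigger manually.
    sol.SalesOrder = db.SalesOrders.OrderBy(p => p.SalesOrderID).Skip(5).First();
    db.ChangeTracker.DetectChanges();

    // Assuming the sixth order has a different ID, the FK now follows it.
    Trace.Assert(sol.SalesOrderID != so.SalesOrderID);
}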

IEnumerable<T>.ConvertAll & DDD

I have an interesting need for an extension method on the IEnumerable interface - the same thing as List.ConvertAll. This has been covered before here and I found one solution here. What I don't like about that solution is he builds a List to hold the converted objects and then returns it. I suspect LINQ wasn't available when he wrote his article, so my implementation is this:
public static class IEnumerableExtension
{
    public static IEnumerable<TOutput> ConvertAll<T, TOutput>(this IEnumerable<T> collection, Func<T, TOutput> converter)
    {
        if (null == converter)
            throw new ArgumentNullException("converter");

        return from item in collection
               select converter(item);
    }
}
What I like better about this is I convert 'on the fly' without having to load the entire list of whatever TOutput's are. Note that I also changed the type of the delegate - from Converter to Func. The compilation is the same but I think it makes my intent clearer - I don't mean for this to be ONLY type conversion.
Which leads me to my question: In my repository layer I have a lot of queries that return lists of ID's - ID's of entities. I used to have several classes that 'converted' these ID's to entities in various ways. With this extension method I am able to boil all that down to code like this:
IEnumerable<Part> GetBlueParts()
{
    IEnumerable<int> keys = GetBluePartKeys();
    return keys.ConvertAll<Part>(PartRepository.Find);
}
where the 'converter' is really the repository's Find-by-ID method. In my case, the 'converter' is potentially doing quite a bit. Does anyone see any problems with this approach?
The main issue I see with this approach is it's completely unnecessary.
Your ConvertAll method is nothing different than Enumerable.Select<TSource,TResult>(IEnumerable<TSource>, Func<TSource,TResult>), which is a standard LINQ operator. There's no reason to write an extension method for something that already is in the framework.
You can just do:
IEnumerable<Part> GetBlueParts()
{
    IEnumerable<int> keys = GetBluePartKeys();
    return keys.Select<int, Part>(PartRepository.Find);
}
Note: your method would require <int,Part> as well to compile, unless PartRepository.Find only works on int, and only returns Part instances. If you want to avoid that, you can probably do:
IEnumerable<Part> GetBlueParts()
{
    IEnumerable<int> keys = GetBluePartKeys();
    return keys.Select(i => PartRepository.Find<Part>(i)); // I'm assuming that fits your "Find" syntax...
}
Why not utilize the yield keyword (and only convert each item as it is needed)?
public static class IEnumerableExtension
{
    public static IEnumerable<TOutput> ConvertAll<T, TOutput>(this IEnumerable<T> collection, Func<T, TOutput> converter)
    {
        // Note: because this is an iterator block, the null check does not
        // run until the sequence is first enumerated.
        if (null == converter)
            throw new ArgumentNullException("converter");

        foreach (T item in collection)
            yield return converter(item);
    }
}
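A quick usage sketch showing the deferred behaviour; the converter here is just a stand-in:

IEnumerable<int> keys = new[] { 1, 2, 3 };

// Nothing is converted yet; the query is only defined here.
IEnumerable<string> labels = keys.ConvertAll(k => "Part-" + k);

// Each item is converted lazily, one at a time, as the sequence is enumerated.
foreach (string label in labels)
    Console.WriteLine(label);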
