DDD model issue - domain-driven-design

type Resource struct {
	id          string
	tenantID    string
	FieldValues map[string]interface{}
	created     time.Time
	updated     time.Time
}

type ResourceField struct {
	id        string
	tenantID  string
	name      string
	fieldType string
	created   time.Time
	updated   time.Time
}
I'm new to Domain-Driven Design and I need some help modeling this in DDD with event sourcing. In this scenario, ResourceField is a global concept, i.e. it is the same for all Resource instances.
I tried to model both of them as aggregate roots. That works, but when I have to delete a ResourceField I have to update all Resource instances, removing the corresponding key from the FieldValues map. So when a ResourceField Delete command is received and performed, an event is emitted. Now Resource listens to this event and... here is the problem: I have to update every Resource with that field ID. I have to load all the Resources, update each of them, and save the new events. But what if I have thousands upon thousands of Resources?

What you are trying to do is the #1 antipattern in DDD which is trying to create a single unified model with all data in memory. This would require a disproportionate amount of computing power.
The purpose of DDD is to model your business in a dedicated layer called domain layer. The purpose of that layer is to provide your application with code that can validate your business rules, preventing rules violation by the rest of the application. Rules violation cannot happen during query operations, as queries, by definition, do not alter the system's state. It is useful only in command operations. Also, most commands only alter a small part of the domain model, called a context, and you don't need to use code from other contexts when trying to validate an operation in a given use case. This is why it is paramount to model your domain layer based on command use cases, and not as a single relational model which is what you do at the persistence layer level (or equivalent).
Each context uses a model to validate rules integrity. This model is either a single entity or a structure of multiple objects, called an aggregate, among which one is an entity called the aggregate root. By modeling both Resource and ResourceField as aggregate roots, you place them in different contexts. This requires that your business allows the existence of each independently: you can create a Resource without a ResourceField, and you can create a ResourceField without a Resource. But of course, DDD allows references between two contexts, as long as you only reference the identity property of the other context's aggregate root entity. This means Resource.FieldValues can only reference ResourceField.id.
Now your question considers a saga which sequences two use cases:
Delete a ResourceField
Read a Resource which previously referenced that ResourceField
When implementing the domain layer for the first use case, you should not consider how the relationship reference gets updated. That is the purpose of the persistence layer, and the domain layer is persistence agnostic. The purpose of the domain layer is to validate that the command does not violate the business rules, so it should focus on cases that would prevent users from deleting the ResourceField. How this is implemented depends on your business rules, but common examples would be to deny deleting objects in a specific state, objects with a creation date older than a specific delay, etc.
Then implementation of the reference update is delegated to the persistence layer. Depending on the architecture of your solution, both the Resource and the ResourceField contexts may share the same persistence backend (aka single-database) or not (aka database-per-service).
In a single-database approach, when using a relational DBMS with foreign key constraints, you can delegate that reference update to the database engine through usage of the ON DELETE clause. If using a non-foreign-key reference, or a non-relational database, it is up to your persistence layer to look for and update references in other tables.
In a database-per-service approach, you cannot rely on the persistence layer to update the reference, because the reference is stored in the persistence store of the other context, which you do not have access to. In that situation, you need to raise an event in the ResourceField context, which the Resource context subscribes to. Then it is up to the Resource persistence layer to update the references in its own persistence store.
Once the reference is updated in the persistence layer, the Resource context will be able to query the updated state transparently.
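This is also what answers the "thousands upon thousands of Resources" concern: the cleanup happens in the Resource persistence layer as one set-based operation against the store, not by loading every aggregate. A sketch in Go, using an in-memory map as a stand-in for the store (with a real relational store you would issue a single bulk statement instead, e.g. in Postgres: `UPDATE resources SET field_values = field_values - $1 WHERE tenant_id = $2`); the Store type and Apply method are illustrative:

```go
package main

import "fmt"

// ResourceFieldDeleted is the event published by the ResourceField context.
type ResourceFieldDeleted struct {
	FieldID  string
	TenantID string
}

// Store stands in for the Resource context's persistence backend:
// tenantID -> resourceID -> FieldValues.
type Store map[string]map[string]map[string]interface{}

// Apply is the event subscriber in the Resource persistence layer.
// It removes the deleted field's key from every stored FieldValues map
// without ever rehydrating a Resource aggregate.
func (s Store) Apply(e ResourceFieldDeleted) {
	for _, fieldValues := range s[e.TenantID] {
		delete(fieldValues, e.FieldID)
	}
}

func main() {
	store := Store{"t1": {
		"r1": {"color": "red", "size": 4},
		"r2": {"color": "blue"},
	}}
	store.Apply(ResourceFieldDeleted{FieldID: "color", TenantID: "t1"})
	fmt.Println(store["t1"]["r1"]) // map[size:4]
}
```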

Related

DDD: how should I check state of related aggregates inside one usecase?

Let's say I have three aggregates in the same bounded context:
PhoneAggregate
ServiceCenterAggregate
ServiceWorkAggregate
ServiceWorkAggregate references both PhoneAggregate and ServiceCenterAggregate by id.
Now when new ServiceWorkAggregate is created, a POST request is sent:
{
	"phone_id": uuid,
	"servicecenter_id": uuid
}
I can't just create a new aggregate using those IDs, I need to perform some checks: at least confirm that those UUIDs are valid entity IDs, maybe also perform some business logic checks involving current state of selected PhoneAggregate and ServiceCenterAggregate. What is the best way to do this?
I could make the ServiceWork application service or domain service include repositories for all three aggregates.
If I go this route, should the application service just pass IDs into the domain service, with the domain service loading the aggregates and doing all the checks, since there could be business logic involved?
Or should the application service fetch the aggregates and pass them into the domain service?
I could treat referenced aggregates as value objects and instead of phone_id and servicecenter_id fields inside ServiceWorkAggregate use value objects Repairable and ServiceLocation, which would be immutable copies of those aggregates, fetched from the same table etc. Then I would need only one repository in my servicework application service.
Is there a better way?
A lot of developers think that the domain layer of an application must be designed as a single normalized data model, like an RDBMS schema. This is not true, and it is the cause of much suffering.
The fact that you have 3 different contexts does not mean that the ServiceWork context is not able to manipulate data from the other two concepts. It only means that this context is not able to authoritatively change those concepts' states. Put another way, ServiceWorkRepository can read data about the Phone concept, but it cannot create, delete or change a phone's state.
Also, you don't need the same information about a phone's state when you are changing the phone's context as when you are changing a service work's context. This allows you to have two different models for the same concept, depending on the current context. Let's call them ServiceWorkContext.PhoneEntity and PhoneContext.PhoneEntity. These entities can map to the same table/columns in the database, but they can have different levels of detail: flattened relationships, fewer columns read, and no ability to change the state.
This concept is called polysemic domain modeling.
In other words, you have the following classes:
MyApp.Domain.PhoneContext.PhoneAggregate
MyApp.Domain.PhoneContext.PhoneRepository
MyApp.Domain.ServiceCenterContext.ServiceCenterAggregate
MyApp.Domain.ServiceCenterContext.ServiceCenterRepository
MyApp.Domain.ServiceWorkContext.PhoneEntity
MyApp.Domain.ServiceWorkContext.ServiceCenterEntity
MyApp.Domain.ServiceWorkContext.ServiceWorkAggregate
MyApp.Domain.ServiceWorkContext.ServiceWorkRepository
MyApp.Domain.PhoneContext.PhoneAggregate represents the phone's state when you want to create or update a phone, and validates business rules regarding those use cases.
MyApp.Domain.ServiceWorkContext.PhoneEntity represents a simplified read-only copy of the phone's state, used when trying to create or update a service work. For instance, it can contain a single Country property (which maps to Phone->Owner->Address->Country->Name) to be compared with the ServiceCenter's country, if you have a business rule stating that the phone's country must match the service center's. In that case, you may not need to read Phone.Number from the database at all.
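The polysemic idea can be sketched in Go as two types for the same underlying rows; the property names and the country-matching rule are taken from the answer's example, the rest is assumed:

```go
package main

import "fmt"

// PhoneContext: the full aggregate, authoritative for changing a phone's state.
type PhoneAggregate struct {
	ID     string
	Number string
	Owner  Owner // full relationship graph
}

type Owner struct{ Country string }

// ServiceWorkContext: a flattened, read-only view of the same data,
// carrying only what the service-work rules need.
type PhoneEntity struct {
	ID      string
	Country string // maps to Phone -> Owner -> Address -> Country -> Name
}

// canService checks the hypothetical rule that the phone's country
// must match the service center's country.
func canService(p PhoneEntity, serviceCenterCountry string) bool {
	return p.Country == serviceCenterCountry
}

func main() {
	p := PhoneEntity{ID: "p1", Country: "FR"}
	fmt.Println(canService(p, "FR"), canService(p, "DE")) // true false
}
```

Note that PhoneEntity never exposes Number: the ServiceWork context simply does not read columns it has no rule about.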

Validation / Business rules with related objects

Assume I have a class with nested properties: root.propA.propB.propC, where each prop is represented by a class: propA by classA, propB by classB, propC by classC, and root is the aggregate.
Assume that several objects correlate to each other: root1 --> (root2, root3), root2 --> root4, root4 --> root1.
In DDD the “validation logic” / “business rules” should be in each class. I want the full context in each class for executing rules. For example: propC is represented by classC; there I want a context with:
root1, root2, root3, root4 (since they are related): For example, I want to ensure, that only 1 propC has a certain value
I want the parent of propC: For example, I want to check that the parent has a certain value (should be possible with entity framework as navigation property?)
How can that be solved?
Inject a context in CTR of every class and set the properties of related root objects afterwards?
Every class has a List of related root objects?
I couldn't find examples of DDD with related objects.
root1, root2, root3, root4 (since they are related): For example, I want to ensure, that only 1 propC has a certain value
In DDD, an aggregate is a collection of related objects that are validated atomically by the business layer. The validation logic code is split across all classes (root, classA, classB, classC) for easier implementation, but the command ultimately makes updates to the aggregate's state and then validates that this target state does not violate any business rule. Once validated, it delegates the state persistence to the infrastructure layer, that will update storage in a transaction.
The root aggregate instance is a boundary that cannot be crossed inside the transaction or for business rule validation. If you need to enforce any business rules across multiple aggregate instances, then you may:
make a wider aggregate that contains all related root objects
use a two-phase commit distributed transaction
use a saga pattern
I want the parent of propC: For example, I want to check that the parent has a certain value (should be possible with entity framework as navigation property?)
The aggregate is atomic, so you always manipulate the root object and all dependent properties. The parent of root.propA.propB.propC is root.propA.propB so you can access that. You may add a navigation property to ClassC so you can access propB from propC if you want.
A word of warning though: Entity Framework is an ORM; it is used for persistence in the infrastructure layer, and is a unit of work pattern implementation. It does not apply to domain modeling, because persisting your domain state to an RDBMS with EF, rather than storing the state in an event sourcing repository, an XML file accessed through FTPS, or a NoSQL document database, is an infrastructure implementation detail.
Matching your domain model and your persistence model works only for very simple applications. You will quickly need to restrain the complexity of your business model to smaller polysemic designs instead of reading your whole persistence store.
It's up to you to choose where you draw the limit between merging overlapping aggregates data into a shared storage, and splitting them into different database-per-service. This limit is called a bounded context.

What is the purpose of child entity in Aggregate root?

[ Follow up from this question & comments: Should entity have methods and if so how to prevent them from being called outside aggregate ]
As the title says: I am not clear about the actual/precise purpose of an entity as a child in an aggregate.
According to what I've read in many places, these are the properties of an entity that is a child of an aggregate:
It has identity local to aggregate
It cannot be accessed directly but through aggregate root only
It should have methods
It should not be exposed from aggregate
In my mind, that translates to several problems:
Entity should be private to aggregate
We need a read only copy Value-Object to expose information from an entity (at least for a repository to be able to read it in order to save to db, for example)
Methods that we have on entity are duplicated on Aggregate (or, vice versa, methods we have to have on Aggregate that handle entity are duplicated on entity)
So, why do we have an entity at all instead of Value Objects only? It seems much more convenient to have only value objects, all methods on the aggregate, and to expose value objects (which we already do when copying entity info).
PS.
I would like to focus to child entity on aggregate, not collections of entities.
[UPDATE in response to Constantin Galbenu answer & comments]
So, effectively, you would have something like this?
public class Aggregate {
	...
	private _someNestedEntity;

	public SomeNestedEntityImmutableState EntityState {
		get {
			return this._someNestedEntity.getState();
		}
	}

	public ChangeSomethingOnNestedEntity(params) {
		this._someNestedEntity.someCommandMethod(params);
	}
}
You are thinking about data. Stop that. :) Entities and value objects are not data. They are objects that you can use to model your problem domain. Entities and Value Objects are just a classification of things that naturally arise if you just model a problem.
Entity should be private to aggregate
Yes. Furthermore all state in an object should be private and inaccessible from the outside.
We need a read only copy Value-Object to expose information from an entity (at least for a repository to be able to read it in order to save to db, for example)
No. We don't expose information that is already available. If the information is already available, that means somebody is already responsible for it. So contact that object to do things for you, you don't need the data! This is essentially what the Law of Demeter tells us.
"Repositories" as often implemented do need access to the data, you're right. They are a bad pattern. They are often coupled with ORM, which is even worse in this context, because you lose all control over your data.
Methods that we have on entity are duplicated on Aggregate (or, vice versa, methods we have to have on Aggregate that handle entity are duplicated on entity)
The trick is, you don't have to. Every object (class) you create is there for a reason: as described previously, to create an additional abstraction, to model a part of the domain. If you do that, an "aggregate" object that exists on a higher level of abstraction will never want to offer the same methods as the objects below. That would mean there is no abstraction whatsoever.
This use-case only arises when creating data-oriented objects that do little else than holding data. Obviously you would wonder how you could do anything with these if you can't get the data out. It is however a good indicator that your design is not yet complete.
Entity should be private to aggregate
Yes. And I do not think it is a problem. Continue reading to understand why.
We need a read only copy Value-Object to expose information from an entity (at least for a repository to be able to read it in order to save to db, for example)
No. Make your aggregates return the data that needs to be persisted and/or needs to be raised in an event from every method of the aggregate.
Raw example. The real world would need a more fine-grained response, and maybe the performMove function needs to use the output of game.performMove to build proper structures for persistence and the eventPublisher:
public void performMove(String gameId, String playerId, Move move) {
	Game game = this.gameRepository.load(gameId); // Game is the AR
	List<Event> events = game.performMove(playerId, move); // do something
	persistence.apply(events); // events contain the entities' IDs, so the persistence layer can apply the event and save the changes using those IDs and the changed data, which comes in the event too
	this.eventPublisher.publish(events); // notify the rest of the system that something happened
}
Do the same with inner entities. Let the entity return the data that changed because of its method call, including its ID; capture this data in the AR and build the proper output for persistence and the eventPublisher. This way you do not even need to expose a public read-only ID property from the entity to the AR, nor the AR's internal data to the application service. This is the way to get rid of getter/setter bag objects.
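The same pattern can be sketched in Go. All names here (Game, Board, PiecePlaced) are illustrative; the point is that the inner entity returns the events describing its own change, and the AR collects them after validating the command:

```go
package main

import "fmt"

// Event carries the IDs and changed data the persistence layer needs.
type Event struct {
	Name string
	Data map[string]interface{}
}

// Board is an inner entity; it never exposes its state directly,
// it returns events describing what its method changed.
type Board struct {
	id    string
	moves int
}

func (b *Board) place(player, square string) []Event {
	b.moves++
	return []Event{{Name: "PiecePlaced", Data: map[string]interface{}{
		"boardID": b.id, "player": player, "square": square,
	}}}
}

// Game is the aggregate root: it validates the command, delegates to
// the inner entity, and hands the collected events back to the caller.
type Game struct {
	id    string
	board *Board
	over  bool
}

func (g *Game) PerformMove(player, square string) ([]Event, error) {
	if g.over {
		return nil, fmt.Errorf("game %s is finished", g.id)
	}
	return g.board.place(player, square), nil
}

func main() {
	g := &Game{id: "g1", board: &Board{id: "b1"}}
	events, _ := g.PerformMove("alice", "e4")
	fmt.Println(events[0].Name) // PiecePlaced
}
```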
Methods that we have on entity are duplicated on Aggregate (or, vice versa, methods we have to have on Aggregate that handle entity are duplicated on entity)
Sometimes the business rules to check and apply belong exclusively to one entity and its internal state, and the AR just acts as a gateway. That is OK, but if you find this pattern everywhere, it is a sign of wrong AR design. Maybe the inner entity should be the AR instead of an inner entity, maybe you need to split the AR into several ARs (and one of them is the old inner entity), etc. Do not be afraid of having classes that just have one or two methods.
In response to dee zg's comments:
What does persistence.apply(events) precisely do? Does it save the whole aggregate or only the entities?
Neither. Aggregates and entities are domain concepts, not persistence concepts; you can have a document store, column store, relational database, etc. that does not need to match your domain concepts one to one. You do not read aggregates and entities from persistence; you build aggregates and entities in memory with data read from persistence. The aggregate itself does not need to be persisted; that is just a possible implementation detail. Remember that the aggregate is just a construct to organize business rules; it is not meant to be a representation of state.
Your events have context (user intents) and the data that has been changed (along with the IDs needed to identify things in persistence), so it is incredibly easy to write an apply function in the persistence layer that knows what to execute in order to apply the event and persist the changes (e.g. which SQL instructions, in the case of a relational DB).
Could you please provide an example of when & why it is better (or even inevitable?) to use a child entity instead of a separate AR referenced by its ID as a value object?
Why do you design and model a class with state and behaviour?
To abstract, encapsulate, reuse, etc.; basic SOLID design. If the entity has everything needed to ensure the domain rules and invariants for an operation, then the entity is the AR for that operation. If you need extra domain rule checks that cannot be done by the entity (i.e. the entity does not have enough inner state to accomplish the check, or the check does not naturally fit into the entity and what it represents), then you have to redesign; sometimes that means modeling an aggregate that does the extra domain rule checks and delegates the other checks to the inner entity, sometimes it means changing the entity to include the new things. It is too dependent on the domain context for me to say there is a fixed redesign strategy.
Keep in mind that you do not model aggregates and entities in your code. You model just classes with behaviour to check domain rules, plus the state needed to do those checks and respond with the changes. These classes can act as aggregates or entities for different operations. These terms are used just to help communicate and understand the role of the class in each operation context. Of course, you can be in the situation where the operation does not fit into an entity, and you could model an aggregate with a V.O. persistence ID, and that is OK (sadly, in DDD, without knowing the domain context, almost everything is OK by default).
Do you want some more enlightenment from someone who explains things much better than me? (Not being a native English speaker is a handicap for these complex issues.) Take a look here:
https://blog.sapiensworks.com/post/2016/07/14/DDD-Aggregate-Decoded-1
http://blog.sapiensworks.com/post/2016/07/14/DDD-Aggregate-Decoded-2
http://blog.sapiensworks.com/post/2016/07/14/DDD-Aggregate-Decoded-3
It has identity local to aggregate
In a logical sense, probably, but concretely implementing this with the persistence means we have is often unnecessarily complex.
We need a read only copy Value-Object to expose information from an entity (at least for a repository to be able to read it in order to save to db, for example)
Not necessarily, you could have read-only entities for instance.
The repository part of the problem was already addressed in another question. Reads aren't an issue, and there are multiple techniques to prevent write access from the outside world but still allow the persistence layer to populate an entity directly or indirectly.
So, why do we have an entity at all instead of Value Objects only?
You might be somewhat hastily putting concerns in the same basket which really are slightly different:
Encapsulation of operations
Aggregate level invariant enforcement
Read access
Write access
Entity or VO data integrity
Just because Value Objects are best made immutable and don't enforce aggregate-level invariants (they do enforce their own data integrity though) doesn't mean Entities can't have a fine-tuned combination of some of the same characteristics.
These questions that you have do not exist in a CQRS architecture, where the Write model (the Aggregate) is different from a Read model. In a flat architecture, the Aggregate must expose read/query methods, otherwise it would be pointless.
Entity should be private to aggregate
Yes, in this way you are clearly expressing the fact that they are not for external use.
We need a read only copy Value-Object to expose information from an entity (at least for a repository to be able to read it in order to save to db, for example)
The Repositories are a special case and should not be seen in the same way as Application/Presentation code. They could be part of the same package/module; in other words, they should be able to access the nested entities.
The entities can be viewed/implemented as object with an immutable ID and a Value object representing its state, something like this (in pseudocode):
class SomeNestedEntity
{
	private readonly ID;
	private SomeNestedEntityImmutableState state;

	public getState() { return state; }
	public someCommandMethod() { state = state.mutateSomehow(); }
}
So you see? You could safely return the state of the nested entity, as it is immutable. There would be some problem with the Law of Demeter but this is a decision that you would have to make; if you break it by returning the state you make the code simpler to write for the first time but the coupling increases.
Methods that we have on entity are duplicated on Aggregate (or, vice versa, methods we have to have on Aggregate that handle entity are duplicated on entity)
Yes, this protects the Aggregate's encapsulation and also permits the Aggregate to protect its invariants.
I won't write too much, just an example: a car and a gear. The car is the aggregate root; the gear is a child entity.

Solve apparent need for outside reference to entity inside aggregate (DDD)

I'm trying to follow DDD principles to create a model for determining whether or not an Identity has access to an Action belonging to a Resource.
A Resource (e.g. a webservice) is something that holds a number of Actions (e.g. methods), which can be accessed or not. An Identity is something that wants to access one or more Actions on a Resource. For example, someone uses an api-key in a call to a webservice method, and it must be determined whether or not access is allowed.
As I currently see it, Identity and Resource are aggregate roots, and Action is an entity belonging to Resource. It doesn't seem to make sense for an Action to live on its own; it will always belong to one Resource. An Identity needs to know to which Resource Actions it has access. This seems to suggest the following model.
However, as I understand it, this violates the principle that something outside an aggregate cannot reference an entity within the aggregate. It must go through the root. Then I'm thinking, what if Action was the aggregate root and Resource an entity? But that doesn't seem very logical to me. I've also been thinking of merging Resource and Action into one entity, which would then be an aggregate root, but that also seems wrong to me.
So it leaves me kind of stuck on how to model this correctly using DDD principles. Anyone have a good idea on how to model this?
Update: The model I'm trying to create is the identity model for defining which resource actions an Identity is allowed to access. It is not a model for the actual implementation of resources and actions.
Update 2 - invariants:
Id of all objects is given at birth, is unique, and doesn't change. ApiKey of Identity must be unique across all Identities.
Name of Action must be unique within aggregate, but two different Resources can have Actions with same names, e.g. Resource "R1" can have an Action "A1" and Resource "R2" can also have an Action "A1", but the two "A1"s are not the same.
Query or Write Operation?
The domain model in terms of aggregates and entities has its purpose in DDD in order to simplify the expression and enforcement of the invariants as write operations are applied to the model.
As mentioned in #VoiceOfUnreason's answer, the question 'Can this user do action A on resource R' is a question that doesn't necessarily need to flow through the domain model - it can be answered with a query against either a pre-projected read-only model, or standard SQL querying against the tables that make up the write model persistence (depend on your needs).
Splitting Contexts to Simplify Invariants
However, your question, whilst mostly about how to identify if an identity is allowed to carry out an action, is implicitly seeking a simpler model for the updating of resources, actions and permissions. So to explore that idea... there are implicitly two types of write operations:
Defining available resources and actions
Defining which resource action combinations a particular identity is permitted to carry out
It's possible that the models for these two types of operations might be simplified if they were split into different bounded contexts.
In the first, you'd model as you have done, an Aggregate with Resource as the aggregate root and Action as a contained entity. This permits enforcing the invariant that the action name must be unique within a resource.
As changes are made in this context, you publish events e.g. ActionAddedToResource, ActionRemovedFromResource.
In the second context, you'd have three aggregates:
Identity
ResourceAction
Properties: Id, ResourceId, ResourceName, ActionId, ActionName
Permission
ResourceAction instances would be updated based on events published from the first context: created on ActionAddedToResource, removed on ActionRemovedFromResource. If there is a resource with no actions, there is no ResourceAction at all.
Permission would contain two identity references - IdentityId and ResourceActionId
This way, when carrying out the operation "Permit this user to do this action on this resource", the operation is just the creation of a new Permission instance. This reduces the set of operations that affect the Identity aggregate's consistency boundary, assuming there are no invariants that require the concept of a 'permission' to be enforced within the Identity aggregate.
This also simplifies the query side of things, as you just need to search for a Permission entry with matching identityId, resourceName and actionName after joining Permissions to ResourceActions.
Responsibility Layers
The DDD Book in the section on Strategic Design refers to organising your contexts according to responsibility layers. To use the terms from the book, the above suggestion is based on the idea of a 'capability' responsibility layer (defining resources and actions) and an 'operational' responsibility layer (defining identity permissions and checking identity permissions).
For example, someone uses an api-key in a call to a webservice method, and it must be determined whether or not access is allowed.
That's a query. Fundamentally, there's nothing wrong with answering a query by joining two read-only copies of entities that belong to different aggregates.
You do need to be aware that, because the aggregates can change independently of each other, and because they can change independently of your query, the answer you get when you do the join may be stale, and not entirely consistent.
For example, you may be joining a copy of an Identity written 100ms ago to a copy of an Action written 200ms ago. Either of the aggregates could be changing while you are running the query.
Based on the invariants you mention, Identity can contain a Resources dictionary/map where resourceId is the key and the value is a set of unique action names/ids. This gives you uniqueness of action names for each resource per identity:
Map<Resource, Set<Action>>
Alternatively, you could have a set/list of Resources and they have a collection of Actions on them. Uniqueness can be enforced by the collection types available in the language you're coding in:
Set<Resource> Resources

class Resource {
	Set<Action> Actions
}
Even simpler, just create a Resource-Action key by combining the two ids and store it in a set or something to give you uniqueness:
Resource1-Action1
Resource1-Action2
Resource2-Action1
...etc
You can then have a method on Identity to add a new Resource-Action combination.
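The combined-key idea can be sketched in Go; the Identity type and the Permit/Can method names are hypothetical, but the technique (set membership on a "resourceID/actionID" key) is exactly what the answer describes:

```go
package main

import "fmt"

// Identity stores permitted resource-action pairs as combined keys,
// so set membership gives uniqueness for free.
type Identity struct {
	id          string
	permissions map[string]struct{} // "resourceID/actionID"
}

func NewIdentity(id string) *Identity {
	return &Identity{id: id, permissions: map[string]struct{}{}}
}

func key(resourceID, actionID string) string {
	return resourceID + "/" + actionID
}

// Permit records a resource-action combination; adding it twice is a no-op.
func (i *Identity) Permit(resourceID, actionID string) {
	i.permissions[key(resourceID, actionID)] = struct{}{}
}

// Can answers the access query with a single map lookup.
func (i *Identity) Can(resourceID, actionID string) bool {
	_, ok := i.permissions[key(resourceID, actionID)]
	return ok
}

func main() {
	id := NewIdentity("api-key-1")
	id.Permit("R1", "A1")
	fmt.Println(id.Can("R1", "A1"), id.Can("R2", "A1")) // true false
}
```

Because the key combines both IDs, two Resources can each have an action named "A1" without colliding, matching the invariant from Update 2.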
I don't see anything in your description to warrant Actions being Entities as they appear to have no identity of their own.
This is really simple though, so I am presuming you've simplified the domain considerably.
I will also expand on the bit identified by #VoiceOfUnreason:
For example, someone uses an api-key in a call to a webservice method, and it must be determined whether or not access is allowed.
How would the particular bit of exposed functionality know what security is applied to it? The answer is provided by #Chris Simon: Permission.
I have a common implementation that I use that has not been distilled into an Identity & Access BC of its own but follows closely with what you are attempting --- I hope :)
A Session has a list of Permission strings. Typically I use a URI to represent a permission since it is quite readable, something like my-system:\\users\list. Anyway, how the user is assigned these permissions could be anything. There may very well be a Role containing permissions, with a user assigned to one or more roles; or the user may even have a custom list of permissions.
When a SessionToken is requested (via authentication) the server retrieves the permissions for the relevant user and creates a session with the relevant permissions assigned to it. This results in a read-side token/permission.
Each exposed bit of functionality (such as a rest endpoint) is assigned a permission. In c# web-api it is simply an attribute on the method:
[RequiredPermission("my-system:\\order\create")]
My session token is passed in the header and a quick check determines whether the session has expired and whether the session (assigned to the user) has access to the resource.
Using your design, the Action may very well carry the required Permission. The User would still require a list of either roles or UserAction entries that contain, perhaps, the ResourceId and ActionId. When the user logs in, the read-optimized session structures are created using that structure.
If there is an arbitrary Action list that can be assigned to any Resource then both Resource and Action are probably aggregates. Then you would need a ResourceAction as mentioned by #Chris Simon. The ResourceAction would then contain the Permission.
That's my take on it...

When is it appropriate to map a DTO back to its Entity counterpart

From what I've read and implemented, a DTO is an object that holds a subset of values from a data model; in most cases these are immutable objects.
What about the case where I need to pass either new value or changes back to the database?
Should I work directly with the data model/actual entity from my DAL in my Presentation layer?
Or should I create a DTO that can be passed from the presentation layer to the business layer, then convert it to an entity and update it in the DB via an ORM call? Is this writing too much code? I'm assuming this is needed if the presentation layer has no concept of the data model. If we go with this approach, should I fetch the object again at the BLL layer before committing the change?
A few thoughts :
DTO is a loaded term, but as it stands for Data Transfer Object, I see it more as a purely technical, potentially serializable container to get data through from one point to another, usually across tiers or maybe layers. Inside a layer that deals with business concerns, such as the Domain layer in DDD, these little data structures that circulate tend to be named Value Objects instead, because they have a business meaning and are part of the domain's Ubiquitous Language. There are all sorts of subtle differences between DTO's and Value Objects, such as you usually don't need to compare DTO's, while comparison and equality is an important concern in VO's (two VO's are equal if their encapsulated data is equal).
DDD has an emphasis on the idea of rich domain model. That means you usually don't simply map DTO's one-to-one to domain entities, but try to model business actions as intention-revealing methods in your entities. For instance, you wouldn't use setters to modify a User's Street, City and ZipCode but rather call a moveTo(Address newAddress) method instead, Address being a Value Object declared in the Domain layer.
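An intention-revealing method like moveTo could be sketched in Go as follows; the User entity, its fields, and the validation rule are illustrative assumptions:

```go
package main

import (
	"errors"
	"fmt"
)

// Address is a Value Object declared in the domain layer.
type Address struct {
	Street, City, ZipCode string
}

// User is a domain entity. Note there are no public setters for
// street, city, or zip code; the business action is the API.
type User struct {
	id      string
	address Address
}

// MoveTo models the business action of a user moving. The entity
// validates the new address before accepting it, so an invalid
// state cannot be set from outside.
func (u *User) MoveTo(newAddress Address) error {
	if newAddress.Street == "" || newAddress.City == "" {
		return errors.New("incomplete address")
	}
	u.address = newAddress
	return nil
}

func main() {
	u := &User{id: "u-1"}
	if err := u.MoveTo(Address{Street: "1 Main St", City: "Springfield", ZipCode: "12345"}); err != nil {
		panic(err)
	}
	fmt.Println(u.address.City) // Springfield
}
```

Contrast this with three independent setters: the single method keeps the invariant ("an address is complete") in one place and names the business intent.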
DTO's usually don't reach the Domain layer but go through the filter of an Application layer. It can be Controllers or dedicated Application Services. It's Application layer objects that know how to turn DTO's they got from the client into the correct calls to Domain layer Entities (usually Aggregate Roots loaded from Repositories). Another level of refinement above that is to build task-based UIs where the user doesn't send data-centric DTO's but Commands that reflect their end goal.
So, mapping DTO's to Entities is not really the DDD way of doing things, it denotes more of a CRUD-oriented approach.
Should I work directly with the data model/actual entity from my DAL in my Presentation layer?
This is okay for small to medium projects. But when you have a large project with more than 5 developers where different layers are assigned to different teams, then the project benefits from using a DTO to separate the Data Layer from the Presentation Layer.
With a DTO in the middle, any changes in the presentation layer won't affect the data layer (and vice versa).
Or should I create a DTO that can be passed from the presentation layer to the business layer, then convert it to an entity and update it in the DB via an ORM call? Is this writing too much code? I'm assuming this is needed if the presentation layer has no concept of the data model. If we go with this approach, should I fetch the object again at the BLL layer before committing the change?
For creating a new entity, then that is the usual way to go (for example "new user"). For updating an existing entity, you don't convert a DTO to an entity, rather you fetch the existing entity, map the new values then initiate an ORM update.
void UpdateUser(UserDto userDto)
{
    // Fetch the existing entity
    User user = userRepository.GetById(userDto.ID);
    // Map the new values onto it
    user.FirstName = userDto.FirstName;
    user.LastName = userDto.LastName;
    // ORM update
    userRepository.Update(user);
    userRepository.Commit();
}
For large projects with many developers, the disadvantage of writing too much code is minimal compared to the huge advantage of decoupling it provides.
See my post about Why use a DTO
My opinion is that DTOs represent the contracts (or messages, if you will) that form the basis for interaction between an Aggregate Root and the outside world. They are defined in the domain, and the AR needs to be able to both handle incoming instances and provide outgoing instances. (Note that in most cases, the DTO instances will be either provided by the AR or handled by the AR, but not both, because having one DTO that flows both ways is usually a violation of separation of concerns.)
At the same time, the AR is responsible for providing the business logic through which the data contained in the DTOs are processed. The presentation layer (or any other actor, including the data access layer, for that matter) is free to put whatever gibberish it wants into a DTO and request that the AR process it, and the AR must be able to recognize the contents of the DTO as gibberish and raise an exception.
Because of this requirement, it is never appropriate to simply map a DTO back to its Entity counterpart.
The DTO must always be processed through the logic in the AR in order to effect changes in the Entity that may bring it to the state described by the DTO.
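A minimal Go sketch of that idea, with a hypothetical RenameUserDto and User aggregate (the names and the validation rule are assumptions for illustration):

```go
package main

import (
	"errors"
	"fmt"
)

// RenameUserDto is an incoming contract. The aggregate must not
// trust its contents; any actor may have filled it with gibberish.
type RenameUserDto struct {
	FirstName string
	LastName  string
}

// User is the aggregate root; its fields are only reachable
// through business-logic methods.
type User struct {
	firstName, lastName string
}

// Rename processes the DTO through the aggregate's logic. Invalid
// content is rejected with an error rather than mapped blindly
// onto the entity.
func (u *User) Rename(dto RenameUserDto) error {
	if dto.FirstName == "" || dto.LastName == "" {
		return errors.New("names must not be empty")
	}
	u.firstName = dto.FirstName
	u.lastName = dto.LastName
	return nil
}

func main() {
	u := &User{}
	fmt.Println(u.Rename(RenameUserDto{}) != nil) // true: gibberish is rejected
	fmt.Println(u.Rename(RenameUserDto{FirstName: "Ada", LastName: "Lovelace"})) // <nil>
}
```

The entity never exposes a path by which the DTO's raw values can reach its state without passing through Rename, which is the "processed through the logic in the AR" requirement in code.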

Resources