Should the calling or the called object process the data?

I usually try to encapsulate my code, meaning each object works on its own data via public methods. The problem is that many stakeholders/other objects of an object lead to many requirements, and thus many methods on that object, each of which is used by only one stakeholder. In such a scenario, should I rather pass the raw data to the stakeholder so that it can process the data itself?
When should an object process its own data for other objects (offering methods), and when should it simply return its data (via getters) and let the caller do the processing itself?

I think it depends on many factors, such as:
the purpose of the class (some real-world object)
the data that the object has (a description of this object)
the desired behavior (some actions of this object)
For example, we have two objects:
a big bucket with water
a kettle.
Then we can ask ourselves:
can the big bucket have a boil() method, or should the big bucket just have a giveWater() method? From this it can be concluded that the bucket of water should just expose a getter that provides water to other users, such as the kettle.
By using this technique you can decide when an object should process its own data for other objects (offering methods) and when it should simply return its data (via getters).
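As an illustration, here is a minimal sketch of that split, with hypothetical Bucket, Kettle and Water classes (none of these names come from the question): the bucket only hands its water out, while boiling stays with the kettle, the object whose purpose it is.
using System;

// Hypothetical sketch: the bucket only exposes its water,
// the kettle owns the behavior that needs that water.
public class Water
{
    public double Liters { get; }
    public Water(double liters) { Liters = liters; }
}

public class Bucket
{
    private Water water;
    public Bucket(Water water) { this.water = water; }

    // The bucket does not boil; it only gives its data away.
    public Water GiveWater()
    {
        var given = water;
        water = new Water(0);
        return given;
    }
}

public class Kettle
{
    private Water content = new Water(0);

    public void Fill(Water water) { content = water; }

    // Boiling is the kettle's purpose, so the method lives here.
    public void Boil()
    {
        Console.WriteLine($"Boiling {content.Liters} liters of water.");
    }
}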
In addition, you should apply the Single Responsibility Principle (SRP) of SOLID in your classes. As Robert C. Martin says:
Martin defines a responsibility as a reason to change, and concludes
that a class or module should have one, and only one, reason to be
changed (e.g. rewritten).
So the SRP defines a responsibility of a class as a reason to change, and states that a class should have only one reason to change.
So if someone edits logging code in the class Radio, then they are violating the SRP; they should edit the logging code in a Log class instead. These classes can communicate through references or composition.
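A hedged sketch of that separation, reusing the Radio and Log names from the example above (the method names are invented for illustration): the radio delegates logging rather than owning it.
using System;

// Illustrative only: logging has exactly one home, the Log class.
public class Log
{
    public void Write(string message)
    {
        // Changing the log format or destination touches only this class.
        Console.WriteLine($"[{DateTime.UtcNow:O}] {message}");
    }
}

public class Radio
{
    private readonly Log log;
    public Radio(Log log) { this.log = log; } // composition

    public void TuneTo(double frequencyMhz)
    {
        // Radio-specific behavior stays here...
        log.Write($"Tuned to {frequencyMhz} MHz"); // ...logging is delegated.
    }
}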

Related

ABAP DDD how to correctly implement add/update/remove child from aggregate via RFC?

I'm trying to follow DDD principles for my current project. Unfortunately I have to use RFCs due to technical constraints, so no OData and no REST. It's quite a long question; I hope it's OK to ask this on Stack Overflow.
In any case, I have an entity class WorkOrder with a list of Operation objects.
I have a WorkOrderRepository class with a SAVE method that only receives a WorkOrder object and is able to save everything (header data, address, etc.) in one go, no matter whether it's a creation, update or deletion. The repository hides the BAPI calls from the rest.
Now I want to implement the logic to add/update/remove Operations to the work order object and I'm not sure even if the names I give to the methods are correct. Maybe they should be insert/edit/delete... I'm quite confused with this since in every place I look they use different names.
But the most important are my 2 specific doubts:
Should I have only 1 RFC that receives all the updates to a WorkOrder entity, including header and operations? Or should I create 1 RFC per operation that only handles one operation at a time? Bear in mind that the UI mockup expects that the user can add/delete multiple operations before clicking a SAVE button, that RFC has an implicit commit, and that to my knowledge a DDD entity should always be updated in one call.
Option 1:
FUNCTION ZWORKORDER_HDR_UPD
  IMPORTING
    VALUE(I_WORKORDER_ID) TYPE AUFNR
    VALUE(I_WORKORDER_HDR_CHG) TYPE ZWORKORDER_HDR_CHG
    VALUE(I_WORKORDER_HDR_UPD) TYPE ZWORKORDER_HDR_UPD "X structure for the BAPI
    VALUE(I_OPERATIONS_CHG) TYPE ZOPERATIONS_CHG
    VALUE(I_OPERATIONS_UPD) TYPE ZOPERATIONS_UPD
    VALUE(I_OPERATIONS_DEL) TYPE ZOPERATIONS_DEL
  EXPORTING
    VALUE(E_ERRORS) TYPE BAPIRET2_T.
Option 2:
FUNCTION ZWORKORDER_OPERATION_CRT
  IMPORTING
    VALUE(I_WORKORDER_ID) TYPE AUFNR
    VALUE(I_OPERATION) TYPE ZOPERATION_CHG
  EXPORTING
    VALUE(E_ERRORS) TYPE BAPIRET2_T.

FUNCTION ZWORKORDER_OPERATION_UPD
  IMPORTING
    VALUE(I_WORKORDER_ID) TYPE AUFNR
    VALUE(I_OPERATION_CHG) TYPE ZOPERATION_CHG
    VALUE(I_OPERATION_UPD) TYPE ZOPERATION_UPD
  EXPORTING
    VALUE(E_ERRORS) TYPE BAPIRET2_T.

FUNCTION ZWORKORDER_OPERATION_DEL
  IMPORTING
    VALUE(I_WORKORDER_ID) TYPE AUFNR
    VALUE(I_OPERATION_ID) TYPE ZOPERATION_ID
  EXPORTING
    VALUE(E_ERRORS) TYPE BAPIRET2_T.
How should my WorkOrder methods look to handle this? I'm especially confused with the update method, since I'm not sure if I should first get the existing operation and then update it, or let the parent class do it. But maybe my approaches are completely wrong from the root.
Option 1:
workorder->add_operation( i_operation ). "Pass flat structure from RFC? Or first create object?
workorder->update_operation( i_operation_chg
i_operation_upd ).
workorder->delete_operation( i_operation_id ).
Option 2:
workorder->add_operation( ).
operation = workorder->get_operation(i_operation_chg->get_id())
operation->update( i_operation_chg
i_operation_upd ).
operation->delete_operation( i_operation_id ).
The simplest solution is always the best (KISS and YAGNI principles). It doesn't really matter whether you create 1 or 3 RFC-enabled function modules, so if you can achieve your goal with one function module, then do it with one.
I think you need two RFC-enabled function modules: one to validate the maintained operations (doing the validations as far as possible) without saving anything to the database, and another one, called after the user clicks the SAVE button, to save the whole "WorkOrder" including the maintained operations (at that point the complete validation also takes place).
If you don't need to define an "operation" class for something else right now, then keep it simple; there is no need to instantiate an object. Note that you may create an "operation" class with private static methods, being a friend of the "workorder" class (only this class can use the operation class), just to organize your code better.
PS: although I don't know what "Domain-Driven Design" is, I don't see how your question is related to it, because it just looks like simple program design.
We also work with DDD, although luckily not with RFCs but with Gateway/OData. Since REST by definition is stateless, we always commit in our application service layer.
What we do is have three "DDD" entities
app_service, domain_service and repository (and also some data containers like aggregates), where the app_service in your case would expose the methods create, update and delete, and possibly also validate.
Then I would write four thin RFCs (CRUD + validate) that basically pass the data to the app service.
As for being able to handle multiple updates in each save: we always model our OData (your RFC) after the UI requirements; it is then the app_service's task to make sense of the data from a system point of view.
We make heavy use of ABAP Clean Code (it's even part of our ATC checks), and it clearly states that you should have separate methods, which is also OO best practice.
https://github.com/SAP/styleguides/blob/master/clean-abap/CleanABAP.md#split-methods-instead-of-adding-optional-parameters

What is the purpose of child entity in Aggregate root?

[ Follow up from this question & comments: Should entity have methods and if so how to prevent them from being called outside aggregate ]
As the title says: I am not clear about what the actual/precise purpose of an entity as a child in an aggregate is.
According to what I've read in many places, these are the properties of an entity that is a child of an aggregate:
It has identity local to aggregate
It cannot be accessed directly but through aggregate root only
It should have methods
It should not be exposed from aggregate
In my mind, that translates to several problems:
Entity should be private to aggregate
We need a read only copy Value-Object to expose information from an entity (at least for a repository to be able to read it in order to save to db, for example)
Methods that we have on entity are duplicated on Aggregate (or, vice versa, methods we have to have on Aggregate that handle entity are duplicated on entity)
So, why do we have an entity at all instead of Value Objects only? It seems much more convenient to have only value objects, all methods on the aggregate, and to expose value objects (which we already do when copying entity info).
PS.
I would like to focus on the child entity in an aggregate, not on collections of entities.
[UPDATE in response to Constantin Galbenu answer & comments]
So, effectively, you would have something like this?
public class Aggregate {
    ...
    private SomeNestedEntity _someNestedEntity;

    public SomeNestedEntityImmutableState EntityState {
        get {
            return this._someNestedEntity.getState();
        }
    }

    public void ChangeSomethingOnNestedEntity(params) {
        this._someNestedEntity.someCommandMethod(params);
    }
}
You are thinking about data. Stop that. :) Entities and value objects are not data. They are objects that you can use to model your problem domain. Entities and Value Objects are just a classification of things that naturally arise if you just model a problem.
Entity should be private to aggregate
Yes. Furthermore all state in an object should be private and inaccessible from the outside.
We need a read only copy Value-Object to expose information from an entity (at least for a repository to be able to read it in order to save to db, for example)
No. We don't expose information that is already available. If the information is already available, that means somebody is already responsible for it. So contact that object to do things for you, you don't need the data! This is essentially what the Law of Demeter tells us.
"Repositories" as often implemented do need access to the data, you're right. They are a bad pattern. They are often coupled with ORM, which is even worse in this context, because you lose all control over your data.
Methods that we have on entity are duplicated on Aggregate (or, vice versa, methods we have to have on Aggregate that handle entity are duplicated on entity)
The trick is, you don't have to. Every object (class) you create is there for a reason: as described previously, to create an additional abstraction, to model a part of the domain. If you do that, an "aggregate" object that exists on a higher level of abstraction will never want to offer the same methods as the objects below it. That would mean there is no abstraction whatsoever.
This use-case only arises when creating data-oriented objects that do little else than holding data. Obviously you would wonder how you could do anything with these if you can't get the data out. It is however a good indicator that your design is not yet complete.
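To make the contrast concrete, here is a small hypothetical example (Order and OrderLine are invented names, not from the question): instead of asking the object for its data and computing outside of it, callers tell the object what they need and it answers.
using System.Collections.Generic;
using System.Linq;

// Asking (avoid): callers would pull the lines out and total them up themselves.
// Telling (prefer): the order owns the rule and callers just ask for the result.
public class OrderLine
{
    public decimal Price { get; }
    public int Quantity { get; }
    public OrderLine(decimal price, int quantity) { Price = price; Quantity = quantity; }

    public decimal Subtotal() => Price * Quantity;
}

public class Order
{
    private readonly List<OrderLine> lines = new List<OrderLine>();

    public void AddLine(decimal price, int quantity) => lines.Add(new OrderLine(price, quantity));

    // The object answers the question itself; its lines never leave it.
    public decimal Total() => lines.Sum(line => line.Subtotal());
}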
Entity should be private to aggregate
Yes. And I do not think it is a problem. Continue reading to understand why.
We need a read only copy Value-Object to expose information from an entity (at least for a repository to be able to read it in order to
save to db, for example)
No. Make your aggregates return the data that needs to be persisted and/or needs to be raised in an event from every method of the aggregate.
Raw example. The real world would need a more fine-grained response, and maybe the performMove function needs to use the output of game.performMove to build proper structures for persistence and the eventPublisher:
public void performMove(String gameId, String playerId, Move move) {
    Game game = this.gameRepository.load(gameId); // Game is the AR
    List<Event> events = game.performMove(playerId, move); // do something
    persistence.apply(events); // events contain the IDs of the entities, so the persistence layer can apply each event and save the changes using those IDs and the changed data carried in the event
    this.eventPublisher.publish(events); // notify the rest of the system that something happened
}
Do the same with inner entities. Let the entity return the data that changed because of its method call, including its ID; capture this data in the AR and build the proper output for persistence and the eventPublisher. This way you do not even need to expose a public read-only ID property on the entity to the AR, nor the AR's internal data to the application service. This is the way to get rid of getter/setter bag objects.
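A hedged C# sketch of that technique, with invented WorkOrder/Operation/OperationUpdated names: the inner entity reports its own change (ID included), and the aggregate root just collects those changes as the output for persistence and the event publisher.
using System.Collections.Generic;

// Illustrative change record returned by the inner entity.
public class OperationUpdated
{
    public string OperationId { get; }
    public string NewDescription { get; }
    public OperationUpdated(string operationId, string newDescription)
    {
        OperationId = operationId;
        NewDescription = newDescription;
    }
}

public class Operation // inner entity, never exposed outside the aggregate
{
    private readonly string id;
    private string description;
    public Operation(string id, string description) { this.id = id; this.description = description; }

    // The entity applies the change and reports what changed, ID included.
    public OperationUpdated ChangeDescription(string newDescription)
    {
        description = newDescription;
        return new OperationUpdated(id, newDescription);
    }
}

public class WorkOrder // aggregate root
{
    private readonly Dictionary<string, Operation> operations = new Dictionary<string, Operation>();

    public void AddOperation(string operationId, string description)
    {
        operations[operationId] = new Operation(operationId, description);
    }

    // The root collects the changes reported by its inner entities; callers
    // get events to persist and publish, never the entities themselves.
    public IReadOnlyList<object> UpdateOperation(string operationId, string newDescription)
    {
        var change = operations[operationId].ChangeDescription(newDescription);
        return new List<object> { change };
    }
}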
Methods that we have on entity are duplicated on Aggregate (or, vice versa, methods we have to have on Aggregate that handle entity
are duplicated on entity)
Sometimes the business rules to check and apply belong exclusively to one entity and its internal state, and the AR just acts as a gateway. That is OK, but if you find this pattern occurring too often, it is a sign of a wrong AR design: maybe the inner entity should be the AR instead of an inner entity, maybe you need to split the AR into several ARs (and one of them is the old inner entity), etc. Do not be afraid of having classes that just have one or two methods.
In response of dee zg comments:
What does persistence.apply(events) precisely do? Does it save the whole aggregate or only the entities?
Neither. Aggregates and entities are domain concepts, not persistence concepts; you can have a document store, column store, relational store, etc. that does not need to match 1 to 1 with your domain concepts. You do not read aggregates and entities from persistence; you build aggregates and entities in memory from data read from persistence. The aggregate itself does not need to be persisted; that is just a possible implementation detail. Remember that the aggregate is just a construct to organize business rules; it is not meant to be a representation of state.
Your events carry context (user intents) and the data that has been changed (along with the IDs needed to identify things in persistence), so it is incredibly easy to write an apply function in the persistence layer that knows what to execute in order to apply the event and persist the changes (e.g. which SQL statement in the case of a relational DB).
Could you please provide an example of when & why it is better (or even inevitable?) to use a child entity instead of a separate AR referenced by its ID as a value object?
Why do you design and model a class with state and behaviour?
To abstract, encapsulate, reuse, etc.: basic SOLID design. If the entity has everything needed to enforce the domain rules and invariants for an operation, then the entity is the AR for that operation. If you need extra domain rule checks that cannot be done by the entity (i.e. the entity does not have enough inner state to accomplish the check, or the check does not naturally fit into the entity and what it represents), then you have to redesign; sometimes that could mean modelling an aggregate that does the extra domain rule checks and delegates the other checks to the inner entity, sometimes it could mean changing the entity to include the new things. It is too domain-context dependent for me to say there is a fixed redesign strategy.
Keep in mind that you do not model aggregates and entities in your code. You just model classes with behaviour to check domain rules, plus the state needed to do those checks and to respond with the changes. These classes can act as aggregates or entities for different operations. These terms are used just to help communicate and understand the role of the class in each operation context. Of course, you can be in the situation where the operation does not fit into an entity, and you could model an aggregate with a V.O. persistence ID, and that is OK (sadly, in DDD, without knowing the domain context almost everything is OK by default).
Do you want some more enlightenment from someone who explains things much better than me? (Not being a native English speaker is a handicap for these complex issues.) Take a look here:
https://blog.sapiensworks.com/post/2016/07/14/DDD-Aggregate-Decoded-1
http://blog.sapiensworks.com/post/2016/07/14/DDD-Aggregate-Decoded-2
http://blog.sapiensworks.com/post/2016/07/14/DDD-Aggregate-Decoded-3
It has identity local to aggregate
In a logical sense, probably, but concretely implementing this with the persistence means we have is often unnecessarily complex.
We need a read only copy Value-Object to expose information from an
entity (at least for a repository to be able to read it in order to
save to db, for example)
Not necessarily, you could have read-only entities for instance.
The repository part of the problem was already addressed in another question. Reads aren't an issue, and there are multiple techniques to prevent write access from the outside world but still allow the persistence layer to populate an entity directly or indirectly.
So, why do we have an entity at all instead of Value Objects only?
You might be somewhat hastily putting concerns in the same basket which really are slightly different:
Encapsulation of operations
Aggregate level invariant enforcement
Read access
Write access
Entity or VO data integrity
Just because Value Objects are best made immutable and don't enforce aggregate-level invariants (they do enforce their own data integrity though) doesn't mean Entities can't have a fine-tuned combination of some of the same characteristics.
These questions that you have do not exist in a CQRS architecture, where the Write model (the Aggregate) is different from a Read model. In a flat architecture, the Aggregate must expose read/query methods, otherwise it would be pointless.
Entity should be private to aggregate
Yes, in this way you are clearly expressing the fact that they are not for external use.
We need a read only copy Value-Object to expose information from an entity (at least for a repository to be able to read it in order to save to db, for example)
The Repositories are a special case and should not be seen in the same way as Application/Presentation code. They could be part of the same package/module; in other words, they should be able to access the nested entities.
The entities can be viewed/implemented as objects with an immutable ID and a Value Object representing their state, something like this (in pseudocode):
class SomeNestedEntity
{
    private readonly ID;
    private SomeNestedEntityImmutableState state;

    public getState() { return state; }
    public someCommandMethod() { state = state.mutateSomehow(); }
}
So you see? You can safely return the state of the nested entity, as it is immutable. There would be some problem with the Law of Demeter, but that is a decision you have to make; if you break it by returning the state, the code is simpler to write the first time, but the coupling increases.
Methods that we have on entity are duplicated on Aggregate (or, vice versa, methods we have to have on Aggregate that handle entity are duplicated on entity)
Yes, this protects the Aggregate's encapsulation and also permits the Aggregate to protect its invariants.
I won't write too much, just an example: a car and a gear. The car is the aggregate root; the gear is a child entity.
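A minimal, hypothetical sketch of that relationship (the member names are invented): the gear is a child entity that is never handed out, and callers only ever talk to the car.
// Child entity: only the Car (the aggregate root) ever touches it.
class Gear
{
    public int Number { get; private set; } = 1;

    public void ShiftUp()
    {
        if (Number < 6) Number++;
    }
}

// Aggregate root: exposes behavior, not the child entity itself.
public class Car
{
    private readonly Gear gear = new Gear();
    private int speedKmh;

    public void Accelerate()
    {
        speedKmh += 10;
        // The rule "shift up when going fast enough" lives in the root,
        // which tells the child entity what to do.
        if (speedKmh > gear.Number * 30) gear.ShiftUp();
    }

    public int CurrentGear => gear.Number; // read-only view; the Gear object itself is never exposed
}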

Initializing Domain Objects - observing SOLID, Tell, Don't Ask

I'm trying to follow some of the more current design principles including SOLID and Domain Driven Design. My question is around how people handle "Initializing" Domain Objects.
Here's a simple example:
Based on SOLID, I should not depend on concretions, so I create an interface and a class. Since I'm taking advantage of Domain Driven Design, I create an object with relevant methods. (i.e. not anemic).
interface IBookstoreBook
{
    string Isbn { get; set; }
    int Inventory { get; }
    void AddToInventory(int numBooks);
    void RemoveFromInventory(int numBooks);
}

public class BookstoreBook : IBookstoreBook
{
    public string Isbn { get; set; }
    public int Inventory { get; private set; }
    public void AddToInventory(int numBooks) { Inventory += numBooks; }
    public void RemoveFromInventory(int numBooks) { Inventory -= numBooks; }
}
To help with testing and be more loosely coupled, I also use an IoC container to create this book. So when the book is created it is always created empty. But, if a book doesn't have an ISBN and Inventory it is invalid.
BookstoreBook(string bookISBN, int bookInventory) {..} // Does not exist
I could have 4 or 5 different classes that use a BookstoreBook. For one,
public class Bookstore : IBookstore
{
    ...
    public bool NeedToIncreaseInventory(BookstoreBook book) { ... }
    ...
}
How does any method know it is getting a valid book? My solutions below seem to violate the "Tell, Don't Ask" principle.
a) Should each method that uses a BookstoreBook test for validity? (i.e. should NeedToIncreaseInventory test for a book's validity? I'm not sure it should have to know what makes a BookstoreBook valid.)
b) Should I have a "CreateBook" on the IBookstoreBook object and just "assume" that clients know they have to call this anytime they want to initialize a BookstoreBook? That way, NeedToIncreaseInventory would just trust that "CreateBook" was already called on BookstoreBook.
I'm interested in what the recommended approach is here.
First off, I think your BookstoreBook doesn't have any really relevant methods, which means it doesn't have any relevant behavior, no business rules at all. And since it doesn't contain any business rules it actually is anemic. It just has a bunch of Getters and Setters. I would argue that having a method like AddToInventory that ends up just adding +1 to a property is no meaningful behavior.
Also, why would your BookstoreBook know how many of its type are in your Bookstore? I feel like this is probably something the Bookstore itself should keep track of.
As for point a): no, if you're creating books from user input you should check the provided data before you even create a new book. That prevents you from ever having invalid books in your system.
As for the creation of the object, the question is will you ever have more than one book type? If the answer is no you can drop the interface and just instantiate a book in a class that is responsible for creating new books from user input for example. If you need more book types an abstract factory may be useful.
First of all, a great way to make sure that entity state can only be set by behavior (methods) is to make all property setters private. It also allows you to make sure that all related properties are set when the state changes.
But, if a book doesn't have an ISBN and Inventory it is invalid.
There you have two business rules. Let's start with the ISBN. If a book is not valid without it, it HAS to be specified in the constructor; otherwise it's fully possible to create a book which is invalid. An ISBN also has a specified format (at least the length), so that format has to be validated too.
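As a hedged illustration of that rule (the length check below is a simplification; real ISBN validation also involves a checksum), the constructor refuses to produce an invalid book:
using System;

public class BookstoreBook
{
    public string Isbn { get; }
    public int Inventory { get; private set; }

    public BookstoreBook(string isbn, int inventory)
    {
        // The book cannot exist without a plausibly formed ISBN.
        if (string.IsNullOrWhiteSpace(isbn))
            throw new ArgumentException("ISBN is required.", nameof(isbn));
        var digits = isbn.Replace("-", "");
        if (digits.Length != 10 && digits.Length != 13)
            throw new ArgumentException("ISBN must have 10 or 13 digits.", nameof(isbn));
        if (inventory < 0)
            throw new ArgumentOutOfRangeException(nameof(inventory));

        Isbn = isbn;
        Inventory = inventory;
    }

    public void AddToInventory(int numBooks) { Inventory += numBooks; }
    public void RemoveFromInventory(int numBooks) { Inventory -= numBooks; }
}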
Regarding the inventory I believe that it's not true. You might have books that are sold out or books that can be booked before their release. Right? So a book CAN exist without an inventory, it's just not likely.
If you look at the relation between inventory and books from the domain perspective they are two separate entities with different responsibilities.
A book is representing something that the user can read about and use that information to decide whether it should be rented or purchased.
An inventory is used to make sure that your application can fulfill your customers request. Typically it can be done by a delivery directly (decrease the inventory) or by a backorder (order more copies from your supplier and then deliver the book).
Thus the inventory part of the application does not really need to know everything there is to know about the book, so I would recommend that the inventory only knows about the book's identity (that's typically how root aggregates can reference each other, according to Martin Fowler's book).
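A minimal sketch of that split, using invented BookId and InventoryItem names: the inventory side holds only the book's identity, never the book object itself.
// Shared identity value object: the only thing both sides know about.
public readonly struct BookId
{
    public string Isbn { get; }
    public BookId(string isbn) { Isbn = isbn; }
}

// Catalog side: what a reader needs to decide on a purchase.
public class Book
{
    public BookId Id { get; }
    public string Title { get; }
    public Book(BookId id, string title) { Id = id; Title = title; }
}

// Inventory side: references the book only by its identity.
public class InventoryItem
{
    public BookId BookId { get; }
    public int OnHand { get; private set; }

    public InventoryItem(BookId bookId, int onHand) { BookId = bookId; OnHand = onHand; }

    public void Receive(int copies) { OnHand += copies; }

    public void Ship(int copies)
    {
        if (copies > OnHand) throw new System.InvalidOperationException("Not enough stock.");
        OnHand -= copies;
    }
}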
An inversion of control container is typically used to manage services (in DDD, the application services and the domain services). Its job is not to act as a factory for domain entities; using it that way will only complicate things without any benefit.
To help with testing and be more loosely coupled, I also use an IoC container to create this book.
Why is your IoC container creating books? That's a bit strange. Your domain model should be container agnostic (wiring together the interfaces and the implementations is the concern of your composition root).
How does any method know it is getting a valid book?
The domain model knows it is getting a valid book, because it says so right there in the interface.
The data model knows it is producing a valid book, because the constructor/factory method accepted its arguments without throwing an exception.
Should each method that uses a Bookstore book test for validity?
No, once you have a Book, it is going to stay valid (there shouldn't be any verbs defined in your domain model that would create an invalid data model).
Should I have a "CreateBook" on the IBookstoreBook object and just "assume" that clients know they have to call this anytime they want to initialize a BookstoreBook? That way, NeedToIncreaseInventory would just trust that "CreateBook" was already called on BookstoreBook.
It's normal to have a factory for creating objects. See Evans, chapter 6.
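For illustration, such a factory can be as thin as this (BookstoreBookFactory is a made-up name; it simply funnels creation through the validating constructor sketched earlier):
public static class BookstoreBookFactory
{
    // Single place where creation rules are applied; callers never see a half-built book.
    public static BookstoreBook CreateBook(string isbn, int initialInventory)
    {
        // Delegates to the validating constructor, so every book handed out is already valid.
        return new BookstoreBook(isbn, initialInventory);
    }
}

// Usage: var book = BookstoreBookFactory.CreateBook("978-0321125217", 5);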
Books can be created from a database and many other places. I'm assuming others have had to solve this issue if they are using DDD, and I am wondering about their approach. Should we all be using factories, as you suggest, that take the needed data as input?
There are really only two sources for data -- your own book of record (in which case, you load the data via a repository), and everywhere else (where you need to make sure that the data conforms to the assumptions of your model).
Based on SOLID, I should not depend on concretions
If you're referring to the Dependency Inversion principle, it does not exactly say that.
- High-level modules should not depend on low-level modules. Both should depend on abstractions.
- Abstractions should not depend on details. Details should depend on abstractions.
No domain entity is of a higher level than another and normally no object in the Domain layer is a "detail", so DIP usually doesn't apply to domain entities.
I also use an IoC container to create this book
Considering that BookstoreBook has no dependency, I'm not sure why you would do that.
How does any method know it is getting a valid book?
By assuming that the book is Always Valid, always consistent. This usually requires having a single Book constructor that checks all relevant rules at creation time, and state-changing methods that enforce invariants about the Book.
a) ...
b) ...
You're mixing up two concerns here - making sure that Book is in a consistent state wherever it is used, and initializing a Book. I'm not sure what your question is really about in the end, but if you apply the "always valid" approach and forget about Book being an interface/higher level abstraction, you should be good to go.

Can a "rich domain model" violate the Single Responsibility Principle?

An interesting thread came up when I typed in this question just now. I don't think it answers my question though.
I've been working a lot with .NET MVC3, where it's desirable to have an anemic model. View models and edit models are best off as dumb data containers that you can just pass off from a controller to a view. Any kind of application flow should come from the controllers, and the views handle UI concerns. In MVC, we don't want any behavior in the model.
However, we don't want any business logic in the controllers either. For larger applications it's best to keep the domain code separate from and independent of the models, views, and controllers (and HTTP in general, for that matter). So there is a separate project which provides, before anything else, a domain model (with entities and value objects, composed into aggregates according to DDD).
I've made a few attempts to move away from an anemic model towards a richer one in domain code, and I'm thinking of giving up. It seems to me like having entity classes which both contain data and behavior violates the SRP.
Take for example one very common scenario on the web, composing emails. Given some event, it is the domain's responsibility to compose an EmailMessage object given an EmailTemplate, EmailAddress, and custom values. The template exists as an entity with properties, and custom values are provided as input by the user. Let's also say for the sake of argument that the EmailMessage's FROM address can be provided by an external service (IConfigurationManager.DefaultFromMailAddress). Given those requirements, it seems like a rich domain model could give the EmailTemplate the responsibility of composing the EmailMessage:
public class EmailTemplate
{
    public EmailMessage ComposeMessageTo(EmailAddress to,
        IDictionary<string, string> customValues, IConfigurationManager config)
    {
        var emailMessage = new EmailMessage(); // internal constructor
        // extension method
        emailMessage.Body = this.BodyFormat.ApplyCustomValues(customValues);
        emailMessage.From = this.From ?? config.DefaultFromMailAddress;
        // bla bla bla
        return emailMessage;
    }
}
This was one of my attempts at rich domain model. After adding this method though, it was the EmailTemplate's responsibility to both contain the entity data properties and compose messages. It was about 15 lines long, and seemed to distract the class from what it really means to be an EmailTemplate -- which, IMO, is to just store data (subject format, body format, attachments, and optional from/reply-to addresses).
I ended up refactoring this method into a dedicated class whose sole responsibility is composing an EmailMessage given the previous arguments, and I am much happier with it. In fact, I'm starting to prefer anemic domains because it helps me keep responsibilities separate, making classes and unit tests shorter, more concise, and more focused. It seems that making entities and other data objects "devoid of behavior" can be good for separating responsibility. Or am I off track here?
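For reference, a minimal sketch of such a dedicated composer might look like this (EmailComposer is a made-up name; the other types are the ones from the snippet above):
using System.Collections.Generic;

// Hypothetical composer: the template stays a data holder, and
// message composition has exactly one home.
public class EmailComposer
{
    private readonly IConfigurationManager config;

    public EmailComposer(IConfigurationManager config)
    {
        this.config = config;
    }

    public EmailMessage Compose(EmailTemplate template, EmailAddress to,
        IDictionary<string, string> customValues)
    {
        var message = new EmailMessage();
        message.To = to; // assumes EmailMessage also exposes a To setter
        message.Body = template.BodyFormat.ApplyCustomValues(customValues);
        message.From = template.From ?? config.DefaultFromMailAddress;
        return message;
    }
}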
The argument in favor of a rich domain model instead of an anemic model hinges on one of the value propositions of OOP, which is keeping behavior and data next to each other. The core benefit is that of encapsulation and cohesion which assists in reasoning about the code. A rich domain model can also be viewed as an instance of the information expert pattern. The value of all of these patterns however is to a great extent subjective. If it is more helpful for you to keep data and behavior separate, then so be it, though you might also consider other people that will be looking at the code. I prefer to encapsulate as much as I can. The other benefit of a richer domain model in this case would be the possibility to make certain properties private. If a property is only used by one method on the class, why make it public?
Whether a rich domain model violates SRP depends on your definition of responsibility. According to SRP, a responsibility is a reason to change, which itself calls for a definition. This definition will generally depend on the use-case at hand. You can declare that the responsibility of the template class is to be a template, with all of the implications that arise, one of which is generating a message from the template. A change in one of the template properties can affect the ComposeMessageTo method, which indicates that perhaps these are a single responsibility. Moreover, the ComposeMessageTo method is the most interesting part of the template. Clients of the template don't care how the method is implemented or what properties are present on the template class. They only want to generate a message based on the template. This also argues in favor of keeping data next to the method.
Well, it depends on how you want to look at it.
Another way is: "Can the Single Responsibility Principle violate a rich domain model?"
Both are guidelines. There is no "principle" anywhere in software design. There are, however, good designs and bad designs. Both these concepts can be used in different ways, to achieve a good design.

Value Objects in CQRS - where to use

Let's say we have CQRS-inspired architecture, with components such as Commands, Domain Model, Domain Events, Read Model DTOs.
Of course, we can use Value Objects in our Domain Model. My question is, should they also be used in:
Commands
Events
DTOs
I haven't seen any examples where Value Objects (VOs) are used in the components mentioned above; instead, primitive types are used. Maybe it's just the simplistic examples. After all, my understanding of VO use in DDD is that they act as a glue for the whole application.
My motivation:
Commands.
Let's say a user submits a form which contains address fields. We have an Address Value Object to represent this concept. When constructing the command in the client, we should validate user input anyway, and when it is well-formed, we can create the Address object right there and initialize the Command with it. I see no need to delegate creation of the Address object to the command handler.
Domain Events.
Domain Model already operates in terms of Value Objects, so by publishing events with VOs instead of converting them to primitive types, we can avoid some mapping code. I'm pretty sure it's alright to use VOs in this case.
DTOs.
If our query-side DTOs can contain Value Objects, this allows for some more flexibility. E.g., if we have a Money object, we can choose whether to display it in EUR or USD, with no need to change the Read Model.
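For concreteness, a minimal Money value object of the kind mentioned above might look like this (the conversion handling is deliberately simplified and purely illustrative):
using System;

// Immutable value object: equality by value, no identity.
public readonly struct Money : IEquatable<Money>
{
    public decimal Amount { get; }
    public string Currency { get; }

    public Money(decimal amount, string currency)
    {
        Amount = amount;
        Currency = currency ?? throw new ArgumentNullException(nameof(currency));
    }

    // Returns a new value instead of mutating this one.
    public Money ConvertTo(string targetCurrency, decimal rate) =>
        new Money(Amount * rate, targetCurrency);

    public bool Equals(Money other) => Amount == other.Amount && Currency == other.Currency;

    public override string ToString() => $"{Amount} {Currency}";
}

// Usage: a read model holding Money can be shown in either currency.
// var price = new Money(10m, "EUR");
// Console.WriteLine(price.ConvertTo("USD", 1.08m));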
OK, I've changed my mind. I have been trying to deal with VOs a bunch lately, and watching this talk clarified a couple of things for me: http://www.infoq.com/presentations/Value-Objects-Dan-Bergh-Johnsson
Commands and Events are messages (and not objects; objects are data + behavior). In some respects they are much like DTOs: they communicate data about an event, and they themselves encapsulate no behavior.
Value Objects are not like DTOs at all. They are a domain representation and they are, generally speaking, rich on behavior like all other domain representations.
Commands and Events communicate information into and out of the domain respectively, but they themselves do not encapsulate any behavior. From that perspective it seems wrong, and possibly a violation of context boundaries, to pass VOs inside of them.
To paraphrase Oren (though he was referring to nHibernate and WCF) "Don't send your domain across the wire".
http://ayende.com/Blog/archive/2009/05/14/the-stripper-pattern.aspx
If you want to communicate a value object, then I suggest passing the necessary attributes needed to re-construct the VO within them instead.
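A hedged sketch of that suggestion (the event and VO names are invented): the message carries only primitives, and the domain side rebuilds the value object from them.
// Message: primitives only, so it can be serialized and versioned
// independently of the domain model.
public class CustomerRelocated
{
    public string CustomerId { get; set; }
    public string Street { get; set; }
    public string City { get; set; }
    public string PostalCode { get; set; }
}

// Domain-side value object, rebuilt from the message's attributes.
public class Address
{
    public string Street { get; }
    public string City { get; }
    public string PostalCode { get; }

    public Address(string street, string city, string postalCode)
    {
        Street = street;
        City = city;
        PostalCode = postalCode;
    }
}

// In a handler: var address = new Address(e.Street, e.City, e.PostalCode);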
Original Text (for posterity):
If you are asking if Value Objects can be passed by the domain model to events or passed in by commands, I don't really see a huge problem with the former, though the latter may violate some of the rules of the aggregate root being the "owner" of values.
That said, a value object represents concepts like, for example, a color. You don't have green; you are green or not. There seems to be nothing intrinsically wrong with a command telling you that you are green by passing this.
Reading the chapter from DDD on the Aggregate Root pattern explains Entities and Value Objects quite well and is worth reading over a few times.
I say it's a bad idea.
There's a reason we don't do the same with entities - to avoid coupling other parts of the system to the domain (in the wrong places). The same is true for Value Objects, the only difference between value objects and entities is lifetime and ownership - these differences do not affect how we should and should not couple to them.
Imagine that you make an event contain a VO. A change in your domain requires you to change that VO. You've now boxed yourself into a corner where your event is also forced to change, ditto for any Commands or DTO's it's a part of.
This is to be avoided.
Use DTO's and/or primitives. Map them (AutoMapper makes it a 1-line deal).
Similar to other answers, in SOA this would break encapsulation of the service as the domain is now leaking out.
According to Clean Code, your DTOs are data structures (just to add another term), while value objects are objects. The difference is that objects can have behavior. Mixing data structures with objects is generally a very bad idea, because it will be hard to maintain the hybrid you get.
Putting value objects into DTOs doesn't feel right from an architecture perspective either. The value objects are inside of the domain model, while the DTOs you mentioned define the interface of the model. We usually build an interface to decouple the outside world from the inside of something. So in the current case we added DTOs to decouple the outside world from the value objects (and other model-related stuff). After that, adding value objects to the interface is crazy.
So you haven't met this solution yet because it is an anti-pattern.
Value Objects are, or at least should be, immutable. Once instantiated with a value, that value will never change throughout the lifetime of the object. Thus, passing VOs as data to DTOs (such as Events) should not be an issue, since all you can do with them is get their value. At most you get their value in a different representation, such as toString() as opposed to an original getValue(), which might return an integer or whatever the value is.

Resources