Is it correct to update a child entity when it's not part of its Aggregate Root?
Let's consider an e-commerce site. There's a Product entity, which has multiple variants. If the product is a T-shirt, then a product variant might be that T-shirt with a color of White or Green.
class Product {
  id: number
  slug: string
  description: string
  variants: ProductVariant[]
}

class ProductVariant {
  name: string
  quantity: number
  price: number
  images: string[]
}
Let's exclude the non-variant-related rules and focus on the variants.
A variant cannot exist on its own, it must be part of a product
A product can have up to 10 variants
A variant must have at least one image, name, quantity and price
Now, how does a variant come into existence? Well, according to pt.1, our product must implement a product.addVariant(variant: ProductVariant) method, as the variant cannot exist on its own.
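For illustration, a minimal sketch of what such an addVariant method could look like, assuming the Product shape above and a nullable-error style of reporting rule violations (all names here are just for the example):

class Product {
  private static readonly MAX_VARIANTS = 10;

  id: number;
  slug: string;
  description: string;
  private variants: ProductVariant[] = [];

  // Enforces the "a product can have up to 10 variants" rule.
  addVariant(variant: ProductVariant): Error | null {
    if (this.variants.length >= Product.MAX_VARIANTS) {
      return new Error("A product can have at most 10 variants");
    }
    this.variants.push(variant);
    return null;
  }
}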
Since the variant contains its own logic, we can implement a factory method. For simplicity, we can assume a private constructor, so the implementation can look like this:
class ProductVariant {
private constructor(...) {}
public static create(productVariantProps): ProductVariant {
// Validate input, if valid
return new ProductVariant(productVariantProps)
}
// Rules for adding new images
addImage(productVariantImage) {...}
}
And the way a product variant comes into existence could look like this:
const product = await ProductRepository.find(...)
const variant = ProductVariant.create({
name: "White",
quantity: 1,
price: 25.50,
images: [ ... ]
})
product.addVariant(variant)
await ProductRepository.save(product)
Error handling is excluded for simplicity; the addVariant method could return a nullable error, for example, but it can be implemented in multiple ways.
Now, this example contains a product, which enforces validation rules according to the number of variants that can be added to it, and a separate ProductVariant entity, which enforces validation rules according to itself.
In the UI, we used to have a single product form where the product and all its variants were edited/created/deleted on the same page. The UX team decided that this was too complex for our customers and that it would be better to have a dedicated page for editing a variant.
So, we used to edit the variants at /products/edit/:productId, and now we'd like to go to /products/edit/:productId/:variantId to edit a variant.
Now, if a variant exists, all its rules/business logic are encapsulated in the ProductVariant class, so we should be able to just fetch it from the database, make our changes, and save it through the repository.
However, the ProductVariant is a child entity of the Product Aggregate Root.
I'm inclined to do something along the lines of
const variant = await ProductRepository.findVariant(req.params.variantId)
// make the changes
await ProductRepository.save(variant)
But should I? Should it be the ProductRepository? Shouldn't I only save Aggregate Roots?
From what I know, the more proper way to achieve this would be
const product = await ProductRepository.find(req.params.productId)
const variant = product.getVariant(req.params.variantId)
// make the changes to the variant
await ProductRepository.save(product)
But:
It involves more work.
Unnecessary data is fetched for the given operation (one product might have many child entities, like variants and images, which have locales, etc.), when I know for certain I'm only interested in this one variant.
Edit: Sorry for the broken code examples. I thought I would write it in TypeScript so that it's more understandable, but I'm not 100% familiar with it and very tired. Hopefully the point is clear.
As long as Product and Variant don't have any business rules that will affect ProductVariant, I see no reason to load extra data through the aggregate root. If this decision creates a problem in the future, just change it. Keep a pragmatic mind and don't fall into analysis paralysis.
Splitting your UI into different parts is a good idea. A Task-based UI makes it easier for you as a developer to know what is happening in the UI.
With this in mind, creating and changing ProductVariant can be implemented as a command from the UI that only handles this entity. I would use the same repository as Product for this.
I know you didn't ask for feedback on your REST API, but I recommend that you build the URLs like this /resource/:id/resource/:id/(action). This way, you rarely need to use the action part of the URL because you use the HTTP verbs to decide if you are doing a PUT (update), POST (create) or DELETE. Also, it's easier for users of your API to know what the ID represents.
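Purely as an illustration of that URL shape (this wasn't in the original answer), Express-style route definitions could look like the following; the handlers are stubs:

import { Router, Request, Response } from "express";

const router = Router();

// The HTTP verb carries the action; the nested IDs make it clear what each ID represents.
router.post("/products/:productId/variants", (req: Request, res: Response) => {
  // create a variant under the given product
  res.status(201).end();
});

router.put("/products/:productId/variants/:variantId", (req: Request, res: Response) => {
  // update the given variant
  res.status(204).end();
});

router.delete("/products/:productId/variants/:variantId", (req: Request, res: Response) => {
  // delete the given variant
  res.status(204).end();
});

export default router;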
A variant cannot exist on its own, it must be part of a product
This one can be implicitly enforced in code by making sure no code paths allow it. See answer about #2.
A product can have up to 10 variants
As we can see, the constraint that spans many variants is their count, but not the variant states per se. That means you could simply track the count in Product and involve Product in the creation and removal process of a ProductVariant. All other code paths would deal with the ProductVariant directly, which is an AR of its own.
e.g.
// Service layer
addVariant(productId, name, quantity, ...) {
    transaction {
        product = productRepo.productOfId(productId);
        variant = product.addVariant(name, quantity, ...); // tracks count & enforces the 10-variant rule
        variantRepository.save(variant);
    }
}

renameVariant(variantId, newName) {
    transaction {
        variant = variantRepository.variantOfId(variantId);
        variant.rename(newName); // persisted when the transaction commits
    }
}
A variant must have at least one image, name, quantity and price
This rule can be enforced in the ProductVariant AR itself.
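For example, a minimal sketch of that rule inside the factory-method style from the question (error handling kept deliberately naive, names illustrative):

interface ProductVariantProps {
  name: string;
  quantity: number;
  price: number;
  images: string[];
}

class ProductVariant {
  private constructor(private props: ProductVariantProps) {}

  // Enforces: a variant must have a name, quantity, price and at least one image.
  static create(props: ProductVariantProps): ProductVariant {
    if (!props.name) throw new Error("A variant must have a name");
    if (props.quantity == null || props.quantity < 0) throw new Error("A variant must have a quantity");
    if (props.price == null || props.price <= 0) throw new Error("A variant must have a price");
    if (!props.images || props.images.length === 0) throw new Error("A variant must have at least one image");
    return new ProductVariant(props);
  }
}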
Background
A data structure in my database consists of "sections": lists of custom objects. The number of sections may expand in the future, so to keep my code as DRY as possible, I wanted the section to add/update/delete an item from to be determined dynamically by a parameter.
I quickly realised that doing something like @Body() section: SectionA | SectionB | SectionC... disables validation, so I needed a single DTO, Section, that could encompass all sections. To do that I need to define dynamically which validators to apply, as I have several @IsNotEmpty constraints.
So I came across this post whose selected answer recommends the usage of groups.
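To make the idea concrete, a grouped DTO along those lines might look roughly like this (the fields and group names are invented for illustration; the groups option itself is standard class-validator):

import { IsNotEmpty } from "class-validator";

// One DTO covering all sections; which @IsNotEmpty rules apply
// depends on the validation groups passed at runtime.
export class SectionDto {
  // Required only when validating with the "sectionA" group
  @IsNotEmpty({ groups: ["sectionA"] })
  title?: string;

  // Required only when validating with the "sectionB" group
  @IsNotEmpty({ groups: ["sectionB"] })
  items?: string[];
}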
This posed the following challenges:
I now have to write a custom validation pipe. I relied heavily on this.
I want to override the global validation pipe that I already had running and use my custom one for just that method. Outcome: it didn't work; I had to start defining the pipe on every controller method, a trade-off I am willing to accept. It looks like there is no simple alternative.
However, I'm now faced with the final problem: how to use the parameters in the request to define these groups in the validator; another brick wall. No simple solution.
Solution
This question has been asked here but no satisfactory solution was actually given.
Option one recommended redefining the scope of the pipe to "request" level but didn't explain how, and solutions found online didn't work.
The second solution, using a custom decorator to perform the validation instead, did work, very well in fact. Here is a simplified version of the code:
import { BadRequestException, createParamDecorator, ExecutionContext } from '@nestjs/common';
import { plainToInstance } from 'class-transformer';
import { validate } from 'class-validator';
// SectionDto and defaultOptions are defined elsewhere in the application.

export const ProfileSectionData = createParamDecorator(
  async (data: unknown, ctx: ExecutionContext) => {
    const request = ctx.switchToHttp().getRequest();
    // I don't need to access the metatype from the request because I know what type I need,
    // but I'm sure I could if need be.
    const object = plainToInstance(SectionDto, request.body);
    const groups = [request.params.profileSection];
    const validatorOptions = { groups, ...defaultOptions };
    const errors = await validate(object, validatorOptions);
    if (errors.length > 0) {
      throw new BadRequestException();
    }
    return request.body;
  },
);
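To show the wiring, usage in a controller might look roughly like this (the route and method names are guesses, not from the original post; ProfileSectionData and SectionDto come from the snippets above):

import { Controller, Param, Put } from "@nestjs/common";

@Controller("profile")
export class ProfileController {
  // The decorator validates request.body against the groups derived
  // from :profileSection and returns the raw body on success.
  @Put(":profileSection")
  updateSection(
    @Param("profileSection") profileSection: string,
    @ProfileSectionData() data: SectionDto,
  ) {
    // ...handle the validated section data
    return data;
  }
}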
Implications?
Here's my question. When Jay McDoniel recommended using a custom decorator, they warn: "Do note, that this could impact how the ValidationPipe is functioning if that is bound globally, at the class, or method level."
What does this mean?
Are there any vulnerabilities or performance drawbacks associated with this solution?
Obviously, one drawback is that you are using validation outside a validation pipe which is not ideal from a point of view of order and single-responsibility but I can't think of tangible inconveniences beyond aesthetics and maintainability.
Knowing the background, would you have approached the problem in a completely different way?
Can we use value objects in commands?
Suppose I have a Shop aggregate which contains an Address value object.
In the Address value object's constructor, I put some validation logic for the address.
So if I use that Address object in the command (CreateShopCmd), it gets validated when the command is created, but what I want, and what I have read, is that the validation should happen in the command handler.
The problem is that I would then have to put that validation again in the command handler (since the validation is already present in the Address constructor), and if I don't put it in the command handler, the validation will only occur when I create the Address object in the event handler and assign it to the Shop aggregate (which is incorrect).
So, please guide me.
Below is a code example.
@Aggregate
@AggregateRoot
public class Shop {

    @AggregateIdentifier
    private ShopId shopId;
    private String shopName;
    private Address address;

    @CommandHandler
    public Shop(CreateShopCmd cmd) {
        // Validation logic here, if not using Address inside the cmd
        // Fire an event after validation
        ShopRegistredEvt shopRegistredEvt = new ShopRegistredEvt();
        AggregateLifecycle.apply(shopRegistredEvt);
    }

    @EventSourcingHandler
    public void on(ShopRegistredEvt evt) {
        this.shopName = evt.getShopName();
        // Validation happens here, when constructing the Address object,
        // if it was not done at command-creation time - this is wrong
        this.address = new Address(evt.getCity(), evt.getCountry(), evt.getZipCode());
    }
}
public class CreateShopCmd{
private String shopId;
private String shopName;
private String city;
private String zipCode;
private String country;
}
public class ShopCreatedEvent {
private String shopId;
private String shopName;
private String city;
private String zipCode;
private String country;
}
There is nothing conceptually wrong with using Value Objects in Commands or Events. However, you should use them with caution.
The structure of a Message may change over time. If you have used Value Objects excessively inside your messages, it may become less clear how a change in one of the value objects changes the structure of different messages.
For Value Objects that represent a "common" concept, such as an Address, this is not so much of a problem. But as soon as the Value Objects become more domain-specific, this may come up as an issue.
This is a very good question and I have been thoroughly thinking about embedding value objects in commands or not. I came to the conclusion you should definitely not use Value Objects in commands:
Commands are part of the application layer; they are supposed to be as simple as possible, avoiding any rich typed objects, and they work best using literals (think serialization). What happens when an external system wants to plug into your hexagon (application layer) and send commands to your application? Do they need your command library to be able to use the objects and the structure you defined? Hell no! You don't want that, so keep commands simple.
Another reason is, as DmitriBodiu said, that VOs contain business logic and validation; they belong to the domain layer, so do not ever put them in commands. The application service will do the translation and be responsible for throwing validation errors back to the client for any non-conforming commands.
There is nothing wrong with your design; it's actually what Vaughn Vernon (the author of Implementing Domain Driven Design - the IDDD book) did in his repository. You might want to check the application layer at this link:
https://github.com/VaughnVernon/IDDD_Samples/blob/master/iddd_identityaccess/src/main/java/com/saasovation/identityaccess/application/IdentityApplicationService.java
Notice how he reconstructs, from flat commands, every value object belonging to the domain layer:
@Transactional
public void changeUserContactInformation(ChangeContactInfoCommand aCommand) {
User user = this.existingUser(aCommand.getTenantId(), aCommand.getUsername());
this.internalChangeUserContactInformation(
user,
new ContactInformation(
new EmailAddress(aCommand.getEmailAddress()),
new PostalAddress(
aCommand.getAddressStreetAddress(),
aCommand.getAddressCity(),
aCommand.getAddressStateProvince(),
aCommand.getAddressPostalCode(),
aCommand.getAddressCountryCode()),
new Telephone(aCommand.getPrimaryTelephone()),
new Telephone(aCommand.getSecondaryTelephone())));
}
Commands must not contain business logic, so they cannot carry a value object.
I wouldn't suggest using Value Objects in commands, because your commands are part of the application layer while Value Objects are kept in the domain layer. You can use your Value Objects in domain events though, because if the domain model changes, modifying your domain event won't be that painful, since the modification is done in the same bounded context. You should never use Value Objects in integration events though.
Short answer: Have you ever thought about Integer, String, Boolean, etc.? Those are Value Objects, too. The only difference is, that you didn't create them yourself. Now try to build a Command without any Value Objects ;-)
Long answer:
In general I don't see any issue with Value Objects within Commands. As long as you follow a few simple guidelines:
The most important code in your application is your Domain Model. The Domain Model defines the data structures it expects for Command handling. This means: The only reason to change your Command Model is if your Domain Model requires this change. The same applies to your Value Objects: Value Objects only change if this change is required by your Domain Model. No exceptions!
Commands can in general fail either because of business constraints, or because of invalid data (or because of optimistic locking, or whatever).
As said above: Integers and Strings are Value Objects, too. If you only use basic types within your Command, it will already throw an exception if you try new SetAgeCommand(aggId, "foo"), because a String cannot be assigned to an int. The same applies if you don't provide an Aggregate ID to your UpdatePersonCommand. These are not business constraints, but instead very basic data and type validation. Your Command will never be created if you pass malformed data.
Now let's say you have a PersonAge Value Object. It doesn't matter where you construct this object, because in any case it must throw an Exception if you try to construct it with a negative number: -5 cannot be assigned to PersonAge - looks familiar? As long as you can make sure that your code created those Value Object instances, you can know for sure that they are valid.
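As a small sketch of such a PersonAge value object (TypeScript used here just for illustration):

class PersonAge {
  private constructor(readonly value: number) {}

  // The invariant lives in one place: construction fails for invalid ages,
  // no matter which layer tries to build the object.
  static of(value: number): PersonAge {
    if (!Number.isInteger(value) || value < 0) {
      throw new Error(`${value} cannot be assigned to PersonAge`);
    }
    return new PersonAge(value);
  }
}

// PersonAge.of(30)  -> ok
// PersonAge.of(-5)  -> throws, just like described above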
Business rules should be checked by the Command Handler within your Domain Model. In general business constraints are specific to your Domain, and most often they rely on the data within your Aggregate. Take for example SendMoneyCommand. Your Money Value Object can validate if it's a valid currency, but it cannot validate if the user's bank account has enough money to execute the transaction. This is a business validation and it's part of your Domain Model.
And a word regarding Events: I'd suggest only using very basic Value Objects inside your events. For example: String, Integer, Date, etc. Basically every kind of Value Object that will never change. The reason behind it: business requirements can change. For example: maybe your Domain Model requires your Address Value Object to change, and it's now required to provide geo-coordinates. Then this will implicitly change your NewAddressAddedEvent. But your already persisted Events didn't have this requirement, so you're unable to construct Address Value Objects from your past event data, because the new Address Value Object will throw an Exception if no geo-coordinates are provided.
There are (at least) two solutions for this problem:
Versioned Events: After modifying your Address Value Object, you now have a NewAddressAddedEvent_Version2 which uses the new Address Value Object, and you keep the old NewAddressAddedEvent, which must use a backup copy of the old Address Value Object (a rough sketch follows this list).
Write a Script that "repairs" your event database by adding geo-coordinates to every Event that uses the Address Value Object. So you can throw away the old NewAddressAddedEvent.
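A rough sketch of that first option, with purely illustrative names and not tied to any particular framework:

// Old value object, kept only so already-persisted events can still be read.
class AddressV1 {
  constructor(readonly street: string, readonly city: string) {}
}

// New value object with the added geo-coordinate requirement.
class Address {
  constructor(
    readonly street: string,
    readonly city: string,
    readonly latitude: number,
    readonly longitude: number,
  ) {}
}

// The old event type stays frozen and uses the backup copy of the old VO.
class NewAddressAddedEvent {
  constructor(readonly address: AddressV1) {}
}

// New events are written with the new VO.
class NewAddressAddedEvent_Version2 {
  constructor(readonly address: Address) {}
}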
That's OK as long as the value objects are conceptually a part of your message contract, and not used in entities.
And if they are a part of your entity, don't expose them as public properties of your message or you'll be in the soup.
Suppose that I have 2 aggregate roots (AR) in my domain and invoking some method on the 1st requires access to an instance of the 2nd. In DDD how and where should retrieval and creation of the 2nd AR happen?
Here's a contrived example: a TravelerEntity that needs access to a SuitcaseEntity. I'm looking for an answer that doesn't pollute the domain layer with infrastructure code.
public class TravelerEntity {
// null if traveler has no suitcase yet.
private String suitcaseId = ...;
...
// Returns an empty suitcase ready for packing. Caller
public SuitcaseEntity startTrip(SuitcaseRepository repo) {
SuitcaseEntity suitcase;
if (suitcaseId == null) {
suitcase = new SuitcaseFactory().create();
suitcase = repo.save(suitcase);
suitcaseId = suitcase.getId();
} else {
suitcase = repo.findOne(suitcaseId);
}
suitcase.emptyContents();
return suitcase;
}
}
An application layer service handling the start trip request would get the appropriate SuitcaseRepository implementation via DI, get the TravelerEntity via a TravelerRepository implementation and call its startTrip() method.
The only alternative I thought of was to move SuitcaseEntity management to a domain service, but I don't want to create the suitcase before starting the trip, and I don't want to end up with an anemic TravelerEntity.
I'm a little uncertain about one AR creating and saving another AR. Is this OK since the repo and factory encapsulate specifics about the 2nd AR? Is there a danger I'm missing? Is there a better alternative?
I'm new enough to DDD to question my thinking on this. And the other questions I found about ARs seem to focus on identifying them properly, not on managing their lifecycles in relation to one another.
Ideally, TravelerEntity wouldn't manipulate a SuitcaseRepository, because it shouldn't know about an external thing where suitcases are stored, only about its own internals. Instead, it could new up a SuitCase and add it to its internal [list of] suitcases. If you wanted that to work with ORMs without specifically adding the suitcase to the repository, though, you'd have to store the whole suitcase object in TravelerEntity.suitcaseList and not just its ID, which conflicts with the "store references to other ARs as IDs" best practice.
Moreover, TravelerEntity.startTrip() returning a suitcase seems a bit artificial and not very explicit, and you'll be in trouble if you need to return other entities created by startTrip(). So a good solution could be to have TravelerEntity emit a SuitcaseAdded event with the suitcase data in it once it has added the suitcase to its list. An application service could subscribe to the event, add the suitcase to SuitcaseRepository and commit the transaction, effectively saving both the new suitcase and the modified traveler to the database.
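A rough sketch of that event-based approach (the names and the event-recording mechanism are assumptions, in TypeScript rather than the question's Java):

class Suitcase {
  constructor(readonly id: string, public contents: string[] = []) {}
}

class SuitcaseAddedEvent {
  constructor(readonly travelerId: string, readonly suitcase: Suitcase) {}
}

class TravelerEntity {
  private suitcases: Suitcase[] = [];
  private pendingEvents: SuitcaseAddedEvent[] = [];

  constructor(readonly id: string) {}

  // The traveler only touches its own internals and records what happened.
  startTrip(newSuitcaseId: string): void {
    if (this.suitcases.length === 0) {
      const suitcase = new Suitcase(newSuitcaseId);
      this.suitcases.push(suitcase);
      this.pendingEvents.push(new SuitcaseAddedEvent(this.id, suitcase));
    }
  }

  // An application service reads these events, saves the new suitcase via
  // SuitcaseRepository and commits traveler + suitcase in one transaction.
  pullPendingEvents(): SuitcaseAddedEvent[] {
    const events = this.pendingEvents;
    this.pendingEvents = [];
    return events;
  }
}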
Alternatively, you could place startTrip() in a Domain Service instead of an Entity. There it might be more legitimate to use SuitcaseRepository, since a domain service is allowed to know about multiple domain entities and the overall domain process going on.
First of all, persistence is not the domain's job, so I would get rid of all the repositories from the domain models and create a service that would use them.
Second of all, you should rethink your design. Why should a StartTrip method of a Traveller return a SuitCase?
A Traveller either has or doesn't have a suitcase. Once you have retrieved the Traveller, you should already have their SuitCases too.
public class StartTripService {
public void StartTrip(int travellerId) {
var traveller = travellerRepo.Get(travellerId);
traveller.StartTrip();
}
}
Currently diving into DDD, and I've read most of Eric Evans' big blue book. Quite interesting so far :)
I've been modeling some aggregates that hold a collection of entities which expire. I've come up with a generic approach to expressing that:
public class Expirable<T>
{
public T Value { get; protected set; }
public DateTime ValidTill { get; protected set; }
public Expirable(T value, DateTime validTill)
{
Value = value;
ValidTill = validTill;
}
}
I am curious what the best way is to invalidate an Expirable (nullify or omit it when working with a set). So far I've been thinking of doing that in the Repository constructor, since that's the place you access the aggregates from and it acts as a 'collection'.
I am curious if someone has come up with a solution to tackle this and I would be glad to hear it :) Other approaches are also very welcome.
UPDATE 10-1-2013:
This is not DDD with the CQRS/ES approach from Greg Young, but the approach Evans had, since I just started with the book and this is my first app. Like Greg Young said, if you want to make good tables, you have to make a few first ;)
There are probably multiple ways to approach this, but I, personally, would solve this using the Specification pattern. Assuming object expiration is a business rule that belongs in the domain, I would have a specification in addition to the class you have written. Here is an example:
public class NotExpiredSpecification<T>
{
    public bool IsSatisfiedBy(Expirable<T> expirableValue)
    {
        //Return true if not expired; otherwise, false.
    }
}
Then, when your repositories are returning a list of aggregates or when performing any business actions on a set, this can be utilized to restrict the set to un-expired values which will make your code expressive and keep the business logic within the domain.
To learn more about the Specification pattern, see this paper.
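For illustration only (in TypeScript rather than the answer's C#), using such a specification to restrict a set could look like this:

class Expirable<T> {
  constructor(readonly value: T, readonly validTill: Date) {}
}

class NotExpiredSpecification {
  // Satisfied when the expirable is still valid at the given moment.
  isSatisfiedBy<T>(expirable: Expirable<T>, now: Date = new Date()): boolean {
    return expirable.validTill > now;
  }
}

// e.g. in a repository or domain service:
const sessions: Expirable<string>[] = [
  new Expirable("session-1", new Date(Date.now() + 60_000)), // valid for another minute
  new Expirable("session-2", new Date(Date.now() - 60_000)), // already expired
];
const spec = new NotExpiredSpecification();
const activeSessions = sessions.filter((s) => spec.isSatisfiedBy(s)); // only "session-1"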
I've added a method to my abstract repository, InvalidateExpirable. An example would be the UserRepository, where I remove inactive user sessions like this: InvalidateExpirable(x => x.Sessions, (user, expiredSession) => user.RemoveSession(expiredSession));.
The signature of InvalidateExpirable looks like this: protected void InvalidateExpirable<TExpirableValue>(Expression<Func<T, IEnumerable<Expirable<TExpirableValue>>>> selector, Action<T, Expirable<TExpirableValue>> remover). The method itself uses reflection to extract the selected property from the selector parameter. That property name is glued into a generic HQL query which will traverse the set, calling the remove lambda. user.RemoveSession will remove the session from the aggregate. This way I keep the aggregate responsible for its own data. Also, in RemoveSession a domain event is raised for future cases.
See: https://gist.github.com/4484261 for an example
Works quite well so far; I have to see how it works further down in the application though.
Have been reading up on DDD with CQRS/ES (Greg Young approach) and found a great example on the MSDN site about CQRS/ES: http://msdn.microsoft.com/en-us/library/jj554200.aspx
In this example they use the command message queue to queue an Expire message for the future, which will call the aggregate at the specified time, removing/deactivating the expirable construct from the aggregate.
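Outside of that specific framework, the idea can be sketched as follows; this toy in-process scheduler merely stands in for the delayed delivery that a real command queue would provide, and all names are illustrative:

interface ExpireSessionCommand {
  type: "ExpireSession";
  aggregateId: string;
  sessionId: string;
}

// Toy scheduler: in the MSDN example this role is played by the command bus /
// message queue, which delivers the command at the requested time.
function scheduleCommand(
  command: ExpireSessionCommand,
  deliverAt: Date,
  handle: (c: ExpireSessionCommand) => void,
): void {
  const delay = Math.max(0, deliverAt.getTime() - Date.now());
  setTimeout(() => handle(command), delay);
}

// When a session is added with a ValidTill date, queue the matching Expire command.
scheduleCommand(
  { type: "ExpireSession", aggregateId: "user-42", sessionId: "session-1" },
  new Date(Date.now() + 60_000),
  (cmd) => {
    // load the aggregate, call user.removeSession(cmd.sessionId), save
  },
);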
I'm refactoring a project using DDD, but am concerned about not making too many Entities their own Aggregate Root.
I have a Store, which has a list of ProductOptions and a list of Products. A ProductOption can be used by several Products. These entities seem to fit the Store aggregate pretty well.
Then I have an Order, which transiently uses a Product to build its OrderLines:
class Order {
// ...
public function addOrderLine(Product $product, $quantity) {
$orderLine = new OrderLine($product, $quantity);
$this->orderLines->add($orderLine);
}
}
class OrderLine {
// ...
public function __construct(Product $product, $quantity) {
$this->productName = $product->getName();
$this->basePrice = $product->getPrice();
$this->quantity = $quantity;
}
}
Looks like, for now, DDD rules are respected. But I'd like to add a requirement that might break the rules of the aggregate: the Store owner will sometimes need to check statistics about the Orders which included a particular Product.
That means that basically, we would need to keep a reference to the Product in the OrderLine, but this would never be used by any method inside the entity. We would only use this information for reporting purposes, when querying the database; thus it would not be possible to "break" anything inside the Store aggregate because of this internal reference:
class OrderLine {
// ...
public function __construct(Product $product, $quantity) {
$this->productName = $product->getName();
$this->basePrice = $product->getPrice();
$this->quantity = $quantity;
// store this information, but don't use it in any method
$this->product = $product;
}
}
Does this simple requirement dictate that Product becomes an aggregate root? That would also cascade to ProductOption becoming an aggregate root, as Product has a reference to it, thus resulting in two aggregates which have no meaning outside a Store and would not need any Repository; that looks weird to me.
Any comment is welcome!
Even though it is for 'reporting only', there is still a business/domain meaning there. I think that your design is good, although I would not handle the new requirement by storing an OrderLine -> Product reference. I would do something similar to what you are already doing with the product name and price: you just need to store some sort of product identifier (SKU?) in the order line. This identifier/SKU can later be used in a query. The SKU can be a combination of the Store and Product natural keys:
class Sku {
private String _storeNumber;
private String _someProductIdUniqueWithinStore;
}
class OrderLine {
private Money _price;
private int _quantity;
private String _productName;
private Sku _productSku;
}
This way you don't violate any aggregate rules and the product and stores can be safely deleted without affecting existing or archived orders. And you can still have your 'Orders with ProductX from StoreY'.
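For example, a read-side query keyed on the SKU could be as simple as the following sketch (the table, column and function names are purely illustrative):

interface OrderLineReportRow {
  orderId: string;
  productName: string;
  quantity: number;
}

// Read-side finder: reporting goes straight to the database, keyed on the SKU,
// without loading any Order or Store aggregates.
async function findOrderLinesBySku(
  db: { query(sql: string, params: unknown[]): Promise<OrderLineReportRow[]> },
  storeNumber: string,
  productId: string,
): Promise<OrderLineReportRow[]> {
  return db.query(
    `SELECT order_id AS "orderId", product_name AS "productName", quantity
       FROM order_lines
      WHERE sku_store_number = $1 AND sku_product_id = $2`,
    [storeNumber, productId],
  );
}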
Update: Regarding your concern about foreign key. In my opinion foreign key is just a mechanism that enforces long-living Domain relationships at the database level. Since you don't have a domain relationship you don't need the enforcement mechanism as well.
In this case you need the information for reporting which has nothing to do with the aggregate root.
So the most suitable place for it would be a service (it could be a domain service if it is related to the business, or better, an application service, such as a querying service that queries the required data and returns it as DTOs customizable for the presentation or consumer).
I suggest you create a statistics service which queries the required data using read-only repositories (or preferably Finders) and returns DTOs, instead of corrupting the domain with query models.
Check this