I'm reading Vernon's article Effective Aggregate Design, and I have a question about the rule "modify only one aggregate instance per transaction".
Let's take an example: consider a warehouse inventory management story.
Inventory represents an item with a quantity in a warehouse, for instance 5 Implementing Domain-Driven Design books in the Shanghai warehouse.
Entry represents a log of an in/out operation on an Inventory, for instance entering 2 Implementing Domain-Driven Design books into the Shanghai warehouse.
An Inventory's quantity needs to be changed whenever an Entry is submitted.
It easily comes to mind that this is an invariant that could be enforced by transactional consistency.
Solution A: use one Aggregate and cluster Entry into Inventory.
public class Inventory implements Aggregate<Inventory> {
    private InventoryIdentity id;
    private Sku sku;
    private int quantity;
    private List<Entry> entries;

    public void add(Entry entry) {
        this.quantity += entry.getQuantity();
        this.entries.add(entry);
    }
}

public class Entry implements LocalEntity<Entry> {
    private int quantity;
    // some other attributes such as whenSubmitted
}
public class TransactionalInventoryAdminService implements InventoryAdminService, ApplicationService {
    @Override
    @Transactional
    public void handle(InventoryIdentity inventoryId, int entryQuantity /*, ...other entry attributes */) {
        Inventory inventory = inventoryRepository.findBy(inventoryId);
        Entry entry = inventory.newEntry(entryQuantity /*, .. */);
        inventory.add(entry);
        inventoryRepository.store(inventory);
    }
}
Solution B: use separate Aggregates for Inventory and Entry.
public class Inventory implements Aggregate<Inventory> {
    private InventoryIdentity id;
    private Sku sku;
    private int quantity;

    public void add(int quantity) {
        this.quantity += quantity;
    }
}

public class Entry implements Aggregate<Entry> {
    private Inventory inventory;
    private int quantity;
    private boolean handled = false;
    // some other attributes such as whenSubmitted

    public void handle() {
        if (handled) {
            throw new IllegalStateException("Entry already handled");
        } else {
            this.inventory.add(quantity);
            this.handled = true;
        }
    }
}
public class TransactionalInventoryAdminService implements InventoryAdminService, ApplicationService {
    @Override
    @Transactional
    public void handle(InventoryIdentity inventoryId, int entryQuantity /*, ...other entry attributes */) {
        Inventory inventory = inventoryRepository.findBy(inventoryId);
        Entry entry = inventory.newEntry(entryQuantity /*, .. */);
        entry.handle();
        inventoryRepository.store(inventory);
        entryRepository.store(entry);
    }
}
Both A and B are feasible, but solution B is somewhat inelegant because it leaves an inadvertent opportunity to invoke Inventory.add(quantity) without an Entry being involved. Is this what the rule (modify only one aggregate instance per transaction) is trying to point out to me? I'm mostly confused about why we should modify only one aggregate per transaction, and about what goes wrong if we don't.
Update 1: Is the rule intended to alleviate concurrency problems (together with the other rule, "make smaller aggregates")? For example, Entry is an Aggregate with relatively low contention and Inventory is one with relatively high contention (assuming multiple users could manipulate one Inventory); modifying them both in one transaction causes unnecessary concurrency failures.
Some further problems need to be addressed if I adopt solution A:
1. What if there are many Entries for an Inventory and I need a paged query UI? How do I implement a paged query over a collection? One way is to load all Entries and pick what the page needs; the other is InventoryRepository.findEntriesBy(inventoryId, paging), but this seems to break the rule of accessing a local entity only by getting its aggregate and navigating the object graph.
2. What if there are so many Entries for an Inventory that I have to load all of them just to add a new Entry?
I know these questions stem from a lack of full understanding, so any idea is welcome. Thanks in advance.
The rule of thumb is to keep your aggregates small, since you want to avoid transaction failures due to concurrency. And why make the memory footprint big if it doesn't have to be?
So solution A is not optimal. Big aggregates often introduce problems that can easily be avoided.
It's true that another rule of thumb is to change only one aggregate per transaction. If you make Entry its own aggregate, you can make the Inventory's quantity eventually consistent: the Entry aggregate raises an event to which the Inventory is subscribed. This way you're only changing one aggregate per transaction.
public class Entry
{
    public Entry(InventoryId inventoryId, int quantity)
    {
        DomainEvents.Raise(new EntryAdded(inventoryId, quantity));
    }
}
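For the subscribing side, a minimal sketch of what the Inventory update might look like. The event payload and the IInventoryRepository abstraction are assumptions here (they are not spelled out above); the point is that the handler modifies only the Inventory aggregate, in its own transaction.
// Assumed event payload, matching the constructor call above.
public class EntryAdded
{
    public InventoryId InventoryId { get; }
    public int Quantity { get; }

    public EntryAdded(InventoryId inventoryId, int quantity)
    {
        InventoryId = inventoryId;
        Quantity = quantity;
    }
}

// Assumed subscriber: loads the Inventory, applies the quantity change, stores it.
public class EntryAddedHandler
{
    private readonly IInventoryRepository _inventoryRepository;

    public EntryAddedHandler(IInventoryRepository inventoryRepository)
    {
        _inventoryRepository = inventoryRepository;
    }

    public void Handle(EntryAdded @event)
    {
        var inventory = _inventoryRepository.FindBy(@event.InventoryId);
        inventory.Add(@event.Quantity);          // only this one aggregate changes here
        _inventoryRepository.Store(inventory);   // committed in its own transaction
    }
}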
If you don't feel comfortable with eventual consistency, you can still keep the aggregates separate but modify them both in one transaction for now, until you start feeling the pain, using an encapsulating domain service. Another option is to keep the domain events in process, so that they are also committed in a single transaction.
public class InventoryService
{
    public void AddEntryToInventory(Entry entry)
    {
        // Modify Inventory quantity
        // Add Entry
    }
}
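For the in-process option, here is a minimal sketch of a synchronous dispatcher. The DomainEvents/Register names are assumptions rather than a specific library; because handlers run inline on Raise, whatever they change is flushed in the same transaction as the raising aggregate.
using System;
using System.Collections.Generic;
using System.Linq;

// Minimal in-process dispatcher sketch (not thread-safe; illustration only).
public static class DomainEvents
{
    private static readonly List<Delegate> _handlers = new List<Delegate>();

    public static void Register<T>(Action<T> handler) => _handlers.Add(handler);

    public static void Raise<T>(T domainEvent)
    {
        foreach (var handler in _handlers.OfType<Action<T>>())
        {
            handler(domainEvent); // runs inline, before the surrounding transaction commits
        }
    }
}
A handler would be wired up once at startup, e.g. DomainEvents.Register<EntryAdded>(e => entryAddedHandler.Handle(e)), so the Inventory change is persisted together with the Entry.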
One of the reasons you should avoid modifying multiple aggregates in a single transaction is that each aggregate may be stored in a different database, which may require specific transaction handling or impose all the difficulty of managing distributed transactions (two-phase commit, etc.).
The better approach is eventual consistency, using events and the saga pattern.
See also: https://softwareengineering.stackexchange.com/questions/356106/ddd-why-is-it-a-bad-practice-to-update-multiple-aggregate-roots-per-transaction
Related
I'm stuck with this problem while designing aggregates in a DDD project.
Please consider the following scenario:
public abstract class BaseAppType
{
    public abstract int GetUserOwnerId();
    public List<AppTypeHost> Hosts { get; set; } = new List<AppTypeHost>();
}

public class PersonalAppType : BaseAppType // this is an aggregate root
{
    public override int GetUserOwnerId() { return Hosts.Select(h => h.UserId).Single(); }
}

public class TeamAppType : BaseAppType // this is another aggregate root
{
    public int TeamOwnerId { get; set; }

    public override int GetUserOwnerId()
    {
        // this is much harder because I don't have the info in this object
        throw new NotImplementedException();
    }
}

public class Team
{
    public List<TeamMember> TeamMembers = new List<TeamMember>();
}

public class TeamMember
{
    public int TeamId { get; set; }
    public int UserId { get; set; }
    public TeamMemberRole Role { get; set; } // this might be either Owner or Member
}
So basically I have two types of appointments that share common info, functionality, and shape via a base class.
Now I have to implement GetUserOwnerId in the two derived classes, which are two distinct aggregate roots.
In PersonalAppType it is fairly easy, because the owner's user id is inside one of the entities of the aggregate, so I simply query the object in memory and return it.
In TeamAppType it is more difficult, because the information lives in another aggregate root (basically, by my business rules, the owner of the TeamAppType is the owner of the Team aggregate root).
Since Team is another aggregate root I can't load it into the TeamAppType aggregate, and I'm pretty stuck...
I've tried:
the route of injecting a service into the TeamAppType so that I can call it within GetUserOwnerId, but I don't like it because it feels "wrong" to inject a service into a domain constructor, and it is awkward because when I retrieve the aggregate root from EF Core the service isn't injected (EF Core uses the default parameterless constructor).
I've also tried doing it in a domain service, something like this:
public class AppTypeOwnerResolverService
{
    public int GetUserOwnerId(BaseAppType appType)
    {
        switch (appType.GetType().Name)
        {
            case "PersonalAppType":
                // retrieve owner of PersonalAppType
                break;
            case "TeamAppType":
                // retrieve owner of TeamAppType
                break;
        }
        throw new NotImplementedException();
    }
}
but it feels off, because it looks like GetUserOwnerId should stay within the derived classes, and this loses the benefits of polymorphism.
Do you have any suggestions on how to approach this problem?
Thanks to everyone for the help.
Another option would be to have the Team aggregate emit domain events (e.g. TeamOwnerAssigned) and to have a domain event handler that modifies the TeamAppType aggregate based on that event.
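A minimal sketch of that idea, assuming a hypothetical ITeamAppTypeRepository and an AssignUserOwner method on TeamAppType (none of these names come from the question); the consequence is that TeamAppType keeps its own copy of the owner's user id, updated by the handler:
// Hypothetical domain event raised by the Team aggregate when its owner changes.
public class TeamOwnerAssigned
{
    public int TeamId { get; }
    public int OwnerUserId { get; }

    public TeamOwnerAssigned(int teamId, int ownerUserId)
    {
        TeamId = teamId;
        OwnerUserId = ownerUserId;
    }
}

// Handler that updates the affected TeamAppType aggregates in their own transaction.
public class TeamOwnerAssignedHandler
{
    private readonly ITeamAppTypeRepository _repository; // assumed repository abstraction

    public TeamOwnerAssignedHandler(ITeamAppTypeRepository repository)
    {
        _repository = repository;
    }

    public void Handle(TeamOwnerAssigned @event)
    {
        foreach (var appType in _repository.FindByTeamId(@event.TeamId))
        {
            appType.AssignUserOwner(@event.OwnerUserId); // denormalized owner id on TeamAppType
            _repository.Store(appType);
        }
    }
}
With this in place, GetUserOwnerId on TeamAppType can simply return the stored owner id, so the polymorphic behaviour stays in the derived class.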
I am learning DDD and just faced a problem that I can't solve.
Assume we have the following domain:
public class Hotel : AggregateRoot
{
    public List<Room> Rooms { get; private set; }
}

public class Room : Entity
{
    public string Name { get; set; }
    public int Number { get; set; }
}
and now we want to model RoomReservations.
public class RoomReservationRecord : Aggregate
{
    public string CustomerName { get; set; }
    public Room Room { get; set; } // <- this is the problem
    public DateTime DateFrom { get; set; }
    public DateTime DateTo { get; set; }
}
As clearly visible, two aggregates contain (share) a single entity. It makes sense from a business perspective; however, from a DDD perspective it looks like two aggregates share the same entity.
Is this approach correct, or does it violate the "an entity can be part of only a single aggregate" rule?
Or is there a better (obvious) way to model such a requirement?
I will try to point out a few things, but at the end of the day this is a modeling exercise and often there are compromises to be made.
Invariants - A big part (probably the primary one) of why you would group objects in an object graph is to make sure that certain rules are enforced. So if it were a business rule that NO ROOM CAN EVER BE DOUBLE BOOKED, then Hotel would probably be an aggregate root with rooms and reservations as entities on it.
Something like this... but it has some drawbacks. For example:
class Hotel
{
    // members (e.g. Rooms and Reservations) ...

    public Hotel(string name, ICollection<Room> rooms)
    {
        // ...
    }

    public bool TryMakeReservation(Reservation reservation)
    {
        // if booked already return false
        return true;
    }
}

class Room
{
    // members ...

    public Room(int number, bool isBooked)
    {
        // ..
    }
}

class Reservation
{
    public string CustomerName { get; private set; }
    public int RoomNumber { get; private set; }
    public DateTime DateFrom { get; private set; }
    public DateTime DateTo { get; private set; }

    Reservation(Parameters)
    {
        // ...
    }
}
NOTE: To use this model you would need to lock the Hotel down each time you make a booking!
This might not be acceptable for a busy hotel. There are ways around this, like reserving a room for 5 minutes before automatically releasing it unless the reservation is confirmed, or storing a list of events and, if two reservations for the same room exist over the same period without a checkout event, kicking off a process to notify someone to deal with the double booking.
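For illustration, the "if booked already return false" check inside Hotel.TryMakeReservation might look like the sketch below, assuming the Hotel keeps a private List<Reservation> _reservations (a field invented here, not part of the code above):
// Sketch only: belongs inside Hotel, replacing the stub above. Requires using System.Linq.
public bool TryMakeReservation(Reservation reservation)
{
    // Two reservations overlap when each starts before the other ends, for the same room.
    bool alreadyBooked = _reservations.Any(existing =>
        existing.RoomNumber == reservation.RoomNumber &&
        existing.DateFrom < reservation.DateTo &&
        reservation.DateFrom < existing.DateTo);

    if (alreadyBooked)
        return false; // the "no double booking" invariant is enforced inside the aggregate

    _reservations.Add(reservation);
    return true;
}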
Contexts - it might be that the list of rooms and the actual bookings are in separate domains. Think about how often bookings are made from multiple sources like Airbnb, Booking.com, the hotel website, and/or at the counter in person or over the phone. It might not make sense for bookings and the persistence of available rooms to be in the same domain. What about cleaning schedules? A room needs to be cleaned before it is available, but is that really handled in the Booking context?
Performance - as mentioned, sometimes the model we want just isn't possible because of the physics of getting that amount of data queried from a datastore. Users, Product Owners, etc. tend not to care how clean your model is if it affects performance too much.
Repositories - following on from the above point, since an aggregate should be a consistent type (i.e. the data in it should never be in an incorrect state), when you fetch your aggregate it should be consistent. If Hotel has a repository but contains rooms, and Room is an aggregate root with its own repository, and repositories are calling repositories, I would say you are failing at scaling complexity. The main point of DDD is giving a set of patterns and practices that help you deal with complexity. If applying DDD principles adds a step of complexity that is never paid back by smaller steps later as new features are added, then DDD was probably not the right tool to use at that point in the project.
To speak to the linked article on Ids: just not using primitive types can mitigate a lot of the problems here. Focusing on finding Value Objects can give a lot of clarity and really help express your domain. Even if you don't use DDD it is a valuable practice, which is why I wrote a series on them without ever mentioning DDD... I think.
I hope this helped. DDD, even more than FP to me I guess, has so many valuable ideas in it that help create maintainable code that scales with requirements. Better than that, it focuses on soft elements outside the code, like collaboration and a shared language, that bring even more value. But these are guidelines to be applied because you want a certain gain (and are willing to pay the incurred costs); they are not rules, and very seldom is there just a right and a wrong path.
In my attempt to start learning about DDD, I began with a simple domain model that I will gradually build up over time. My domain in this instance is the usual order > order item in order to keep things simple and be able to add on later with invoices, etc. Here's what I have so far:
public class Order
{
    private readonly IList<OrderItem> _orderItems;

    public Guid Id { get; private set; }
    public bool Completed { get; private set; }
    public DateTime Created { get; private set; }

    public IEnumerable<OrderItem> OrderItems
    {
        get { return _orderItems; }
    }

    public Order()
    {
        Id = Guid.NewGuid();
        Created = DateTime.UtcNow;
        _orderItems = new List<OrderItem>();
    }

    public void AddOrderItem(int quantity, int unitCost)
    {
        var orderItem = new OrderItem(quantity, unitCost);
        _orderItems.Add(orderItem);
    }

    public void CompleteOrder()
    {
        Completed = true;
    }
}

public class OrderItem
{
    public int Quantity { get; private set; }
    public int UnitCost { get; private set; }

    public OrderItem(int quantity, int unitCost)
    {
        Quantity = quantity;
        UnitCost = unitCost;
    }
}
I will eventually turn Quantity and UnitCost into value objects, but that isn't the important part here. As DDD preaches, we always want to protect our invariants, but I'm having a little trouble with one piece of that. From an Order, you can add a new OrderItem by calling the AddOrderItem() method and passing your quantity and unit cost.
My question now becomes: what is to stop another coder from creating a new OrderItem with var orderItem = new OrderItem(1, 2)? The OrderItem constructor should probably have an Order order parameter, since an OrderItem cannot exist without an Order, but then that other coder could just call new OrderItem(new Order(), 1, 2).
Am I missing something? Or is it just accepted that the team working on the model needs to understand the fundamentals of DDD?
Update
Thanks @theDmi, @guillaume31, @Matt, as you have all provided some good points. I think it is pretty clear at this point that the repository's interface should be enough to make it clear that you can't do anything with an OrderItem created by itself. Setting the ctor for OrderItem to internal also helps enforce this restriction, but it may not be needed. I plan to see what happens with or without the internal ctor. Ultimately, the reason I accepted @guillaume31's answer is the comment about bidirectional relationships. That makes a lot of sense, and I have encountered this issue in the past with EF for example, so I like the idea of keeping it unidirectional as well.
"An OrderItem cannot exist without an Order" is not really an invariant. Well at least it's not an invariant in the Order aggregate. By definition, invariants only look at things that are inside one aggregate (or span across multiple ones), not things that wander around outside an aggregate.
The OrderItem constructor should probably have an Order order
parameter since an OrderItem cannot exist without an Order
I wouldn't model it that way, because:
Bidirectional relationships between entities are not recommended. They can lead to synchronization problems (A points to B but B points to something else); it's better to have unidirectional relations if you can.
By doing that, your ultimate goal is to put a constraint on what happens outside an Aggregate, which is not really the point of DDD and, as other answers have shown, dispensable. All changes in a DDD system go through an Aggregate and a Repository.
When working with DDD, all attempts to change the state of the system run through a repository, because you need to retrieve the aggregate you want to work on first. So even if someone creates objects that make no sense outside of a certain entity, they will not be able to do anything useful with them.
Regarding this problem, DDD even has advantages over CRUD-based systems: it leads to high discoverability. First, the repository interface tells you what you can load. Then you get an aggregate, which in turn offers operations that modify the aggregate in a meaningful way.
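A minimal sketch of what that looks like in code, assuming a hypothetical IOrderRepository (not from the question); the internal constructor mentioned in the update is one way to keep OrderItem creation inside the Order aggregate:
// Hypothetical repository interface: the only entry point for loading/saving Orders.
public interface IOrderRepository
{
    Order GetById(Guid id);   // you can only load whole Orders, never bare OrderItems
    void Save(Order order);
}

public class OrderItem
{
    public int Quantity { get; private set; }
    public int UnitCost { get; private set; }

    // internal: only code in the Order aggregate's assembly can create items,
    // so new OrderItem(1, 2) is not available to outside callers.
    internal OrderItem(int quantity, int unitCost)
    {
        Quantity = quantity;
        UnitCost = unitCost;
    }
}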
We have an application which stores its data in two different databases. At some point in the future we may only be storing our data in one database, so we want it to be as painless as possible to make this kind of change. For this reason, we wrap our DbContexts in a single MyDataContext which gets injected into our UnitOfWork and Repository classes.
class MyDataContext : IDataContext
{
    internal Database1Context Database1;
    internal Database2Context Database2;
}

class UnitOfWork : IUnitOfWork
{
    MyDataContext myDataContext;

    public UnitOfWork(MyDataContext myDataContext)
    {
        this.myDataContext = myDataContext;
    }

    public void Save()
    {
        // todo: add transaction/commit/rollback logic
        this.myDataContext.Database1.SaveChanges();
        this.myDataContext.Database2.SaveChanges();
    }
}

class Database1Context : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}

class Database2Context : DbContext
{
    public DbSet<Customers> CustomerProfile { get; set; }
}

class CustomerRepository : ICustomerRepository
{
    MyDataContext myDataContext;

    public CustomerRepository(MyDataContext myDataContext)
    {
        this.myDataContext = myDataContext;
    }

    public Customer GetCustomerById(int id)
    {
        return this.myDataContext.Database1.Customers.Single(...);
    }
}
My first question is, am I doing it right? I've been doing a lot of reading, but admittedly DDD is a little bit overwhelming at this point.
My second question is: which layer of the application do the IUnitOfWork and IDataContext interfaces reside in? I know that the interfaces for repositories live in the Core/Domain layer/assembly of the application, but I'm not sure about these two. Should these two even have interfaces?
My first question is, am I doing it right?
You can do that, but first reconsider why you're storing data in different places in the first place. Are distinct aggregates at play? Furthermore, if you wish to commit changes to two different databases within a transaction, you will need to use 2-phase commit which is best to avoid. If you have different aggregates, perhaps you can save them separately?
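To illustrate the "save them separately" option, a sketch reusing the poster's context classes (the service and method names are invented here); each aggregate is committed against its own database, so no two-phase commit is needed, at the cost of eventual consistency between the two stores:
// Sketch only: two independent commits instead of one distributed transaction.
public class CustomerRegistrationService
{
    private readonly Database1Context _db1;
    private readonly Database2Context _db2;

    public CustomerRegistrationService(Database1Context db1, Database2Context db2)
    {
        _db1 = db1;
        _db2 = db2;
    }

    public void Register(Customer customer, Customers profile)
    {
        _db1.Customers.Add(customer);
        _db1.SaveChanges();                // transaction #1, database 1

        _db2.CustomerProfile.Add(profile);
        _db2.SaveChanges();                // transaction #2, database 2
        // if this second save fails, a retry or compensating action is needed
    }
}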
My second question is which layer of the application do the
IUnitOfWork and IDataContext interfaces reside in?
These can be placed in the application layer.
I have doubts about the way the domain should enforce business rules when there is more than one aggregate involved.
Suppose I have the account and external account aggregates:
public class Account {
    public String getId() {...}
    public void add(Double amount) {}
}

public class ExternalAccount {
    public String getId() {...}
    public void add(Double amount) {}
}
and this service:
public class TransferService implements TransferServiceInterface {
    public void transfer(String accountId, String externalAccountId, Double amount) {
        Account fromAccount = accRepository.get(accountId);
        ExternalAccount toAccount = extAccRepository.get(externalAccountId);
        transferIsValid(fromAccount, toAccount, amount);
        fromAccount.add(-amount);
        toAccount.add(amount);
    }
}
transferIsValid will throw an exception if the transfer doesn't comply with the domain rules.
How can I prevent the user of this model from bypassing the service and executing something like this:
Account fromAccount = accRepository.get(AccountId);
ExternalAccount toAccount = extAccRepository.get(ExternalAccountId);
fromAccount.add(-amount);
toAccount.add(amount);
The user didn't use the service and didn't use transferIsValid(...) to check the integrity. I believe that there is an error in my design because the user shouldn't be able to do something invalid. How can I prevent it? Where is the error in my design?
First of all: do not use add() to withdraw. DDD is all about following the domain, and I don't think you say "so when I add a negative amount of money to account A, the equal amount will be added to account B" when you talk to the product owner. Add a Withdraw method.
Remember: no users are involved when coding, programmers are. And all programmers can screw up the code.
Regarding the service: there is nothing you can do to prevent that with code, unless the only valid way to withdraw money is to transfer it to another account. In that case you can change the Withdraw() method to take another account as an argument.
Other than that, simply add documentation to your Withdraw method saying that the service should be used when two accounts are involved. IMHO any DDD developer should know that the service should be used, since that's how we do things in DDD (you and I did, and so should the next dev with DDD experience).
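A rough sketch of that last idea, simplified to a single Account type and written in C# like the next answer (the names and checks are invented here): the only public way to take money out is to transfer it somewhere, so the rules cannot be bypassed.
// Sketch only: Withdraw requires a destination account, so money can't leave
// an Account without the transfer rules being applied.
public class Account
{
    public decimal Balance { get; private set; }

    public void Withdraw(Account to, decimal amount)
    {
        if (amount <= 0 || amount > Balance)
            throw new InvalidOperationException("Invalid transfer amount");

        Balance -= amount;
        to.Add(amount);
    }

    // internal: outside callers can't add money directly, only via Withdraw on the source.
    internal void Add(decimal amount)
    {
        Balance += amount;
    }
}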
Business logic should be in a domain object, so instead of putting the business logic in TransferService, a better way, I think, to avoid business logic leaking into the service is to create a new entity called AccountTransfer which contains AccountFrom and AccountTo, something like this (sorry, I use C# here):
public class AccountTransfer
{
    Account From { get; set; }
    Account To { get; set; }
    // More properties

    private bool IsValid(int amount)
    {
        // ...
    }

    public void DoTransfer(int amount)
    {
        if (IsValid(amount))
        {
            From.Withdraw(amount);
            To.Add(amount);
        }
    }
}
You might need more information in the AccountTransfer object, like:
when to transfer
what kind of transfer: transfer via Visa, PayPal, ...
By persisting this class to the database, you also store the transfer history so you can trace it later.
This way, you also keep the IsValid method inside AccountTransfer as a private method.