DDD => behaviour in an aggregate root: instantiating another aggregate root - domain-driven-design

I have two aggregate roots:
- invoice
- complaint
And I have a rule that says: "I can't delete an invoice if a complaint is opened on it".
In the delete behaviour of my invoice aggregate I want to check whether a complaint exists, like this:
Complaint complaint = complaintRepository.findByInvoiceId(invoiceId);
if (complaint != null && complaint.isOpened()) {
    throw new Exception("Open complaint...");
} else {
    ...
}
My colleagues and I disagree on this.
They tell me that I can't instantiate a Complaint in my behaviour, since Complaint is not part of my aggregate.
My opinion is that I can't have a Complaint attribute in the Invoice class, but:
- I can refer to one with a value object (they are OK with this)
- I can read/load an instance, since I don't call behaviour on it...
Do you have an opinion on this?

Technically you can do what you're proposing: from a certain point of view, if you inject a ComplaintRepository interface into the Invoice, either through constructor injection or method injection, you make the Invoice dependent on the contracts of both the repository and the Complaint, and that's pretty much allowed.
You are right when you say you can't hold a reference to the complaint, but you can inject DDD artifacts (such as factories/repositories/entities) into operations when they're needed to run.
However, the main question you must ask yourself is: do you really want this level of coupling between two distinct aggregates? At this point they're so coupled that they mostly can't operate without one another.
Considering all of this, you might be in a scenario where the complaint is really just part of the invoice aggregate (although your invoice aggregate probably has other responsibilities, and you will start to struggle with the "design small aggregates" goal). If you think about it, that's what the invariant "I can't delete an invoice if a complaint is opened on it" is proposing.
If it's simply not practical for you to model the complaint as part of the invoice aggregate, you have some other options:
Make these aggregates eventually consistent: instead of trying to delete the invoice in "one shot", mark it as flagged for deletion in one operation. This operation triggers some sort of domain event in your messaging mechanism. A handler for this InvoiceFlaggedForDeletion event then checks for complaints on the invoice. If there are no complaints, you delete it; if there are, you roll back the deletion flag.
Put the deletion process in a domain service. That way, the domain service coordinates the effort of checking for complaints and deleting the invoice when appropriate. The downside of this approach is that your Invoice entity will be less explicit about its rules, but DDD-wise this is sometimes an acceptable approach.
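A minimal sketch of that domain-service option (the repository interfaces, the InvoiceId type and OpenComplaintException are assumed names, not taken from the question):

public class InvoiceDeletionService {

    private final ComplaintRepository complaintRepository;
    private final InvoiceRepository invoiceRepository;

    public InvoiceDeletionService(ComplaintRepository complaintRepository,
                                  InvoiceRepository invoiceRepository) {
        this.complaintRepository = complaintRepository;
        this.invoiceRepository = invoiceRepository;
    }

    // The coordination across the two aggregates lives here, not inside Invoice.
    public void deleteInvoice(InvoiceId invoiceId) {
        Complaint complaint = complaintRepository.findByInvoiceId(invoiceId);
        if (complaint != null && complaint.isOpened()) {
            throw new OpenComplaintException(invoiceId);
        }
        invoiceRepository.delete(invoiceId);
    }
}

Note that the race-condition caveat raised in the next answer still applies unless this runs inside a suitable transaction.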

This statement:
I have two aggregate roots: invoice and complaint
and this
And I have a rule that says: "I can't delete an invoice if a complaint is opened on it"
are mutually exclusive, if you follow the rule of not having a database transaction bigger than one Aggregate (and you should try to follow it, it's a good rule).
Aggregates are the transactional boundary; this means that what happens inside an Aggregate is strongly consistent with what will happen in the same Aggregate in the future (the invariants hold no matter what, and the Aggregate is always in a valid state).
However, what happens between different Aggregate instances is eventually consistent; this means that nothing can prevent the system (of multiple Aggregates) from entering an invalid state without higher-level coordination. Aggregates are responsible only for the data they own.
Code like yours:
Complaint complaint = complaintRepository.findByInvoiceId(invoiceId);
//
// at this point a new complaint could be added!!!
//
if (complaint != null && complaint.isOpened()) {
    throw new Exception("Open complaint...");
} else {
    invoiceRepository.delete(invoiceId); // and this would delete the invoice although there is now a complaint on it!!!
}
would fail to respect the business rule "I can't delete an invoice if a complaint is opened on it", unless it is wrapped in a bigger-than-a-single-Aggregate transaction.
That said, you have two DDD-ish options:
Review your design: merge the two Aggregates into one, for example by making the Complaint a nested entity inside the Invoice.
Use a higher-level coordinator that models the "deletion" of an Invoice as a long-running business process. For this you can use a Saga/Process manager. The "simplest" such Saga would also delete the Complaints that were added after the Invoice was deleted. A more complex Saga could even prevent a Complaint from being added after the Invoice was deleted (for this it would need to somehow intercept the opening of a Complaint).

Aggregate roots should not hold references to repositories. That approach has a number of issues. Instead, load all objects from repositories in the application service (command handler) and pass them to the domain for manipulation. If the manipulation spans multiple aggregates, either the domain model is wrong (a missing concept) or you might need a domain service. Either way, aggregates are best kept away from asking a repository for anything.
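A minimal sketch of that command-handler shape, with hypothetical command, repository and method names:

public class DeleteInvoiceCommandHandler {

    private final InvoiceRepository invoiceRepository;
    private final ComplaintRepository complaintRepository;

    public DeleteInvoiceCommandHandler(InvoiceRepository invoiceRepository,
                                       ComplaintRepository complaintRepository) {
        this.invoiceRepository = invoiceRepository;
        this.complaintRepository = complaintRepository;
    }

    public void handle(DeleteInvoice command) {
        // All loading happens in the application layer...
        Invoice invoice = invoiceRepository.findById(command.invoiceId());
        Complaint complaint = complaintRepository.findByInvoiceId(command.invoiceId());
        // ...while the rule itself stays in the domain.
        invoice.assertDeletable(complaint);
        invoiceRepository.delete(invoice);
    }
}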

Another consideration should be - what does delete an invoice mean in this domain?
See - http://udidahan.com/2009/09/01/dont-delete-just-dont/
In this case, if you challenge the domain experts, you may find that the requirement around 'deleting' invoices has come from their familiarity with databases: they have implicitly converted their real requirement into a solution in an attempt to be helpful.
Perhaps what they are really talking about is cancelling an invoice? Or archiving it? Or reversing it?
In any case, all these would allow you to model a state transition on the invoice without having to worry about 'orphan' complaints.
This would then prompt the consideration: what should happen to complaints if an invoice is cancelled? Should the owner of the complaint be notified? Should the complaint undergo its own state transition? This could be triggered by an InvoiceCancelled event.
In DDD, whenever you see requirements relating to deletion and concerns around orphan records, it's usually a hint that there is some deeper knowledge crunching to be done to understand the real intent of the domain.
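A sketch of what such a state transition might look like (the status enum, the reason parameter and the event type are assumptions, not part of the original question):

import java.util.ArrayList;
import java.util.List;

public class Invoice {

    private final InvoiceId id;
    private InvoiceStatus status;
    private final List<Object> pendingEvents = new ArrayList<>();

    public Invoice(InvoiceId id, InvoiceStatus status) {
        this.id = id;
        this.status = status;
    }

    public void cancel(String reason) {
        if (status == InvoiceStatus.CANCELLED) {
            return; // idempotent: cancelling twice is a no-op
        }
        status = InvoiceStatus.CANCELLED;
        // The complaint side reacts to this event (notify the owner, transition the complaint, ...).
        pendingEvents.add(new InvoiceCancelled(id, reason));
    }

    public List<Object> pendingEvents() {
        return List.copyOf(pendingEvents);
    }
}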

Related

How to reduce the higher time complexity brought by DDD

After reading the book Domain-Driven Design and some chapters of Implementing Domain-Driven Design, I am finally trying to use DDD in a small service of a microservice system, and I have some questions.
Here we have the entities Namespace, Group, and Resource. They are also aggregate roots:
As the picture shows, users have many Namespaces. Every Namespace has Groups, and every Group has Resources.
But I have a business rule:
The Group should have a unique name in its Namespace. (It is useful so that the user can find a Group by its name.)
To enforce it, I need to do the following steps in the application layer to add a Group, with time complexity O(n):
Get the Namespace by its ID from the Repository of Namespace. It has a field Groups, whose type is []GroupID.
Get the []Group value from the Repository of Group using the []GroupID value.
Check whether the name of the new Group is unique among the existing Groups we fetched.
If it is unique, use the Repository of Group to save it.
But I think that if I just use a simple transaction script, I can do this in O(log n), because I can make the Group name unique in the database. How can I do it in DDD?
My thinking is:
I should add a comment on the save method of the Repository interface for Group, to let the caller know that save will check that the name is unique within the same Namespace.
Or should we use CQRS to check whether the Group name is unique? Another question: a Namespace may have a lot of Groups. Even though we only keep the IDs of the Groups in the Namespace entity, it still costs a lot of space, and how do we paginate the data? If we only want to get the name of a Namespace by its ID, why do we need to load all those Group IDs?
I do not want DDD to limit me, but I still want to know the best practices. Until I understand better, I try to avoid breaking the rules.
My solution:
Thanks for the answer by @voiceofunreason. I still find it hard to write code for set validation in the domain layer.
@voiceofunreason tells me that I need to consider the real world. I do, and I am still confused about how to implement this without breaking DDD rules. (Sorry, but my question is not whether we need the constraint; my question is HOW to enforce the constraint (the domain logic) without higher time complexity.)
To be honest, I only have MongoDB for storing all the data. If I use a Transaction Script, everything is easy (see the sketch after these steps):
Create a unique index on the Group name (scoped to its Namespace) to make sure the names are unique.
Just insert the new Group. If the database raises a duplicate-key error, reject the user's request.
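A minimal sketch of that transaction-script path with the MongoDB Java driver (the collection, field names and the exception thrown to the caller are assumptions):

import com.mongodb.ErrorCategory;
import com.mongodb.MongoWriteException;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.IndexOptions;
import com.mongodb.client.model.Indexes;
import org.bson.Document;

public class GroupStore {

    private final MongoCollection<Document> groups;

    public GroupStore(MongoCollection<Document> groups) {
        this.groups = groups;
        // The database enforces the set constraint: unique (namespaceId, name), checked in O(log n).
        groups.createIndex(
                Indexes.compoundIndex(Indexes.ascending("namespaceId"), Indexes.ascending("name")),
                new IndexOptions().unique(true));
    }

    public void insert(String namespaceId, String name) {
        try {
            groups.insertOne(new Document("namespaceId", namespaceId).append("name", name));
        } catch (MongoWriteException e) {
            if (e.getError().getCategory() == ErrorCategory.DUPLICATE_KEY) {
                throw new IllegalStateException("Group name already used in this Namespace", e);
            }
            throw e;
        }
    }
}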
But if I want to follow DDD and put the logic into the domain layer, I do not even know where to put it (it is easy in a Transaction Script, right?). It really makes me feel blue. So my solution is:
Use DDD to split the whole project into many bounded contexts.
Not worry about whether we use DDD or something else inside each bounded context. (So tired I am.)
In this bounded context, just use a Transaction Script.
Is DDD simply not well suited to enforcing a condition over a set of entities? DDD always seems to want to load all the data from the database rather than doing the work in the database, which sometimes makes the time complexity higher, and I still do not know how to avoid it. Maybe I am wrong; if I am, please comment or post a new answer, thanks a lot.
The Group should have a unique name in its Namespace.
The general term for this problem is set validation. We have some collection of items, and we want to ensure that some condition holds over the entire set....
What is the business impact of having a failure? This is the key question we need to ask, and it will drive our solution in how to handle this issue, as we have many choices of varying degrees of difficulty. -- Greg Young, 2010
Some questions to consider include: is this a real constraint of the domain, or just an attempt at proofreading? Are we the authority for this data, or are we just storing a local copy of data that belongs to someone else? When we have conflicting information, can the computer determine whether the older or newer entry is in error? Does the business currently have a remediation process to use when the set condition doesn't hold? Can the business tolerate a conflict for some period of time (until end of day? minutes? nanoseconds?)
(In thinking about this last question, you may want to review Race Conditions Don't Exist, by Udi Dahan).
If the business requirement really is "we must never write conflicting entries into the collection", then any change you make must lock the collection against any potential conflicts. And this in turn has implications about, for example, how you can store the collection (trying to enforce a condition on a distributed collection is an expensive problem to have).
For the case where you can say: it makes sense to throw all of this data into a single relational database, then you might consider that the domain model is just going to make a "best effort" to avoid conflicts, and then re-enforce that with a "real" constraint in the data model.
You don't get bonus points for doing it the hard way.

DDD Modify one aggregate per transaction with invariants in both aggregates

Suppose I have an aggregate root Tenant and an aggregate root Organization. Multiple Organizations can be linked to a single Tenant. The Tenant only holds the Ids of the Organizations in its aggregate.
Suppose I have the following invariant in the Organization aggregate: Organization can only have one subscription for a specific product type.
Suppose I have the following invariant in the Tenant aggregate: only one subscription for a product type must exist across all Organizations related to a Tenant.
How can we enforce those invariants using the one aggregate per transaction rule?
When adding a subscription to an Organization, we can easily validate the first invariant, and fire a domain event to update (eventual consistency) the Tenant, but what happens if the invariant is violated in the Tenant aggregate?
Does it imply firing another domain event to roll back what happened in the Organization aggregate? That seems tricky if a response has already been sent to the UI after the first aggregate was modified successfully.
Or is the real approach here to use a domain service to validate the invariants of both aggregates before initiating the update? If so, do we place the invariants/rules inside the domain service directly, or do we place some kind of boolean validation methods on the aggregates to keep the logic there?
UPDATE
What if the UI must prevent the user from saving when one of the invariants is violated? In that case we are not even trying to update an aggregate.
One thing you might want to consider is the possibility of a missing concept in your domain. You might want to explore the possibility of your scenario having something as a Subscription Plan concept, which by itself is an aggregate and enforces all of these rules you're currently trying to put inside the Tenant/Organization aggregates.
When facing such scenarios I tend to think to myself "what would an organization do if there was no system at all facilitating this operation". In your case, if there were multiple people from the same tenant, each responsible for an organization... how would they synchronize their subscriptions to comply with the invariants?
In such an exercise, you will probably reach some of the scenarios already explored:
Have a gathering event (such as a conference call) to make sure no redundant subscriptions are being made: that's the Domain Service path.
Each makes their own subscriptions and they notify each other, eventually charging back redundant ones: that's the Event + Rollback path.
They might compromise and keep a shared ledger where they can check how subscriptions are going corporation-wide, and the ledger is the authority in such decisions: that's the missing-aggregate path.
You will probably reach other options if you stress the issue enough.
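A minimal sketch of the "shared ledger" / missing-aggregate path (all type names here are assumptions): one aggregate owns every subscription in the tenant, so both invariants become a single, strongly consistent check.

import java.util.HashMap;
import java.util.Map;

public class TenantSubscriptions {

    private final TenantId tenantId;
    private final Map<ProductType, OrganizationId> subscriptionsByProduct = new HashMap<>();

    public TenantSubscriptions(TenantId tenantId) {
        this.tenantId = tenantId;
    }

    public void subscribe(OrganizationId organizationId, ProductType productType) {
        // This aggregate owns the whole set of subscriptions for the tenant,
        // so the uniqueness check never depends on data outside its boundary.
        if (subscriptionsByProduct.containsKey(productType)) {
            throw new IllegalStateException(
                    "Tenant " + tenantId + " already has a subscription for " + productType);
        }
        subscriptionsByProduct.put(productType, organizationId);
    }
}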
How can we enforce those invariants using the one aggregate per transaction rule?
There are a few different answers.
One is to abandon the "rule" - limiting yourself to one aggregate per transaction isn't important. What really matters is that all of the objects in the unit of work are stored together, so that the transaction is an all or nothing event.
BEGIN TRANSACTION
UPDATE ORGANIZATION
UPDATE TENANT
COMMIT
A challenge in this design is that the aggregates no longer describe atomic units of storage - the fact that this organization and this tenant need to be stored in the same shard is implicit, rather than explicit.
Another is to redesign your aggregates - boundaries are hard, and it's often the case that our first choice of boundaries is wrong. Udi Dahan, in his talk Finding Service Boundaries, observed that (as an example) the domain behaviors associated with a book title usually have little or nothing to do with the book price; they are two separate things that have a relation to a common thing, but they have no rules in common. So they could be treated as parts of separate aggregates.
So you can redesign your Organization/Tenant boundaries to more correctly capture the relations between them. Thus, all of the relations that we need to correctly evaluate this rule are in a single aggregate, and therefore necessarily stored together.
The third possibility is to accept that these two aggregates are independent of each other, and the "invariant" is more like a guideline than an actual rule. The two aggregates act like participants in a protocol, and we design into the protocol not only the happy path, but also the failure modes.
The simple forms of these protocols, where we have reversible actions to unwind from a problem, are called sagas. Caitie McCaffrey gave a well received talk on this in 2015, or you could read Clemens Vasters or Bernd Rücker; Garcia-Molina and Salem introduced the term in their study of long lived transactions.
Process Managers are another common term for this idea of a coordinated protocol, where you might have a more complicated graph of states than commit/rollback.
The first idea that came to my mind is to have a property on the Organization called "tenantHasSubscription" that can be updated via domain events. Once you have this property you can enforce the invariant in the Organization aggregate.
If you want to be 100% sure that the invariant is never violated, all SubscribeToProduct(TenantId, OrganizationId) commands have to be managed by the same aggregate (maybe the Tenant), which internally has all the values needed to check the invariant.
Otherwise, to perform your operation you will always have to query for an "external" value (from the aggregate's point of view), and this introduces "latency" into the operation that opens a window for inconsistency.
If you query a db for values, it can happen that while the result is on the wire somebody else is updating the data, because the db doesn't wait for you to consume your read before allowing others to modify it; so your aggregate may use stale data to check its invariants.
Obviously this is an extreme view. It doesn't mean it is necessarily dangerous, but you have to weigh the probability of a failure happening, how you will be warned when it happens, and how to resolve it (automatically by the program, or perhaps by manual intervention, depending on the situation).
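A sketch of that first idea, assuming an event-driven setup and entirely hypothetical names: a handler listens to tenant-level events and keeps the denormalized flag on each Organization up to date, so the Organization can check it locally (eventually consistent, with the window described above).

public class TenantSubscriptionSynchronizer {

    private final OrganizationRepository organizations;

    public TenantSubscriptionSynchronizer(OrganizationRepository organizations) {
        this.organizations = organizations;
    }

    // Invoked by the messaging infrastructure whenever a subscription is added anywhere in the tenant.
    public void on(SubscriptionAddedToTenant event) {
        for (Organization organization : organizations.findByTenantId(event.tenantId())) {
            organization.recordTenantSubscription(event.productType());
            organizations.save(organization);
        }
    }
}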

Domain-Driven Design: How to design relational aggregates with a dependency

My domain is about Program Management. I have a Program (aggregate root) that must have a Customer (aggregate root). So I require a CustomerID when creating a new Program, as I have read that aggregates should only reference other aggregates by ID.
Here are my business rules:
Customers can become active and inactive over time.
If a Customer is inactivated for some reason, all programs associated with that Customer should also be inactivated.
A Program cannot be activated if its Customer is inactive.
Rules #1 & #2 I have implemented. It's #3 that is stumping me.
I can think of 3 solutions:
Program holds reference to the Customer aggregate.
Introduce a domain service that checks if the Customer is active and pass it to Program.Activate(CustomerActiveCheckService service).
Have the application service look up the Customer and pass it to Program.Activate(Customer customer).
Which is the best solution?
Update
I see both points of view made by @ConstaninGALBENU and @plalx, and I want to suggest a compromise. Can I create a CustomerStatusChecker service? The method would have the following signature: CustomerStatus CheckStatus(CustomerID id). I could then pass Program the service like so: Program.Activate(CustomerStatusChecker service).
Are there any problems with this design?
Which is the best solution?
There isn't a best solution; there are trade offs.
But one possible solution that is consistent with requirements #2 and #3 is that your existing model is wrong -- that Program entities are not isolated aggregates, but are part of the Customer entity, and therefore should be controlled by the same aggregate root.
Hints that this might be the case: that the life cycle of a Program fits within the life cycle of a Customer; that Programs don't normally migrate from one Customer to another, that there are limits to the count of active programs per customer.
Another possibility is that the requirements are "wrong". One way of exploring this is to review whether active/inactive is a decision made by the model, or if it is a decision made somewhere else and reported to the model. Another is to examine the cost to the business if this "rule" is violated.
If the model doesn't find out about the customer right away, or it is an inexpensive problem, then you probably have some room to detect the conflict and report it to a human, rather than trying to have the model do all of the work (See: Greg Young, Stop Over Engineering).
In these cases, having the main code path take a good guess, and implementing an alternative path that operators can use to fix the mistakes, is fine.
In choosing between solution #2 and #3 (I don't like #1 at all), I encourage keeping I/O actions out of the model. So unless you already have the latest version of the Customer in memory, I'm not fond of the domain service as a choice. Passing in a copy of the customer state to the domain model keeps the I/O concerns in the application component, where they belong (see Boundaries, by Gary Bernhardt, for more on this idea).
Solution 1: it breaks the rule about not holding references to other aggregate instances. That rule ensures that only one Aggregate is modified in a transaction. If you need to modify multiple aggregates in a single transaction then your design is definitely wrong.
Solution 2: I really don't like injecting services inside aggregates. My aggregates are pure functions with no touching of the outside world (I/O, repositories or the like).
Solution 3 is somewhat equivalent to 1, even if it is a temporary reference (Program could call command methods on Customer, thus modifying Customer in the same transaction boundary as Program).
My solution: make that check inside the application service, before the call to Program.activate(), or pass a customerStatus to Program.activate() and let the Program aggregate decide whether to throw an exception or emit events.
Update:
The idea is that you should pass only read-only/immutable data to the Program AR, to ensure that it does not modify other ARs within its transactional boundary. Also, we should not make Program dependent on things it does not need, like the entire Customer AR.
Also, if the architecture is event-driven, then by listening to the right events emitted by Customer you can keep the Program AR in sync: you make it "non-activatable" if it is not already activated, or you deactivate it if it is already activated, using for example a Saga.
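A sketch of that solution with hypothetical names (the repositories, statusOf query and ID types are assumptions): the application service reads the customer status and passes only that immutable value into the aggregate.

public class ActivateProgramHandler {

    private final ProgramRepository programs;
    private final CustomerRepository customers;

    public ActivateProgramHandler(ProgramRepository programs, CustomerRepository customers) {
        this.programs = programs;
        this.customers = customers;
    }

    public void handle(ProgramId programId) {
        Program program = programs.findById(programId);
        CustomerStatus status = customers.statusOf(program.customerId()); // read-only value
        program.activate(status); // the aggregate decides: throw or emit events
        programs.save(program);
    }
}

public class Program {

    private CustomerId customerId;
    private boolean active;

    public CustomerId customerId() {
        return customerId;
    }

    public void activate(CustomerStatus customerStatus) {
        if (customerStatus != CustomerStatus.ACTIVE) {
            throw new IllegalStateException("Cannot activate a Program for an inactive Customer");
        }
        this.active = true;
    }
}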

What if domain event failed?

I am new to DDD and am now looking at domain events. I am not sure whether I understand domain events correctly, but I am wondering what happens if publishing a domain event fails.
I have a case here. When a buyer orders something from my website, we first create an Order object with its line items. The domain event OrderWasMade is then published to deduct the stock in Inventory. Here is the case: when the event is handled, the item quantity is supposed to be deducted, but what if the system finds out that there is no stock remaining for the item (amount = 0)? The stock can't be deducted, but the order has already been committed.
Will this kind of scenario happen?
Sorry to squeeze in two other questions here.
It seems like each event will be in its own transaction scope, which means the system needs to open multiple connections to the database at once. So if I am using IIS, I must enable DTC, am I correct?
Is there any relationship between domain-events and domain-services?
A domain event never fails because it's a notification of things that happened (note the past tense). But the operation which will generate that event might fail and the event won't be generated.
The scenario you told us shows that you're not really doing DDD, you're doing CRUD using DDD words. Yes, I know you're new to it, don't worry, everybody misunderstood DDD until they got it (but it might take some time and plenty of practice).
DDD is about identifying the domain model abstraction, which is not code. Code is when you're implementing that abstraction. It's very obvious you haven't done the proper modelling, because the domain expert should tell you what happens if products are out of stock.
Next, there are no db/ACID transactions at this level. Those are an implementation detail. The way DDD works is by identifying where the business needs things to be consistent together, and that's called an aggregate.
The order was submitted, and this is where that use case stops. When you publish the OrderWasMade event, another use case (deducting the inventory or whatever) is triggered. This is a different business scenario, related to but not part of "submit order". If there isn't enough stock, then another event is published, NotEnoughInventory, and another use case is triggered. We follow the business here, and we identify each step that the business does in order to fulfil the order.
The art of DDD consists in understanding and identifying granular business functionality, the involved aggregates, the business behaviour which makes decisions, etc., and this has nothing to do with the database or transactions.
In DDD the aggregate is the only place where a unit of work needs to be used.
To answer your questions:
It seems like each event will be in its own transaction scope, which means the system needs to open multiple connections to the database at once. So if I am using IIS, I must enable DTC, am I correct?
No; transactions, events and distributed transactions are different things. IIS is a web server; I think you mean SqlServer. You're always opening multiple connections to the db in a web app, and DTC has nothing to do with it. Actually, the question tells me that you need to read a lot more about DDD, and not just Evans' book. To be honest, from a DDD point of view what you're asking doesn't make much sense. You know one of the principles of DDD: the db (as in persistence details) doesn't exist.
Is there any relationship between domain-events and domain-services?
They're both part of the domain but they have different roles:
Domain events tell the world that something changed in the domain
Domain services encapsulate domain behaviour which doesn't have its own persisted state (like Calculate Tax)
Usually an application service (which acts as the host of a business use case) will use a domain service to verify constraints or to gather the data required to change an aggregate, which in turn will generate one or more events. Aggregates are what gets persisted, and an aggregate is always persisted in an atomic manner, i.e. a db transaction / unit of work.
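A sketch of that collaboration (all names here are illustrative, not an established API):

public class PlaceOrderService {

    private final OrderRepository orders;
    private final TaxCalculator taxCalculator; // domain service: pure behaviour, no persisted state
    private final EventPublisher publisher;

    public PlaceOrderService(OrderRepository orders,
                             TaxCalculator taxCalculator,
                             EventPublisher publisher) {
        this.orders = orders;
        this.taxCalculator = taxCalculator;
        this.publisher = publisher;
    }

    public void handle(PlaceOrder command) {
        Order order = Order.place(command.lines(), taxCalculator); // aggregate records OrderWasMade
        orders.save(order);                                        // one aggregate, one atomic unit of work
        publisher.publish(order.pendingEvents());                  // other use cases react to the events
    }
}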
what will happen if domain event published failed?
MikeSW already described this - publishing the event (which is to say, making it part of the history) is a separate concern from consuming the event.
what if, when the system tries to deduct the stock, it finds out that there is no stock remaining for the item (amount = 0)? The stock can't be deducted, but the order has already been committed.
Will this kind of scenario happen?
So the DDD answer is: ask your domain experts!
If you sit down with your domain experts, and explore the ubiquitous language, you are likely to discover that this is a well understood exception to the happy path for ordering, with an understood mitigation ("we mark the status of the order as pending, and we check to see if we've already ordered more inventory from the supplier..."). This is basically a requirements discovery exercise.
And when you understand these requirements, you go do it.
Go do it typically means a "saga" (a somewhat misleading and overloaded use of the term); a business process/workflow/state machine implementation that keeps track of what is going on.
Using your example: OrderWasMade triggers an OrderFulfillment process, which tracks the "state" of the order. There might be an "AwaitingInventory" state where OrderFulfillment parks until the next delivery from the supplier, for example.
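A minimal sketch of such a process manager (the states and event types are assumptions for illustration):

public class OrderFulfillment {

    enum State { AWAITING_INVENTORY, INVENTORY_RESERVED, CANCELLED }

    private final OrderId orderId;
    private State state = State.AWAITING_INVENTORY;

    public OrderFulfillment(OrderId orderId) {
        this.orderId = orderId;
    }

    public void on(InventoryReserved event) {
        state = State.INVENTORY_RESERVED;
    }

    public void on(InventoryOutOfStock event) {
        // Park here until the next delivery from the supplier, or escalate to a human after a timeout.
        state = State.AWAITING_INVENTORY;
    }

    public void on(OrderCancelled event) {
        state = State.CANCELLED;
    }
}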
Recommended reading:
http://udidahan.com/2010/08/31/race-conditions-dont-exist/
http://udidahan.com/2009/04/20/saga-persistence-and-event-driven-architectures/
http://joshkodroff.com/blog/2015/08/21/an-elegant-abandoned-cart-email-using-nservicebus/
If you need the stock to be immediately consistent at all times, a common way of handling this in event-sourced systems (it also works in non-event-based systems; this is really orthogonal) is to rely on optimistic locking at the event-store level.
Each append basically carries the revision number it expects the stream of events to be at in order to take effect. Once the event hits the persistent store, that expected revision is checked against the stream's actual revision; if they don't match, a conflict exception is raised and the transaction is aborted.
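A sketch of that check against a hypothetical event-store interface (none of this is a specific product's API):

import java.util.List;

public interface EventStore {
    long currentRevision(String streamId);
    // Throws a concurrency exception if the stream is no longer at expectedRevision.
    void append(String streamId, long expectedRevision, List<Object> events);
}

public class ReserveStockHandler {

    private final EventStore store;

    public ReserveStockHandler(EventStore store) {
        this.store = store;
    }

    public void handle(String itemStreamId, int quantity) {
        long expected = store.currentRevision(itemStreamId);
        // ...rebuild the item state from its events and decide whether the reservation is allowed...
        // If another writer appended in the meantime, the revisions no longer match and the
        // append is rejected, so the command can be retried against the fresh state.
        store.append(itemStreamId, expected, List.of(new StockReserved(itemStreamId, quantity)));
    }
}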
Now, as @MikeSW pointed out, depending on your business requirements, stock checking can be an out-of-band process that handles the problem in an eventually consistent way. "Eventually" can range from milliseconds, if another part of the process takes over immediately, to hours, if an email is sent and human action needs to be taken.
In other words, if your domain requires it, you can choose to trade this sequence of events
(OrderAbortedOutOfStock)
for
(OrderMade, <-- Some amount of time --> OrderAbortedOutOfStock)
which amounts to the same aggregate state in the end

How should I enforce relationships and constraints between aggregate roots?

I have a couple of questions regarding references between two aggregate roots in a DDD model. Refer to the typical Customer/Order model diagrammed below.
First, should references between the actual object implementations of aggregates always be done through ID values and not object references? For example, if I want details on the customer of an Order, I would need to take the CustomerId and pass it to an ICustomerRepository to get a Customer, rather than setting up the Order object to return a Customer directly, correct? I'm confused because returning a Customer directly seems like it would make writing code against the model easier, and it is not much harder to set up if I am using an ORM like NHibernate. Yet I'm fairly certain this would violate the boundaries between aggregate roots/repositories.
Second, where and how should a cascade-on-delete relationship be enforced for two aggregate roots? For example, say I want all the associated Orders to be deleted when a Customer is deleted. The ICustomerRepository.DeleteCustomer() method should not be referencing the IOrderRepository, should it? That seems like it would break the boundaries between the aggregates/repositories. Should I instead have a CustomerManagement service which handles deleting Customers and their associated Orders, and which references both an IOrderRepository and an ICustomerRepository? In that case, how can I be sure that people know to use the service and not the repository to delete Customers? Is that just down to educating them on how to use the model correctly?
First, should references between aggregates always be done through ID values and not actual object references?
Not really - though some would make that change for performance reasons.
For example if I want details on the customer of an Order I would need to take the CustomerId and pass it to a ICustomerRepository to get a Customer rather then setting up the Order object to return a Customer directly correct?
Generally, you'd model 1 side of the relationship (eg., Customer.Orders or Order.Customer) for traversal. The other can be fetched from the appropriate Repository (eg., CustomerRepository.GetCustomerFor(Order) or OrderRepository.GetOrdersFor(Customer)).
Wouldn't that mean that the OrderRepository would have to know something about how to create a Customer? Wouldn't that be beyond what OrderRepository should be responsible for...
The OrderRepository would know how to use an ICustomerRepository.FindById(int). You can inject the ICustomerRepository. Some may be uncomfortable with that, and choose to put it into a service layer - but I think that's overkill. There's no particular reason repositories can't know about and use each other.
I'm confused because returning a Customer directly seems like it would make writing code against the model easier, and is not much harder to setup if I am using an ORM like NHibernate. Yet I'm fairly certain this would be violating the boundaries between aggregate roots/repositories.
Aggregate roots are allowed to hold references to other aggregate roots. In fact, anything is allowed to hold a reference to an aggregate root. An aggregate root cannot hold a reference to a non-aggregate root entity that doesn't belong to it, though.
Eg., Customer cannot hold a reference to OrderLines - since OrderLines properly belongs as an entity on the Order aggregate root.
Second, where and how should a cascade on delete relationship be enforced for two aggregate roots?
If (and I stress if, because it's a peculiar requirement) that's actually a use case, it's an indication that Customer should be your sole aggregate root. In most real-world systems, however, we wouldn't actually delete a Customer that has associated Orders - we may deactivate them, move their Orders to a merged Customer, etc. - but not out and out delete the Orders.
That being said, while I don't think it's pure-DDD, most folks will allow some leniency in following a unit of work pattern where you delete the Orders and then the Customer (which would fail if Orders still existed). You could even have the CustomerRepository do the work, if you like (though I'd prefer to make it more explicit myself). It's also acceptable to allow the orphaned Orders to be cleaned up later (or not). The use case makes all the difference here.
Should I instead have a CustomerManagement service which handles deleting Customers and their associated Orders, and which references both an IOrderRepository and an ICustomerRepository? In that case, how can I be sure that people know to use the service and not the repository to delete Customers? Is that just down to educating them on how to use the model correctly?
I probably wouldn't go a service route for something so intimately tied to the repository. As for how to make sure a service is used...you just don't put a public Delete on the CustomerRepository. Or, you throw an error if deleting a Customer would leave orphaned Orders.
Another option would be to have a value object describing the association between the Order and Customer ARs, a VO which contains the CustomerId and any additional information you might need - name, address, etc. (something like ClientInfo or CustomerData).
This has several advantages:
Your ARs are decoupled - and now can be partitioned, stored as event streams etc.
In the Order AR you usually need to keep the information you had about the customer at the time the order was created, and not reflect onto it any future changes made to the customer.
In almost all cases the information in the value object will be enough to perform the read operations (display the customer info with the order).
To handle the deletion/deactivation of a Customer you have the freedom to choose any behavior you like. You can use domain events and publish a CustomerDeleted event, for which you can have a handler that moves the Orders to an archive, or deletes them, or whatever you need. You can also perform more than one operation on that event.
If for whatever reason DomainEvents are not your choice you can have the Delete operation implemented as a service operation and not as a repository operation and use a UOW to perform the operations on both ARs.
I have seen a lot of problems like this when trying to do DDD, and I think the source of the problems is that developers/modellers have a tendency to think in DB terms. You (we :) ) have a natural tendency to remove redundancy and normalize the domain model. Once you get over it, allow your model to evolve, and involve the domain expert(s) in its evolution, you will see that it's not that complicated and it's quite natural.
UPDATE: a similar VO - OrderInfo - can be placed inside the Customer AR if needed, with only the needed information - order total, order item count, etc.
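A sketch of such a value object (field names are illustrative assumptions):

public final class CustomerData {

    private final String customerId;
    private final String name;
    private final String billingAddress;

    public CustomerData(String customerId, String name, String billingAddress) {
        this.customerId = customerId;
        this.name = name;
        this.billingAddress = billingAddress;
    }

    // A snapshot of the customer at order-creation time; later changes to the
    // Customer AR do not flow back into existing Orders.
    public String customerId()     { return customerId; }
    public String name()           { return name; }
    public String billingAddress() { return billingAddress; }
}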
