Check command for validity with data from other aggregate

I am currently working on my first bigger DDD application. So far it works pretty well, but we have been stuck on an issue since the early days that I cannot stop thinking about:
In some of our aggregates we keep references to another aggregate-root that is pretty essential for the whole application (based on their IDs, so there are no hard references - deletion is likewise handled via events/eventual consistency). Now, when we create a new entity "Entity1", we send a new CreateEntity1Command that contains the ID of the referenced aggregate-root.
Now how can I check whether this referenced ID is a valid one? Right now we check it by reading from the other aggregate (without modifying anything there) - but this approach somehow feels dirty. I would like to just "trust" the commands, because the ID cannot be entered manually but must be selected. The problem is that our application is a web application, and it is not really safe to trust the user input you get there (even though it is not accessible to the public).
Did I overlook any possible solutions to this problem, or should I just ignore the feeling that there needs to be a better solution?

Verifying that another referenced Aggregate exists is not the responsibility of an Aggregate; that would break the Single Responsibility Principle. When the CreateEntity1Command arrives at the Aggregate, the other referenced Aggregate should be assumed to be in a valid state, i.e. it exists.
Being outside the Aggregate's boundary, this check is eventually consistent. This means that, even if it initially passes, it could become invalid afterwards (i.e. the referenced Aggregate is deleted, unpublished or enters some other invalid domain state). You need to ensure that:
the command is rejected if the referenced Aggregate does not yet exist. You do this check in the Application service that is responsible for the use case, before dispatching the command to the Aggregate, using a Domain service (see the sketch below).
if the referenced Aggregate enters an invalid state afterwards, the correct actions are taken. You should do this inside a Saga/Process manager. If CQRS is used, you subscribe to the relevant events; if not, you use a cron job. What the correct action is depends on your domain, but the main idea is that it should be modeled as a process.
So, long story short, the responsibility of an Aggregate does not extend beyond its consistency boundary.
P.S. Resist the temptation to inject services (Domain or not) into Aggregates (through constructor or method arguments).
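To illustrate the first point - the existence check living in the Application service, with the Aggregate staying oblivious - here is a minimal sketch. All names here (Entity1, AggregateExistenceChecker, the repository methods) are assumptions made for illustration, not an established API.

```python
class ReferencedAggregateNotFound(Exception):
    pass


class Entity1:
    """The aggregate trusts the command; it performs no lookup itself."""
    def __init__(self, entity1_id, referenced_id):
        self.id = entity1_id
        self.referenced_id = referenced_id


class AggregateExistenceChecker:
    """Domain service: answers 'does this aggregate-root ID exist?'."""
    def __init__(self, referenced_repository):
        self._repository = referenced_repository

    def exists(self, aggregate_id) -> bool:
        return self._repository.find_by_id(aggregate_id) is not None


class CreateEntity1Handler:
    """Application service for the use case: check first, then dispatch."""
    def __init__(self, checker, entity1_repository):
        self._checker = checker
        self._entity1_repository = entity1_repository

    def handle(self, command):
        # Reject the command before it ever reaches the Aggregate.
        if not self._checker.exists(command.referenced_id):
            raise ReferencedAggregateNotFound(command.referenced_id)
        entity1 = Entity1(command.entity1_id, command.referenced_id)
        self._entity1_repository.save(entity1)
```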

Direct Aggregate-to-Aggregate interaction is an anti-pattern in DDD. An aggregate A should not directly send a command or query to an aggregate B. Aggregates are strict consistency boundaries.
I can think of 2 solutions to your problem. Let's say you have 2 aggregate roots (ARs) - A and B. Each AR has a bunch of command handlers, where each command raises 1 or more events. Your command handler in A depends on some data in B.
You can subscribe to the events raised by B and maintain the state of B in A. You can subscribe only to the events which dictate the validity.
You can have a completely independent service (S) coordinating between A and B. Instead of sending your request directly to A, send it to S, which would be responsible for querying B (to check the validity of the referenced ID) and then forwarding the request to A. This is sometimes called a Process Manager (PM).
For example, in your case, when you are creating a new entity "Entity1", send this request to a PM whose job is to validate that the data in your request is valid and then route the request to the aggregate responsible for creating "Entity1". That is, send the new CreateEntity1Command to the PM, which uses the ID of the referenced AR to make sure it is valid; only if it is valid does it pass your request forward.
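A rough sketch of such a PM, under assumed names (the read-side repository for B and the command handler for A are stand-ins for whatever your infrastructure provides):

```python
class CommandRejected(Exception):
    pass


class CreateEntity1ProcessManager:
    def __init__(self, b_repository, a_command_handler):
        self._b_repository = b_repository            # read access to AR B
        self._a_command_handler = a_command_handler  # command side of AR A

    def handle(self, command):
        # Step 1: query B to check that the referenced ID is valid.
        if self._b_repository.find_by_id(command.referenced_id) is None:
            raise CommandRejected("unknown referenced aggregate-root")
        # Step 2: only if it is valid, forward the request to A.
        return self._a_command_handler.handle(command)
```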
Useful Links: http://microservices.io/patterns/data/saga.html

Did I overlook any possible solutions to this problem
You did. "Domain Services" give you a possible loophole to play in.
Aggregates are consistency boundaries; their behaviors are constrained by
The current state of the aggregate
The arguments that they are passed.
If an aggregate needs to interact with something outside of its boundary, then you pass to the aggregate root a domain service to encapsulate that interaction. The aggregate, at its own discretion, can invoke methods provided by the domain service to achieve work.
Often, the domain service is just a wrapper around an application or infrastructure service. For instance, if the aggregate needed to know if some external data were available, then you could pass in a domain service that would support that query, checking against some cache of data.
But - here's the trick: you need to stay aware of the fact that data from outside of the aggregate boundary is necessarily stale. There might be another process changing the data even as you are querying a stale copy.
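A minimal sketch of that idea, with all names assumed: the aggregate receives the domain service as a method argument and queries it at its own discretion, while staying aware that the answer may be stale.

```python
from abc import ABC, abstractmethod


class ReferenceAvailability(ABC):
    """Domain service interface; the implementation (cache lookup, query)
    lives outside the domain model."""
    @abstractmethod
    def is_known(self, reference_id) -> bool: ...


class Entity1:
    def __init__(self, entity1_id):
        self.id = entity1_id
        self.referenced_id = None

    def link_to(self, reference_id, availability: ReferenceAvailability):
        # This answer comes from outside the boundary, so it is necessarily
        # stale; treat it as a good guess, not a hard guarantee.
        if not availability.is_known(reference_id):
            raise ValueError(f"unknown reference: {reference_id}")
        self.referenced_id = reference_id
```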
The problem is that our application is a web application and it is not really safe to trust the user input you get there (even though it is not accessible to the public).
That's true, but it's not typically a domain problem. For instance, we might specify that an endpoint in our API requires a JSON representation of some command message -- but that doesn't mean that the domain model is responsible for taking a raw byte array and creating a DOM from it. The application layer has that responsibility; the aggregate's responsibility is the domain concerns.
It can take some careful thinking to distinguish where the boundary between the different concerns is. Is this sequence of bytes a valid identifier for an aggregate? is clearly an application concern. Is the other aggregate in a state that permits some behavior? is clearly a domain concern. Does the aggregate exist at all...? could go either way.

Related

How to handle hard aggregate-wide constraints in DDD/CQRS?

I'm new to DDD and I'm trying to model and implement a simple CRM system based on DDD, CQRS and event sourcing to get a feel for the paradigm. I have, however, run into some difficulties that I'm not sure how to handle. I'm not sure if my difficulties stem from me not having modeled the domain properly or from me missing something else.
For a basic illustration of my problems, consider that my CRM system has the aggregate CustomerAggregate (which seems reasonable to me). The purpose of this aggregate is to make sure each customer is consistent and that its invariants hold up (name is required, social security number must be in the correct format, etc.). So far, all is well.
When the system receives a command to create a new customer, however, it needs to make sure that the social security number of the new customer doesn't already exist (i.e. it must be unique across the system). This is, of course, not an invariant that can be enforced by the CustomerAggregate, since customers don't have any information regarding other customers.
One suggestion I've seen is to handle this kind of constraint in its own aggregate, e.g. SocialSecurityNumberUniqueAggregate. If the social security number is not already registered in the system, the SocialSecurityNumberUniqueAggregate publishes an event (e.g. SocialSecurityNumberOfNewCustomerWasUniqueEvent) which the CustomerAggregate subscribes to and publishes its own event in response to this (e.g. CustomerCreatedEvent). Does this make sense? How would the CustomerAggregate respond to, for example, a missing name or another hard constraint when responding to the SocialSecurityNumberOfNewCustomerWasUniqueEvent?
The search term you are looking for is set-validation.
Relational databases are really good at domain agnostic set validation, if you can fit the entire set into a single database.
But, that comes with a cost; designing your model that way restricts your options on what sorts of data storage you can use as your book of record, and it splits your "domain logic" into two different pieces.
Another common choice is to ignore the conflicts when you are running your domain logic (after all, what is the business value of this constraint?) but to instead monitor the persisted data looking for potential conflicts and escalate to a human being if there seems to be a problem.
You can combine the two (ex: check for possible duplicates via query when running the domain logic, and monitor the results later to mitigate against data races).
But if you need to maintain an invariant over a set, and you need that to be part of your write model (rather than separated out into your persistence layer), then you need to lock the entire set when making changes.
That could mean having a "registry of SSN assignments" that is an aggregate unto itself; you then have to start thinking about how much other customer data needs to be part of this aggregate versus how much lives in a different aggregate accessible via a common identifier, with all of the complications that arise when your data set is controlled via different locks.
There's no rule that says all of the customer data needs to belong to a single "aggregate"; see Mauro Servienti's talk All Our Aggregates Are Wrong. Trade-offs abound.
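For concreteness, here is one possible shape for such a registry, sketched under assumed names; the point is only that the whole set sits behind a single aggregate's lock.

```python
class SsnAlreadyAssigned(Exception):
    pass


class SsnRegistry:
    """Aggregate owning the entire set of assignments, so uniqueness is an
    invariant it can actually enforce within one transaction."""
    def __init__(self):
        self._assignments = {}  # ssn -> customer_id

    def claim(self, ssn: str, customer_id: str):
        owner = self._assignments.get(ssn)
        if owner is not None and owner != customer_id:
            raise SsnAlreadyAssigned(f"{ssn} is already assigned to {owner}")
        self._assignments[ssn] = customer_id

    def release(self, ssn: str):
        self._assignments.pop(ssn, None)
```

The rest of the customer data would live in ordinary Customer aggregates, linked to the registry only via the customer identifier.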
One thing you want to be very cautious about in your modeling is the risk of confusing data-entry validation with domain logic. Unless you are writing domain models for the Social Security Administration, SSN assignments are not under your control. What your model has is a cached copy, and in this case potentially a corrupted copy.
Consider, for example, a data set that claims:
000-00-0000 is assigned to Alice
000-00-0000 is assigned to Bob
Clearly there's a conflict: both of those claims can't be true if the social security administration is maintaining unique assignments. But all else being equal, you can't tell which of these claims is correct. In particular, the suggestion that "the claim you happened to write down first must be the correct one" doesn't have a lot of logical support.
In cases like these, it often makes sense to hold off on an automated judgment, and instead kick the problem to a human being to deal with.
Although they are mechanically similar in a lot of ways, there are important differences between "the set of our identifier assignments should have no conflicts" and "the set of known third party identifier assignments should have no conflicts".
Do you also need to verify that the social security number (SSN) is really valid? Or are you just interested in verifying that no other customer aggregate with the same SSN can be created in your CRM system?
If the latter is the case, I would suggest having a CustomerService domain service which performs the whole SSN check by looking up the database (e.g. via a repository) and then creates the new customer aggregate (which again checks its own invariants, as you already mentioned). This whole process - the lookup of the existing SSN and the creation of the customer - needs to happen within one transaction to ensure consistency. As I consider this domain logic, a domain service is the perfect place for it. It does not hold data by itself but orchestrates the workflow which relates to the business requirement - that no two customers with the same SSN may be created in our CRM.
If you also need to verify that the social security number is real, you would need to perform a call to another service, I guess, or keep some cached SSN data in your CRM. In this case you could additionally have a SocialSecurityNumberService domain service which is injected into the CustomerService. This would just be an interface in the domain layer; the implementation of this SocialSecurityNumberService interface would then reside in the infrastructure layer, where the access to whatever resource is required is implemented (be it a local cache you build in the background or some API call to another service).
Either way, all your logic for creating the new customer would be in one place, the CustomerService domain service. Additional checks that go beyond the Customer aggregate's boundaries would also be placed in this CustomerService.
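A sketch of what that CustomerService could look like; the repository methods, the unit-of-work and the Customer constructor are assumptions to be adapted to your infrastructure.

```python
class DuplicateSsn(Exception):
    pass


class Customer:
    def __init__(self, name: str, ssn: str):
        if not name:
            raise ValueError("name is required")  # the aggregate's own invariant
        self.name = name
        self.ssn = ssn


class CustomerService:
    def __init__(self, customer_repository, unit_of_work):
        self._customers = customer_repository
        self._uow = unit_of_work

    def create_customer(self, name: str, ssn: str) -> Customer:
        # Lookup and creation run in one transaction, so no second customer
        # with the same SSN can slip in between the two steps.
        with self._uow:
            if self._customers.find_by_ssn(ssn) is not None:
                raise DuplicateSsn(ssn)
            customer = Customer(name, ssn)
            self._customers.add(customer)
            return customer
```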
Update
To also adhere to the nature of eventual consistency:
I guess, as you went with event sourcing, you and your business have already accepted the eventually consistent nature of the system. This also means entries with the same SSN could happen. You could have some background job which continually checks for duplicate entries; depending on the complexity of your business logic, you might either be able to correct the duplicates automatically, or you might need human intervention to do it. It really depends on how often this can happen.
If a hard constraint is that this must NEVER happen, maybe event sourcing is not the right way, at least for this part of your system...
Note: I also assume that command de-duplication is not the issue here but that you really have to deal with potentially different commands using the same SSN.

Enforce invariants spanning multiple aggregates (set validation) in Domain-driven Design

To illustrate the problem, we use a simple case: there are two aggregates - Lamp and Socket. The following business rule must always be enforced: neither a Lamp nor a Socket can be connected more than once at the same time. To provide an appropriate command, we conceive a Connector service with a Connect(Lamp, Socket) method to plug them together.
Because we want to comply with the rule that one transaction should involve only one aggregate, it's not advisable to set the association on both aggregates in the Connect transaction. So we need an intermediate aggregate which represents the Connection itself; the Connect transaction would then just create a new Connection with the given components. Unfortunately, at this point the troubles begin: how can we ensure the consistency of the connection state? It may happen that many simultaneous users want to plug the same components at the exact same time, and our "consistency check" wouldn't reject those requests. New Connection aggregates would be stored, because we only lock at the aggregate level. The system would be inconsistent without even knowing it.
But how should we set the boundary of our aggregates to enforce our business rule? We could conceive a Connections aggregate which gathers all active connections (as Connection entities), thereby enabling a locking algorithm which would properly reject duplicate Connect requests. On the other hand, this approach is inefficient and does not scale; furthermore, it is counter-intuitive in terms of the domain language.
Do you know what I'm missing?
Edit: to sum up the problem, imagine an aggregate User. Since the definition of an aggregate is to be a transaction-based unit, we are able to enforce invariants by locking this unit per transaction. All is fine. But now a business rule arises: the username must be unique. Therefore we must somehow reconcile our aggregate boundaries with this new requirement. Assuming millions of users registering at the same time, this becomes a problem: we are trying to ensure the invariant in a non-locked state, since multiple users means multiple aggregates.
According to the book "Domain-Driven Design" by Eric Evans, one should apply eventual consistency as soon as multiple aggregates are involved in a single transaction. But is this really the case here, and does it make sense?
Applying eventual consistency here would entail registering the User and afterwards checking the invariant on the username. If two Users actually set the same username, the system would undo the second registration and notify the User. Thinking about this scenario disconcerts me, because it disrupts the whole registration process. Sending the confirmation e-mail, for example, would have to be delayed, and so forth.
I think I'm just forgetting about something in general but I don't know what. It seems to me that I need something like invariants at the Repository level.
We could conceive a Connections aggregate which gathers all active connections (as Connection entities), thereby enabling a locking algorithm which would properly reject duplicate Connect requests. On the other hand, this approach is inefficient and does not scale; furthermore, it is counter-intuitive in terms of the domain language.
On the contrary, I think you're on the right track with this approach. It seems convoluted because you're using an example that doesn't make any sense - there is no real-life system that checks if a lamp is connected to more than one socket or a socket to more than one lamp.
But applying that approach to the second example would lead you to ask yourself what the "connection" aggregate is in that case, i.e. inside which scope a user name is unique. In a Company? For a given Tenant or Customer? For the whole <whatever-subdomain-youre-in>System? Find the name of the scope and there you have it - an Aggregate to enforce the unique name invariant. Choose the name carefully and if it doesn't exist in the ubiquitous language yet, invent a new concept with the help of a domain expert. DDD is not only about respecting existing domain terms, you're also allowed to introduce new ones when Breakthroughs are achieved.
Sometimes though, you will find that concurrent access to this aggregate is too intensive and generates problematic contention. With domain expert assent, you can introduce eventual consistency with a compensating action in case of conflict - appending a suffix to the nickname and notifying the user, for instance. Or you can split the "hot" aggregate into smaller, smarter, more efficient ones.
The problem you are describing is called set validation. Greg Young makes a very good point that a key question is whether or not the cost/benefit analysis justifies enforcing this constraint in code.
But let's suppose it does....
I find it's most useful to think about set validation from the perspective of an RDBMS. How would we handle this problem if we were doing things with tables? A likely candidate is that we would have some sort of connection table, with foreign keys for the Lamp and the Socket. Then we would define constraints that would say that each of those foreign keys must be unique in the table.
Those foreign key constraints span the entire table; which is the database's way of telling us that the entire table represents a single aggregate.
So if you were going to lift those constraints into your domain model, you would do so by making an aggregate of all connections, so that the domain model can immediately rule on whether or not a given Lamp-Socket connection should be allowed.
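A sketch of what lifting those two unique-column constraints into the model could look like (all names assumed):

```python
class AlreadyConnected(Exception):
    pass


class Connections:
    """One aggregate owning the whole set, mirroring the single table whose
    unique constraints span every row."""
    def __init__(self):
        self._by_lamp = {}    # lamp_id -> socket_id
        self._by_socket = {}  # socket_id -> lamp_id

    def connect(self, lamp_id, socket_id):
        # Both "unique column" constraints are checked under one lock.
        if lamp_id in self._by_lamp:
            raise AlreadyConnected(f"lamp {lamp_id} is already connected")
        if socket_id in self._by_socket:
            raise AlreadyConnected(f"socket {socket_id} is already connected")
        self._by_lamp[lamp_id] = socket_id
        self._by_socket[socket_id] = lamp_id

    def disconnect(self, lamp_id):
        socket_id = self._by_lamp.pop(lamp_id)
        del self._by_socket[socket_id]
```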
Now, there's an important caveat here -- we're assuming that the domain model is the authority for connections between lamps and sockets. If we are modeling lamps in the real world connected to sockets in the real world, then it's important to recognize that the real world is the authority, not the model.
Put another way, if the domain model gets conflicting information about the real world (two lamps are reportedly connected to the same socket), the model only knows that its information about the world is incorrect -- maybe the first lamp was plugged in, maybe the second, maybe there's a message missing about a lamp being unplugged. So in this sort of case, it's common that you'll want to allow the conflict, with an escalation to a human being for resolution.
the username must be unique
This is the single most commonly asked variation of the set validation problem.
The basic remedy is the same: you now have a User Profile aggregate, with an identifier, and a separate user name directory aggregate, which ensures that each name is uniquely associated with a profile.
If you aren't worried about ensuring that a profile has at most one user name linked to it, there is another approach you can take: introduce an aggregate for each user name, which includes the profileId as a member. Each such aggregate can then enforce the constraint that the name can only be assigned if the previous assignment was terminated.
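A sketch of that per-name variant (names assumed):

```python
class NameTaken(Exception):
    pass


class UserName:
    """Aggregate identified by the name itself. It enforces that the name
    can only be assigned once the previous assignment was terminated."""
    def __init__(self, name: str):
        self.name = name
        self.profile_id = None

    def assign(self, profile_id: str):
        if self.profile_id is not None:
            raise NameTaken(self.name)
        self.profile_id = profile_id

    def release(self):
        self.profile_id = None
```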
I think I'm just forgetting about something in general but I don't know what.
Only that constraints don't come from nowhere -- there should be a business motivation for them; and somebody (the domain expert) should be able to document the cost to the business of failing to maintain the proposed set constraint.
For instance, if you are already collecting an email address, do you really need a unique username? How much additional value are you creating by including username in the model? How much more by making it unique...?
If we plan an online game, for example, with millions of users which request games constantly, it's a real problem.
Yes, it is; but that may indicate that the game design is wrong. Review Udi Dahan's discussion of high contention domains, and his essay Race Conditions Don't Exist.
A thing to notice, however, is that if you really have an aggregate, you can scale it independently from the rest of your system. One monster box is dedicated to managing the set aggregate and nothing else (analog: an RDBMS dedicated to managing a single table).
A more likely choice is going to be sharding by realm/instance/whatzit; in which case you'd have a smaller set aggregate for each realm instance.
In addition to the suggestions already made, consider that some of these problems are very similar to database concurrency problems. Say you have a contact, and one user changes the name while another user changes the phone number of this contact. If you write a command that updates the whole contact with the state as it was after modification, then one of the two will overwrite the other's change with the old value, unless measures are taken.
If, however, you write a 'ChangeEmailForContact' command, then you will only change that one field and will not conflict with the name change, which would similarly be its own 'RenameContact' command.
Now what if two people change the email address shortly after one another? A really efficient way is to pass the original value (the original email address) along with the new value in your command. When updating the email address you can then check whether the original email address is the same as the current email address (so it is a valid starting point), or whether the new email address is the same as the current email address (so there is nothing to do). If neither holds - then, and only then - are you in a conflict situation.
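A minimal sketch of that compare-with-original check; the Contact class and the command shape are assumptions made for illustration.

```python
class EmailConflict(Exception):
    pass


class Contact:
    def __init__(self, contact_id, email):
        self.id = contact_id
        self.email = email

    def change_email(self, original_email: str, new_email: str):
        if self.email == new_email:
            return  # already applied: nothing to do
        if self.email != original_email:
            # Someone else changed it in the meantime: a genuine conflict.
            raise EmailConflict(
                f"expected {original_email!r}, found {self.email!r}")
        self.email = new_email
```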
Now, apply this to your 'set operation'. The first time a lightbulb is moved into a 'connection' (perhaps I would call it a fixture), it moves from unassigned to connection1. Then, when the lightbulb is moved again, it must be moved from connection1 to connection2, say. Now you can validate whether that lightbulb is already assigned, whether it was assigned to connection1, or whether something has changed in the meantime.
It doesn't solve everything, of course, but for the tiny case that remains (that tiny window where two initial assignments happen close enough together), you either have to go for, say, a Redis cache of assigned usernames to validate against, or give an admin an easy tool to resolve this very rare situation. You could, for instance, build a projection that occasionally reports on such situations and make sure renaming isn't too painful.

Domain-Driven Design: How to design relational aggregates with a dependency

My domain is about Program Management. I have a Program (aggregate root) that must have a Customer (aggregate root). So I require a CustomerID when creating a new Program, as I have read that aggregates should only reference other aggregates by ID.
Here are my business rules:
Customers can become active and inactive over time.
If a Customer is inactivated for some reason, all programs associated with that Customer should also be inactivated.
A Program cannot be activated if its Customer is inactive.
Rules #1 & #2 I have implemented. It's #3 that is stumping me.
I can think of 3 solutions:
Program holds reference to the Customer aggregate.
Introduce a domain service that checks if the Customer is active and pass it to Program.Activate(CustomerActiveCheckService service).
Have the application service look up the Customer and pass it to Program.Activate(Customer customer).
Which is the best solution?
Update
I see both points of view made by #ConstaninGALBENU and #plalx, and I want to suggest a compromise. Can I create a CustomerStatusChecker service? The method would have the following signature: CustomerStatus CheckStatus(CustomerID id). I could then pass Program the service like so: Program.Activate(CustomerStatusChecker service).
Are there any problems with this design?
Which is the best solution?
There isn't a best solution; there are trade offs.
But one possible solution that is consistent with requirements #2 and #3 is that your existing model is wrong -- that Program entities are not isolated aggregates, but are part of the Customer entity, and therefore should be controlled by the same aggregate root.
Hints that this might be the case: that the life cycle of a Program fits within the life cycle of a Customer; that Programs don't normally migrate from one Customer to another, that there are limits to the count of active programs per customer.
Another possibility is that the requirements are "wrong". One way of exploring this is to review whether active/inactive is a decision made by the model, or if it is a decision made somewhere else and reported to the model. Another is to examine the cost to the business if this "rule" is violated.
If the model doesn't find out about the customer right away, or it is an inexpensive problem, then you probably have some room to detect the conflict and report it to a human, rather than trying to have the model do all of the work (See: Greg Young, Stop Over Engineering).
In these cases, having the main code path take a good guess, and implementing an alternative path that operators can use to fix the mistakes, is fine.
In choosing between solution #2 and #3 (I don't like #1 at all), I encourage keeping I/O actions out of the model. So unless you already have the latest version of the Customer in memory, I'm not fond of the domain service as a choice. Passing in a copy of the customer state to the domain model keeps the I/O concerns in the application component, where they belong (see Boundaries, by Gary Bernhardt, for more on this idea).
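A minimal sketch of that last option, under assumed repository and property names: the application service does the I/O and passes a read-only snapshot in, while the aggregate keeps the decision.

```python
class CustomerInactive(Exception):
    pass


class Program:
    def __init__(self, program_id, customer_id):
        self.id = program_id
        self.customer_id = customer_id
        self.active = False

    def activate(self, customer_is_active: bool):
        # Rule #3: a Program cannot be activated if its Customer is inactive.
        if not customer_is_active:
            raise CustomerInactive(self.customer_id)
        self.active = True


class ActivateProgramHandler:
    def __init__(self, program_repository, customer_repository):
        self._programs = program_repository
        self._customers = customer_repository

    def handle(self, program_id):
        program = self._programs.get(program_id)
        customer = self._customers.get(program.customer_id)
        # Only a copy of the customer's state crosses the boundary; the
        # Program aggregate itself decides whether activation is allowed.
        program.activate(customer_is_active=customer.is_active)
        self._programs.save(program)
```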
Solution 1: it breaks the rule about not holding references to other aggregate instances. That rule ensures that only one Aggregate is modified in a transaction. If you need to modify multiple aggregates in a single transaction then your design is definitely wrong.
Solution 2: I really don't like injecting services inside aggregates. My aggregates are pure functions with no touching of the outside world (I/O, repositories or the like).
Solution 3: it is somehow equivalent to 1, even if it is a temporary reference (Program could call command methods on Customer, thus modifying Customer in the same transaction boundary as Program).
My solution: make that check inside the Application service, before the call to Program.activate(), or pass a customerStatus to Program.activate() and let the Program aggregate decide whether it throws an exception or emits events.
Update:
The idea is that you should pass only read-only/immutable data to the Program AR to ensure that it does not modify other ARs within its transactional boundary. Also, we should not make Program dependent on what it does not need, like the entire Customer AR.
Also, if the architecture is event-driven, then by listening to the right events emitted by Customer you can keep the Program AR in sync: you make it "non-activatable" if it is not already activated, or you deactivate it if it is already activated, using, for example, a Saga.

Read model for aggregate in DDD CQRS ES

In CQRS + ES and DDD, is it a good thing to have a small read model in an aggregate to get data from another aggregate or bounded context?
For example, in order validation (in the Order aggregate), there is a business rule which validates the order only if the customer is not flagged. The flag information is put into a read model (specific to the aggregate) via synchronous domain events.
What do you think about this?
is it a good thing to have a small read model in an aggregate to get data from another aggregate or bounded context?
It's not ideal. Aggregates, due to their nature, are not good at enforcing consistency that involves state outside of themselves.
What this usually means is that the business is going to need some way to respond when two aggregates produce an unacceptable state.
You also have the option of checking for the flag before you run the placeOrder command on the aggregate. That check could be done in the command handler, or in the client -- basically, you have ways of "validating" that the command should succeed before passing it to the aggregate.
That said, if it were critical to try to consult the read model while processing the command, a way to do it would be to use a "domain service"; you pass a service provider to the aggregate as part of the command, and let the interface abstract away the fact that running the query requires looking outside of the aggregate.
That gives you some of the decoupling you need to keep the aggregate testable.
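For the command-handler variant mentioned above, a minimal sketch with all names assumed:

```python
class CustomerFlagged(Exception):
    pass


class PlaceOrderHandler:
    def __init__(self, flagged_customers, order_repository):
        self._flagged = flagged_customers  # read model, eventually consistent
        self._orders = order_repository

    def handle(self, command):
        # The check happens outside the aggregate, so it is a good guess
        # rather than a hard invariant.
        if self._flagged.is_flagged(command.customer_id):
            raise CustomerFlagged(command.customer_id)
        order = self._orders.get(command.order_id)
        order.place()  # the aggregate enforces only its own invariants
        self._orders.save(order)
```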
It's doable, but not in the form of a read model, rather a Value Object in the Aggregate (since we're on the Write side).
If you already have a CustomerId in Order, you just have to compose a VO with it and a Flagged member.
Of course, this remains prone to all the problems of cross-aggregate communication since the data originates from Customer. Order has to be kept in sync with the flagged status of its Customer, which can require quite a bit of work.
In any case, you should probably first determine with your domain expert whether immediate consistency is an absolute requirement (in which case you have to somehow wrap Customer + Order in a transaction) or if you can afford a small delay in Flagged freshness when enforcing that invariant.
If the latter, you can choose between duplicating Flagged in the Order aggregate or the first option given by #VoiceOfUnreason - the main difference probably being that if the data is in the aggregate, you'll get it for free at the Domain level should you need it on multiple occasions, instead of duplicating the check in multiple use cases/command handlers at the application level.

What are consequences of using repository inside of aggregate vs inside of domain service

We have all heard that injecting a repository into an aggregate is a bad idea, but almost no one says why.
I will try to list here all the disadvantages of doing this, so we can measure the rightness of this statement.
The first thing that comes into my head is the Single Responsibility Principle.
It's true that by injecting a repository into an AR we are violating the SRP, because retrieving and persisting an aggregate is not the responsibility of the aggregate itself. But that only speaks about the aggregate itself, not about other aggregates. So does it apply to retrieving, from a repository, aggregates referenced by ID? And what about storing them?
I used to think that an aggregate shouldn't even know that there is some sort of persistence in the system, because it doesn't have to exist. Aggregates can be created just for one procedure call and then discarded.
Now when I think of it, that's not quite right, because an aggregate root is an entity, and an entity only makes sense if it has some unique identity. So why would we need a unique identity if not for persisting? Even if it's just persistence in memory. Maybe for comparing, but in my opinion that's not the main reason behind the identity.
OK, let's assume that we retrieve and store OTHER aggregates from inside our aggregate using injected repositories. What are the other consequences besides the SRP violation?
For sure there is the problem of having no control over the persisting of aggregates, and retrieving becomes a kind of lazy loading, which is bad for the same reason (no control).
Because of this lack of control we can end up persisting the same aggregate a few times when it could have been persisted once, or loading the same aggregate a hundred times when it could have been loaded once, so performance suffers. There may also be problems with stale data.
These reasons practically disqualify the ability to inject a repository into an aggregate.
Here comes my main question - why can we inject repositories into a domain service, then?
Don't the same reasons apply there? It's just like moving logic out of the aggregate into a separate function and pretending it is something different.
To be honest, when I started to write this SO question, I had no good answer. But after hours of investigating the problem and writing this question, I came to a solution. Rubber-duck debugging.
I'll post this question anyway for others having the same problems. Of course with my answer below.
Here are the places where I'd recommend fetching aggregates (i.e. calling Repository.Get...()), in order of preference:
Application Service
Domain Service
Aggregate
We don't want Aggregates to fetch other Aggregates most of the time, because this blurs the lines, giving them orchestration powers which normally belong to the Application layer. You also raise the risk of the Aggregate overstepping its jurisdiction by modifying other Aggregates, which can result in contention and performance problems, not to mention that transactions become more difficult to analyze and the code base harder to reason about.
Domain Services are, IMO, a good place to fetch Aggregates when determining which aggregates to modify is domain logic per se. In your game example (which might not be the ideal context for DDD, by the way), which units are affected by another unit's attack might be considered domain logic, so you may not want to place it at the Application Service level. This rarely happens in my experience, though.
Finally, Application Services are the default place where I call Repository.Get(...) for uniformity's sake and because this is the natural place to get a hold of the actors of the use case (usually only one Aggregate per transaction) and orchestrate calls to them.
That doesn't mean Aggregates should never have Repositories injected; there are exceptions, but other alternatives are almost always better.
So as I wrote in a question, I've found my answer already in the process of writing that question.
The best way to show this is by example:
When we have a superficially simple behavior, like one unit attacking another, we can write something like this:
unit.attack_unit(other_unit)
The problem is that, to attack a unit, we have to calculate damage, and to do that we need other aggregates, like the weapon and the armor, which are referenced by ID inside the unit. Since we cannot inject a repository into an aggregate, we have to move that attack_unit logic into a domain service, because we can inject the repository there. Now where is the difference between injecting it into a domain service and into the unit aggregate?
The answer is: there is no difference. None of the consequences I described in the question will bite us. In both cases we will load both units once, the attacking unit's weapon once, and the attacked unit's armor once. There also won't be stale data, even if we mutate the weapon object during the process and store it, because the weapon is retrieved and stored in one place.
The problem shows up in a different example.
Let's create a use case where a unit can attack all other units in the game in one process.
The problem lies in how we implement it. If we use the already defined unit.attack_unit and call it on every unit in the game (iterating over them), then the weapon that is used to compute damage will be retrieved from the unit aggregate as many times as there are units in the game! But it could be retrieved only once!
It doesn't matter whether unit.attack_unit is a method of the unit aggregate or a domain service unit_attack_unit. It will still be the same: the weapon will be loaded too many times. To fix that we simply have to change the implementation, and with that probably the interface too.
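A sketch of the reshaped implementation, with all names (the repositories and their methods) being illustrative assumptions: each aggregate is loaded once and then reused for every target.

```python
def attack_all(attacker_id, units, weapons, armors):
    attacker = units.get(attacker_id)
    weapon = weapons.get(attacker.weapon_id)  # loaded exactly once
    for unit in units.all_except(attacker_id):
        armor = armors.get(unit.armor_id)
        damage = max(weapon.damage - armor.mitigation, 0)
        unit.take_damage(damage)
        units.save(unit)
```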
Now at least we have an answer to the question "does moving logic from an aggregate method to a domain service (because we want to access a repository there) fix the problem?". No, it does not change a thing.
Injecting repositories into a domain service can be just as dangerous as injecting them into an aggregate if used wrongly.
This answers my SO question, but we still don't have a solution to the real problem.
What can we do if we have two use cases - one where a unit attacks one other unit, and a second where a unit attacks all other units - without duplicating domain logic?
One way is to pass all the needed aggregates as parameters to our aggregate method:
unit.attack_unit(other_unit, weapon, armor)
But what if we need five or more aggregates there? It's not a good way. Also, the application logic would have to know that all of these aggregates are needed for an attack, which is a knowledge leak. When the attack_unit implementation changes, we might also have to update the interface of that method. What is the purpose of encapsulation then?
So, if we can't access the repository to get the needed aggregates, how can we smuggle them in?
We can give up the idea of referencing aggregates by ID, or pass all the needed aggregates in from the application layer (which again means a knowledge leak).
Or maybe the reason for these problems is bad modelling?
Attacking another unit is indeed the unit's responsibility, but is damage calculation its responsibility? Of course not.
Maybe we need another object, like a value object MeleeAttack(weapon, armor); yet when we add more properties that can change the result of an attack, like enchantments on the unit, it gets more complicated.
Also, I think that we are now designing objects based on performance, not on our domain.
So from domain-driven design we get performance-driven design. Is that what we want? I don't think so.
"So why would we need unique identity if not for persisting?" - think of an account scenario, where several John Smiths exist in your system. Imagine John Smith and John Smith Jr (who didn't enter the Jr in signup) both live at the same address. How do you tell them apart? Imagine I'm trying to write a recommendation engine based upon their past purchases . . . .
Identity is a quality of equality in DDD. If you don't have an identity unique from your fields, then you're a ValueObject.
What are consequences of using repository inside of aggregate vs inside of domain service?
There's a reasonably strong argument that you shouldn't do either.
Riddle: when does an aggregate need to see the state of another aggregate?
The responsibility of an aggregate is to control change. Any command that would change the state of the domain model is dispatched to the aggregate root responsible for the integrity of the state in question. By definition, all of the state required to ensure that the command is currently permitted is contained within the aggregate boundary.
So there is never any need to peek at the data outside of the aggregate when making a change to the model.
In which case, you don't ever need to load another aggregate, which makes the "where" question moot.
Two clarifications:
Queries will often combine the state of multiple aggregates, and will often need to follow a reference from one aggregate to another. The principle above is satisfied because queries treat the domain model as read-only. You need the state to answer the query, but you don't need the invariant enforcement because you aren't changing anything.
Another case is when you need state from another aggregate to process a command properly, but a little staleness in that data is an acceptable risk. In that case, you query the "other" aggregate to get its state. If you were to run that query within the domain model itself, the right way to do so would be via a domain service.
In most cases, though, you'll be equally well served by running the query when generating the command (i.e., in the client), or when handling the command (in the application, outside the domain). It would be very unusual for a business to consider domain-service latency acceptable but client latency unacceptable.
(Disconnected clients are one case where that can be especially problematic: the command is generated and then queued for a long period of time before being dispatched to the server.)
