DDD - Mapping events from an external Bounded Context to the domain model

My team is building a new microservice leveraging techniques from Domain-Driven Design and Event Sourcing. This service has to integrate with a handful of external bounded contexts (BCs) in the form of other legacy services. We've identified our core domain model, and it's clear that we need some sort of Anti-Corruption Layer (ACL) between the external BCs and our internal domain model. However, we're getting hung up on some of the technical details of how best to accomplish this.
Let's say our domain has an Asset aggregate which can represent some piece of equipment from several of these external BCs. One of these BCs, let's say BC-A, sends out EquipmentUpdate events on a message broker that our application can subscribe to. These events carry no intent; they merely reflect that the state of the external entity has changed in some way, and it's up to us to determine what actually changed. The ID in the events is also from the external BC, not our domain.
So our ACL has to do the following tasks:
Map the external identifier from BC-A to our internal aggregate
Diff the new event against the current/previous state to figure out what actually changed
Do this in a way that is resilient against out-of-order and duplicate event messages
Option 1 - Query directly from repository
First option is to use the external identifier directly in our repository to fetch the matching Asset. This seems like the simplest option; however, it feels wrong, since it leaks concepts from external BCs into our repository API.
Furthermore, it forces us to store external identifiers and the event version directly on our aggregate, which also feels like it defeats the purpose of the ACL.
interface AssetRepository {
    Optional<Asset> findAssetByExternalIdentifier(String externalId);
}
Option 2 - Dedicated mapping
Second option is to expose a dedicated query that uses the external identifier to look up the matching internal identifier, along with the latest version that was processed.
This feels cleaner, but requires an additional read from the database where option 1 was a single read.
record AssetMappingQueryResult(String assetId, Long latestVersion) {}
interface AssetMappingQuery {
    Optional<AssetMappingQueryResult> resolveFromExternalIdentifier(String externalId);
}
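For illustration, here's roughly how we picture a consumer of option 2 (the event, command, and bus types below are simplified stand-ins, not our real code; the diff against previous state is left out):

// Hypothetical surrounding types, for illustration only.
record EquipmentUpdate(String externalId, long version) {}
record UpdateAsset(String assetId, EquipmentUpdate source) {}
interface CommandBus { void dispatch(Object command); }

class EquipmentUpdateHandler {
    private final AssetMappingQuery mappingQuery;
    private final CommandBus commandBus;

    EquipmentUpdateHandler(AssetMappingQuery mappingQuery, CommandBus commandBus) {
        this.mappingQuery = mappingQuery;
        this.commandBus = commandBus;
    }

    void handle(EquipmentUpdate event) {
        mappingQuery.resolveFromExternalIdentifier(event.externalId())
            // drop duplicates and out-of-order messages using the stored version
            .filter(mapping -> event.version() > mapping.latestVersion())
            .ifPresent(mapping -> commandBus.dispatch(
                new UpdateAsset(mapping.assetId(), event)));
    }
}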
How are other teams doing this?

I would tend to treat the EquipmentUpdate message as a signal that something might have changed. On receipt of such a message, the ACL queries BC-A for the latest state for the associated IDs, compares that state with the state it received last time, and emits commands corresponding to the state changes which are of interest to the bounded context you're developing.
In the case of duplicate messages (where the second event conveys no state change), this approach is idempotent. And because the ACL always selects the current state, out-of-order delivery likewise stops being a concern.
The ACL may want to guard against concurrent modifications involving the same ID, though viewing its output as commands against your BC's write model might make that unnecessary (especially depending on the chance that one of the concurrent modifications might be slow). The specific techniques for that vary; my personal preference, coming from the Akka world, would be to assign responsibility for a given ID to an actor.
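Sketched in code, with hypothetical stand-ins for BC-A's client and the messaging pieces (a real ACL would persist the last-seen state rather than keep it in memory):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative stand-in types.
record EquipmentState(String externalId, String status, String location) {}
interface BcAClient { EquipmentState fetchLatest(String externalId); }
interface CommandBus { void dispatch(Object command); }

class EquipmentAcl {
    private final BcAClient bcA;
    private final CommandBus commandBus;
    private final Map<String, EquipmentState> lastSeen = new HashMap<>(); // would be persisted in practice

    EquipmentAcl(BcAClient bcA, CommandBus commandBus) {
        this.bcA = bcA;
        this.commandBus = commandBus;
    }

    // The EquipmentUpdate message is only a signal; we always re-query current state.
    void onUpdateSignal(String externalId) {
        EquipmentState current = bcA.fetchLatest(externalId); // "select current" defuses ordering issues
        EquipmentState previous = lastSeen.put(externalId, current);
        if (current.equals(previous)) {
            return; // duplicate or no-op signal: nothing changed, so this is idempotent
        }
        diff(previous, current).forEach(commandBus::dispatch); // emit intent-carrying commands
    }

    private List<Object> diff(EquipmentState before, EquipmentState after) {
        List<Object> commands = new ArrayList<>();
        if (before == null || !after.status().equals(before.status())) {
            commands.add("ChangeAssetStatus:" + after.status());   // stand-in for a real command type
        }
        if (before == null || !after.location().equals(before.location())) {
            commands.add("RelocateAsset:" + after.location());     // stand-in for a real command type
        }
        return commands;
    }
}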
An ACL is by its nature somewhat outside of any bounded context: it's analogous to the space between customs/immigration outposts on the border between countries. One could say it's partially in both bounded contexts (from a CQRS standpoint, it's also a read model for the "other" bounded context, though perhaps a read model once removed, given that it should probably query a read model in the source bounded context). Alternatively, one could call an ACL a miniature bounded context which incorporates knowledge of parts of two other BCs; this may even extend to having its own aggregates, repositories, etc.

Related

How to handle hard aggregate-wide constraints in DDD/CQRS?

I'm new to DDD and I'm trying to model and implement a simple CRM system based on DDD, CQRS and event sourcing to get a feel for the paradigm. I have, however, run into some difficulties that I'm not sure how to handle. I'm not sure if my difficulties stem from me not having modeled the domain properly or from me missing something else.
For a basic illustration of my problems, consider that my CRM system has the aggregate CustomerAggregate (which seems reasonable to me). The purpose of this aggregate is to make sure each customer is consistent and that its invariants hold up (name is required, social security number must be in the correct format, etc.). So far, all is well.
When the system receives a command to create a new customer, however, it needs to make sure that the social security number of the new customer doesn't already exist (i.e. it must be unique across the system). This is, of course, not an invariant that can be enforced by the CustomerAggregate, since customers don't have any information regarding other customers.
One suggestion I've seen is to handle this kind of constraint in its own aggregate, e.g. SocialSecurityNumberUniqueAggregate. If the social security number is not already registered in the system, the SocialSecurityNumberUniqueAggregate publishes an event (e.g. SocialSecurityNumberOfNewCustomerWasUniqueEvent) which the CustomerAggregate subscribes to and publishes its own event in response to this (e.g. CustomerCreatedEvent). Does this make sense? How would the CustomerAggregate respond to, for example, a missing name or another hard constraint when responding to the SocialSecurityNumberOfNewCustomerWasUniqueEvent?
The search term you are looking for is set-validation.
Relational databases are really good at domain agnostic set validation, if you can fit the entire set into a single database.
But, that comes with a cost; designing your model that way restricts your options on what sorts of data storage you can use as your book of record, and it splits your "domain logic" into two different pieces.
Another common choice is to ignore the conflicts when you are running your domain logic (after all, what is the business value of this constraint?) but to instead monitor the persisted data looking for potential conflicts and escalate to a human being if there seems to be a problem.
You can combine the two (e.g. check for possible duplicates via a query when running the domain logic, and monitor the results later to mitigate data races).
But if you need to maintain an invariant over a set, and you need that to be part of your write model (rather than separated out into your persistence layer), then you need to lock the entire set when making changes.
That could mean having a "registry of SSN assignments" that is an aggregate unto itself, and then you have to start thinking about how much other customer data needs to be part of this aggregate, versus how much lives in a different aggregate accessible via a common identifier, with all of the possible complications that arise when your data set is controlled via different locks.
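As a rough sketch of such a registry, assuming it is loaded and saved as a whole so that changes to the set are serialized (names are illustrative):

import java.util.HashMap;
import java.util.Map;

// Sketch: a "registry of SSN assignments" as its own aggregate. Because the
// aggregate is loaded and persisted as one unit, claiming an SSN effectively
// locks the entire set, which is the trade-off described above.
class SsnRegistry {
    private final Map<String, String> assignments = new HashMap<>(); // ssn -> customerId

    boolean claim(String ssn, String customerId) {
        return assignments.putIfAbsent(ssn, customerId) == null; // false if already taken
    }
}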
There's no rule that says all of the customer data needs to belong to a single "aggregate"; see Mauro Servienti's talk All Our Aggregates are Wrong. Trade-offs abound.
One thing you want to be very cautious about in your modeling is the risk of confusing data entry validation with domain logic. Unless you are writing domain models for the Social Security Administration, SSN assignments are not under your control. What your model has is a cached copy, and in this case potentially a corrupted copy.
Consider, for example, a data set that claims:
000-00-0000 is assigned to Alice
000-00-0000 is assigned to Bob
Clearly there's a conflict: both of those claims can't be true if the social security administration is maintaining unique assignments. But all else being equal, you can't tell which of these claims is correct. In particular, the suggestion that "the claim you happened to write down first must be the correct one" doesn't have a lot of logical support.
In cases like these, it often makes sense to hold off on an automated judgment, and instead kick the problem to a human being to deal with.
Although they are mechanically similar in a lot of ways, there are important differences between "the set of our identifier assignments should have no conflicts" and "the set of known third party identifier assignments should have no conflicts".
Do you also need to verify that the social security number (SSN) is really valid? Or are you just interested in verifying that no other customer aggregate with the same SSN can be created in your CRM system?
If the latter is the case, I would suggest having some CustomerService domain service which performs the whole SSN check by looking up the database (e.g. via a repository) and then creates the new customer aggregate (which again checks its own invariants, as you already mentioned). This whole process - the lookup of the existing SSN and the customer creation - needs to happen within one transaction to ensure consistency. As I consider this domain logic, a domain service is the perfect place for it. It does not hold data by itself but orchestrates the workflow which relates to the business requirement: that no two customers with the same SSN may be created in our CRM.
If you also need to verify that the social security number is real, you would need to perform a call to another service, I guess, or keep some cached SSN data in your CRM. In this case you could additionally have some SocialSecurityNumberService domain service which is injected into the CustomerService. This would just be an interface in the domain layer, but the implementation of this SocialSecurityNumberService interface would reside in the infrastructure layer, where the access to whatever resource is required is implemented (be it a local cache you build in the background or some API call to another service).
Either way all your logic of creating the new customer would be in one place, the CustomerService domain service. Additional checks that go beyond the Customer aggregate boundaries would also be placed in this CustomerService.
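A sketch of what that CustomerService could look like (the repository and aggregate are simplified stand-ins; the transaction boundary would wrap createCustomer):

// Illustrative stand-ins for your real types.
interface CustomerRepository {
    boolean existsBySsn(String ssn);
    void save(Customer customer);
}

class Customer {
    private final String name;
    private final String ssn;

    Customer(String name, String ssn) {
        if (name == null || name.isBlank()) throw new IllegalArgumentException("name is required");
        this.name = name;
        this.ssn = ssn; // SSN format invariants would be checked here too
    }
}

class CustomerService {
    private final CustomerRepository customers;

    CustomerService(CustomerRepository customers) {
        this.customers = customers;
    }

    // Must run within a single transaction so the lookup and the insert are consistent.
    Customer createCustomer(String name, String ssn) {
        if (customers.existsBySsn(ssn)) {
            throw new IllegalStateException("a customer with this SSN already exists");
        }
        Customer customer = new Customer(name, ssn); // aggregate enforces its own invariants
        customers.save(customer);
        return customer;
    }
}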
Update
To also adhere to the nature of eventual consistency:
I guess, since you are going with event sourcing, you and your business have already accepted eventual consistency. This also means entries with the same SSN could happen. I think you could have some background job which continually checks for duplicate entries; depending on the complexity of your business logic, you might either be able to correct the duplicates automatically, or you need human intervention to do it. It really depends on how often this could happen.
If a hard constraint is that this must NEVER happen maybe event sourcing is not the right way, at least for this part of your system...
Note: I also assume that command de-duplication is not the issue here but that you really have to deal with potentially different commands using the same SSN.

One aggregate per transaction, with "one" or "multiple" bounded contexts

Following Vaughn Vernon's recommendation, to achieve a high level of decoupling and single responsibility, just one aggregate should be changed per transaction.
In chapter 8 of the Red Book, Vaughn Vernon demonstrates how two aggregates can "talk" to each other with domain events, and in chapter 13 how different aggregates in two different bounded contexts can "talk" to each other with notifications.
My question is: why should I deal with these situations differently, given that both of them happen in different transactions? Whether it is one bounded context or multiple, wouldn't the possible problems be the same?
For example, if the application crashes between two domain events in the same bounded context, I'll end up with inconsistency, just as with two bounded contexts.
It seems that the safest way to deal with two aggregates "talking" to each other asynchronously is to keep a transitional status in them, persist the events before sending them (to avoid losing events), make operations idempotent when possible, and deduplicate events on the receiving side when the operation cannot be made idempotent.
I see two aspects to consider in your question:
The DDD aspect: Event types and what you do with them
A technical aspect: how to implement it reliably
Regarding the types of events, what I would say is that events that stay within the boundaries of a bounded context (often called Domain Events) normally carry a lot of information, potentially a big part of the state of the Aggregate. If you use CQRS, they are used to create the Read Model. Events that cross the BC boundaries are sometimes called Integration Events, and they should carry as little data as possible (potentially only global IDs, like CustomerId or OrderId). The reason is that every extra property you add is extra coupling between the publisher BC and the subscriber BCs, which is what you want to minimize.
I would say that it's this distinction between the types of events which might lead to different technical solutions, but I agree with you that it doesn't have to be this way if you find a solution that works well for both cases.
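To make the contrast concrete, a sketch (the field choices are illustrative, not prescriptive):

import java.math.BigDecimal;
import java.time.Instant;
import java.util.List;

// Domain event: stays inside the BC, so a rich payload is fine
// (e.g. for building read models).
record OrderPlaced(String orderId, String customerId,
                   List<String> lineItems, BigDecimal total, Instant placedAt) {}

// Integration event: crosses the BC boundary, so it carries only the ID;
// subscribers query for whatever details they need.
record OrderPlacedIntegration(String orderId) {}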
The solution you propose is correct. It looks very similar to the Outbox feature of NServiceBus, which basically takes care of all this for you.
Another approach that I've used, if your message broker supports it, is what Azure Service Bus calls Send Via. With this feature, you can publish events via your own queue, and the send will be committed transactionally with the removal of the incoming message from the queue. This means that if, for some reason, the message you are processing is not deleted from the queue successfully (DB update exception, broker unavailable, etc.) and is therefore retried, you know for sure that the events were not sent, and you can safely publish them again during the retry. This makes idempotent operations simpler and avoids publishing ghost messages.
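To show the shape of the hand-rolled outbox alternative (a rough sketch with illustrative types; NServiceBus's actual Outbox also handles de-duplication and dispatch for you):

import java.util.ArrayList;
import java.util.List;

// All types here are illustrative stand-ins.
record PlaceOrder(String orderId) {}

class Order {
    private final List<Object> pending = new ArrayList<>();

    static Order place(PlaceOrder command) {
        Order order = new Order();
        order.pending.add("OrderPlaced:" + command.orderId()); // stand-in event
        return order;
    }

    List<Object> pendingEvents() { return pending; }
}

interface TransactionRunner { void inTransaction(Runnable work); }
interface OrderRepository { void save(Order order); }
interface OutboxTable { void append(List<Object> events); }

class PlaceOrderHandler {
    private final TransactionRunner tx;
    private final OrderRepository orders;
    private final OutboxTable outbox;

    PlaceOrderHandler(TransactionRunner tx, OrderRepository orders, OutboxTable outbox) {
        this.tx = tx;
        this.orders = orders;
        this.outbox = outbox;
    }

    void handle(PlaceOrder command) {
        tx.inTransaction(() -> {
            Order order = Order.place(command);    // domain logic raises events
            orders.save(order);
            outbox.append(order.pendingEvents());  // committed atomically with the save
        });
        // A separate relay publishes unsent outbox rows to the broker and marks
        // them sent; redelivery is possible, so consumers must de-duplicate.
    }
}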

How can I design a bridge from a legacy CRUD oriented app to a CQRS and Event sourcing system?

I was asked to implement CQRS/Event sourcing patterns into a legacy web application, in order to prepare to migrate it from a monolithic/state oriented model to a distributed, service oriented app.
I have some questions on how I can design a Domain oriented code bundle that would connect the legacy entities strongly coupled to database, with a new Event sourced model.
The first things I did were:
writing a small "framework" for CQRS/ES, with classes like AggregateRoot, DomainEvent, Command, Handlers, Messaging, Eventstore, AggregateIds, etc.
trying to group and "migrate" the legacy Entities into some Aggregates, to reconstruct all the history and states of the app into event-sourced Aggregates
plugging some Command dispatching into the old controllers, in order to let the app work as is, but also to feed the new CQRS/ES system on the side.
The context:
The legacy app contains several entities, mapped to the database, that hold the model layer. Our domain is human resources (manpower).
Let's say we have those existing entities:
Worker, with various fields and related entities (OneToOne, OneToMany), like
name
address 1-1
competences 1-N
Society, in which worker works, with various fields and related entities (OneToOne, OneToMany), like
name
address 1-1
hours
Contract, with various fields and related entities (OneToOne, OneToMany), like
address 1-1
Worker 1-1
Society 1-1
documents 1-N
days 1-N
hours
etc.
From this legacy model, I designed a MissionAggregate that holds:
A DB-independent ID, like a UUID
some Value Objects: address, days (they were entities in the legacy model; they became VOs here)
I also designed a WorkerAggregate and a SocietyAggregate, with fields and UUIDS, and in the MissionAggregate I added:
a reference to WorkerAggregate's UUID
a reference to SocietyAggregate's UUID
As I said earlier, my aim is to leave the legacy app as is, but just introduce in the CRUD controller's methods some calls to dispatch Commands to the new CQRS system.
For example:
After flushing the newly created Contract to the database, I want to dispatch a CreateMissionCommand to the new command bus.
It targets the appropriate Command Handler, which handles all the command's data, passes it to a newly created Aggregate with a new UUID, and stores a MissionCreatedDomainEvent in the EventStore.
The DomainEvent is indexed with an AggregateId, a playhead, and has a payload which contains the fields necessary to be applied to and build the MissionAggregate.
The newly created Contract in the app now has its usual lifecycle, with all the updates that the legacy app makes to it. But I also need to reflect all those changes onto the corresponding event-sourced Aggregate, so every time there is a database flush in the app, I dispatch a Command that translates the "CRUD-like operations" of the legacy app into a domain-oriented/command-oriented pattern.
To sum up the workflow is:
A Crud legacy operation occurs and flushes some changes on the Contract Entity
In just one row of code in the controller, I dispatch a command built with the necessary fields (the AggregateId of the MissionAggregate... which I need to have stored somewhere... see the problems below) to the Domain command bus, so that the impact on the existing code base is very low.
The bus passes the command to the corresponding command handler
The handler loads the aggregate and applies the changes by calling the appropriate Aggregate method
then after some validation, the aggregate raises and stores the appropriate event
My problems and questions (some of them at least) are:
I feel like I am rewriting big portions of the legacy app, with the same kind of relations between the Aggregates that I have between the Entities, and with the same types of validations, checks, etc.
Having references to both the WorkerAggregate and SocietyAggregate UUIDs in MissionAggregate implies that I have to build those aggregates as well (and hence dispatch commands from the legacy app when the Worker and Society entities are flushed). Can't I have only references to the Worker's entity ID and the Society's entity ID?
How can I avoid an eternally growing MissionAggregate? The Contract entity is quite huge; it has a lot of fields that are constantly updated (hours, days, documents, etc.). If I want to store all those events, I need a large MissionAggregate to reflect all those changes, and so I need tons of CommandHandlers that react to all the add, update, etc. Commands that I am going to dispatch from the legacy app.
How "free" is an Aggregate from the Root entity it is supposed to refer to ? For example, a Contract Entity needs to relate somewhere to it's related Mission Aggregate, like for example when I want to dispatch a Command from the app, just after the legacy code having flushed something on the Entity. Where to store this relation? In the Entity itself, in a AggregateId field? in the Aggregate, should I have a ContractId field? Or should I have some kind of Mapping Table somewhere that holds the relationship between Contract ID and MissionAggregate ID?
What to do with the past? Should I migrate all the existing data through a script that generates Aggregates and events for all the historical data?
Thanks in advance for your time.
You have a huge task ahead of you, let's try to break it down.
It's best to build this new part of the system in isolation from the legacy codebase; otherwise you're going to have your hands tied at every turn.
Create a separate layer in your project for these new requirements. We're going to call it "bubble" from now on. This bubble will be like a greenfield project, with its own structure, dependencies, etc. There will be no direct communication between the bubble and the legacy; communication will happen through another dedicated translation layer, which we'll call "Anti-Corruption Layer" (ACL).
ACL
It is like an API between two systems.
It translates calls from the bubble to the legacy and vice-versa. Its purpose is to prevent one system from corrupting or influencing the other. This way you can keep building/maintaining each system independently from each other.
At the same time, the ACL allows one system to consume the other, and reuse logic, validations, rules, etc.
To answer your questions directly:
I feel like I am rewriting big portions of the legacy app, with the same kind of relations between the Aggregates that I have between the Entities, and with the same types of validations, checks, etc.
With the ACL, you can resort to calling validations and reuse implementations from the legacy code. This will allow you time to rewrite things as needed or as possible.
You may not need to rewrite the entire system, though. If your goal is to implement CQRS and Event Sourcing, and you can achieve this goal by keeping most or part of the legacy system, I would say do it. Unless, of course, one of the goals is to completely replace the old system. Otherwise, keep it; write as little code as possible.
Suggested workflow:
Keep the CQRS and Event Sourcing system in the bubble
Do not bring these new frameworks into legacy
Make the legacy Controller issue method calls to the ACL (see the sketch after this list)
The ACL will convert these calls into Commands and dispatch them
Any events will be caught by your Event Sourcing framework
Results will be persisted to the bubble's database
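For example, steps 3 and 4 could look roughly like this (all names are illustrative stand-ins, not an actual framework API):

import java.util.UUID;

// Illustrative stand-ins for the bubble's messaging types.
record ContractSnapshot(long legacyContractId, String address, int hours) {}
record UpdateMission(UUID missionId, ContractSnapshot snapshot) {}
interface CommandBus { void dispatch(Object command); }
interface AggregateIdMap { UUID resolveOrCreate(long legacyContractId); } // legacy id -> Mission UUID

// The legacy controller calls this after flushing a Contract; the ACL
// translates the CRUD-style call into a domain command for the bubble.
class MissionAcl {
    private final CommandBus commandBus;
    private final AggregateIdMap idMap;

    MissionAcl(CommandBus commandBus, AggregateIdMap idMap) {
        this.commandBus = commandBus;
        this.idMap = idMap;
    }

    void contractSaved(ContractSnapshot snapshot) {
        UUID missionId = idMap.resolveOrCreate(snapshot.legacyContractId());
        commandBus.dispatch(new UpdateMission(missionId, snapshot));
    }
}

The AggregateIdMap here also suggests one answer to the later question about where to keep the Contract-to-Mission relationship: a mapping owned by the ACL.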
The bubble's database can be a different schema in the same database or can be a different database altogether. But you'll have to think about synchronization, and that's a topic of its own. To reduce complexity, I recommend a different schema in the same database.
Having references to both the WorkerAggregate and SocietyAggregate UUIDs in MissionAggregate implies that I have to build those aggregates as well (and hence dispatch commands from the legacy app when the Worker and Society entities are flushed). Can't I have only references to the Worker's entity ID and the Society's entity ID?
How can I avoid an eternally growing MissionAggregate? The Contract entity is quite huge; it has a lot of fields that are constantly updated (hours, days, documents, etc.). If I want to store all those events, I need a large MissionAggregate to reflect all those changes, and so I need tons of CommandHandlers that react to all the add, update, etc. Commands that I am going to dispatch from the legacy app.
You should aim for small aggregates. Huge aggregates are likely to degrade performance and cause concurrency problems.
If you anticipate having a huge aggregate, it is best to rethink it and try to break it down. Ask what fields/properties change together - these are possibly a different aggregate.
Also, when you speak about CQRS, you generally lean towards a task-based way of doing things in your system.
Think of a traditional web application, where you have a huge page with lots of fields that are all sent to the server in one batch when the user saves.
Now, contrast it with a modern web app where the user changes small portions of data at each step. If you think about your system this way you'll find those smaller aggregates.
P.S. You don't need to rebuild your interfaces for this. If your legacy system has those huge pages, you could have logic in the controllers to detect which fields were changed and issue the appropriate commands.
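For instance (hypothetical fields and commands):

import java.util.Objects;

// Illustrative stand-ins.
record ContractForm(long contractId, String address, int hours) {}
record RelocateMission(long contractId, String address) {}
record ChangeMissionHours(long contractId, int hours) {}
interface CommandBus { void dispatch(Object command); }

class ContractController {
    private final CommandBus commandBus;

    ContractController(CommandBus commandBus) {
        this.commandBus = commandBus;
    }

    // Compare the submitted form with the current state and emit one
    // intent-revealing command per cluster of fields that changed.
    void onSave(ContractForm incoming, ContractForm current) {
        if (!Objects.equals(incoming.address(), current.address())) {
            commandBus.dispatch(new RelocateMission(incoming.contractId(), incoming.address()));
        }
        if (incoming.hours() != current.hours()) {
            commandBus.dispatch(new ChangeMissionHours(incoming.contractId(), incoming.hours()));
        }
    }
}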
How "free" is an Aggregate from the Root entity it is supposed to refer to ? For example, a Contract Entity needs to relate somewhere to it's related Mission Aggregate, like for example when i want to dispatch a Command from the app, just after the legacy code having flushed something on the Entity. Where to store this relation ? In the Entity itself, in a AggregateId field ? in the Aggregate, should i have a ContratId field ? Or should i have some kind of Mapping Table somewhere that holds the relationship between Contract ID and MissionAggregate ID?
Aggregates represent a conceptual whole. They are like atoms, indivisible things. You should always refer to an aggregate by its Root Entity Id, and never to a Child Entity Id: looking from the outside, there are no children.
An aggregate should be loaded as a whole and persisted as a whole. One more reason to have small aggregates.
An aggregate can be comprised of a single entity. Or it can have more entities and value objects, forming a graph, but one entity will be elected as the Root and will hold references to its children. Child entities and value objects should not hold references to their parents. The dependency is not bi-directional.
If Contract is an entity inside the Mission aggregate, the Contract should not have a reference to its parent.
But, if your Contract and Mission are different aggregates, then they can reference each other by their Ids.
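In code, that reference-by-ID rule might look like this (a sketch):

import java.util.UUID;

// Sketch: if Mission, Worker and Society are separate aggregates, Mission
// holds only the other roots' IDs, never object references to them.
class Mission {
    private final UUID id;
    private final UUID workerId;   // WorkerAggregate root ID
    private final UUID societyId;  // SocietyAggregate root ID

    Mission(UUID id, UUID workerId, UUID societyId) {
        this.id = id;
        this.workerId = workerId;
        this.societyId = societyId;
    }
}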
What to do with the past? Should I migrate all the existing data through a script that generates Aggregates and events for all the historical data?
That's a question for the business experts. Do they need it? If they don't, then don't implement it just for the sake of doing so. Every decision you make should be geared towards satisfying a business need and generating real value for it, considering the costs and tradeoffs.
Some people say that code is a liability, not an asset, and I agree to some extent: every line of code you write needs to be tested and supported. Don't write any code that is not really necessary.
Also, have a look at this article about the Strangler Pattern, which shows how to migrate a legacy system by gradually replacing specific pieces of functionality with new applications and services.
If you have a chance, watch this course at Pluralsight (paid): Domain-Driven Design: Working with Legacy Projects. The author presents practical approaches for dealing with this kind of task.
I hope this has given you some insight.
I don't want to spoil your game. Everybody knows how cool it is to rewrite something from scratch. It's a challenge, it's fun, it's exciting. However...
migrate it from a monolithic/state oriented model to a distributed, service oriented app
CQRS/Event Sourcing won't solve any of your problems, and it won't help you distribute the app in any reasonable way. If you just generate events on the CRUD operations, you'll have a large tangled mess of dependencies between each part. Every part that needs data will have to call a couple of "services" (i.e. tables) to get it, then push data elsewhere and generate events that some other parts will react to. It will be a mess. Usually this is called a distributed monolith.
This is also the reason you already see problems with it. These problems won't go away, because you are essentially building the same system in the same way, but this time it'll be more complex.
Where to go from here
The very first thing is always: have a clear goal. You said you want a service-oriented architecture. Why? Are there parts that need different scaling or different resources? Are they managed by different teams with different life-cycles? Etc. Maybe you already have all this, I don't know; but if not, that's your first task.
Then: the parts you do want to pull out can't be just CRUD things. Those will not be independent, so whether your goal (see the point above!) is scaling or separate teams, you won't reach it! To be independent, you'll have to pull out the behavior with the data, and in a way that the service can operate on its own.
You can't just throw buzzwords at it and hope for the best. I'd suggest ignoring all the hype and buzzwords and thinking about the goal you want to reach.
For example: I need a million workers to be able to log their time in under 10 minutes total. That means I need a "service" that enables workers to log their time with a web interface. So let's create that as a completely independent piece with its own database, so it can be scaled to 100 nodes when it needs to be. Export the data to billing automatically every hour or so.

Check command for validity with data from other aggregate

I am currently working on my first bigger DDD application. For now it works pretty well, but we are stuck with an issue since the early days that I cannot stop thinking about:
In some of our aggregates we keep references to another aggregate root that is pretty essential for the whole application (based on their IDs, so there are no hard references; deletion is also handled via events/eventual consistency). Now, when we create a new entity "Entity1", we send a new CreateEntity1Command that contains the ID of the referenced aggregate root.
Now how can I check whether this referenced ID is a valid one? Right now we check it by reading from the other aggregate (without modifying anything there), but this approach somehow feels dirty. I would like to just "trust" the commands, because the ID cannot be entered manually but must be selected. The problem is that our application is a web application, and it is not really safe to trust the user input you get there (even though it is not accessible by the public).
Did I overlook any possible solutions for this problem, or should I just ignore the feeling that there needs to be a better solution?
Verifying that another referenced Aggregate exists is not the responsibility of an Aggregate. It would break the Single Responsibility Principle. When the CreateEntity1Command arrives at the Aggregate, it should be assumed that the other referenced Aggregate is in a valid state, i.e. that it exists.
Being outside the Aggregate's boundary, this check is eventually consistent. This means that, even if it initially passes, it could become invalid afterwards (i.e. the referenced Aggregate is deleted, unpublished or in some other invalid domain state). You need to ensure that:
the command is rejected if the referenced Aggregate does not exist. You do this check in the Application Service that is responsible for the use case, before dispatching the command to the Aggregate, using a Domain Service (a sketch follows below).
if the referenced Aggregate enters an invalid state afterwards, the correct actions are taken. You should do this inside a Saga/Process Manager. If CQRS is used, you subscribe to the relevant events; if not, you use a cron job. What the correct action is depends on your domain, but the main idea is that it should be modeled as a process.
So, long story short, the responsibility of an Aggregate does not extend beyond its consistency boundary.
P.S. Resist the temptation to inject services (Domain or not) into Aggregates (through constructor or method arguments).
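A sketch of the check from point 1, done in the application service before dispatch (all names are illustrative):

// Illustrative stand-ins.
record CreateEntity1Command(String referencedAggregateId) {}
interface ReferencedAggregateLookup { boolean exists(String aggregateId); } // domain service over a read model
interface CommandBus { void dispatch(Object command); }

class CreateEntity1UseCase {
    private final ReferencedAggregateLookup lookup;
    private final CommandBus commandBus;

    CreateEntity1UseCase(ReferencedAggregateLookup lookup, CommandBus commandBus) {
        this.lookup = lookup;
        this.commandBus = commandBus;
    }

    void handle(CreateEntity1Command command) {
        if (!lookup.exists(command.referencedAggregateId())) {
            throw new IllegalArgumentException("referenced aggregate does not exist");
        }
        commandBus.dispatch(command); // the aggregate may now assume the reference was valid
    }
}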
Direct Aggregate-to-Aggregate interaction is an anti-pattern in DDD. An aggregate A should not directly send a command or query to an aggregate B. Aggregates are strict consistency boundaries.
I can think of two solutions to your problem. Let's say you have two aggregate roots (ARs), A and B. Each AR has a bunch of command handlers, where each command raises one or more events. Your command handler in A depends on some data in B.
You can subscribe to the events raised by B and maintain the state of B in A. You can subscribe only to the events which dictate the validity.
You can have a completely independent service (S) coordinating between A and B. Instead of directly sending your request to A, send it to S, which is responsible for querying B (to check the validity of the referenced ID) and then forwarding the request to A. This is sometimes called a Process Manager (PM).
For example, in your case, when you are creating a new entity "Entity1", send this request to a PM whose job is to validate that the data in your request is valid, and then route the request to the aggregate responsible for creating "Entity1". Send the new CreateEntity1Command that contains the ID of the referenced aggregate root to this PM; it uses the referenced AR's ID to make sure it's valid, and only if it is valid does it pass your request forward.
Useful Links: http://microservices.io/patterns/data/saga.html
Did I overlook any possible solutions for this problem
You did. "Domain Services" give you a possible loophole to play in.
Aggregates are consistency boundaries; their behaviors are constrained by
The current state of the aggregate
The arguments that they are passed.
If an aggregate needs to interact with something outside of its boundary, then you pass to the aggregate root a domain service to encapsulate that interaction. The aggregate, at its own discretion, can invoke methods provided by the domain service to achieve work.
Often, the domain service is just a wrapper around an application or infrastructure service. For instance, if the aggregate needs to know whether some external data is available, you could pass in a domain service that supports that query, checking against some cache of data.
But here's the trick: you need to stay aware of the fact that data from outside the aggregate boundary is necessarily stale. There might be another process changing the data even as you are querying a stale copy.
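As a sketch of that shape (the service and aggregate names are hypothetical):

// Illustrative domain service interface; the implementation would live in
// the application/infrastructure layer (e.g. backed by a cache).
interface ExternalDataAvailability {
    boolean isAvailable(String externalId);
}

class SomeAggregate {
    private boolean linked;

    // The caller passes the domain service in; the aggregate decides when to use it.
    void linkExternalData(String externalId, ExternalDataAvailability availability) {
        // Note: the answer is necessarily stale, as discussed above.
        if (!availability.isAvailable(externalId)) {
            throw new IllegalStateException("external data not available");
        }
        this.linked = true; // and raise the corresponding event
    }
}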
The problem is that our application is a web application, and it is not really safe to trust the user input you get there (even though it is not accessible by the public).
That's true, but it's not typically a domain problem. For instance, we might specify that an endpoint in our API requires a JSON representation of some command message -- but that doesn't mean that the domain model is responsible for taking a raw byte array and creating a DOM for it. The application layer would have that responsibility; the aggregate's responsibility is the domain concerns.
It can take some careful thinking to distinguish where the boundary between the different concerns lies. Is this sequence of bytes a valid identifier for an aggregate? is clearly an application concern. Is the other aggregate in a state that permits some behavior? is clearly a domain concern. Does the aggregate exist at all...? could go either way.

Inter-Aggregate Communication in CQRS + DDD + Event Sourcing

How should separate aggregate roots (AR) communicate with one another in an environment built on DDD principles using an event-sourced aggregate back-end?
For instance, I have a Facility aggregate root (AR) which has a factory method responsible for creating a Booking AR. The Booking is a time-sensitive combination of a Person AR and a Facility AR. A Person can only be booked in a single Facility.
In DDD, I would have held references to the Booking in Person, and to the Person in Facility. However, when generating events for use in event sourcing, I think that trying to handle the event deserialization from the back end would become prohibitive. Therefore, I've taken to only holding references to the value-object-based unique IDs. This brings up a new problem, however: when a method on an AR needs to call a method on another AR, how do you handle that situation? Hit the event-source repository from the domain AR?
What is the general use case in this scenario? Am I approaching this all wrong?
Aggregate Root boundaries define a consistency boundary.
Inside the aggregate, consistency is guaranteed.
Outside... it's not.
So you should not have operations that span several aggregates and have to be consistent.
If you need a transaction that spans two aggregates, you should review your aggregate boundaries.
For things that happen outside the aggregate you should have an event handler that will send a command to other aggregates.
If the logic of actions between aggregates is more complicated, you can define a process, a state machine that will listen to events and send commands to aggregates.
Processes can be used to define long running transactions (with compensation instead of rollback), or take business decisions based on what's happening in the system at a large scale (even between bounded contexts).
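For example, a minimal process in the question's own terms (names are illustrative): it listens to an event from the Facility side and reacts by commanding the Person aggregate, each aggregate still changing in its own transaction.

import java.util.UUID;

// Illustrative stand-ins.
record BookingCreated(UUID bookingId, UUID facilityId, UUID personId) {}
record AssignPersonToBooking(UUID personId, UUID bookingId) {}
interface CommandBus { void dispatch(Object command); }

// Event handler / process: reacts to an event from one aggregate by
// sending a command to another.
class BookingProcess {
    private final CommandBus commandBus;

    BookingProcess(CommandBus commandBus) {
        this.commandBus = commandBus;
    }

    void on(BookingCreated event) {
        commandBus.dispatch(new AssignPersonToBooking(event.personId(), event.bookingId()));
    }
}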
When using Event Sourcing and CQRS, the most elegant (at least in my opinion) way of inter-AR communication is messaging. You can look at the Ncqrs project (it will be easier if you are a .NET person), particularly the 'Messaging' branch. The idea is that ARs implement an IMessageHandler interface for every message type they handle, and the AR base class exposes a Send method for sending these messages. By means of this API, clients can invoke model behavior and the model itself can communicate between ARs.
