Ncqrs: How to raise an Event without having an Aggregate Root - domain-driven-design

Given I have two Bounded Contexts:
Fleet Mgt - simple CRUD-based supporting sub-domain
Sales - which is my CQRS-based Core Domain
When a CRUD operation occurs in the fleet management, an event reflecting the operation should be published:
AircraftCreated
AircraftUpdated
AircraftDeleted
etc.
These events are required a) to update various index tables that are needed in the Sales domain and b) to provide a unified audit log.
Question: Is there an easy way to store and publish these events (to the InProcessEventBus; I'm not using NSB here) without going through an AggregateRoot, which I wouldn't need in a simple CRUD context?

If you want to publish an event about something, that something is probably an aggregate root, because it is an externally identifiable object representing a bundle of interest; otherwise, why would you want to keep track of it?
Keeping that in mind, you don't need index tables (I understand these are for querying) in the Sales BC. You only need the GUIDs of the Aircraft, plus lookups/joins on the read side.
For auditing, I would just add a generic audit event via reflection in the repositories/unit of work.
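For example, a minimal sketch of that idea (the GenericAuditEvent type and the AuditingRepository base are hypothetical; the point is simply to capture an entity's property values via reflection and publish one audit event per CRUD operation):

using System;
using System.Linq;

// Hypothetical generic audit event raised for every CRUD operation.
public class GenericAuditEvent
{
    public string EntityType { get; set; }
    public string Operation { get; set; }    // "Created", "Updated", "Deleted"
    public string Snapshot { get; set; }     // property values captured via reflection
    public DateTime OccurredOn { get; set; }
}

// Hypothetical repository base that publishes the audit event after persisting.
public abstract class AuditingRepository<TEntity>
{
    private readonly Action<GenericAuditEvent> _publish;

    protected AuditingRepository(Action<GenericAuditEvent> publish) => _publish = publish;

    protected void Audit(TEntity entity, string operation)
    {
        // Capture all readable property values via reflection.
        var snapshot = string.Join(", ",
            typeof(TEntity).GetProperties()
                .Where(p => p.CanRead)
                .Select(p => $"{p.Name}={p.GetValue(entity)}"));

        _publish(new GenericAuditEvent
        {
            EntityType = typeof(TEntity).Name,
            Operation = operation,
            Snapshot = snapshot,
            OccurredOn = DateTime.UtcNow
        });
    }
}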

According to Pieter, the main contributor of Ncqrs, there is no way to do this out of the box.
In this scenario I don't want to go through the whole ceremony of creating and executing a command, then loading an aggregate root from the event store just to have it emit the event.
The behavior is simple CRUD, implemented using the simplest possible solution, which in this specific case is forms-over-data using Entity Framework. The only thing I need is an event being published once a transaction has occurred.
My solution looks like this:
// Abstract base class that provides a Unit Of Work
public abstract class EventPublisherMappedByConvention
    : AggregateRootMappedByConvention
{
    public void Raise(ISourcedEvent e)
    {
        var context = NcqrsEnvironment.Get<IUnitOfWorkFactory>()
            .CreateUnitOfWork(e.EventIdentifier);

        ApplyEvent(e);
        context.Accept();
    }
}

// Concrete implementation for my specific domain
// Note: The events only reflect the CRUD that's happened.
// The methods themselves can stay empty, state has been persisted through
// other means anyway.
public class FleetManagementEventSource : EventPublisherMappedByConvention
{
    protected void OnAircraftTypeCreated(AircraftTypeCreated e) { }
    protected void OnAircraftTypeUpdated(AircraftTypeUpdated e) { }
    // ...
}

// This can be called from anywhere in my application, once the
// EF-based transaction has succeeded:
new FleetManagementEventSource().Raise(new AircraftTypeUpdated { ... });

Related

In Event Sourcing, How to make changes in database using arrays and no ORM?

I have a class like this:
class Community
{
    public List<Moderator> Moderators = new();

    public void AddModerator(Moderator moderator) => Moderators.Add(moderator);
}
When I run the replay of all events from my EventStore, it's fine to rebuild the list of moderators and send it to the repository. But when the application triggers those events from the API, I have a problem, because I'm not using any ORM or Entity Framework, since my graph database doesn't support that. So if a moderator's status changes, or a moderator is added or removed, and I pass this to the repository, I need to check whether the moderator exists and then add it, or remove it if it's no longer in the list.
How can I solve this while still using domain entities? Maybe when AddModerator is called from the API, I send some message to another service that adds this moderator, for example?
Or if I call DisableModerator(int moderatorId), somehow call another service to make this change.
It's fine when I delete the entire database and reconstruct it by replaying all events, but in production I don't know how to make these changes directly in the moderator entity or repository when I change something in Moderators on the Community aggregate root.
To be able to handle updating the database in a clean way, you need to create a class outside of your DDD model that takes care of it. Inject your class into your model and use it like you would use Entity Framework or any other ORM.
For Example:
public class CommunityRepository
{
    private readonly IUoW _context;

    public CommunityRepository(IUoW context)
    {
        _context = context;
        ...
    }

    public void Save(Community community)
    {
        _context.Save(community);
    }
}
This looks very similar to what you would have if you did use an ORM, but you would own the class that implements the IUoW interface.
So, how do you write your own ORM? You can use a pattern called the Unit of Work pattern. The pattern is described in detail here: https://dotnettutorials.net/lesson/unit-of-work-csharp-mvc/
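A minimal sketch of what that could look like (the IUoW shape below is illustrative, chosen to match the Save call in the repository above; Commit is where the collected changes would be translated into graph database statements):

using System.Collections.Generic;

// Hypothetical Unit of Work that you own instead of an ORM.
public interface IUoW
{
    void Save<TEntity>(TEntity entity);   // register an aggregate to be persisted
    void Commit();                        // apply all pending changes to the graph database
}

public class GraphDbUnitOfWork : IUoW
{
    private readonly List<object> _pending = new();

    public void Save<TEntity>(TEntity entity) => _pending.Add(entity);

    public void Commit()
    {
        foreach (var entity in _pending)
        {
            // Here you would diff the aggregate against the stored version and
            // translate the differences into graph database calls
            // (add moderator, remove moderator, update status, ...).
        }

        _pending.Clear();
    }
}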

Aggregate as a service

Assume a scenario where a service requires some global configuration to handle a request.
For example, when a user wants to do something, some global configuration is required to check whether the user is permitted to do so.
I realize that in Axon I can have command handlers that handle commands without a specified target aggregate, so the handling part isn't a problem.
The problem is that I would like persistent storage on top of that, and some invariants when trying to change the configuration. The whole idea of the configuration is that it should be consistent, like an aggregate in Axon.
ConfigService {
    @Inject
    configRepository;
    @Inject
    eventGateway;

    @CommandHandler
    handle(changeConfig) {
        let current = configRepository.loadCurrent();
        // some checks
        // persist here?
        eventGateway.send(configChanged);
    }

    @EventHandler
    on(configChanged) {
        // or persist here?
        configRepository.saveCurrent(configChanged.data);
    }
}
If I do persistence in the command handler, I think I shouldn't also use the event handler, because it would save the config twice. But then, if I somehow lose the config repository data, I can rebuild it based on the events.
I'm not sure what I'm missing here in my understanding of the DDD concepts. To put it simply, I would like to know where to put a command handler for something that is neither an aggregate nor an entity.
Maybe I should create a command handler that calls the config service, instead of making the config service the command handler.
Are you using Axon without event sourcing here?
In the Axon framework it is generally good practice to change the state of an aggregate only with events. If you are going to mix state or configuration loaded from a repository with state from the event store, how will you be able to guarantee that when you replay the same events, the resulting state will be the same? The next time the aggregate is loaded, there may be different state in your configRepository, resulting in a different state and different behavior of your aggregate.
Why is this bad? Well, those same events may have been handled by event processors; they may have filled query tables, sent messages to other systems, or done other work based on the state the system had at the time. You will have a disagreement between your query database and your aggregate.
A concrete example: Imagine your aggregate processed a command to switch an email service on. The aggregate did this by applying an EmailServiceEnabledEvent and changing its own state to 'boolean emailEnabled = true'. After a while, the aggregate gets unloaded from memory. Now you change that configurationRepository to disable switching the email service on. When the aggregate is loaded again, events from the event store are applied, but this time it loads the configuration from your repository that says it shouldn't switch the email service on. The 'boolean emailEnabled' state is left false. You send a disable email service command to the aggregate, but the command handler in the aggregate thinks the email is already disabled, and doesn't apply an EmailServiceDisabledEvent. The email service is left on.
In short: I would recommend using commands to change the configuration of your aggregate.
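To illustrate that recommendation in plain C# (this is not Axon-specific and all names are made up): keep the configuration in an aggregate, let the command handler validate and apply an event, and let only the event handler mutate state, so a replay of the same events always reproduces the same state.

using System;
using System.Collections.Generic;

public record ChangeConfigCommand(string Key, string Value);
public record ConfigChangedEvent(string Key, string Value);

// Hypothetical event-sourced configuration aggregate.
public class ConfigurationAggregate
{
    private readonly Dictionary<string, string> _settings = new();
    private readonly List<ConfigChangedEvent> _uncommitted = new();

    // Command handler: checks invariants, then applies an event. No direct state change.
    public void Handle(ChangeConfigCommand command)
    {
        if (string.IsNullOrWhiteSpace(command.Key))
            throw new InvalidOperationException("Config key must not be empty.");

        Apply(new ConfigChangedEvent(command.Key, command.Value));
    }

    // Event handler: the only place where state changes, so replays are deterministic.
    private void On(ConfigChangedEvent @event) => _settings[@event.Key] = @event.Value;

    private void Apply(ConfigChangedEvent @event)
    {
        On(@event);
        _uncommitted.Add(@event);   // to be persisted to the event store and published
    }

    // Rehydration from the event store.
    public static ConfigurationAggregate Replay(IEnumerable<ConfigChangedEvent> history)
    {
        var aggregate = new ConfigurationAggregate();
        foreach (var e in history) aggregate.On(e);
        return aggregate;
    }
}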
It seems to me that your global configuration is either a specification or a set of rules, like in a rules engine.
Unlike the patterns described in the GoF book, in DDD some building blocks/patterns are more generic and can apply to different types of objects that you have.
For example, an Entity is something that has a life-cycle and an identity. The stages in the life-cycle usually are: created, persisted, reconstructed from storage, modified, and then its life-cycle ends by being deleted, archived, completed etc.
A Value Object is something that doesn't have identity and (most of the time) is immutable; two instances can be compared by the equality of their properties. Value Objects represent important concepts in our domains, like Money in systems that deal with accounting, banking etc., or Vector3 and Matrix3 in systems that do mathematical calculations and simulations, like modeling tools (3dsMax, Maya) and video games. They contain important behavior.
So everything that you need to track and has identity can be an Entity.
You can have a Specification that is an entity, a Rule that is an entity, and an Event can also be an entity if it has a unique ID assigned to it. In this case you can treat them just like any other entity: you can form aggregates, have repositories and services, and use EventSourcing if necessary.
On the other hand a Specification, a Rule, an Event or a Command can also be Value Objects.
Specifications and Rules can also be Domain Services.
One important thing here is also the Bounded Context. The system that updates these rules is probably in a different Bounded Context than the system that applies these rules. It's also possible that this isn't the case.
Here's an example.
Let's have a system where a Customer can buy stuff. This system will also have Discounts on Orders that follow specific Rules.
Let's say we have a rule that says: if a Customer has made an Order with more than 5 LineItems, he gets a discount. If that Order has a total price above some amount (say $1000), he also gets a discount.
The percentage of the discounts can be changed by the Sales team. The Sales system has OrderDiscountPolicy aggregates that it can modify. On the other hand, the Ordering system only reads OrderDiscountPolicy aggregates and won't be able to modify them, as this is the responsibility of the Sales team.
The Sales system and the Ordering system can be part of two separate Bounded Contexts: Sales and Orders. The Orders Bounded Context depends on Sales Bounded Context.
Note: I'll skip most implementation details and add only the relevant things to shorten and simplify this example. If its intent is not clear, I'll edit and add more details. UUID, DiscountPercentage and Money are value objects that I'll skip.
public interface OrderDiscountPolicy {
    public UUID getID();
    public DiscountPercentage getDiscountPercentage();
    public void changeDiscountPercentage(DiscountPercentage percentage);
    public bool canApplyDiscount(Order order);
}

public class LineItemsCountOrderDiscountPolicy implements OrderDiscountPolicy {
    public int getLineItemsCount() { }
    public void changeLineItemsCount(int count) { }

    public bool canApplyDiscount(Order order) {
        return order.getLineItemsCount() > this.getLineItemsCount();
    }

    // other stuff from interface implementation
}

public class PriceThresholdOrderDiscountPolicy implements OrderDiscountPolicy {
    public Money getPriceThreshold() { }
    public void changePriceThreshold(Money threshold) { }

    public bool canApplyDiscount(Order order) {
        return order.getTotalPriceWithoutDiscount() > this.getPriceThreshold();
    }

    // other stuff from interface implementation
}

public class LineItem {
    public UUID getOrderID() { }
    public UUID getProductID() { }
    public Quantity getQuantity() { }
    public Money getProductPrice() { }

    public Money getTotalPrice() {
        return getProductPrice().multiply(getQuantity());
    }
}

public enum OrderStatus { Pending, Placed, Approved, Rejected, Shipped, Finalized }

public class Order {
    private UUID mID;
    private OrderStatus mStatus;
    private List<LineItem> mLineItems;
    private DiscountPercentage mDiscountPercentage;

    public UUID getID() { }
    public OrderStatus getStatus() { }
    public DiscountPercentage getDiscountPercentage() { }

    public Money getTotalPriceWithoutDiscount() {
        // return sum of all line items
    }

    public Money getTotalPrice() {
        // return sum of all line items + discount percentage
    }

    public void changeStatus(OrderStatus newStatus) { }

    public List<LineItem> getLineItems() {
        return Collections.unmodifiableList(mLineItems);
    }

    public LineItem addLineItem(UUID productID, Quantity quantity, Money price) {
        LineItem item = new LineItem(this.getID(), productID, quantity, price);
        mLineItems.add(item);
        return item;
    }

    public void applyDiscount(DiscountPercentage discountPercentage) {
        mDiscountPercentage = discountPercentage;
    }
}

public class PlaceOrderCommandHandler {
    public void handle(PlaceOrderCommand cmd) {
        Order order = mOrderRepository.getByID(cmd.getOrderID());
        List<OrderDiscountPolicy> discountPolicies =
            mOrderDiscountPolicyRepository.getAll();

        for (OrderDiscountPolicy policy : discountPolicies) {
            if (policy.canApplyDiscount(order)) {
                order.applyDiscount(policy.getDiscountPercentage());
            }
        }

        order.changeStatus(OrderStatus.Placed);
        mOrderRepository.save(order);
    }
}

public class ChangeOrderDiscountPolicyPercentageHandler {
    public void handle(ChangeOrderDiscountPolicyPercentage cmd) {
        OrderDiscountPolicy policy =
            mOrderDiscountRepository.getByID(cmd.getPolicyID());
        policy.changePercentage(cmd.getDiscountPercentage());
        mOrderDiscountRepository.save(policy);
    }
}
You can use EventSourcing if you think that it's appropriate for some aggregates. The DDD book has a chapter on global rules and specifications.
Let's take a look at what we would do in the case of a distributed application, for example one using microservices.
Let's say we have 2 services: OrdersService and OrdersDiscountService.
There are couple of ways to implement this operation. We can use:
Choreography with Events
Orchestration with explicit Saga or a Process Manager
Here's how we can do it if we use Choreography with Events.
CreateOrderCommand -> OrdersService -> OrderCreatedEvent
OrderCreatedEvent -> OrdersDiscountService -> OrderDiscountAvailableEvent or OrderDiscountNotAvailableEvent
OrderDiscountAvailableEvent or OrderDiscountNotAvailableEvent -> OrdersService -> OrderPlacedEvent
In this example, to place the order, OrdersService will wait for OrderDiscountAvailableEvent or OrderDiscountNotAvailableEvent so it can apply a discount before changing the status of the order to OrderPlaced.
We can also use an explicit Saga to do Orchestration between services.
This Saga will contain the sequence of steps for the process so it can execute it.
PlaceOrderCommand -> Saga
Saga asks OrdersDiscountService to see if a discount is available for that Order.
If discount is available, Saga calls OrdersService to apply a discount
Saga calls OrdersService to set the status of the Order to OrderPlaced
Note: Steps 3 and 4 can be combined
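A minimal sketch of such an explicit Saga in C# (the service interfaces, command and method names here are assumptions made for illustration; in a real system each call would go over messaging or HTTP to the other service):

using System;

public interface IOrdersService
{
    void ApplyDiscount(Guid orderId, decimal discountPercentage);
    void PlaceOrder(Guid orderId);
}

public interface IOrdersDiscountService
{
    // Returns the discount percentage, or null when no discount applies.
    decimal? GetDiscountFor(Guid orderId);
}

public record PlaceOrderCommand(Guid OrderId);

// Orchestration Saga: the whole process is explicit in one place.
public class PlaceOrderSaga
{
    private readonly IOrdersService _orders;
    private readonly IOrdersDiscountService _discounts;

    public PlaceOrderSaga(IOrdersService orders, IOrdersDiscountService discounts)
    {
        _orders = orders;
        _discounts = discounts;
    }

    public void Handle(PlaceOrderCommand command)
    {
        // Step 2: ask OrdersDiscountService whether a discount applies to this Order.
        var discount = _discounts.GetDiscountFor(command.OrderId);

        // Steps 3 and 4 (combined): apply the discount, if any, then place the order.
        if (discount.HasValue)
            _orders.ApplyDiscount(command.OrderId, discount.Value);

        _orders.PlaceOrder(command.OrderId);
    }
}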
This raises the question: "How does OrdersDiscountService get all the necessary information about the Order to calculate discounts?"
This can be achieved either by adding all of the information about the order to the Event that this service receives, or by having OrdersDiscountService call OrdersService to get the information.
Here's a great video from Martin Fowler on Event-Driven Architectures that discusses these approaches.
The advantage of Orchestration with a Saga is that the exact process is explicitly defined in the Saga and can be found, understood and debugged.
Having implicit processes like in the case of the Choreography with Events can be harder to understand, debug and maintain.
The downside of having Sagas is that we do define more things.
Personally, I tend to go for the explicit Saga, especially for complex processes, but most of the systems I work on and see use both approaches.
Here are some additional resources:
https://blog.couchbase.com/saga-pattern-implement-business-transactions-using-microservices-part/
https://blog.couchbase.com/saga-pattern-implement-business-transactions-using-microservices-part-2/
https://microservices.io/patterns/data/saga.html
The LMAX Architecture is a very interesting read. It's not a distributed system, but it is event-driven and records both incoming events/commands and outgoing events. It's an interesting way to capture everything that happened in a system or a service.

EventSourced Saga Implementation

I have written an Event Sourced Aggregate and have now implemented an Event Sourced Saga... I have noticed the two are similar and created an event sourced object as a base class from which both derive.
I have seen one demo here http://blog.jonathanoliver.com/cqrs-sagas-with-event-sourcing-part-ii-of-ii/ but feel there may be an issue, as commands could be lost in the event of a process crash, since the sending of commands is outside the write transaction?
public void Save(ISaga saga)
{
    var events = saga.GetUncommittedEvents();

    eventStore.Write(new UncommittedEventStream
    {
        Id = saga.Id,
        Type = saga.GetType(),
        Events = events,
        ExpectedVersion = saga.Version - events.Count
    });

    foreach (var message in saga.GetUndispatchedMessages())
        bus.Send(message); // can be done in different ways

    saga.ClearUncommittedEvents();
    saga.ClearUndispatchedMessages();
}
Instead I am using Greg Young's EventStore and when I save an EventSourcedObject (either an aggregate or a saga) the sequence is as follows:
Repository gets list of new MutatingEvents.
Writes them to stream.
The EventStore fires off the new events once they are written and committed to the stream.
We listen for the events from the EventStore and handle them in EventHandlers.
I am implementing the two aspects of a saga:
To take in events, which may transition state, which in turn may emit commands.
To have an alarm so that at some point in the future (via an external timer service) we can be called back.
Questions
As I understand it, event handlers should not emit commands (what happens if the command fails?), but am I OK with the above, since the Saga is the actual thing controlling the creation of commands (in reaction to events) via this event proxy, and any failure of command sending can be handled externally (in the external EventHandler that deals with CommandEmittedFromSaga and resends if the command fails)?
Or do I forget about wrapping events and instead store native Commands and Events in the same stream (intermixed with a base class Message; the Saga would consume both Commands and Events, an Aggregate would only consume Events)?
Any other reference material on the net for implementation of event sourced Sagas? Anything I can sanity check my ideas against?
Some background code is below.
Saga issues a command to Run (wrapped in a CommandEmittedFromSaga event)
The command below is wrapped inside an event:
public class CommandEmittedFromSaga : Event
{
    public readonly Command Command;
    public readonly Identity SagaIdentity;
    public readonly Type SagaType;

    public CommandEmittedFromSaga(Identity sagaIdentity, Type sagaType, Command command)
    {
        Command = command;
        SagaType = sagaType;
        SagaIdentity = sagaIdentity;
    }
}
Saga requests a callback at some point in future (AlarmRequestedBySaga event)
The alarm callback request is wrapped inside an event, and will fire an event back to the Saga on or after the requested time:
public class AlarmRequestedBySaga : Event
{
    public readonly Event Event;
    public readonly DateTime FireOn;
    public readonly Identity Identity;
    public readonly Type SagaType;

    public AlarmRequestedBySaga(Identity identity, Type sagaType, Event @event, DateTime fireOn)
    {
        Identity = identity;
        SagaType = sagaType;
        Event = @event;
        FireOn = fireOn;
    }
}
Alternatively I can store both Commands and Events in the same stream of base type Message
public abstract class EventSourcedSaga
{
    protected EventSourcedSaga() { }

    protected EventSourcedSaga(Identity id, IEnumerable<Message> messages)
    {
        Identity = id;

        if (messages == null) throw new ArgumentNullException(nameof(messages));

        var count = 0;
        foreach (var message in messages)
        {
            var ev = message as Event;
            var command = message as Command;

            if (ev != null) Transition(ev);
            else if (command != null) _messages.Add(command);
            else throw new Exception($"Unsupported message type {message.GetType()}");

            count++;
        }

        if (count == 0)
            throw new ArgumentException("No messages provided");

        // All we need to know is the original number of events this
        // entity has had applied at time of construction.
        _unmutatedVersion = count;
        _constructing = false;
    }

    readonly IEventDispatchStrategy _dispatcher = new EventDispatchByReflectionStrategy("When");
    readonly List<Message> _messages = new List<Message>();
    readonly int _unmutatedVersion;
    private readonly bool _constructing = true;
    public readonly Identity Identity;

    public IList<Message> GetMessages()
    {
        return _messages.ToArray();
    }

    public void Transition(Event e)
    {
        _messages.Add(e);
        _dispatcher.Dispatch(this, e);
    }

    protected void SendCommand(Command c)
    {
        // Don't add a command whilst we are in the constructor. Message
        // state transition during construction must not generate new
        // commands, as those commands will already be in the message list.
        if (_constructing) return;

        _messages.Add(c);
    }

    public int UnmutatedVersion() => _unmutatedVersion;
}
I believe the first two questions are the result of a wrong understanding of Process Managers (aka Sagas, see note on terminology at bottom).
Shift your thinking
It seems like you are trying to model it (as I once did) as an inverse aggregate. The problem with that: the "social contract" of an aggregate is that its inputs (commands) can change over time (because systems must be able to change over time), but its outputs (events) cannot. Once written, events are a matter of history and the system must always be able to handle them. With that condition in place, an aggregate can be reliably loaded from an immutable event stream.
If you try to just reverse the inputs and outputs in a process manager implementation, its output cannot be a matter of record, because commands can be deprecated and removed from the system over time. When you try to load a stream containing a removed command, it will crash. Therefore a process manager modeled as an inverse aggregate could not be reliably reloaded from an immutable message stream. (Well, I'm sure you could devise a way... but is it wise?)
So let's think about implementing a Process Manager by looking at what it replaces. Take for example an employee who manages a process like order fulfillment. The first thing you do for this user is set up a view in the UI for them to look at. The second thing you do is make buttons in the UI for the user to perform actions in response to what they see on the view. E.g. "This row has PaymentFailed, so I click CancelOrder. This row has PaymentSucceeded and OrderItemOutOfStock, so I click ChangeToBackOrder. This order is Pending and 1 day old, so I click FlagOrderForReview"... and so forth. Once the decision process is well-defined and starts requiring too much of the user's time, you are tasked with automating this process. To automate it, everything else can stay the same (the view, even some of the UI so you can check on it), but the user has changed to be a piece of code.
"Go away or I will replace you with a very small shell script."
The process manager code now periodically reads the view and may issue commands if certain data conditions are present. Essentially, the simplest version of a Process Manager is some code that runs on a timer (e.g. every hour) and depends on particular view(s). That's the place where I would start... with stuff you already have (views/view updaters) and minimal additions (code that runs periodically). Even if you decide later that you need different capability for certain use cases, "Future You" will have a better idea of the specific shortcomings that need addressing.
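A minimal sketch of that starting point (the view, command, and sender types are hypothetical; the only real machinery is a method that a scheduler calls periodically):

using System;
using System.Collections.Generic;

public record OrderRow(Guid OrderId, bool PaymentFailed, bool PaymentSucceeded,
                       bool OrderItemOutOfStock, int AgeInDays);

public interface IOrderFulfillmentView { IEnumerable<OrderRow> GetPendingOrders(); }
public interface ICommandSender { void Send(object command); }

public record CancelOrder(Guid OrderId);
public record ChangeToBackOrder(Guid OrderId);
public record FlagOrderForReview(Guid OrderId);

// The "very small shell script": reads the same view a human would, and issues
// commands when the data conditions are present. Run it from a timer (e.g. hourly).
public class OrderFulfillmentProcessManager
{
    private readonly IOrderFulfillmentView _view;
    private readonly ICommandSender _commands;

    public OrderFulfillmentProcessManager(IOrderFulfillmentView view, ICommandSender commands)
    {
        _view = view;
        _commands = commands;
    }

    public void Run()
    {
        foreach (var row in _view.GetPendingOrders())
        {
            if (row.PaymentFailed)
                _commands.Send(new CancelOrder(row.OrderId));
            else if (row.PaymentSucceeded && row.OrderItemOutOfStock)
                _commands.Send(new ChangeToBackOrder(row.OrderId));
            else if (row.AgeInDays >= 1)
                _commands.Send(new FlagOrderForReview(row.OrderId));
        }
    }
}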
And this is a great place to remind you of Gall's law and probably also YAGNI.
Any other reference material on the net for implementation of event sourced Sagas? Anything I can sanity check my ideas against?
Good material is hard to find as these concepts have very malleable implementations, and there are diverse examples, many of which are over-engineered for general purposes. However, here are some references that I have used in the answer.
DDD - Evolving Business Processes
DDD/CQRS Google Group (lots of reading material)
Note that the term Saga has a different implication than a Process Manager. A common saga implementation is basically a routing slip with each step and its corresponding failure compensation included on the slip. This depends on each receiver of the routing slip performing what is specified on the routing slip and successfully passing it on to the next hop or performing the failure compensation and routing backward. This may be a bit too optimistic when dealing with multiple systems managed by different groups, so process managers are often used instead. See this SO question for more information.
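To make the routing-slip idea concrete, here is a deliberately simplified sketch (shown in-process for brevity; in a real saga each step lives in a different service and the slip travels with the message):

using System;
using System.Collections.Generic;

// Each step on the slip carries the work to do and its failure compensation.
public record RoutingStep(string Destination, Action DoWork, Action Compensate);

public class RoutingSlip
{
    private readonly Queue<RoutingStep> _remaining;
    private readonly Stack<RoutingStep> _completed = new();

    public RoutingSlip(IEnumerable<RoutingStep> steps) =>
        _remaining = new Queue<RoutingStep>(steps);

    public void Execute()
    {
        while (_remaining.Count > 0)
        {
            var step = _remaining.Dequeue();
            try
            {
                step.DoWork();           // the receiver performs what the slip specifies
                _completed.Push(step);   // then passes the slip to the next hop
            }
            catch
            {
                // On failure, route backward: run compensations in reverse order.
                while (_completed.Count > 0)
                    _completed.Pop().Compensate();
                throw;
            }
        }
    }
}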

Decorating Repositories with AutoFac

Hi, I have what is maybe a common problem that I think can't be entirely solved by Autofac or any IoC container. It may be a design problem that I need some fresh input on.
I have the classic MVC web solution with EF 6. It's been implemented in a true DDD style with an anti-corruption layer, three bounded contexts, and cross-cutting concerns moved out to infrastructure projects. It has been a real pleasure to see all the pieces fall into place in a good way. We also added Commands for CUD operations in the Domain.
Now here is the problem. The customer wants a change log that tracks every entity's properties, and when updates are done we need to save the values before and after the update into the change log. We have implemented that successfully in an ILoggerService that wraps a Microsoft test utility we use to detect changes. But I (my role is Software Architect) took the decision to decorate our generic repositories with a ChangeTrackerRepository that has a dependency on ILoggerService. This works fine. The decorator tracks the Add(…) and Modify(…) methods in our IRepository<TEntity>.
The problem is that we have custom repositories with custom queries, like this:
public class CounterPartRepository : Repository<CounterPart>, ICounterPartRepository
{
    public CounterPartRepository(ManagementDbContext unitOfWork)
        : base(unitOfWork)
    { }

    public CounterPart GetAggregate(Guid id)
    {
        return GetSet().CompleteAggregate().SingleOrDefault(s => s.Id == id);
    }

    public void DeleteCounterPartAddress(CounterPartAddress address)
    {
        RemoveChild(address);
    }

    public void DeleteCounterPartContact(CounterPartContact contact)
    {
        RemoveChild(contact);
    }
}
We also have simple repositories that just close the generic repository and get the proper EF bounded context injected into them (Unit of Work pattern):
public class AccrualPeriodTypeRepository : Repository<AccrualPeriodType>, IAccrualPeriodTypeRepository
{
    public AccrualPeriodTypeRepository(ManagementDbContext unitOfWork)
        : base(unitOfWork)
    {
    }
}
When we decorate AccrualPeriodTypeRepository with Autofac through a generic decorator, we can easily inject that repo into a command handler like this:
public AddAccrualPeriodCommandHandler(IRepository<AccrualPeriod> accrualRepository)
This works fine.
But how do we also decorate CounterPartRepository?
I have gone through several solutions in my head and they all end up in a dead end.
1) Manually decorating every custom repository generates so many custom decorators that it would be nearly unmaintainable.
2) Decorate the closed Repository<TEntity> with the extended custom queries. This smells bad. Shouldn't those queries be part of that repository?
3) If we consider 2… maybe skip our services and rely only on IRepository for operating on our aggregate roots, and on IQueryHandler (see the article https://cuttingedge.it/blogs/steven/pivot/entry.php?id=92)
I need some fresh input on what I think is a common problem: decorating your repositories when you have both custom closed repositories and simple closed repositories, where both inherit from the same Repository<TEntity>.
Have you considered decorating command handlers instead of decorating repositories?
Repositories are too low-level, and it is not their responsibility to know what should be logged and how.
What about the following:
1) You have your command handlers along these lines:
public class DeleteCounterPartAddressHandler : IHandle<DeleteCounterPartAddressCommand>
{
    // this might be set by a DI container, or passed to a constructor
    public ICounterPartRepository Repository { get; set; }

    public void Handle(DeleteCounterPartAddressCommand command)
    {
        var counterPart = Repository.GetAggregate(command.CounterPartId);

        // in DDD you always want to read an aggregate
        // and save an aggregate as a whole
        counterPart.DeleteAddress(command.AddressId);

        Repository.Save(counterPart);
    }
}
2) Now you can simply use the Chain of Responsibility pattern to "decorate" your handlers with logging, transactions, whatever:
public class LoggingHandler<T> : IHandle<T>
{
    private readonly IHandle<T> _innerHandler;

    public LoggingHandler(IHandle<T> innerHandler)
    {
        _innerHandler = innerHandler;
    }

    public void Handle(T command)
    {
        // Obviously you do it properly, but you get the idea
        _log.Info("Before");
        _innerHandler.Handle(command);
        _log.Info("After");
    }
}
Now you have just one piece of code responsible for logging and you can compose it with any command handler, so if you ever want to log a particular command then you just "wrap" it with the logging handler, and it is still your IHandle<T> so the rest of the system is not impacted.
And you can do it with other concerns too (threading, queueing, transactions, multiplexing, routing, etc.) without messing around and plumbing this stuff here and there.
Concerns are very well separated this way.
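With Autofac specifically, the composition could look roughly like this (a sketch; RegisterGenericDecorator is available in recent Autofac versions, and the assembly scan assumes your IHandle<T> implementations live in the executing assembly):

using System.Reflection;
using Autofac;

var builder = new ContainerBuilder();

// Register every concrete command handler as IHandle<T>.
builder.RegisterAssemblyTypes(Assembly.GetExecutingAssembly())
       .AsClosedTypesOf(typeof(IHandle<>));

// Wrap every IHandle<T> in the LoggingHandler<T> decorator.
builder.RegisterGenericDecorator(typeof(LoggingHandler<>), typeof(IHandle<>));

var container = builder.Build();

// Resolving a handler now yields the logging decorator wrapping the real handler.
var handler = container.Resolve<IHandle<DeleteCounterPartAddressCommand>>();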
It is also much better (to me) because you log at the level of a real (business) operation, rather than at the low-level repository.
Hope it helps.
P.S. In DDD you really want your repositories to only expose aggregate-level methods, because aggregates are supposed to take care of their invariants (and nothing else should: no services, no repositories), and because an aggregate represents a transaction boundary.
Really, it is up to the repository how to get the aggregate from persistent storage and how to persist it back; from the outside, it should look like you ask someone for an object and it gives you an object you can call behaviors on.
So normally you would only get an aggregate from the repository, call its behavior(s) and then save it back. Which really means that your repositories would mostly have GetById and Save methods, not some internals like "UpdateThatPartOfAnAggregate".
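In interface terms, that could look as simple as this (a generic sketch, not tied to the repositories above):

using System;

// Hypothetical aggregate-level repository: only whole-aggregate operations.
public interface IAggregateRepository<TAggregate> where TAggregate : class
{
    TAggregate GetById(Guid id);
    void Save(TAggregate aggregate);
}

// Typical usage: load the aggregate, call its behavior, save it back as a whole.
// var counterPart = repository.GetById(id);
// counterPart.DeleteAddress(addressId);
// repository.Save(counterPart);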

Connecting the dots with DDD

I have read Evans, Nilsson and McCarthy, amongst others, and understand the concepts and reasoning behind domain-driven design; however, I'm finding it difficult to put all of these together in a real-world application. The lack of complete examples has left me scratching my head. I've found a lot of frameworks and simple examples, but nothing so far that really demonstrates how to build a real business application following DDD.
Using the typical order management system as an example, take the case of order cancellation. In my design I can see an OrderCancellationService with a CancelOrder method which accepts the order # and a reason as parameters. It then has to perform the following 'steps':
Verify that the current user has the necessary permission to cancel an Order
Retrieve the Order entity with the specified order # from the OrderRepository
Verify that the Order may be canceled (should the service interrogate the state of the Order to evaluate the rules or should the Order have a CanCancel property that encapsulates the rules?)
Update the state of the Order entity by calling Order.Cancel(reason)
Persist the updated Order to the data store
Contact the CreditCardService to revert any credit card charges that have already been processed
Add an audit entry for the operation
Of course, all of this should happen in a transaction and none of the operations should be allowed to occur independently. What I mean is, I must revert the credit card transaction if I cancel the order, I cannot cancel and not perform this step. This, imo, suggests better encapsulation but I don't want to have a dependency on the CreditCardService in my domain object (Order), so it seems like this is the responsibility of the domain service.
I am looking for someone to show me code examples how this could/should be "assembled". The thought-process behind the code would be helpful in getting me to connect all of the dots for myself. Thx!
Your domain service may look like this. Note that we want to keep as much logic as possible in the entities, keeping the domain service thin. Also note that there is no direct dependency on a credit card or auditor implementation (DIP); we only depend on interfaces that are defined in our domain code. The implementation can later be injected in the application layer. The application layer would also be responsible for finding the Order by number and, more importantly, for wrapping the 'Cancel' call in a transaction (rolling back on exceptions).
class OrderCancellationService {
    private readonly ICreditCardGateway _creditCardGateway;
    private readonly IAuditor _auditor;

    public OrderCancellationService(
        ICreditCardGateway creditCardGateway,
        IAuditor auditor) {
        if (creditCardGateway == null) {
            throw new ArgumentNullException("creditCardGateway");
        }
        if (auditor == null) {
            throw new ArgumentNullException("auditor");
        }
        _creditCardGateway = creditCardGateway;
        _auditor = auditor;
    }

    public void Cancel(Order order) {
        if (order == null) {
            throw new ArgumentNullException("order");
        }

        // get current user through Ambient Context:
        // http://blogs.msdn.com/b/ploeh/archive/2007/07/23/ambientcontext.aspx
        if (!CurrentUser.CanCancelOrders()) {
            throw new InvalidOperationException(
                "Not enough permissions to cancel order. Use 'CanCancelOrders' to check.");
        }

        // try to keep as much domain logic in entities as possible
        if (!order.CanBeCancelled()) {
            throw new ArgumentException(
                "Order can not be cancelled. Use 'CanBeCancelled' to check.");
        }

        order.Cancel();

        // this can throw GatewayException that would be caught by the
        // 'Cancel' caller and rollback the transaction
        _creditCardGateway.RevertChargesFor(order);
        _auditor.AuditCancellationFor(order);
    }
}
A slightly different take on it:
// UI
public class OrderController
{
    private readonly IApplicationService _applicationService;

    [HttpPost]
    public ActionResult CancelOrder(CancelOrderViewModel viewModel)
    {
        _applicationService.CancelOrder(new CancelOrderCommand
        {
            OrderId = viewModel.OrderId,
            UserChangedTheirMind = viewModel.UserChangedTheirMind,
            UserFoundItemCheaperElsewhere = viewModel.UserFoundItemCheaperElsewhere
        });

        return RedirectToAction("CancelledSucessfully");
    }
}

// App Service
public class ApplicationService : IApplicationService
{
    private readonly IOrderRepository _orderRepository;
    private readonly IPaymentGateway _paymentGateway;

    // provided by DI
    public ApplicationService(IOrderRepository orderRepository, IPaymentGateway paymentGateway)
    {
        _orderRepository = orderRepository;
        _paymentGateway = paymentGateway;
    }

    [RequiredPermission(PermissionNames.CancelOrder)]
    public void CancelOrder(CancelOrderCommand command)
    {
        using (IUnitOfWork unitOfWork = UnitOfWorkFactory.Create())
        {
            Order order = _orderRepository.GetById(command.OrderId);

            if (!order.CanBeCancelled())
                throw new InvalidOperationException("The order cannot be cancelled");

            if (command.UserChangedTheirMind)
                order.Cancel(CancellationReason.UserChangeTheirMind);

            if (command.UserFoundItemCheaperElsewhere)
                order.Cancel(CancellationReason.UserFoundItemCheaperElsewhere);

            _orderRepository.Save(order);
            _paymentGateway.RevertCharges(order.PaymentAuthorisationCode, order.Amount);
        }
    }
}
Notes:
In general I only see the need for a domain service when a command/use case involves the state change of more than one aggregate. For example, if I needed to invoke methods on the Customer aggregate as well as Order, then I'd create the domain service OrderCancellationService that invoked the methods on both aggregates.
The application layer orchestrates between infrastructure (payment gateways) and the domain. Like domain objects, domain services should only be concerned with domain logic, and ignorant of infrastructure such as payment gateways; even if you've abstracted it using your own adapter.
With regards to permissions, I would use aspect-oriented programming to extract this away from the logic itself. As you see in my example, I've added an attribute to the CancelOrder method. You can use an interceptor on that method to see if the current user (which I would set on Thread.CurrentPrincipal) has that permission.
With regards to auditing, you simply said 'audit for the operation'. If you just mean auditing in general (i.e. for all app service calls), again I would use interceptors on the method, logging the user, which method was called, and with what parameters. If however you meant auditing specifically for the cancellation of orders/payments, then do something similar to Dmitry's example.
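A rough sketch of such an interceptor, hand-rolled as a decorator for illustration (the RequiredPermission attribute's PermissionName property and the role-based check are assumptions; in practice you would use your DI container's interception support):

using System;
using System.Reflection;
using System.Threading;

// Hypothetical decorator that enforces [RequiredPermission] before delegating
// to the real application service.
public class PermissionCheckingApplicationService : IApplicationService
{
    private readonly IApplicationService _inner;

    public PermissionCheckingApplicationService(IApplicationService inner)
    {
        _inner = inner;
    }

    public void CancelOrder(CancelOrderCommand command)
    {
        EnsurePermission(nameof(CancelOrder));
        _inner.CancelOrder(command);
    }

    private void EnsurePermission(string methodName)
    {
        var attribute = _inner.GetType()
            .GetMethod(methodName)
            ?.GetCustomAttribute<RequiredPermissionAttribute>();

        // Permission shown as a role check on the current principal for brevity.
        if (attribute != null &&
            Thread.CurrentPrincipal?.IsInRole(attribute.PermissionName) != true)
        {
            throw new UnauthorizedAccessException(
                $"Missing permission '{attribute.PermissionName}' for {methodName}.");
        }
    }
}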

Resources