How to handle failure in DDD?

I need to create a folder on my local drive when "something" changes in my domain. So in DDD fashion I need to raise an event and not let my domain create the folder.
My question is what if my event fails (i.e. the creation of the folder fails)?
Do I then have to raise another command to undo the first change (which I believe is called a compensating command)?
And what if the compensating command fails? Now I have a domain change, but the folder does not exist.

The way you describe your proposed solution isn't really DDD; it's more CQRS (i.e. events & compensating commands), which I think may be overcomplicating the situation.
Do you really need to take a CQRS approach for this scenario, which is intended for asynchronous operations? That is, what advantage is there in the folder being created in a separate transaction from the business logic being invoked and persisted? There is good reason for this approach when raising events that a query service handles, as the query service is likely to be on a separate physical machine, therefore requiring an RPC. Also, the event may require many de-normalised tables to be updated, so for performance it makes sense for that process to use the async eventing model. But for creating a local folder I'm not sure it does.
A possible approach
public class ApplicationService : IApplicationService
{
    private readonly IMyAggregateRepository _myAggregateRepository;
    private readonly IFolderCreationService _folderCreationService;

    public ApplicationService(IMyAggregateRepository myAggregateRepository, IFolderCreationService folderCreationService)
    {
        _myAggregateRepository = myAggregateRepository;
        _folderCreationService = folderCreationService;
    }

    public void SomeApplicationServiceMethod(Guid id)
    {
        using (IUnitOfWork unitOfWork = UnitOfWorkFactory.Create())
        {
            MyAggregate aggregate = _myAggregateRepository.GetById(id);
            aggregate.SomeMethod();
            _myAggregateRepository.Save(aggregate);
            _folderCreationService.CreateFolder();
        }
    }
}
Here, the changes are only committed to the database once all of the code within the unit of work's using statement completes without error.
Note that it isn't a Domain Service, or Domain Entity that invokes the folder service... it's the Application Service. I prefer keeping domain services focused on pure domain logic. It's the Application Service's job to orchestrate client calls to the domain, database and any other infrastructure services such as this folder service.
If you decide that there is good reason for this to use the event model, then what you said is correct. If the folder creation failed in the event handler you would have to issue a compensating command. You would need to ensure this handler cannot fail by design (by this I mean the entity in question is always in a state where this compensating command can be executed; and the state reverted). Having good unit tests that cover all scenarios will help. If there is a flaw in the design that allows this compensating command to fail, I guess you'd have to resort to manual intervention by sending an email/notification on failure.
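If you do go the eventing route, the compensating flow described above might look roughly like this. This is a sketch only; `CommandBus`, `FolderCreatedHandler` and the command names are illustrative assumptions, not from any particular framework:

```typescript
// Illustrative sketch: an event handler attempts the side effect and issues
// a compensating command when it fails. All names are hypothetical.
type Command = { type: string; aggregateId: string };

class CommandBus {
  public dispatched: Command[] = [];
  dispatch(cmd: Command): void {
    this.dispatched.push(cmd);
  }
}

class FolderCreatedHandler {
  constructor(
    private commandBus: CommandBus,
    private createFolder: (path: string) => void, // infrastructure call
  ) {}

  handle(event: { aggregateId: string; path: string }): void {
    try {
      this.createFolder(event.path);
    } catch {
      // Folder creation failed: compensate by reverting the domain change.
      // The aggregate must be designed so this command can always succeed.
      this.commandBus.dispatch({
        type: "RevertFolderRequirement",
        aggregateId: event.aggregateId,
      });
    }
  }
}
```

The point of the sketch is the design constraint from the text: the only way to keep the model consistent is to guarantee, by design, that the compensating command cannot itself fail.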
P.S. Although it's not the point of the question, I'd really recommend not creating physical folders unless there really is a good reason. In my experience it just causes a headache for deployments/machine upgrades and the like. Obviously I don't know your reasons for doing so, but I'd recommend using a document store/database for whatever you need to store instead.

If a file system folder is vital to your domain, I would implement the creation of folders as a domain service. That way, you would let the entity handle all business rules and logic and if the creation of a folder fails, your domain is not left in an invalid state.
Pass the service as a parameter to your method handling the logic (the double dispatch pattern).
Folder service example.
public interface IFolderService
{
    void CreateFolder(string path);
}
Entity example.
class MyEntity
{
    public void DoWork(IFolderService folderService, ...)
    {
        folderService.CreateFolder(...);
        // do work.
    }
}
After the work is done, you could raise domain events to notify sub systems.

Related

DDD Domain Entities using external Services

In DDD, you're strongly encouraged to have all business logic within the domain entities instead of separate from it. Which makes sense.
You also have the idea of Domain Services to encapsulate certain pieces of logic.
What I can't work out is how to have the domain entities perform their business logic that itself depends on external services.
Take, for example, a user management system. In this there is a User domain entity, on which there are various business actions to perform. One of these is verify_email_address, which sends an email to the user's email address in order to verify that it's valid.
Except that sending an email involves interactions with external services. So it seems reasonable for this logic to be encapsulated within a Domain Service that the User entity then makes use of. But how does it actually do this?
I've seen some things suggest that the User entity is constructed with references to every domain service that it needs. Which makes it quite big and bloated, and especially hard to test.
I've seen some things that suggest that you should pass in the domain service to the method that you are calling - which just feels weird. Why would someone outside of the domain entity need to pass in the EmailClient service when calling verify_email_address?
I've also then seen the suggestion that you instead pass the User entity in to the domain service, except that this seems to imply that the logic is in the domain service and not the entity.
And finally, I've seen suggestions that the entity should raise a domain event to trigger the email, which the domain service then reacts to. That means that all of the logic for whether to send the email or not - no need if it's already verified - and which email address to send it to are in the right place, and it also means that the user only needs a way to raise events and nothing more. So it feels less tightly coupled. But it also means you need a whole eventing mechanism, which is itself quite complicated.
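For what it's worth, the event-based option above might be sketched roughly like this (all names, such as `VerificationEmailHandler`, are illustrative, not an established API):

```typescript
// Sketch: the entity records a domain event and keeps the decision logic
// (no event if already verified); a separate handler does the sending.
type DomainEvent = { type: string; email: string };

class User {
  public pendingEvents: DomainEvent[] = [];
  constructor(public email: string, private verified = false) {}

  requestEmailVerification(): void {
    if (this.verified) return; // decision stays in the entity
    this.pendingEvents.push({ type: "EmailVerificationRequested", email: this.email });
  }
}

interface EmailClient { send(to: string, body: string): void; }

class VerificationEmailHandler {
  constructor(private client: EmailClient) {}
  handle(event: DomainEvent): void {
    if (event.type !== "EmailVerificationRequested") return;
    this.client.send(event.email, "Please verify your address");
  }
}
```

Note how the entity only needs a way to record events; the external-service dependency lives entirely in the handler.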
So, am I missing something here? How can I achieve this?
(I'm working in Rust if that matters, but I don't see that it should)
What I can't work out is the best way to have the domain entities perform their business logic that itself depends on external services.
That's not your fault; the literature is a mess.
In DDD, you're strongly encouraged to have all business logic within the domain entities instead of separate from it.
That's not quite right; the common idea is that all business logic belongs within the domain layer (somewhere). But that doesn't necessarily mean that the logic must be within a domain entity.
Evans, in Chapter 5, writes:
In some cases, the clearest and most pragmatic design includes operations that do not belong to any object. Rather than force the issue, we can follow the natural contours of the problems space and include SERVICES explicitly in the model.
There are important domain operations that can't find a natural home in an ENTITY or VALUE OBJECT....
It's a very Kingdom of Nouns idea; we have code that actually does something useful, so there must be an object it can belong to.
Having a module (in the Parnas sense) in the domain layer that is responsible for the coordination of an email client and a domain entity (or for that matter, a repository) to achieve some goal is a perfectly reasonable option.
Could that module be the domain entity itself? It certainly could.
You might find, however, that coupling the management of the in-memory representation of domain information and the orchestration of a domain process that interacts with "the real world", as it were, complicates your maintenance responsibilities by introducing a heavy coupling between two distinct concepts that should instead be lightly coupled.
Clean Architecture (and the predecessors in that lineage) suggests to separate "entities" from "use cases". Ivar Jacobson's Objectory Process distinguished "entities" from "controls". So the notion of a service that is decoupled from the entity shouldn't be too alien.
Ruth Malan writes:
Design is what we do when we want to get more of what we want than we'd get by just doing it.
Finding the right design depends a lot on finding the right "what we want" for our local context (including our best guess at how this local context is going to evolve over the time window we care about).
VoiceOfUnReason has a perfectly valid answer.
I just want to boil your question down to its essence.
What I can't work out is how to have the domain entities perform their business logic that itself depends on external services.
I've also then seen the suggestion that you instead pass the User entity in to the domain service, except that this seems to imply that the logic is in the domain service and not the entity.
That's the key. All logic that belongs to domain entities should be done on domain entities. But at the same time, domain entities MUST be independent of the outside world (even other domain entities).
That's why we have domain services and application services.
Domain services are used to coordinate things between multiple entities, like transferring money between two accounts:
public class TransferService
{
    private IAccountRepos _repos;

    public void Transfer(string fromAccountNumber, string toAccountNumber, decimal amount)
    {
        var account1 = _repos.Get(fromAccountNumber);
        var account2 = _repos.Get(toAccountNumber);
        var money = account1.Withdraw(amount);
        account2.Deposit(money);
        _repos.Update(account1);
        _repos.Update(account2);
    }
}
That's a domain service, since it only uses the domain.
Application services on the other hand are used to communicate over boundaries and with external services.
And it's an application service that you should create in this case. It looks similar to a domain service but sits at a layer above it (it can use domain services to fulfil its purpose).
To summarize:
Entities must be used to perform actions on themselves (the easiest way is to make all setters private, which forces you to add methods).
Domain services should be used as soon as two or more entities must be used to solve a problem (these can even be two entities of the same type, as in the example above).
Application services are used to interact with things outside the domain and can use entities and/or domain services to solve the problem.
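A minimal sketch of that layering, using the email-verification example from the question (the names `EmailGateway`, `UserApplicationService`, etc. are assumptions, not an established API):

```typescript
// Sketch: the entity guards its own state; the application service
// orchestrates the entity, the repository, and the external email gateway.
interface EmailGateway { send(to: string): void; }   // infrastructure port
interface UserRepository { getById(id: string): User; save(u: User): void; }

class User {
  private emailVerified = false;  // private state, mutated only via methods
  constructor(public id: string, public email: string) {}
  needsVerification(): boolean { return !this.emailVerified; }
  markVerified(): void { this.emailVerified = true; } // called by a later confirmation use case
}

class UserApplicationService {
  constructor(private repo: UserRepository, private mail: EmailGateway) {}

  verifyEmailAddress(userId: string): boolean {
    const user = this.repo.getById(userId);
    if (!user.needsVerification()) return false; // domain decision in entity
    this.mail.send(user.email);                  // external-world interaction
    this.repo.save(user);
    return true;
  }
}
```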

CQRS and cross cutting concerns like ABAC for authorization reasons

Let's assume a monolithic web service. The architectural design is based on the DDD and divides the domain into sub-domains. These are structured according to the CQRS pattern. If a request arrives via the presentation layer, in this case a RESTful interface, the controller generates a corresponding command and executes it via the command bus.
So far so good. The problem now is that certain commands may only be executed by certain users, so access control must take place. In a three-layer architecture, this can be solved relatively easily using ABAC. However, to be able to use ABAC, the domain resource must be loaded and known. The authentication is taken over by the respective technology of the presentation layer and the authenticated user is stored in the request. For example, JWT Bearer Tokens using Passport.js for the RESTful interface.
Approach #1: Access Control as part of the command handler
Since the command handler has access to the repository and the aggregate has to be loaded from the database anyway in order to execute the command, my first approach was to transfer access control to the command handler based on ABAC. In case of a successful execution it returns nothing as usual, if access is denied an exception is thrown.
// change-last-name.handler.ts
async execute(command: ChangeUserLastNameCommand): Promise<void> {
  const user = await this._userRepository.findByUsername(command.username);
  if (!user) {
    throw new DataNotFoundException('Resource not found');
  }

  const authenticatedUser = null; // Get authenticated user from somewhere

  // Handle ABAC
  if (authenticatedUser.username !== user.username) {
    throw new Error();
  }

  user.changeLastName(command.lastName);
  await this._userRepository.save(user);
  user.commit();
}
However, this approach seems very unclean to me. Problem: In my opinion, access control shouldn't be the responsibility of a single command handler, should it? Especially since it is difficult to get the authenticated user or the request, containing the authenticated user, at this level. The command handler should work for all possible technologies of the presentation layer (e.g. gRPC, RESTful, WebSockets, etc.).
Approach #2: Access control as part of the presentation layer or using AOP
A clean approach for me is to take the access control from the handler and carry it out before executing the command. Either by calling it up manually in the presentation layer, for example by using an AccessControlService which implements the business security rules, or by non-invasive programming using AOP, whereby the aspect could also use the AccessControlService.
The problem here is that the presentation layer does not have any attributes of the aggregate. For ABAC, the aggregate would first have to be loaded using the query bus. An ABAC could then be carried out in the presentation layer, for example in the RESTful controller.
That's basically a good approach for me. The access control is the responsibility of the presentation layer, or if necessary an aspect (AOP), and the business logic (domain + CQRS) are decoupled. Problem: The main problem here is that redundancies can arise from the database point of view. For the ABAC, the aggregate must be preloaded via query in order to be able to decide whether the command may be executed. If the command is allowed to be executed, it can happen that it loads exactly the same aggregate from the database again, this time simply to make the change, even though the aggregate has already been loaded shortly before.
Question: Any suggestions for improvement? I tried to find what I was looking for in the literature, which was not very informative. I came across Security in Domain-Driven Design by Michiel Uithol, which gives a good overview but did not answer my problems. How do I address security concerns in a CQRS architecture? Or are the redundant database accesses negligible, meaning I actually already have the solution?
I would handle authentication and the overall authorization in the infrastructure, before it reaches the command handlers, because it is a separate concern.
It is also important to handle authentication separately from authorization, because they are separate concerns. It can become pretty messy if you handle authentication and authorization at the same time.
Then I would do final authorization in the handler (if needed). For example, if you have a command AddProductToCart, I would make sure that the user who initially created the cart aggregate is the same as the one issuing the AddProductToCart command.
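A sketch of that final authorization check, assuming a hypothetical `Cart` aggregate and `AddProductToCart` command (all names are illustrative):

```typescript
// Sketch: the handler verifies that the requesting user owns the cart
// aggregate before applying the command.
class Cart {
  public productIds: string[] = [];
  constructor(public id: string, public ownerId: string) {}
}

interface AddProductToCart { cartId: string; productId: string; requestedBy: string; }

class AddProductToCartHandler {
  constructor(private carts: Map<string, Cart>) {}

  execute(cmd: AddProductToCart): void {
    const cart = this.carts.get(cmd.cartId);
    if (!cart) throw new Error("Resource not found");
    // Final authorization: only the cart's creator may add products.
    if (cart.ownerId !== cmd.requestedBy) throw new Error("Access denied");
    cart.productIds.push(cmd.productId);
  }
}
```

Note that this check needs the aggregate anyway, so no extra database round trip is introduced here.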

DDD: The problem with domain services that need to fetch data as part of their business rules

Suppose I have a domain service which implements the following business rule / policy:
If the total price of all products in category 'family' exceeds 1 million, reduce the price by 50% of the family products which are older than one year.
Using collection-based repositories
I can simply create a domain service which loads all products in the 'family' category using the specification pattern, then checks the condition and, if it holds, reduces the prices. Since the products are automatically tracked by the collection-based repository, the domain service is not required to issue any explicit infrastructure calls at all, as it should be.
Using persistence-based repositories
I'm out of luck. I might get away with using the repository and the specification to load the products inside my domain service (as before), but eventually I need to issue Save calls, which don't belong in the domain layer.
I could load the products in the application layer, then pass them to the domain service, and finally save them again in the application layer, like so:
// Somewhere in the application layer:
public void ApplyProductPriceReductionPolicy()
{
    // make sure everything is in one transaction
    using (var uow = this.unitOfWorkProvider.Provide())
    {
        // fetching
        var spec = new FamilyProductsSpecification();
        var familyProducts = this.productRepository.findBySpecification(spec);

        // business logic (domain service call)
        this.familyPriceReductionPolicy.Apply(familyProducts);

        // persisting
        foreach (var familyProduct in familyProducts)
        {
            this.productRepository.Save(familyProduct);
        }

        uow.Complete();
    }
}
However, I see the following issues with this code:
Loading the correct products is now part of the application layer, so in case I need to apply the same policy again in some other use case, I need to repeat myself.
The cohesion between the specification (FamilyProductsSpecification) and the policy is lost, essentially allowing someone to pass the wrong products into the domain service. Note that filtering the products (in-memory) again in the domain service does not help either, as the caller might have passed only a subset of all products.
The application layer has no clue which products have changed, and therefore is forced to save all of them, which might be a lot of redundant work.
Question: Is there a better strategy to deal with this situation?
I thought about something complicated like adapting the persistence-based repository such that it appears as a collection-based one to the domain service, internally keeping track of the products which were loaded by the domain service in order to save them again when the domain service returns.
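That last idea could be sketched as a thin wrapper that tracks the aggregates it hands out and flushes them afterwards (the name `TrackingProductRepository` and the methods are hypothetical):

```typescript
// Sketch: a wrapper makes a persistence-oriented repository look
// collection-based by remembering what it handed out, so the application
// layer can flush all changes after the domain service returns.
interface Product { id: string; price: number; }

class TrackingProductRepository {
  private loaded = new Map<string, Product>();
  constructor(private inner: { findFamilyProducts(): Product[]; save(p: Product): void }) {}

  findFamilyProducts(): Product[] {
    const products = this.inner.findFamilyProducts();
    for (const p of products) this.loaded.set(p.id, p); // track identities
    return products;
  }

  // Called by the application layer once the domain service is done.
  flush(): void {
    for (const p of this.loaded.values()) this.inner.save(p);
    this.loaded.clear();
  }
}
```

This keeps the Save calls out of the domain service, at the cost of saving everything that was loaded rather than only what changed.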
First of all, I think choosing a domain service for this kind of logic - which does not belong inside one specific aggregate - is a good idea.
And I also agree with you that the domain service should not be concerned with saving changed aggregates; keeping stuff like this out of domain services also allows transactions, if required, to be managed by the application.
I would be pragmatic about this problem and make a small change to your implementation to keep it simple:
// Somewhere in the application layer:
public void ApplyProductFamilyDiscount()
{
    // make sure everything is in one transaction
    using (var uow = this.unitOfWorkProvider.Provide())
    {
        var familyProducts = this.productService.ApplyFamilyDiscount();

        // persisting
        foreach (var familyProduct in familyProducts)
        {
            this.productRepository.Save(familyProduct);
        }

        uow.Complete();
    }
}
The implementation in the product domain service:
// some method of the product domain service
public IEnumerable<Product> ApplyFamilyDiscount()
{
    var spec = new FamilyProductsSpecification();
    var familyProducts = this.productRepository.findBySpecification(spec);
    this.familyPriceReductionPolicy.Apply(familyProducts);
    return familyProducts;
}
With that, the whole business logic of going through all family products older than a year and applying the current discount (50 percent) is encapsulated inside the domain service. The application layer is then again only responsible for orchestrating that the right logic is called in the right order. The naming, and how generic you want to make the domain service methods by providing parameters, might of course need tuning, but I usually try not to make anything too generic if there is only one specific business requirement anyway. So if the family product discount changes, I already know exactly where I need to change the implementation: in the domain service method only.
To be honest, if the application method is not getting more complex and you don't have different branches (such as if conditions), I would usually start off the way you originally proposed, as the application layer method simply makes calls to the domain (in your case the repository) with the corresponding parameters and has no conditional logic in it. If it gets more complicated, I would refactor it out into a domain service method, e.g. the way I proposed.
Note: As I don't know the implementation of FamilyPriceReductionPolicy, I can only assume that it calls the corresponding method on the product aggregates to let them apply the discount on the price, e.g. via a method such as ApplyFamilyDiscount() on the Product aggregate. With that in mind, and considering that looping through all the products and calling the discount method is the only logic outside the aggregate, the steps of getting all products from the repository, calling ApplyFamilyDiscount() on all products and saving all changed products could indeed just reside in the application layer.
In terms of domain model purity vs. domain model completeness (see the discussion concerning the DDD trilemma), this would move the implementation a little more in the direction of purity, but it also makes a domain service questionable if looping through the products and calling ApplyFamilyDiscount() is all it does (considering the fetching of the corresponding products via the repository is done beforehand in the application layer and the product list is already passed to the domain service). So again, there is no dogmatic approach; it is rather important to know the different options and their trade-offs. For instance, one could also consider letting the product always calculate the current price on demand by applying all applicable discounts when asked for the price. But whether such a solution is feasible depends on the specific requirements.
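The on-demand alternative mentioned at the end could be sketched like this (the `Discount` interface and the concrete rule are illustrative assumptions, not from the original design):

```typescript
// Sketch: the product computes its current price on demand by applying
// whichever discounts are applicable, instead of a service mutating
// stored prices.
interface Discount {
  appliesTo(p: Product): boolean;
  apply(price: number): number;
}

class Product {
  constructor(public basePrice: number, public category: string, public ageInDays: number) {}

  currentPrice(discounts: Discount[]): number {
    return discounts
      .filter((d) => d.appliesTo(this))
      .reduce((price, d) => d.apply(price), this.basePrice);
  }
}

// Example rule: 50% off family products older than one year.
const familyDiscount: Discount = {
  appliesTo: (p) => p.category === "family" && p.ageInDays > 365,
  apply: (price) => price * 0.5,
};
```

With this approach there is nothing to persist when the policy changes; the trade-off is that the "1 million total" trigger from the original rule would still need a service that sees all family products at once.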

DDD Layers and External Api

Recently I've been trying to make my web application use separated layers.
If I understand the concept correctly I've managed to extract:
Domain layer
This is where my core domain entities, aggregate roots and value objects reside. I'm forcing myself to have a pure domain model, meaning I do not have any service definitions here. The only thing I define here is the repositories, which is actually hidden because the Axon framework implements that for me automatically.
Infrastructure layer
This is where Axon implements the repository definitions for my aggregates in the domain layer.
Projection layer
This is where the event handlers are implemented to project the data for the read model, using MongoDB to persist it. It does not know anything other than the event model (plain data classes in Kotlin).
Application layer
This is where the confusion starts.
Controller layer
This is where I'm implementing the GraphQL/REST controllers, this controller layer is using the command and query model, meaning it has knowledge about the Domain Layer commands as well as the Projection Layer query model.
As I've mentioned the confusion starts with the application layer, let me explain it a bit with simplified example.
Considering I want a domain model to implement Pokemon fighting logic, I need to use the PokemonAPI, which would provide me data such as Pokemon names, stats, etc. This would be an external API I would use to get some data.
Let's say I would have the domain implemented like this:
(Keep in mind that I've stretched this implementation so it surfaces some issues that I have in my own domain.)
Pokemon {
    id: ID
}

PokemonFight {
    id: ID
    pokemon_1: ID
    pokemon_2: ID

    handle(cmd: Create) {
        publish(PokemonFightCreated)
    }

    handle(cmd: ProvidePokemonStats) {
        // providing the stats for the pokemons
        publish(PokemonStatsProvided)
    }

    handle(cmd: Start) {
        // fights only when both pokemon stats were provided
        publish(PokemonsFought)
    }
}
The flow of data between layers would be like this.
User -> [HTTP] -> Controller -> [CommandGateway] -> (Application | Domain) -> [EventGateway] -> (Application | Domain)
Let's assume that two Pokemon are created, and the use case of the Pokemon fight is basically: when the fight is created, the stats are provided, and when the stats are provided, the fight automatically starts.
This use case logic can be solved by using an event processor or even a saga.
However, as you can see in the PokemonFight aggregate, there is a ProvidePokemonStats command which basically provides their stats; my domain does not know how to get such data, as it is provided by the PokemonAPI.
This confuses me a bit, because the use case would need to be implemented in both layers: in the application (so it provides the stats using the external API) and also in the domain, where the use case would use purely domain concepts. But shouldn't I have one place for the use cases?
If I think about it, the only purpose of a saga/event processor living in the application layer is to provide the proper data to my domain so it can continue with its use cases. So when the external API fails, I send a command to the domain and it can then decide what to do.
For example, I could just put every saga/event processor in the application layer, so when I decide to change some automation flow I know exactly which module I need to edit and where to find it.
The other confusion is when I have multiple domains and I want to create a use case that uses many of them and connects the data between them. It immediately rings in my brain that this should be the application layer, using the domain APIs to control the use case, because I don't think I should add a dependency on a different domain in the core one.
TL;DR
What layer should be responsible for implementing the automated process between aggregates (it can be a single aggregate, but you know what I mean) if the process requires some external API data?
What layer should be responsible for implementing the automated process between aggregates that live in different domains / microservices?
Thank you in advance. I'm also sorry if what I've written sounds confusing or is too much text; any answers about layering DDD applications and the proper locations of the components would be highly appreciated.
I will try to put it clearly. If you use CQRS:
On the write side (commands): the application services are the command handlers. A command handler accesses the domain (repositories, aggregates, etc.) in order to implement a use case.
If the use case needs to access data from another bounded context (microservice), it uses an infrastructure service (via dependency injection). You define the infrastructure service interface in the application service layer, and the implementation in the infrastructure layer. The infrastructure then accesses the remote microservice, via HTTP/REST for example, or through integration events.
On the read side (queries): the application service is the query method (I think you call it a projection), which accesses the database directly. There's no domain here.
Hope it helps.
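A sketch of that dependency direction in the Pokemon example (the port name `PokemonStatsProvider` and both classes are illustrative; a real adapter would make an HTTP call instead of reading an in-memory map):

```typescript
// Application layer: the port (interface) the use case needs.
interface PokemonStatsProvider {
  getStats(pokemonId: string): { attack: number; defense: number };
}

class ProvidePokemonStatsHandler {
  constructor(private statsProvider: PokemonStatsProvider) {}
  execute(cmd: { pokemonId: string }): { attack: number; defense: number } {
    // The handler sees only the port; how data is fetched is infra's job.
    return this.statsProvider.getStats(cmd.pokemonId);
  }
}

// Infrastructure layer: an adapter (in-memory here for the sketch).
class InMemoryStatsProvider implements PokemonStatsProvider {
  constructor(private data: Record<string, { attack: number; defense: number }>) {}
  getStats(id: string) {
    const stats = this.data[id];
    if (!stats) throw new Error("unknown pokemon");
    return stats;
  }
}
```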
I do agree your wording might be a bit vague, but a couple of things do pop up in my mind which might steer you in the right direction.
Mind you, the wording makes it so that I am not 100% sure whether this is what you're looking for. If it isn't, please comment and correct me on the answer I provide, so I can update it accordingly.
Now, before your actual question, I'd firstly like to point out the following.
What I am guessing you're mixing is the notion of the Messages and your Domain Model belonging to the same layer. To me personally, the Messages (aka your Commands, Events and Queries) are your public API. They are the language your application speaks, so should be freely sharable with any component and/or service within your Bounded Context.
As such, any component in your 'application layer' contained in the same Bounded Context should be allowed to be aware of this public API. The one in charge of the API will be your Domain Model, that's true, but these concepts have to be shared to be able to communicate with one another.
That said, the component which will provide the stats to your aggregate can be viewed from two directions, I think.
It's a component that handles a specific 'Start Pokemon Match' Command. This component has the smarts to first retrieve the stats before dispatching a Create and a ProvidePokemonStats command, thus ensuring it consistently creates a working match with the stats in it by not dispatching either command if the external stats-retrieval API fails.
Your angle in the question is to have an Event Handling Component that reacts to the creation of a Match. Here, I'd state a short-lived saga would be in place, as you'd need to deal with the failure scenario of not being able to retrieve the stats. A regular Event Handler is unlikely to deal with this correctly.
Regardless of which of the two options you select, this service will deal with messages, a.k.a. your public API. As such it's within your application, and not a component others will deal with directly, ever.
When it comes to your second question, I feel the same notion still holds. Two distinct applications/microservices more strongly suggests you're talking about two different Bounded Contexts. Certainly then a Saga would be in place to coordinate the operations between both contexts. Note that between Bounded Contexts you want to share consciously when it comes to the public API, as you'd ideally not expose everything to the outside world.
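A short-lived saga along those lines might be sketched as follows (all names are illustrative; a real implementation would dispatch through a command gateway rather than collect commands in an array):

```typescript
// Sketch: the saga reacts to the creation event, fetches stats from the
// external API, and dispatches the follow-up command; on failure it sends
// a command so the domain can decide what to do.
type Stats = { attack: number; defense: number };

class PokemonFightSaga {
  public dispatched: { type: string; matchId: string; stats?: Stats }[] = [];
  constructor(private fetchStats: (pokemonId: string) => Stats) {}

  onFightCreated(event: { matchId: string; pokemonId: string }): void {
    try {
      const stats = this.fetchStats(event.pokemonId);
      this.dispatched.push({ type: "ProvidePokemonStats", matchId: event.matchId, stats });
    } catch {
      // External API failed: tell the domain, which decides how to react.
      this.dispatched.push({ type: "CancelPokemonFight", matchId: event.matchId });
    }
  }
}
```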
Hope this helps you out and if not, like I said, please comment and provide me guidance how to answer your question properly.

What types of code are appropriate for the service layer?

Assume you have entities, a service layer, and repositories (with an ORM like NHibernate). The UIs interact with the service layer.
What types of code are appropriate for the service layer?
Repository Coordination?
It seems entities should not reference repositories, so should calls for loading/saving/evicting entities exist in the service layer?
Business Logic that Involves Repositories?
If the above is true, should something like checking if a username is distinct go in the service layer (i.e. call GetUsersByUsername and check the results)? Before suggesting that the DB should handle distinct, what about verifying that a password hasn't been used in 90 days?
Business Logic that Involves Multiple Entities?
I'm not sure about this one, but say you have the need to apply an operation against a collection of entities that may or may not be related and is not really applicable to a single entity. Should entities be capable of operating on these collections or does this sort of thing belong in the service layer?
Mapping?
Whether you use DTOs or send your entities to/from your service layer, you will likely end up mapping (preferably with AutoMapper). Does this belong in the service layer?
I'm looking for confirmation (or rejection) of the ideas listed above as well as any other thoughts about the responsibilities of a service layer when working with entities/repositories.
Repository Coordination?
Aggregate roots should draw transactional boundaries; therefore, multiple repositories should rarely be involved. If they are, that usually happens when you are creating a new aggregate root (as opposed to modifying its state).
Business Logic that Involves Repositories?
Yes, checking if a username is distinct might live in the service layer, because User usually is an aggregate root, and aggregate roots live in a global context (there is nothing that "holds" them). I personally put that kind of logic in the repository, or just check directly through the ORM.
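The uniqueness check in the service layer could be sketched like this (the `RegistrationService` name and repository methods are assumptions for illustration):

```typescript
// Sketch: a cross-aggregate rule (unique usernames) checked in the service
// layer, delegating the lookup to the repository.
interface UserRepository {
  existsByUsername(name: string): boolean;
  save(name: string): void;
}

class RegistrationService {
  constructor(private users: UserRepository) {}

  register(username: string): void {
    // No single User aggregate can enforce this; it spans all users,
    // so the check lives here (or in the repository/ORM).
    if (this.users.existsByUsername(username)) {
      throw new Error("username already taken");
    }
    this.users.save(username);
  }
}
```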
As for checking password usage, that's a concern of the user itself and should live underneath the User object. Something like this:
class User
{
    public void Login()
    {
        LoggedOn = DateTime.Now;
        ...
    }

    public bool HasLoggedInLast90Days()
    {
        return (DateTime.Now - LoggedOn).Days <= 90;
    }
}
Business Logic that Involves Multiple Entities?
Aggregate roots should manage their entity collections.
class Customer
{
    public void OrderProduct(Product product)
    {
        Orders.Add(new Order(product)); // <--
    }
}
But remember that an aggregate root should not micro-control its entities.
E.g. this is bad:
class Customer
{
    public bool IsOrderOverdue(Order order)
    {
        return Orders.First(o => o == order)... == ...;
    }
}
Instead use:
class Order
{
    public bool IsOverdue()
    {
        return ...;
    }
}
Mapping?
I suppose mapping to DTOs lives in the service layer. My mapping classes live next to the view model classes in the web project.
