External id as domain identity - domain-driven-design

Our application sends/receives a lot of data to/from a third party we work with.
Our domain model is mainly populated with that data.
The 'problem' we're having is identifying a 'good' candidate as domain identity for the aggregate.
It seems like we have 3 options:
Generate a domain identity (UUID or DB-sequence...);
Use the External-ID that comes along with all data from the external source as the domain identity.
Use an internal domain identity AND the External-ID as a separate id that 'might' be used for retrieval operations; the internal id is always leading.
About the External-ID:
It is 100% guaranteed the ID will never change
The ID is always managed by the external source
Other domains in our system might use the external-id for retrieval operations
Especially the last point above convinced us that the external-id is not an infrastructural concern but really belongs to the domain.
Which option should we choose?
** UPDATE **
Maybe I was not clear about the term 'third party'.
Actually, the external source is our client, who is active in the car industry. Our application uses the client's master data to complete several 'things'. We have several Bounded Contexts (BC) like 'Client management', 'Survey', 'Appointment', 'Maintenance', etc.
Our client sends us 'Tasks' that describe something that needs to be done.
That 'something' might be:
'let client X complete survey Y'
'schedule/cancel appointment for client X'
'car X for client Y is scheduled for maintenance at position XYZ'
Those 'Tasks' always have a 'task-id' that is guaranteed to be unique.
We store all incoming 'Tasks' in our database (active record style). Every possible action on a task maps to a domain event. (Multiple BCs might be interested in the same task.)
Every BC contains one or more aggregates which distribute some domain events to other BCs. For instance, when an appointment is canceled a domain event is triggered, maintenance listens to that event to get some things done.
However, our client expects some message after every action that is related to a Task. Therefore we always need to use the 'task-id'.
To summarize things:
Tasks have a task-id
Tasks might be related to multiple BCs
Every BC sends some 'result message' to the client with the related task-id
Task-ids are distributed by domain events
We keep every (internally) persisted task up-to-date
Hopefully, I was clear enough about the use of the external-id (= task-id) and our different BCs.

My gut feeling would be to manage your own identity and not rely on a third party service for this, so option 3 above. Difficult to say without context though. What is the 3rd party system? What is your domain?
Would you ever switch the 3rd party service?
You say other parts of your domain might use the external id for querying - what are they querying? Your internal systems or the 3rd party service?
[Update]
Based on the new information it sounds like a correlationId. I'd store it alongside the other information relevant to the aggregates.
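To make that concrete, here is a minimal sketch of option 3 in C# (all names such as MaintenanceTask and TaskCompleted are invented for this example): the aggregate is identified by its own id, and the external task-id is simply carried along as a correlation value that every result message and domain event can echo back to the client.

using System;

// Hypothetical sketch: the internal identity leads, the external task-id is
// stored alongside it purely for correlation with the client.
public class MaintenanceTask
{
    public Guid Id { get; }                 // internal domain identity (option 3)
    public string ExternalTaskId { get; }   // client's task-id, guaranteed never to change

    public MaintenanceTask(Guid id, string externalTaskId)
    {
        Id = id;
        ExternalTaskId = externalTaskId;
    }

    public TaskCompleted Complete()
    {
        // Domain events carry the external id so every BC can report
        // back to the client with the related task-id.
        return new TaskCompleted(Id, ExternalTaskId);
    }
}

public record TaskCompleted(Guid TaskId, string ExternalTaskId);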

As a general rule, I would veto using a DB-sequence number as an identifier; the domain model should be independent of the choice of persistence; the domain model writes the identifier to the database, rather than the other way around (if the DB wants to track a sequence number for its own purposes, that's fine).
I'm reluctant to use the external identifier, although it can make sense in some circumstances. A given entity, like "Customer", might have representations in a number of different bounded contexts - it might make sense to use the same identifier for all of them.
My default: I would reach for a name-based UUID, using the external ID as part of the seed, which gives a simple mapping from external to internal.
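For illustration, a rough C# sketch of such a name-based (RFC 4122 version 5) UUID; the helper name and the choice of namespace GUID are assumptions for this example, not something prescribed above.

using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

public static class DeterministicId
{
    // Any fixed namespace works; the RFC 4122 DNS namespace is used here only as an example.
    private static readonly Guid Namespace =
        Guid.Parse("6ba7b810-9dad-11d1-80b4-00c04fd430c8");

    public static Guid FromExternalId(string externalId)
    {
        // Namespace bytes must be in network (big-endian) order per RFC 4122.
        byte[] namespaceBytes = ToBigEndian(Namespace.ToByteArray());
        byte[] nameBytes = Encoding.UTF8.GetBytes(externalId);

        byte[] hash;
        using (var sha1 = SHA1.Create())
            hash = sha1.ComputeHash(namespaceBytes.Concat(nameBytes).ToArray());

        byte[] uuid = new byte[16];
        Array.Copy(hash, uuid, 16);
        uuid[6] = (byte)((uuid[6] & 0x0F) | 0x50); // version 5
        uuid[8] = (byte)((uuid[8] & 0x3F) | 0x80); // RFC 4122 variant

        // Format the big-endian bytes as a canonical UUID string and parse it,
        // which sidesteps Guid's mixed-endian in-memory layout.
        string hex = BitConverter.ToString(uuid).Replace("-", "").ToLowerInvariant();
        return Guid.Parse(
            $"{hex.Substring(0, 8)}-{hex.Substring(8, 4)}-{hex.Substring(12, 4)}-{hex.Substring(16, 4)}-{hex.Substring(20, 12)}");
    }

    private static byte[] ToBigEndian(byte[] guidBytes)
    {
        // .NET stores the first three GUID fields little-endian; swap them.
        return new[]
        {
            guidBytes[3], guidBytes[2], guidBytes[1], guidBytes[0],
            guidBytes[5], guidBytes[4],
            guidBytes[7], guidBytes[6],
            guidBytes[8], guidBytes[9], guidBytes[10], guidBytes[11],
            guidBytes[12], guidBytes[13], guidBytes[14], guidBytes[15]
        };
    }
}

Feeding in the same external task-id always yields the same internal Guid, which is what gives you the simple external-to-internal mapping.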


DDD Domain Entities using external Services

In DDD, you're strongly encouraged to have all business logic within the domain entities instead of separate from it. Which makes sense.
You also have the idea of Domain Services to encapsulate certain pieces of logic.
What I can't work out is how to have the domain entities perform their business logic that itself depends on external services.
Take, for example, a user management system. In this there is a User domain entity, on which there are various business actions to perform. One of these is verify_email_address - which sends an email to the user's email address in order to verify that it's valid.
Except that sending an email involves interactions with external services. So it seems reasonable for this logic to be encapsulated within a Domain Service that the User entity then makes use of. But how does it actually do this?
I've seen some things suggest that the User entity is constructed with references to every domain service that it needs. Which makes it quite big and bloated, and especially hard to test.
I've seen some things that suggest that you should pass in the domain service to the method that you are calling - which just feels weird. Why would someone outside of the domain entity need to pass in the EmailClient service when calling verify_email_address?
I've also then seen the suggestion that you instead pass the User entity in to the domain service, except that this seems to imply that the logic is in the domain service and not the entity.
And finally, I've seen suggestions that the entity should raise a domain event to trigger the email, which the domain service then reacts to. That means that all of the logic for whether to send the email or not - no need if it's already verified - and which email address to send it to is in the right place, and it also means that the user only needs a way to raise events and nothing more. So it feels less tightly coupled. But it also means you need a whole eventing mechanism, which is itself quite complicated.
So, am I missing something here? How can I achieve this?
(I'm working in Rust if that matters, but I don't see that it should)
What I can't work out is the best way to have the domain entities perform their business logic that itself depends on external services.
That's not your fault; the literature is a mess.
In DDD, you're strongly encouraged to have all business logic within the domain entities instead of separate from it.
That's not quite right; the common idea is that all business logic belongs within the domain layer (somewhere). But that doesn't necessarily mean that the logic must be within a domain entity.
Evans, in Chapter 5, writes:
In some cases, the clearest and most pragmatic design includes operations that do not belong to any object. Rather than force the issue, we can follow the natural contours of the problem space and include SERVICES explicitly in the model.
There are important domain operations that can't find a natural home in an ENTITY or VALUE OBJECT....
It's a very Kingdom of Nouns idea; we have code that actually does something useful, so there must be an object it can belong to.
Having a module (in the Parnas sense) in the domain layer that is responsible for the coordination of an email client and a domain entity (or for that matter, a repository) to achieve some goal is a perfectly reasonable option.
Could that module be the domain entity itself? It certainly could.
You might find, however, that coupling the management of the in-memory representation of domain information and the orchestration of a domain process that interacts with "the real world", as it were, complicates your maintenance responsibilities by introducing a heavy coupling between two distinct concepts that should instead be lightly coupled.
Clean Architecture (and the predecessors in that lineage) suggests to separate "entities" from "use cases". Ivar Jacobson's Objectory Process distinguished "entities" from "controls". So the notion of a service that is decoupled from the entity shouldn't be too alien.
Ruth Malan writes:
Design is what we do when we want to get more of what we want than we'd get by just doing it.
Finding the right design depends a lot on finding the right "what we want" for our local context (including our best guess at how this local context is going to evolve over the time window we care about).
VoiceOfUnReason has a perfectly valid answer.
I just want to boil your question down to its essentials.
What I can't work out is how to have the domain entities perform their business logic that itself depends on external services.
I've also then seen the suggestion that you instead pass the User entity in to the domain service, except that this seems to imply that the logic is in the domain service and not the entity.
That's the key. All logic that belongs to domain entities should be done on domain entities. But at the same time, domain entities MUST be independent of the outside world (even other domain entities).
That's why we have domain services and application services.
Domain services are used to coordinate things between multiple entities, like transferring money between two accounts:
public class TransferService
{
    private readonly IAccountRepos _repos;

    public TransferService(IAccountRepos repos)
    {
        _repos = repos;
    }

    public void Transfer(string fromAccountNumber, string toAccountNumber, decimal amount)
    {
        var account1 = _repos.Get(fromAccountNumber);
        var account2 = _repos.Get(toAccountNumber);

        var money = account1.Withdraw(amount);
        account2.Deposit(money);

        _repos.Update(account1);
        _repos.Update(account2);
    }
}
That's a domain service, since it's still only using the domain.
Application services on the other hand are used to communicate over boundaries and with external services.
And it's an application service that you should create in this case. It looks similar to a domain service but sits at a layer above it (it can use domain services to fulfil its purpose).
To summarize:
Entities must be used to perform actions on themselves (the easiest way is to make all setters private, which forces you to add methods).
Domain services should be used as soon as two or more entities must be used to solve a problem (they can even be two entities of the same type, as in the example above).
Application services are used to interact with things outside the domain and can use entities and/or domain services to solve the problem.
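As a rough sketch of that split for the verify-email case from the question (every name here - IEmailGateway, IUserRepository, IssueVerificationToken and so on - is invented for the example): the entity keeps the decision logic, and an application service coordinates the entity, the repository and the external mail gateway.

using System;

public class User
{
    public Guid Id { get; }
    public string EmailAddress { get; private set; }
    public bool EmailVerified { get; private set; }

    public User(Guid id, string emailAddress)
    {
        Id = id;
        EmailAddress = emailAddress;
    }

    // Entity logic: decide whether verification is needed at all.
    public bool RequiresEmailVerification() => !EmailVerified;

    // Entity logic: issue the token; the entity stays unaware of SMTP and the like.
    public string IssueVerificationToken() => Guid.NewGuid().ToString("N");
}

public interface IUserRepository
{
    User Get(Guid id);
    void Update(User user);
}

public interface IEmailGateway
{
    void SendVerificationMail(string emailAddress, string token);
}

// Application service: coordinates the entity with the outside world.
public class VerifyEmailAddressService
{
    private readonly IUserRepository _users;
    private readonly IEmailGateway _email;

    public VerifyEmailAddressService(IUserRepository users, IEmailGateway email)
    {
        _users = users;
        _email = email;
    }

    public void RequestVerification(Guid userId)
    {
        var user = _users.Get(userId);

        // No mail is sent when the address is already verified;
        // that decision belongs to the entity, not to this service.
        if (!user.RequiresEmailVerification())
            return;

        var token = user.IssueVerificationToken();
        _email.SendVerificationMail(user.EmailAddress, token);
        _users.Update(user);
    }
}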

Specifications Pattern crossing bounded context - Domain Driven Design

I'm trying to understand and implement with good practice the Specification Pattern within my domain driven design project.
I have few questions:
For example: is verifying that a customer is subscribed before updating data on an aggregate part of a business rule, and should it sit in the domain layer?
Can specifications cross bounded contexts? For example, I have a Subscription bounded context where we can find a specification called SubscriptionIsActive. In a second bounded context, let's call it PropertyManagement, I would like to call the SubscriptionIsActive specification, as a User needs to be a subscriber to create a Property aggregate. My specification here is crossing bounded contexts and I don't think that's the right choice; any recommendations or tips?
Where should the specification be instantiated when we want to use it: the Application layer (which contains commands and queries; we use CQRS) or the Domain layer within the aggregate root?
Finally, where should access control (e.g. a User has the right to edit some aggregate) sit in a domain-driven design: the Domain layer, or the Application layer in services before calling the aggregate?
Thanks
My recommendation is that a domain should not call out to obtain additional information/data. This means that everything a domain needs should be handed to it and the domain performs the transactional processing.
All authentication/authorization takes place outside the domain and access to the relevant domain operations is granted based on the identity/access outcome.
I have found specifications most useful on the data querying side of things. Usually there are multiple permutations w.r.t. how data is retrieved, and a specification encapsulates the various possibilities quite nicely (GetByName, GetByNameAndType, etc.). This doesn't mean that you cannot use a specification in the domain. You may have some complex piece of functionality to determine qualification of a domain object, and in that case a specification operating on the relevant domain object would be a good fit. To that end a specification doesn't cross a domain, as it would be part of a particular domain. In much the same way as a specification is useful in a data querying scenario, you may have a specification in the integration/application concern and have it determine qualification based on data from multiple domains, such as in your second example. This may even be something that is applied in a process manager implementation (integration).
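A minimal sketch of such a specification (type names invented; it assumes the subscription data has already been retrieved and handed in, so the domain never calls out for more information):

using System;

public enum SubscriptionStatus { Active, Cancelled }

public class Subscription
{
    public SubscriptionStatus Status { get; set; }
    public DateTime ExpiresOn { get; set; }
}

// Generic specification contract.
public interface ISpecification<T>
{
    bool IsSatisfiedBy(T candidate);
}

// Operates purely on the data it is handed.
public class SubscriptionIsActive : ISpecification<Subscription>
{
    public bool IsSatisfiedBy(Subscription candidate) =>
        candidate.Status == SubscriptionStatus.Active
        && candidate.ExpiresOn > DateTime.UtcNow;
}

The application layer (for example a command handler in the PropertyManagement context) would evaluate the specification against the subscription data it fetched, before invoking the aggregate.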
Not all business rules need to be specifications and if, as in your first case, a customer is not subscribed then your integration/application processing would stop any further transactional changes much as it would for an authorization failure.

DDD/Event sourcing, getting data from another microservice?

I wonder if you can help. I am writing an order system and currently have implemented an order microservice which takes care of placing an order. I am using DDD with event sourcing and CQRS.
The order service itself takes in commands that produce events; the order service also listens to its own events to create a read model (the idea here is to use CQRS, so commands for writes and queries for reads).
After implementing the above, I ran into a problem and its probably just that I am not fully understanding the correct way of doing this.
An order actually has dependents, meaning an order needs a customer and one or more products. So I will have two additional microservices for customers and products.
To keep things simple, I would like to concentrate on the customer (although I have exactly the same issue with products, my thinking is that if I fix the customer issue then the other one is automatically fixed also).
So back to the problem at hand: to create an order, the order needs a customer (and products). I currently have the customerId on the client, so when sending a command down to the order service I can pass in the customerId.
I would like to save the name and address of the customer with the order. How do I get the name and address for that customerId from the Customer Service in the Order Service?
I suppose, to summarize: when data from one service needs data from another service, how am I able to get this data?
Would it be a case of the order service creating an event for receiving a customer record? This is going to introduce a lot of complexity (more events) in the system.
The microservices are NOT coupled so the order service can't just call into the read model of the customer.
Anybody able to help me on this?
If you are using DDD, first of all, please read about bounded contexts. Forget microservices; they are just an implementation strategy.
Now back to your problem. Publish these events from the Customer aggregate (in your case the Customer microservice): CustomerRegistered, CustomerInfoUpdated, CustomerAccountRemoved, CustomerAddressChanged, etc. Then subscribe your Order service (again, in your case the application service inside the Order microservice) to listen to all the above events. Okay, not all of them, just what the order needs.
Now, you may have a question: what if the majority (or some) of my customers don't place orders? My order service will be full of unnecessary data. Is this a good approach?
Well, the answer might vary. I would say disk space is cheaper than memory, and a database query is faster than a network call from a performance perspective. If your database host (or your server) is limited, then you should not go with microservices. Moreover, I would build some business ideas on that unused customer data, e.g. list all customers who never ordered anything and send them some offers to grow my business. Just kidding. Don't feel bothered by unused data in microservices.
My suggestion would be to gather the required data on the front-end and pass it along. The relevant customer details that you want to denormalize into the order would be a value object. The same goes for the product data (e.g. id, description) related to the order line.
It isn't impossible to have the systems interact to retrieve data, but that does couple them at a lower level than seems necessary.
When data from one service needs data from another service, how am I able to get this data?
You copy it.
So somewhere in your design there needs to be a message that carries the data from where it is to where it needs to be.
That could mean that the order service is subscribing to events that are published by the customer service, and storing a copy of the information that it needs. Or it could be that the order service queries some API that has direct access to the data stored by the customer service.
Queries for the additional data that you need could be synchronous or asynchronous - maybe the work can be deferred until you have all of the data you need.
Another possibility is that you redesign your system so that the business capability you need is with the data, either moving the capability or moving the data. Why does ordering need customer data? Can the customer service do the work instead? Should ordering own the data?
There's a certain amount of complexity that is inherent in your decision to distribute the work across multiple services. The decision to distribute your system involves weighing various trade offs.
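To make the first option above concrete - the order service subscribing to customer events and keeping its own copy - here is a rough sketch; the event, record and store names are invented for this example:

using System;

public record CustomerAddressChanged(Guid CustomerId, string Name, string Address);

public record CustomerCopy(Guid CustomerId, string Name, string Address);

// Local table/collection owned by the Order service.
public interface ICustomerCopyStore
{
    void Upsert(CustomerCopy copy);
}

// Event handler inside the Order service that maintains a local, denormalized
// copy of just the customer fields the Order context needs.
public class CustomerEventsHandler
{
    private readonly ICustomerCopyStore _store;

    public CustomerEventsHandler(ICustomerCopyStore store) => _store = store;

    public void Handle(CustomerAddressChanged evt) =>
        _store.Upsert(new CustomerCopy(evt.CustomerId, evt.Name, evt.Address));
}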

DDD Layers and External Api

Recently I've been trying to make my web application use separated layers.
If I understand the concept correctly I've managed to extract:
Domain layer
This is where my core domain entities, aggregate roots, and value objects reside. I'm forcing myself to have a pure domain model, meaning I do not have any service definitions here. The only thing I define here is the repositories, which are actually hidden because the Axon framework implements them for me automatically.
Infrastructure layer
This is where Axon implements the repository definitions for my aggregates from the domain layer.
Projection layer
This is where the event handlers are implemented to project the data for the read model, using MongoDB to persist it. It does not know anything other than the event model (plain data classes in Kotlin).
Application layer
This is where the confusion starts.
Controller layer
This is where I'm implementing the GraphQL/REST controllers, this controller layer is using the command and query model, meaning it has knowledge about the Domain Layer commands as well as the Projection Layer query model.
As I've mentioned the confusion starts with the application layer, let me explain it a bit with simplified example.
Consider that I want a domain model to implement Pokemon fighting logic. I need to use the PokemonAPI that would provide me data on Pokemon names, stats, etc.; this would be an external API I would use to get some data.
Let's say that I have the domain implemented like this:
(Keep in mind that I've stretched this implementation so it forces some issues that I have in my own domain.)
Pokemon {
    id: ID
}

PokemonFight {
    id: ID
    pokemon_1: ID
    pokemon_2: ID

    handle(cmd: Create) {
        publish(PokemonFightCreated)
    }

    handle(cmd: ProvidePokemonStats) {
        // providing the stats for the pokemons
        publish(PokemonStatsProvided)
    }

    handle(cmd: Start) {
        // fights only when both pokemon stats were provided
        publish(PokemonsFought)
    }
}
The flow of data between layers would be like this.
User -> [HTTP] -> Controller -> [CommandGateway] -> (Application | Domain) -> [EventGateway] -> (Application | Domain)
Let's assume that two Pokemons are created, and the use case of a Pokemon fight is basically that when the fight gets created the stats are provided, and once the stats are provided the fight automatically starts.
This use case logic can be solved by using event processor or even saga.
However, as you see in the PokemonFight aggregate, there is a ProvidePokemonStats command, which basically provides their stats; however, my domain does not know how to get such data, as this data is provided by the PokemonAPI.
This confuses me a bit because the use case would need to be implemented on both layers: the application (so it provides the stats using the external API) and also the domain? The domain use case would just use purely domain concepts. But shouldn't I have one place for the use cases?
If I think about it, the only purpose of a saga/event processor that lives in the application layer is to provide proper data to my domain, so it can continue with its use cases. So when the external API fails, I send a command to the domain and then it can decide what to do.
For example, I could just put every saga/event processor in the application layer, so when I decide to change some automation flow I know exactly what module I need to edit and where to find it.
The other confusion is where I have multiple domains and I want to create a use case that uses many of them and connects the data between them; it immediately rings in my brain that this should be the application layer that would use domain APIs to control the use case, because I don't think that I should add a dependency on a different domain in the core one.
TL;DR
What layer should be responsible for implementing the automated process between aggregates (it can be a single aggregate, but you know what I mean) if the process requires some external API data?
What layer should be responsible for implementing the automated process between aggregates that live in different domains / microservices?
Thank you in advance, and I'm also sorry if what I've written sounds confusing or is too much text; I would highly appreciate any answers about layering DDD applications and the proper locations of the components.
I will try to put it clearly. If you use CQRS:
In the Write Side (commands): the application services are the command handlers. A command handler accesses the domain (repositories, aggregates, etc.) in order to implement a use case.
If the use case needs to access data from another bounded context (microservice), it uses an infrastructure service (via dependency injection). You define the infrastructure service interface in the application service layer, and the implementation in the infrastructure layer. The infrastructure then accesses the remote microservice, via HTTP REST for example, or through integration events.
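A rough, framework-agnostic sketch of that write side in C# (not Axon's actual API; every name here is invented): the command handler lives in the application layer and depends only on an interface for the external stats lookup, whose implementation calls the PokemonAPI from the infrastructure layer.

public record StartPokemonFight(string FightId, string Pokemon1Id, string Pokemon2Id);

public record PokemonStats(string PokemonId, int Attack, int Defense);

// Infrastructure service interface, defined alongside the application services;
// its implementation (calling the external PokemonAPI) lives in the infra layer.
public interface IPokemonStatsProvider
{
    PokemonStats GetStats(string pokemonId);
}

public interface IPokemonFightRepository
{
    PokemonFight Get(string fightId);
    void Update(PokemonFight fight);
}

public class PokemonFight
{
    // Aggregate behaviour elided; only the methods used below are sketched.
    public void ProvideStats(PokemonStats pokemon1, PokemonStats pokemon2) { /* ... */ }
    public void Start() { /* ... */ }
}

// Application service / command handler: it fetches the external data first
// and then hands it to the domain, so the aggregate never talks to the API.
public class StartPokemonFightHandler
{
    private readonly IPokemonFightRepository _fights;
    private readonly IPokemonStatsProvider _statsProvider;

    public StartPokemonFightHandler(IPokemonFightRepository fights, IPokemonStatsProvider statsProvider)
    {
        _fights = fights;
        _statsProvider = statsProvider;
    }

    public void Handle(StartPokemonFight cmd)
    {
        var stats1 = _statsProvider.GetStats(cmd.Pokemon1Id);
        var stats2 = _statsProvider.GetStats(cmd.Pokemon2Id);

        var fight = _fights.Get(cmd.FightId);
        fight.ProvideStats(stats1, stats2);
        fight.Start();
        _fights.Update(fight);
    }
}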
In the Read Side (queries): The application service is the query method (I think you call it a projection), which accesses the database directly. There's no domain here.
Hope it helps.
I do agree your wording might be a bit vague, but a couple of things do pop up in my mind which might steer you in the right direction.
Mind you, the wording makes it so that I am not 100% sure whether this is what you're looking for. If it isn't, please comment and correct me on the answer I'll provide, so I can update it accordingly.
Now, before your actual question, I'd firstly like to point out the following.
What I am guessing you're mixing up is the notion of the Messages and your Domain Model belonging to the same layer. To me personally, the Messages (a.k.a. your Commands, Events and Queries) are your public API. They are the language your application speaks, so they should be freely sharable with any component and/or service within your Bounded Context.
As such, any component in your 'application layer' contained in the same Bounded Context should be allowed to be aware of this public API. The one in charge of the API will be your Domain Model, that's true, but these concepts have to be shared to be able to communicate with one another.
That said, the component which will provide the stats to your aggregate can be viewed from two directions, I think.
It's a component that handles a specific 'Start Pokemon Match' Command. This component has the smarts to know to first retrieve the stats prior to dispatching the Create and ProvidePokemonStats commands, thus ensuring it'll consistently create a working match with the stats in it by not dispatching either command if the external stats-retrieval API fails.
Your angle in the question is to have an Event Handling Component that reacts to the creation of a Match. From here, I'd state a short-lived saga would be in place, as you'd need to deal with the fault scenario of not being able to retrieve the stats. A regular Event Handler is likely too lean to deal with this correctly.
Regardless of which of the two options you select, this service will deal with messages, a.k.a. your public API. As such it's within your application and not a component others will deal with directly, ever.
When it comes to your second question, I feel the same notion still holds. Two distinct applications/microservices only more so suggests you're talking about two different Bounded Contexts. Certainly then a Saga would be in place to coordinate the operations between both contexts. Note that between Bounded Contexts, you want to share consciously when it comes to the public API, as you'd ideally not expose everything to the outside world.
Hope this helps you out and if not, like I said, please comment and provide me guidance how to answer your question properly.

Duplicate business logic in front-end with ddd microservice back-end

Here's an abstract question with real world implications.
I have two microservices; let's call them the CreditCardsService and the SubscriptionsService.
I also have a SPA that is supposed to use the SubscriptionsService so that customers can subscribe. To do that, the SubscriptionsService has an endpoint where you can POST a subscription model to create a subscription, and in that model is a creditCardId that points to a credit card that should pay for the subscription. There are certain business rules that say whether or not you can use said credit card for the subscription (expiration is more than 12 months away, it's a VISA, etc.). These specific business rules are tied to the SubscriptionsService.
The problem is that the team working on the SPA wants a /CreditCards endpoint in the SubscriptionsService that returns all valid credit cards of the user that can be used in the subscription model. They don't want to implement the same business validation rules in the SPA that are in the SubscriptionsService itself.
To me this seems to go against the SOLID principles that are central to microservice design, specifically separation of concerns. I also ask myself, what precedent is this going to set? Are we going to have to add a /CreditCards endpoint to the OrdersService or any other service that might use creditCardId as a property of its model?
So the main question is this: What is the best way to design this? Should the business validation logic be duplicated between the frontend and the backend? Should this new endpoint be added to the SubscriptionsService? Should we try to simplify the business logic?
It is a completely fair request and you should offer that endpoint. If you define the rules for what CC is valid for your service, then you should offer any and all help dealing with it too.
Logic should not be repeated. That tends to make systems unmaintainable.
This has less to do with SOLID, although the SRP would also say that if you are responsible for something, then any related logic also belongs to you. This concern cannot be separated from your service, since it is defined there.
As a solution option, I would perhaps look into whether I can get away with linking to the CC Service, since you already have one. Could I redirect the client, perhaps with a constructed query, to the CC Service to get all relevant CCs, without the Subscription Service actually knowing them?
What is the best way to design this? Should the business validation logic be duplicated between the frontend and the backend? Should this new endpoint be added to the SubscriptionsService? Should we try to simplify the business logic?
From my point of view, I would integrate the "Subscription BC" (S-BC) with the "CreditCards BC" (CC-BC). The CC-BC is upstream and the S-BC is downstream. You could do it with a REST API in the CC-BC, or with a message queue.
But what I validate is the operation done with a CC, not the CC itself, i.e. I validate "is this CC valid for a subscription". And that validation is in the S-BC.
If the SPA wants to retrieve "the CCs of a user that he/she can use for subscription", it is a functionality of the S-BC.
The client (SPA) should call the S-BC API to use that functionality, and the S-BC performs the functionality by getting the CCs from the CC-BC and doing the validation.
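A rough sketch of what that S-BC functionality could look like; the rule shown simply mirrors the examples from the question (it's a VISA, expiry more than 12 months away) and all type names are invented:

using System;
using System.Collections.Generic;
using System.Linq;

public record CreditCard(Guid Id, string Scheme, DateTime ExpiresOn);

// Abstraction over the upstream CreditCards BC; the implementation would live
// in the infrastructure layer (REST client, message-fed local cache, ...).
public interface ICreditCardsClient
{
    IReadOnlyList<CreditCard> GetCardsForUser(Guid userId);
}

public class UsableCreditCardsQuery
{
    private readonly ICreditCardsClient _cards;

    public UsableCreditCardsQuery(ICreditCardsClient cards) => _cards = cards;

    public IReadOnlyList<CreditCard> Execute(Guid userId) =>
        _cards.GetCardsForUser(userId)
              .Where(IsUsableForSubscription)
              .ToList();

    // The rule lives in exactly one place, the Subscriptions BC,
    // so the SPA never has to re-implement it.
    private static bool IsUsableForSubscription(CreditCard card) =>
        card.Scheme == "VISA"
        && card.ExpiresOn > DateTime.UtcNow.AddMonths(12);
}

The SPA then calls that one endpoint and never has to duplicate the rule on the client side.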
In microservices and DDD the subscriptions service should have a credit cards endpoint if that is data that is relevant to the bounded context of subscriptions.
The creditcards endpoint might serve a slightly different model of data than you would find in the credit cards service itself, because in the context of subscriptions a credit card might look or behave differently. The subscriptions service would have a creditcards table or backing store, probably, to support storing its own schema of creditcards and refer to some source of truth to keep that data in good shape (for example messages about card events on a bus, or some other mechanism).
This enables three things. Firstly, the subscriptions service won't be completely knocked out if cards goes down for a while; it can refer to its own table and work anyway. Secondly, your domain code will be more focused, as it will only have to deal with the properties of credit cards that really matter to solving the current problem. Finally, your cards store can even have extra domain-specific properties that are computed and materialized on store.
Obligatory Fowler link: Bounded Context Pattern
Even when the source of truth is the domain model and ultimately you must have validation at the domain model level, validation can still be handled at both the domain model level (server side) and the UI (client side).
Client-side validation is a great convenience for users. It saves time they would otherwise spend waiting for a round trip to the server that might return validation errors. In business terms, even a few fractions of seconds multiplied hundreds of times each day adds up to a lot of time, expense, and frustration. Straightforward and immediate validation enables users to work more efficiently and produce better quality input and output.
Just as the view model and the domain model are different, view model validation and domain model validation might be similar but serve a different purpose. If you are concerned about DRY (the Don't Repeat Yourself principle), consider that in this case code reuse might also mean coupling, and in enterprise applications it is more important not to couple the server side to the client side than to follow the DRY principle. (from the .NET Microservices: Architecture for Containerized .NET Applications book)
