I am new to DDD and I am trying to use it in my Go REST API project. I am quite confused about how to handle situations where the logic crosses domains.
Let's say I have the following folder structure, where I have two domains (Customer and Order):
.
+-- Customer
|   +-- service
|   +-- http
|   +-- usecase
|   +-- repository
+-- Order
    +-- service
    +-- http
    +-- usecase
    +-- repository
Generally speaking, assume there is an endpoint in the Order domain that provides a create-order feature; the http service acts as a router and calls the corresponding usecase to handle the logic by adding order data to the repository (database).
What if I want to add new logic so that a customer earns reward points when they create an order? Assume there is a CreateOrder function in the usecase that writes a new order to the database; how should I interact with the Customer domain so that I can add reward points to the customer?
I am thinking of calling a Customer usecase function from the Order usecase function, but I am not sure whether that will trigger a circular import issue.
In an event-driven architecture you would use messaging to decouple your systems.
Your reward system could, for instance, subscribe to the CustomerActivatedEvent and register a new customer to track rewards for. In addition, the rewards bounded context could subscribe to the OrderShippedEvent and handle the appropriate reward processing.
This way, your Customers and Ordering BCs don't even know about each other, but the Rewards BC can happily reward the various customers.
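To make this concrete, here is a minimal sketch in Go (the language of the original question); the in-process bus, the event shape, and the one-point-per-dollar rule are all invented for illustration, and in production the bus would typically be a message broker:

// A sketch of decoupling Order and Rewards with an in-process event bus.
// In a real project these pieces would live in separate packages (event,
// order, rewards); all names here are illustrative.
package main

import "fmt"

// OrderShipped is a domain event. It would live in a shared event package,
// so the order and rewards packages never import each other.
type OrderShipped struct {
    OrderID    string
    CustomerID string
    TotalCents int64
}

// Bus is a minimal synchronous publish/subscribe bus.
type Bus struct{ handlers []func(OrderShipped) }

func (b *Bus) Subscribe(h func(OrderShipped)) { b.handlers = append(b.handlers, h) }

func (b *Bus) Publish(e OrderShipped) {
    for _, h := range b.handlers {
        h(e)
    }
}

// RewardsService reacts to order events without the Order domain knowing it exists.
type RewardsService struct{ points map[string]int64 }

func (s *RewardsService) Register(bus *Bus) {
    bus.Subscribe(func(e OrderShipped) {
        s.points[e.CustomerID] += e.TotalCents / 100 // 1 point per dollar (example rule)
    })
}

func main() {
    bus := &Bus{}
    rewards := &RewardsService{points: map[string]int64{}}
    rewards.Register(bus)

    // The Order usecase publishes the event after persisting the order.
    bus.Publish(OrderShipped{OrderID: "o-1", CustomerID: "c-1", TotalCents: 2500})
    fmt.Println(rewards.points["c-1"]) // 25
}

Because both sides depend only on the shared event type, neither the Order nor the Rewards package imports the other, which also sidesteps the circular import concern from the question.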
Related
Recently I've been trying to make my web application use separate layers.
If I understand the concept correctly I've managed to extract:
Domain layer
This is where my core domain entities, aggregate roots, and value objects reside. I'm forcing myself to have a pure domain model, meaning I do not have any service definitions here. The only thing I define here is the repositories, which are actually hidden because the Axon framework implements them for me automatically.
Infrastructure layer
This is where Axon implements the repository definitions for my aggregates in the domain layer.
Projection layer
This is where the event handlers are implemented to project the data for the read model, using MongoDB to persist it. It does not know anything other than the event model (plain data classes in Kotlin).
Application layer
This is where the confusion starts.
Controller layer
This is where I'm implementing the GraphQL/REST controllers, this controller layer is using the command and query model, meaning it has knowledge about the Domain Layer commands as well as the Projection Layer query model.
As I've mentioned, the confusion starts with the application layer; let me explain it a bit with a simplified example.
Suppose I want a domain model that implements Pokemon fighting logic. I need to use a PokemonAPI that provides data such as Pokemon names, stats, etc.; this is an external API I would use to get some data.
Let's say I have the domain implemented like this:
(Keep in mind that I've stretched this implementation so it surfaces some issues that I have in my own domain.)
Pokemon {
    id: ID
}

PokemonFight {
    id: ID
    pokemon_1: ID
    pokemon_2: ID

    handle(cmd: Create) {
        publish(PokemonFightCreated)
    }

    handle(cmd: ProvidePokemonStats) {
        // providing the stats for the pokemons
        publish(PokemonStatsProvided)
    }

    handle(cmd: Start) {
        // fights only when both pokemon stats were provided
        publish(PokemonsFought)
    }
}
The flow of data between layers would be like this.
User -> [HTTP] -> Controller -> [CommandGateway] -> (Application | Domain) -> [EventGateway] -> (Application | Domain)
Let's assume that two Pokemon are created, and the use case of a Pokemon fight is basically: when the fight is created, the stats are provided, and when the stats are provided, the fight automatically starts.
This use-case logic can be solved by using an event processor or even a saga.
However, as you see in the PokemonFight aggregate, there is a ProvidePokemonStats command, which provides the Pokemons' stats; my domain does not know how to get such data, as it is provided by the PokemonAPI.
This confuses me a bit, because the use case would need to be implemented on both layers: the application (so it provides the stats using the external API) and also the domain? The domain use case would use purely domain concepts. But shouldn't I have one place for the use cases?
If I think about it, the only purpose of a saga/event processor living in the application layer is to provide proper data to my domain so it can continue with its use cases. So when the external API fails, I send a command to the domain and it can decide what to do.
For example, I could just put every saga/event processor in the application layer, so when I decide to change some automation flow I know exactly which module I need to edit and where to find it.
The other confusion is when I have multiple domains and I want to create a use case that uses many of them and connects the data between them. It immediately rings in my brain that this should be the application layer, which would use the domain APIs to control the use case, because I don't think I should add a dependency on a different domain in the core one.
TL;DR
What layer should be responsible for implementing an automated process between aggregates (it can be a single aggregate, but you know what I mean) if the process requires some external API data?
What layer should be responsible for implementing an automated process between aggregates that live in different domains / microservices?
Thank you in advance, and I'm also sorry if what I've written sounds confusing or is too much text; I would highly appreciate any answers about layering DDD applications and the proper locations of the components.
I will try to put it clear. If you use CQRS:
In the Write Side (commands): the application services are the command handlers. A command handler accesses the domain (repositories, aggregates, etc.) in order to implement a use case.
If the use case needs to access data from another bounded context (microservice), it uses an infrastructure service (via dependency injection). You define the infrastructure service interface in the application service layer, and the implementation in the infrastructure layer. The infrastructure then accesses the remote microservice, via HTTP/REST for example, or through integration events.
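For illustration, a minimal Go sketch of that write-side arrangement; every name here (CreateOrder, CustomerChecker, and so on) is an assumption for the example, not an established API:

package app

import (
    "context"
    "errors"
)

var ErrInactiveCustomer = errors.New("customer is not active")

// CreateOrder is the command handled by the application service.
type CreateOrder struct {
    CustomerID string
    SKUs       []string
}

// Order is kept trivial for the sketch.
type Order struct {
    CustomerID string
    SKUs       []string
}

// CustomerChecker is the infrastructure service interface, declared in the
// application layer; the infra layer supplies an implementation (HTTP client,
// event-based integration, ...) that talks to the other bounded context.
type CustomerChecker interface {
    IsActive(ctx context.Context, customerID string) (bool, error)
}

// OrderRepository is likewise implemented by the infrastructure layer.
type OrderRepository interface {
    Save(ctx context.Context, o Order) error
}

// CreateOrderHandler is the application service, i.e. the command handler.
type CreateOrderHandler struct {
    customers CustomerChecker // injected
    orders    OrderRepository // injected
}

func (h *CreateOrderHandler) Handle(ctx context.Context, cmd CreateOrder) error {
    ok, err := h.customers.IsActive(ctx, cmd.CustomerID)
    if err != nil {
        return err // e.g. the remote bounded context is unreachable
    }
    if !ok {
        return ErrInactiveCustomer
    }
    return h.orders.Save(ctx, Order{CustomerID: cmd.CustomerID, SKUs: cmd.SKUs})
}

The point is the direction of the dependency: the application layer owns the interface, and the infrastructure layer plugs in an implementation, so the domain and application code never import the remote microservice's client code directly.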
In the Read Side (queries): the application service is the query method (I think you call it a projection), which accesses the database directly. There's no domain here.
Hope it helps.
I do agree your wording might be a bit vague, but a couple of things do pop up in my mind which might steer you in the right direction.
Mind you, the wording makes it so that I am not 100% sure whether this is what you're looking for. If it isn't, please comment and correct me on the answer I'll provide, so I can update it accordingly.
Now, before your actual question, I'd firstly like to point out the following.
What I am guessing you're mixing up is the notion of the Messages and your Domain Model belonging to the same layer. To me personally, the Messages (aka your Commands, Events and Queries) are your public API. They are the language your application speaks, so they should be freely sharable with any component and/or service within your Bounded Context.
As such, any component in your 'application layer' contained in the same Bounded Context should be allowed to be aware of this public API. The one in charge of the API will be your Domain Model, that's true, but these concepts have to be shared to be able to communicate with one another.
That said, the component which will provide the stats to your aggregate can, I think, be viewed from two directions.
It's a component that handles a specific 'Start Pokemon Match' command. This component has the smarts to know to first retrieve the stats before dispatching the Create and ProvidePokemonStats commands, thus ensuring it will consistently create a working match with the stats in it, by dispatching neither command if the external stats-retrieval API fails.
Your angle in the question is to have an Event Handling Component that reacts to the creation of a Match. Here, I'd state a short-lived saga would be in place, as you'd need to deal with the fault scenario of not being able to retrieve the stats. A regular Event Handler is unlikely to deal with this correctly.
Regardless of which of the two options you select, this service will deal with messages, a.k.a. your public API. As such, it's within your application and not a component others will deal with directly, ever.
When it comes to your second question, I feel the same notion still holds. Two distinct applications/microservices even more strongly suggest you're talking about two different Bounded Contexts. Certainly then a Saga would be in place to coordinate the operations between both contexts. Note that between Bounded Contexts you want to share your public API consciously, as you'd ideally not expose everything to the outside world.
Hope this helps you out and if not, like I said, please comment and provide me guidance how to answer your question properly.
Here's an abstract question with real world implications.
I have two microservices; let's call them the CreditCardsService and the SubscriptionsService.
I also have a SPA that is supposed to use the SubscriptionsService so that customers can subscribe. To do that, the SubscriptionsService has an endpoint where you can POST a subscription model to create a subscription, and in that model is a creditCardId that points to a credit card that should pay for the subscription. There are certain business rules that say whether or not you can use said credit card for the subscription (expiration is more than 12 months away, it's a VISA, etc.). These specific business rules are tied to the SubscriptionsService.
The problem is that the team working on the SPA wants a /CreditCards endpoint in the SubscriptionsService that returns all valid credit cards of the user that can be used in the subscription model. They don't want to implement in the SPA the same business validation rules that already exist in the SubscriptionsService.
To me this seems to go against the SOLID principles that are central to microservice design, specifically separation of concerns. I also ask myself: what precedent is this going to set? Are we going to have to add a /CreditCards endpoint to the OrdersService or any other service that might use creditCardId as a property of its model?
So the main question is this: What is the best way to design this? Should the business validation logic be duplicated between the frontend and the backend? Should this new endpoint be added to the SubscriptionsService? Should we try to simplify the business logic?
It is a completely fair request and you should offer that endpoint. If you define the rules for what CC is valid for your service, then you should offer any and all help dealing with it too.
Logic should not be repeated. That tends to make systems unmaintainable.
This has less to do with SOLID, although the SRP would also say that if you are responsible for something, then any related logic also belongs to you. This concern cannot be separated from your service, since it is defined there.
As a solution option, I would perhaps look into whether you can get away with linking to the CC Service, since you already have one. Could you redirect the client, perhaps with a constructed query, to the CC Service to get all relevant CCs, without the Subscription Service actually knowing them?
What is the best way to design this? Should the business validation logic be duplicated between the frontend and the backend? Should this new endpoint be added to the SubscriptionsService? Should we try to simplify the business logic?
From my point of view, I would integrate the Subscriptions BC (S-BC) with the CreditCards BC (CC-BC). CC-BC is upstream and S-BC is downstream. You could do it with a REST API in the CC-BC, or with a message queue.
But what I validate is the operation done with a CC, not the CC itself, i.e. validate "is this CC valid for subscription". And that validation is in S-BC.
If the SPA wants to retrieve "the CCs of a user that he/she can use for subscription", it is a functionality of the S-BC.
The client (SPA) should call the S-BC API to use that functionality, and the S-BC performs the functionality by getting the CCs from the CC-BC and doing the validation.
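A rough Go sketch of that flow inside the S-BC; the fetcher interface and card fields are invented, and the two rules shown (VISA, expiry more than 12 months away) are just the examples given in the question:

package subscriptions

import (
    "context"
    "time"
)

// CreditCard is the S-BC's own, minimal view of a card from the CC-BC.
type CreditCard struct {
    ID      string
    Network string // "VISA", "MC", ...
    Expiry  time.Time
}

// CardFetcher is implemented in infrastructure, e.g. as an HTTP client
// against the CreditCardsService (the upstream CC-BC).
type CardFetcher interface {
    CardsForUser(ctx context.Context, userID string) ([]CreditCard, error)
}

// UsableCardsForSubscription owns the validation rule: it is S-BC logic
// applied to data obtained from the upstream CC-BC.
func UsableCardsForSubscription(ctx context.Context, fetch CardFetcher, userID string, now time.Time) ([]CreditCard, error) {
    cards, err := fetch.CardsForUser(ctx, userID)
    if err != nil {
        return nil, err
    }
    var usable []CreditCard
    for _, c := range cards {
        if c.Network == "VISA" && c.Expiry.After(now.AddDate(1, 0, 0)) {
            usable = append(usable, c)
        }
    }
    return usable, nil
}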
In microservices and DDD the subscriptions service should have a credit cards endpoint if that is data that is relevant to the bounded context of subscriptions.
The creditcards endpoint might serve a slightly different model of data than you would find in the credit cards service itself, because in the context of subscriptions a credit card might look or behave differently. The subscriptions service would probably have a creditcards table or backing store to support storing its own schema of credit cards, and refer to some source of truth to keep that data in good shape (for example, messages about card events on a bus, or some other mechanism).
This enables three things. Firstly, the subscriptions service won't be completely knocked out if cards goes down for a while; it can refer to its own table and work anyway. Secondly, your domain code will be more focused, as it will only have to deal with the properties of credit cards that really matter to solving the current problem. Finally, your cards store can even have extra domain-specific properties that are computed and materialized on store.
Obligatory Fowler link: Bounded Context Pattern
Even when the source of truth is the domain model and ultimately you must have validation at the domain model level, validation can still be handled at both the domain model level (server side) and the UI (client side).
Client-side validation is a great convenience for users. It saves time they would otherwise spend waiting for a round trip to the server that might return validation errors. In business terms, even a few fractions of seconds multiplied hundreds of times each day adds up to a lot of time, expense, and frustration. Straightforward and immediate validation enables users to work more efficiently and produce better quality input and output.
Just as the view model and the domain model are different, view model validation and domain model validation might be similar but serve a different purpose. If you are concerned about DRY (the Don't Repeat Yourself principle), consider that in this case code reuse might also mean coupling, and in enterprise applications it is more important not to couple the server side to the client side than to follow the DRY principle. (from the .NET Microservices: Architecture for Containerized .NET Applications book)
Our application sends/receives a lot of data to/from a third party we work with.
Our domain model is mainly populated with that data.
The 'problem' we're having is identifying a 'good' candidate as domain identity for the aggregate.
It seems like we have 3 options:
Generate a domain identity (UUID or DB sequence, ...);
Use the External-ID, which comes along with all data from the external source, as the domain identity;
Use an internal domain identity AND the External-ID as a separate id that 'might' be used for retrieval operations; the internal id is always leading.
About the External-ID:
It is 100% guaranteed the ID will never change
The ID is always managed by the external source
Other domains in our system might use the external-id for retrieval operations
Especially the last point above convinced us that the external-id is not an infrastructural concern but really belongs to the domain.
Which option should we choose?
** UPDATE **
Maybe I was not clear about the term '3rd party'.
Actually, the external source is our client, who is active in the car industry. Our application uses the client's master data to complete several 'things'. We have several Bounded Contexts (BCs) like 'Client management', 'Survey', 'Appointment', 'Maintenance', etc.
Our client sends us 'Tasks' that describe something that needs to be done.
That 'something' might be:
'let client X complete survey Y'
'schedule/cancel appointment for client X'
'car X for client Y is scheduled for maintenance at position XYZ'
Those 'Tasks' always have a 'task-id' that is guaranteed to be unique.
We store all incoming 'Tasks' in our database (active-record style). Every possible action on a task maps to a domain event. (Multiple BCs might be interested in the same task.)
Every BC contains one or more aggregates which distribute some domain events to other BCs. For instance, when an appointment is cancelled, a domain event is triggered; maintenance listens to that event to get some things done.
However, our client expects some message after every action that is related to a Task. Therefore we always need to use the 'task-id'.
To summarize things:
Tasks have a task-id
Tasks might be related to multiple BCs
Every BC sends some 'result message' to the client with the related task-id
Task-ids are distributed by domain events
We keep every (internally) persisted task up-to-date
Hopefully, I was clear enough about the use of the external-id (= task-id) and our different BCs.
My gut feeling would be to manage your own identity and not rely on a third party service for this, so option 3 above. Difficult to say without context though. What is the 3rd party system? What is your domain?
Would you ever switch the 3rd party service?
You say other parts of your domain might use the external id for querying - what are they querying? Your internal systems or the 3rd party service?
[Update]
Based on the new information it sounds like a correlationId. I'd store it alongside the other information relevant to the aggregates.
As a general rule, I would veto using a DB sequence number as an identifier; the domain model should be independent of the choice of persistence. The domain model writes the identifier to the database, rather than the other way around (if the DB wants to track a sequence number for its own purposes, that's fine).
I'm reluctant to use the external identifier, although it can make sense in some circumstances. A given entity, like "Customer" might have representations in a number of different bounded contexts - it might make sense to use the same identifier for all of them.
My default: I would reach for a name-based UUID, using the external ID as part of the seed, which gives a simple mapping from external to internal.
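In Go, for instance, this name-based mapping can be done with the version-5 UUID support in github.com/google/uuid; the namespace constant below is an arbitrary value you would pick once per aggregate type and never change:

package identity

import "github.com/google/uuid"

// taskNamespace is a fixed, arbitrary namespace for Task identities.
var taskNamespace = uuid.MustParse("3f2d9c6e-8b41-4c5a-9d27-6a1e5b8c0f42")

// InternalTaskID deterministically derives the internal identity from the
// external task-id: the same input always yields the same UUID (version 5).
func InternalTaskID(externalID string) uuid.UUID {
    return uuid.NewSHA1(taskNamespace, []byte(externalID))
}

Since the mapping is deterministic, you can recompute the internal id from an incoming external task-id at any time, without a lookup table.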
Not quite sure how to approach this problem regarding DDD.
Say you have 2 domains:
A Product domain, which is responsible for creating new Products and managing the existing Products a person has created. Product Aggregate Root.
A Store domain, which is responsible for assigning all created Products to a given Store to be sold, among other things. But a Store can also create new Products, which are assigned to that Store while remaining available for other Stores to assign to themselves. Store Aggregate Root.
A Product can exist without belonging to a Store. A Store can exist without having any Products in it whatsoever (doesn't make sense, I know, but this is just a simple example of a complex solution I'm working on at the moment).
So when a Person comes along into the system, they can start at either end. They can start by creating new Products or they can start by adding a new Store.
Here is where it gets complicated, when they create a new Store, they are given the option to add existing Products or they can create a new Product and add it to the Store.
How do you deal with this use case? Does the Store have a CreateNewProduct behavior, where it's responsible for setting up the new Product and then adding it to the Store? Or do you simply create a new Product outside the Store domain, as part of the Product domain, and tell the Store to AddProduct / AddExistingProduct?
UPDATE: Would something like this be appropriate as a Domain Service?
public class StoreProductService {
    public Store CreateNewStoreProduct(Store store, string sku, string name, etc) {
        Product newProduct = ProductFactory.CreateNewProduct(sku, name, etc);
        store.AddProduct(newProduct);
        return store;
    }
}
You need the two domains to communicate with each other. You can communicate using various methods, but one common way is to use a REST API. In your Store domain, you need something that communicates, or knows how to communicate, with the API. I have generally implemented these as a service that wraps API calls. The service understands your domain's Ubiquitous Language and translates it into API commands. You can use domain events to listen for when the product has been created, and then you can associate the store with the product, or you can implement some kind of polling mechanism.
An example was requested, so here goes:
Call from UI (probably from a controller)
StoreService.CreateStore(storeName: String, newProduct : StoreProduct)
You could pass in primitives to represent the new product. This allows you to create a new product without the need for a new class for it. You subsequently don't need a shared infrastructure class or a converter between your domain and UI layers.
StoreService
public class StoreService
{
    public StoreService(ProductApiClient productApiClient...)...

    public void CreateStore(string storeName, StoreProduct prod...)
    {
        var newStore = StoreFactory.Create(storeName);

        // below might be better as an async call, but for simplicity
        // keeping it all synchronous
        var newProd = productApiClient.CreateProduct(prod);
        // check if product was created successfully, if so
        newStore.Add(newProd);

        // save
        StoreRepo.Create(newStore);
        // check if store was created successfully
    }
}
The main points to take from here:
Create your aggregate entity using a factory (at this stage it has not been persisted).
You need a repo to serialize and persist your new store to your data store.
productApiClient will translate between your domain model and a request to the product API. It knows how to generate an API request (if REST, for example) given domain entities.
ProductApiClient will return a domain entity from the API response.
You add a new product to a store using the store entity.
You persist your store using the repo
You may choose a more asynchronous approach if the call to the API is time-consuming.
This example illustrates the power of DDD and the separation of concerns. Depending on the size of your project and your requirements, this might be overly tedious so fit it to your needs, skills and requirements.
This business problem is very common. You can surely find examples of eCommerce systems designed with DDD.
A typical situation is that you have not finished developing your UL (Ubiquitous Language). If you discuss the Store (I won't mark it as code since I am talking about UL terms, not classes) with your domain experts, you may find out that the Store only cares about Product Availability, not the Product itself.
Typically, your Product, which belongs to a Product Catalogue, has little to do with availability. It might have a general description, manufacturer details, country of origin, picture, package size and weight, and so on.
Product Availability, however, has, well, the availability, i.e. the count of available product items. It might have some extra details, for example a count of reserved items, of items expected to arrive, and so on.
The fact is that often both of these concepts, despite being rather different, are referred to by domain experts as "Product". This is a typical example of two separate Bounded Contexts.
As soon as you have two different things that are called Product in two different Bounded Contexts (and physical places), you will need to keep them in sync. This is usually done by publishing domain events outside of a Bounded Context and updating the other side by subscribing to these events there and doing internal domain command handling inside that Bounded Context.
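Sketched in Go, with an invented integration event and store interface (the zero-initial-availability rule is just an example):

package availability

import "context"

// ProductCreated is the integration event published by the Catalogue BC.
type ProductCreated struct {
    ProductID string
    SKU       string
    Name      string
}

// Store is the Availability BC's own persistence for its own model.
type Store interface {
    UpsertAvailability(ctx context.Context, productID string, available int) error
}

// Handler keeps the Availability BC's "Product" in sync with the Catalogue
// BC's "Product" without sharing code or a database between the contexts.
type Handler struct{ store Store }

func (h *Handler) OnProductCreated(ctx context.Context, e ProductCreated) error {
    // A newly catalogued product starts with zero available items here;
    // each context keeps only the fields it cares about.
    return h.store.UpsertAvailability(ctx, e.ProductID, 0)
}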
I am implementing a RESTful service that has a security model requiring authorization at three levels:
Resource-level authorization: determining whether the user has access to the resource (entity) at all. For example, does the current user have permission to view customers in general? If not, then everything stops there.
Instance-level authorization: determining whether the user has access to a specific instance of a resource (entity). Due to various rules and the state of the entity, the current user may not be granted access to one or more customers within the set of customers. E.g., a customer may be able to view their own information but not the information of another customer.
Property-level authorization: determining which properties the user has access to on an instance of the resource (entity). We have many business rules that determine whether a user may see and/or change individual properties of a resource (entity). For example, the current user may be able to see the customer's name but not their address or phone number, while being able to see and add notes.
Implementing resource-level authorization is straightforward; however, the other two are not. I believe the solution for instance-level authorization will reveal itself with a solution to the (harder, imo) property-level authorization question. The latter issue is complicated by the fact that I need to communicate the authorization decisions, by property, in the response message (a la hypermedia); in other words, this isn't something I can simply enforce in property setters.
With each request to the service, I must use the current user's information to perform these authorization checks. In the case of a GET request for either a list of resources or an individual resource, I need to tell the API layer which attributes the current user can see (is visible) and whether the attribute is read-only or editable. The API layer will then use this information to create the appropriate response message. For instance, any property that is not visible will not be included in the message. Read-only properties will be flagged so the client application can render the property in the appropriate state for the user.
Solutions like application services, aspects, etc. work great for resource-level authorization and can even be used for instance-level checks, but I am stuck determining how best to model my domain so the business rules and checks enforcing the security constraints are included.
NOTE: Keep in mind that this goes way beyond role-based security in that I am getting the final authorization result based on business rules using the current state of the resource and environment (along with verifying access using the permissions granted to the current user via their roles).
How should I model the domain so I have enforcement of all three types of authorization checks (in a testable way, with DI, etc)?
Initial Assumptions
I assume the following architecture:
Stateless Scaling Sharding
. .
. .
. .
+=========================================+
| Service Layer |
+----------+ | +-----------+ +---------------+ | +------------+
| | HTTP | | | | | | Driver, Wire, etc. | |
| Client | <============> | RESTful | <====> | Data Access | <=======================> | Database |
| | JSON | | Service | DTO | Layer | | ORM, Raw, etc. | |
| | | | | | | | | |
+----------+ | +-----------+ +---------------+ | +------------+
+=========================================+
. .
. .
. .
Initially, let us suppose you authenticate the Client with the Service Layer and obtain a particular token that encodes the authentication and authorization information.
What I first think of is to process all the requests fully and only filter them afterwards depending on authorization. This would make the whole thing much simpler and much easier to maintain. However, there might of course be some requests requiring expensive processing, in which case this approach is not effective at all. On the other hand, heavy-load requests will most probably involve resource-level access, which, as you have stated, is easy to organise, and which can be detected and authorised in the Service Layer at the API or at least the Data Access level.
Further Thoughts
As for instance- and property-level authorization, I would not even try to put it into the Data Access Layer; I would completely isolate it at the API level and above, i.e. from the Data Access Layer down, no layer would even be aware of it. Even if you request a list of 1M objects and want to emit only one or two properties of each object for that particular client, it would be preferable to fetch the whole objects and only hide the properties afterwards.
Another assumption is that your model is a clear DTO, i.e. simply a data container, and all the business logic is implemented in the Service Layer, particularly at the API level. Say you pass the data across HTTP encoded as JSON; then somewhere in front of the API layer you are going to have a small serialization stage to transform your model into JSON. This stage, I think, is the ideal place to put the instance and property authorization.
Suggestion
When it comes to property-level authorization, I think there is no reasonable way to isolate the model from the security logic. Be it rule-based, role-based or whatever-based authorisation, the process is going to be verified against a piece of information from the authentication/authorisation token provided by the Client. So, at the serialisation stage, you will get basically two parameters, the token and the model, and will accordingly serialise the appropriate properties, or the instance as a whole.
When it comes to defining the rules, roles and whatevers per property for the model, it can be done in various ways depending on the available paradigms, i.e. depending on the language the Service Layer is implemented in. The definitions can be done heavily utilising Annotations (Java) or Decorators (Python). For emitting specific properties, Python comes in handy with its dynamic typing and hacky features, e.g. Descriptors. In the case of Java, you might end up encapsulating properties in a template class, say AuthField<T>.
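Go has neither annotations nor decorators, but the same per-property definitions can be approximated with struct tags read by the serialiser via reflection. In this toy sketch the auth tag, the permission strings, and the Customer fields are all invented:

package serialize

import (
    "encoding/json"
    "reflect"
    "strings"
)

// Customer is the model; the auth tag declares which permission the caller's
// token must carry before the field may appear in the response.
type Customer struct {
    Name  string `json:"name" auth:"customer:read"`
    Phone string `json:"phone" auth:"customer:read-contact"`
    Notes string `json:"notes" auth:"customer:read-notes"`
}

// Marshal emits only the fields the caller's permissions allow.
func Marshal(v any, perms map[string]bool) ([]byte, error) {
    out := map[string]any{}
    rv := reflect.ValueOf(v)
    rt := rv.Type()
    for i := 0; i < rt.NumField(); i++ {
        f := rt.Field(i)
        if need := f.Tag.Get("auth"); need != "" && !perms[need] {
            continue // the caller may not see this property
        }
        name := strings.Split(f.Tag.Get("json"), ",")[0]
        out[name] = rv.Field(i).Interface()
    }
    return json.Marshal(out)
}

Calling Marshal(c, map[string]bool{"customer:read": true}) would then emit only the name property; the model itself stays a plain data container.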
Summary
Summing it all up, I would suggest putting the instance and property authorization in front of the API Layer, at the serialisation stage. Basically, the roles/rules will be assigned in the model, and the authorisation will be performed in the serialiser, which is provided with the model and the token.
Since a comment was added recently, I thought I'd update this post with my learnings since I originally asked the question...
Simply put, my original logic was flawed and I was trying to do too much in my business entities. Following a CQRS approach helped make the solution clearer.
For state changes, the "write model" uses the business entity and a "command handler"/"domain service" performs authorization checks to make sure the user has the necessary permissions to update the requested object and, where applicable, change specific properties of that object. I still debate whether the property-level checks belong inside the business entity methods or outside (in the handler/service).
In the case of the "read model", a "query handler"/"domain service" checks the resource- and instance-level authorization rules so only objects to which the user has access are returned. The handler/service uses a mapper object that applies the property-level authorization rules to the data when constructing the DTO(s) to return. In other words, the data access layer returns the "projection" (or view) regardless of authorization rules and the mapper ignores any properties to which the current user does not have access.
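A hypothetical Go sketch of such a mapper (the row, DTO, and Access shapes are invented); pointer fields with omitempty let a hidden property disappear from the response entirely rather than appear as an empty value:

package query

// CustomerRow is whatever the read store returns, unfiltered.
type CustomerRow struct{ ID, Name, Phone string }

// CustomerDTO is the response shape; nil pointers are omitted from the JSON.
type CustomerDTO struct {
    ID    string  `json:"id"`
    Name  *string `json:"name,omitempty"`
    Phone *string `json:"phone,omitempty"`
}

// Access holds the property-level decisions computed by the business rules
// for this user against this instance.
type Access struct{ CanSeeName, CanSeePhone bool }

// ToDTO applies the property-level authorization while mapping.
func ToDTO(row CustomerRow, a Access) CustomerDTO {
    dto := CustomerDTO{ID: row.ID}
    if a.CanSeeName {
        dto.Name = &row.Name
    }
    if a.CanSeePhone {
        dto.Phone = &row.Phone
    }
    return dto
}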
I dabbled with dynamic query generation based on the list of fields the user has access to, but found that to be overkill in our case. In a more complex solution with larger, more complex entities, though, I can see that being more viable.