Several sources claim that process managers do not contain any business logic. A Microsoft article for example says this:
You should not use a process manager to implement any business logic in your domain. Business logic belongs in the aggregate types.
Further up they also say this (emphasis mine):
It's important to note that the process manager does not perform any business logic. It only routes messages, and in some cases translates between message types.
However, I fail to see why translations between messages (e.g. from a domain event to a command) are not part of the business logic. You need a domain expert to know what the correct order of steps and the translations between them are. In some cases you also need to persist state in between steps, and you may even select the next steps based on some (business) condition. So not everything is a static list of given steps (although I'd call even that business logic).
In many ways a process manager (or saga for that matter) is just another aggregate type that persists state and may have some business invariants, in my opinion.
Assuming that we implement DDD with a hexagonal architecture, I'd place the process manager in the application layer (not the adapter layer!) so that it can react to messages or be triggered by a timer. It would load a corresponding process manager aggregate via a repository and call methods on it that either set its (business) state or ask it for the next command to send (where the actual sending is done by the application layer, of course). This aggregate lives in the domain layer because it contains business logic.
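To make that concrete, here is a minimal sketch of what I mean, with entirely hypothetical names (ShippingProcess, CommandBus, etc.) and error handling omitted:

```java
// Application layer: reacts to a message, loads the process manager
// aggregate via a repository, and dispatches whatever command the
// process decides on next. All types here are invented for illustration.
class ShippingProcessHandler {
    private final ShippingProcessRepository repository;
    private final CommandBus commandBus;

    ShippingProcessHandler(ShippingProcessRepository repository, CommandBus commandBus) {
        this.repository = repository;
        this.commandBus = commandBus;
    }

    void on(OrderPaid event) {
        // The aggregate lives in the domain layer and applies the business
        // rules: which step comes next, and under which conditions.
        ShippingProcess process = repository.byOrderId(event.orderId());
        Command next = process.handle(event);
        repository.save(process);
        // The actual sending stays an application-layer concern.
        commandBus.send(next);
    }
}
```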
I really don't understand why people make a distinction between business rules and workflow rules. If you deleted everything except the domain layer, you should be able to reconstruct a working application without having to consult a domain expert again.
I'd be happy to get further insight into whatever I might be missing.
A fair portion of the confusion here is a consequence of semantic diffusion.
The spelling "process manager" comes from Enterprise Integration Patterns (Hohpe and Woolf, 2003). There, it is a messaging pattern; more precisely, it is one possible specialization of a message router. The motivation for a message router is a decoupling of the sender and receiver.
If new message types are defined, new processing components are added, or routing rules change, we need to change only the Message Router logic, while all other components remain unaffected.
Process manager, in this context, refers to a specialization of message router that sits in the middle of a hub and spoke design, maintaining the state of the processing sequence and "determining the next processing step based on intermediate results".
The "process definition" is, of course, something that the business cares about -- we're passing these messages around to coordinate activities in different parts of the enterprise, after all.
And yes... this thing that maintains the "state of the processing sequence" does sound a lot like an example of a "domain entity". This is true.
BUT: it is an entity of the message-routing domain, which is to say that it is bookkeeping to ensure that messages go to the right place, rather than bookkeeping of business information (e.g. the routing of shipping containers).
Expressed in the language of hexagonal architecture, what a process manager is doing is keeping track of messages sent to other hexagons (and, of course, the messages that they send back).
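To make the contrast concrete, here is a deliberately minimal sketch (invented names) of a process manager in that EIP sense: the only state it keeps is where each correlated message sequence currently stands, not any business data.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Routing bookkeeping only: which step of a fixed processing sequence a
// given correlation id is in, and which channel the next message goes to.
// A real EIP process manager could also branch on intermediate results.
class RoutingProcessManager {
    private final Map<String, Integer> stepByCorrelationId = new HashMap<>();
    private final List<String> channels = List.of("validate", "enrich", "bill");

    // Returns the channel for the next processing step, or empty when done.
    Optional<String> route(String correlationId) {
        int step = stepByCorrelationId.merge(correlationId, 1, Integer::sum) - 1;
        return step < channels.size() ? Optional.of(channels.get(step)) : Optional.empty();
    }
}
```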
Domain logic is not only to be found in aggregates and domain services. Other places are:
Appropriately handling domain events. Domain events can be translated into one or multiple commands to aggregates; they can trigger those commands based on some condition/policy on the event itself and/or the state of other aggregates; they can inform an ongoing business process to proceed with its next step(s), and so forth. All of these things are part of domain logic.
A business process is a (potentially distributed) state machine that may involve various actors/users/systems. The allowed states and the transitions between them are all a core part of the domain logic (see the sketch after this list).
A saga is an eventually consistent transaction spanning multiple local or foreign aggregates that completes either successfully or compensates already executed steps in a best-effort manner. The steps that make up a saga can only be known to domain experts and thus are part of the domain logic.
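As a sketch of the second point, here is what such a state machine might look like in code. The states and the cancellation rule are invented, but whatever they are in reality, only a domain expert can tell you:

```java
// Hypothetical fulfilment process as an explicit state machine. The allowed
// states and the transitions between them are domain logic.
enum OrderProcessState { PLACED, PAID, SHIPPED, DELIVERED, CANCELLED }

class OrderProcess {
    private OrderProcessState state = OrderProcessState.PLACED;

    void pay()     { transition(OrderProcessState.PLACED,  OrderProcessState.PAID); }
    void ship()    { transition(OrderProcessState.PAID,    OrderProcessState.SHIPPED); }
    void deliver() { transition(OrderProcessState.SHIPPED, OrderProcessState.DELIVERED); }

    void cancel() {
        // Invented business rule: only un-shipped orders may be cancelled.
        if (state == OrderProcessState.SHIPPED || state == OrderProcessState.DELIVERED)
            throw new IllegalStateException("cannot cancel after shipping");
        state = OrderProcessState.CANCELLED;
    }

    private void transition(OrderProcessState from, OrderProcessState to) {
        if (state != from)
            throw new IllegalStateException(state + " -> " + to + " is not allowed");
        state = to;
    }
}
```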
The reasons why these three things are, in my opinion, mistaken for an application-layer-only concern are the following:
In order to handle a domain event, we must load and later save the affected aggregates. Because of this, the handler must also be part of the application layer. But crucially, not only there. If we respect the idea behind a hexagonal architecture, then for each domain event handler residing in the application layer, there must be a corresponding one located in the domain layer, even in the most trivial case where one domain event translates to exactly one command method invocation on some aggregate. This is probably omitted in many examples because it initially adds little value. But just imagine that later on the translation comes to depend on some further business condition. Will we also just place that in the application-layer handler? Remember: all of our domain logic should be in the domain layer.
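A sketch of that pairing, with invented names; note how the business condition lives in the domain layer while the application layer only loads, saves, and dispatches:

```java
// Domain layer: the event-to-command translation, including the business
// condition, lives here. All types are invented for illustration.
class InvoicePolicy {
    List<Command> on(OrderPlaced event, Customer customer) {
        if (customer.isBlocked())
            return List.of();                          // business condition
        return List.of(new CreateInvoice(event.orderId()));
    }
}

// Application layer: orchestration only.
class OrderPlacedHandler {
    private final CustomerRepository customers;
    private final CommandBus commandBus;
    private final InvoicePolicy policy = new InvoicePolicy();

    OrderPlacedHandler(CustomerRepository customers, CommandBus commandBus) {
        this.customers = customers;
        this.commandBus = commandBus;
    }

    void handle(OrderPlaced event) {
        Customer customer = customers.byId(event.customerId());
        policy.on(event, customer).forEach(commandBus::send);
    }
}
```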
A side note: Even if we respect this separation of concerns, we still have the choice of letting the domain event be handled by an aggregate itself or letting it be translated into an aggregate command by a thin domain service. This choice, however, is based on an entirely different concern: whether or not we want to couple aggregates more tightly. There is no right or wrong answer here. Some things just naturally are coupled more tightly, while others might benefit from some extra indirection for increased flexibility.
In order to correctly implement a business process or saga, we have to take care of various application-specific concerns like message de-duplication, idempotency, retries, timeouts, logging, etc. One might argue that the domain logic itself should be responsible for dealing with at least some of these aspects first-hand (see Vaughn Vernon's excellent talk about modelling uncertainty). Remember, however, that the essence of the sequence of (allowed) steps/actions is entirely based on domain logic.
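For illustration, one common shape of the de-duplication concern (in-memory here for brevity; a real implementation would persist the processed ids transactionally with the state change; SagaStep and Message are invented types):

```java
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Application-layer wrapper keeping a saga step idempotent under
// at-least-once delivery.
class DeduplicatingHandler {
    private final Set<UUID> processed = ConcurrentHashMap.newKeySet();
    private final SagaStep step;

    DeduplicatingHandler(SagaStep step) { this.step = step; }

    void handle(UUID messageId, Message message) {
        if (!processed.add(messageId))
            return;                  // duplicate delivery, already handled
        step.execute(message);       // the step itself remains domain logic
    }
}
```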
Finally, a word about coupling. In my view, there is a tendency in the community to regard coupling as a bad thing per se that must be avoided or mitigated by all means. This can lead to solutions like placing event-command translations (remember: domain logic!) out in the adapter layer of a hexagonal/onion/clean architecture. This layer's responsibility is to adapt something to something else with the same semantics/function but a slightly different form (think power adapters). It is not the place to host any kind of domain logic, even if it is dead simple. Businesses have dependencies and coupling all over the place. The art is to embrace coupling where it actually exists and avoid it otherwise. There is a reason why we have partnership and customer/supplier relationships in DDD. And if we care about domain logic isolation, those dependencies are reflected right where they belong: in the domain layer.
A side note: An anti-corruption layer (DDD) is a valid example of an adapter. For example, it may take a bunch of remote domain events and transform/combine them in any way necessary to suit the local model. They still remain events that happened in the past, and don't just magically become commands. The transformation only changes form, not function. And it doesn't eliminate the inevitable coupling from a domain perspective. It just rephrases the same thing in a slightly different language.
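A tiny sketch of what such an adapter might look like (invented names): the remote fact is rephrased in the local language, but it stays a fact.

```java
// Anti-corruption layer: translate a remote event into the local model.
// Same fact, different words; nothing becomes a command.
class BillingTranslator {
    SubscriptionStarted translate(RemoteContractActivated remote) {
        return new SubscriptionStarted(
            new SubscriptionId(remote.contractNumber()),
            remote.activationDate());
    }
}
```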
Related
I am new to DDD, and this is my first project applying DDD and clean architecture.
I have implemented some code in the domain layer and in the application layer, but currently I have a point of confusion.
Now I have an API, placeOrder,
and to place the order I need to grab data from other microservices: product details from the product service and address details from the user profile microservice.
My question is: should this data be pulled in the domain layer or in the application layer?
Is it possible to implement a domain service with an interface that represents the product service API, use that interface in the domain layer, and have it injected later using dependency injection? Or should I implement the call to this remote API in the application layer?
Please note that calling the product service and the address service are prior steps to creating the order.
Design is what we do when we want to get more of what we want than we would get by just doing it. -- Ruth Malan
should this data be pulled in the domain layer or in the application layer?
Eric Evans, introducing chapter four of the original DDD book, wrote
We need to decouple the domain objects from other functions of the system, so we can avoid confusing the domain concepts with other concepts related only to software technology or losing sight of the domain altogether in the mass of the system.
The domain model doesn't particularly care whether the information it is processing comes from a remote microservice or a cache. It certainly isn't interested in things like network failures and re-try strategies.
Thus, in an idealized system, the domain model would only be concerned with the manipulation of business information, and all of the "plumbing" would be somewhere else.
Sometimes, we run into business processes where (a) some information is relatively "expensive" to acquire and (b) you don't know whether or not you need that information until you start processing the work.
In that situation, you need to start making some choices: does the domain code return a signal to the application code to announce that it needs more information, or do we instead pass to the domain code the capability to get the information for itself?
Offering the retrieval capability to the domain code, behind the facade of a "domain service", is a common implementation of the latter approach. It works fine when your failure modes are trivial (queries never fail, or abort-on-failure is an acceptable policy).
You are certainly going to be passing an implementation of that domain service to the domain model; whether you pass it as a method argument or as a constructor argument really depends on the overall design. I wouldn't normally expect to see a domain entity with a domain service property, but a "use case" that manipulates entities might have one.
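A sketch of that shape, using the product service from the question (all names invented; failures assumed away, per the caveat above):

```java
// Domain layer owns the interface; it speaks the language of the model
// and knows nothing about HTTP, retries, or microservices.
interface ProductCatalog {
    ProductDetails detailsFor(ProductId id);   // assumed to always succeed
}

// Domain layer: here the capability is passed in as a method argument.
class Order {
    void addLine(ProductId id, int quantity, ProductCatalog catalog) {
        ProductDetails details = catalog.detailsFor(id);
        // ... apply pricing/validation rules using the fetched details ...
    }
}

// The application layer (or an adapter) would supply an implementation of
// ProductCatalog that actually calls the remote product service, and pass
// it in when invoking the use case.
```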
That all said, do keep in mind that nobody is handing out prizes for separating domain and application code in your designs. "Doing it according to the book" is only a win if that helps us to be more cost effective in delivering solutions with business value.
I'm currently learning about microservices because in the company where I work, we will split our giant monolith into microservices.
We have a lot of business logic in our application, but these rules basically validate data from many different domains and act according to the status of that data.
Our domain has its own data, of course, but I would even dare to say that something like 60-70% of the data we depend on comes from different domains, which makes our domain kind of an aggregator.
I created a little diagram to illustrate it:
So like I said before, my domain (Domain A) has a lot of business logic to validate data and status from all those different domains, and then, after this validation, it takes the appropriate actions and saves the result in the DB.
I feel like I've hit a dead end, because I have read a few articles on how to break down a monolith, but I haven't found any good example that explains this situation.
So I ask you guys, do you have any suggestions to tell me? :)
Thanks!
In DDD speak a bounded context approximates a microservice. The different domains in your diagram are probably going to be bounded contexts.
You most certainly do not want concepts from various BCs polluting each other as you are going to end up with quite a mess.
There is one place, however, where you may run into this and that is on integration/orchestration. Here you should approach this concern almost as a separate BC that relates to the orchestration or integration.
For instance, let's assume that you have an Assets domain and an Accounting domain. The two should know nothing about each other. However, when you decommission an asset (say some huge machine that grinds down rocks into stones) you perhaps need to have the accounting domain register some write-off value if the asset has not reached the end of its useful life. In this layer you would integrate the various bits of your process and manage the state using a process manager. Although the process manager, and related state, may belong to the Assets BC the AssetsOrchestration BC may make use of objects from both the Assets as well as the Accounting BCs. Typically you would attempt to limit that interaction to, say, messages using some messaging infrastructure but YMMV.
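Sketched with invented names, the orchestration piece might look like this; note that it only correlates messages between the two BCs:

```java
// Orchestration/integration concern: react to a message from the Assets BC
// and, when the business condition holds, send a command the Accounting BC
// understands. All types are invented for illustration.
class AssetDecommissioningProcess {
    void on(AssetDecommissioned message, MessageSender sender) {
        if (message.hasRemainingUsefulLife()) {
            sender.send(new RegisterWriteOff(message.assetId(), message.bookValue()));
        }
    }
}
```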
A good starting point may be Sam Newman's Microservice Decomposition Patterns.
About 18 minutes into the talk, Newman offers this pattern:
Within your monolith, identify the coarse modules of functionality.
Draw the dependency graph.
OK, your dependency graph has cycles in it. That's no good; you want an acyclic graph. So iterate on the graph until you are able to eliminate the cycles. DO NOT PASS GO until you have this done (a quick automated check is sketched after this list).
Identify candidate modules that have no inbound dependencies - you want the bits that nothing else depends on.
Pick one, and see if you can redesign your system to allow you to deploy that module independently of the monolith.
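The acyclicity requirement above is easy to check mechanically. A sketch using Kahn's algorithm, where each module maps to the modules it depends on (module names would come from your own codebase):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Kahn's algorithm: if a topological sort cannot consume every node,
// the leftover nodes are involved in at least one cycle.
static boolean isAcyclic(Map<String, List<String>> dependsOn) {
    Map<String, Integer> inDegree = new HashMap<>();
    dependsOn.keySet().forEach(m -> inDegree.putIfAbsent(m, 0));
    dependsOn.values().forEach(deps ->
        deps.forEach(d -> inDegree.merge(d, 1, Integer::sum)));

    Deque<String> ready = new ArrayDeque<>();
    inDegree.forEach((module, degree) -> { if (degree == 0) ready.add(module); });

    int visited = 0;
    while (!ready.isEmpty()) {
        String module = ready.poll();
        visited++;
        for (String dep : dependsOn.getOrDefault(module, List.of()))
            if (inDegree.merge(dep, -1, Integer::sum) == 0)
                ready.add(dep);
    }
    return visited == inDegree.size();
}
```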
Since the most similar questions relate to ASP MVC, I want to know some common strategies for making the right choice.
Let's try to decide: should permission checking go into the business layer or sit in the service layer?
Considering a service layer with a classical remote facade interface, it seems natural to just land permission checks here, as the user object instance is always at hand (the service session is bound to the user) and ready for .hasPermission(...) calls. But that looks like a business logic leak.
With the alternative approach of implementing security checks in the business layer, we pollute domain object interfaces with 'security token' arguments and similar things.
Any suggestions on how to overcome this tradeoff? Or maybe you know the one true solution?
I think the answer to this question is complex and worth a bit of thought early on. Here are some guidelines.
The service layer is a good place for:
Is a page public or only open to registered users?
Does this page require a user of a specific role?
Authentication process including converting tokens to an internal representation of users.
Network checks such as IP and spam filters.
The business layer is a good place for:
Does this particular user have access to the requested record? For example, a user should have access to their profile but not someone else's profile.
Auditing of requests. The business layer is in the best situation to describe the specifics about requests because protocol and other details have been filtered out by this point. You can audit in terms of the business entities that you are setting policy on.
You can play around a bit with separating the access decision from the enforcement point. For example, your business logic can have code that determines whether a user can access a specific record and present that as a callback to the service layer. Sometimes this will make sense.
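For instance, a sketch of that callback arrangement (invented names): the business layer owns the decision, while the service layer is merely the enforcement point.

```java
// Business layer owns the rule.
interface AccessDecision {
    boolean mayView(UserId requester, RecordId record);
}

// Service layer asks before serving the request.
class ProfileService {
    private final AccessDecision access;
    private final ProfileRepository profiles;

    ProfileService(AccessDecision access, ProfileRepository profiles) {
        this.access = access;
        this.profiles = profiles;
    }

    Profile view(UserId requester, RecordId record) {
        if (!access.mayView(requester, record))        // enforcement point
            throw new AccessDeniedException();         // invented exception
        return profiles.byId(record);
    }
}
```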
Some thoughts to keep in mind:
The more you can push security into a framework, the better. You are asking for a bug (and maybe a vulnerability) if you have dozens of service calls where each one needs to perform security checks in the beginning of the code. If you have a framework, use it.
Some security is best nearest the network. For example, if you wish to ban IP addresses that are spamming you, that definitely shouldn't be in the business layer. The nearer to the network connection you can get the better.
Duplicating security checks is not a problem (unless it's a performance problem). It is often the case that the earlier in the workflow that you can detect a security problem, the better the user experience. That said, you want to protect business operations as close to the implementation as possible to avoid back doors that bypass earlier security checks. This frequently leads to having early checks for the sake of UI but the definitive checks happening late in the business process.
Hope this helps.
In my domain, I have 2 bounded contexts that are relevant to this question:
Purchasing - where the customer orders services
Fulfillment - where services are assigned to vendors to be completed
It's a requirement that an order is editable by the customer at any given time throughout the life of the order.
If a customer removes a service from an order (i.e. within the purchasing context), and that service has already been assigned to a vendor (but has not yet been performed), then that service must also be removed in the fulfillment context.
There's a couple of options here, and I'd like the community's opinion:
I have my contexts wrong because this will create a cross-context transaction.
I may not need transactional consistency here. Of course, that's for the business stakeholder to decide, which raises two questions: What are the implementation options? How do I pose this question to the business stakeholder?
This is an acceptable violation of the "no cross-context transactions" rule.
EDIT
This is all happening within a single process, so the likelihood of mid-transaction failure is very low.
Here's the question to ask your stakeholder, re: an order being editable at all times - what does it mean for an order to be edited after it has already been fulfilled?
Why is it necessary that editing an order impacts the fulfillment service?
This, in my mind, crosses the bounded contexts. An order, while being edited, should not leave its domain unless there is good reason to. Why would any order information be propagated to the fulfillment service before it is complete?
Based on my obviously very limited understanding of your domain, I would think that you would complete the order first, then send a creation event to the service bus, where it is picked up by the fulfillment service. Therefore, no transactions are taking place that cross contexts.
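And if the business decides that eventual consistency is acceptable for the removal scenario as well, the hand-off might look like this (invented names); each context commits its own transaction, and the event carries the fact across:

```java
// Purchasing context: change the order locally, publish the fact.
class RemoveServiceFromOrder {
    void execute(Order order, ServiceId service, EventBus bus) {
        order.remove(service);     // local transaction only
        bus.publish(new ServiceRemovedFromOrder(order.id(), service));
    }
}

// Fulfillment context: react in its own transaction, eventually consistent.
class FulfillmentPolicy {
    void on(ServiceRemovedFromOrder event, AssignmentRepository assignments) {
        assignments.byService(event.serviceId())       // Optional<Assignment>
            .filter(a -> !a.isPerformed())             // only un-performed work
            .ifPresent(Assignment::withdraw);
    }
}
```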
How can a piece of hardware be an actor when designing a use case diagram?
I got confused because I've read on Wikipedia this:
A use case should not include detail regarding user interfaces and screens. This is done in user-interface design, which references the use case and its business rules.
If you give me an example about hardware being an actor, I'd be grateful.
I'd suggest the important part here is the definition of an actor.
An actor specifies a role played by a person or thing when interacting with the system
In the system of a traffic intersection, there are many 'hardware' actors, including Car and Traffic Light. The system under consideration is the rules around what to do (yield, merge, stop) and when.
How about a third-party system? For example, a warehouse management system that produces a feed of stock-level changes for different products, which is consumed by your retail application.
That would be an actor. It will not have a UI or screen, but communicate with your system, cause different events to occur and have its own business rules.
The following can all be "actors" in a system you are describing, provided these components are outside the scope of that system:
A scheduled task
A server component
An automated network client (or whatever's on the other end of a network connection)
If the source of a request for your system to do something is outside the scope of the system, it is usually not necessary to separate the human component from any external tool or hardware they are using to facilitate the requests on your system. In such cases, the actors could very well be automatons.