I am new to DDD, and this is my first project applying DDD and clean architecture. I have implemented some code in the domain layer and in the application layer, but I am currently confused about one point.
I have an API, placeOrder.
To place the order I need to grab data from other microservices: product details from the product service and address details from the user profile microservice.
My question is: should this data be pulled in the domain layer or in the application layer?
Is it possible to implement a domain service with interfaces that represent the product service API, use those interfaces in the domain layer, and have the implementations injected later via dependency injection? Or should I implement the calls to these remote APIs in the application layer?
Please note that calling the product service and the address service are prerequisite steps to creating the order.
Design is what we do when we want to get more of what we want than we would get by just doing it. -- Ruth Malan
should this data be pulled in the domain layer or in the application layer?
Eric Evans, introducing chapter four of the original DDD book, wrote:
We need to decouple the domain objects from other functions of the system, so we can avoid confusing the domain concepts with other concepts related only to software technology or losing sight of the domain altogether in the mass of the system.
The domain model doesn't particularly care whether the information it is processing comes from a remote microservice or a cache. It certainly isn't interested in things like network failures and re-try strategies.
Thus, in an idealized system, the domain model would only be concerned with the manipulation of business information, and all of the "plumbing" would be somewhere else.
Sometimes, we run into business processes where (a) some information is relatively "expensive" to acquire and (b) you don't know whether or not you need that information until you start processing the work.
In that situation, you need to start making some choices: does the domain code return a signal to the application code to announce that it needs more information, or do we instead pass to the domain code the capability to get the information for itself?
Offering the retrieval capability to the domain code, behind the facade of a "domain service", is a common implementation of the latter approach. It works fine when your failure modes are trivial (queries never fail, or abort-on-failure is an acceptable policy).
You are certainly going to be passing an implementation of that domain service to the domain model; whether you pass it as a method argument or as a constructor argument really depends on the overall design. I wouldn't normally expect to see a domain entity with a domain service property, but a "use case" that manipulates entities might have one.
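As an illustration, the latter approach might look like the following sketch (all names are hypothetical, and Python is used for brevity): a port defined alongside the domain code, a use case that receives an implementation through its constructor, and a stub implementation for tests.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass(frozen=True)
class ProductDetails:
    product_id: str
    unit_price: int  # in cents, to avoid float rounding


# Hypothetical "domain service" port: the domain states what it needs,
# not how it is fetched.
class ProductCatalog(Protocol):
    def product_details(self, product_id: str) -> ProductDetails: ...


@dataclass
class Order:
    lines: list  # (product_id, quantity, unit_price) tuples


# The use case receives the domain service via its constructor.
class PlaceOrder:
    def __init__(self, catalog: ProductCatalog):
        self._catalog = catalog

    def execute(self, items: dict) -> Order:
        lines = []
        for product_id, quantity in items.items():
            details = self._catalog.product_details(product_id)
            lines.append((product_id, quantity, details.unit_price))
        return Order(lines=lines)


# In production the catalog would wrap an HTTP client; in tests, a stub.
class InMemoryCatalog:
    def __init__(self, prices: dict):
        self._prices = prices

    def product_details(self, product_id: str) -> ProductDetails:
        return ProductDetails(product_id, self._prices[product_id])
```

Whether `PlaceOrder(InMemoryCatalog({"sku-1": 500}))` receives the catalog through its constructor or as an `execute` argument is, as noted above, an overall design decision; the constructor form shown here keeps the call site simple.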
That all said, do keep in mind that nobody is handing out prizes for separating domain and application code in your designs. "Doing it according to the book" is only a win if that helps us to be more cost effective in delivering solutions with business value.
Related
In DDD, I've seen the following code many times where an entity is passed as parameter to the save method of a repository.
class MysqlUserRepository
{
public function save(User $user) {}
}
As far as I know, infrastructure shouldn't know anything about domain. If this is correct, what am I missing here?
A Repository mediates between the domain and data mapping layers, acting like an in-memory domain object collection.
See: https://martinfowler.com/eaaCatalog/repository.html
The Repository is a mediator that has to "translate" from persistence to domain and vice versa. The persistence infrastructure gets/gives "raw" data, so there has to be something that glues these two worlds together in the most decoupled way possible. That means the Repository has to know both worlds.
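A minimal sketch of that mediation (illustrative names; an in-memory dict of "rows" stands in for a real database):

```python
from dataclasses import dataclass


@dataclass
class User:  # domain entity (illustrative)
    user_id: int
    email: str


# The repository knows both worlds: it maps raw rows to entities and back.
class InMemoryUserRepository:
    def __init__(self):
        self._rows = {}  # stand-in for a SQL table

    def save(self, user: User) -> None:
        # domain -> "raw" persistence shape
        self._rows[user.user_id] = {"id": user.user_id, "email": user.email}

    def find(self, user_id: int) -> User:
        # "raw" persistence shape -> domain
        row = self._rows[user_id]
        return User(user_id=row["id"], email=row["email"])
```

The domain entity never sees the row format; only the repository touches both.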
As far as I know, infrastructure shouldn't know anything about the domain
That's a valid point, which I believe can be concluded from Eric Evans's book “Domain-Driven Design”. Note, however, that Eric Evans leaves us in a somewhat paradoxical state.
In his book he talks about a layered architecture, from top to bottom:
“User Interface” —> “Application” —> “Domain” —> “Infrastructure”.
With it, he states:
The essential principle is that any element of a layer depends only on other elements in the same layer or on elements of the layers “beneath” it.
From this we can conclude that the infrastructure layer (being beneath the domain layer) should have no knowledge of the domain layer. However, Evans also describes the domain layer as the layer that defines the repository. In other words, we've reached an impossible state.
Since then, more has been written about Domain-Driven Design and I believe it was Vaughn Vernon’s book “Implementing Domain-Driven Design” that comes with a solution.
Instead of using a layered architecture, we could use Hexagonal Architecture defined by Alistair Cockburn. The mind-shift with Hexagonal Architecture is that the application is no longer an asymmetric, top-down, layered architecture. Instead it places the domain layer at the center. Everything else, including the infrastructure layer is placed outside this center (or hexagon). Communication is possible using ports defined by the domain "layer". The Repository can be seen as an example of such a port.
So the Domain layer can expose a port, in your case UserRepositoryInterface. Following Hexagonal Architecture, the infrastructure layer is allowed to communicate with the domain "layer". As such, it can implement the interface with MysqlUserRepository.
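Sketched in code (hypothetical names; a dict stands in for a real MySQL connection), the port lives with the domain and the adapter implements it from the outside, so the dependency points inward:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


# --- domain "layer": defines the port ---
@dataclass
class User:
    user_id: int
    name: str


class UserRepositoryInterface(ABC):
    @abstractmethod
    def save(self, user: User) -> None: ...

    @abstractmethod
    def by_id(self, user_id: int) -> User: ...


# --- infrastructure "layer": implements the port, depending inward ---
class MysqlUserRepository(UserRepositoryInterface):
    """Illustrative only: backed by a dict instead of a MySQL connection."""

    def __init__(self):
        self._table = {}

    def save(self, user: User) -> None:
        self._table[user.user_id] = user

    def by_id(self, user_id: int) -> User:
        return self._table[user_id]
```

The domain code only ever references `UserRepositoryInterface`; which concrete adapter is wired in is decided outside the hexagon.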
... infrastructure shouldn't know anything about domain. If this is correct ...
This isn't correct. In DDD, the infrastructure layer depends on the domain layer, so it knows whatever the domain chooses to expose. There's no rule in DDD about how much the domain should expose. It's up to you: you can expose the whole entity, a DTO, a DPO, ...
On the other hand, the domain doesn't know anything about the infrastructure (that part is correct).
Several sources claim that process managers do not contain any business logic. A Microsoft article for example says this:
You should not use a process manager to implement any business logic in your domain. Business logic belongs in the aggregate types.
Further up they also say this (emphasis mine):
It's important to note that the process manager does not perform any business logic. It only routes messages, and in some cases translates between message types.
However, I fail to see why translations between messages (e.g. from a domain event to a command) are not part of the business logic. You require a domain expert in order to know what the correct order of steps and the translations between them are. In some cases you also need to persist state in-between steps, and you maybe even select next steps based on some (business) condition. So not everything is a static list of given steps (although that alone I’d call business logic too).
In many ways a process manager (or saga for that matter) is just another aggregate type that persists state and may have some business invariants, in my opinion.
Assuming that we implement DDD with a hexagonal architecture, I'd place the process manager in the application layer (not the adapter layer!) so that it can react to messages or be triggered by a timer. It would load a corresponding process manager aggregate via a repository and call methods on it that either set its (business) state or ask it for the next command to send (where the actual sending is done by the application layer, of course). This aggregate lives in the domain layer because it contains business logic.
I really don't understand why people make a distinction between business rules and workflow rules. If you delete everything except the domain layer, you should be able to reconstruct a working application without needing to consult a domain expert again.
I'd be happy to get some further insight I might be missing.
A fair portion of the confusion here is a consequence of semantic diffusion.
The spelling "process manager" comes from Enterprise Integration Patterns (Hohpe and Woolf, 2003). There, it is a messaging pattern; more precisely, it is one possible specialization of a message router. The motivation for a message router is a decoupling of the sender and receiver.
If new message types are defined, new processing components are added, or routing rules change, we need to change only the Message Router logic, while all other components remain unaffected.
Process manager, in this context, refers to a specialization of message router that sits in the middle of a hub and spoke design, maintaining the state of the processing sequence and "determining the next processing step based on intermediate results".
The "process definition" is, of course, something that the business cares about -- we're passing these messages around to coordinate activities in different parts of the enterprise, after all.
And yes... this thing that maintains the "state of the processing sequence", sounds a lot like an example of a "domain entity", this is true.
BUT: it is an entity of the message-routing domain; which is to say, it is bookkeeping to ensure that messages go to the right place, rather than bookkeeping of business information (e.g. the routing of shipping containers).
Expressed in the language of hexagonal architecture, what a process manager is doing is keeping track of messages sent to other hexagons (and, of course, the messages that they send back).
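A toy sketch of such a process manager (names hypothetical): it holds nothing but the routing state and a transition table, and answers the question "which message goes out next, given what came in".

```python
from typing import Optional


class OrderShippingProcess:
    """Process manager in the EIP sense: routing bookkeeping, no
    business rules of its own."""

    def __init__(self):
        self.state = "awaiting_payment"

    def handle(self, message: str) -> Optional[str]:
        # (current state, incoming message) -> (next state, outgoing command)
        transitions = {
            ("awaiting_payment", "payment_confirmed"):
                ("awaiting_stock", "ReserveStock"),
            ("awaiting_stock", "stock_reserved"):
                ("done", "ShipOrder"),
        }
        key = (self.state, message)
        if key not in transitions:
            return None  # unexpected message for this state; ignore or dead-letter
        self.state, next_command = transitions[key]
        return next_command
```

Note that the decisions of *why* stock is reserved or *how* an order ships live in the hexagons this manager routes between; only the sequencing state lives here.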
Domain logic is not only to be found in aggregates and domain services. Other places are:
Appropriately handling domain events. Domain events can be translated into one or multiple commands to aggregates; they can trigger those commands based on some condition/policy on the event itself and/or the state of other aggregates; they can inform an ongoing business process to proceed with its next step(s), and so forth. All of these things are part of domain logic.
A business process is a (potentially distributed) state machine that may involve various actors/users/systems. The allowed states and transitions between them are all a core part of the domain logic.
A saga is an eventually consistent transaction spanning multiple local or foreign aggregates that completes either successfully or compensates already executed steps in a best-effort manner. The steps that make up a saga can only be known to domain experts and thus are part of the domain logic.
The reasons why these three things are – in my opinion – mistaken as an application-layer-only concern are the following:
In order to handle a domain event, we must load and later save the affected aggregates. Because of this, the handler must also be part of the application layer. But crucially, not only. If we respect the idea behind a hexagonal architecture, then for each domain event handler residing in the application layer, there must be a corresponding one located in the domain layer. Even for the most trivial case where one domain event translates to exactly one command method invocation on some aggregate. This is probably omitted in many examples because it initially adds little value. But just imagine that later on, the translation will be based on some further business condition. Will we also just place that in the application layer handler? Remember: All of our domain logic should be in the domain layer.
A side note: Even if we respect this separation of concerns, we still have the choice of letting the domain event be handled by an aggregate itself or let it be translated into an aggregate command by a thin domain service. This choice however is based on an entirely different concern: Whether or not we want to couple aggregates more tightly. There is no right or wrong answer here. Some things just naturally are coupled more tightly while others might benefit from some extra indirection for increased flexibility.
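A minimal sketch of such a thin domain-layer translation (all names hypothetical): the event-to-command policy, including its business condition, is plain domain code with no I/O, while loading and saving aggregates stays in the application-layer handler around it.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class OrderPlaced:  # domain event (illustrative)
    order_id: str
    total: int


@dataclass(frozen=True)
class GrantLoyaltyPoints:  # command (illustrative)
    order_id: str
    points: int


def on_order_placed(event: OrderPlaced) -> list:
    """Domain-layer policy: translate an event into zero or more commands.

    The business condition lives here, not in the application-layer
    handler that merely loads/saves aggregates and dispatches commands.
    """
    if event.total >= 100:
        return [GrantLoyaltyPoints(event.order_id, event.total // 10)]
    return []
```

If the condition later grows (tiers, campaigns, ...), it grows here in the domain layer rather than leaking into application code.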
In order to correctly implement a business process or saga, we have to take care of various application-specific concerns like message de-duplication, idempotency, retries, timeouts, logging, etc., although one might argue that the domain logic itself should be responsible for dealing with at least some of these aspects first-hand (see Vaughn Vernon's excellent talk about modelling uncertainty). Remember, however, that the essence of the sequence of (allowed) steps/actions is entirely based on domain logic.
Finally, a word about coupling. In my view, there is a tendency in the community to treat coupling as a bad thing per se that must be avoided/mitigated by all means. This might lead to solutions like placing event-command translations (remember: domain logic!) out in the adapter layer of a hexagonal/onion/clean architecture. This layer's responsibility is to adapt something to something else with the same semantics/function but a slightly different form (think power adapters). It is not the place to host any kind of domain logic, even if it is dead simple. Businesses have dependencies and coupling all over the place. The art is to embrace coupling where it actually belongs, and avoid it otherwise. There is a reason why we have partnership or customer/supplier relationships in DDD. And if we care about domain logic isolation, those dependencies are reflected right where they belong: in the domain layer.
A side note: An anti-corruption layer (DDD) is a valid example of an adapter. For example, it may take a bunch of remote domain events and transform/combine them in any way necessary to suit the local model. They still remain events that happened in the past, and don't just magically become commands. The transformation only changes form, not function. And it doesn't eliminate the inevitable coupling from a domain perspective. It just rephrases the same thing in a slightly different language.
Since the most similar questions relate to ASP MVC, I want to know some common strategies for making the right choice.
Let's try to decide whether this belongs in the business layer or sits in the service layer.
Considering that the service layer has a classical remote facade interface, it seems natural to land permission checks there, as the user object instance is always available (the service session is bound to the user) and ready for .hasPermission(...) calls. But that looks like a business-logic leak.
In the alternative approach, implementing security checks in the business layer, we pollute domain object interfaces with 'security token' arguments and similar things.
Any suggestions on how to overcome this trade-off, or do you perhaps know the one true solution?
I think the answer to this question is complex and worth a bit of thought early on. Here are some guidelines.
The service layer is a good place for:
Is a page public or only open to registered users?
Does this page require a user of a specific role?
Authentication process including converting tokens to an internal representation of users.
Network checks such as IP and spam filters.
The business layer is a good place for:
Does this particular user have access to the requested record? For example, a user should have access to their profile but not someone else's profile.
Auditing of requests. The business layer is in the best situation to describe the specifics about requests because protocol and other details have been filtered out by this point. You can audit in terms of the business entities that you are setting policy on.
You can play around a bit separating the access decision from the enforcement point. For example, your business logic can have code to determine if a user can access a specific role and present that as a callback to the service layer. Sometimes this will make sense.
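A sketch of that separation (illustrative names): the business layer owns the access decision, and the service layer enforces it, receiving the decision as a callback.

```python
# Business layer: makes the access *decision*. Pure rule, no protocol details.
def can_view_profile(requesting_user_id: int, profile_owner_id: int) -> bool:
    # Business rule: a user may only view their own profile.
    return requesting_user_id == profile_owner_id


# Service layer: *enforces* the decision it was handed as a callback.
def get_profile(requesting_user_id: int, profile_owner_id: int,
                decide=can_view_profile) -> str:
    if not decide(requesting_user_id, profile_owner_id):
        raise PermissionError("access denied")
    return f"profile of user {profile_owner_id}"
```

Passing the decision in as a callable keeps the rule testable on its own and lets the service layer stay ignorant of why access was granted or denied.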
Some thoughts to keep in mind:
The more you can push security into a framework, the better. You are asking for a bug (and maybe a vulnerability) if you have dozens of service calls where each one needs to perform security checks in the beginning of the code. If you have a framework, use it.
Some security is best nearest the network. For example, if you wish to ban IP addresses that are spamming you, that definitely shouldn't be in the business layer. The nearer to the network connection you can get the better.
Duplicating security checks is not a problem (unless it's a performance problem). It is often the case that the earlier in the workflow that you can detect a security problem, the better the user experience. That said, you want to protect business operations as close to the implementation as possible to avoid back doors that bypass earlier security checks. This frequently leads to having early checks for the sake of UI but the definitive checks happening late in the business process.
Hope this helps.
I'm looking at developing a system (primarily web based) that has a clearly defined domain.
Parts of the domain include entities like Diary, Booking, Customer, etc.
However, I created another entity called User whose intention is for authentication and authorization only (it seemed wrong to contaminate the Customer entity with data specific to authentication). I figure this isn't part of the domain of "making bookings"; rather, it should belong to the application layer (I'm trialling a hexagonal architecture).
I'm accessing my repositories using interfaces in my domain model and wiring them up to my persistence layer using IoC.
My questions are these:
Should I put the authentication/authorization code in the application and keep it out of the domain?
If I do keep it out of the domain, should I put the interface for the UserRepository in the application layer also (I think this would make sense)?
If I do keep it out of the domain, I will end up with entities also in the application layer called User etc. This seems wrong.
What are people's thoughts?
[EDIT]
I've gone for a solution that takes a bit from both answers, so thanks for answering and I've +1'd you both.
What I've done is put the authentication/authorisation code in a subdomain (secondary adapter), in a separate project. Because it requires its own persistence (a few collections in a separate RavenDB database), I'm including that straight in the separate project, keeping it apart from the main persistence layer.
Should I put the authentication/authorization code in the application and keep it out of the domain?
No, you should keep authentication/authorization code out of the core domain. It belongs to a generic subdomain.
If I do keep it out of the domain, should I put the interface for the UserRepository in the application layer also (I think this would make sense)?
You could keep UserRepository in the domain layer, but you'd better keep the "access and identity" subdomain and the "making bookings" core domain separated from each other. You could use different packages or namespaces.
The next challenge is how to integrate these two domains. In my humble opinion, you may:
Expose a DomainService from the "access and identity" subdomain to the application layer for making authentication/authorization decisions.
Sometimes we have to find out who made Diaries and Bookings; in that case, the identifier of the User is enough. Other information, such as "favorite tags", is usually not needed in the "making bookings" core domain.
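A sketch of such an integration (all names hypothetical): the application layer consults the identity subdomain's domain service, and only the user's identifier crosses into the booking core domain.

```python
class AccessService:
    """Domain service of the 'access and identity' subdomain."""

    def __init__(self, allowed_user_ids: set):
        self._allowed = allowed_user_ids

    def may_book(self, user_id: str) -> bool:
        return user_id in self._allowed


class BookingService:
    """Application service of the 'making bookings' core domain."""

    def __init__(self, access: AccessService):
        self._access = access
        self.bookings = []

    def book(self, user_id: str, slot: str) -> None:
        if not self._access.may_book(user_id):
            raise PermissionError("user may not book")
        # Only the identifier crosses into the booking domain; no roles,
        # tokens, or profile data leak in.
        self.bookings.append((user_id, slot))
```

Keeping the two behind separate classes (or packages/namespaces, as suggested above) makes the boundary between the subdomains explicit.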
Authentication is a generic subdomain, not a part of your core domain. So, yes, keep it in some component in the application layer.
Not all parts of your system have to follow DDD patterns. You can use a UserRepository if you see a benefit in it, but I would just use some already-available component or popular library from your environment, like MembershipProvider in the ASP.NET world.
By "security" I mean data access rights, for example:
Andrew only has read-only access to clients in France
Brian can update clients in France and Germany
Charles is an administrator, he has read and update rights for everything
I can see potential arguments for each layer.
Data Access Layer
The DAL only exposes clients to which the user has access, and passes an appropriate error up to the business layer when the user tries to do something unauthorised.
This simplifies the upper layers, and can reduce the data traffic for users who only have access to a small fraction of the data.
Business Layer
Because this is where the business logic resides and only the business layer has the complete knowledge of how the security should be implemented.
UI Layer
A tangent argument is because the UI layer is the one that deals with authentication.
A stronger argument is when the application has non-UI functions: calculating the daily P&L, archiving, etc. These programs don't have a security context and creating a fictitious 'system' user is a maintenance nightmare.
A separate layer?
Slotted somewhere inside the 3?
I'm looking for a cogent argument which will convince me that layer X is the best for large-scale 3-Tier applications. Please refrain from 'it depends' answers ;-).
Thanks.
I guess this may be a subjective topic. Nevertheless, we follow the principle of never trusting any external source (e.g. data crossing a service boundary). Typically, modern applications are a bit different from the old client-server three-tier model, since they are usually service-oriented (I see a web server also as a service).
This rules out the delegation of access checks to the client - the client may know about the allowed access and use this information to behave differently (e.g. not offer some functionality or so), but in the end only what the service (server) decides to allow counts.
On the other hand, the database or DAL is too low, since most checks also depend on some business logic or on external information (such as user roles). So this rules out the data layer; in our environments, data access is a trusted space that does not do any checks. In the end, the DB layer and the application server form a logical unit (one could call it a fortress, as per Roger Sessions' Software Fortresses book), where no service boundary exists. If the app layer accesses another service, however, it has to perform checks on the received data.
In summary, you might want to get a copy of Roger Sessions' book, because it gives some valuable input and food for thought on large-scale applications and how to deal with security and other issues.