So, I've got a WCF service that accepts commands and maps them to calls into the domain services layer. For write-style commands against the domain, this pattern works nearly perfectly.
What I'm wondering is how everyone is handling reads, more specifically, getting lists of aggregates from the model for display. As I stated, I have a WCF service that calls into the service layer. Currently, I have a method on my service that returns a list of aggregate roots. Somehow, this feels a bit dirty: I'm polluting my domain services with GetByXXX-style methods.
I'm looking for a bit of guidance on the search and retrieval of domain objects through the application services layer.
Edit:
Thinking and reading a bit more, is it appropriate to directly use a repository in the application layer to handle fetching of entities?
I usually go with a simple query layer that returns a DataTable for collections and a DataRow for a single item. For something more structured I would use a DTO. So all your GetByXXX methods could sit in the query layer.
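For illustration, a minimal sketch of such a query layer, assuming plain ADO.NET against SQL Server (the class, table and column names here are invented):

using System.Data;
using System.Data.SqlClient;

// Read-only query layer: no repositories or aggregates involved.
public class CustomerQueries
{
    private readonly string _connectionString;

    public CustomerQueries(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Flat DataTable for list/grid display scenarios.
    public DataTable GetByCountry(string countryCode)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "SELECT Id, Name, City FROM Customers WHERE CountryCode = @country", connection))
        {
            command.Parameters.AddWithValue("@country", countryCode);
            var table = new DataTable();
            new SqlDataAdapter(command).Fill(table);
            return table;
        }
    }
}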
Repositories are better suited for supporting operations that change state. Even when you're fetching an aggregate through a Repository, it is because you intend to change the state and persist it back right away:
var entity = repository.Get(id);
entity.ChangeSomeState();
repository.Save(entity);
In that scenario, Get returns an aggregate that is ready for modification (e.g. attached to the context if using EF, or session in NHibernate). The focus here is consistency.
Now, for queries you are better off with a Query class, which supports read-only scenarios with a focus on performance.
All your GetByXXX methods will live in the Query class. You can even create specialized Query classes, e.g. one for Admin queries, another for Customer queries, and so on.
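As a rough sketch of such a Query class returning DTOs (using Dapper purely as an example of a thin read layer; every name below is hypothetical):

using System;
using System.Collections.Generic;
using System.Data;
using System.Linq;
using Dapper;

// Read-side DTO: shaped for the screen, not for the domain.
public class OrderSummaryDto
{
    public Guid Id { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

// Query class: read-only, optimized for display, no change tracking.
public class CustomerOrderQueries
{
    private readonly IDbConnection _connection;

    public CustomerOrderQueries(IDbConnection connection)
    {
        _connection = connection;
    }

    public IReadOnlyList<OrderSummaryDto> GetByCustomer(Guid customerId)
    {
        return _connection.Query<OrderSummaryDto>(
            @"SELECT o.Id, c.Name AS CustomerName, o.Total
              FROM Orders o JOIN Customers c ON c.Id = o.CustomerId
              WHERE o.CustomerId = @customerId",
            new { customerId }).ToList();
    }
}

Specialized variants (AdminOrderQueries, CustomerOrderQueries and so on) are then just more classes like this one, each grouping the reads for a particular consumer.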
For extra info, have a look at these articles:
Command and Query Responsibility Segregation (CQRS) pattern
CQRS with MediatR in ASP.NET Core 3.1 – Ultimate Guide
Related
I have an aggregate with a root entity (Documentation) and a VO (Document). Documents are associated with files (PDFs, images, office documents, etc.), so I have to persist the aggregate in a database and the files on an FTP server (the files cannot be saved in the database because they are too large).
My DB repository class implements an interface with methods like FindXXX, AddDocument, RemoveDocument and others. How could I implement FTP persistence? Should my DB repository connect to the FTP server in AddDocument and RemoveDocument? Or should I create an FTP repository class that implements the same interface? If so, methods like FindXXX would not make sense there.
As far as I know about DDD, each aggregate has only one repository interface that represents how it can be persisted. It can have multiple "persistence modes" (a database, FTP, the file system, etc.) but the interface should stay the same.
As far as I know about DDD, each aggregate has only one repository interface that represents how it can be persisted.
That's mostly true; people generally assume that an entire aggregate is going to be stored in a single place. When you distribute the state of the aggregate across multiple storage units, your failure modes need very careful attention.
So one thing to consider is whether the separately stored documents are something that is part of the aggregate, or something that is referenced by the aggregate.
If they are referenced by the aggregate, then you treat them like any other reference to another aggregate: the documentation aggregate stores an identifier/reference/hint for the document, and takes advantage of a domain service to access the document if it needs it.
If they are part of the aggregate, then the usual answer is that "the repository" will be a facade in front of a complicated infrastructure thing that masks the fact that the documentation and the document(s) are stored separately.
In other words, the infrastructure layer will be trying to orchestrate the load and store operations, and the rest of the system doesn't need to know the details.
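A minimal sketch of such a facade, assuming the documentation metadata goes to a database and the files go to an FTP/blob store; every type below is a hypothetical placeholder:

using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

// Hypothetical aggregate root, trimmed down to what the example needs.
public class Documentation
{
    public int Id { get; set; }
    public List<string> DocumentPaths { get; } = new List<string>();
}

// e.g. an EF- or ADO-backed table behind this
public interface IDocumentationMetadataStore
{
    Task SaveAsync(Documentation documentation);
}

// e.g. FTP today, blob storage tomorrow
public interface IDocumentFileStore
{
    Task UploadAsync(string path, Stream content);
}

// The facade the domain sees: one repository for the whole aggregate.
public class DocumentationRepository
{
    private readonly IDocumentationMetadataStore _metadata;
    private readonly IDocumentFileStore _files;

    public DocumentationRepository(IDocumentationMetadataStore metadata, IDocumentFileStore files)
    {
        _metadata = metadata;
        _files = files;
    }

    public async Task AddDocumentAsync(Documentation documentation, string path, Stream content)
    {
        // Orchestrate both stores; callers never learn there are two.
        await _files.UploadAsync(path, content);
        documentation.DocumentPaths.Add(path);
        await _metadata.SaveAsync(documentation);
        // Failure handling / compensation between the two stores is exactly the
        // "careful attention" mentioned above and is deliberately left out here.
    }
}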
Late response, but simply put: you should have two services. In my reading of DDD, repositories are often considered infrastructure services. In this case, you have two:
A repository / interface for storing and retrieving document IDs, metadata, and references
A repository / interface for storing and retrieving blobs of data (sketched below)
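In code, the split might look roughly like this (all names are placeholders):

using System.IO;
using System.Threading.Tasks;

// 1. Metadata / references: IDs, file names, where the blob lives.
public interface IDocumentMetadataRepository
{
    Task AddAsync(string documentId, string fileName, string blobLocation);
    Task<string> FindBlobLocationAsync(string documentId);
    Task RemoveAsync(string documentId);
}

// 2. The blobs themselves: FTP, file system, S3... this interface doesn't care.
public interface IDocumentBlobRepository
{
    Task PutAsync(string blobLocation, Stream content);
    Task<Stream> GetAsync(string blobLocation);
    Task DeleteAsync(string blobLocation);
}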
Sometimes it makes sense to have multiple aggregates and repositories. In fact, some of Vaughn Vernon's examples on bounded contexts (https://github.com/VaughnVernon/IDDD_Samples) do include aggregates holding references to other aggregates. I would argue that you should do what makes sense and what feels appropriate.
Indeed, if you were running a post office collection centre, chances are you would have a way of (1) storing parcels of all sizes, and (2) curating an index of where every parcel is located in the centre so that you can retrieve it.
My DB repository class implements an interface with methods like FindXXX, AddDocument, RemoveDocument and others. How could I implement FTP persistence? Should my DB repository connect to the FTP server in AddDocument and RemoveDocument? Or should I create an FTP repository class that implements the same interface?
If your database repository connects to FTP in addition to some other data store, you may arguably be putting too much logic and responsibility in one place. That said, there is nothing inherently wrong with doing so.
If so, methods like FindXXX would not make sense there. As far as I know about DDD, each aggregate has only one repository interface that represents how it can be persisted.
For this specific problem, most DDD practitioners will recommend a separate view service / model. It can produce a materialised view / DTO across repositories or services.
Fundamentally, it should be easy to test individual parts and to replace underlying implementations. If you decided one day to switch from FTP to Google Cloud Storage or AWS S3 (or to support both), there might be more work involved, and changes to test cases.
Abstract
I am modelling a generic authorization subdomain for my application. The requirements are quite complicated, as it needs to cope with multiple tenants, a hierarchical organisation structure, resource groups, user groups, permissions, user-editable permissions and so on. It's a mixture of RBAC (users are assigned to roles, roles have permissions, permissions allow commands to be executed) with claims-based auth.
Problem
When checking for business rule invariants, I have to traverse the permission "graph" to find a permission for a user to execute a command on a resource in an environment. The traversal depth is arbitrary, on multiple dimensions.
I could model this using code, but it would be best represented using a graph database as queries/updates on this aggregate would be faster. Also, it would reduce the complexity of the code itself. But this would require the graph database to be immediately consistent.
Still, I need to use CQRS/ES, and enable a distributed architecture.
So the graph database needs to be:
- Immediately consistent

And this introduces some drawbacks:
- When loading events from the event store, we have to reconstruct the graph database each time
- Or, we have to introduce some kind of graph database snapshotting
- Overhead when communicating with the graph database

But it has advantages:
- Reduced complexity of performing complex queries
- Complex queries are resolved faster than with code
- The graph database is perfect for this job
Why this question?
In other aggregates I modelled, I often have an EntityList instance or an EntityHierarchy instance. They are basically ordered/hierarchical collections of sub-entities. Their implementation is arbitrary; they can be backed by anything from indexes to key-value pairs to dynamic arrays, as long as they implement the interfaces I declared for them. I often even have methods like findById() or findByName() on those lists. Those methods are similar to methods that could be executed on a database, but they're executed in-memory.
Thus, why not have an implementation of such a list that could be bound to a database? For example, instead of having a TMemoryEntityList, I would have a TMySQLEntityList. In the case at hand, perhaps having a TGraphAuthorizationScheme implementation that lives inside a TOrgAuthPolicy aggregate would be desirable, as long as it behaves like a collection, is iterable, and supports the defined interfaces.
I'm building my application with JavaScript on Node.js. There is an in-memory implementation of this called LevelGraph. Maybe I could use that as well. But let's continue.
Proposal
I know that in DDD terms the infrastructure should not leak into the domain. That's what I'm trying to prevent. That's also one of the reasons I'm asking this question: it's the first time I have encountered such a technical need, and I'd like advice from people who are used to coping with this kind of problem.
The interface for the collection is IAuthorizationScheme. The implementation has to support deep traversal, authorization lookup, etc. This is the interface I am thinking of backing with a graph database.
Sequence:
1. When a user asks to execute a command, I first authenticate him. I find his organisation, and ask the OrgAuthPolicyRepository to load his organisation's corresponding OrgAuthPolicy.
2. The OrgAuthPolicyRepository loads the events from the EventStore.
3. The OrgAuthPolicyRepository creates a new OrgAuthPolicy, with a dependency-injected TGraphAuthorizationScheme instance.
4. The OrgAuthPolicyRepository applies all previous events to the OrgAuthPolicy, which in turn calls queries on the graph database to sync its state with the aggregate.
5. The command handler executes the business rule validation checks. Some of them might include checks against the aggregate's IAuthorizationScheme.
6. Once the business rules have been validated, a domain event is dispatched.
7. The aggregate handles this event and applies it to itself. This might include changes to the IAuthorizationScheme.
8. The eventBus dispatches the event to all listening eventHandlers on the read side.
Example :
In summary
Is it conceivable/desirable to implement entities using external databases (e.g. a graph database) so that their implementation is easier? If yes, are there examples of such implementations, or guidelines? If not, what are the drawbacks of such a technique?
To solve your task I would consider the following options, going from top to bottom:
1. Reduce task complexity by employing security frameworks or identity management solutions. Some existing out-of-the-box identity management solution might do the job. If it doesn't, take a look at frameworks that help you implement your own. Unfortunately I'm not familiar enough with the Node.js world to recommend any; in the Java world that could be Apache Shiro or Spring Security. This could be a good option from both a cost and a security perspective.
2. Maintain a single model instead of CQRS. This eliminates the consistency problems (if you decide to store your models in separate resources). From my understanding, permissions are not changed frequently but are accessed frequently. This means you can live with one model optimised for reads, avoiding both the consistency issues and the burden of maintaining two models. To track user behaviour you can implement auditing separately; in my experience security auditing tends to require additional data which is most likely not in your data model anyway.
3. Do it with CQRS. Here I would first revisit the requirements to find a way to accept eventual consistency instead of strong consistency, since this opens up many implementation options.
Regarding the question of whether you should introduce a dedicated graph database: it's impossible to answer without knowledge of your domain, budget, desired system throughput and performance, existing infrastructure, team knowledge and setup, etc. You need to estimate the costs of the solution with a dedicated graph database and without it. My feeling is that unless permission management is the main idea of your project, or your project is mature enough (in number of users and R&D capacity), a dedicated database is unlikely to pay back its costs for this task.
To understand what the benefits of a dedicated graph database could be, compare it against your existing storage solutions. These two articles explain such benefits pretty well:
http://neo4j.com/developer/graph-db-vs-nosql/
http://neo4j.com/developer/graph-db-vs-rdbms/
My team has recently decided to adopt DDD for our new projects because of its many obvious benefits (we are coming from the Active-Record school), and there are a couple of things that are still unclear.
Say I have an entity Transaction that depends on the following entities (each of which in turn depends on many other entities):
1. Customer
2. Account
3. Currency
When I make use of factories to instantiate a Transaction entity to pass to a Domain Service for some fancy business rules, do I have to make that many queries to set up all these dependent instances?
If I have overloads in my factory that skip such dependencies, then those will be null in some cases and it will become too complicated to differentiate when I can access those properties and when I cannot. With the Active-Record pattern I just use lazy loading and have them load only on demand. Any ideas for handling this with DDD?
EDIT:
In my scenario “Transaction” seems to be the best candidate for an Aggregate root. I have defined a method “InitiateTransaction” in my Application Service (there is also a “FinalizeTransaction”, as the flow involves a redirect to PayPal) which takes as parameters the DTOs needed to carry AccountId, CurrencyId, LanguageId and various other foreign keys, as well as Transaction attributes.
When calling my Domain Services (Transaction Processor and Fraud Rule Evaluator), I need to specify the “Transaction” Aggregate with all dependencies loaded (“Transaction.Customer”, “Transaction.Currency”, etc.).
So if I am correct the steps required are:
1. Call some repository(ies) to retrieve Customer, Currency etc.
2. Call TransactionFactory with dependencies specified above to get a Transaction object
3. Call Domain Services with fully loaded Transaction object for business rules to take place
Correct? Additionally, my concern was about steps 1 and 2.
“Customer”, “Currency” and the other Entities/Value Objects that “Transaction” depends on have, in turn, their own dependencies. Do I try to set those up as well? It seems to me that if I do, I will end up with very bloated code in my Application Service which is not very reusable even if placed in a separate method. However, if I don't and just retrieve them from a repository with a “GetById(id)” as you suggested, my code could end up buggy: say I need the property “Transaction.Customer.CreatedByUser”, which returns a “User” instance; it will be null because the repositories only load flat instances.
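In code, I picture something roughly like this (all the types below are simplified stand-ins, not my real ones):

using System;
using System.Threading.Tasks;

public class Customer { public Guid Id { get; set; } }
public class Currency { public string Code { get; set; } }
public class Transaction { /* aggregate root */ }

public interface ICustomerRepository { Task<Customer> GetById(Guid id); }
public interface ICurrencyRepository { Task<Currency> GetById(Guid id); }
public interface ITransactionFactory { Transaction Create(Customer customer, Currency currency, decimal amount); }
public interface ITransactionProcessor { void Process(Transaction transaction); } // domain service

public class InitiateTransactionDto
{
    public Guid CustomerId { get; set; }
    public Guid CurrencyId { get; set; }
    public decimal Amount { get; set; }
}

public class TransactionApplicationService
{
    private readonly ICustomerRepository _customers;
    private readonly ICurrencyRepository _currencies;
    private readonly ITransactionFactory _factory;
    private readonly ITransactionProcessor _processor;

    public TransactionApplicationService(
        ICustomerRepository customers, ICurrencyRepository currencies,
        ITransactionFactory factory, ITransactionProcessor processor)
    {
        _customers = customers;
        _currencies = currencies;
        _factory = factory;
        _processor = processor;
    }

    public async Task InitiateTransaction(InitiateTransactionDto dto)
    {
        // Step 1: fetch only the dependencies the domain services need.
        var customer = await _customers.GetById(dto.CustomerId);
        var currency = await _currencies.GetById(dto.CurrencyId);

        // Step 2: let the factory assemble the aggregate.
        var transaction = _factory.Create(customer, currency, dto.Amount);

        // Step 3: hand the fully loaded aggregate to the domain service.
        _processor.Process(transaction);
    }
}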
EDIT:
I ended up using GetById(id) to load only the dependencies I knew were needed in my Services. Not a big fan of accidentally accessing null instances due to flat loading, but I have my unit tests to protect me from taking that to production!
I highly doubt that Currency is an entity; regardless, it's important to model things according to how they are defined and used by the real Domain. Forget factories and other implementation details like the database; you need to make sure you have defined the concepts right.
Once you've done that, you'll have identified the aggregate root as well. By the way, the entities should encapsulate the relevant business rules. Use Services to implement use cases, i.e. to manage the interaction between the domain objects and other parts such as the repository.
You should keep EVERYTHING related to the database and CRUD in the repository, and have the repo work only with aggregate roots. Also, for querying purposes, you should use CQRS so that all the queries are done on a read model. For Domain purposes, a Get(id) is enough 99% of the time, and that method returns an aggregate root.
Be aware that DDD is very tricky; the most difficult part is modeling the Domain correctly, and all the buzzwords are useless if the model is wrong.
Probably similar questions have been asked many times, but I think that every response helps to make the understanding of DDD better and better. I would like to describe how I perceive certain aspects of DDD. I have some basic uncertainties around them and would appreciate it if someone could give a solid and practical answer. Please note, these questions assume a 'classic' approach to DDD. This means using ORMs etc.; approaches like CQRS and event sourcing are not considered here.
Aggregates and entities are the primary objects that implement domain logic. They have state and identity. In this context, I perceive domain logic as the set of all commands that mutate that state. Does that make sense? Why is domain logic related exclusively to state? Is it legal to model domain objects that have no identity or no state? Why can't a domain object be implemented as a transaction script? Example: Consider an object that recommends you a partner for a dating site. That object has no real state, but it performs quite a lot of domain logic. Putting that into the service layer implies that the domain model cannot cover all the logic.
Access to other domain objects. Can aggregates have access to a repository? Example: When a (stateful) domain object needs access to all 'users' of the system to perform its work, it would need to access them via the repository. As a consequence, an ORM would need to inject the repository when loading the object (which might be technically more challenging). If objects can't have access to repositories, where would you put the domain logic for this example? In the service layer? Isn't the service layer supposed to have no logic?
Aggregates and entities should not talk to the outside world; they are only concerned with their bounded context. We should not inject external dependencies (like IPaymentGateway or IEmailService) into a domain object, as this would cause the domain to handle exceptions that come from outside. Solution: an event-based approach. But how do you send events then? You still need to inject the correct 'listeners' every time you instantiate a domain object. ORMs are about restoring 'data' but are not primarily intended to inject dependencies. Do we need a DI-ORM mix?
Domain objects and DTOs. When you query an aggregate root for its state, does it return a projection of its state (a DTO) or the domain objects themselves? In most models that I see, clients have full access to the domain data model, introducing a deep coupling to the actual structure of the domain. I perceive the 'object graph' behind an aggregate to be its own business. That's encapsulation, right? So for me an aggregate root should return only DTOs. DTOs are often defined in the service layer, but my approach is to model them in the domain itself. The service layer might still add another level of abstraction, but that's a different choice. Is that good advice?
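For example, something along these lines is what I mean (simplified):

using System;
using System.Collections.Generic;
using System.Linq;

// Read-only snapshot the aggregate hands out instead of its internal entities.
public class OrderSnapshot
{
    public Guid Id { get; }
    public decimal Total { get; }

    public OrderSnapshot(Guid id, decimal total)
    {
        Id = id;
        Total = total;
    }
}

public class OrderLine
{
    public decimal Amount { get; set; }
}

// Aggregate root: the object graph behind it stays private.
public class Order
{
    private readonly Guid _id;
    private readonly List<OrderLine> _lines = new List<OrderLine>();

    public Order(Guid id)
    {
        _id = id;
    }

    public void AddLine(decimal amount)
    {
        _lines.Add(new OrderLine { Amount = amount });
    }

    // Clients get a projection (DTO), not the domain objects themselves.
    public OrderSnapshot ToSnapshot()
    {
        return new OrderSnapshot(_id, _lines.Sum(line => line.Amount));
    }
}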
Repositories handle all CRUD operations at the aggregate root level. What about other queries? Queries return DTOs, not domain objects. For that to work, the repository must be aware of the data structure of the domain, which introduces a coupling. My advice is similar to before: use events to populate views. That way the internal structure is not made public; only the events carry the data necessary to build up the view.
Unit of work. A controller at the system boundary will instantiate commands and pass them to a service layer which in turn loads the appropriate aggregates and forwards the commands. The controller might use multiple commands and pass them to multiple services. This is all controlled by the unit of work pattern. This means, repositories, entities, services - all participate in the same transaction. Do you agree?
Business logic is not domain logic. From a business perspective the realization of a use case might involve many steps: registering a customer, sending an email, creating a storage account, etc. This overall process cannot possibly fit in a single domain aggregate root; the domain object would need access to all kinds of infrastructure. Solution: workflows or sagas (or a transaction script). Is that good advice?
Thank you
The first best practice I can suggest is to read Evans' book. Twice.
Too many "DDD projects" fail because developers pretend that DDD is simply OOP done right.
Then, you should really understand that DDD is for applications that have to handle very complex business rules correctly. In a nutshell: if you don't need to pay a domain expert to understand the business, you don't need DDD. The core concept of DDD, indeed, is the ubiquitous language that the coders, the experts and the users all share to understand each other.
Furthermore, you should read and understand what aggregates are (consistency boundaries) by reading Effective Aggregate Design by Vernon.
Finally, you might find the modeling patterns documented here useful.
Despite my comment above, I took a stab at your points. (note: I'm not Eric Evans or Jimmy Nilsson so take my "advice" with a grain of salt).
Your example, "Consider an object that recommends you a partner for a dating site", belongs in a Domain Service (not an infrastructure service). See this article: http://lostechies.com/jimmybogard/2008/08/21/services-in-domain-driven-design/
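As a rough illustration of that point (all names are invented), the recommendation logic can live behind a stateless Domain Service:

using System.Collections.Generic;
using System.Linq;

// Hypothetical member entity, trimmed down to what the example needs.
public class Member
{
    public string Id { get; set; }
    public IList<string> Interests { get; set; } = new List<string>();
}

// Domain Service contract, expressed purely in domain terms.
public interface IPartnerRecommendationService
{
    IReadOnlyList<Member> RecommendPartnersFor(Member member, IReadOnlyList<Member> candidates, int maxResults);
}

// Stateless domain logic: rank candidates by shared interests.
public class InterestBasedRecommendationService : IPartnerRecommendationService
{
    public IReadOnlyList<Member> RecommendPartnersFor(Member member, IReadOnlyList<Member> candidates, int maxResults)
    {
        return candidates
            .Where(candidate => candidate.Id != member.Id)
            .OrderByDescending(candidate => candidate.Interests.Intersect(member.Interests).Count())
            .Take(maxResults)
            .ToList();
    }
}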
Aggregates do not access repositories directly, but they can create a unit of work which combines operations from multiple domain objects into one.
Not sure on this one. This should really be a question by itself.
That's debatable. In theory, the domain entities would not be directly available outside the aggregate root, but that is not always practical. I consider this decision on a case-by-case basis.
I'm not sure what you mean exactly by "queries". If modeling all possible "reading" scenarios in your domain does not seem practical or does not provide sufficient performance, that suggests a CQRS solution is probably best.
Yes, I agree. UOW is a tool in your toolbox that you can use in various layers.
The statement "Business logic is not domain logic" is fundamentally wrong. The domain IS the representation of the business logic, which is one reason for using the ubiquitous language.
I have a system that talks to the database using repositories. What is the correct term when the data source is a remote service instead? Or, put differently:
Repository is for databases as [...] is for external Web-Services.
In many places I have found mentions of ServiceAgents, but I don't know if this is the correct term.
In DDD, Repositories represent a virtual collection of entities (aggregate roots in particular). So, if you have 10M persisted customers, you would work with a repository as if it were a collection of all 10M in memory. Repositories generally only deal with the same types of operations you would find on a collection: Add something, Remove something, Find something in the collection.
If the actual persistence of the data occurs via a Web Service, the implementation of the repository might interact with a Web Service proxy rather than with a database. The fact that the persistence involves a web service doesn't drive how it should be expressed by your domain however. That is to say, whether the data is persisted via direct database calls, an ORM, a Web Service, or a courier pigeon is an implementation detail.
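As a sketch of that distinction (the proxy type below is made up, not a real WCF contract):

using System;
using System.Threading.Tasks;

public class Customer
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

// The domain sees a virtual collection of customers.
public interface ICustomerRepository
{
    Task<Customer> FindById(Guid id);
    Task Add(Customer customer);
    Task Remove(Customer customer);
}

// Hypothetical proxy for the remote service (generated or hand-rolled).
public interface ICustomerWebServiceProxy
{
    Task<Customer> GetCustomer(Guid id);
    Task SaveCustomer(Customer customer);
    Task DeleteCustomer(Guid id);
}

// Implementation detail: persistence happens over a web service,
// but nothing in the repository interface reveals that.
public class WebServiceCustomerRepository : ICustomerRepository
{
    private readonly ICustomerWebServiceProxy _proxy;

    public WebServiceCustomerRepository(ICustomerWebServiceProxy proxy)
    {
        _proxy = proxy;
    }

    public Task<Customer> FindById(Guid id) => _proxy.GetCustomer(id);
    public Task Add(Customer customer) => _proxy.SaveCustomer(customer);
    public Task Remove(Customer customer) => _proxy.DeleteCustomer(customer.Id);
}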
Now, if your domain model has dependencies to be facilitated by external services (e.g. credit card validation, address verification, etc.), this should be expressed as a Domain Service in the form of an interface which defines the required operations in terms of the domain model. To be clear, Domain Services are operations which are logically part of your domain, but don't fit cleanly for some reason or another on a given entity or value object. Behavior facilitated by an external service is just one example of when you might use a Domain Service, so don't think of Domain Services as "the repository pattern for Web Services" or some such thing.
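By contrast, an externally facilitated capability would show up in the domain as a Domain Service contract, for example (invented names):

// Defined in the domain layer, in domain terms; the implementation that
// calls the external provider lives in the infrastructure layer.
public interface ICreditCardVerificationService
{
    bool IsValid(CreditCard card);
}

public class CreditCard
{
    public string Number { get; set; }
    public string HolderName { get; set; }
    public int ExpiryMonth { get; set; }
    public int ExpiryYear { get; set; }
}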
Not sure if this is the official term but I'd refer to this as a "Service Proxy".
I'm just beginning to wrap my head around most of the DDD principles, but my understanding is that Repositories are generally used to abstract databases, though they could be used to wrap anything that behaves as a persistent and/or queryable store. So in short, I guess what I'm suggesting is that if your service behaves enough like a database, then I don't see any problem with using a repository to abstract access to the entities that it deals with.
Otherwise, if the web service doesn't directly deal with data or behavior that is in your domain, then perhaps your case may be a candidate for an infrastructure level service in your application layer.
If anybody has any objections then please comment as I would like to know if I myself am misunderstanding the intended use of repositories.