I have a question about implementing DDD and repository pattern.
Should I modify an entity inside a repository?
Let's say I have an Order and want to mark that order as finished.
As I see this I have two choices.
1.
var order = _orderRepository.GetById(1);
order.Finish();
_orderRepository.Update(order);
...where the change is persisted to the database in the Update call.
2.
var order = _orderRepository.GetById(1);
var finishedOrder = _orderRepository.Finish(order);
...where the change is persisted to the database in the Finish call.
Is there an advantage to using one method over the other? What is the DDD way of doing this?
You should not modify it in the repository.
The reason is that the repository is responsible for abstracting away persistence (i.e. reading from/writing to the data store).
If you also make it responsible for some business logic, you are violating the Single Responsibility Principle.
If you are doing automated testing, it also means that you have to write integration tests to verify that the database communication/mapping works, and then unit tests to verify the business logic you introduced into the repository.
It can seem trivial, but it's only trivial the first time you violate the principle. One violation usually leads to another and another, and finally an application that isn't as easy to maintain :)
An application where classes have mixed responsibilities is also harder to navigate. Each time you want to update a feature, you have to go through all the layers to find where the actual logic lives.
Use the application layer to coordinate behavior for one or more domain objects, let the domain objects execute all state changes, and lastly let the repository persist those changes to the database (or wherever you are storing the domain's state).
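As a minimal sketch of option 1 coordinated from the application layer (the service class and its wiring here are illustrative, not prescribed):

// Application service: coordinates the use case, contains no business rules.
public class OrderApplicationService
{
    private readonly IOrderRepository _orderRepository;

    public OrderApplicationService(IOrderRepository orderRepository)
    {
        _orderRepository = orderRepository;
    }

    public void FinishOrder(int orderId)
    {
        var order = _orderRepository.GetById(orderId); // repository only loads
        order.Finish();                                // the entity changes its own state
        _orderRepository.Update(order);                // repository only persists
    }
}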
Related
We have an old legacy application with complex business logic that we need to rewrite. We are considering CQRS and event sourcing, but it's not clear how to migrate the data from the old database. Probably we need to migrate it to the read database only, as we can't reproduce all the events needed to populate the event store. But do we at least need to create some initial records in the event store for each aggregate, like AggregateCreated? Or do we need to write scripts and use all the commands one by one to recreate the aggregates the same way we normally would with event sourcing?
Seeding your read-side persistence from the existing database, or a transformed version of it, is never a good idea. Your event-sourced system needs its own starting point, so that you keep one of the main benefits of event sourcing - being able to create projections on demand, using polyglot persistence.
Using commands for the migration is also not a good idea, for the simple reason that commands, by definition, can fail due to pre- or post-condition checks that enforce invariants. It also does not convey the meaning of a migration, which is to capture the current system state as it is right now. Remember that the current system state is not something you can accept or deny; it is given to you, and your job is to capture it.
The best practice for such a migration is to emit so-called migration events, like EntityXMigratedFromLegacy. Of course, the work might be substantial, mainly because the legacy system's model will most probably not match the new model - otherwise the reason for the migration wouldn't be entirely clear.
By using migration events you explicitly state the fact that a piece of state was moved from another place, as-is. You will always know how the migrated entity started its lifecycle in the new system - either by being migrated from legacy or by being initialised in the new system.
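A sketch of what such a migration event might look like (the name follows the pattern above; the fields are assumptions about a hypothetical Order):

using System;

// Migration event: captures legacy state as-is, with no validation or behavior.
public class OrderMigratedFromLegacy
{
    public Guid AggregateId { get; set; }
    public string LegacyOrderNumber { get; set; }
    public decimal Total { get; set; }
    public DateTime MigratedAtUtc { get; set; }
}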
Probably we need to migrate it to the read database only
No - your read model database can be dropped and recreated at any time from the write side; only the write side is your source of truth.
But do we at least need to create some initial records in the event store for each aggregate, like AggregateCreated?
Of course, and having ONLY the initial event might not be enough. If your current OrderAggregate has reservations, you must create an ItemReservedEvent for each reservation it has.
Or do we need to write scripts and use all the commands one by one to recreate the aggregates the same way we normally would with event sourcing?
That feels like the way you should go: read each old aggregate/entity from the database and map it to a new one.
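A rough sketch of such a migration script, reusing the migration events from the answer above (legacyDb, eventStore, and all field mappings are hypothetical):

// For each legacy row, emit the events that reconstruct its current state.
foreach (var legacyOrder in legacyDb.LoadAllOrders())
{
    var events = new List<object>
    {
        new OrderMigratedFromLegacy { /* map legacy fields here */ }
    };

    // One event per reservation, so the new aggregate reflects the current state.
    foreach (var reservation in legacyOrder.Reservations)
        events.Add(new ItemReservedEvent { /* map reservation fields */ });

    eventStore.AppendToStream("Order-" + legacyOrder.Id, events);
}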
We have recently decided to adopt DDD in my team for our new projects because of its many obvious benefits (we come from the Active-Record-pattern school), and there are a couple of things that are still unclear.
Say I have an entity Transaction that depends on the following entities (each of which in turn depends on many other entities):
1. Customer
2. Account
3. Currency
When I use factories to instantiate a Transaction entity to pass to a Domain Service for some fancy business rules, do I have to make that many queries to set up all these dependent instances?
If I have overloads in my factory that skip such dependencies, then those will be null in some cases, and it becomes too complicated to differentiate when I can access those properties and when I cannot. With the Active-Record pattern I just use lazy loading and have them load only on demand. Any ideas with DDD?
EDIT:
In my scenario, “Transaction” seems to be the best candidate for an Aggregate root. I have defined a method in my Application Service, “InitiateTransaction” (I also have a “FinalizeTransaction”, as the flow involves a redirect to PayPal), which takes as parameters the DTOs needed to carry AccountId, CurrencyId, LanguageId, various other foreign keys, and Transaction attributes.
When calling my Domain Services (Transaction Processor and Fraud Rule Evaluator), I need to specify the “Transaction” Aggregate with all dependencies loaded (“Transaction.Customer”, “Transaction.Currency”, etc.).
So if I am correct the steps required are:
1. Call some repository(ies) to retrieve Customer, Currency etc.
2. Call TransactionFactory with dependencies specified above to get a Transaction object
3. Call Domain Services with fully loaded Transaction object for business rules to take place
Correct? Additionally, my concern was about steps 1 and 2.
If “Customer”, “Currency”, and the other Entities/Value Objects that “Transaction” depends on have further dependencies of their own, do I try to set those up as well? It seems to me that if I do, I will end up with very bloated code in my Application Service that isn't very reusable if placed in a separate method. However, if I don't, and instead just retrieve those from a repository with a “GetById(id)” as you suggested, my code could end up buggy: say I need the property “Transaction.Customer.CreatedByUser”, which returns a “User” instance - it will be null, because repositories only load flat instances.
EDIT:
I ended up using GetById(id) to load only the dependencies I knew were needed in my Services. I'm not a big fan of accidentally accessing null instances due to flat loading, but I have my unit tests to protect me from taking that to production!!
I highly doubt that Currency is an entity; in any case, it's important to model things according to how they are defined and used by the real Domain. Forget factories and other implementation details like the db - you first need to make sure you have defined the concepts right.
Once you've done that, you'll have identified the aggregate root as well. Btw, the entities should encapsulate the relevant business rules. Use Services to implement use cases, i.e. to manage the interaction between the domain objects and other parts such as the repository.
You should keep EVERYTHING related to the db and CRUD in the repository, and have the repo work only with aggregate roots. Also, for querying purposes, you should use CQRS, so that all queries are done against a read model. For Domain purposes, a Get(id) is 99% enough, and that method returns an aggregate root.
Be aware that DDD is very tricky; the most difficult part is modeling the Domain correctly, and all the buzzwords are useless if the model is wrong.
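To illustrate the first point: if the Domain defines a currency purely by its value (no identity, no lifecycle), it can be modeled as a value object. A minimal sketch, assuming an ISO-code representation:

using System;

// Currency as an immutable value object: compared by value, has no identity.
public sealed class Currency
{
    public string IsoCode { get; }

    public Currency(string isoCode)
    {
        if (string.IsNullOrWhiteSpace(isoCode))
            throw new ArgumentException("ISO code is required.", nameof(isoCode));
        IsoCode = isoCode.ToUpperInvariant();
    }

    public override bool Equals(object obj) =>
        obj is Currency other && other.IsoCode == IsoCode;

    public override int GetHashCode() => IsoCode.GetHashCode();
}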
So, I've got a WCF service that accepts commands and maps them to calls into the domain services layer. For write-type commands against the domain, this pattern is nearly perfect.
What I'm wondering is how everyone is doing reads - more specifically, getting lists of aggregates from the model for display. As I stated, I have a WCF service that calls into the service layer. Currently, I have a method on my service that returns a list of aggregate roots. Somehow, this feels a bit dirty: I'm polluting my domain services with GetByXXXX kinds of methods.
I'm looking for a bit of guidance on the search and retrieval of domain objects through the application services layer.
Edit:
Thinking and reading a bit more, is it appropriate to directly use a repository in the application layer to handle fetching of entities?
I usually go with a simple query layer that returns a DataTable for collections and a DataRow for a single item. For anything more structured I would use a DTO. So all your GetByXXX methods could sit in the query layer.
Repositories are better suited to supporting operations that change state. Even when you're fetching an aggregate through a Repository, it is because you intend to change its state and persist it back right away:
var entity = repository.Get(id);
entity.ChangeSomeState();
repository.Save(entity);
In that scenario, Get returns an aggregate that is ready for modification (e.g. attached to the context if using EF, or to the session if using NHibernate). The focus here is consistency.
Now, for queries, you are better off with a Query class, which supports read-only scenarios with a focus on performance.
All your GetByXXX methods will live in the Query class. You can even create specialized Query classes, e.g. one for Admin queries, another for Customer queries, and so on.
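A minimal sketch of such a Query class, reading straight from the database into flat DTOs (the table, class names, and connection handling are illustrative):

using System.Collections.Generic;
using System.Data.SqlClient;

// Read-side query class: no aggregates, no change tracking, just DTOs for display.
public class CustomerQueries
{
    private readonly string _connectionString;

    public CustomerQueries(string connectionString)
    {
        _connectionString = connectionString;
    }

    public IReadOnlyList<CustomerSummaryDto> GetByRegion(string region)
    {
        var result = new List<CustomerSummaryDto>();
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "SELECT Id, Name FROM CustomerSummary WHERE Region = @region", connection))
        {
            command.Parameters.AddWithValue("@region", region);
            connection.Open();
            using (var reader = command.ExecuteReader())
                while (reader.Read())
                    result.Add(new CustomerSummaryDto
                    {
                        Id = reader.GetInt32(0),
                        Name = reader.GetString(1)
                    });
        }
        return result;
    }
}

public class CustomerSummaryDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}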
For extra info, have a look at these articles:
Command and Query Responsibility Segregation (CQRS) pattern
CQRS with MediatR in ASP.NET Core 3.1 – Ultimate Guide
Without getting into all of the gory details, I am trying to design a service-based solution that will be consumed by several client applications. The solution allows admins to create and modify document templates which are used by regular users to perform data entry. It is my intent to make the application a learning tool for best practices, techniques, etc.
And, at the same time, I have to accommodate a schizophrenic environment, because the 'powers that be' can never stick to their decisions regarding technologies and tools. For example, I am using Linq-to-SQL today because they aren't ready to go to EF4, but there is also discussion about switching over to NHibernate. So I have to make the code as persistence-ignorant as possible to minimize the work required should we change OR/M tools.
At this point, I am also limited to using the partial-class approach to extend the Linq-to-SQL classes so they implement interfaces defined in my business layer. I cannot go with POCOs because management insists that we leverage all the built-in tooling, so I must support the Linq-to-SQL designer.
That said, my service interface has a StartSession method that accepts a template identifier in its signature. The operation flows like this:
If a session already exists in the database for the current user and specified template, update the record to show the current activity. If not, create a new session object.
The session is associated with an instance of the template, call it the "form". So if the session is new, I need to retrieve the template information to create the new "form", associate it with the session then save the session to the database. On the other hand, if the session already existed, then I need to also load the "form" with the data entered by the user and stored in the session previously.
Finally, the session (with form definition and data) is returned to the caller.
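In rough code, that flow might read as follows (a sketch only - every name here is a placeholder, not the actual implementation):

public SessionDto StartSession(int templateId, int userId)
{
    // Reuse an existing session for this user/template, or start a new one.
    var session = _sessionRepository.GetByUserAndTemplate(userId, templateId);
    if (session != null)
    {
        session.RecordActivity();          // update the record to show current activity
    }
    else
    {
        var template = _templateRepository.GetById(templateId);
        session = Session.StartNew(userId, template); // creates the new "form" instance
    }

    _sessionRepository.Save(session);
    return MapToDto(session);              // session with form definition and data
}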
My first objective is to create clean separation between the logical layers of my application. The second is to maintain persistence ignorance (as mentioned above). Third, I have to be able to test everything, so all dependencies must be externalized for easy mocking. I am using Unity as an IoC tool to help in this area.
To accomplish this, I have defined my service class and data contracts as needed to support the service interface. The service class will have a dependency injected from the business layer that actually performs the work. And here's where it has gotten messy for me.
I've been trying to go the Unit of Work and Repository route to help with persistence ignorance. I have an ITemplateRepository and an ISessionRepository which I can access from my IUnitOfWork implementation. The service class gets an instance of my SessionManager class (in my BLL) injected. The SessionManager receives the IUnitOfWork implementation through constructor injection and delegates all persistence to the UoW, but I find myself playing a shell game with the various pieces of logic.
Should all of the logic described above be in the SessionManager class, or perhaps in the UoW implementation? I want as little logic as possible in the repository implementations, because changing the data access platform could otherwise result in unwanted changes to the application logic. Since my repository works against an interface, how do I best go about creating the new session (keeping in mind that a valid session has a reference to the template, er, form being used)? Would it be better to use POCOs anyway, even though I have to support the designer, and use a tool like AutoMapper inside the repository implementation to handle translating the objects?
Ugh!
I know I am just stuck in analysis paralysis, so a little nudge is probably all I need. What would be ideal is if someone could provide an example of how you would solve the problem given the business rules and architectural constraints I've defined.
If you don't use POCOs then you're not really going to be data-store agnostic. And using POCOs will allow you to get your system up and running with memory-based repositories, which is what you'll likely want to use in your unit tests anyhow.
AutoMapper sounds nice, but I wouldn't consider it a deal breaker. Mapping POCOs to EF4, LinqToSql, or NHibernate isn't that time-consuming unless you have hundreds of tables. When/if your POCOs begin to diverge from your persistence layer, you might find that AutoMapper won't really fit the bill.
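A minimal in-memory repository of the kind mentioned above might look like this (ISessionRepository is the interface from the question; the members shown are assumptions):

using System.Collections.Generic;

// In-memory repository: supports unit tests without any database.
public class InMemorySessionRepository : ISessionRepository
{
    private readonly Dictionary<int, Session> _store = new Dictionary<int, Session>();

    public Session GetById(int id)
    {
        Session session;
        return _store.TryGetValue(id, out session) ? session : null;
    }

    public void Save(Session session)
    {
        _store[session.Id] = session;
    }
}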
In my design, I have Repository classes that get Entities from the database (how they do that does not matter). But to save Entities back to the database, does it make sense to have the repository do this too? Or does it make more sense to create another class (such as a UnitOfWork), give it the responsibility for saving by having it accept Entities, and call save() on it to tell it to go ahead and do its magic?
In DDD, Repositories are definitely where ALL persistence-related stuff is expected to reside.
If you had saving to and loading from the database encapsulated in more than one class, database-related code would be spread over too many places in your codebase, making maintenance significantly harder. Moreover, there would be a high chance that later readers of the code would not understand it at first sight, because such a design does not adhere to the quasi-standards that most developers expect to find.
Of course, you can have separate Reader/Writer helper classes, if that's appropriate in your project. But seen from the Business Layer, the only gateway to persistence should be the repository...
HTH!
Thomas
I would give the repository the overall responsibility for encapsulating all aspects of load and save. This ensures that tricky issues, such as managing contention between readers and writers, have one place to be managed.
The repository might well use your UnitOfWork class, and might need to expose BeginUow and Commit methods.
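One possible arrangement, sketched (BeginUow and Commit come from the answer above; IUnitOfWork, RegisterDirty, and the rest are illustrative):

public class OrderRepository
{
    private readonly IUnitOfWork _unitOfWork;

    public OrderRepository(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }

    public void Save(Order order)
    {
        // The repository owns the load/save contract; the unit of work
        // scopes the change set and commits it atomically.
        _unitOfWork.BeginUow();
        _unitOfWork.RegisterDirty(order);
        _unitOfWork.Commit();
    }
}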
Fowler says that the repository API should mimic a collection:
A Repository mediates between the domain and data mapping layers, acting like an in-memory domain object collection.
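In interface form, that collection-like shape might look as follows (a sketch; the member set is an assumption):

using System;
using System.Collections.Generic;
using System.Linq.Expressions;

// A repository that reads like an in-memory collection of Orders.
public interface IOrderRepository
{
    Order Get(int id);
    IEnumerable<Order> Find(Expression<Func<Order, bool>> predicate);
    void Add(Order order);
    void Remove(Order order);
}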