DDD consistency check when creating aggregates with relationships to other aggregates - domain-driven-design

When creating a new aggregate that has relations to other aggregates, where should I check whether the related aggregates exist? Should it be in the application service, or inside the factory via some domain service?
class ApplicationService
{
    public void CreateNewAr(Guid relationArId, Guid relationArId2)
    {
        var relationAr = _relationArRepository.GetById(relationArId);
        if (relationAr == null)
            throw new NotFoundException();

        var relationAr2 = _relationAr2Repository.GetById(relationArId2);
        if (relationAr2 == null)
            throw new NotFoundException();

        var newAr = _newArFactory.CreateFromAr1And2(relationAr.Id, relationAr2.Id);
        _newArRepository.Insert(newAr);
        _uow.Commit();
    }
}
or
class NewArFactory
{
    public NewAr CreateFromAr1And2(Guid relationArId, Guid relationArId2)
    {
        var relationAr = _relationArRepository.GetById(relationArId);
        if (relationAr == null)
            throw new NotFoundException();

        var relationAr2 = _relationAr2Repository.GetById(relationArId2);
        if (relationAr2 == null)
            throw new NotFoundException();

        return new NewAr(relationAr.Id, relationAr2.Id);
    }
}

TL;DR: Go for the domain. Whether you use a factory or not is another discussion.
Now, did you realize that, if NewArFactory is controlling a change in the domain, NewArFactory is an aggregate?
Remember that the purpose of an aggregate is to control change. It can be an implementation detail, but an aggregate does not always need to be persisted as such. Retrieving an aggregate from persistence does not always mean you need an aggregate table (or one split across several tables). A repository could fetch the data it needs (i.e. entities and VO ids) to build the aggregate from whatever is in persistence and return it. The aggregate (through its root) controls the change to its entities, and then the new state is persisted.
Things in a domain, like in real life, do not appear out of thin air. Try to understand what is happening: a car is assembled, a product arrives from China into our warehouse, a user registers in our web app, etc. Then give the creation of your domain representation a richer context (the use case and process involved, expressed in the ubiquitous language). From there, an aggregate will emerge to control the change.
It could be self-creation, it could be another aggregate, etc. Even an aggregate that is not persisted could create one that is persisted... Uff, this gets very complex to explain without a full, real domain context on the table.
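For example, a rough sketch of the "use another aggregate" option (hypothetical names and types; the application service still loads the two related aggregates, but the decision to create the new one lives in the domain):
class RelationAr
{
    public Guid Id { get; private set; }

    // The existing aggregate controls the change: a NewAr can only come
    // into existence from aggregates that were actually loaded, so the
    // existence check effectively stays in the domain.
    public NewAr CreateNewAr(RelationAr2 other)
    {
        if (other == null)
            throw new ArgumentNullException(nameof(other));

        return new NewAr(Id, other.Id);
    }
}
The application service then just loads the two aggregates (throwing NotFoundException as in your first snippet) and calls relationAr.CreateNewAr(relationAr2) instead of a standalone factory.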
Note: No aggregate was harmed nor persisted in the making of this answer.

Related

Repository methods for querying children of an aggregate root

I have an Order aggregate root class containing child value objects:
case class Order(
  id: String,
  lines: Seq[OrderLine],
  destination: Destination
  // ...other fields omitted
)
This is a CQRS read model, represented by an order-search microservice responsible for searching orders by some filter.
There is an OrderApplicationService that uses an OrderRepository (I am not sure it is a pure repository in DDD terms):
trait OrderRepository {
  def search(filter: OrderFilter): Seq[Order]
  def findById(orderId: String): Order
}
and an ElasticSearchOrderRepository which uses Elasticsearch as the search engine.
Due to new requirements, I need a new API method for the UI that will search for all destinations across orders by some filter. It should be a /destinations endpoint that calls the repository to find all the data. Performance is important here, so searching for all orders and then mapping them to destinations doesn't seem like a good solution.
What is the most appropriate option to solve this?
Add a new method to OrderRepository, e.g. def searchOrderDestinations(filter: DestinationFilter): Seq[Destination]
Create a new repository:
trait OrderDestinationRepository {
  def searchOrderDestinations(filter: DestinationFilter): Seq[Destination]
}
The same goes for the application service - do I need to create a new DestinationAppService?
Are these options applicable? Or maybe there is some better solution?
Thanks in advance!
This is a CQRS read model
Perfect - create and update a list of your orders indexed by destination, and use that to serve the query results.
Think "relational database that includes the data you need to create the view, and an index". Queries go to the database, which acts as a cache for the information. A background process (async) runs to update the information in the database.
How often you run that process will depend on how stale the data in the view can be. How bad is it for the business if the view shows results as of 10 minutes ago? as of 1 minute ago? as of an hour ago?

Aggregate root optimization with nested objects & big collections

Let's assume this scenario:
We have Users of the system
Each User has their Clients (a Client is always assigned to one and only one User)
Users upload different Documents, and a Document is always assigned to one and only one Client
One of the business rules is that a User can upload up to X Documents in total, regardless of the number of Clients.
By the book, I would make User an aggregate root which would contain a collection of Clients. Each Client would then have a collection of Documents uploaded for that particular client. When a User attempts to upload a new Document for a given Client, we would load the User aggregate root with all of its Clients and their Documents, and on the User class I'd have a method like:
bool CanUploadDocument()
{
    // Iterate Clients and sum up the total number of their Documents
    int numberOfDocuments = 0;
    foreach (var client in this.Clients)
        numberOfDocuments += client.Documents.Count;

    // Compare to the maximum allowed number of docs for this User instance
    return numberOfDocuments < this.maxAllowedNumberOfDocuments;
}
All well and good, but maxAllowedNumberOfDocuments can be thousands or tens of thousands, and it feels like huge overkill to load them all from the db just to count and compare them.
Putting an int documentsCount on User seems like breaking the rules and introducing unnecessary redundancy.
Is this a case for introducing a separate aggregate root like UserQuota, where we would load just the count of all Documents and do the check? Or maybe a value object UserDocumentCount which a service would fetch and pass to a method on the User object:
bool CanUploadDocument(UserDocumentCount count)
{
    // Compare to the maximum allowed number of docs for this User instance
    return count < this.maxAllowedNumberOfDocuments;
}
What is the DDD-proper and optimized way to handle this?
Having a big User aggregate is not a solution, but not because it is slow and needs optimization; it's because of the cohesion of its internal fields.
In order to protect the quota limit, the User aggregate needs only the uploaded documents and nothing more. This is a sign that you in fact have two aggregates, the second being UserDocuments with its method uploadDocument. This method internally checks the quota invariant. As an optimization, you could keep an int countOfDocumentsUploadedSoFar that is used in the uploadDocument method. The two aggregates share only the same identity (the UserId).
Note: no inheritance is needed between the two aggregates.
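A minimal sketch of that second aggregate (hypothetical names: UserDocuments, Document and DocumentQuotaExceededException are illustrative, and the current count is assumed to be loaded with the aggregate instead of being computed by iterating every document):
class UserDocuments
{
    private readonly Guid _userId;
    private readonly int _maxAllowedNumberOfDocuments;
    private int _countOfDocumentsUploadedSoFar;

    public UserDocuments(Guid userId, int maxAllowed, int uploadedSoFar)
    {
        _userId = userId;
        _maxAllowedNumberOfDocuments = maxAllowed;
        _countOfDocumentsUploadedSoFar = uploadedSoFar;
    }

    public Document UploadDocument(Guid clientId, string fileName)
    {
        // The quota invariant is protected here, without loading
        // thousands of Documents into memory.
        if (_countOfDocumentsUploadedSoFar >= _maxAllowedNumberOfDocuments)
            throw new DocumentQuotaExceededException(_userId);

        _countOfDocumentsUploadedSoFar++;
        return new Document(Guid.NewGuid(), _userId, clientId, fileName);
    }
}
The repository would then persist the new Document and the updated count in the same transaction.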
Introducing something like UserQuota looks like a good solution. It is a real domain concept, and it has a right to be an entity. Right now it has one property, DocumentsCount, but in time you will probably need LastDocumentUploadedTime... MaxAllowedNumberOfDocuments can be part of the quota too; that will help when this number changes and the change should only apply to new quotas, or when quotas become more personalized.
Your domain operations should touch quotas too. For example, when uploading a document you first read the appropriate quota and check it, store the document, then update the quota.

Transactional operation with SaveChanges and ExecuteStoreCommand

I have a problem that I would like to share. The context is a bit messy, so I will do my best to explain it.
I need to create a transactional operation over a number of entities. I'm working with EF Code First but with a legacy database that I can't change. In order to create a more consistent model than the database provides, I'm projecting the database information into more refined entities I created on my own.
As I need to use different contexts, my initial idea was to use TransactionScope, which has given me good results in the past. Why do I need different contexts? Due to various problems with the db, I can't make the updates in a single operation (unit of work); I need to retrieve different IDs which only appear after SaveChanges().
using (var scope = new TransactionScope())
{
    Operation1();
    Operation2();
    Operation3(); // uses ExecuteStoreCommand
    SaveChanges();
    Operation4();
    SaveChanges();
}
I know that, in order to use TransactionScope, I need to share the same connection among all the operations (and I'm doing that, passing the context to the objects). However, when I execute the operation that uses ExecuteStoreCommand, or when I try to do an update after the first SaveChanges, I always get the MSDTC error (support for distributed transactions is disabled) or, more rarely, errors about unloaded domains.
I don't know if someone can help me, or at least point me in the best direction for this scenario.
Have a look at this answer:
Entity Framework - Using Transactions or SaveChanges(false) and AcceptAllChanges()?
That answer does exactly what you require: a transaction over multiple data contexts.
I found this post on Transactions and Connections in Entity Framework 4.0 really helpful too.
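The pattern from that linked answer looks roughly like this (a sketch only, assuming ObjectContext-style contexts as in the linked answer; context1 and context2 are your own context instances, and on EF 4 SaveOptions lives in System.Data.Objects):
using (var scope = new TransactionScope())
{
    // ... make changes on both contexts ...

    // Save but do NOT accept the changes yet, so each context can
    // replay its pending changes if the transaction fails.
    context1.SaveChanges(SaveOptions.DetectChangesBeforeSave);
    context2.SaveChanges(SaveOptions.DetectChangesBeforeSave);

    // If we get here, both saves succeeded inside one transaction.
    scope.Complete();

    // Only now mark the pending changes as accepted.
    context1.AcceptAllChanges();
    context2.AcceptAllChanges();
}
(On older EF versions the equivalent call is the SaveChanges(false) overload mentioned in the linked answer's title.)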
For people who may need a simpler solution, here's what I use when I need to mix ExecuteStoreCommand and SaveChanges in a transaction.
using (var dataContext = new ContextEntities())
{
    dataContext.Connection.Open();
    var trx = dataContext.Connection.BeginTransaction();

    var sql = "DELETE TestTable WHERE SomeCondition";
    dataContext.ExecuteStoreCommand(sql);

    var list = CreateMyListOfObjects(); // this could throw an exception
    foreach (var obj in list)
        dataContext.TestTable.AddObject(obj);

    dataContext.SaveChanges(); // this could throw an exception
    trx.Commit();
}

Preferred way to store a child object in Azure Table Storage

I did a little experiment with storing child objects in Azure table storage today.
Something like Person.Project, where Person is the table entity and Person is just a POCO. The only way I was able to achieve this was by serializing the Project into a byte[]. It might be what is needed, but is there another way around it?
Thanks
Rasmus
Personally I would prefer to store the Project in a different table with the same partition key as its parent, i.e. the Person's partition key. That ensures the person and its underlying projects are stored in the same storage cluster. On the code side, I would like to have attributes on the reference properties, for example [Reference(typeof(Person))] and [Collection(typeof(Project))], and in the data context class I could use some extension method to retrieve the child elements on demand.
In terms of the original question though, you certainly can store both parent and child in the same table - were you seeing an error when trying to do so?
One other thing you sacrifice by separating parent and child into separate tables is the ability to group updates into a transaction. Say you create a new 'person' and add a number of projects for that person: if they are in the same table with the same partition key, you can send the multiple inserts as one atomic operation. With a multi-table approach, you're going to have to manage atomicity yourself (if that's a requirement of your data consistency model).
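With the same-table, same-partition-key layout, an entity group transaction in the table storage SDK looks roughly like this (a sketch; table, personEntity and the projectEntities collection are assumed to exist already, and a single batch is limited to 100 operations against one partition key):
// All entities in one batch must share the same PartitionKey.
var batch = new TableBatchOperation();
batch.Insert(personEntity);
foreach (var projectEntity in projectEntities)
    batch.Insert(projectEntity);

// The whole batch succeeds or fails as a single atomic operation.
table.ExecuteBatch(batch);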
I'm presuming that when you say person is just a POCO you mean Project is just a POCO?
My preferred method is to store the child object in its own Azure table with the same partition key and row key as the parent. The main reason is that this allows you to run queries against the child object if you have to. You can't run a single query that uses properties from both parent and child, but at least you can run queries against the child entity. Another advantage is that the child class can take up more space: the limit on how much data you can store in a single property is smaller than the amount you can store in a whole row.
If neither of these things is a concern for you, then what you've done is perfectly acceptable.
I have come across a similar problem and have implemented a generic object flattener/recomposer API that will flatten your complex entities into flat EntityProperty dictionaries and make them writeable to Table Storage, in the form of DynamicTableEntity.
Same API will then recompose the entire complex object back from the EntityProperty dictionary of the DynamicTableEntity.
Have a look at: https://www.nuget.org/packages/ObjectFlattenerRecomposer/
Usage:
//Flatten complex object (of type ie. Order) and convert it to EntityProperty Dictionary
Dictionary<string, EntityProperty> flattenedProperties = EntityPropertyConverter.Flatten(order);
// Create a DynamicTableEntity and set its PK and RK
DynamicTableEntity dynamicTableEntity = new DynamicTableEntity(partitionKey, rowKey);
dynamicTableEntity.Properties = flattenedProperties;
// Write the DynamicTableEntity to Azure Table Storage using client SDK
//Read the entity back from AzureTableStorage as DynamicTableEntity using the same PK and RK
DynamicTableEntity entity = [Read from Azure using the PK and RK];
//Convert the DynamicTableEntity back to original complex object.
Order order = EntityPropertyConverter.ConvertBack<Order>(entity.Properties);

Dealing with deadlocks in long-running Hibernate transactions

I have a Hibernate application that may produce concurrent inserts and updates (via Session.saveOrUpdate) to records with the same primary key, which is assigned. These transactions are somewhat long-running, perhaps 15 seconds on average (since data is collected from remote sources and persisted as it comes in). My DB isolation level is set to Read Committed, and I'm using MySQL and InnoDB.
The problem is this scenario creates excessive lock waits which timeout, either as a result of a deadlock or the long transactions. This leads me to a few questions:
Does the database engine only release its locks when the transaction is committed?
If this is the case, should I seek to shorten my transactions?
If so, would it be good practice to use separate read and write transactions, where the write transaction is kept short and only takes place after all of my data is gathered (the bulk of my transaction length involves collecting remote data)?
Edit:
Here's a simple test that approximates what I believe is happening. Since I'm dealing with long-running transactions, the commit takes place long after the first flush, so just to illustrate my situation I left the commit out of the test:
@Entity
static class Person {
    @Id
    Long id = Long.valueOf(1);

    @Version
    private int version;
}

@Test
public void updateTest() {
    for (int i = 0; i < 5; i++) {
        new Thread() {
            public void run() {
                Session s = sf.openSession();
                Transaction t = s.beginTransaction();
                Person p = new Person();
                s.saveOrUpdate(p);
                s.flush(); // Waits...
            }
        }.run();
    }
}
And the queries this produces, waiting on the second insert as expected:
select id, version from person where id=?
insert into person (version, id) values (?, ?)
select id, version from person where id=?
insert into person (version, id) values (?, ?)
That's correct, the database releases locks only when the transaction is committed. Since you're using Hibernate, you can use optimistic locking, which does not lock the database for long periods of time. Essentially, Hibernate does what you suggest: it separates the reading and writing portions into separate transactions. On write, it checks that the data in memory has not been changed concurrently in the database.
Hibernate Reference - Optimistic Transactions
Optimistic locking:
Base assumption: update conflicts occur seldom.
Mechanics:
1. Read the dataset, including its version field
2. Change the dataset
3. Update the dataset:
3.1. Read the dataset again, by key and with the current version field value.
If you get it, nobody has changed the record: apply the next version field value and update the record.
If you do not get it, the record has been changed; return an appropriate message to the caller and you are done.
Inserts are not affected: you either have a separate primary key anyway, or you accept multiple records with identical values.
Therefore the example given above is not a case for optimistic locking.
