Complex Finds in Domain Driven Design - domain-driven-design

I'm looking into converting part of a large existing VB6 system into .NET. I'm trying to use domain-driven design, but I'm having a hard time getting my head around some things.
One thing that I'm completely stumped on is how I should handle complex find statements. For example, we currently have a screen that displays a list of saved documents that the user can select and print off, email, edit or delete. I have a SavedDocument object that does the trick for all the actions, but it only has the properties relevant to it, and I need to display the client name that the document is for, and their email address if they have one. I also need to show the policy reference that this document may have come from. The Client and Policy are linked to the SavedDocument but are their own aggregate roots, so they are not loaded at the same time as the SavedDocuments.
The user is also allowed to specify several filters to reduce the list down. These too can be based on properties that are stored on the SavedDocument or on the Client and Policy.
I'm not sure how to handle this from a Domain driven design point of view.
Do I have a function on a repository that takes the filters and returns me a list of SavedDocuments, which I then have to turn into a different object or DTO and fill with the additional client and policy information? That seems a little slow, as I have to load all the details using multiple calls.
Do I have a function on a repository that takes the filters and returns me a list of SavedDocumentsForList objects that contain just the information I want? This seems the quickest but doesn't feel like I'm using DDD.
Do I load everything from their objects and do all the filtering and column selection in a service? This seems the slowest, but also appears to be very domain orientated.
I'm just really confused about how to handle these situations, and I've not really seen anyone else asking questions about it, which makes me feel that I'm missing something.

Queries can be handled in a few ways in DDD. Sometimes you can use the domain entities themselves to serve queries. This approach can become cumbersome in scenarios such as yours when queries require projections of multiple aggregates. In this case, it is easier to use objects explicitly designed for the respective queries - effectively DTOs. These DTOs will be read-only and won't have any behavior. This can be referred to as the read-model pattern.
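For illustration, here is a minimal sketch of what such a read model might look like in C#; every type and member name below is invented for the example, not taken from the question.
using System;
using System.Collections.Generic;

// Flat, read-only projection across SavedDocument, Client and Policy.
public sealed class SavedDocumentListItem
{
    public Guid DocumentId { get; set; }
    public string DocumentDescription { get; set; }
    public string ClientName { get; set; }
    public string ClientEmail { get; set; }       // null when the client has no email
    public string PolicyReference { get; set; }   // null when not linked to a policy
}

// Whatever the user typed into the filter controls.
public sealed class SavedDocumentFilter
{
    public string ClientName { get; set; }
    public string PolicyReference { get; set; }
    public DateTime? CreatedAfter { get; set; }
}

// Read-side query contract, separate from the SavedDocument repository.
// Its implementation is free to run a single SQL query that joins all three tables.
public interface ISavedDocumentQueries
{
    IReadOnlyList<SavedDocumentListItem> Search(SavedDocumentFilter filter);
}
The write side (printing, emailing, editing, deleting) keeps using the SavedDocument aggregate and its repository; only the list screen goes through the read model.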

Related

Reuse same database tables in different repositories (repositories overlap on the data they access)

Suppose I have database tables Customer, Order, Item. I have an OrderRepository that accesses, directly with SQL/my ORM, both the Order and Item tables. E.g. I could have a method, getItems, on the OrderRepository that returns all items of that order.
Suppose I now also create an ItemRepository. Given that I now have two repositories accessing the same database table, is that generally considered poor design? My thinking is that sometimes a user wants to update the details of an Item (e.g. its name), but when using the OrderRepository, it doesn't really make sense not to be able to access the items directly (you want to know about all the items in an order).
Of course, the OrderRepository could internally create* an ItemRepository and call methods like getItemsById(ids: string[]). However, consider the case where I want to get all orders and items ever purchased by a Customer. Assuming you had the orderIds for a customer, you could have a getOrders(ids: string[]) on the OrderRepository to fetch all the orders and then do a second query to fetch all the Items. I feel you make your life harder (and less efficient) in the sense that you have to do the join to match items with orders in the app code rather than doing a join in SQL.
If it's not considered bad practice, is there some kind of limit to how much overlap Repositories should have with each other? I've spent a while trying to search for this on the web, but it seems all the tutorials/blogs/videos really don't go further than one table per entity (which may be an anti-pattern).
Or am I missing a trick?
Thanks
FYI: using express with TypeScript (not C#)
*Is a repository creating another repository considered acceptable? Shouldn't only the service layer do that?
It's difficult to separate the Database Model from the DDD design, but you have to.
In your example:
GetItems should have this signature: OrderRepository.GetItems(Ids: int[]) : ItemEntity[]. Note that this method returns Entities (not DAOs from your ORM). To get each ItemEntity, the method might pull information from several DAOs (tables, through your ORM), but it should only pull what it needs for the entity's hydration.
Say you want to update an item's name using the ItemRepository; your signature for that could look like ItemRepository.rename(Id: int, name: string) : void. When this method does its work, it could change the same table as GetItems above, but note that it could also change other tables as well (for example, it could add an audit of the change to an AuditTable).
DDD gives you the ability to use different tables for different Contexts if you want. It gives you enough flexibility to make really bold choices when it comes to the infrastructure that surrounds your domain. So ultimately, it's a matter of what makes sense for your specific situation and team. Some teams would apply CQRS, and the GetItems and rename methods would look completely different under the covers.
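As a rough sketch of that shape, assuming an ORM-style context behind the repositories; AppDbContext, ItemDao, AuditDao and the method bodies are illustrative stand-ins rather than a prescribed implementation.
using System.Collections.Generic;
using System.Linq;

// ORM-side DAOs (rows as the ORM sees them) - illustrative only.
public class ItemDao { public int Id; public string Name; }
public class AuditDao { public string Message; }

// Stand-in for an ORM session/unit of work; a real one would map to tables.
public class AppDbContext
{
    public List<ItemDao> Items { get; } = new List<ItemDao>();
    public List<AuditDao> Audit { get; } = new List<AuditDao>();
    public void SaveChanges() { /* flush to the database in a real ORM */ }
}

// Domain entity - what repositories hand back to callers, never the DAO.
public sealed class ItemEntity
{
    public int Id { get; }
    public string Name { get; }
    public ItemEntity(int id, string name) { Id = id; Name = name; }
}

public sealed class OrderRepository
{
    private readonly AppDbContext _db;
    public OrderRepository(AppDbContext db) { _db = db; }

    // Hydrates ItemEntity objects from whatever tables it needs.
    public IReadOnlyList<ItemEntity> GetItems(int[] ids)
    {
        return _db.Items
            .Where(i => ids.Contains(i.Id))
            .Select(i => new ItemEntity(i.Id, i.Name))
            .ToList();
    }
}

public sealed class ItemRepository
{
    private readonly AppDbContext _db;
    public ItemRepository(AppDbContext db) { _db = db; }

    // Writes to the same Item table that OrderRepository.GetItems reads from,
    // and can touch other tables too (an audit table, for example).
    public void Rename(int id, string name)
    {
        var row = _db.Items.Single(i => i.Id == id);
        row.Name = name;
        _db.Audit.Add(new AuditDao { Message = "Item " + id + " renamed to " + name });
        _db.SaveChanges();
    }
}
The point is only that both repositories may read and write the same underlying table while callers only ever see ItemEntity.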

Dependent entities within same aggregate

Situation:
We have a classic Order with OrderLines. Each OrderLine has a reference to a ProductId.
Each Product has its RelatedProduct. For example, a Product looks like this:
class Product {
    string Id;
    string Name;
    string RelatedProductId;
    decimal RelatedProductQuantity;
    ...
}
There is a business rule that whenever a Product is added to an Order with a new OrderLine, the Product with id=RelatedProductId should also be added, in quantity=RelatedProductQuantity.
Questions:
How do we keep this rule within the domain, so it doesn't spill over into an application service, but at the same time keep the Order aggregate clean, in the sense of not poisoning it by injecting a repository or any other data-fetching dependency?
Should we use a domain service? And if so, can the domain service have a repository injected, prepare all the data, create the OrderLines (for both the base and related products), fill in the aggregate and save it to the repository?
If none of the above, what's the best way to model it?
There are two common patterns that you will see here:
Fetch a copy of the information in your application code, then pass that information to the domain model as an argument
Pass the capability to fetch the information as an argument to the domain model
The second option is your classic "domain service" approach, where you use a "stateless" instance to fetch a copy of "global" state.
But, with the right perspective you might recognize that the first approach is the same mechanism - only it's the application code, rather than the domain code, that fetches the copy of the information.
In both cases, it's still the domain model deciding what to do with the copy of the information, so that's all right.
Possible tie breakers:
If the information you need to copy isn't local (ie: you are dealing with a distributed system, and the information isn't available in a local cache), then fetching that information will have failure modes, and you probably don't want to pollute the domain model with a bunch of code to handle that (in much the same way that you don't pollute your domain code with a bunch of database related concerns).
When it's hard to guess in advance which arguments are going to be passed to fetch the data, then it may make sense to let the domain code invoke that function directly. Otherwise, you end up with the application code asking the domain model for the arguments, and then passing the information back into the model, and this could even ping-pong back and forth several times.
(Not that it can't be done: you can make it work - what's less clear is how happy you are going to be maintaining the code).
If you aren't sure... use the approach that feels more familiar.
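As a concrete sketch of those two options applied to the related-product rule above (everything here is invented and stripped to the bare minimum):
using System;
using System.Collections.Generic;

public sealed class OrderLine
{
    public string ProductId { get; }
    public decimal Quantity { get; }
    public OrderLine(string productId, decimal quantity) { ProductId = productId; Quantity = quantity; }
}

public sealed class Product
{
    public string Id { get; set; }
    public string RelatedProductId { get; set; }
    public decimal RelatedProductQuantity { get; set; }
}

public sealed class Order
{
    private readonly List<OrderLine> _lines = new List<OrderLine>();
    public IReadOnlyList<OrderLine> Lines { get { return _lines; } }

    // Option 1: the application code fetched the Product and passes it in;
    // the aggregate still owns the "add the related product too" rule.
    public void AddProduct(Product product, decimal quantity)
    {
        _lines.Add(new OrderLine(product.Id, quantity));
        if (product.RelatedProductId != null)
            _lines.Add(new OrderLine(product.RelatedProductId, product.RelatedProductQuantity));
    }

    // Option 2: the aggregate is handed a lookup capability (a domain service,
    // here reduced to a delegate) and decides for itself what to fetch.
    public void AddProduct(string productId, decimal quantity, Func<string, Product> findProduct)
    {
        AddProduct(findProduct(productId), quantity);
    }
}
Either way, the rule stays inside the Order aggregate; the only difference is which layer fetches the Product data.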

CQRS/Event Sourcing - Does one expect to receive an Aggregate Id from the user/request?

I am currently just trying to learn some new programming patterns and I decided to give event sourcing a shot.
I have decided to model a warehouse as my aggregate root in the domain of shipping/inventory, where the number of warehouses is generally pretty constant (i.e. a company won't be adding warehouses too often).
I have run into the question of how to set my aggregateId, which should correspond to a warehouse, on my server. Most examples I have seen, including this one, show the aggregate ID being generated server side when a new aggregate is being created (in my case a warehouse), and then passed in the command request when referring to that aggregate for subsequent commands.
Would you say this is the correct approach? Can I expect the user to know and pass aggregate Ids when issuing commands? I realize this is probably domain dependent and could also be a UI/UX choice as well; I'm just wondering what others have done. It would make more sense to me if my event-sourced aggregates were created more frequently, such as with meal tabs or shopping carts.
Thanks!
Heuristic: aggregate id, in many cases, is analogous to the primary key used to distinguish entities in a database table. Many of the lessons of natural vs surrogate keys apply.
Can I expect the user to know and pass aggregate Ids when issuing commands?
You probably can't depend on the human to know the aggregate ids. But the client that the human operator is using can very well know them.
For instance, if an operator is going to be working in a single warehouse during a session, then we might look up the appropriate identifier, cache it, and use it when constructing messages on behalf of the user.
Analog: when you fill in a web form and submit it, the browser does the work of looking at the form action and using that information to construct the correct URI, and similarly the correct HTTP Request.
The client will normally know what the ID is, because it just got it during a previous query.
Creation patterns are weird. It can, in some circumstances, make sense for the client to choose the identifier to be used when creating a new aggregate. In others, it makes sense for the client to provide an identifier for the command message, and the server decides for itself what the aggregate identifier should be.
It's messaging, so you want to be careful about coupling the client directly to your internal implementation details -- especially if that client is under a different development schedule. If you get the message contract right, then the server and client can evolve in any way consistent with the contract at any time.
You may want to review Greg Young's 10 year retrospective, which includes a discussion of warehouse systems. TL;DR - in many cases the messages coming from the human operators are events, not commands.
Would you say this is the correct approach?
You're asking if one of Greg Young's Event Sourcing samples represents the correct approach... Given that the combination of CQRS and Event Sourcing was essentially (re)invented by Greg, I'd say there's a pretty good chance of that.
In general, letting the code that implements the Command-side generate a GUID for every Command, Event, or other persistent object that it needs to write is by far the simplest implementation, since GUIDs are guaranteed to be unique. In a distributed system, uniqueness without coordination is a big thing.
Can I expect the user to know and pass aggregate Ids when issuing commands?
No, and you particularly can't expect a user to know the GUID of their assets. What you may be able to do is to present the user with a list of his or her assets. Each item in the list will have the GUID associated, but it may not be necessary to surface that ID in the user interface. It's just data that the underlying UI object carries around internally.
In some cases, users do need to know the ID of some of their assets (e.g. if it involves phone support). In that case, you can add a lookup API to address that concern.
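For illustration, the overall shape might look like the sketch below; CreateWarehouse, ReceiveShipment and the handler are hypothetical names, not taken from any particular sample. The command side mints the GUID when the warehouse is created, and later commands carry an id the client learned from an earlier query.
using System;

// Creation command: no aggregate id yet.
public sealed record CreateWarehouse(string Name);

// Subsequent commands refer to an aggregate the client already knows about.
public sealed record ReceiveShipment(Guid WarehouseId, string Sku, int Quantity);

public static class WarehouseCommandHandler
{
    // The command side generates the id; GUID uniqueness needs no coordination.
    public static Guid Handle(CreateWarehouse cmd)
    {
        var warehouseId = Guid.NewGuid();
        // ... append a WarehouseCreated event for warehouseId to the event store ...
        return warehouseId;
    }
}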

Strategies for search across disparate data sources

I am building a tool that searches people based on a number of attributes. The values for these attributes are scattered across several systems.
As an example, dateOfBirth is stored in a SQL Server database as part of system ABC. That person's sales region assignment is stored in some horrible legacy database. Other attributes are stored in a system only accessible over an XML web service.
To make matters worse, the legacy database and the web service can be really slow.
What strategies and tips should I consider for implementing a search across all these systems?
Note: Although I posted an answer, I'm not confident it's a great answer. I don't intend to accept my own answer unless no one else gives better insight.
You could consider using an indexing mechanism to retrieve and locally index the data across all the systems, and then perform your searches against the index. Searches would be an awful lot faster and more reliable.
Of course, this just shifts the problem from one part of your system to another - now your indexing mechanism has to handle failures and heterogeneous systems, but that may be an easier problem to solve.
Another factor is how often the data changes. If you have to query data in real-time that goes stale very quickly, then indexing may not be practical.
If you can get away with a restrictive search, start by returning a list based on the search criteria corresponding to the fastest data source. Then join up those records with the other systems and remove records which don't match the search criteria.
If you have to implement OR logic, this approach is not going to work.
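A bare-bones sketch of that filter-then-enrich flow, with invented source interfaces (AND-only, per the caveat above):
using System;
using System.Collections.Generic;
using System.Linq;

public sealed record PersonHit(string PersonId, DateTime DateOfBirth);

public interface IFastSource
{
    // e.g. the SQL Server database that holds dateOfBirth
    IEnumerable<PersonHit> FindByDateOfBirthRange(DateTime from, DateTime to);
}

public interface ISlowSource
{
    // e.g. the legacy database that holds the sales region assignment
    string GetSalesRegion(string personId);
}

public static class PeopleSearch
{
    // Narrow with the fast source first, then drop records that fail the
    // criteria held by the slower source.
    public static IReadOnlyList<PersonHit> Search(
        IFastSource fast, ISlowSource slow,
        DateTime bornFrom, DateTime bornTo, string salesRegion)
    {
        return fast.FindByDateOfBirthRange(bornFrom, bornTo)
                   .Where(hit => slow.GetSalesRegion(hit.PersonId) == salesRegion)
                   .ToList();
    }
}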
While not an actual answer, this might at least get you partway to a workable solution. We had a similar situation at a previous employer - lots of data sources, different ways of accessing those data sources, different access permissions, military/government/civilian sources, etc. We used Mule, which is built around the Enterprise Service Bus concept, to connect these data sources to our application.
My details are a bit sketchy, as I wasn't the actual implementor, just an integrator, but what we did was define a channel in Mule. Then you write a simple integration piece to go between the channel and the data source, and another between the application and the channel. The integration piece does the work of making the actual query and formatting the results, so we had a generic SQL integration piece for accessing a database, and for things like web services we had some base classes that implemented common functionality, so the actual customization of the integration pieces was a lot less work than it sounds like.
The application could then query the channel, which would handle accessing the various data sources, transforming the results into a normalized bit of XML, and returning them to the application.
This had a lot of advantages for our situation. We could include new data sources for existing queries by simply connecting them to the channel - the application didn't have to know or care what data sources were there, as it only looked at the data from the channel. Since data can be pushed or pulled from the channel, we could have a data source update the application when, for example, it was updated.
It took a while to get it configured and working, but once we got it going, we were pretty successful with it. In our demo setup, we ended up with 4 or 5 applications acting as both producers and consumers of data, and connecting to maybe 10 data sources.
Have you thought of moving the data into a separate structure?
For example, Lucene stores data to be searched in a schema-less inverted index. You could have a separate program that retrieves data from all your different sources and puts it in a Lucene index. Your search could work against this index, and the search results could contain a unique identifier and the system it came from.
http://lucene.apache.org/java/docs/
(There are implementations in other languages as well)
Have you taken a look at YQL? It may not be the perfect solution, but it might give you a starting point to work from.
Well, for starters I'd parallelize the queries to the different systems. That way we can minimize the query time.
You might also want to think about caching and aggregating the search attributes for subsequent queries in order to speed things up.
You have the option of creating an aggregation service or middleware that aggregates all the different systems so that you can provide a single interface for querying. If you do that, this is where I'd apply the previously mentioned caching and parallelization optimizations.
However, with all of that, you will need to weigh up the development time/deployment time/long-term benefits of the effort against migrating the old legacy database to a faster, more modern one. You haven't said how tied into other systems those databases are, so it may not be a very viable option in the short term.
EDIT: in response to data going out of date. You can consider caching your data if you don't need it to always match the database in real time. Also, if some data doesn't change very often (e.g. dates of birth) then you should cache it. If you employ caching, you could make your system configurable as to which tables/columns to include or exclude from the cache, and you could give each table/column its own cache timeout with an overall default.
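A small sketch of the parallelize-and-cache idea, assuming each back end is already wrapped in an asynchronous call; the delegates and names are illustrative only.
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class AggregatedPersonLookup
{
    // Slowly-changing attributes (e.g. date of birth) are cached after the first fetch.
    private static readonly ConcurrentDictionary<string, DateTime> DobCache =
        new ConcurrentDictionary<string, DateTime>();

    public static async Task<(DateTime Dob, string Region, string OtherAttrs)> FetchAsync(
        string personId,
        Func<string, Task<DateTime>> sqlDob,          // system ABC (SQL Server)
        Func<string, Task<string>> legacyRegion,      // the slow legacy database
        Func<string, Task<string>> webServiceAttrs)   // the slow XML web service
    {
        // Kick off all three queries at once instead of one after another.
        var dobTask = DobCache.TryGetValue(personId, out var cachedDob)
            ? Task.FromResult(cachedDob)
            : sqlDob(personId);
        var regionTask = legacyRegion(personId);
        var attrsTask = webServiceAttrs(personId);

        await Task.WhenAll(dobTask, regionTask, attrsTask);

        DobCache[personId] = dobTask.Result;
        return (dobTask.Result, regionTask.Result, attrsTask.Result);
    }
}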
Use Pentaho/Kettle to copy all of the data fields that you can search on and display into a local MySQL database
http://www.pentaho.com/products/data_integration/
Create a batch script to run nightly and update your local copy. Maybe even every hour. Then, write your query against your local MySQL database and display the results.

How do I secure data access in my new API?

I am designing an API, and I'd like to ask a few questions about how best to secure access to the data.
Suppose the API allows access to artists. Artists have albums, which have songs.
The users of the API have access to a subset of all the artists. If a user calls the API asking for some artist, it is easy to check if the user is allowed to do so.
Next, if the user asks for an album, the API has to check if the album belongs to an artist that the user is allowed to access. Accessing songs means that the API has to check the album and then the artist before access can be granted.
In database terms, I am looking at an increasing number of joins between tables for each additional layer that is added. I don't want to do all those joins, and I also don't want to store the user id everywhere in order to limit the number of joins.
To work around this, I came up with the following approach.
The API gives the user a reference to an object, for instance an artist object. The user can then ask that artist object for the albums, which returns a list object. The list object can be traversed, and album objects can be obtained from it. Likewise, from an album object a songlist object can be obtained and from that, the individual song objects.
Since the API trusts the artist object, it also trusts any objects (albums in this case) that the user gets from it, without further checks. And so forth for all the other objects. So I am delegating the security/trust to objects down the chain.
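A bare-bones sketch of that object-chain idea (all names hypothetical): access is checked once when the artist object is handed out, and everything reached through it is trusted.
using System;
using System.Collections.Generic;

// The only place access is checked: handing out the artist object.
public sealed class MusicApi
{
    private readonly Func<string, string, bool> _canAccessArtist;   // (userId, artistId) -> allowed?
    public MusicApi(Func<string, string, bool> canAccessArtist) { _canAccessArtist = canAccessArtist; }

    public ArtistObject GetArtist(string userId, string artistId)
    {
        if (!_canAccessArtist(userId, artistId))
            throw new UnauthorizedAccessException();
        return new ArtistObject(artistId);
    }
}

// Anything reached through a trusted artist is itself trusted - no further checks.
public sealed class ArtistObject
{
    public string ArtistId { get; }
    public ArtistObject(string artistId) { ArtistId = artistId; }

    public IReadOnlyList<AlbumObject> Albums()
    {
        // load the albums for ArtistId here; no permission check needed
        return new List<AlbumObject>();
    }
}

public sealed class AlbumObject
{
    public IReadOnlyList<SongObject> Songs() { return new List<SongObject>(); }
}

public sealed class SongObject { }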
I would like to ask you what you think of it, what's good or bad about it, and of course, how you would solve this "problem".
Second, how would you approach this if the API should be RESTful? My approach seems less applicable in that case.
Is this a real program or rather a sample to illustrate a question?
Because it is not clear why you would restrict access to the artists and albums rather than just to individual media items or even tracks.
I don't think that the joins should cost you that much; any half-smart DB system will do them cheaply enough when you are making a fairly simple criteria match across multiple tables.
IMHO, the problem with putting that much security logic into queries is that it limits your ability to handle more complex DRM issues that are sure to come up. For example, what if the album is a collection from multiple artists? What if the album contains a track which is a duet and I only have access to one artist? Etc., etc.
My view is that in those situations, a convenient programming model with sensible exceptions is much more important than the performance of individual queries, which you could always cache or optimize in the future. What you are trying to do with queries sounds like premature optimization.
Design your programming model to be as flexible as possible. Define a sensible set of extensions, then work on implementing the database and optimize queries after profiling the real system.
It is possible that doing the joins is much faster than your object approach (even though the object approach is more elegant). With the joins you have only one DB request; with the objects you have many. (Or you have to retrieve all the "possible" data in the first request, which could also slow things down.)
I recommend doing the joins. If there is a problem with the SQL you can ask on Stack Overflow :D
Another idea:
If you make urls like "/beatles/whitealbum/happinesisawarmgun"
then you would know the artist at the beginning of the request and could get the permission at once, without traversing - because the url contains the traversal information. Just a thought.
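A tiny sketch of that thought, with an invented helper name: pull the artist out of the first path segment and check the permission once, before any traversal.
using System;

public static class UrlPermissionCheck
{
    // For a path like "/beatles/whitealbum/happinesisawarmgun" the first segment
    // names the artist, so the permission check can happen once, up front.
    public static bool IsAllowed(string path, string userId, Func<string, string, bool> canAccessArtist)
    {
        var artistSegment = path.Trim('/').Split('/')[0];
        return canAccessArtist(userId, artistSegment);
    }
}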
It is a good idea to include a security descriptor for each resource and not only for the top-level one. In your example the security descriptor is simply the artist's ID, or a list of artists' IDs if you support duets etc. So I would think about adding the list of IDs to both the artists and the songs tables. You can add a string field where the artist IDs for the resource are written in a comma-separated way.
Such a solution scales well; you can add more layers without increasing the time needed for the security check. Adding a new resource also doesn't incur any additional penalty except for one more field to insert (based on the resource's parent's field). And of course, this solution supports the special situations described above (like more than one artist, etc.).
This kind of solution also doesn't violate RESTful architecture.
And the fact that each resource contains its own security descriptor generalizes the resource's access permissions, making it possible to implement a completely different security policy in the future (for example, making access permissions more granular, based on albums, not only artists).
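One way the comma-separated descriptor check could look in code (the column and method names are invented):
using System;
using System.Linq;

public static class SecurityDescriptor
{
    // A column such as AllowedArtistIds might hold "12,57,308"; any resource row
    // can then be checked directly against the artists the user may access.
    public static bool CanAccess(string allowedArtistIds, int[] userArtistIds)
    {
        return allowedArtistIds
            .Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries)
            .Select(int.Parse)
            .Any(userArtistIds.Contains);
    }
}
A duet song would simply store both artist IDs in that field, so the check stays a single string scan rather than a chain of joins.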
