Modelling Access Control in MongoDB - node.js

Does anyone have an example of modelling access control in MongoDB? The situation I'm thinking of is:
There are a set of resources, each being their own document (e.g. cars, people, trees etc.).
A user can gain access to a resource through an explicit grant, or implicitly: by owning the resource, by membership of something in another collection (e.g. a role), or in some other implicit way.
In a single collection.find() call, which may have skip and limit options applied (for pagination), is there a way to check all these explicit and implicit paths and return only the resources the user has access to?
In MySQL we have modelled this using a grants table with resource id, granting user id, authorized user id and operation (read, write, etc.). We then, in one query, select all resources where at least one subquery is true; each subquery checks a different path to access, e.g. one checks for a grant, one checks for ownership, etc.
I just can't wrap my head around doing this in MongoDB, I'm not sure if it's even possible...
Thanks

You can't query across more than one collection at a time (MongoDB has no joins). Ideally, shouldn't access control be part of the business logic? Your backend (PHP/C#/whatever language) ought to ensure that the current request is authorized. If so, then simply query the requested document.
If you feel you need to implement the exact same structure in MongoDB (which I suggest you don't), then you will need to embed all those fields (the ones from the other MySQL tables that help you identify whether the request is authorized) in each and every document of every collection. You will be duplicating data (denormalizing it), which brings the headache of ensuring that all the copies are updated and have the same value.
Edit 1:
Let's talk about the Car document. To track its owner, you will have an owner property (containing the _id of the owner document). To track all users who can 'use' the car (an explicit grant), you will have an array allowedDrivers (containing the _id of each user document). Let's assume the current user making the request belongs to the 'admin' role. The user document will have an array applicableRoles that stores the _id of each applicable role document.
To retrieve all cars that the user has access to, you only need to make two queries: one to fetch his roles. If he is an admin, return ALL cars. If he is not, then make another query where owner equals his id or allowedDrivers contains his id.
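A minimal sketch of that two-query approach with the Node.js MongoDB driver, assuming the field names above (owner, allowedDrivers, applicableRoles) and that you know the _id of the 'admin' role document:

import { Db, ObjectId } from "mongodb";

// Returns the cars a user may access, under the schema sketched above:
// cars have an owner and an allowedDrivers array; users have applicableRoles.
async function findAccessibleCars(
  db: Db,
  userId: ObjectId,
  adminRoleId: ObjectId, // hypothetical _id of the 'admin' role document
  page = 0,
  pageSize = 20
) {
  // Query 1: load the user's roles.
  const user = await db.collection("users").findOne(
    { _id: userId },
    { projection: { applicableRoles: 1 } }
  );
  const roles: ObjectId[] = user?.applicableRoles ?? [];
  const isAdmin = roles.some((roleId) => roleId.equals(adminRoleId));

  // Admins see everything; everyone else sees owned or explicitly granted cars.
  const filter = isAdmin
    ? {}
    : { $or: [{ owner: userId }, { allowedDrivers: userId }] };

  // Query 2: fetch the cars, with skip/limit for pagination.
  return db.collection("cars")
    .find(filter)
    .skip(page * pageSize)
    .limit(pageSize)
    .toArray();
}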
I understand your actual use case may be more complicated, but chances are there is a document-oriented way of solving it. You have to realize that the way data is modelled in documents is vastly different from how you would model it in an RDBMS.

Doing it in business logic would be painfully slow and inefficient.
How so? This is business logic: if user A owns post B, then let them do the action (MVC style); otherwise don't.
That sounds like business logic to me, and most frameworks consider this business logic to be placed within the controller action (of the MVC paradigm); e.g. in PHP's Yii:
Yii::app()->roles->hasAccess('some_view_action_for_a_post', $post)
I think that by doing it at the database end you have confused your storage layer with your business layer.
Also, with how complex some role-based permission checks can get, the queries you run must be pretty big, with many sub-selects. Considering how MySQL creates and handles result sets (sub-selects ARE NOT JOINS), I have a feeling these queries do not scale particularly well.
Also consider what happens when you want to change the roles, or a function that defines a role, that can access a certain object: you will have to change your SQL queries directly, instead of just adding the role to a roles table, assigning the object properties for that role, and assigning users that role (i.e. code changes instead of data changes).
So I would seriously look into how other frameworks in other languages (and your own) do their RBAC, because I think you have blurred the line and made your life quite hard with what you have done. In fact, here might be a good place to start: Group/rule-based authorization approach in node.js and express.js
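For the node.js side of the original question, a minimal sketch of the same idea as Express middleware; the posts collection, the route, and the assumption that earlier middleware has connected db and populated req.user are all hypothetical:

import express, { Response, NextFunction } from "express";
import { Db, ObjectId } from "mongodb";

declare const db: Db; // assumed: connected elsewhere during app setup

const app = express();

// Authorization check run before the controller action, mirroring Yii's
// roles->hasAccess(): the owner of the post (or an admin) may edit it.
async function canEditPost(req: any, res: Response, next: NextFunction) {
  const post = await db.collection("posts").findOne({ _id: new ObjectId(req.params.id) });
  if (!post) {
    res.sendStatus(404);
    return;
  }

  const isOwner = post.owner.equals(req.user._id);
  const isAdmin = req.user.roles.includes("admin");
  if (isOwner || isAdmin) {
    next();
  } else {
    res.sendStatus(403);
  }
}

app.put("/posts/:id", canEditPost, (req, res) => {
  // the actual business action runs only if the authorization check passed
  res.sendStatus(204);
});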

Related

DDD considering Repositories & Services for entities

I've been getting acquainted with DDD and trying to understand the way Entities and Aggregate Roots interact.
Below is the example of the situation:
Let's say there is a user and he/she has multiple email addresses (up to 200, for the sake of example). Each email address has its own identity, and so does the user. There is a one-to-many relationship between a user and their emails.
From the above example I consider Users and Emails as two entities, with Users as the aggregate root.
DDD Rules that I came across:
Rule: Only aggregate root has access to the repository.
Question 1: Does it mean that I cannot have a separate database table/collection to store the emails separately? Meaning that the emails have to be embedded inside the user document.
Rule: Entities outside the aggregate can only access other entities in the aggregate via the aggregate root.
Question 2: Now, suppose I do split them up into two different tables/collections and link the emails by having a field in email called associatedUserId that holds a reference to the user the email belongs to. Can't I directly have an API endpoint like /users/{userId}/emails and handle it directly in EmailService.getEmailsByUserId(String userId)? If not, how do I model this?
I am sorry if the question seems a bit too naive but I can't seem to figure it out.
Only aggregate root has access to the repository
Does it mean that I cannot have a separate database table/collection to store the emails separately? Meaning that the emails have to be embedded inside the user document.
It means that there should be a single lock to acquire if you are going to make any changes to any of the member entities of the aggregate. That certainly means that the data representation of the aggregate is stored in a single database; but you could of course distribute the information across multiple tables in that database.
Back in 2003, using relational databases as the book of record was common; one to many relationships would normally involve multiple tables all within the same database.
Entities outside the aggregate can only access other entities in the aggregate via the aggregate root.
I can't directly have an API endpoint like /users/{userId}/emails and handle it directly in the EmailService.getEmailsByUserId(String userId)?
Of course you can; you'll do that by first loading the root entity of the User aggregate, then invoking methods on that entity to get at the information that you need.
A perspective: Evans was taking a position against the idea that the application should be able to manipulate arbitrary entities in the domain model directly. Instead, the application should only be allowed to touch the "root" entities in the domain model. The restriction, in effect, means that the application doesn't really need to understand the constraints that are shared by multiple entities.
Four or five years later CQRS appeared, further refining this idea -- it turns out that in read-only use cases, the domain model doesn't necessarily contribute very much; you don't need to worry about the invariants if they have already been satisfied and you aren't changing anything.
In effect, this suggests that GET /users/{userId}/emails can just pull the data out of a read-only view, without necessarily involving the domain model at all. But POST /users/{userId}/emails needs to demonstrate the original care (meaning, we need to modify the data via the domain model).
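A minimal sketch of that split in an Express app, under the assumption that a read-only emailViews projection, a userRepository for the write side, and an addEmail method on the User aggregate all exist (every name here is hypothetical):

import express from "express";
import { Db } from "mongodb";

declare const db: Db;              // assumed: connected elsewhere
declare const userRepository: any; // assumed: write-side repository for the User aggregate

const app = express();
app.use(express.json());

// Read side: bypass the domain model and serve a read-optimized view.
app.get("/users/:userId/emails", async (req, res) => {
  const emails = await db.collection("emailViews") // hypothetical projection
    .find({ userId: req.params.userId })
    .toArray();
  res.json(emails);
});

// Write side: load the aggregate root and let it enforce its invariants.
app.post("/users/:userId/emails", async (req, res) => {
  const user = await userRepository.getById(req.params.userId); // root entity only
  user.addEmail(req.body.address);                              // invariants checked here
  await userRepository.save(user);
  res.sendStatus(201);
});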
Does this mean that I need to first go to the UserRepo and pull out the user and then pull out the emails? Can't I just make an EmailService talking to an Email Repo directly?
In the original text by Evans, repositories give access to root entities, rather than arbitrary entities. So if "email" is an entity within the "user" aggregate, then it normally wouldn't have a repository of its own.
Furthermore, if you find yourself fighting against that idea, it may be a "code smell" trying to bring you to recognize that your aggregate boundaries are in the wrong place. If email and user are in different aggregates, then of course you would use different repositories to get at them.
The trick is to recognize that aggregate design is a reflection of how we lock our data for modification, not how we link our data for reporting.

DDD: where should logic go that tests the existence of an entity?

I am in the process of refactoring an application and am trying to figure out where certain logic should fit. For example, during the registration process I have to check if a user exists based upon their email address. As this requires testing whether the user exists in the database, it seems this logic should not be tied to the model, since its existence is dictated by it being in the database.
However, I will have a method on the repository responsible for fetching the user by email, etc. This handles the retrieval of the user if they exist. From a use case perspective, registration seems to be a use case scenario, and accordingly it seems there should be a UserService (application service) with a register method that would call the repository method and perform if/then logic to determine whether the user entity returned was null or not.
Am I on the right track with this approach, in terms of DDD? Am I viewing this scenario the wrong way and if so, how should I revise my thinking about this?
This link was provided as a possible solution: Where to check user email does not already exits?. It does help, but it does not seem to close the loop on the issue. The thing I seem to be missing from this article is who would be responsible for calling the CreateUserService: an application service, or a method on the aggregate root into which the CreateUserService object would be injected along with any other relevant parameters?
If the answer is the application service, that seems like you are losing some encapsulation by taking the domain service out of the domain layer. On the other hand, going the other way would mean having to inject the repository into the domain service. Which of those two options would be preferable and more in line with DDD?
I think the best fit for that behaviour is a Domain Service. A domain service can access persistence, so you can check for existence or uniqueness there.
Check this blog entry for more info.
I.e:
public class TransferManager
{
    private readonly IEventStore _store;
    private readonly IDomainServices _svc;
    private readonly IDomainQueries _query;
    private readonly ICommandResultMediator _result;

    public TransferManager(IEventStore store, IDomainServices svc, IDomainQueries query, ICommandResultMediator result)
    {
        _store = store;
        _svc = svc;
        _query = query;
        _result = result;
    }

    public void Execute(TransferMoney cmd)
    {
        // interacting with the Infrastructure
        var accFrom = _query.GetAccountNumber(cmd.AccountFrom);

        // Setup value objects
        var debit = new Debit(cmd.Amount, accFrom);

        // invoking Domain Services
        var balance = _svc.CalculateAccountBalance(accFrom);
        if (!_svc.CanAccountBeDebitted(balance, debit))
        {
            // return some error message using a mediator
            // this approach works well inside monoliths where everything happens in the same process
            _result.AddResult(cmd.Id, new CommandResult());
            return;
        }

        // using the Aggregate and getting the business state change expressed as an event
        var evnt = Transfer.Create(/* args */);

        // storing the event
        _store.Append(evnt);

        // publish event if you want
    }
}
from http://blog.sapiensworks.com/post/2016/08/19/DDD-Application-Services-Explained
The problem that you are facing is called set-based validation. There are a lot of articles describing possible solutions. I will give an extract from one of them here (the context is CQRS, but it can be applied to some degree to any DDD architecture):
1. Locking, Transactions and Database Constraints
Locking, transactions and database constraints are tried and tested tools for maintaining data integrity, but they come at a cost. Often the code/system is difficult to scale and can be complex to write and maintain. But they have the advantage of being well understood with plenty of examples to learn from. By implication, this approach is generally done using CRUD based operations. If you want to maintain the use of event sourcing then you can try a hybrid approach.
2. Hybrid Locking Field
You can adopt a locking-field approach. Create a registry or lookup table in a standard database with a unique constraint; if you are unable to insert the row, abandon the command. Reserve the address before issuing the command. For this sort of operation it is best to use a data store that isn't eventually consistent and can guarantee the constraint (uniqueness in this case). Additional complexity is a clear downside of this approach, but less obvious is the problem of knowing when the operation is complete: read-side updates are often carried out in a different thread, process or even machine from the command, and there could be many different operations happening. (A sketch of this reservation idea follows the list.)
3. Rely on the Eventually Consistent Read Model
To some this sounds like an oxymoron; however, it is a rather neat idea. Inconsistent things happen in systems all the time. Event sourcing allows you to handle these inconsistencies. Rather than throwing an exception and losing someone's work all in the name of data consistency, simply record the event and fix it later.
As an aside, how do you know a consistent database is consistent? It keeps no record of the failed operations users have tried to carry out. If I try to update a row in a table that has been updated since I read from it, then the chances are I’m going to lose that data. This gives the DBA an illusion of data consistency, but try to explain that to the exasperated user!
Accepting these things happen, and allowing the business to recover, can bring real competitive advantage. First, you can make the deliberate assumption these issues won’t occur, allowing you to deliver the system quicker/cheaper. Only if they do occur and only if it is of business value do you add features to compensate for the problem.
4. Re-examine the Domain Model
Let's take a simplistic example to illustrate how a change in perspective may be all you need to resolve the issue. Essentially we have a problem checking for uniqueness or cardinality across aggregate roots because consistency is only enforced within the aggregate. An example could be a goalkeeper in a football team. A goalkeeper is a player. You can only have 1 goalkeeper per team on the pitch at any one time. A data-driven approach may have an 'IsGoalKeeper' flag on the player. If the goalkeeper is sent off and an outfield player goes in the goal, then you would need to remove the goalkeeper flag from the goalkeeper and add it to one of the outfield players. You would need constraints in place to ensure that assistant managers didn't accidentally assign a different player, resulting in 2 goalkeepers. In this scenario, we could model the IsGoalKeeper property on the Team, OutFieldPlayers or Game aggregate. This way, maintaining the cardinality becomes trivial.
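A minimal sketch of the reservation idea from option 2 above, using a MongoDB unique index as the lookup table (the emailReservations collection name is hypothetical):

import { Db } from "mongodb";

// One-time setup: a unique index makes the database guarantee the constraint.
async function ensureReservationIndex(db: Db) {
  await db.collection("emailReservations").createIndex({ email: 1 }, { unique: true });
}

// Reserve the email address before issuing the registration command.
// Returns false if someone else already holds it, so the command can be abandoned.
async function reserveEmail(db: Db, email: string): Promise<boolean> {
  try {
    await db.collection("emailReservations").insertOne({ email, reservedAt: new Date() });
    return true;
  } catch (err: any) {
    if (err.code === 11000) return false; // duplicate key: the address is already taken
    throw err;
  }
}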
You seem to be on the right track; the only thing I didn't get is what your UserService.register does.
It should take all the values needed to register a user as input, validate them (using the repository to check whether the email already exists) and, if the input is valid, store the new User.
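A minimal sketch of such a register method in TypeScript, with a hypothetical UserRepository interface standing in for whatever persistence you use:

import { randomUUID } from "crypto";

interface User {
  id: string;
  email: string;
}

interface UserRepository {
  findByEmail(email: string): Promise<User | null>;
  add(user: User): Promise<void>;
}

class UserService {
  constructor(private readonly users: UserRepository) {}

  // Application-service use case: register a new user.
  async register(email: string): Promise<User> {
    // Validation that needs persistence: does this email already exist?
    if (await this.users.findByEmail(email)) {
      throw new Error("A user with this email address is already registered");
    }

    const user: User = { id: randomUUID(), email };
    await this.users.add(user);
    return user;
  }
}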
Problems can arise when the validation involves complex queries. In that case you may need to create a secondary store with special indexes suited to queries you can't do with your domain model, which means managing two different stores that can be out of sync (a user exists in one but isn't replicated in the other one yet).
This kind of problem happens when you store your aggregates in something like a key-value store, where you can only search by the id of the aggregate. If you are using something like a SQL database that lets you search on your entities' fields, you can do a lot with simple queries.
The only thing you need to take care of is to avoid mixing query logic and command logic. In your example the lookup you need is easy: it is just one field and the result is a boolean. Sometimes it is harder (time-based operations, or queries spanning multiple tables and aggregating results); in those cases it is better to make your (command) service use a (query) service that offers a simple API to do the calculation, like:
interface UserReportingService {
    ComplexResult aComplexQuery(AComplexInput input);
}
You can implement that with a class that uses your repositories, or with an implementation that executes the query directly against your database (SQL, or whatever).
The difference is that if you use the repositories you "think" in terms of your domain objects, whereas if you write the query directly you think in terms of your db abstractions (tables/sets in the case of SQL, documents in the case of Mongo, etc.). One or the other depends on the query you need to do.
It is fine to inject a repository into the domain.
A repository should have a simple interface, so that domain objects can use it as a simple collection or storage. The main idea of repositories is to hide data access behind a simple and clear interface.
I don't see any problem in calling domain services from a use case. The use case is supposed to be the orchestrator, and domain services are actions. It is fine (and even unavoidable) to trigger domain actions from a use case.
To decide, you should analyze: where does this restriction come from?
Is it a business rule? Or maybe the user shouldn't be a part of the model at all?
Usually "User" means authorization and authentication, i.e. behaviour that to my mind should be placed in the use case. I prefer to create a separate entity for the domain (e.g. Buyer) and relate it to the use case's user. So when a new user is registered, it is possible to trigger creation of a new Buyer.

Modeling a hierarchical security with a graph

I'm trying to create an access control list for my documents. Each document needs to store which groups have CRUD rights (well, not create rights on an already-created document). The rights may be hierarchical, in that a document might not have explicit rights but rather inherit them from the account that owns the document.
For example a user can have lots of recipes. By default the user only allows his group to read the recipes. Think Linux style groups where each user gets their own group. This default is part of the user's profile. Now for 10 recipes the user allows the group "public" to read the recipes. These are explicit designations within the recipe document itself.
Is a graph a good fit here? Should I have a vertex for user and a vertex for recipe with an edge between them called, I don't know, OWNS? In the query I could say, in pseudocode, WHERE recipe.CRUD.readers contains "public" OR recipe's OWNS inbound profile.recipes.CRUD.readers contains "public". Is this doable? Would Arango perform well on this?
I know I could duplicate the default CRUD from the profile down into the recipes. This seems like a) a massive waste of memory, b) error prone if the system gets out of sync (do transactions work across collections in a cluster?), and c) operationally intensive, since every document has to get scanned and possibly updated.

Solve apparent need for outside reference to entity inside aggregate (DDD)

I'm trying to follow DDD principles to create a model for determining whether or not an Identity has access to an Action belonging to a Resource.
A Resource (e.g. a webservice) is something that holds a number of Actions (e.g. methods), which can be accessed or not. An Identity is something that wants to access one or more Actions on a Resource. For example, someone uses an api-key in a call to a webservice method, and it must be determined whether or not access is allowed.
As I currently see it, Identity and Resource are aggregate roots, and Action is an entity belonging to Resource. It doesn't seem to make sense for an Action to live on its own; it will always belong to one Resource. An Identity needs to know to which Resource Actions it has access. This seems to suggest the following model.
However, as I understand it, this violates the principle that something outside an aggregate cannot reference an entity within the aggregate. It must go through the root. Then I'm thinking, what if Action was the aggregate root and Resource an entity? But that doesn't seem very logical to me. I've also been thinking of merging Resource and Action into one entity, which would then be an aggregate root, but that also seems wrong to me.
So it leaves me kind of stuck on how to model this correctly using DDD principles. Anyone have a good idea on how to model this?
Update: The model I'm trying to create is the identity model for defining which resource actions an Identity is allowed to access. It is not a model for the actual implementation of resources and actions.
Update 2 - invariants:
Id of all objects is given at birth, is unique, and doesn't change. ApiKey of Identity must be unique across all Identities.
Name of Action must be unique within aggregate, but two different Resources can have Actions with same names, e.g. Resource "R1" can have an Action "A1" and Resource "R2" can also have an Action "A1", but the two "A1"s are not the same.
Query or Write Operation?
The domain model in terms of aggregates and entities has its purpose in DDD in order to simplify the expression and enforcement of invariants as write operations are applied to the model.
As mentioned in #VoiceOfUnreason's answer, the question 'Can this user do action A on resource R' doesn't necessarily need to flow through the domain model - it can be answered with a query against either a pre-projected read-only model, or standard SQL querying against the tables that make up the write model persistence (depending on your needs).
Splitting Contexts to Simplify Invariants
However, your question, whilst mostly about how to identify if an identity is allowed to carry out an action, is implicitly seeking a simpler model for the updating of resources, actions and permissions. So to explore that idea... there are implicitly two types of write operations:
Defining available resources and actions
Defining which resource action combinations a particular identity is permitted to carry out
It's possible that the model for these two types of operations might be simplified if they were split into different bounded contexts.
In the first, you'd model, as you have done, an Aggregate with Resource as the aggregate root and Action as a contained entity. This permits enforcing the invariant that the action name must be unique within a resource.
As changes are made in this context, you publish events e.g. ActionAddedToResource, ActionRemovedFromResource.
In the second context, you'd have three aggregates:
Identity
ResourceAction
Properties: Id, ResourceId, ResourceName, ActionId, ActionName
Permission
ResourceAction instances would be updated based on events published from the first context - created on ActionAddedToResource, removed on ActionRemovedFromResource. If a resource has no actions, there is no ResourceAction at all.
Permission would contain two identity references - IdentityId and ResourceActionId.
This way, when carrying out the operation "Permit this user to do this action on this resource", the operation is just creating a new Permission instance - reducing the set of operations that affect the Identity aggregate's consistency boundary (assuming there are no invariants that require the concept of a 'permission' to be enforced within an Identity aggregate).
This also simplifies the query side of things, as you just need to search for a Permission entry with a matching identityId, resourceName and actionName after joining Permissions to ResourceActions.
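A minimal sketch of that second context, using the names from this answer; the MongoDB storage calls are an assumption purely for illustration:

import { randomUUID } from "crypto";
import { Db } from "mongodb";

// Read model kept up to date from the first context's events.
interface ResourceAction {
  id: string;
  resourceId: string;
  resourceName: string;
  actionId: string;
  actionName: string;
}

// Granting access is just creating a small Permission aggregate.
interface Permission {
  id: string;
  identityId: string;
  resourceActionId: string;
}

async function permit(db: Db, identityId: string, resourceActionId: string): Promise<void> {
  const permission: Permission = { id: randomUUID(), identityId, resourceActionId };
  await db.collection("permissions").insertOne(permission);
}

// Query side: "is this identity allowed to do action A on resource R?"
async function isAllowed(db: Db, identityId: string, resourceName: string, actionName: string): Promise<boolean> {
  const ra = await db.collection("resourceActions").findOne({ resourceName, actionName });
  if (!ra) return false;
  const found = await db.collection("permissions").findOne({ identityId, resourceActionId: ra.id });
  return found !== null;
}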
Responsibility Layers
The DDD Book in the section on Strategic Design refers to organising your contexts according to responsibility layers. To use the terms from the book, the above suggestion is based on the idea of a 'capability' responsibility layer (defining resources and actions) and an 'operational' responsibility layer (defining identity permissions and checking identity permissions).
For example, someone uses an api-key in a call to a webservice method, and it must be determined whether or not access is allowed.
That's a query. Fundamentally, there's nothing wrong with answering a query by joining two read-only copies of entities that belong to different aggregates.
You do need to be aware that, because the aggregates can change independently of each other, and because they can change independently of your query, the answer you get when you do the join may be stale, and not entirely consistent.
For example, you may be joining a copy of an Identity written 100ms ago to a copy of an Action written 200ms ago. Either of the aggregates could be changing while you are running the query.
Based on the invariants you mention, Identity can contain a Resources dictionary/map where resourceId is the key and the value is a set of unique action names/ids. This gives you uniqueness of action names for each resource per identity:
Map<Resource, Set<Action>>
Alternatively, you could have a set/list of Resources and they have a collection of Actions on them. Uniqueness can be enforced by the collection types available in the language you're coding in:
Set<Resource> Resources

class Resource {
    Set<Action> Actions
}
Even simpler, just create a Resource-Action key by combining the two ids and store it in a set or something to give you uniqueness:
Resource1-Action1
Resource1-Action2
Resource2-Action1
...etc
You can then have a method on Identity to add a new Resource-Action combination.
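A minimal sketch of that simplest variant, with an Identity holding a set of combined keys (the class and method names are only illustrative):

class Identity {
  constructor(
    public readonly id: string,
    public readonly apiKey: string,
    // Keys of the form "<resourceId>-<actionId>", e.g. "Resource1-Action1".
    private readonly permittedActions: Set<string> = new Set()
  ) {}

  // Add a new Resource-Action combination; the Set gives uniqueness for free.
  allow(resourceId: string, actionId: string): void {
    this.permittedActions.add(`${resourceId}-${actionId}`);
  }

  canAccess(resourceId: string, actionId: string): boolean {
    return this.permittedActions.has(`${resourceId}-${actionId}`);
  }
}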
I don't see anything in your description to warrant Actions being Entities as they appear to have no identity of their own.
This is really simple though, so I am presuming you've simplified the domain considerably.
I will also expand on the bit identified by #VoiceOfUnreason:
For example, someone uses an api-key in a call to a webservice method,
and it must be determined whether or not access is allowed.
How would the particular bit of exposed functionality know what security is applied to it? The answer is provided by #Chris Simon: Permission.
I have a common implementation that I use that has not been distilled into an Identity & Access BC of its own, but it follows closely what you are attempting --- I hope :)
A Session has a list of Permission strings. Typically I use a uri to represent a permission since it is quite readable. Something like my-system:\\users\list. Anyway, how the user is assigned these permissions could be anything. There may very well be a Role containing permissions and a user is assigned to one or more roles; or the user may even have a custom list of permissions.
When a SessionToken is requested (via authentication) the server retrieves the permissions for the relevant user and creates a session with the relevant permissions assigned to it. This results in a read-side token/permission.
Each exposed bit of functionality (such as a rest endpoint) is assigned a permission. In c# web-api it is simply an attribute on the method:
[RequiredPermission("my-system:\\order\create")]
My session token is passed in the header and a quick check determines whether the session has expired and whether the session (assigned to the user) has access to the resource.
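A rough node.js equivalent of that check as Express middleware; sessionStore, app, createOrderHandler and the simplified permission string are all assumptions:

import express from "express";

declare const app: express.Express;                       // assumed: created elsewhere
declare const sessionStore: any;                          // assumed: token -> { expiresAt, permissions }
declare const createOrderHandler: express.RequestHandler; // assumed: the actual endpoint logic

// Guard an endpoint with a required permission, looked up from the session token.
function requirePermission(permission: string): express.RequestHandler {
  return async (req, res, next) => {
    const token = req.header("session-token");
    const session = token ? await sessionStore.get(token) : null;

    if (!session || session.expiresAt < new Date()) {
      res.sendStatus(401); // missing or expired session
      return;
    }
    if (!session.permissions.includes(permission)) {
      res.sendStatus(403); // session lacks the required permission
      return;
    }
    next();
  };
}

app.post("/orders", requirePermission("my-system:order:create"), createOrderHandler);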
Using your design, the Action may very well carry the required Permission. The User would still require a list of either roles or UserAction entries that contain, perhaps, the ResourceId and ActionId. When the user logs in, the read-optimized session structures are created from that structure.
If there is an arbitrary Action list that can be assigned to any Resource then both Resource and Action are probably aggregates. Then you would need a ResourceAction as mentioned by #Chris Simon. The ResourceAction would then contain the Permission.
That's my take on it...

Implementing Udi's Fetching Strategy - How do I search?

Background
Udi Dahan suggests a fetching strategy as a useful pattern to use for data access. I agree.
The concept is to make roles explicit. For example I have an Aggregate Root - Customer. I want customer in several parts of my application - a list of customers to select from, a view of the customer's details, and I want a button to deactivate a customer.
It seems Udi would suggest an interface for each of these roles. So I have ICustomerInList with very basic details, ICustomerDetail which includes the latest 10 products purchased, and IDeactivateCustomer which has a method to deactivate the customer. Each interface exposes just enough of my Customer Aggregate Root to get the job done in each situation. My Customer Aggregate Root implements all these interfaces.
Now I want to implement a fetching strategy for each of these roles. Each strategy can load a different amount of data into my Aggregate Root because it will be behind an interface exposing only the bits of information needed.
The general method to implement this part is to ask a Service Locator or some other style of dependency injection. This code will take the interface you are wanting, for example ICustomerInList, and find a fetching strategy to load it (IStrategyForFetching<ICustomerInList>). This strategy is implemented by a class that knows to only load a Customer with the bits of information needed for the ICustomerInList interface.
So far so good.
Question
What do you pass to the Service Locator, or to the IStrategyForFetching<ICustomerInList>? All of the examples I see only select one object by a known id. This case is easy: the calling code passes the id through and gets back the specific interface.
What if I want to search? Or I want page 2 of the list of customers? Now I want to pass in more terms that the Fetching Strategy needs.
Possible solutions
Some of the examples I've seen use a predicate - an expression that returns true or false if a particular Aggregate Root should be part of the result set. This works fine for conditions, but what about getting back only the first n customers and no more? Or getting page 2 of the search results? Or specifying how the results are sorted?
My first reaction is to start adding generic parameters to my IStrategyForFetching<ICustomerInList>. It now becomes IStrategyForFetching<TAggregateRoot, TStrategyForSelecting, TStrategyForOrdering>. This quickly becomes complex and ugly. It's further complicated by different repositories: some repositories only supply data when using a particular strategy for selecting, some only support certain types of ordering. I would like the flexibility to implement general repositories that can take sorting functions, along with specialised repositories that only return Aggregate Roots sorted in a particular fashion.
It sounds like I should apply the same pattern used at the start - How do I make roles explicit? Should I implement a strategy for fetching X (Aggregate Root) using the payload Y (search / ordering parameters)?
Edit (2012-03-05)
This is all still valid if I'm not returning the Aggregate Root each time. If each interface is implemented by a different DTO I can still use IStrategyForFetching. This is why this pattern is powerful - what does the fetching and what is returned don't have to map in any way to the aggregate root.
I've ended up using IStrategyForFetching<TEntity, TSpecification>. TEntity is the thing I want to get, TSpecification is how I want to get it.
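A minimal TypeScript rendering of that shape, where the specification carries the search, paging and ordering terms; the concrete MongoDB-backed strategy and all field names are illustrative only:

import { Db } from "mongodb";

// The role-specific view of Customer ("what I want to get").
interface ICustomerInList {
  id: string;
  name: string;
  lastOrderDate: Date;
}

// "How I want to get it": search, paging and ordering terms in one object.
interface CustomerListSpecification {
  nameContains?: string;
  page: number;
  pageSize: number;
  orderBy: "name" | "lastOrderDate";
}

interface IStrategyForFetching<TEntity, TSpecification> {
  fetch(spec: TSpecification): Promise<TEntity[]>;
}

// A concrete strategy for the ICustomerInList role, backed here by MongoDB.
class CustomerInListFetcher implements IStrategyForFetching<ICustomerInList, CustomerListSpecification> {
  constructor(private readonly db: Db) {}

  async fetch(spec: CustomerListSpecification): Promise<ICustomerInList[]> {
    const filter = spec.nameContains
      ? { name: { $regex: spec.nameContains, $options: "i" } }
      : {};
    const docs = await this.db.collection("customers")
      .find(filter)
      .sort({ [spec.orderBy]: 1 })
      .skip(spec.page * spec.pageSize)
      .limit(spec.pageSize)
      .toArray();
    // Project storage documents onto the role-specific interface.
    return docs.map((d) => ({ id: d._id.toString(), name: d.name, lastOrderDate: d.lastOrderDate }));
  }
}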
Have you come across CQRS? Udi is a big proponent of it, and its purpose is to solve this exact issue.
The concept in its most basic form is to separate the domain model from querying. This means that the domain model only comes into play when you want to execute a command / commit a transaction. You don't use data from your aggregates & entities to display information on the screen. Instead, you create a separate data access service (or bunch of them) that contain methods that provide the exact data required for each screen. These methods can accept criteria objects as parameters and therefore do searching with whatever criteria you desire.
A quick sequence of how this works (a minimal sketch in code follows this list):
A screen shows a list of customers that have made orders in the last week.
The UI calls the CustomerQueryService passing a date as criteria.
The CustomerQueryService executes a query that returns only the fields required for this screen, including the aggregate id of each customer.
The user chooses a customer in the list, and chooses to perform the 'Make Important Customer' action/command.
The UI sends a MakeImportantCommand to the Command Service (or Application Service in DDD terms) containing the ID of the customer.
The command service fetches the Customer aggregate from the repository using the ID passed in the command, calls the necessary methods and updates the database.
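A minimal sketch of the two sides of that sequence; the view/collection names, CustomerRepository and makeImportant are hypothetical stand-ins:

import { Db } from "mongodb";

interface Customer {
  makeImportant(): void;
}

interface CustomerRepository {
  getById(id: string): Promise<Customer>;
  save(customer: Customer): Promise<void>;
}

// Query side: tailored read methods, no domain model involved.
class CustomerQueryService {
  constructor(private readonly db: Db) {}

  // Returns only the fields the screen needs, including the aggregate id.
  async customersWithOrdersSince(date: Date) {
    return this.db.collection("customerOrderView") // hypothetical denormalised view
      .find({ lastOrderDate: { $gte: date } })
      .project({ _id: 1, name: 1, lastOrderDate: 1 })
      .toArray();
  }
}

// Command side: load the aggregate, call behaviour, persist.
class MakeImportantCommandHandler {
  constructor(private readonly customers: CustomerRepository) {}

  async handle(cmd: { customerId: string }) {
    const customer = await this.customers.getById(cmd.customerId);
    customer.makeImportant(); // domain behaviour and invariants live here
    await this.customers.save(customer);
  }
}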
Building your app using the CQRS architecture opens you up to a lot of possibilities regarding performance and scalability. You can take this simple example further by creating separate query databases that contain denormalised tables for every view, eventual consistency & event sourcing. There are a lot of videos/examples/blogs about CQRS that I think would really interest you.
I know your question was regarding 'fetching strategy', but I notice that he wrote that article in 2007, and it's likely that he considers CQRS its successor.
To summarise my answer:
Don't try to project cut-down DTOs from your domain aggregates. Instead, just create separate query services that give you a tailored query for your needs.
Read up on CQRS (if you haven't already).
To add to the response by David Masters, I think all the fetching strategy interfaces add needless complexity. Having the Customer AR implement various interfaces which are modeled after a UI is a needless constraint on the AR class, and you will spend far too much effort trying to enforce it. Moreover, it is a brittle solution. What if a view requires data that, while related to Customer, does not belong on the Customer class? Do you then coerce the Customer class and the corresponding ORM mappings to contain that data? Why not just have a separate set of classes for query purposes and be done with it? This allows you to deal with fetching strategies in the place where they belong - the repository. Furthermore, what value does the fetching strategy interface abstraction really add? It may be an appropriate model of what is happening in the application, but it doesn't help in implementing it.
