Dealing with a user-dependent application - domain-driven-design

An application that I'm currently writing is heavily dependent on the currently logged-in user. To give a concrete example, let's say we have a list of Products.
Every user has the 'rights' to see certain Products and particular details of those Products, and to edit or remove an even smaller subset of them.
E.g.:
The user can see 3/5 products
The user can see extra details from 2 out of those 3 products
...
As this is the case with most of the application's domain, I tend to pass the user around in most methods, which becomes cumbersome from time to time: I have to pass the user into some methods just to pass it down to another one that needs it.
My gut tells me I'm missing something, but I'm not sure how I could tackle this problem.
I have given some thought to using a class that holds this user and injecting that class everywhere I need it, or to using a static property.
From time to time it is still handy to pass the user into a method explicitly, so I guess I could allow an override:
public void DoSomething(User user = null)
{
    var u = user ?? this.authService.User;
    // ...
}
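And the injected-context idea I mentioned would look roughly like this (just a sketch; ICurrentUserContext is a placeholder name):

public interface ICurrentUserContext
{
    User User { get; }
}

public class ProductService
{
    private readonly ICurrentUserContext currentUser;

    public ProductService(ICurrentUserContext currentUser)
    {
        this.currentUser = currentUser;
    }

    public void DoSomething()
    {
        // No User parameter needed; the injected context supplies it.
        var user = this.currentUser.User;
        // ...
    }
}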
Are there other ways to tackle this kind of problem?

This is going to depend on where you are in the project in terms of progress. In some instances you may not have the leeway to change this but if you have more control or are starting out then you may have options.
Typically, Identity & Access Control is a bounded context of its own. Authentication and authorization should not be in your core domain. Your core domain (or even sub-domains) is interested in doing what it does provided you have access, but it is not the domain's responsibility to determine that access.
Authorization should take place outside the domain. If you find that you are querying your domain, things probably need to change, since you need a dedicated query layer, and that is probably where the authorization will be applied. Any commands that are limited will have authorization applied at the integration/application layer. Whether we want to restrict a user from registering a new order at all, or only new orders of a certain type, should not really matter in terms of the domain, since it is only the granularity that changes.
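As a rough sketch of what I mean (the service and repository names here are only illustrative, not a prescription), the application layer could apply authorization before it ever touches the domain:

public class ProductApplicationService
{
    private readonly IAuthorizationService authorization; // Identity & Access Control context
    private readonly IProductRepository products;         // core domain

    public ProductApplicationService(IAuthorizationService authorization, IProductRepository products)
    {
        this.authorization = authorization;
        this.products = products;
    }

    public void ChangeProductDetails(Guid userId, Guid productId, string details)
    {
        // Authorization happens here, outside the domain model.
        if (!this.authorization.MayEditProduct(userId, productId))
        {
            throw new UnauthorizedAccessException();
        }

        // The domain just does what it does; it never asks "who are you?".
        var product = this.products.Get(productId);
        product.ChangeDetails(details);
        this.products.Save(product);
    }
}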
You may have a sub-domain that deals with the authorization specific to your domain and an Identity & Access Control generic sub-domain that is more orthogonal.
But you may be in a scenario where there is an uncomfortably high level of coupling between the data-element authorization (a level of classification) and the structure. I am of the opinion that fluid classification should be kept away from one's structure, as the repercussions of classification changes are too great.
Just some thoughts :)

Your gut is correct, keep listening to it.
Authorization checks should not be mixed with core domain checks. For example, the check that the user may update the product details and the check that the product details are long enough should not live in the same class, or even the same bounded context. If you have a monolith, then the two checks should at least be in separate namespaces/modules.
Now I will tell you how I do it. In my latest monolithic project I use CQRS a lot; I like the separation between Commands and Queries. I will give an example of command validation, but this can be extended to query validation and even to non-CQRS architectures.
For every command I register zero or more command validators that check whether the command may be sent to the aggregate. These validators are eventually consistent. If a command passes all the validators, it is sent to the aggregate, where it is checked further, but in a strongly consistent manner. So we are talking about two kinds of validation: validation outside the aggregate and validation inside the aggregate. The checks that belong to other bounded contexts can be implemented as command validators outside the aggregate; that's how I do it. And now some example source code, in PHP:
<?php
namespace CoreDomain {
    class ProductAggregate
    {
        public function handle(ChangeProductDetails $command): \Generator
        {
            //this check is strongly consistent
            //the method yields zero or more events, or throws an exception on failure
            if (strlen($command->getProductDetails()) < 10) {
                throw new \Exception("Product details must be at least 10 characters long");
            }
            yield new ProductDetailsWereChanged($command->getProductId(), $command->getProductDetails());
        }
    }
}
namespace Authorization {
    class UserCanChangeProductDetailsValidator
    {
        private $authenticationReaderService;
        private $productsPermissionsService;

        public function __construct($authenticationReaderService, $productsPermissionsService)
        {
            $this->authenticationReaderService = $authenticationReaderService;
            $this->productsPermissionsService = $productsPermissionsService;
        }

        //no return value; if all is good, no exception is thrown
        public function validate(ChangeProductDetails $command): void
        {
            //this check is eventually consistent
            if (!$this->productsPermissionsService->canUserChangeProductDetails(
                $this->authenticationReaderService->getAuthenticatedUserId(),
                $command->getProductId()
            )) {
                throw new \Exception("User may not change product details");
            }
        }
    }
}
This example uses a style where commands are sent directly to the aggregates, but you can apply this pattern to other styles too. For brevity, the details of registering the command validators are not included.


Performing simple query inside of domain service

I've just run into a problem while trying to re-design our existing business objects to adopt a more domain-driven design approach.
Currently, I have a Product Return aggregate root, which handles data related to a return for a specific product. As part of this aggregate, a date needs to be provided to say what month (and year) the return is currently for.
Each product return MUST be sequential, and so each product return must be the next month after the previous. Attempting to create a product return that doesn't follow this pattern should result in an exception.
I had thought about passing along a Domain Service to the method (or constructor) that sets the PeriodDate for the return, but I'm at a loss for how I would do this. Even if the domain service had a reference to a repository, I can't see it being appropriate to put a "GetNextReturnDate()" on that repository.
For background, each product return is associated with a product. I was reluctant to make the product the aggregate root, as loading up all the product returns just to add one seemed like an extremely non-performant way of doing things (considering this library is going to be used with a RESTful Web API).
Can anyone provide suggestions as to how I should model this? Is it a matter of just changing the aggregate root and dealing with the performance? Is there some place in the Domain that 'query' type services can be placed?
As an example, the current constructor the for product return looks like this:
public ProductReturn(int productID, int estimateTypeID, IProductService productService)
{
    // This doesn't feel right, and I'm not sure how to implement it...
    _periodDate = productService.GetNextReturnDate(productID);
    // Other initialization code here...
}
The IProductService (and its implementation) sits in the Domain layer, so there's no ability to call SQL directly from there (and I feel that's not what I should be doing here anyway).
Again, in all likelihood I've modelled this terribly, or I've missed something when designing the aggregate, so any help would be appreciated!
I think my broader problem here is understanding how to implement constraints (be they foreign keys, uniqueness, etc.) inside a domain entity without fetching an entire list of returns via a domain service, when a simple SQL query would give me the information required.
EDIT: I did see this answer to another question: https://stackoverflow.com/a/48202644/9303178, which suggests having 'Domain Query' interfaces in the domain, which sound like they could return the sort of data I'm looking for.
However, I'm still worried I'm missing something here with my design, so again, I'm all open to suggestions.
EDIT 2: In response to VoiceOfUnreason's answer below, I figured I'd clarify a few things RE the PeriodDate property.
The rules on it are as follows:
CANNOT be NULL
It MUST be in sequence with other product returns, and it cannot be in a valid state without this fulfilled
It's a tricky one. I can't rely on the date passed in, because it could very well be out of sequence, but I can't figure out the date without the service being injected. I am going to turn the constructor into a method on a Factory to remove the 'constructor doing work' anti-pattern.
I may be overly defensive about the way the code currently is, but it feels like the number of times the ReturnService has to be injected is wrong. Mind you, there are lots of scenarios where the return value must be recalculated, but it feels as though it would be easy to do this just before a save (though I couldn't think of a clean way to do it).
Overall I just feel this class has a bit of a smell to it (with the injected services and whatnot), but I may be needlessly worrying.
I had thought about passing along a Domain Service to the method (or constructor) that sets the PeriodDate for the return, but I'm at a loss for how I would do this.
I strongly suspect that passing a domain service to the method is the right approach to take.
A way of thinking about this: fundamentally, the aggregate root is a bag of cached data, plus methods for changing the contents of that bag. During any given function call, its entire knowledge of the world is that bag, plus the arguments that have been passed to the method.
So if you want to tell the aggregate something that it doesn't already know -- data that isn't currently in the bag -- then you have to pass that data as an argument.
That in turn comes in two forms: if you can know, without looking in the aggregate's bag, what data to pass in, you just pass it as an argument. If you need some of the information hidden in the aggregate's bag, then you pass a domain service, and let the aggregate (which has access to the contents of the bag) pass in the necessary data.
public ProductReturn(int productID, int estimateTypeID, IProductService productService)
{
    // This doesn't feel right, and I'm not sure how to implement it...
    _periodDate = productService.GetNextReturnDate(productID);
    // Other initialization code here...
}
This spelling is a little bit odd; a constructor that does work is usually an anti-pattern, and it's a bit weird to pass in a service to compute a value when you could just compute the value and pass it in.
If you feel like it's part of the business logic to decide how to compute _periodDate, which is to say if you think the rules for choosing the periodDate belong to the ProductReturn, then you would normally use a method on the object to encapsulate those rules. On the other hand, if periodDate is really decided outside of this aggregate (like productID is, in your example), then just pass in the right answer.
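For instance, a rough sketch of the 'pass a domain service and let the object ask it for what it needs' option, reusing the names from the question (the factory method and the DateTime type for _periodDate are assumptions made for illustration):

public class ProductReturn
{
    private readonly DateTime _periodDate;

    private ProductReturn(int productID, int estimateTypeID, DateTime periodDate)
    {
        _periodDate = periodDate;
        // Other initialization code here...
    }

    // Factory method: the service is an argument, and the object decides what to ask it.
    public static ProductReturn Create(int productID, int estimateTypeID, IProductService productService)
    {
        var periodDate = productService.GetNextReturnDate(productID);
        return new ProductReturn(productID, estimateTypeID, periodDate);
    }
}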
One idea that may be crossing you up: time is not something that exists in the aggregate bag. Time is an input; if the business rules need to know the current time to do some work, then you will pass that time to the aggregate as an argument (again, either as data, or as a domain service).
The user can’t pass a date in, because at any given time, the date for a return can ONLY be the next date from the last return.
Typically, you have a layer sitting between the user and the domain model -- the application; it's the application that decides what arguments to pass to the domain model. For instance, it would normally be the application that is passing the "current time" to the domain model.
On the other hand, if the "date of the last return" is something owned by the domain model, then it probably makes more sense to pass a domain service along.
I should also mention - a return is invalid without a date, so I can’t construct the entity, then hope that the method is called at a later time
Are you sure? Effectively, you are introducing an ordering constraint on the domain model - none of these messages is permitted unless that one has been received first - which means you've got a race condition. See Udi Dahan's Race Conditions Don't Exist.
More generally, an entity is valid or invalid based on whether or not it is going to be able to satisfy the post-conditions of its methods; you can loosen the constraints during construction if the post-conditions are broader.
Scott Wlaschin's Domain Modeling Made Functional describes this in detail; in summary, _periodDate might be an 'or' type, and interacting with it involves explicit choices: do this thing if it is valid, do this other thing if it is invalid.
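A loose C# sketch of such an 'or' type (Wlaschin's examples are in F#; the names here are purely illustrative):

public abstract class PeriodDate
{
    public sealed class Valid : PeriodDate
    {
        public DateTime Value { get; }
        public Valid(DateTime value) { Value = value; }
    }

    public sealed class Invalid : PeriodDate
    {
        public string Reason { get; }
        public Invalid(string reason) { Reason = reason; }
    }

    // Callers must handle both cases explicitly.
    public T Match<T>(Func<DateTime, T> ifValid, Func<string, T> ifInvalid) =>
        this switch
        {
            Valid v => ifValid(v.Value),
            Invalid i => ifInvalid(i.Reason),
            _ => throw new InvalidOperationException("Unexpected case")
        };
}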
The idea that constructing a ProductReturn requires a valid _periodDate isn't wrong; but there are tradeoffs to consider that will vary depending on the context you are operating in.
Finally, if any date is saved to the database that ISN’T the next sequential date, the calculation of subsequent returns will fail as we require a sequence to do the calculation properly.
If you have strong constraints between data stored here and data stored somewhere else, then this could indicate a modeling problem. Make sure you understand the implications of Set Validation before you get too deeply invested into a design.
Your problem is querying across multiple aggregates (product returns) for decision making (creating a new product return aggregate).
Decision making based on querying across aggregates via a repository will always be wrong; we will never be able to guarantee consistency, as the state read from the repository will always be a little old. (Aggregates are transaction boundaries. The state read from the repository is only correct at that very instant; in the next instant the aggregate's state may change.)
In your domain, what I would do is create a ProductReturnManager aggregate root, which manages returns for a particular product, and a ProductReturn aggregate, which represents one specific return of the product. The ProductReturnManager aggregate root manages the lifecycle of the ProductReturn aggregate to ensure consistency.
The logic for assigning the next sequential month to a ProductReturn lives in the ProductReturnManager (basically the ProductReturnManager acts as a constructor). The behavior of a product return lives in the ProductReturn aggregate.
The ProductReturnManager can be modeled as a saga, which is created on the first CreateProductReturn command (for a productId); the same saga is then loaded for further CreateProductReturn commands (correlated by productId). It handles ProductReturnCreatedEvent to update its state. The saga creation logic will depend on your business rules (e.g. the saga is created on an InvoiceRaisedForProduct event and handles CreateProductReturn commands).
Sample code (C#-like pseudocode):
class ProductReturnManagerSagaState {
    ProductId productId;
    //can cache details about last product return
    ProductReturnDetails lastProductReturnDetails;
}

class ProductReturnManagerSaga : Saga<ProductReturnManagerSagaState>, IAmStartedByMessages<CreateProductReturn> {
    void Handle(CreateProductReturn message) {
        //calculate next product return date
        Date productReturnDate = getNextReturnDate(Data.lastProductReturnDetails.productReturnDate);
        //create product return
        ProductReturnAggregateService.createProductReturn(Data.productId, productReturnDate);
    }

    void Handle(ProductReturnCreatedEvent message) {
        //logic for updating last product return details in saga state
    }
}

class ProductReturnAggregate {
    ProductId productId;
    Date productReturnDate;
    ProductPayment productPayment;
    ProductReturnState productReturnState;

    //commands for product return
    void markProductReturnAsProcessing();
}
This is an excellent video by Udi Dahan on working across multiple aggregates.

DDD: where should logic go that tests the existence of an entity?

I am in the process of refactoring an application and am trying to figure out where certain logic should fit. For example, during the registration process I have to check whether a user already exists, based on their email address. As this requires checking whether the user exists in the database, it seems this logic should not be tied to the model, since its existence is dictated by it being in the database.
However, I will have a method on the repository responsible for fetching the user by email, etc. This handles the retrieval of the user if they exist. From a use case perspective, registration seems to be a use case scenario, and accordingly it seems there should be a UserService (application service) with a register method that calls the repository method and checks whether the returned user entity is null or not.
Am I on the right track with this approach, in terms of DDD? Am I viewing this scenario the wrong way and if so, how should I revise my thinking about this?
This link was provided as a possible solution: Where to check user email does not already exits?. It does help, but it does not seem to close the loop on the issue. The thing I seem to be missing from this article is who would be responsible for calling the CreateUserService: an application service, or a method on the aggregate root into which the CreateUserService object would be injected along with any other relevant parameters?
If the answer is the application service, that seems like you are losing some encapsulation by taking the domain service out of the domain layer. On the other hand, going the other way would mean having to inject the repository into the domain service. Which of those two options is preferable and more in line with DDD?
I think the best fit for that behaviour is a domain service. A domain service can access persistence, so you can check for existence or uniqueness there.
Check this blog entry for more info.
For example:
public class TransferManager
{
    private readonly IEventStore _store;
    private readonly IDomainServices _svc;
    private readonly IDomainQueries _query;
    private readonly ICommandResultMediator _result;

    public TransferManager(IEventStore store, IDomainServices svc, IDomainQueries query, ICommandResultMediator result)
    {
        _store = store;
        _svc = svc;
        _query = query;
        _result = result;
    }

    public void Execute(TransferMoney cmd)
    {
        //interacting with the Infrastructure
        var accFrom = _query.GetAccountNumber(cmd.AccountFrom);

        //Setup value objects
        var debit = new Debit(cmd.Amount, accFrom);

        //invoking Domain Services
        var balance = _svc.CalculateAccountBalance(accFrom);
        if (!_svc.CanAccountBeDebitted(balance, debit))
        {
            //return some error message using a mediator
            //this approach works well inside monoliths where everything happens in the same process
            _result.AddResult(cmd.Id, new CommandResult());
            return;
        }

        //using the Aggregate and getting the business state change expressed as an event
        var evnt = Transfer.Create(/* args */);

        //storing the event
        _store.Append(evnt);

        //publish event if you want
    }
}
from http://blog.sapiensworks.com/post/2016/08/19/DDD-Application-Services-Explained
The problem that you are facing is called set-based validation. There are a lot of articles describing the possible solutions. I will give here an extract from one of them (the context is CQRS, but it can be applied to some degree to any DDD architecture):
1. Locking, Transactions and Database Constraints
Locking, transactions and database constraints are tried and tested tools for maintaining data integrity, but they come at a cost. Often the code/system is difficult to scale and can be complex to write and maintain. But they have the advantage of being well understood with plenty of examples to learn from. By implication, this approach is generally done using CRUD based operations. If you want to maintain the use of event sourcing then you can try a hybrid approach.
2. Hybrid Locking Field
You can adopt a locking field approach. Create a registry or lookup table in a standard database with a unique constraint. If you are unable to insert the row then you should abandon the command. Reserve the address before issuing the command. For these sort of operations, it is best to use a data store that isn’t eventually consistent and can guarantee the constraint (uniqueness in this case). Additional complexity is a clear downside of this approach, but less obvious is the problem of knowing when the operation is complete. Read side updates are often carried out in a different thread or process or even machine to the command and there could be many different operations happening.
3. Rely on the Eventually Consistent Read Model
To some this sounds like an oxymoron, however, it is a rather neat idea. Inconsistent things happen in systems all the time. Event sourcing allows you to handle these inconsistencies. Rather than throwing an exception and losing someone’s work all in the name of data consistency. Simply record the event and fix it later.
As an aside, how do you know a consistent database is consistent? It keeps no record of the failed operations users have tried to carry out. If I try to update a row in a table that has been updated since I read from it, then the chances are I’m going to lose that data. This gives the DBA an illusion of data consistency, but try to explain that to the exasperated user!
Accepting these things happen, and allowing the business to recover, can bring real competitive advantage. First, you can make the deliberate assumption these issues won’t occur, allowing you to deliver the system quicker/cheaper. Only if they do occur and only if it is of business value do you add features to compensate for the problem.
4. Re-examine the Domain Model
Let’s take a simplistic example to illustrate how a change in perspective may be all you need to resolve the issue. Essentially we have a problem checking for uniqueness or cardinality across aggregate roots because consistency is only enforced with the aggregate. An example could be a goalkeeper in a football team. A goalkeeper is a player. You can only have 1 goalkeeper per team on the pitch at any one time. A data-driven approach may have an ‘IsGoalKeeper’ flag on the player. If the goalkeeper is sent off and an outfield player goes in the goal, then you would need to remove the goalkeeper flag from the goalkeeper and add it to one of the outfield players. You would need constraints in place to ensure that assistant managers didn’t accidentally assign a different player resulting in 2 goalkeepers. In this scenario, we could model the IsGoalKeeper property on the Team, OutFieldPlayers or Game aggregate. This way, maintaining the cardinality becomes trivial.
You seem to be on the right track; the only thing I didn't get is what your UserService.register does.
It should take all the values needed to register a user as input, validate them (using the repository to check the existence of the email), and, if the input is valid, store the new User.
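Something along these lines (just a sketch of what I have in mind; the repository methods and the exception type are made-up names):

public class UserService
{
    private readonly IUserRepository users;

    public UserService(IUserRepository users)
    {
        this.users = users;
    }

    public void Register(string email, string password)
    {
        // Validation that needs persistence lives here, in the application service.
        if (this.users.GetByEmail(email) != null)
        {
            throw new EmailAlreadyRegisteredException(email); // hypothetical exception type
        }

        var user = new User(email, password); // assuming such a constructor exists
        this.users.Add(user);
    }
}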
Problems can arise when the validation involves complex queries. In that case you may need to create a secondary store with special indexes suited for queries that you can't do with your domain model, so you will have to manage two different stores that can be out of sync (a user exists in one but isn't replicated to the other one yet).
This kind of problem happens when you store your aggregates in something like a key-value store, where you can only search by the id of the aggregate. If you are using something like a SQL database that lets you search on your entities' fields, you can do a lot with simple queries.
The only thing you need to take care of is to avoid mixing query logic and command logic. In your example the lookup you need is easy: it's just one field and the result is a boolean. Sometimes it can be harder, like time-based operations or queries spanning multiple tables and aggregating results. In those cases it is better to make your (command) service use a (query) service that offers a simple API for the calculation, like:
interface UserReportingService {
    ComplexResult aComplexQuery(AComplexInput input);
}
You can implement it with a class that uses your repositories, or with an implementation that executes the query directly against your database (SQL or whatever).
The difference is that if you use the repositories, you "think" in terms of your domain objects; if you write the query directly, you think in terms of your db abstractions (tables/sets in the case of SQL, documents in the case of Mongo, etc.). Which one to pick depends on the query you need to do.
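As an illustration only, a direct-query implementation could look like this (a sketch using plain ADO.NET; it assumes AComplexInput carries an email and ComplexResult wraps the aggregated count, which are assumptions on my part):

public class SqlUserReportingService : UserReportingService
{
    private readonly string connectionString;

    public SqlUserReportingService(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public ComplexResult aComplexQuery(AComplexInput input)
    {
        // Here we think in terms of the database (tables, joins), not domain objects.
        using (var connection = new SqlConnection(this.connectionString))
        using (var command = new SqlCommand(
            "SELECT COUNT(*) FROM Users u JOIN Orders o ON o.UserId = u.Id WHERE u.Email = @email",
            connection))
        {
            command.Parameters.AddWithValue("@email", input.Email); // assuming the input exposes an email
            connection.Open();
            var count = (int)command.ExecuteScalar();
            return new ComplexResult(count); // assuming ComplexResult wraps the value
        }
    }
}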
It is fine to inject a repository into the domain.
A repository should have a simple interface, so that domain objects can use it as a simple collection or storage. The main idea of repositories is to hide data access behind a simple and clear interface.
I don't see any problem in calling domain services from a use case. The use case is supposed to be the orchestrator, and domain services are actions. It is fine (and even unavoidable) to trigger domain actions from a use case.
To decide, you should analyze where this restriction comes from.
Is it a business rule? Or maybe the user shouldn't be a part of the model at all?
Usually "User" means authorization and authentication, i.e. behaviour that, to my mind, should be placed in the use case. I prefer to create a separate entity for the domain (e.g. Buyer) and relate it to the use case's user. So when a new user is registered, it is possible to trigger the creation of a new Buyer.

Domain security involving domain logic

Together with my application's domain logic, I am trying to outline the security model. I am stuck on a requirement that prevents me from considering security just a cross-cutting concern over my domain logic. Here is my situation.
A user in my system can potentially be allowed to create a certain kind of objects, say, 'filters'. I introduce a permission called 'CREATE_FILTER', and a user is either allowed to create filters or not, depending on whether the admin assigned such a permission to this user, or not. Ok.
Now consider a more complex requirement, where the number of filters a user can create is limited. The admin should be able to set the max number of filters any user is allowed to create, or, even more complex, to assign max numbers individually to users: say 3 to User1, 5 to User2, and so on. So the security system, in order to authorize filter creation for a user, cannot simply check whether the user has such a permission assigned; it has to analyze the domain model in a complex way to see how many filters the user has already created before making the decision. To make things more complex, we can imagine that the max limit depends on the amount of money the user has in their account, or something similar.
I want to conceptually separate (at least in my mind) whether such complicated security logic purely pertains to security (which will of course depend on the domain model), or whether it is already a full-fledged part of the domain logic itself. Does it make sense to keep a 'permission' concept when assigning/removing permissions does not help much (since the authorization decision depends on domain state rather than on assigned permissions)? Would a way to go be a more complicated permission concept, one that does not simply allow an action by the mere fact of its existence, but rather involves some complex decision-making logic?
Here's one way you could handle this ...
On one side you have a security model (it might be a bounded context, in DDD speak) that solves the problem of assigning permissions to subjects (users), maybe indirectly through the use of roles. I would envision upper boundaries (max numbers) as an attribute associated with the assigned permission.
There's also a query part to this model. Yet, it can only answer "simple" questions:
Has this user permission to create filters?
How many filters can this user create?
Some would even say this query part is a separate model altogether.
On the other end you have your application's model, which is largely "security" free apart from these pesky requirements along the lines of "user John Doe can only create 3 filters". As an aside, it's doubtful we're still speaking of a "user" at this point, but rather of a person acting in a certain role in this particular use case. Anyway, back to how we could keep this somewhat separate. Suppose we have a somewhat layered approach, with an application service that has an authorization service in front of it. The responsibility of the authorization service is to answer the question "is this user allowed to perform this operation, yes or no?" and to stop processing if the answer is no. Here's a very naive version of that (C#).
public class FilterAuthorizationServices :
    Handles<CreateFilter>
{
    private readonly FilterRepository filterRepository;

    public FilterAuthorizationServices(FilterRepository filterRepository)
    {
        this.filterRepository = filterRepository;
    }

    public void Authorize(Subject subject, CreateFilter message)
    {
        if (!subject.HasPermissionTo("CreateFilter"))
        {
            throw new NotAuthorizedException("...");
        }
        if (filterRepository.CountFiltersCreatedBy(subject.Id) >
            subject.GetTheMaximumNumberOfFiltersAllowedToCreate())
        {
            throw new NotAuthorizedException("...");
        }
    }
}
Notice how the application service is not even mentioned here; it can concentrate on invoking the actual domain logic. Yet the authorization service uses both the query part of the security model above (embodied by the Subject) and the model of the application (embodied by the FilterRepository) to fulfill the authorization request. Nothing wrong with doing it this way.
You could even go a step further and ditch the need for the application's model if that model could somehow provide the "current number of created filters" to the security model. But that might be a bridge too far for you, since that would lead down the path of evaluating dynamic expressions (which wouldn't necessarily be a bad place to be in). If you want to go there, may I suggest you create a mini DSL to define the required expressions and associated code to parse and evaluate them.
If you'd like brokered authorization you could look at something like XACML (https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=xacml) though you'll have to overcome your fear of XML first ;-)

Accessing Session in Domain Object

We use event sourcing in our app and also have a strict requirement to track the user who initiated changes to many of our objects. Currently we have code like this:
class Order {
    setNameBy(newname, User user) {
        applyChange(new OrderRenamed(user.id, newname));
    }
    :
}
Since most of our methods are like this, and all of them are called like this:
setNameBy("a new name", SessionContext.currentUser)
we were contemplating accessing the SessionContext inside the domain object, i.e.:
setNameBy(newname, User user) {
    applyChange(new OrderRenamed(user.id, newname));
}
becomes
setName(newname) {
    applyChange(new OrderRenamed(SessionContext.currentUser.id, newname));
}
I personally prefer the latter method signature, as it seems more natural; on the other hand, it feels a bit messy to access the SessionContext inside the domain object.
So how do you best handle session data like this in DDD/CQRS apps? Is it OK to access the SessionContext in the domain objects, or should I use other methods, like event enrichment, to add this information to the events emitted from the domain?
If tracking the user who initiated the change occurs frequently, then the SessionContext becomes an inherent part of the solution and therefore, IMO, the path of least resistance (a good-enough solution). Perhaps rewording it to UserContext would make it sound less like a "dirty" technical coupling? :)
I often use a thread-bound context in my applications (both event-sourced and not), and if your SessionContext.currentUser throws an exception when no SessionContext has been bound to the thread, it can also help spot bugs during testing (at least it has for me).
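A minimal sketch of what such a context could look like in C# (using AsyncLocal rather than a raw thread-bound slot; the names are placeholders):

public static class SessionContext
{
    private static readonly AsyncLocal<User> current = new AsyncLocal<User>();

    public static User CurrentUser
    {
        get
        {
            if (current.Value == null)
            {
                // Failing loudly here is what helps spot missing bindings during testing.
                throw new InvalidOperationException("No SessionContext bound to the current context.");
            }
            return current.Value;
        }
    }

    public static void Bind(User user) => current.Value = user;
    public static void Unbind() => current.Value = null;
}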
The alternative could be to mark the event as requiring user tracking (e.g. using an interface) and then enrich the event afterwards. This just feels a bit more cumbersome to me and can make problem solving harder, since an unbound-SessionContext exception would then occur outside the business function requiring the user information.
Both solutions are IMO good enough solutions, so it's mostly a matter of where you want the coupling to the SessionContext.
I prefer to keep my domain models completely ignorant of outside details. If your domain objects require the user id to enforce business rules I would use your current approach and send the User as an argument. If you only need the user id for tracking/auditing purposes you can enrich the event.
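Enrichment can be as simple as a wrapper in the infrastructure that stamps the user id onto events on their way to the store (a sketch; IEventStore, IEvent and the Metadata dictionary are made-up names):

public class UserEnrichingEventStore : IEventStore
{
    private readonly IEventStore inner;
    private readonly ISessionContext session;

    public UserEnrichingEventStore(IEventStore inner, ISessionContext session)
    {
        this.inner = inner;
        this.session = session;
    }

    public void Append(IEvent @event)
    {
        // The domain never sees the session; the metadata is added on the way out.
        @event.Metadata["userId"] = this.session.CurrentUser.Id;
        this.inner.Append(@event);
    }
}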

Protecting sensitive entity data

I'm looking for some advice on architecture for a client/server solution with some peculiarities.
The client is a fairly thick one, leaving the server mostly to persistence, concurrency and infrastructure concerns.
The server contains a number of entities which contain both sensitive and public information. Think for example that the entities are persons, assume that social security number and name are sensitive and age is publicly viewable.
When starting the client, the user is presented with a number of entities, not disclosing any sensitive information. At any time the user can choose to log in and authenticate against the server, given the authentication is successful the user is granted access to the sensitive information.
The client hosts a domain model, and I was thinking of implementing this as some kind of "lazy loading": the first request instantiates the entities and a later one refreshes them with the sensitive data. The entity getters would throw exceptions on sensitive information when it has not been disclosed, e.g.:
class PersonImpl : PersonEntity
{
    private bool undisclosed;

    public override string SocialSecurityNumber {
        get {
            if (undisclosed)
                throw new UndisclosedDataException();
            return base.SocialSecurityNumber;
        }
    }
}
Another, more friendly, approach could be to have a value object indicating that the value is undisclosed:
get {
    if (undisclosed)
        return undisclosedValue;
    return base.SocialSecurityNumber;
}
Some concerns:
What if the user logs in and then out? The sensitive data has been loaded, but must become undisclosed once again.
One could argue that this type of functionality belongs within the domain and not in some infrastructural implementation (i.e. the repository implementations).
As always when dealing with a larger number of properties, there's a risk that this type of functionality clutters the code.
Any insights or discussion is appreciated!
I think that this is actually a great example of using View Models. Your concern seems directly related to the consumption of the entities, because of the data that they contain. Instead of passing your entities all the way up to the UI, you could restrict them to live within the domain only - i.e. no entities are passed into or out of the domain at all, with most/all activities done with a command/query approach on the repositories. Repositories would then return a view model instead of the entity.
So how/why does this apply? You could actually have two different view models. One for authenticated and one for non-authenticated users. You expose the actual values for the sensitive data in the authenticated view model and not for the non-authenticated one. You could have them derived from a common interface, and then code against the interface instead of the object type. For your concrete implementation of the non-authenticated user, you can just populate the non-sensitive data, leaving the sensitive getters to do what you want them to do.
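Roughly like this (a sketch with illustrative names, showing only the two properties from your example):

public interface IPersonViewModel
{
    int Age { get; }                     // public information
    string SocialSecurityNumber { get; } // sensitive information
}

public class PublicPersonViewModel : IPersonViewModel
{
    public int Age { get; set; }

    // Never exposes the real value for non-authenticated users.
    public string SocialSecurityNumber
    {
        get { return "undisclosed"; } // or null / a marker value, whatever suits the UI
    }
}

public class AuthenticatedPersonViewModel : IPersonViewModel
{
    public int Age { get; set; }
    public string SocialSecurityNumber { get; set; }
}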
My opinion on a couple of points:
I am not a fan of lazy loading in entities. Lazy loading is a data access responsibility and not really part of the model. For me, it is a first-class member of the things I vehemently avoid in my domain, along with paging and sorting. As for how to relate these items together, I would rather loosely couple the objects via ID pointers to other entities. If I want/need the data contained by one of these entities, then I can load it. It is kind of like lazy loading in a way, but I enforce that it never happens in the domain model itself by doing this.
I am not a fan of throwing exceptions in getters. Throwing in setters, on the other hand, is fine. I look at it this way: the entity should always be in a valid state. Getters will not impact the state of the entity - setters will. Throwing in a setter enforces the integrity of the model. Using the two-view-model approach would allow me to move the logic to the presenter. So, I could basically do something like "if the user is of type non-authorized, do this; otherwise do something else". Since what you are referring to is ultimately a matter of how the data is presented to the user, and not important to the model, I think it fits well. In general, I use nullable types for properties that can be null and do not enforce anything in the getters, as that is usually not part of their responsibility. Instead, I use roles to determine which view model to use.
The obvious drawback is that more coding is required to use the view models, but that comes with the obvious benefit of decoupling the presentation and views from the domain. It also helps in unit/integration testing, where you can verify that a certain view model cannot return a certain type of data.
However, you can use something akin to AutoMapper (depending on what your platform is) to help in populating your view model from your entities.
I made the mistake of posting the question without creating an OpenId so it looks like I'll have to comment here(?).
First of all, thanks for taking your time to answer - it certainly has more to do with how data is presented than with how the model works. However, I feel the need to clarify a few things.
The domain model / entities are never referenced directly from the UI. I'm using a variant of the DM-V-VM pattern for UI/business model separation. For lazy loading and the repository implementation in general, I have entity implementations in an infrastructure layer where things like serialization, dirty tracking and lazy loading are handled.
So the domain layer has entities like:
class Entity {
    virtual string SocialSecurityNumber { get; }
}
And the infrastructure layer adds some other functionality to be able to update and restore entities from a server:
class EntityImpl : Entity {
    bool isDirty;
    bool isLoaded;
    // Provide the means to set value on deserialization
    override string SocialSecurityNumber;
}
So the lazy loading behavior would be implemented in the infrastructure layer and never seen by the domain layer.
I agree that throwing in getters wouldn't be nice, but my concern is how an anonymous view model would retrieve the data. As of now, to retrieve a list of entities the view model holds a reference to a domain repository. Should I have two repositories, one for authenticated (and therefore disclosed) entities and another one for unauthenticated users - maybe even two different entities?
