Object creation events in ServiceStack's OrmLite

I need to set an event handler on objects that get instantiated by OrmLite, and I can't figure out a good way to do it short of touching every Get method in the repository (which clearly doesn't scale).
To give some background - say I have a class User, which is pulled from the database; it also implements INotifyPropertyChanged. I want to assign a handler to that event. Having it auto-populated from Funq would be ideal, but of course OrmLite doesn't ask Funq to hydrate the new object.
So I'm stuck.
Any hints in the right direction would be appreciated.

It sounds to me like you're mixing presentation logic in with your data access logic. If I were in your position I would not attempt to implement INotifyPropertyChanged on a model (such as your User class). Instead I would create a ViewModel and place the databinding logic there (MVVM style).
Having INotifyPropertyChanged on the data model is not quite logical when you get down to it: if the database record were updated directly, the event would not fire even though the property has effectively changed. It makes a lot more sense on a ViewModel.
Beyond solving your original issue, it also makes building complex screens a lot easier by letting you aggregate, compose, and filter data for display purposes. If you need to pull in information from your database, an RSS feed, a stock ticker web API, and Twitter, you can do so in your ViewModel.
public class User
{
    [AutoIncrement]
    public int Id { get; set; }

    public string Name { get; set; }
}

public class UserViewModel : INotifyPropertyChanged
{
    private string _name;

    public UserViewModel(User user)
    {
        _name = user.Name;
    }

    public string Name
    {
        get { return _name; }
        set
        {
            if (value == _name) return;
            _name = value;
            OnPropertyChanged("Name");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    [NotifyPropertyChangedInvocator]
    protected virtual void OnPropertyChanged(string propertyName)
    {
        if (PropertyChanged != null) PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }
}
Small note: this answer was written in the context of displaying data on a screen with a ViewModel; however, the same concept applies to observing model changes for any purpose.
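If you keep letting OrmLite hydrate the plain User rows, the event wiring can then live in one place - the repository (or service) method that runs the query and wraps each row in a view model. A rough sketch, where the UserRepository shape and the handler parameter are illustrative and db.Select<User>() is OrmLite's typed select API:
public class UserRepository
{
    private readonly IDbConnectionFactory _dbFactory;

    public UserRepository(IDbConnectionFactory dbFactory)
    {
        _dbFactory = dbFactory;
    }

    // OrmLite hydrates the plain User rows; the handler is wired up in one
    // place, on the view models wrapping them.
    public IList<UserViewModel> GetUsers(PropertyChangedEventHandler onChanged)
    {
        using (var db = _dbFactory.Open())
        {
            return db.Select<User>()
                     .Select(user =>
                     {
                         var vm = new UserViewModel(user);
                         vm.PropertyChanged += onChanged;
                         return vm;
                     })
                     .ToList();
        }
    }
}
If you want Funq in the picture, you can register this repository (or the handler it receives) in the container, so the wiring still happens in a single, resolvable place.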

Related

Product, Category, Attributes modelling in DDD

I'm trying to model an online shop catalog using Domain-Driven Design.
There are three main concepts I have right now: Product, Category, Attribute.
An Attribute is a characteristic of a product - things such as color, weight, number of CPU cores, etc. Some attributes have a fixed set of possible values, for instance "condition" can be new or used. Some are constrained to a range of values, for instance "number of CPU cores". Some are freely created, like "color".
A Category has required attributes, which every product within that category needs to have, and optional ones. Categories can have parent categories.
A Product belongs to a single category, which needs to be a leaf category (no child categories).
Now the problem I have is to model these three concepts as aggregates.
One option is to have three different aggregates: Product, Attribute, Category.
Product will have its attribute values (each referencing its Attribute AR by id). Attribute will come in different types (fixed, freely chosen, range). Category will have a list of IDs of the Attributes which are required, and a list of IDs of the optional ones.
The issue here is that whenever I need to create a new product I would need to check if it has all of the required attributes, check the values, and then store the product. This validation would span three aggregates. Where should it go? Should it be a domain service?
The other option is to have two ARs: Category, with its Products and Attributes. The issue here is again the validation of correct values for a single attribute added to a product. The other huge issue I see is that I would have to fetch the whole aggregate from the repository. Given that a category can have hundreds of products, I don't think that's a good idea. However, it makes sense as a conceptual whole, as if I delete a category, all of its products should be deleted as well.
What am I missing here?
In "Implementing Domain Driven Design", Vaugh Vernon uses the "specification pattern" to handle entity/aggregate validation. Without quoting the entire chapter, you have different possibilities : (Java is used in my example, I hope you get the overall idea)
Validating Attributes / Properties
If it is a simple validation process field by field, then validate each attribute separately inside the setter method.
class Product {

    String name;

    public Product(String name) {
        setName(name);
    }

    public void setName(String name) {
        if (name == null) {
            throw new IllegalArgumentException("name cannot be null");
        }
        if (name.length() == 0) {
            throw new IllegalArgumentException("name cannot be empty");
        }
        this.name = name;
    }
}
Validating Whole Object
If you have to validate the whole object, you can use a kind of specification to help you. To avoid giving the entity too many responsibilities (managing its state and validating it), you can use a Validator.
a. Create a generic Validator class and implement it for your ProductValidator. Use a ValidationNotificationHandler to deal with your validation errors (throw an exception, raise an event, accumulate errors and then send them? Up to you):
public abstract class Validator {

    private ValidationNotificationHandler notificationHandler;

    public Validator(ValidationNotificationHandler aHandler) {
        super();
        this.setNotificationHandler(aHandler);
    }

    public abstract void validate();

    protected ValidationNotificationHandler notificationHandler() {
        return this.notificationHandler;
    }

    private void setNotificationHandler(ValidationNotificationHandler aHandler) {
        this.notificationHandler = aHandler;
    }
}
ValidationNotificationHandler is an interface that you implement according to your requirements in terms of validation error handling. Here is the interface proposed by Vaughn Vernon:
public interface ValidationNotificationHandler {
    public void handleError(String aNotificationMessage);
    public void handleError(String aNotification, Object anObject);
    public void handleInfo(String aNotificationMessage);
    public void handleInfo(String aNotification, Object anObject);
    public void handleWarning(String aNotificationMessage);
    public void handleWarning(String aNotification, Object anObject);
}
b. Extend this class with a specific ProductValidator:
public class ProductValidator extends Validator {

    private Product product;

    public ProductValidator(Product product, ValidationNotificationHandler aHandler) {
        super(aHandler);
        this.setProduct(product);
    }

    private void setProduct(Product product) {
        this.product = product;
    }

    @Override
    public void validate() {
        this.checkForCompleteness();
    }

    private void checkForCompleteness() {
        if (product.getName().equals("bad name") && anotherCondition()) {
            notificationHandler().handleError("This specific validation failed");
        }
        // ...
    }
}
Then you can update your entity with a validate method that calls this validator to validate the whole object:
public class Product {

    private String name;

    public Product(String name) {
        setName(name);
    }

    private void setName(String name) {
        if (name == null) {
            throw new IllegalArgumentException("Name cannot be null");
        }
        if (name.length() == 0) {
            throw new IllegalArgumentException("Name cannot be empty");
        }
        this.name = name;
    }

    // Here is the new method to validate your object
    public void validate(ValidationNotificationHandler aHandler) {
        (new ProductValidator(this, aHandler)).validate();
    }
}
Validating multiple aggregates
And finally, which is your direct concern: if you want to validate multiple aggregates together to keep things coherent, the recommendation is to create a Domain Service and a specific validator. The domain service can either have the repositories injected to look up the different aggregates, or, if everything is created by the application layer, take the different aggregates as method parameters:
public class ProductCategoryValidator extends Validator {

    private Product product;
    private Category category;

    public ProductCategoryValidator(Product product, Category category, ValidationNotificationHandler aHandler) {
        super(aHandler);
        this.setProduct(product);
        this.setCategory(category);
    }

    private void setCategory(Category category) {
        this.category = category;
    }

    private void setProduct(Product product) {
        this.product = product;
    }

    @Override
    public void validate() {
        this.checkForCompleteness();
    }

    private void checkForCompleteness() {
        // Count number of attributes, check for correctness...
    }
}
And the domain service that will call the Validator
public class ProductService {

    // Use this if you can pass the parameters from the client
    public void validateProductWithCategory(Product product, Category category, ValidationNotificationHandler handler) {
        (new ProductCategoryValidator(product, category, handler)).validate();
    }

    // Use this if you need to retrieve data from the persistence layer
    private ProductRepository productRepository;
    private CategoryRepository categoryRepository;

    public ProductService(ProductRepository productRepository, CategoryRepository categoryRepository) {
        this.productRepository = productRepository;
        this.categoryRepository = categoryRepository;
    }

    public void validate(String productId, ValidationNotificationHandler handler) {
        Product product = productRepository.findById(productId);
        Category category = categoryRepository.categoryOfProductId(productId);
        (new ProductCategoryValidator(product, category, handler)).validate();
    }
}
Like I said, I think you might be interested in the third solution. As you guessed, you can use a Domain Service, but add a specific validator to ensure the responsibilities are not mixed.
The issue here is that whenever I need to create a new product I would need to check if it has all of the required attributes, check the values, and then store the product. This validation would span three aggregates. Where should it go? Should it be a domain service?
The usual answer is that the retrieval of information (aka I/O) is done in an application service. Copies of that information are then passed, like other inputs, into the domain model.
A single "transaction" might include multiple calls to aggregate methods, as we fetch inputs from different places.
These copies of information are generally treated as data on the outside - we have an unlocked copy of the data here; while we are using that copy, the authoritative copy might be changing.
If you find yourself thinking that "the authoritative copy of the data over there isn't allowed to change while I use it over here" - that's a big red flag that either (a) you don't actually understand your real data constraints or (b) that you've drawn your aggregate boundaries incorrectly.
Most data from the real world is data on the outside (Bob's billing address may change without asking your permission - what you have in your database is a cached copy of Bob's billing address as of some point in the past).

Domain events are customer defined, not hard-coded

The requirements for our SaaS product are to build a domain layer where a change to any attribute, or combination of attributes, could trigger a domain event - and subsequently kick off a custom process or notification.
So, I am hesitant to add tons of code to the domain layer that kicks off tons of DomainEvent objects which may not make sense to many tenants.
Each tenant will have the ability to (through a UI screen):
1. define which attributes they care about (e.g. "amount") and why (e.g. amount is now greater than $100)
2. define what happens when they change (e.g. kick off an approval process)
This seems to me like a business rules engine integration along with a BPMS. Does anyone have thoughts on a lighter-weight framework or solution for this?
You could publish a generic event that has its constraints/specification defined against a unique Name. Let's call the event SpecificationEvent. Perhaps you would have a SpecificationEventService that can check your domain objects that implement an ISpecificationEventValueProvider and return a populated event that you could publish:
public interface ISpecificationEventValueProvider
{
    object GetValue(string name);
}

public class SpecificationEventService
{
    public IEnumerable<SpecificationEvent> FindEvents(ISpecificationEventValueProvider provider)
    {
        // Check the provider's values against the stored event definitions
        // and return any events whose constraints are satisfied (sketched below).
        throw new NotImplementedException();
    }
}

public class SpecificationEvent
{
    private List<SpecificationEventValue> _values;

    public string Name { get; private set; }

    public IEnumerable<SpecificationEventValue> Values
    {
        get { return new ReadOnlyCollection<SpecificationEventValue>(_values); }
    }
}

public class SpecificationEventValue
{
    public string Name { get; private set; }
    public object Value { get; private set; }

    public SpecificationEventValue(string name, object value)
    {
        Name = name;
        Value = value;
    }
}
So you would define the custom events in some store, possibly from some front-end that is used to define the constraints that constitute the event. The SpecificationEventService would use that definition to determine whether the candidate object conforms to the requirements and then return the event with the populated values, which you can then publish.
The custom code could be registered in an endpoint where you handle the generic SpecificationEvent. Each of the custom handlers can be handed the event, but only the handler that determines the event is relevant to it will perform any real processing.
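To make that a little more concrete, here is a rough sketch of how the service side could evaluate stored definitions. The SpecificationEventDefinition shape, the equality-only check, and the SpecificationEvent constructor are all assumptions for illustration:
// Hypothetical persisted definition: an event name plus the attribute
// constraints (name / expected value) that must all hold.
public class SpecificationEventDefinition
{
    public string Name { get; set; }
    public Dictionary<string, object> Constraints { get; set; }
}

public class SpecificationEventService
{
    private readonly IEnumerable<SpecificationEventDefinition> _definitions;

    public SpecificationEventService(IEnumerable<SpecificationEventDefinition> definitions)
    {
        // Loaded from whatever store the tenant-facing front-end writes to.
        _definitions = definitions;
    }

    public IEnumerable<SpecificationEvent> FindEvents(ISpecificationEventValueProvider provider)
    {
        foreach (var definition in _definitions)
        {
            // Naive check: every constrained attribute must equal its expected value.
            // A real implementation would support richer operators (greater than, ranges, ...).
            if (definition.Constraints.All(c => Equals(provider.GetValue(c.Key), c.Value)))
            {
                var values = definition.Constraints
                    .Select(c => new SpecificationEventValue(c.Key, provider.GetValue(c.Key)))
                    .ToList();

                // Assumes SpecificationEvent gains a constructor taking the name and values.
                yield return new SpecificationEvent(definition.Name, values);
            }
        }
    }
}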
Hope that makes sense. I just typed this up so it is not production-level code and you could investigate the use of generics for the object :)

Add behavior to existing implementation - C# / Design Pattern

My current implementation for the service and business layers is straightforward, as below.
public class MyEntity { }

// Business layer
public interface IBusiness { IList<MyEntity> GetEntities(); }

public class MyBusinessOne : IBusiness
{
    public IList<MyEntity> GetEntities()
    {
        return new List<MyEntity>();
    }
}

// Factory
public static class Factory
{
    public static T Create<T>() where T : class
    {
        return new MyBusinessOne() as T; // returns instance based on T
    }
}

// Service layer
public class MyService
{
    public IList<MyEntity> GetEntities()
    {
        return Factory.Create<IBusiness>().GetEntities();
    }
}
We needed some changes to the current implementation: the data grew over time and the service and client cannot handle the volume any more, so we needed to add pagination to the current service. We also expect some more features (like returning a fault when the data exceeds a threshold, applying filters, etc.), so the design needs to be updated.
Following is my new proposal.
public interface IBusiness
{
    IList<MyEntity> GetEntities();
}

public interface IBehavior
{
    IEnumerable<T> Apply<T>(IEnumerable<T> data);
}

public abstract class MyBusiness
{
    protected List<IBehavior> Behaviors = new List<IBehavior>();

    public void AddBehavior(IBehavior behavior)
    {
        Behaviors.Add(behavior);
    }
}

public class PaginationBehavior : IBehavior
{
    public int PageSize = 10;
    public int PageNumber = 2;

    public IEnumerable<T> Apply<T>(IEnumerable<T> data)
    {
        // apply behavior here
        return data
            .Skip(PageNumber * PageSize)
            .Take(PageSize);
    }
}

public class MyEntity { }

public class MyBusinessOne : MyBusiness, IBusiness
{
    public IList<MyEntity> GetEntities()
    {
        IEnumerable<MyEntity> result = new List<MyEntity>();
        this.Behaviors.ForEach(rs =>
        {
            result = rs.Apply<MyEntity>(result);
        });
        return result.ToList();
    }
}

public static class Factory
{
    public static T Create<T>(List<IBehavior> behaviors) where T : class
    {
        // returns instance based on T
        var instance = new MyBusinessOne();
        behaviors.ForEach(rs => instance.AddBehavior(rs));
        return instance as T;
    }
}

public class MyService
{
    public IList<MyEntity> GetEntities(int currentPage)
    {
        List<IBehavior> behaviors = new List<IBehavior>() {
            new PaginationBehavior() { PageNumber = currentPage, }
        };
        return Factory.Create<IBusiness>(behaviors).GetEntities();
    }
}
Experts, please tell me whether my implementation is correct or whether it is overkill. If it is correct, which design pattern is it - Decorator or Visitor?
Also, my service returns a JSON string. How can I use this behavior collection to serialize only selected properties rather than the entire entity? The list of properties comes from the user with the request (a kind of column picker).
Looks like I don't have enough points to comment on your question, so I am going to make some assumptions, as I am not a C# expert.
Assumption 1: It looks like you are getting the data first and then applying the pagination using the behavior object. If so, this is the wrong approach. Let's say there are 500 records and you are showing 50 records per fetch. Instead of simply fetching 50 records from the DB, you are fetching all 500 records each of the 10 times and applying a costly filter on top. The DB is better equipped to do this job than C# or Java.
I would not consider pagination a behavior with respect to the service; it is a behavior of the presentation layer. Your service should only worry about 'data granularity'. It looks like one of your customers wants all the data in one go while others might want a subset of that data.
Option 1: In the DAO layer, have two methods: one for a paginated fetch and one for a regular fetch (see the sketch after these options). Based on the incoming params, decide which method to call.
Option 2: Create two methods at the service level, one for a small subset of the data and the other for the whole set. Since you mentioned JSON, this is presumably a RESTful service, so based on the incoming URL, call the correct method. If you use Jersey, this should be easy.
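For Option 1, a rough sketch of what pushing the paging into the query itself can look like (plain ADO.NET with SQL Server-style OFFSET/FETCH; the table name, the Id ordering column, and the DAO shape are illustrative only):
public class MyEntityDao
{
    private readonly string _connectionString;

    public MyEntityDao(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Regular fetch: returns everything.
    public IList<MyEntity> GetAll()
    {
        return Query("SELECT * FROM MyEntity ORDER BY Id", cmd => { });
    }

    // Paged fetch: the database only returns one page's worth of rows.
    public IList<MyEntity> GetPage(int pageNumber, int pageSize)
    {
        return Query(
            "SELECT * FROM MyEntity ORDER BY Id " +
            "OFFSET @offset ROWS FETCH NEXT @pageSize ROWS ONLY",
            cmd =>
            {
                cmd.Parameters.AddWithValue("@offset", pageNumber * pageSize);
                cmd.Parameters.AddWithValue("@pageSize", pageSize);
            });
    }

    private IList<MyEntity> Query(string sql, Action<SqlCommand> addParameters)
    {
        var result = new List<MyEntity>();
        using (var connection = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand(sql, connection))
        {
            addParameters(cmd);
            connection.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Map the row onto the entity here (MyEntity has no
                    // properties in the question, so the mapping is left out).
                    result.Add(new MyEntity());
                }
            }
        }
        return result;
    }
}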
In a service, new behaviors can be added simply by exposing new methods or adding new params to existing methods (just make sure those changes are backward compatible). We really don't need the Decorator or Visitor pattern here; the only concern is that no existing user should be affected.
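As a concrete example of such a backward-compatible extension, the service can simply gain an additive overload while the original signature stays untouched (using the hypothetical MyEntityDao sketched above, and ignoring the factory for brevity):
public class MyService
{
    private readonly MyEntityDao _dao;

    public MyService(MyEntityDao dao)
    {
        _dao = dao;
    }

    // Existing contract, unchanged for current consumers.
    public IList<MyEntity> GetEntities()
    {
        return _dao.GetAll();
    }

    // New, additive contract for consumers that want paging.
    public IList<MyEntity> GetEntities(int pageNumber, int pageSize)
    {
        return _dao.GetPage(pageNumber, pageSize);
    }
}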

Proper way to secure domain objects?

If I have an entity Entity, a service EntityService, and an EntityServiceFacade with the following interfaces:
interface EntityService {
    Entity getEntity(Long id);
}

interface EntityServiceFacade {
    EntityDTO getEntity(Long id);
}
I can easily secure the read access to an entity by controlling access to the getEntity method at the service level. But once the facade has a reference to an entity, how can I control write access to it? If I have a saveEntity method and control access at the service (not facade) level like this (with Spring security annotations here):
class EntityServiceImpl implements EntityService {
    ...

    @PreAuthorize("hasPermission(#entity, 'write')")
    public void saveEntity(Entity entity) {
        repository.store(entity);
    }
}

class EntityServiceFacadeImpl implements EntityServiceFacade {
    ...

    @Transactional
    public void saveEntity(EntityDTO dto) {
        Entity entity = service.getEntity(dto.id);
        entity.setName(dto.name);
        service.saveEntity(entity);
    }
}
The problem here is that the access control check only happens after I have already changed the name of the entity, so that does not suffice.
How do you guys do it? Do you secure the domain object methods instead?
Thanks
Edit:
If you secure your domain objects, for example with annotations like:
@PreAuthorize("hasPermission(this, 'write')")
public void setName(String name) { this.name = name; }
Am I then breaking the domain model (according to DDD)?
Edit2
I found a thesis on the subject. The conclusion of that thesis says that a good way IS to annotate the domain object methods to secure them. Any thoughts on this?
I wouldn't worry about securing individual entity methods or properties from being modified.
Preventing a user from changing an entity in memory is not always necessary if you can control persistence.
The big gotcha here is UX: you want to inform the user as early as possible that she will probably be unable to persist changes made to that entity. The decision you need to make is whether it is acceptable to delay the security check until persistence time or whether you need to inform the user up front (e.g. by deactivating UI elements).
If Entity is an interface, can't you just membrane it?
So if Entity looks like this:
interface Entity {
    int getFoo();
    void setFoo(int newFoo);
}
create a membrane like
final class ReadOnlyEntity implements Entity {

    private final Entity underlying;

    ReadOnlyEntity(Entity underlying) { this.underlying = underlying; }

    public int getFoo() { return underlying.getFoo(); }  // Read methods work.

    // But deny mutators.
    public void setFoo(int newFoo) { throw new UnsupportedOperationException(); }
}
If you annotate read methods, you can use Proxy classes to automatically create membranes that cross multiple classes (so that a get method on a readonly Entity that returns an EntityPart returns a readonly EntityPart).
See deep attenuation in http://en.wikipedia.org/wiki/Object-capability_model for more details on this approach.
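For the "Proxy classes" idea, here is a rough sketch of the same membrane built automatically with a dynamic proxy (shown in C# with DispatchProxy purely for illustration; the ReadAccess attribute marking permitted read methods is made up, and deep attenuation of returned values is only noted in a comment):
// Hypothetical marker for the methods the membrane is allowed to forward.
[AttributeUsage(AttributeTargets.Method)]
public class ReadAccessAttribute : Attribute { }

public class ReadOnlyMembrane<T> : DispatchProxy where T : class
{
    private T _underlying;

    public static T Wrap(T underlying)
    {
        var proxy = Create<T, ReadOnlyMembrane<T>>();
        ((ReadOnlyMembrane<T>)(object)proxy)._underlying = underlying;
        return proxy;
    }

    protected override object Invoke(MethodInfo targetMethod, object[] args)
    {
        // Deny anything that is not explicitly annotated as a read method.
        if (!targetMethod.IsDefined(typeof(ReadAccessAttribute), inherit: true))
            throw new NotSupportedException(targetMethod.Name + " is blocked by the read-only membrane");

        var result = targetMethod.Invoke(_underlying, args);

        // For deep attenuation you would also wrap interface-typed return
        // values in their own membranes before handing them out (omitted here).
        return result;
    }
}
Usage would then be something like Entity readOnly = ReadOnlyMembrane<Entity>.Wrap(entity), assuming a C# Entity interface with [ReadAccess] on its getters.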

Can/Should a domain object be responsible for converting itself to another type?

We have a class Event (it's actually named differently, but I'm abstracting it here):
public class Event
{
    public string Name { get; set; }
    public string Description { get; set; }
    public EventType EventType { get; set; }
}
We need to build an instance of a Message class with this object, but depending on the EventType, we use a different builder:
switch (@event.EventType)
{
    case EventType.First:
        message = FirstMessageBuilder.Build(@event);
        break;
    case EventType.Second:
        message = SecondMessageBuilder.Build(@event);
        break;
}
Do you think this is acceptable, or should we take the following approach:
Make an abstract class:
public abstract class Event
{
    public string Name { get; set; }
    public string Description { get; set; }
    public abstract Message BuildMessage();
}
Then derive two classes: class FirstMessage and class SecondMessage and make the domain objects responsible for building the message.
I hope it isn't too abstract. The bottom line is we need to transform one class to another. A simple mapper won't do, because there are properties with XML content and such (due to a legacy application making the events). Just accept what we're trying to do here.
The real question is: can a domain object be responsible for such a transformation, or would you not recommend it? I would avoid the ugly switch statement, but add complexity somewhere else.
Whilst I agree with Thomas, you might want to look at the following design patterns to see if they help you:
Visitor Pattern
Double-Dispatch Pattern
Builder Pattern
Strictly speaking, a domain object shouldn't be responsible for anything other than representing the domain. "Changing type" is clearly a technical issue and should be done by some kind of service class, to maintain a clear separation of concerns...
In order to gain the readability of
var message = eventInstance.AsMessage();
as well as following the single responsibility principle, you could define AsMessage() as an extension method on the event type.
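A minimal sketch of that extension method, keeping the question's builders and switch behind it so callers only ever see eventInstance.AsMessage():
public static class EventExtensions
{
    public static Message AsMessage(this Event @event)
    {
        // Keep the dispatch in one place so callers never see the switch.
        switch (@event.EventType)
        {
            case EventType.First:
                return FirstMessageBuilder.Build(@event);
            case EventType.Second:
                return SecondMessageBuilder.Build(@event);
            default:
                throw new NotSupportedException("Unknown event type: " + @event.EventType);
        }
    }
}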
There are a few possible solutions. One is to use an abstract factory:
public interface IMessageFactory
{
    Message Create();
}

public class FirstMessageFactory : IMessageFactory
{
    public Message Create()
    {
        // ... build and return the first kind of Message
        throw new NotImplementedException();
    }
}

public class SomeService
{
    private readonly IMessageFactory _factory;

    public SomeService(IMessageFactory factory)
    {
        _factory = factory;
    }

    public void DoSomething()
    {
        var message = _factory.Create();
        // ...
    }
}
Now you can wire the IoC container to provide the right factory for the requested service.
Another option is to use an Assembler which performs the transformation:
public interface IAssembler<TSource, TDestination>
{
    TDestination Transform(TSource source);
}
This is quite similar to the factory pattern, but if you are dependent on EventType, it's possible to do it like this:
public interface IAssembler<TEventType>
{
    object Transform(object source);
}
I would encapsulate the logic into a separate Factory/Builder class, and use an extension method on Event to call the builder.
This would give you the best of both worlds.
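A rough sketch of that combination - a small builder registry keyed by EventType behind a thin extension method. The IMessageBuilder interface and the registry are illustrative, and it assumes the existing builders are reworked to implement the interface:
public interface IMessageBuilder
{
    Message Build(Event @event);
}

public static class MessageBuilders
{
    // One builder per event type; new types only require a new entry here.
    private static readonly Dictionary<EventType, IMessageBuilder> Builders =
        new Dictionary<EventType, IMessageBuilder>
        {
            { EventType.First, new FirstMessageBuilder() },
            { EventType.Second, new SecondMessageBuilder() }
        };

    public static Message AsMessage(this Event @event)
    {
        IMessageBuilder builder;
        if (!Builders.TryGetValue(@event.EventType, out builder))
            throw new NotSupportedException("No builder registered for " + @event.EventType);

        return builder.Build(@event);
    }
}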
