Recently, I was thinking about how to use messages to implement collaboration between domain objects, and I now have some thoughts:
Each domain object implements an interface if it wants to respond to a message;
Each domain object does not depend on any other domain object, which means we will not have the Model.OtherModel relation;
Each domain object only does things that modify itself;
Each domain object can send a message, and that message will be received by any other domain objects which care about it;
In short, the only means of collaboration between domain objects is messages; a domain object can send or receive any messages it needs.
When I read Evans's DDD, I saw that he defines the aggregate concept in the domain. I think an aggregate is static and not suitable for object interactions; it focuses only on the static structure of objects and the relationships between them. In the real world, objects interact using messages, not by referencing each other or aggregating other objects. In my opinion, all objects are equal, which means they do not depend on any other objects.
As for how to implement sending and receiving messages, I think we can create an EventBus framework dedicated to the collaboration of domain objects. We can map the event type to the subscriber types in a dictionary: the key is the event type, the value is a list of subscriber types. When an event is raised, we look up the corresponding subscriber types, load all the subscriber domain objects from persistence, and then call the corresponding handle method on each subscriber.
For example:
public class EventA : IEvent { }
public class EventB : IEvent { }
public class EventC : IEvent { }

public class ExampleDomainObject : Entity<Guid>
{
    public void MethodToRaiseAnExampleEvent()
    {
        RaiseEvent(new EventC());
    }
}

public class A : Entity<Guid>, IEventHandler<EventB>, IEventHandler<EventC>
{
    public void Handle(EventB evnt)
    {
        //Response for EventB.
    }
    public void Handle(EventC evnt)
    {
        //Response for EventC.
    }
}

public class B : IEventHandler<EventA>, IEventHandler<EventC>
{
    public void Handle(EventA evnt)
    {
        //Response for EventA.
    }
    public void Handle(EventC evnt)
    {
        //Response for EventC.
    }
}
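To make the dispatch mechanism concrete, here is a minimal sketch of what such an EventBus could look like, assuming the IEvent marker interface and IEventHandler<TEvent> contract used above; the EventBus type and its loadSubscribers delegate are made up for illustration, not an existing framework:

using System;
using System.Collections.Generic;

public interface IEvent { }

public interface IEventHandler<TEvent> where TEvent : IEvent
{
    void Handle(TEvent evnt);
}

public class EventBus
{
    // Maps an event type to the subscriber types that care about it.
    private readonly Dictionary<Type, List<Type>> _subscriberTypes = new Dictionary<Type, List<Type>>();
    // Loads the subscriber domain objects of a given type from persistence.
    private readonly Func<Type, IEnumerable<object>> _loadSubscribers;

    public EventBus(Func<Type, IEnumerable<object>> loadSubscribers)
    {
        _loadSubscribers = loadSubscribers;
    }

    public void Subscribe<TEvent>(Type subscriberType) where TEvent : IEvent
    {
        if (!_subscriberTypes.TryGetValue(typeof(TEvent), out List<Type> types))
        {
            types = new List<Type>();
            _subscriberTypes[typeof(TEvent)] = types;
        }
        types.Add(subscriberType);
    }

    public void Publish<TEvent>(TEvent evnt) where TEvent : IEvent
    {
        if (!_subscriberTypes.TryGetValue(typeof(TEvent), out List<Type> types))
            return;
        foreach (Type subscriberType in types)
            foreach (object subscriber in _loadSubscribers(subscriberType))
                ((IEventHandler<TEvent>)subscriber).Handle(evnt);
    }
}

How the subscribers are loaded from persistence is deliberately left to the delegate; in practice it would probably resolve a repository per subscriber type.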
Those are my thoughts. I hope to hear your feedback.
Have you ever heard of event sourcing or CQRS?
It sounds like that's the direction your thoughts are heading.
There's a lot of great info out there: many good blog posts about CQRS, Domain Events, and messaging-based domains.
Some example implementations are available, and there's a helpful and active community where implementation details can be discussed.
The requirements for our SaaS product are to build a domain layer where any changed attribute, or combination of attributes, could trigger a domain event and subsequently kick off a custom process or notification.
So, I am hesitant to add tons of code to the domain layer that kicks off tons of DomainEvent objects which may not make sense to many tenants.
Each tenant will have the ability to (through a UI screen):
1. define which attributes they care about (e.g. "amount") and why (e.g. amount is now greater than $100)
2. define what happens when they change (e.g. kick off an approval process)
This seems to me like a business rules engine integration along with a BPMS. Does anyone have thoughts on a lighter-weight framework or solution for this?
You could publish a generic event that has its constraints/specification defined against a unique Name. Let's call the event SpecificationEvent. Perhaps you would have a SpecificationEventService that can check your domain objects that implement an ISpecificationEventValueProvider and return a populated event that you could publish:
public interface ISpecificationEventValueProvider
{
    object GetValue(string name);
}

public class SpecificationEventService
{
    public IEnumerable<SpecificationEvent> FindEvents(ISpecificationEventValueProvider provider)
    {
        // evaluate the stored event definitions against the provider's values
        // and return the events whose constraints are satisfied
        throw new NotImplementedException();
    }
}

public class SpecificationEvent
{
    private List<SpecificationEventValue> _values;

    public string Name { get; private set; }

    public IEnumerable<SpecificationEventValue> Values
    {
        get { return new ReadOnlyCollection<SpecificationEventValue>(_values); }
    }
}
public class SpecificationEventValue
{
    public string Name { get; private set; }
    public object Value { get; private set; }

    public SpecificationEventValue(string name, object value)
    {
        Name = name;
        Value = value;
    }
}
So you would define the custom events in some store, possibly from some front-end that is used to define the constraints that constitute the event. The SpecificationEventService would use that definition to determine whether the candidate object conforms to the requirements and then return the event with the populated values that you can then publish.
The custom code could be registered in an endpoint where you handle the generic SpecificationEvent. Each of the custom handlers can be handed the event for handling, but only the handler that determines the event is relevant to it will perform any real processing.
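For example, a custom handler might look something like this sketch; the handler class and the "AmountOver100" specification name are made up for illustration:

public class AmountOver100Handler
{
    // All handlers receive the same generic SpecificationEvent.
    public void Handle(SpecificationEvent specificationEvent)
    {
        // Only the handler that recognises the event's Name does any real work.
        if (specificationEvent.Name != "AmountOver100")
            return;

        // Tenant-specific processing here, e.g. kick off an approval process.
    }
}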
Hope that makes sense. I just typed this up so it is not production-level code and you could investigate the use of generics for the object :)
Consider that we have a BankCard Entity that is part of a Client Aggregate. A Client may want to cancel her BankCard:
class CancelBankCardCommandHandler
{
    public function Execute(CancelBankCardCommand $command)
    {
        $client = $this->_repository->get($command->clientId);
        $bankCard = $client->getBankCard($command->bankCardId);
        $bankCard->clientCancelsBankCard();
        $this->_repository->add($client);
    }
}

class BankCard implements Entity
{
    // constructor and some other methods ...

    public function clientCancelsBankCard()
    {
        $this->apply(new BankCardWasCancelled($this->id));
    }
}
class Client implements AggregateRoot
{
    protected $_bankCards;

    public function getBankCard($bankCardId)
    {
        if (!array_key_exists($bankCardId, $this->_bankCards)) {
            throw new DomainException('Bank card is not found!');
        }
        return $this->_bankCards[$bankCardId];
    }
}
Finally, we have a domain repository instance which is responsible for storing Aggregates.
class ClientRepository implements DomainRepository
{
    // methods omitted

    public function add($clientAggregate)
    {
        // here we somehow need to store BankCardWasCancelled event
        // which is a part of BankCard Entity
    }
}
My question is whether the AggregateRoot is responsible for tracking its Entities' events or not. Is it possible to get the events of an Entity which is part of an Aggregate from within its Aggregate?
How do I actually persist the Client with all the changes made to the bank card while preserving its consistency?
I would say the aggregate as a whole is responsible for tracking the changes that happened to it. Mechanically, that could be "distributed" among the aggregate root entity and the other entities within the aggregate, or the aggregate root entity could act as the sole recorder, or some external unit of work could do the tracking. Your choice, really. Don't get too hung up on the mechanics; different languages and paradigms have different ways of implementing all this. If something happens to a child entity, just consider it a change to the aggregate and record it accordingly.
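For instance, a minimal C# sketch of the "root as sole recorder" option; the type and member names here are made up, not a prescribed framework:

using System;
using System.Collections.Generic;

public class BankCardWasCancelled
{
    public Guid BankCardId { get; private set; }
    public BankCardWasCancelled(Guid bankCardId) { BankCardId = bankCardId; }
}

public class BankCard
{
    public Guid Id { get; private set; }
    public BankCard(Guid id) { Id = id; }
    public void Cancel() { /* change the card's own state only */ }
}

public class Client
{
    private readonly Dictionary<Guid, BankCard> _bankCards = new Dictionary<Guid, BankCard>();
    private readonly List<object> _uncommittedEvents = new List<object>();

    public void CancelBankCard(Guid bankCardId)
    {
        BankCard bankCard = _bankCards[bankCardId];
        bankCard.Cancel();
        // The root records the change, so the repository only asks the aggregate root for events.
        _uncommittedEvents.Add(new BankCardWasCancelled(bankCardId));
    }

    public IReadOnlyList<object> GetUncommittedEvents()
    {
        return _uncommittedEvents;
    }
}

The repository's add() would then pull the uncommitted events from the root and persist them; the alternative is to let each child entity keep its own list and have the root (or a unit of work) gather them.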
We have an application which stores its data in two different databases. At some point in the future we may only be storing our data in one database, so we want this kind of change to be as painless as possible. For this reason, we wrap our DbContexts in a single MyDataContext which gets injected into our UnitOfWork and Repository classes.
class MyDataContext : IDataContext {
    internal Database1Context Database1;
    internal Database2Context Database2;
}

class UnitOfWork : IUnitOfWork {
    MyDataContext myDataContext;

    public UnitOfWork(MyDataContext myDataContext) {
        this.myDataContext = myDataContext;
    }

    public void Save() {
        //todo: add transaction/commit/rollback logic
        this.myDataContext.Database1.SaveChanges();
        this.myDataContext.Database2.SaveChanges();
    }
}

class Database1Context : DbContext {
    public DbSet<Customer> Customers { get; set; }
}

class Database2Context : DbContext {
    public DbSet<CustomerProfile> CustomerProfiles { get; set; }
}

class CustomerRepository : ICustomerRepository {
    MyDataContext myDataContext;

    public CustomerRepository(MyDataContext myDataContext) {
        this.myDataContext = myDataContext;
    }

    public Customer GetCustomerById(int id) {
        return this.myDataContext.Database1.Customers.Single(...);
    }
}
My first question is, am I doing it right? I've been doing a lot of reading, but admittedly DDD is a little bit overwhelming at this point.
My second question is: in which layer of the application do the IUnitOfWork and IDataContext interfaces reside? I know that the interfaces for repositories live in the Core/Domain layer/assembly of the application, but I'm not sure about these two. Should these two even have interfaces?
My first question is, am I doing it right?
You can do that, but first reconsider why you're storing data in different places in the first place. Are distinct aggregates at play? Furthermore, if you wish to commit changes to two different databases within a single transaction, you will need 2-phase commit, which is best avoided. If you have different aggregates, perhaps you can save them separately?
My second question is which layer of the application do the IUnitOfWork and IDataContext interfaces reside in?
These can be placed in the application layer.
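As a rough sketch of that layering (the namespace names are just illustrative): the interfaces sit in the application layer, and the EF-specific classes from the question implement them in the infrastructure/data layer.

// Application layer: abstractions only, no EF references.
namespace MyApp.Application
{
    public interface IUnitOfWork
    {
        void Save();
    }

    public interface IDataContext
    {
        // marker/abstraction over the underlying data contexts
    }
}

// Infrastructure (data) layer: the EF-backed classes shown in the question
// (MyDataContext, UnitOfWork) would live here and reference the DbContexts.
namespace MyApp.Infrastructure
{
    // class MyDataContext : MyApp.Application.IDataContext { ... }
    // class UnitOfWork : MyApp.Application.IUnitOfWork { ... }
}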
I was reading about how the anemic domain model is an antipattern, and I had some questions about it.
I have a database that three clients use, and each one of them has different business rules for inserting a product into the database.
So, if I use a rich domain model, my code will be something like this:
public class Product : IValidatableObject
{
    public int Id;
    public Client Client;
    public int ClientId;

    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        if (ClientId == 1)
            DoValidationForClientOne();
        else if (ClientId == 2)
            DoValidationForClientTwo();
        else if (ClientId == 3)
            DoValidationForClientThree();
    }
}
Well, it's horrible, isn't it?
Now, if I have an anemic domain model, I could simply create three service-layer classes, each of which would contain the validation for one specific client. Isn't that good?
My second argument is: if I have a desktop and a web application using the same rich domain model, how can I know when to throw an HttpException and when to throw some desktop exception? Wouldn't it be better to separate them?
So, finally, why is an anemic domain model an antipattern in a situation like this in my project?
An AnaemicDomainModel has its place: https://softwareengineering.stackexchange.com/questions/160782/are-factors-such-as-intellisense-support-and-strong-typing-enough-to-justify-the
Your domain model should not be throwing exceptions that are specific to a presentation platform. Let your presentation code sort that out. You should aim to make your domain model agnostic of its presentation.
As already stated - you showed just a DTO, not a domain entity.
In DDD, you would have some constant rules directly in the Product entity and a number of ProductPolicies to encapsulate what can differ in handling products in different contexts. Horrible? No. Beautiful and powerful. But only if your domain is complex enough. If it's not, use an anemic model.
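As a rough sketch of that idea (the policy interface and the per-client class name here are hypothetical):

using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

// Invariants that never change stay on Product; what varies per client goes into a policy.
public interface IProductValidationPolicy
{
    IEnumerable<ValidationResult> Validate(Product product);
}

public class ClientOneProductPolicy : IProductValidationPolicy
{
    public IEnumerable<ValidationResult> Validate(Product product)
    {
        // rules that apply only to client one, e.g. a maximum amount
        yield break;
    }
}

The application service (or a factory keyed by client) picks the right policy, so Product no longer branches on ClientId.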
Your domain should not depend on anything. It should not know anything about the web platform, the desktop platform, the ORM being used, or the DI container being used. So if you throw an exception, it should be a custom domain exception. Read about the onion architecture or hexagonal architecture for a more detailed explanation: http://jeffreypalermo.com/blog/the-onion-architecture-part-1/
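For example, a sketch of that separation, assuming ASP.NET Core on the web side (the exception, DTO, and controller names are illustrative):

using System;
using Microsoft.AspNetCore.Mvc;

// Domain layer: a custom exception that knows nothing about HTTP or desktop UIs.
public class ProductValidationException : Exception
{
    public ProductValidationException(string message) : base(message) { }
}

public class ProductDto { }

// Web presentation layer: translates the domain exception into an HTTP response.
public class ProductsController : ControllerBase
{
    [HttpPost]
    public IActionResult Create(ProductDto dto)
    {
        try
        {
            // call into the application/domain layer here
            return Ok();
        }
        catch (ProductValidationException ex)
        {
            // The desktop client would catch the same exception and show a dialog instead.
            return BadRequest(ex.Message);
        }
    }
}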
I will recommend the following:
Define an IProductValidator interface, and provide 3 implementations:
interface IProductValidator {
    void validateProduct(Product product);
}
Change the Client class, and add the following methods to it:
abstract class Client {
    public void validateProduct(Product product) {
        getProductValidator().validateProduct(product);
    }

    // this method returns the validator; it's better for it to be
    // abstract and implemented in sub-classes according to their type
    protected abstract IProductValidator getProductValidator();
}
And change the Product class to:
public class Product : IValidatableObject {
    public int Id;
    public Client client;

    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext) {
        client.validateProduct(this);
    }
}
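To complete the picture, each client sub-class would then supply its own validator, along these lines (ClientOne and ClientOneProductValidator are made-up names, and this assumes getProductValidator is abstract as suggested above):

class ClientOne : Client {
    protected override IProductValidator getProductValidator() {
        // each client type knows which validation rules apply to it
        return new ClientOneProductValidator();
    }
}

class ClientOneProductValidator : IProductValidator {
    public void validateProduct(Product product) {
        // validation rules specific to client one
    }
}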
If I have an entity Entity, a service EntityService, and an EntityServiceFacade with the following interfaces:
interface EntityService {
    Entity getEntity(Long id);
}

interface EntityServiceFacade {
    EntityDTO getEntity(Long id);
}
I can easily secure the read access to an entity by controlling access to the getEntity method at the service level. But once the facade has a reference to an entity, how can I control write access to it? If I have a saveEntity method and control access at the service (not facade) level like this (with Spring security annotations here):
class EntityServiceImpl implements EntityService {
    ...
    @PreAuthorize("hasPermission(#entity, 'write')")
    public void saveEntity(Entity entity) {
        repository.store(entity);
    }
}

class EntityServiceFacadeImpl implements EntityServiceFacade {
    ...
    @Transactional
    public void saveEntity(EntityDTO dto) {
        Entity entity = service.getEntity(dto.id);
        entity.setName(dto.name);
        service.saveEntity(entity);
    }
}
The problem here is that the access control check happens only after I have already changed the name of the entity, so that does not suffice.
How do you guys do it? Do you secure the domain object methods instead?
Thanks
Edit:
If you secure your domain objects, for example with annotations like:
#PreAuthorize("hasPermission(this, 'write')")
public void setName(String name) { this.name = name; }
Am I then breaking the domain model (according to DDD)?
Edit 2:
I found a thesis on the subject. The conclusion of that thesis says that a good way IS to annotate the domain object methods to secure them. Any thoughts on this?
I wouldn't worry about securing individual entity methods or properties from being modified.
Preventing a user from changing an entity in memory is not always necessary if you can control persistence.
The big gotcha here is UX: you want to inform a user as early as possible that she will probably be unable to persist changes made to that entity. The decision you will need to make is whether it is acceptable to delay the security check until persistence time, or whether you need to inform the user beforehand (e.g. by deactivating UI elements).
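If you do decide to delay the check until persistence time, a sketch of that idea could look like this (written in C# for brevity; IPermissionChecker and SecuredEntityRepository are hypothetical names, not part of Spring Security):

using System;

public class Entity
{
    public string Name { get; set; }
}

public interface IPermissionChecker
{
    bool CanWrite(Entity entity);
}

public class SecuredEntityRepository
{
    private readonly IPermissionChecker _permissions;

    public SecuredEntityRepository(IPermissionChecker permissions)
    {
        _permissions = permissions;
    }

    public void Store(Entity entity)
    {
        // Changing the entity in memory was allowed; persisting the change is checked here.
        if (!_permissions.CanWrite(entity))
            throw new UnauthorizedAccessException("No write permission for this entity.");

        // hand off to the real persistence mechanism
    }
}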
If Entity is an interface, can't you just membrane it?
So if Entity looks like this:
interface Entity {
    int getFoo();
    void setFoo(int newFoo);
}
create a membrane like
final class ReadOnlyEntity implements Entity {
    private final Entity underlying;

    ReadOnlyEntity(Entity underlying) { this.underlying = underlying; }

    public int getFoo() { return underlying.getFoo(); } // Read methods work

    // But deny mutators.
    public void setFoo(int newFoo) { throw new UnsupportedOperationException(); }
}
If you annotate read methods, you can use Proxy classes to automatically create membranes that cross multiple classes (so that a get method on a readonly Entity that returns an EntityPart returns a readonly EntityPart).
See deep attenuation in http://en.wikipedia.org/wiki/Object-capability_model for more details on this approach.
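With deep attenuation, the read methods themselves return membranes around whatever they hand out. A small sketch of that idea in C# (the IEntity/IEntityPart interfaces and wrapper classes are hypothetical):

using System;

public interface IEntityPart
{
    int GetValue();
    void SetValue(int value);
}

public interface IEntity
{
    IEntityPart GetPart();
    void SetFoo(int newFoo);
}

public sealed class ReadOnlyEntityPart : IEntityPart
{
    private readonly IEntityPart _underlying;
    public ReadOnlyEntityPart(IEntityPart underlying) { _underlying = underlying; }

    public int GetValue() { return _underlying.GetValue(); }               // reads pass through
    public void SetValue(int value) { throw new NotSupportedException(); } // writes are denied
}

public sealed class ReadOnlyEntity : IEntity
{
    private readonly IEntity _underlying;
    public ReadOnlyEntity(IEntity underlying) { _underlying = underlying; }

    // Deep attenuation: the child object comes back wrapped in its own read-only membrane.
    public IEntityPart GetPart() { return new ReadOnlyEntityPart(_underlying.GetPart()); }
    public void SetFoo(int newFoo) { throw new NotSupportedException(); }
}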