CRM 2011 Plugin development best practice - dynamics-crm-2011

I am inheriting a set of plugins that appear to have been developed by different people. Some of them follow the pattern of one master plugin with many different steps. In this plugin none of the steps are cohesive or related in functionality; the author simply put them all in the same plugin, with code internal to the plugin (if/else madness) that handles the various entities, CRM messages (Update, Create, Delete, etc.) and stages (pre-validation, post-operation, etc.).
The other developer seems to make a plugin for every entity type and/or related feature grouping. This results in multiple smaller plugins with fewer steps.
My question is this: assuming I have architected a way out of the if/else hell that the previous developer created in the 'one-plugin-to-rule-them-all' design, which approach is preferable from the perspective of CRM performance and long-term maintenance (fewer side effects, fewer difficulties with deployment, etc.)?

I usually follow a model-driven approach and design one plugin class per entity. On this class, steps can be registered for the pre-validation, pre- and post-operation and asynchronous stages of the Create, Update, Delete and other messages, but always for only one entity at a time.
This way I keep a clear overview of the plugin logic triggered by an entity's events, and I do not need to worry about the order in which plugin steps are triggered.
Following this approach, of course, means I need a generic pattern for handling all supported events. For this purpose I designed a plugin base class responsible for the event routing. My deriving plugin classes only need to implement (override) the event handler methods (PreUpdate, PostCreate etc.).
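A trimmed-down sketch of such a routing base class could look like the following (the handler names, the EntityLogicalName property and the stage checks are illustrative conventions, not part of the CRM SDK):

    using System;
    using Microsoft.Xrm.Sdk;

    // Illustrative routing base class: one derived plugin class per entity,
    // with virtual handlers per message/stage combination.
    public abstract class EntityPluginBase : IPlugin
    {
        // Logical name of the entity the derived plugin is responsible for.
        protected abstract string EntityLogicalName { get; }

        public void Execute(IServiceProvider serviceProvider)
        {
            var context = (IPluginExecutionContext)serviceProvider
                .GetService(typeof(IPluginExecutionContext));

            // Guard: ignore steps accidentally registered for other entities.
            if (!string.Equals(context.PrimaryEntityName, EntityLogicalName,
                               StringComparison.OrdinalIgnoreCase))
                return;

            // Route on message name and pipeline stage
            // (10 = pre-validation, 20 = pre-operation, 40 = post-operation).
            switch (context.MessageName)
            {
                case "Create":
                    if (context.Stage == 20) PreCreate(context);
                    else if (context.Stage == 40) PostCreate(context);
                    break;
                case "Update":
                    if (context.Stage == 20) PreUpdate(context);
                    else if (context.Stage == 40) PostUpdate(context);
                    break;
            }
        }

        // Deriving plugin classes override only the events they care about.
        protected virtual void PreCreate(IPluginExecutionContext context) { }
        protected virtual void PostCreate(IPluginExecutionContext context) { }
        protected virtual void PreUpdate(IPluginExecutionContext context) { }
        protected virtual void PostUpdate(IPluginExecutionContext context) { }
    }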
In my opinion, plugin classes should only be used to glue system events to the business logic. Therefore the code performing the desired actions should be placed in separate classes. Plugin classes should only route the events, prepare the data and call the business logic.
Some developers tend to design one plugin class per step, or even per implemented requirement. Doing so keeps your plugin classes terse (which is positive), but when the logic gets complicated you can easily lose track of what is going on for a single entity. (Recently I worked with a CRM implementation that had an entity with 21 plugin classes registered for it. Understanding what was going on and adding new behaviour to this entity proved to be very tricky and time consuming.)

Related

CQRS Commands and Queries - Do they belong in the domain?

In CQRS, do Commands and Queries belong in the Domain?
Do the Events also belong in the Domain?
If that is the case are the Command/Query Handlers just implementations in the infrastructure?
Right now I have it laid out like this:
Application.Common
Application.Domain
- Model
- Aggregate
- Commands
- Queries
Application.Infrastructure
- Command/Query Handlers
- ...
Application.WebApi
- Controllers that utilize Commands and Queries
Another question, where do you raise events from? The Command Handler or the Domain Aggregate?
Commands and Events can be of very different concerns. They can be technical concerns, integration concerns, domain concerns...
I assume that if you ask about domain, you're implementing a domain model (maybe even with Domain Driven Design).
If this is the case I'll try to give you a really simplified response, so you can have a starting point:
Command: a business intention, something you want the system to do. Keep the definition of the commands in the domain. Technically it is just a pure DTO. The name of the command should always be imperative: "PlaceOrder", "ApplyDiscount". One command is handled by exactly one command handler, and it can be discarded if it is not valid (however, you should do all the validation you can before sending the command to your domain, so that it cannot fail).
Event: something that has happened in the past. For the business it is an immutable fact that cannot be changed. Keep the definition of the domain event in the domain. Technically it's also a DTO. However, the name of the event should always be in the past tense: "OrderPlaced", "DiscountApplied". Events are generally pub/sub: one publisher, many handlers.
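As plain C#, those two definitions could look roughly like this (the property names are made up for illustration):

    using System;

    // A command expresses an intention and is named in the imperative.
    public sealed class PlaceOrder
    {
        public Guid OrderId { get; set; }
        public Guid CustomerId { get; set; }
        public decimal Amount { get; set; }
    }

    // An event records an immutable fact and is named in the past tense.
    public sealed class OrderPlaced
    {
        public Guid OrderId { get; set; }
        public Guid CustomerId { get; set; }
        public DateTime PlacedAtUtc { get; set; }
    }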
If that is the case are the Command/Query Handlers just implementations in the infrastructure?
Command Handlers are semantically similar to the application service layer. Generally the application service layer is responsible for orchestrating the domain. It is often built around business use cases, for example "Placing an Order". In those use cases it invokes business logic (which should always be encapsulated in the domain) through aggregate roots, performs querying, etc. It is also a good place to handle cross-cutting concerns like transactions, validation, security, etc.
However, an application layer is not mandatory. It depends on the functional and technical requirements and on the architectural choices that have been made.
Your layering seems correct. I would rather keep command handlers at the boundary of the system. If there is no proper application layer, a command handler can play the role of the use case orchestrator. If you place it in the Domain, you won't be able to handle cross-cutting concerns very easily. It's a tradeoff; you should be aware of the pros and cons of your solution. It may work in one case and not in another.
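For instance, a command handler acting as the use case orchestrator for the PlaceOrder command sketched above could look roughly like this (the repository and aggregate below are assumed abstractions, only there to make the sketch self-contained):

    using System;

    // Minimal assumed abstractions for the sketch.
    public interface IOrderRepository { void Add(Order order); }

    public sealed class Order
    {
        public Guid Id { get; private set; }
        private Order(Guid id) { Id = id; }

        // Business logic stays in the domain; the handler only orchestrates.
        public static Order Place(Guid id, Guid customerId, decimal amount)
        {
            if (amount <= 0)
                throw new InvalidOperationException("Amount must be positive.");
            return new Order(id);
        }
    }

    // The command handler sits at the boundary and orchestrates the use case.
    public sealed class PlaceOrderHandler
    {
        private readonly IOrderRepository _orders;

        public PlaceOrderHandler(IOrderRepository orders) { _orders = orders; }

        public void Handle(PlaceOrder command)
        {
            // Cross-cutting concerns (transaction, security, validation)
            // would be wrapped around this call, not pushed into the aggregate.
            var order = Order.Place(command.OrderId, command.CustomerId, command.Amount);
            _orders.Add(order);
        }
    }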
As for the event handlers, I generally handle them in:
the Application layer, if the event triggers the modification of another Aggregate in the same bounded context or if the event triggers some infrastructure service;
the Infrastructure layer, if the event needs to be dispatched to multiple consumers or integrates with another bounded context.
Anyway you should not blindly follow the rules. There are always tradeoffs and different approaches can be found.
Another question, where do you raise events from? The Command Handler or the Domain Aggregate?
I do it from the domain aggregate root, because the domain is responsible for raising events.
As there is a technical rule that you should not publish events if there was a problem persisting the changes to the aggregate (and vice versa), I took the pragmatic approach used in Event Sourcing. My aggregate root has a collection of unpublished events. In the implementation of my repository I inspect the collection of unpublished events and pass them to the middleware responsible for publishing events. This makes it easy to ensure that if there is an exception while persisting an aggregate root, no events are published. Some say that this is not the responsibility of the repository, and I agree, but what is the alternative? Having awkward event-publishing code that creeps into your domain along with all the infrastructure concerns (transactions, exception handling, etc.), or being pragmatic and handling it all in the Infrastructure layer? I've done both and, believe me, I prefer to be pragmatic.
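A bare-bones sketch of that approach (all names are assumptions and the persistence details are omitted):

    using System;
    using System.Collections.Generic;

    public abstract class AggregateRoot
    {
        private readonly List<object> _unpublishedEvents = new List<object>();

        public IReadOnlyCollection<object> UnpublishedEvents
        {
            get { return _unpublishedEvents; }
        }

        protected void Raise(object domainEvent) { _unpublishedEvents.Add(domainEvent); }

        public void ClearUnpublishedEvents() { _unpublishedEvents.Clear(); }
    }

    public interface IEventPublisher { void Publish(object domainEvent); }

    // The repository publishes only after the aggregate was persisted successfully.
    public sealed class AggregateRepository
    {
        private readonly IEventPublisher _publisher;

        public AggregateRepository(IEventPublisher publisher) { _publisher = publisher; }

        public void Save(AggregateRoot aggregate)
        {
            Persist(aggregate);   // if this throws, nothing is published

            foreach (var domainEvent in aggregate.UnpublishedEvents)
                _publisher.Publish(domainEvent);
            aggregate.ClearUnpublishedEvents();
        }

        private void Persist(AggregateRoot aggregate)
        {
            // ORM / event store details intentionally left out of the sketch.
        }
    }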
To sum up, there is no single way of doing things. Always know your business needs and technical requirements (scalability, performance, etc.), then make your choices based on them. I've described what I've generally done in most cases and what has worked; it's just my opinion.
In some implementations, Commands and handlers are in the Application layer. In others, they belong in the domain. I've often seen the former in OO systems, and the latter more in functional implementations, which is also what I do myself, but YMMV.
If by events you mean Domain Events, well... yes, I recommend defining them in the Domain layer and emitting them from domain objects. Domain events are an essential part of your ubiquitous language and will even be directly coined by domain experts if you practise Event Storming, for instance, so it definitely makes sense to put them there.
What I think you should keep in mind though is that no rule about these technical details deserves to be set in stone. There are countless questions about DDD template projects and layering and code "topology" on SO, but frankly I don't think these issues are decisive in making a robust, performant and maintainable application, especially since they are so context dependent. You most likely won't organize the code for a trading system with millions of aggregate changes per minute in the same way that you would a blog publishing platform used by 50 people, even if both are designed with a DDD approach. Sometimes you have to try things for yourself based on your context and learn along the way.
Commands and events are just DTOs. You can have command handlers and queries in any layer/component. An event is just a notification that something changed. You can have all types of events: Domain, Application, etc.
Events can be generated by both the handler and the aggregate; it's up to you. However, regardless of where they are generated, the command handler should use a service bus to publish the events. I prefer to generate domain events inside the aggregate root.
From a DDD strategic point of view, there are just business concepts and use cases. Domain events, commands and handlers are technical details. However, all domain use cases are usually implemented as command handlers, therefore command handlers should be part of the domain, as should the query handlers implementing queries used by the domain. Queries used by the UI can be part of the UI, and so on.
The point of CQRS is to have at least two models, and the Command model should be the domain model itself. However, you can have a Query model specialised for domain usage; it is still a read (simplified) model. Consider the command model as being used only for updates and the read model only for queries. You can have multiple read models (each used by a specific layer or component) or just one generic model used for every query.

DDD & Factories - Intensive CRUD Operations

We have recently decided to adopt DDD in my team for our new projects because of its many obvious benefits (we are coming from the Active Record school), and there are a couple of things that are still unclear.
Say I have an entity Transaction that depends on the following entities (each of which in turn depends on many other entities):
1. Customer
2. Account
3. Currency
When I make use of factories to instantiate a Transaction entity to pass to a Domain Service for some fancy business rules, do I have to make that many queries to set up all these dependent instances?
If I have overloads in my factory that skip such dependencies, those will be null in some cases, and it becomes too complicated to tell when I can access those properties and when I cannot. With the Active Record pattern I just use lazy loading and have them load only on demand. Any ideas with DDD?
EDIT:
In my scenario “Transaction” seems to be the best candidate for an Aggregate root. I have defined a method on my Application Service, “InitiateTransaction” (there is also a “FinalizeTransaction”, as the process involves a redirect to PayPal), which takes as parameters the DTOs needed to carry AccountId, CurrencyId, LanguageId and various other foreign keys as well as Transaction attributes.
When calling my Domain Services (Transaction Processor and Fraud Rule Evaluator), I need to specify the “Transaction” Aggregate with all dependencies loaded (“Transaction.Customer”, “Transaction.Currency”, etc.).
So if I am correct the steps required are:
1. Call some repository(ies) to retrieve Customer, Currency etc.
2. Call TransactionFactory with dependencies specified above to get a Transaction object
3. Call Domain Services with fully loaded Transaction object for business rules to take place
Correct? Additionally, my concern was about steps 1 and 2.
If “Customer”, “Currency” and the other Entities/Value Objects that “Transaction” depends on have further dependencies in turn, do I try to set those up as well? It seems to me that if I do, I will end up with very bloated code in my Application Service that is not very reusable when placed in a separate method. However, if I don't, and just retrieve them from a repository with a “GetById(id)” as you suggested, my code could end up buggy: say I need the property “Transaction.Customer.CreatedByUser”, which returns a “User” instance; it will be null because repositories only load flat instances.
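To make the three steps concrete, this is roughly the shape I have in mind (all the types below are simplified stand-ins, not my real code):

    using System;

    // Simplified stand-ins so the sketch compiles.
    public sealed class Customer { }
    public sealed class Currency { }
    public sealed class Transaction
    {
        public Guid Id = Guid.NewGuid();
        public Customer Customer;
        public Currency Currency;
        public decimal Amount;
    }

    public interface ICustomerRepository { Customer GetById(Guid id); }
    public interface ICurrencyRepository { Currency GetById(Guid id); }
    public interface ITransactionProcessor { void Process(Transaction transaction); }

    public static class TransactionFactory
    {
        public static Transaction Create(Customer customer, Currency currency, decimal amount)
        {
            return new Transaction { Customer = customer, Currency = currency, Amount = amount };
        }
    }

    public sealed class TransactionApplicationService
    {
        private readonly ICustomerRepository _customers;
        private readonly ICurrencyRepository _currencies;
        private readonly ITransactionProcessor _processor;

        public TransactionApplicationService(ICustomerRepository customers,
            ICurrencyRepository currencies, ITransactionProcessor processor)
        {
            _customers = customers;
            _currencies = currencies;
            _processor = processor;
        }

        public Guid InitiateTransaction(Guid customerId, Guid currencyId, decimal amount)
        {
            // 1. Load the dependencies the aggregate genuinely needs.
            var customer = _customers.GetById(customerId);
            var currency = _currencies.GetById(currencyId);

            // 2. Let the factory assemble a valid Transaction aggregate.
            var transaction = TransactionFactory.Create(customer, currency, amount);

            // 3. Hand the fully loaded aggregate to the domain services.
            _processor.Process(transaction);
            return transaction.Id;
        }
    }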
EDIT:
I ended up using GetById(id) to load only the dependencies I knew were needed in my Services. I'm not a big fan of accidentally accessing null instances due to flat loading, but I have my unit tests to protect me from taking that to production!!
I highly doubt that Currency is an entity; however, it's important to model things according to how they are defined and used by the real Domain. Forget factories or other implementation details like the db; you need to make sure you have defined the concepts right.
Once you've done that, you'll have already identified the aggregate root as well. Btw, the entities should encapsulate the relevant business rules. Use Services to implement use cases, i.e. to manage the interaction between the domain objects and other parts such as the repository.
You should keep EVERYTHING related to the db and CRUD in the repository, and have the repo work only with aggregate roots. Also, for querying purposes, you should use CQRS so that all the queries are done on a read model. For Domain purposes, a Get(id) is enough 99% of the time, and that method returns an aggregate root.
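For the query side, a minimal sketch of what such a read model could look like (the names are illustrative; the domain side keeps its Get(id) returning the aggregate root):

    using System;

    // Read side (CQRS): queries return flat DTOs shaped for the caller,
    // without going through the aggregate or the repository.
    public sealed class TransactionSummary
    {
        public Guid Id { get; set; }
        public string CustomerName { get; set; }
        public decimal Amount { get; set; }
    }

    public interface ITransactionReadModel
    {
        TransactionSummary GetSummary(Guid transactionId);
    }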
Be aware that DDD is very tricky, the most difficult part is modeling the Domain correctly, all the buzzwords are useless if the model is wrong.

ServiceStack Development Tooling?

Not sure if this is the most effective place to ask this question. Please redirect if you think it is best posted elsewhere to reach a better audience.
I am currently building some tooling in Visual Studio 2013 (using NuPattern) for a project that implements a standard REST service using the ServiceStack framework. That is, the tooling helps you implement REST services that meet a set of design rules and guidelines (in this case the ones advocated by ApiGee) for good REST service design.
Based on some simple configuration by the service developer, for each resource they wish to expose as a REST endpoint, with any number of named verbs (of type GET, PUT, POST or DELETE), the tooling generates the following code files, with conventional names and folder structure, into the projects of your own solution in Visual Studio (all in C# at this point):
The service class, and the service interface containing each named verb.
Both the request DTO and response DTOs, containing each named field.
The validator classes for each request DTO, which validates each request DTO field.
A manager class (and interface) that handles the actual calls with data unwrapped from DTOs.
Integration Tests that verify each verb with common edge test cases, and verifies status codes, web exceptions, basic connectivity.
Unit Tests for each service and manager class, that verify parameters and common edge cases, and exception handling.
etc.
The toolkit is proving to be extremely useful in getting directly to the inner coding of the actual service by taking care of the ServiceStack plumbing in a consistent manner. From there it is basically up to you what you do with the data passed to you in the request DTOs.
Basically, once the service developer names the resource and chooses the REST verbs they want to support (typically any of these common ones: Get, List, Create, Update, Delete), they can jump straight to the implementation of the code that does the good stuff, rather than worrying about coding up all the types around the web operations and plumbing them into the ServiceStack framework. Of course we support nested routes and all that good stuff, so that your REST API can evolve appropriately.
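To give a feel for the output, a generated service for a hypothetical 'Order' resource might look roughly like this (illustrative only; the names, route and namespaces are simplified, and the real toolkit also emits validators, tests, etc.):

    using ServiceStack;

    // Request DTO with a conventional route (illustrative).
    [Route("/orders/{Id}", "GET")]
    public class GetOrder : IReturn<GetOrderResponse>
    {
        public int Id { get; set; }
    }

    public class GetOrderResponse
    {
        public int Id { get; set; }
        public string Status { get; set; }
    }

    // Contract for the manager class that holds the actual business logic.
    public interface IOrderManager
    {
        string GetStatus(int orderId);
    }

    // The generated service only unwraps the DTO and delegates to the manager.
    public class OrderService : Service
    {
        public IOrderManager Manager { get; set; }   // injected by the IoC container

        public object Get(GetOrder request)
        {
            return new GetOrderResponse
            {
                Id = request.Id,
                Status = Manager.GetStatus(request.Id)
            };
        }
    }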
The toolkit is evolving as more is learned about building REST services with ServiceStack, and as we want to add more flexibility to it.
Since there is so much value being discovered with this toolkit in our specific project, I wanted to see whether others in the ServiceStack community (particularly those new to it, or old hands at it) would see any value in us making it open source and letting the community evolve it with their own expertise, to help others move forward more quickly with ServiceStack.
(And, of course, selfishly give us a chance to pay forward to others, out of respect for the many contributions others have selflessly made in the ServiceStack communities that have helped us move forward.)
Let us know what you think; we can post a video demonstrating the toolkit as it is now so you can see what the developer experience currently looks like.
Video walkthrough of the VS.NET Extension
A video walkthrough of the workflow is available at:
http://www.youtube.com/watch?v=ejTyvKba_vo
Toolkit Project
The toolkit is now available here:
https://github.com/jezzsantos/servicestacktoolkit

Help w/ DDD, SOA and PI

Without getting into all of the gory details, I am trying to design a service-based solution that will be consumed by several client applications. The solution allows admins to create and modify document templates which are used by regular users to perform data entry. It is my intent to make the application a learning tool for best practices, techniques, etc.
And, at the same time, I have to accommodate a schizophrenic environment because the 'powers that be' can never stick to their decisions regarding technologies and tools. For example, I am using Linq-to-SQL today because they aren't ready to go to EF4, but there is also discussion about switching over to NHibernate. So I have to make the code as persistence ignorant as possible to minimize the work required should we change OR/M tools.
At this point, I am also limited to using the partial class approach to extend the Linq-to-SQL classes so they implement interfaces defined in my business layer. I cannot go with POCOs because management insists that we leverage all built-in tooling, etc. so I must support the Linq-to-SQL designer.
That said, my service interface has a StartSession method that accepts a template identifier in its signature. The operation flows like this:
If a session already exists in the database for the current user and specified template, update the record to show the current activity. If not, create a new session object.
The session is associated with an instance of the template, call it the "form". So if the session is new, I need to retrieve the template information to create the new "form", associate it with the session then save the session to the database. On the other hand, if the session already existed, then I need to also load the "form" with the data entered by the user and stored in the session previously.
Finally, the session (with form definition and data) is returned to the caller.
My first objective is to create clean separation between the logical layers of my application. The second is to maintain persistence ignorance (as mentioned above). Third, I have to be able to test everything so all dependencies must be externalized for easy mocking. I am using Unity as an IoC tool to help in this area.
To accomplish this, I have defined my service class and data contracts as needed to support the service interface. The service class will have a dependency injected from the business layer that actually performs the work. And here's where it has gotten messy for me.
I've been trying to go the Unit of Work and Repository route to help with persistence ignorance. I have an ITemplateRepository and an ISessionRepository which I can access from my IUnitOfWork implementation. The service class gets an instance of my SessionManager class (in my BLL) injected. The SessionManager receives the IUnitOfWork implementation through constructor injection and delegates all persistence to the UoW, but I find myself playing a shell game with the various pieces of logic.
Should all of the logic described above be in the SessionManager class, or perhaps in the UoW implementation? I want as little logic as possible in the repository implementations, because changing the data access platform could result in unwanted changes to the application logic. Since my repository works against an interface, how do I best go about creating the new session (keeping in mind that a valid session has a reference to the template, er, form being used)? Would it be better to use POCOs anyway, even though I have to support the designer, and use a tool like AutoMapper inside the repository implementation to handle translating the objects?
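To make the moving parts concrete, this is roughly the shape I have so far (the types and method bodies below are simplified stand-ins):

    using System;

    public sealed class Template { public Guid Id; public string Definition; }
    public sealed class Session { public Guid TemplateId; public string FormData; }

    public interface ITemplateRepository { Template GetById(Guid id); }
    public interface ISessionRepository
    {
        Session FindByUserAndTemplate(string userName, Guid templateId);
        void Add(Session session);
    }

    public interface IUnitOfWork
    {
        ITemplateRepository Templates { get; }
        ISessionRepository Sessions { get; }
        void Commit();
    }

    public sealed class SessionManager
    {
        private readonly IUnitOfWork _uow;

        public SessionManager(IUnitOfWork uow) { _uow = uow; }

        public Session StartSession(string userName, Guid templateId)
        {
            // Reuse an existing session for this user/template, or create a new
            // "form" instance from the template definition.
            var session = _uow.Sessions.FindByUserAndTemplate(userName, templateId);
            if (session == null)
            {
                var template = _uow.Templates.GetById(templateId);
                session = new Session { TemplateId = template.Id, FormData = template.Definition };
                _uow.Sessions.Add(session);
            }

            _uow.Commit();
            return session;
        }
    }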
Ugh!
I know I am just stuck in analysis paralysis, so a little nudge is probably all I need. It would be ideal if someone could provide an example of how you would solve the problem given the business rules and architectural constraints I've described.
If you don't use POCOs then you're not really going to be data store agnostic. And using POCOs will allow you to get your system up and running with memory-based repositories, which is what you'll likely want to use for your unit tests anyhow.
AutoMapper sounds nice, but I wouldn't consider it a deal breaker. Mapping POCOs to EF4, LinqToSql or NHibernate isn't that time consuming unless you have hundreds of tables. When/if your POCOs begin to diverge from your persistence layer, you might find that AutoMapper won't really fit the bill.
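For illustration, mapping a Linq-to-SQL row to a POCO inside a repository could look roughly like this (the type names are made up; the newer AutoMapper instance API is shown):

    using System;
    using AutoMapper;

    // Hypothetical Linq-to-SQL row and the POCO the business layer sees.
    public class SessionRecord { public Guid Id { get; set; } public string Data { get; set; } }
    public class SessionPoco { public Guid Id { get; set; } public string Data { get; set; } }

    public class SessionRepository
    {
        private static readonly IMapper Mapper = new MapperConfiguration(
            cfg => cfg.CreateMap<SessionRecord, SessionPoco>()).CreateMapper();

        public SessionPoco GetById(Guid id)
        {
            SessionRecord record = LoadRecord(id);   // a DataContext query in reality
            return Mapper.Map<SessionPoco>(record);  // translate to the persistence-ignorant POCO
        }

        private SessionRecord LoadRecord(Guid id)
        {
            // Placeholder for the actual Linq-to-SQL query.
            return new SessionRecord { Id = id, Data = "..." };
        }
    }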

Should repositories be both loading and saving entities?

In my design, I have Repository classes that get Entities from the database (how they do that does not matter). But to save Entities back to the database, does it make sense to make the repository do this too? Or does it make more sense to create another class (such as a UnitOfWork) and give it the responsibility for saving, by having it accept Entities and calling save() on it to tell it to go ahead and do its magic?
In DDD, Repositories are definitely where ALL persistence-related stuff is expected to reside.
If saving to and loading from the database were encapsulated in more than one class, database-related code would be spread over too many places in your codebase, making maintenance significantly harder. Moreover, there is a high chance that later readers of this code would not understand it at first sight, because such a design does not adhere to the quasi-standards that most developers expect to find.
Of course, you can have separate Reader/Writer-helper classes, if that's appropriate in your project. But seen from the Business Layer, the only gateway to persistence should be the repository...
HTH!
Thomas
I would give the repository the overall responsibility for encapsulating all aspects of load and save. This ensures that tricky issues, such as managing contention between readers and writers, have a place to be managed.
The repository might well use your UnitOfWork class, and might need to expose BeginUow and Commit methods.
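A minimal sketch of that shape, assuming the BeginUow/Commit naming from above (everything else is illustrative):

    // The repository delegates transactional boundaries to the unit of work.
    public interface IUnitOfWork
    {
        void BeginUow();
        void Commit();
    }

    public sealed class DocumentRepository
    {
        private readonly IUnitOfWork _uow;

        public DocumentRepository(IUnitOfWork uow) { _uow = uow; }

        public void Save(object document)
        {
            _uow.BeginUow();
            // ... register the changes to the entity with the unit of work ...
            _uow.Commit();
        }
    }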
Fowler says that the repository API should mimic a collection:
A Repository mediates between the domain and data mapping layers, acting like an in-memory domain object collection.
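In code, that collection-like shape amounts to something along these lines (a sketch; the entity and the exact method set are illustrative):

    using System;
    using System.Collections.Generic;

    public sealed class Customer { public Guid Id { get; set; } }

    // Callers add, remove and fetch entities as if from an in-memory collection;
    // the actual persistence happens behind the interface.
    public interface ICustomerRepository
    {
        Customer GetById(Guid id);
        IEnumerable<Customer> Find(Func<Customer, bool> predicate);
        void Add(Customer customer);
        void Remove(Customer customer);
    }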
