DDD: How to refactor (wrap) remote service call into domain? - domain-driven-design

There is a service class FooService with a method named fetchFoos that calls a remote service, deserializes the JSON response, and returns a graph of value objects (starting with a root Foo object). For now, there is no other behavior around this remote service, i.e. we are just fetching some 3rd-party data. Speaking in DDD terms, this is a closed bounded context whose sole purpose is providing data, using its own models.
We could leave this as a service, but it seems it would be better to rename it to something more 'linguistic'.
For example, we could migrate the singleton service to a simple bean named FooFetcher (any better name?) with a method fetchFooForBar() that does the same thing. Then, instead of injecting the service, we would simply create a new instance of this object and use it.
I even think that FooFetcher is the wrong domain name; it should be just Foos, and the method would be fetchForBar().
However, some other people think this should come from a repository, so basically we would just need to rename FooService to FooRepository.
Any collective wisdom on how to encapsulate remote services in DDD?

Assuming Foo is an entity in your bounded context, you can think of this service as an infrastructure service that will be invoked from a repository.
In the following example, I named the fetcher "FooFetchService", and it has a method called "getFoo" that returns a JSON string with the "contents" of the Foo object:
public interface FooRepository {
    public Foo getById(String fooId);
}

public class RemoteFooRepository implements FooRepository {

    @Inject
    FooFetchService fooFetchService;

    public Foo getById(String fooId) {
        String returnedFoo = fooFetchService.getFoo(fooId);
        /* add code here to deserialize the JSON contents of the returnedFoo variable into a Foo object named foo */
        return foo;
    }
}
RemoteFooRepository is just an implementation of FooRepository which happens to retrieve a Foo via some remote service. You can inject this into any of your other service classes that need it.

Related

How to use the strategy pattern with managed objects

I process messages from a queue. I use data from the incoming message, for example origin and type, to determine which class to use to process the message. I use the combination of origin and type to look up an FQCN and use reflection to instantiate an object to process the message. At the moment these processing objects are all simple POJOs that implement a common interface, hence I am using a strategy pattern.
The problem I am having is that all my external resources (mostly databases accessed via JPA) are injected (@Inject), and when I create the processing object as described above all these injected objects are null. The only way I know to populate these injected resources is to make each implementation of the interface a managed bean by adding @Stateless. This alone does not solve the problem, because the injected members are only populated if the class implementing the interface is itself injected (i.e. container managed), as opposed to being created by me.
Here is a made-up example (sensitive details changed):
public interface MessageProcessor
{
    public void processMessage(String xml);
}

@Stateless
public class VisaCreateClient implements MessageProcessor
{
    @Inject private DAL db;
    …
}

public class MasterCardCreateClient implements MessageProcessor …
In the database there is an entry "visa.createclient" = "fqcn.VisaCreateClient", so if the message origin is "Visa" and the type is "Create Client" I can look up the appropriate processing class. If I use reflection to create VisaCreateClient, the db variable is always null. Even if I add @Stateless and use reflection, the db variable remains null. It's only when I inject VisaCreateClient that the db variable gets populated, like so:
@Stateless
public class QueueReader
{
    @Inject VisaCreateClient visaCreateClient;
    @Inject MasterCardCreateClient masterCardCreateClient;
    @Inject … many more times

    private Map<String, MessageProcessor> processors...

    private void init()
    {
        processors.put("visa.createclient", visaCreateClient);
        processors.put("mastercard.createclient", masterCardCreateClient);
        … many more times
    }
}
Now I have dozens of message processors, and if I have to inject each implementation and then register it in the map, I'll end up with dozens of injections. Also, should I add more processors, I have to modify the QueueReader class to add the new injections and restart the server; with my old code I merely had to add an entry to the database and deploy the new processor on the classpath - I didn't even have to restart the server!
I have thought of two ways to resolve this:
Add an init(DAL db, OtherResource or, ...) method to the interface that gets called right after the message processor is created with reflection and pass the required resource. The resource itself was injected into the QueueReader.
Add an argument to the processMessage(String xml, Context context) where Context is just a map of resources that were injected into the QueueReader.
But does this approach mean that I will be using the same instance of the DAL object for every message processor? I believe it would and as long as there is no state involved I believe it is OK - any and all transactions will be started outside of the DAL class.
So my question is will my approach work? What are the risks of doing it that way? Is there a better way to use a strategy pattern to dynamically select an implementation where the implementation needs access to container managed resources?
Thanks for your time.
For a similar problem I used an extension of the processor interface to decide which type of message it can handle. Then you can inject all variants of the processor via Instance and simply pick the first one that can handle the message:
public interface MessageProcessor
{
    public boolean canHandle(String xml);
    public void processMessage(String xml);
}
And in your QueueReader:
@Inject
private Instance<MessageProcessor> allProcessors;

public void handleMessage(String xml) {
    MessageProcessor processor = StreamSupport.stream(allProcessors.spliterator(), false)
        .filter(proc -> proc.canHandle(xml))
        .findFirst()
        .orElseThrow(...);
    processor.processMessage(xml);
}
This does not let you add processors to a running server, but to add a new processor you simply implement the interface and redeploy.

Decorating Repositories with AutoFac

Hi, I have what is maybe a common problem that I think cannot be entirely solved by Autofac or any IoC container. It may be a design problem that I need some fresh input on.
I have the classic MVC web solution with EF 6. It's been implemented in a true DDD style with an anti-corruption layer, three bounded contexts, and cross-cutting concerns moved out to infrastructure projects. It has been a real pleasure to see all the pieces fall into place in a good way. We also added Commands for CUD operations in the Domain.
Now here is the problem. The customer wants a change log that tracks every entity's properties, and when updates are done we need to save the before and after values into the change log. We have implemented that successfully in an ILoggerService that wraps a Microsoft test utility we use to detect changes. But I (my role is Software Architect) took the decision to decorate our generic repositories with a ChangeTrackerRepository that has a dependency on ILoggerService. This works fine. The decorator tracks the Add(…) and Modify(…) methods in our IRepository<TEntity>.
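For illustration, a minimal sketch of what such a decorator might look like (the Add/Modify members come from the description above; the ILoggerService method names shown here are assumptions, not the actual API):
public class ChangeTrackerRepository<TEntity> : IRepository<TEntity> where TEntity : class
{
    private readonly IRepository<TEntity> _inner;
    private readonly ILoggerService _logger;

    public ChangeTrackerRepository(IRepository<TEntity> inner, ILoggerService logger)
    {
        _inner = inner;
        _logger = logger;
    }

    public void Add(TEntity entity)
    {
        _logger.LogAdded(entity);    // assumed ILoggerService method
        _inner.Add(entity);
    }

    public void Modify(TEntity entity)
    {
        _logger.LogModified(entity); // assumed ILoggerService method
        _inner.Modify(entity);
    }

    // any other IRepository<TEntity> members would simply delegate to _inner
}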
The problem is that we also have custom repositories with custom queries, like this:
public class CounterPartRepository : Repository<CounterPart>, ICounterPartRepository
{
    public CounterPartRepository(ManagementDbContext unitOfWork)
        : base(unitOfWork)
    {}

    public CounterPart GetAggregate(Guid id)
    {
        return GetSet().CompleteAggregate().SingleOrDefault(s => s.Id == id);
    }

    public void DeleteCounterPartAddress(CounterPartAddress address)
    {
        RemoveChild(address);
    }

    public void DeleteCounterPartContact(CounterPartContact contact)
    {
        RemoveChild(contact);
    }
}
We also have simple repositories that just close the generic repository and get the proper EF bounded context injected into them (Unit of Work pattern):
public class AccrualPeriodTypeRepository : Repository<AccrualPeriodType>, IAccrualPeriodTypeRepository
{
    public AccrualPeriodTypeRepository(ManagementDbContext unitOfWork)
        : base(unitOfWork)
    {
    }
}
When decorating AccrualPeriodTypeRepository with Autofac through a generic decorator, we can easily inject that repo into a command handler like this:
public AddAccrualPeriodCommandHandler(IRepository<AccrualPeriod> accrualRepository)
This works fine.
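For reference, the generic case above is typically wired up with a registration along these lines (a sketch assuming Autofac 4.9 or later, where RegisterGenericDecorator no longer requires keyed services):
var builder = new ContainerBuilder();

// closed generic repositories resolved as IRepository<T>
builder.RegisterGeneric(typeof(Repository<>))
       .As(typeof(IRepository<>));

// wrap every IRepository<T> with the change-tracking decorator
builder.RegisterGenericDecorator(typeof(ChangeTrackerRepository<>), typeof(IRepository<>));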
But how do we also decorate CounterPartRepository?
I have gone through several solutions in my head and they all end up with a dead-end.
1) Manually decorating every custom repository generates so many custom decorators that it would be nearly unmaintainable.
2) Decorate only the closed generic Repository and keep the extended custom queries outside it. This smells bad; shouldn't those queries be part of that repository?
3) If we consider 2… maybe skip our Services and rely only on IRepository for operating on our Aggregate Roots, together with IQueryHandler (see this article: https://cuttingedge.it/blogs/steven/pivot/entry.php?id=92).
I need some fresh input on what I think is a common problem: how to decorate your repositories when you have both custom closed repositories and simple closed repositories, all inheriting from the same generic Repository.
Have you considered decorating command handlers instead of decorating repositories?
Repos are too low level, and it is not their responsibility to know what should be logged and how.
What about the following:
1) You have your command handlers looking something like this:
public class DeleteCounterPartAddressHandler : IHandle<DeleteCounterPartAddressCommand>
{
    // this might be set by a DI container, or passed to a constructor
    public ICounterPartRepository Repository { get; set; }

    public void Handle(DeleteCounterPartAddressCommand command)
    {
        // in DDD you always want to read an aggregate
        // and save the aggregate as a whole
        var counterpart = Repository.GetAggregate(command.CounterPartId);
        counterpart.DeleteAddress(command.AddressId);
        Repository.Save(counterpart);
    }
}
2) Now you can simply use the Chain of Responsibility pattern to "decorate" your handlers with logging, transactions, whatever:
public class LoggingHandler<T> : IHandle<T>
{
    private readonly IHandle<T> _innerHandler;
    private readonly ILog _log;

    public LoggingHandler(IHandle<T> innerHandler, ILog log)
    {
        _innerHandler = innerHandler;
        _log = log;
    }

    public void Handle(T command)
    {
        // Obviously you do it properly, but you get the idea
        _log.Info("Before");
        _innerHandler.Handle(command);
        _log.Info("After");
    }
}
Now you have just one piece of code responsible for logging and you can compose it with any command handler, so if you ever want to log a particular command then you just "wrap" it with the logging handler, and it is still your IHandle<T> so the rest of the system is not impacted.
And you can do it with other concerns too (threading, queueing, transactions, multiplexing, routing, etc.) without messing around and plumbing this stuff here and there.
Concerns are very well separated this way.
It is also much better (to me) because you log at the level of a real (business) operation, rather than at the low-level repository.
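To make the composition concrete, here is a sketch of how the wrapping can be done, either by hand or by letting the container decorate every handler (the counterPartRepository and log variables are assumed to be available, and the RegisterGenericDecorator overload shown assumes Autofac 4.9 or later):
// composing by hand
IHandle<DeleteCounterPartAddressCommand> handler =
    new LoggingHandler<DeleteCounterPartAddressCommand>(
        new DeleteCounterPartAddressHandler { Repository = counterPartRepository },
        log);

// or, with Autofac, decorate every IHandle<T> in one registration
builder.RegisterAssemblyTypes(typeof(DeleteCounterPartAddressHandler).Assembly)
       .AsClosedTypesOf(typeof(IHandle<>));
builder.RegisterGenericDecorator(typeof(LoggingHandler<>), typeof(IHandle<>));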
Hope it helps.
P.S. In DDD you really want your repositories to expose only aggregate-level methods, because Aggregates are supposed to take care of their invariants (and nothing else should, no services, no repositories), and because an Aggregate represents a transaction boundary.
Really, it is up to the Repository how it gets the Aggregate from persistent storage and how it persists it back; from the outside it should look like you ask someone for an object and get back an object you can call behaviors on.
So normally you would only get an aggregate from the repository, call its behavior(s), and then save it back. That really means your repositories would mostly have GetById and Save methods, not internals like "UpdateThatPartOfAnAggregate".
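In code, the aggregate-level repository described above shrinks to something like this (names taken from the question):
public interface ICounterPartRepository
{
    CounterPart GetById(Guid id);
    void Save(CounterPart counterPart);
}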

Trying to understand IOC and binding

I am very new to the concept of IoC, and I understand that it helps us resolve different classes in different contexts. Your calling class just interacts with an interface, and the IoC container decides which implementation to give you and takes care of newing up the object.
Please do correct me if my understanding is wrong, because my question is based on it:
Now, I see this pattern very often in these projects:
private readonly IEmailService emailService;
private readonly ITemplateRenderer templateRenderer;
private readonly IHtmlToTextTransformer htmlToTextTransformer;

public TemplateEmailService(IEmailService emailService,
                            ITemplateRenderer templateRenderer,
                            IHtmlToTextTransformer htmlToTextTransformer)
{
    this.emailService = emailService;
    this.htmlToTextTransformer = htmlToTextTransformer;
    this.templateRenderer = templateRenderer;
}
I understand that this helps you use implementations of these interfaces without newing them up, and also that you don't have to decide WHICH implementation to get; your IoC container decides it for you, right?
But when I code like this, I do not even touch any IoC configuration files. Granted, I have been using it for only 2 days, but from all the tutorials I have read I was expecting to have to configure something that says "resolve IParent to the Child class". Yet it works without me doing anything like that. Is it because there is only one implementation of each of these interfaces? And if I do have more than one implementation, is that the only case where I have to configure the resolution explicitly?
The code sample you have is a case of Constructor Injection.
In traditional code, you would have a parameterless constructor, and in it you would "new up" your objects like this:
IEmailService emailService = new EmailService();
So your code is explicitly controlling which implementation gets assigned to the interface variable.
In IoC using constructor injection, control is inverted, meaning the container is "driving the bus" and is creating your TemplateEmailService object. When it is about to create it, the container looks at your constructor parameters (IEmailService, ITemplateRenderer, etc.) and feeds those objects to your class for use.
The IoC container can be configured so that interface A gets fulfilled by implementation B (or C) explicitly; each container has its own way to do it. Or it can do it by convention (IFoo fulfilled by Foo), or even by attributes on classes, whatever.
So to answer your question: yes, you can explicitly define which implementations get used to fulfill certain interfaces. You'll have to read your IoC container's docs for how.
One more thing: "when you code like this", you technically don't have to be using an IoC container at all. In fact, your class should not have a direct reference to the container; that maximizes reusability and also allows easy testing. You would wire up interfaces to implementation classes elsewhere.
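For example, with no container at all you can compose the object graph by hand at the application's entry point (the concrete class names below are hypothetical; substitute whatever implementations the project actually has):
// the "composition root", e.g. Main() or application startup
IEmailService emailService = new SmtpEmailService();               // hypothetical implementation
ITemplateRenderer templateRenderer = new RazorTemplateRenderer();  // hypothetical implementation
IHtmlToTextTransformer htmlToText = new HtmlToTextTransformer();   // hypothetical implementation

var templateEmailService = new TemplateEmailService(emailService, templateRenderer, htmlToText);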

MEF and Factory Pattern

I am trying to refactor my project to improve testability; therefore I'm introducing an abstract factory.
My application collects data from different sources by using ICrawlers.
These ICrawlers use 3rd-party libraries to access different sources, e.g. Twitter.
Example: my TwitterCrawler uses TweetSharp to access Twitter data.
My first version strongly coupled the TweetSharp client to the crawler. Now I have abstracted TweetSharp behind an ITwitterClient interface with a TweetSharpTwitterClient implementation.
The next step is to introduce an ITwitterClientFactory with a DefaultTwitterClientFactory that creates TweetSharpTwitterClients. This should bring me closer to my goal (testability), because I can switch the factory to a MockTwitterClientFactory that creates a MockTwitterClient that delivers some test output.
Now, let me come to my point.
I am using MEF for dependency injection (but I'm rather new to it). What I'm doing is this:
public class TwitterCrawler : CrawlerBase, ICrawler
{
    [Import]
    public ITwitterClientFactory TwitterClientFactory { get; set; }

    public override void Process()
    {
        ITwitterClient twitterClient = TwitterClientFactory.MakeSingletonClient();
        // do something with twitterClient
    }
}
Whereas my DefaultTwitterClientFactory exports itself to MEF:
[Export(typeof(ITwitterClient))]
public class DefaultTwitterClientFactory : ITwitterClientFactory
{
    // implementation of ITwitterClientFactory
    // provides methods to create instances of ITwitterClient implementations
}
Now, while this works so far, my question is: how do I switch the factory?
How can I create a unit test that uses the MockTwitterClientFactory instead of the DefaultTwitterClientFactory?
Is my approach good at all? Is it better to manually set the factory that is to be used?
Somewhere something like
... new TwitterCrawler(mockedTwitterClientFactory)
or even
.... new TwitterCrawler(mockedTwitterClient)?
This actually only moves the problem outside of TwitterCrawler, but somewhere I still have to decide how to construct the ITwitterClient and which factory to use for that purpose.
Should I dive deeper into the mechanics of MEF (ExportProvider?)
You shouldn't need to use the composer/container in your unit tests - just wire the SUT directly with the Test Doubles.
Something like this:
var sut = new TwitterCrawler();
sut.TwitterClientFactory = new FakeTwitterClientFactory();
However, you should really refactor from Property Injection to Constructor Injection, as the property implies that the dependency is optional.
BTW, your DefaultTwitterClientFactory doesn't export itself, it exports ITwitterClient.
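As a sketch, the constructor-injection refactoring suggested above could look like this with MEF's [ImportingConstructor] attribute (MakeSingletonClient is the method from the question; the fake factory in the test is whatever test double you prefer):
public class TwitterCrawler : CrawlerBase, ICrawler
{
    private readonly ITwitterClientFactory _twitterClientFactory;

    // MEF satisfies this constructor when composing the part;
    // tests simply call it directly with a fake factory.
    [ImportingConstructor]
    public TwitterCrawler(ITwitterClientFactory twitterClientFactory)
    {
        _twitterClientFactory = twitterClientFactory;
    }

    public override void Process()
    {
        ITwitterClient twitterClient = _twitterClientFactory.MakeSingletonClient();
        // do something with twitterClient
    }
}

// in a unit test, no container needed:
// var sut = new TwitterCrawler(new MockTwitterClientFactory());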

How do I implement repository pattern and unit of work when dealing with multiple data stores?

I have a unique situation where I am building a DDD-based system that needs to use both Active Directory and a SQL database for persistence. Initially this wasn't a problem, because our design was set up with a unit of work that looked like this:
public interface IUnitOfWork
{
    void BeginTransaction();
    void Commit();
}
and our repositories looked like this:
public interface IRepository<T>
{
    T GetByID();
    void Save(T entity);
    void Delete(T entity);
}
In this setup our load and save would handle the mapping between both data stores, because we wrote it ourselves. The unit of work would handle transactions and would contain the LINQ to SQL data context that the repositories would use for persistence. The Active Directory part was handled by a domain service implemented in infrastructure and consumed by the repositories in each Save() method. Save() was responsible for interacting with the data context to do all the database operations.
Now we are trying to adapt it to Entity Framework and take advantage of POCOs. Ideally we would not need the Save() method on the repository, because the domain objects are tracked by the object context; we would just need a Save() method on the unit of work to have the object context save the changes, and a way to register new objects with the context. The new proposed design looks more like this:
public interface IUnitOfWork
{
    void BeginTransaction();
    void Save();
    void Commit();
}

public interface IRepository<T>
{
    T GetByID();
    void Add(T entity);
    void Delete(T entity);
}
This solves the data access problem with Entity Framework, but it does not solve the problem with our Active Directory integration. Before, that logic lived in the Save() method on the repository, but now it has no home. The unit of work knows nothing other than the Entity Framework data context. Where should this logic go? I would argue this design only works if you have a single data store using Entity Framework. Any ideas on how best to approach this issue? Where should I put this logic?
I wanted to come back and follow up with what I have learned since I posted this. It seems that if you are going to stay true to the repository pattern, the data stores it persists to do not matter. If there are two data stores, write to them both in the same repository. What is important is to keep up the facade that the repository pattern represents: an in-memory collection. I would not create separate repositories, because that doesn't feel like a true abstraction to me; you are letting the technology under the hood dictate the design at that point. To quote from dddstepbystep.com:
"What Sits Behind A Repository? Pretty much anything you like. Yep, you heard it right. You could have a database, or you could have many different databases. You could use relational databases, or object databases. You could have an in memory database, or a singleton containing a list of in memory items. You could have a REST layer, or a set of SOA services, or a file system, or an in memory cache… You can have pretty much anything - your only limitation is that the Repository should be able to act like a Collection to your domain. This flexibility is a key difference between Repository and traditional data access techniques."
http://thinkddd.com/assets/2/Domain_Driven_Design_-_Step_by_Step.pdf
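As a sketch of that single-repository approach (MyDbContext, User, and the IActiveDirectoryGateway abstraction with its SaveUser/DeleteUser methods are hypothetical stand-ins for whatever EF context and AD access code you already have):
public class UserRepository : IRepository<User>
{
    private readonly MyDbContext _context;                // Entity Framework context (hypothetical)
    private readonly IActiveDirectoryGateway _adGateway;  // hypothetical wrapper around the AD calls

    public UserRepository(MyDbContext context, IActiveDirectoryGateway adGateway)
    {
        _context = context;
        _adGateway = adGateway;
    }

    public User GetByID()
    {
        // simplified to mirror the parameterless GetByID() in the question
        return _context.Users.FirstOrDefault();
    }

    public void Add(User user)
    {
        // the repository hides the fact that a User spans two stores
        _context.Users.Add(user);
        _adGateway.SaveUser(user);
    }

    public void Delete(User user)
    {
        _context.Users.Remove(user);
        _adGateway.DeleteUser(user);
    }
}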
First, I assume you are using an IoC container. I advocate making true Repositories for each entity type. This means you will wrap each object context EntitySet in a class that implements something like:
interface IRepository<TEntity> {
    TEntity Get(int id);
    void Add(TEntity entity);
    void Save(TEntity entity);
    void Remove(TEntity entity);
    bool CanPersist<T>(T entity);
}
CanPersist merely returns whether that repository instance supports persisting the passed entity, and is used polymorphically by UnitOfWork.Save described below.
Each IRepository will also have a constructor that allows the IRepository to be constructed in "transactional" mode. So, for EF, we might have:
public partial class EFEntityARepository : IRepository<EntityA> {
    private readonly EFContext _context;
    private readonly bool _transactional;

    public EFEntityARepository(EFContext context, bool transactional) {
        _context = context;
        _transactional = transactional;
    }

    public void Add(EntityA entity) {
        _context.EntityAs.Add(entity);
        if (!_transactional) _context.SaveChanges();
    }
}
UnitOfWork should look like this:
interface UnitOfWork {
    void Add<TEntity>(TEntity entity);
    void Save<TEntity>(TEntity entity);
    void Remove<TEntity>(TEntity entity);
    void Complete();
}
The UnitOfWork implementation will use dependency injection to get instances of all IRepository implementations. In UnitOfWork.Save/Add/Remove, the UoW will pass the argument entity into CanPersist of each IRepository. For any true return values, the UnitOfWork will store that entity in a private collection specific to that IRepository and to the intended operation. In Complete, the UnitOfWork will go through all the private entity collections and call the appropriate operation on the appropriate IRepository for each entity.
If you have an entity that needs to be partially persisted by EF and partially persisted by AD, you would have two IRepository classes for that entity type (they would both return true from CanPersist when passed an instance of that entity type).
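A rough sketch of that UnitOfWork implementation (the non-generic IRepositoryDispatch helper is an assumption added for the sketch so that repositories for different entity types can sit in one collection; each IRepository<TEntity> would be adapted to it):
// non-generic view of a repository, assumed for this sketch
interface IRepositoryDispatch {
    bool CanPersist(object entity);
    void Add(object entity);
    void Save(object entity);
    void Remove(object entity);
}

class UnitOfWorkImpl : UnitOfWork {
    private readonly IEnumerable<IRepositoryDispatch> _repositories;
    // pending operations, remembered per repository until Complete()
    private readonly List<(IRepositoryDispatch Repo, string Operation, object Entity)> _pending =
        new List<(IRepositoryDispatch, string, object)>();

    public UnitOfWorkImpl(IEnumerable<IRepositoryDispatch> repositories) {
        _repositories = repositories; // supplied by the IoC container
    }

    public void Add<TEntity>(TEntity entity) { Enqueue("Add", entity); }
    public void Save<TEntity>(TEntity entity) { Enqueue("Save", entity); }
    public void Remove<TEntity>(TEntity entity) { Enqueue("Remove", entity); }

    private void Enqueue(string operation, object entity) {
        // ask every repository whether it can persist this entity;
        // more than one may say yes (e.g. EF and AD for the same type)
        foreach (var repo in _repositories)
            if (repo.CanPersist(entity))
                _pending.Add((repo, operation, entity));
    }

    public void Complete() {
        foreach (var (repo, operation, entity) in _pending) {
            if (operation == "Add") repo.Add(entity);
            else if (operation == "Save") repo.Save(entity);
            else repo.Remove(entity);
        }
        _pending.Clear();
    }
}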
As for maintaining atomicity between EF and AD, that is a separate non-trivial problem.
IMO I would wrap the calls to both of these repos in a service-type class. Then I would use IoC/DI to inject the repo types into the service class. You would have two repos, one for Entity Framework and one that supports AD. This way each repo deals only with its underlying data store and doesn't have to cross over.
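A minimal sketch of that idea (the two repository fields and the User type are placeholders; how the container distinguishes the EF-backed and AD-backed registrations is left to the container configuration):
public class UserPersistenceService
{
    private readonly IRepository<User> _sqlRepository; // EF-backed implementation
    private readonly IRepository<User> _adRepository;  // AD-backed implementation

    public UserPersistenceService(IRepository<User> sqlRepository, IRepository<User> adRepository)
    {
        _sqlRepository = sqlRepository;
        _adRepository = adRepository;
    }

    public void Add(User user)
    {
        // the service coordinates the two stores; each repository only
        // knows about its own underlying technology
        _sqlRepository.Add(user);
        _adRepository.Add(user);
    }
}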
What I have done to support multiple unit of work types is to make IUnitOfWork more of a factory. I created another type called IUnitOfWorkScope, which is the actual unit of work and has only a Commit method.
namespace Framework.Persistance.UnitOfWork
{
    public interface IUnitOfWork
    {
        IUnitOfWorkScope Get();
        IUnitOfWorkScope Get(bool shared);
    }

    public interface IUnitOfWorkScope : IDisposable
    {
        void Commit();
    }
}
This allows me to inject different implementations of the unit of work into a service and be able to use them side by side.
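Usage then looks roughly like this (the two injected IUnitOfWork implementations, the User type, and the repository work inside the scopes are illustrative):
public class UserProvisioningService
{
    private readonly IUnitOfWork _sqlUnitOfWork;
    private readonly IUnitOfWork _adUnitOfWork;

    public UserProvisioningService(IUnitOfWork sqlUnitOfWork, IUnitOfWork adUnitOfWork)
    {
        _sqlUnitOfWork = sqlUnitOfWork;
        _adUnitOfWork = adUnitOfWork;
    }

    public void CreateUser(User user)
    {
        // each data store gets its own scope; scopes are committed explicitly
        // and disposed (rolled back) if anything throws before Commit()
        using (IUnitOfWorkScope sqlScope = _sqlUnitOfWork.Get())
        using (IUnitOfWorkScope adScope = _adUnitOfWork.Get())
        {
            // ... do repository work against both stores here ...
            sqlScope.Commit();
            adScope.Commit();
        }
    }
}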
