Factory pattern in DDD

Which is the correct or suggested way to use factories in DDD?
Should the factory method receive all necessary parameters from the application service, or are we allowed to inject repositories and extract the needed data inside the factory?
Should it be (Example 1):
public class UserTokenFactory : IUserTokenFactory
{
IUserTypeResourceRepository _userTypeResourceRepository;
public UserTokenFactory(IUserTypeResourceRepository userTypeResourceRepository)
{
_userTypeResourceRepository = userTypeResourceRepository;
}
public async Task<UserToken> CreateWithAsync(User user)
{
var userTypeResources = await _userTypeResourceRepository.GetByUserTypeIdAsync(user.UserTypeId);
//Some logic for creating user tokens
throw new NotImplementedException();
}
}
or (Example 2):
public class UserTokenFactory : IUserTokenFactory
{
IUserTypeResourceRepository _userTypeResourceRepository;
public UserTokenFactory(IUserTypeResourceRepository userTypeResourceRepository)
{
_userTypeResourceRepository = userTypeResourceRepository;
}
public UserToken CreateWith(User user, List<UserTypeResource> userTypeResources)
{
//Some logic for creating user tokens
throw new NotImplementedException();
}
}

You are allowed to inject services into factories. Your factory is basically a domain service which happens to create objects. However, I'd probably rely on the ISP (Interface Segregation Principle) here and define a narrow interface like IResolveUserType rather than depending on the wider IUserTypeResourceRepository interface.
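For illustration, a minimal sketch of what that narrower dependency could look like, assuming a single-method interface (the method signature and the Guid id type are assumptions, not taken from the original code):

public interface IResolveUserType
{
    // Only the lookup the factory actually needs, not the whole repository surface.
    Task<List<UserTypeResource>> GetByUserTypeIdAsync(Guid userTypeId);
}

public class UserTokenFactory : IUserTokenFactory
{
    private readonly IResolveUserType _userTypeResolver;

    public UserTokenFactory(IResolveUserType userTypeResolver)
    {
        _userTypeResolver = userTypeResolver;
    }

    public async Task<UserToken> CreateWithAsync(User user)
    {
        var userTypeResources = await _userTypeResolver.GetByUserTypeIdAsync(user.UserTypeId);
        // Some logic for creating user tokens from the resolved resources
        throw new NotImplementedException();
    }
}

The repository implementation can then implement IResolveUserType as well, so the factory depends only on the capability it actually needs.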

Related

DDD entity with complex creation process

How entities with complex creation process should be created in DDD? Example:
Entity
- Property 1
- Property 2: value depends on what was provided in Property 1
- Property 3: value depends on what was provided in Property 1
- Property 4: value depends on what was provided in Property 1, 2 and 3
I have two ideas, but both look terrible:
Create entity with invalid state
Move creation process to service
We are using a REST API, so in the first scenario we would have to persist the entity in an invalid state, and in the second scenario we move the logic outside of the entity.
You can use the Builder Pattern to solve this problem.
You can make a Builder that holds the logic for the dependencies between properties and either raises exceptions, returns errors, or has a mechanism to tell the client what the next valid steps are.
If you are using an object oriented language, the builder can also return different concrete classes based on the combination of these properties.
Here's a very simplified example. We will store a configuration for EventNotifications that can either listen on some Endpoint (IP, port) or poll.
enum Mode { None, Poll, ListenOnEndpoint }

public class EventListenerNotification {
    public Mode Mode { get; set; }
    public Interval PollInterval { get; set; }
    public Endpoint Endpoint { get; set; }
}

public class Builder {
    private Mode mMode = Mode.None;
    private Interval mInterval;
    private Endpoint mEndpoint;

    public Builder WithMode(Mode mode) {
        this.mMode = mode;
        return this;
    }

    public Builder WithInterval(Interval interval) {
        VerifyModeIsSet();
        VerifyModeIsPoll();
        this.mInterval = interval;
        return this;
    }

    public Builder WithEndpoint(Endpoint endpoint) {
        VerifyModeIsSet();
        VerifyModeIsListenOnEndpoint();
        this.mEndpoint = endpoint;
        return this;
    }

    public EventListenerNotification Build() {
        VerifyState();
        var entity = new EventListenerNotification();
        entity.Mode = this.mMode;
        entity.PollInterval = this.mInterval;
        entity.Endpoint = this.mEndpoint;
        return entity;
    }

    private void VerifyModeIsSet() {
        if (this.mMode == Mode.None) {
            throw new InvalidModeException("Set mode first");
        }
    }

    private void VerifyModeIsPoll() {
        if (this.mMode != Mode.Poll) {
            throw new InvalidModeException("Mode should be Poll");
        }
    }

    private void VerifyModeIsListenOnEndpoint() {
        if (this.mMode != Mode.ListenOnEndpoint) {
            throw new InvalidModeException("Mode should be ListenOnEndpoint");
        }
    }

    private void VerifyState() {
        // validate properties based on Mode
        if (this.mMode == Mode.Poll) {
            // validate interval
        }
        if (this.mMode == Mode.ListenOnEndpoint) {
            // validate Endpoint
        }
    }
}
enum BuildStatus { NotStarted, InProgress, Errored, Finished }

public class BuilderWithStatus {
    private readonly List<Error> mErrors = new List<Error>();
    private Mode mMode = Mode.None;
    private Interval mInterval;

    public BuildStatus Status { get; private set; }
    public IReadOnlyList<Error> Errors { get { return mErrors; } }

    public BuilderWithStatus WithInterval(Interval interval) {
        if (this.mMode != Mode.Poll) {
            this.mErrors.Add(new Error("Mode should be Poll"));
            this.Status = BuildStatus.Errored;
        }
        else {
            this.mInterval = interval;
        }
        return this;
    }

    // the rest is the same as above, but instead of throwing exceptions you record the error
    // and set a status
}
Here are some resources with more information and other mechanisms that you can use:
https://martinfowler.com/articles/replaceThrowWithNotification.html
https://martinfowler.com/eaaDev/Notification.html
https://martinfowler.com/bliki/ContextualValidation.html
Take a look at chapter 6 of the Evans book, which specifically talks about the life cycle of entities in the domain model
Creation is usually handled with a factory, which is to say a function that accepts data as input and returns a reference to an entity.
in second scenario we move logic outside of the entity.
The simplest answer is for the "factory" to be some method associated with the entity's class - i.e., the constructor, or some other static method that is still part of the definition of the entity in the domain model.
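For example, a minimal sketch of such a static factory method on an entity shaped like the one in the question (all names here are illustrative):

public class MyEntity
{
    public string Property1 { get; private set; }
    public string Property2 { get; private set; }

    private MyEntity() { }

    // Static factory method: still part of the entity's definition in the domain model,
    // so dependent values can be derived and validated before an instance ever escapes.
    public static MyEntity Create(string property1)
    {
        if (string.IsNullOrEmpty(property1)) throw new ArgumentException("property1 is required");
        return new MyEntity { Property1 = property1, Property2 = DeriveProperty2From(property1) };
    }

    private static string DeriveProperty2From(string property1)
    {
        // value depends on what was provided in Property 1
        return property1.ToUpperInvariant();
    }
}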
But the problem is that creation of the entity requires several steps.
OK, so what you have is a protocol, which is to say a state machine, where you collect information from the outside world, and eventually emit a new entity.
The instance of the state machine, with the data that it has collected, is also an entity.
For example, creating an actionable order might require a list of items, and shipping addresses, and billing information. But we don't necessarily need to collect all of that information at the same time - we can get a little bit now, and remember it, and then later when we have all of the information, we emit the submitted order.
It may take some care with the domain language to distinguish the tracking entity from the finished entity (which itself is probably an input to another state machine....)
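A rough sketch of that idea, continuing the order example (all names, and the finished Order's constructor, are illustrative):

// Tracking entity: collects information from the outside world over several requests.
public class OrderDraft
{
    private readonly List<OrderItem> _items = new List<OrderItem>();
    private ShippingAddress _shippingAddress;
    private BillingInformation _billingInformation;

    public void AddItem(OrderItem item) { _items.Add(item); }
    public void SetShippingAddress(ShippingAddress address) { _shippingAddress = address; }
    public void SetBillingInformation(BillingInformation billing) { _billingInformation = billing; }

    public bool IsComplete
    {
        get { return _items.Count > 0 && _shippingAddress != null && _billingInformation != null; }
    }

    // Once all information has been collected, emit the finished entity.
    public Order Submit()
    {
        if (!IsComplete)
            throw new InvalidOperationException("The order draft is not complete yet.");
        return new Order(_items, _shippingAddress, _billingInformation);
    }
}

The draft is always in a valid state for a draft, so it can be persisted between REST calls without ever persisting an invalid Order.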

Passing user and auditing information in calls to Reliable Services in Service Fabric transport

How can I pass along auditing information between clients and services in an easy way without having to add that information as arguments for all service methods? Can I use message headers to set this data for a call?
Is there a way to allow a service to pass that along downstream as well, i.e., if ServiceA calls ServiceB which calls ServiceC, could the same auditing information be sent first to A, then in A's call to B, and then in B's call to C?
There is actually a concept of headers that are passed between client and service if you are using fabric transport for remoting. If you are using Http transport then you have headers there just as you would with any http request.
Note: the proposal below is not the simplest solution, but once it is in place it solves the issue and is easy to use. If you are looking for simplicity across the overall code base, this might not be the way to go. In that case I suggest you simply add a common audit-info parameter to all your service methods. The big caveat there is of course that some developer forgets to add it, or it is not set properly when calling downstream services. It's all about trade-offs, as always in code :).
Down the rabbit hole
In fabric transport there are two classes involved in the communication: an instance of IServiceRemotingClient on the client side, and an instance of IServiceRemotingListener on the service side. In each request from the client, the message body and ServiceRemotingMessageHeaders are sent. Out of the box these headers include information about which interface (i.e. which service) and which method are being called (and that's also how the underlying receiver knows how to unpack that byte array that is the body). For calls to Actors, which go through the ActorService, additional Actor information is also included in those headers.
The tricky part is hooking into that exchange and actually setting, and then reading, additional headers. Please bear with me here; there are a number of classes involved behind the curtains that we need to understand.
The service side
When you set up the IServiceRemotingListener for your service (example for a Stateless service) you usually use a convenience extension method, like so:
protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
yield return new ServiceInstanceListener(context =>
this.CreateServiceRemotingListener(this.Context));
}
(Another way to do it would be to implement your own listener, but that's not really what we want to do here; we just want to add things on top of the existing infrastructure. See below for that approach.)
This is where we can provide our own listener instead, similar to what that extension method does behind the curtains. Let's first look at what that extension method does. It goes looking for a specific attribute at the assembly level of your service project: ServiceRemotingProviderAttribute. That one is abstract, but the one that you can use, and which you will get a default instance of if none is provided, is FabricTransportServiceRemotingProviderAttribute. Set it in AssemblyInfo.cs (or any other file, it's an assembly attribute):
[assembly: FabricTransportServiceRemotingProvider()]
This attribute has two interesting overridable methods:
public override IServiceRemotingListener CreateServiceRemotingListener(
ServiceContext serviceContext, IService serviceImplementation)
public override IServiceRemotingClientFactory CreateServiceRemotingClientFactory(
IServiceRemotingCallbackClient callbackClient)
These two methods are responsible for creating the listener and the client factory. That means the attribute is also inspected by the client side of the transaction. That is why it is an assembly-level attribute on the service assembly: the client side can pick it up together with the IService-derived interface for the service we want to communicate with.
CreateServiceRemotingListener ends up creating an instance of FabricTransportServiceRemotingListener; however, in this implementation we cannot set our own specific IServiceRemotingMessageHandler. If you create your own subclass of FabricTransportServiceRemotingProviderAttribute and override that method, then you can actually make it create an instance of FabricTransportServiceRemotingListener that takes a dispatcher in the constructor:
public class AuditableFabricTransportServiceRemotingProviderAttribute :
FabricTransportServiceRemotingProviderAttribute
{
public override IServiceRemotingListener CreateServiceRemotingListener(
ServiceContext serviceContext, IService serviceImplementation)
{
var messageHandler = new AuditableServiceRemotingDispatcher(
serviceContext, serviceImplementation);
return (IServiceRemotingListener)new FabricTransportServiceRemotingListener(
serviceContext: serviceContext,
messageHandler: messageHandler);
}
}
The AuditableServiceRemotingDispatcher is where the magic happens. It is our own ServiceRemotingDispatcher subclass. Override RequestResponseAsync (ignore HandleOneWay; it is not supported by service remoting and throws a NotImplementedException if called), like this:
public class AuditableServiceRemotingDispatcher : ServiceRemotingDispatcher
{
public AuditableServiceRemotingDispatcher(ServiceContext serviceContext, IService service) :
base(serviceContext, service) { }
public override async Task<byte[]> RequestResponseAsync(
IServiceRemotingRequestContext requestContext,
ServiceRemotingMessageHeaders messageHeaders,
byte[] requestBodyBytes)
{
byte[] userHeader = null;
if (messageHeaders.TryGetHeaderValue("user-header", out userHeader))
{
// Deserialize from byte[] and handle the header
}
else
{
// Throw exception?
}
byte[] result = null;
result = await base.RequestResponseAsync(requestContext, messageHeaders, requestBodyBytes);
return result;
}
}
Another, easier, but less flexible way, would be to directly create an instance of FabricTransportServiceRemotingListener with an instance of our custom dispatcher directly in the service:
protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
yield return new ServiceInstanceListener(context =>
new FabricTransportServiceRemotingListener(this.Context, new AuditableServiceRemotingDispatcher(context, this)));
}
Why is this less flexible? Well, because using the attribute supports the client side as well, as we see below
The client side
Ok, so now we can read custom headers when receiving messages, how about setting those? Let's look at the other method of that attribute:
public override IServiceRemotingClientFactory CreateServiceRemotingClientFactory(IServiceRemotingCallbackClient callbackClient)
{
return (IServiceRemotingClientFactory)new FabricTransportServiceRemotingClientFactory(
callbackClient: callbackClient,
servicePartitionResolver: (IServicePartitionResolver)null,
traceId: (string)null);
}
Here we cannot just inject a specific handler or similar as for the service, we have to supply our own custom factory. In order not to have to reimplement the particulars of FabricTransportServiceRemotingClientFactory I simply encapsulate it in my own implementation of IServiceRemotingClientFactory:
public class AuditedFabricTransportServiceRemotingClientFactory : IServiceRemotingClientFactory, ICommunicationClientFactory<IServiceRemotingClient>
{
private readonly ICommunicationClientFactory<IServiceRemotingClient> _innerClientFactory;
public AuditedFabricTransportServiceRemotingClientFactory(ICommunicationClientFactory<IServiceRemotingClient> innerClientFactory)
{
_innerClientFactory = innerClientFactory;
_innerClientFactory.ClientConnected += OnClientConnected;
_innerClientFactory.ClientDisconnected += OnClientDisconnected;
}
private void OnClientConnected(object sender, CommunicationClientEventArgs<IServiceRemotingClient> e)
{
EventHandler<CommunicationClientEventArgs<IServiceRemotingClient>> clientConnected = this.ClientConnected;
if (clientConnected == null) return;
clientConnected((object)this, new CommunicationClientEventArgs<IServiceRemotingClient>()
{
Client = e.Client
});
}
private void OnClientDisconnected(object sender, CommunicationClientEventArgs<IServiceRemotingClient> e)
{
EventHandler<CommunicationClientEventArgs<IServiceRemotingClient>> clientDisconnected = this.ClientDisconnected;
if (clientDisconnected == null) return;
clientDisconnected((object)this, new CommunicationClientEventArgs<IServiceRemotingClient>()
{
Client = e.Client
});
}
public async Task<IServiceRemotingClient> GetClientAsync(
Uri serviceUri,
ServicePartitionKey partitionKey,
TargetReplicaSelector targetReplicaSelector,
string listenerName,
OperationRetrySettings retrySettings,
CancellationToken cancellationToken)
{
var client = await _innerClientFactory.GetClientAsync(
serviceUri,
partitionKey,
targetReplicaSelector,
listenerName,
retrySettings,
cancellationToken);
return new AuditedFabricTransportServiceRemotingClient(client);
}
public async Task<IServiceRemotingClient> GetClientAsync(
ResolvedServicePartition previousRsp,
TargetReplicaSelector targetReplicaSelector,
string listenerName,
OperationRetrySettings retrySettings,
CancellationToken cancellationToken)
{
var client = await _innerClientFactory.GetClientAsync(
previousRsp,
targetReplicaSelector,
listenerName,
retrySettings,
cancellationToken);
return new AuditedFabricTransportServiceRemotingClient(client);
}
public Task<OperationRetryControl> ReportOperationExceptionAsync(
IServiceRemotingClient client,
ExceptionInformation exceptionInformation,
OperationRetrySettings retrySettings,
CancellationToken cancellationToken)
{
return _innerClientFactory.ReportOperationExceptionAsync(
client,
exceptionInformation,
retrySettings,
cancellationToken);
}
public event EventHandler<CommunicationClientEventArgs<IServiceRemotingClient>> ClientConnected;
public event EventHandler<CommunicationClientEventArgs<IServiceRemotingClient>> ClientDisconnected;
}
This implementation simply passes any heavy lifting along to the underlying factory, while returning its own auditable client that similarly encapsulates an IServiceRemotingClient:
public class AuditedFabricTransportServiceRemotingClient : IServiceRemotingClient, ICommunicationClient
{
private readonly IServiceRemotingClient _innerClient;
public AuditedFabricTransportServiceRemotingClient(IServiceRemotingClient innerClient)
{
_innerClient = innerClient;
}
~AuditedFabricTransportServiceRemotingClient()
{
if (this._innerClient == null) return;
var disposable = this._innerClient as IDisposable;
disposable?.Dispose();
}
Task<byte[]> IServiceRemotingClient.RequestResponseAsync(ServiceRemotingMessageHeaders messageHeaders, byte[] requestBody)
{
messageHeaders.SetUser(ServiceRequestContext.Current.User);
messageHeaders.SetCorrelationId(ServiceRequestContext.Current.CorrelationId);
return this._innerClient.RequestResponseAsync(messageHeaders, requestBody);
}
void IServiceRemotingClient.SendOneWay(ServiceRemotingMessageHeaders messageHeaders, byte[] requestBody)
{
messageHeaders.SetUser(ServiceRequestContext.Current.User);
messageHeaders.SetCorrelationId(ServiceRequestContext.Current.CorrelationId);
this._innerClient.SendOneWay(messageHeaders, requestBody);
}
public ResolvedServicePartition ResolvedServicePartition
{
get { return this._innerClient.ResolvedServicePartition; }
set { this._innerClient.ResolvedServicePartition = value; }
}
public string ListenerName
{
get { return this._innerClient.ListenerName; }
set { this._innerClient.ListenerName = value; }
}
public ResolvedServiceEndpoint Endpoint
{
get { return this._innerClient.Endpoint; }
set { this._innerClient.Endpoint = value; }
}
}
Now, this is where we actually (and finally) set the audit name that we want to pass along to the service.
Call chains and service request context
One final piece of the puzzle is the ServiceRequestContext, a custom class that allows us to handle an ambient context for a service request call. This is relevant because it gives us an easy way to propagate that context information, like the user or a correlation id (or any other header information we want to pass between client and service), in a chain of calls. The implementation of ServiceRequestContext looks like this:
public sealed class ServiceRequestContext
{
private static readonly string ContextKey = Guid.NewGuid().ToString();
public ServiceRequestContext(Guid correlationId, string user)
{
this.CorrelationId = correlationId;
this.User = user;
}
public Guid CorrelationId { get; private set; }
public string User { get; private set; }
public static ServiceRequestContext Current
{
get { return (ServiceRequestContext)CallContext.LogicalGetData(ContextKey); }
internal set
{
if (value == null)
{
CallContext.FreeNamedDataSlot(ContextKey);
}
else
{
CallContext.LogicalSetData(ContextKey, value);
}
}
}
public static Task RunInRequestContext(Func<Task> action, Guid correlationId, string user)
{
Task<Task> task = null;
task = new Task<Task>(async () =>
{
Debug.Assert(ServiceRequestContext.Current == null);
ServiceRequestContext.Current = new ServiceRequestContext(correlationId, user);
try
{
await action();
}
finally
{
ServiceRequestContext.Current = null;
}
});
task.Start();
return task.Unwrap();
}
public static Task<TResult> RunInRequestContext<TResult>(Func<Task<TResult>> action, Guid correlationId, string user)
{
Task<Task<TResult>> task = null;
task = new Task<Task<TResult>>(async () =>
{
Debug.Assert(ServiceRequestContext.Current == null);
ServiceRequestContext.Current = new ServiceRequestContext(correlationId, user);
try
{
return await action();
}
finally
{
ServiceRequestContext.Current = null;
}
});
task.Start();
return task.Unwrap<TResult>();
}
}
This last part was much influenced by the SO answer by Stephen Cleary. It gives us an easy way to handle the ambient information down a hierarchy of calls, whether they are synchronous or asynchronous over Tasks. Now, with this we have a way of setting that information also in the dispatcher on the service side:
public override Task<byte[]> RequestResponseAsync(
IServiceRemotingRequestContext requestContext,
ServiceRemotingMessageHeaders messageHeaders,
byte[] requestBody)
{
var user = messageHeaders.GetUser();
var correlationId = messageHeaders.GetCorrelationId();
return ServiceRequestContext.RunInRequestContext(async () =>
await base.RequestResponseAsync(
requestContext,
messageHeaders,
requestBody),
correlationId, user);
}
(GetUser() and GetCorrelationId() are just helper methods that get and unpack the headers set by the client.)
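The helpers themselves are not shown in the original; a minimal sketch of what they could look like as extension methods on ServiceRemotingMessageHeaders, with the header names and serialization chosen arbitrarily (uses System.Text.Encoding):

public static class AuditHeaderExtensions
{
    private const string UserHeaderName = "audit-user";
    private const string CorrelationIdHeaderName = "audit-correlation-id";

    public static void SetUser(this ServiceRemotingMessageHeaders headers, string user)
    {
        headers.AddHeader(UserHeaderName, Encoding.UTF8.GetBytes(user ?? string.Empty));
    }

    public static string GetUser(this ServiceRemotingMessageHeaders headers)
    {
        byte[] value;
        return headers.TryGetHeaderValue(UserHeaderName, out value)
            ? Encoding.UTF8.GetString(value)
            : null;
    }

    public static void SetCorrelationId(this ServiceRemotingMessageHeaders headers, Guid correlationId)
    {
        headers.AddHeader(CorrelationIdHeaderName, correlationId.ToByteArray());
    }

    public static Guid GetCorrelationId(this ServiceRemotingMessageHeaders headers)
    {
        byte[] value;
        return headers.TryGetHeaderValue(CorrelationIdHeaderName, out value)
            ? new Guid(value)
            : Guid.Empty;
    }
}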
Having this in place means that any new client created by the service for any additional call will also have the same headers set, so in the scenario ServiceA -> ServiceB -> ServiceC we will still have the same user set in the call from ServiceB to ServiceC.
What? That easy? Yes ;)
From inside a service, for instance a Stateless OWIN Web API where you first capture the user information, you create a proxy with ServiceProxyFactory and wrap that call in a ServiceRequestContext:
var task = ServiceRequestContext.RunInRequestContext(async () =>
{
var serviceA = ServiceProxyFactory.CreateServiceProxy<IServiceA>(new Uri($"{FabricRuntime.GetActivationContext().ApplicationName}/ServiceA"));
await serviceA.DoStuffAsync(CancellationToken.None);
}, Guid.NewGuid(), user);
OK, so to sum it up: you can hook into service remoting to set your own headers. As we saw above, there is some work that needs to be done to get a mechanism for that in place, mainly creating your own subclasses of the underlying infrastructure. The upside is that once you have this in place, you have a very easy way to audit your service calls.

Using Catel with Repository Pattern, EF6 and View Models

I cannot find any documentation on connecting a view model to a repository using Catel.
I have set up the Repository Pattern and my Models with EF6 Code First (all extending from ModelBase) but need to know how to use it with a ViewModel.
Do I need to create a service for the UnitOfWork? And if so, how? How will I use this in a ViewModel?
I am currently using the repository as a model in my view model, but I do not think this is the correct way to do it. See my CompaniesViewModel below:
IUnitOfWork uow;
public CompaniesViewModel()
{
uow = new UnitOfWork<SoftwareSolutionsContext>();
CompanyRepository = uow.GetRepository<ICompanyRepository>();
}
public override string Title { get { return "Companies"; } }
protected override async Task Close()
{
uow.Dispose();
await base.Close();
}
protected override async Task Initialize()
{
Companies = new ObservableCollection<Company>(CompanyRepository.GetAll());
await base.Initialize();
}
public ObservableCollection<Company> Companies
{
get { return GetValue<ObservableCollection<Company>>(CompaniesProperty); }
set { SetValue(CompaniesProperty, value); }
}
public static readonly PropertyData CompaniesProperty = RegisterProperty("Companies", typeof(ObservableCollection<Company>), null);
[Model]
public ICompanyRepository CompanyRepository
{
get { return GetValue<ICompanyRepository>(CompanyRepositoryProperty); }
private set { SetValue(CompanyRepositoryProperty, value); }
}
public static readonly PropertyData CompanyRepositoryProperty = RegisterProperty("CompanyRepository", typeof(ICompanyRepository));
Essentially, I have 2 scenarios for working on the data:
getting all the data to display on a datagrid
selecting a record on the datagrid to open another view for editing a single record
Any guidance would be appreciated.
This is a very difficult subject, because there are basically two options here:
Create abstractions in services (so the VMs only work with services; the services are your API into the db). The services work with the UoW.
Some people think the first option is overcomplicated. In that case, you can simply use the UoW inside your VMs.
Both have their pros and cons, just pick what you believe in most.
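For the first option, a rough sketch of what such a service could look like, reusing the types from the question (the service interface and method names are made up for illustration):

public interface ICompanyService
{
    IEnumerable<Company> GetAllCompanies();
}

public class CompanyService : ICompanyService
{
    public IEnumerable<Company> GetAllCompanies()
    {
        // The service owns the unit of work; the view model never sees it.
        using (var uow = new UnitOfWork<SoftwareSolutionsContext>())
        {
            var companyRepository = uow.GetRepository<ICompanyRepository>();
            return companyRepository.GetAll().ToList();
        }
    }
}

The CompaniesViewModel would then take an ICompanyService (for example via constructor injection) and call GetAllCompanies() in Initialize(), instead of creating and disposing the UnitOfWork itself.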

What is the recommended way to run asp.net identity functions in transaction?

Using asp.net identity RTW version.
I need to perform several actions in a transaction, including both UserManager function calls and other operations on my DbContext (example: create a new user, add it to a group, and perform some business-logic operations).
How should I do this?
My thoughts follow.
TransactionScope
using (var scope = new TransactionScope(TransactionScopeOption.Required))
{
// Do what I need
if (everythingIsOk) scope.Complete();
}
The problem is: UserManager functions are all async, and TransactionScope was not designed to work with async/await. This seems to be solved in .NET Framework 4.5.1, but I use Azure Web Sites to host my project builds, so I cannot target 4.5.1 yet.
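For reference, once you are able to target 4.5.1, the fix is the TransactionScopeAsyncFlowOption introduced there, which lets the ambient transaction flow across await points; a sketch:

// Requires .NET 4.5.1+: the ambient transaction flows across awaits.
using (var scope = new TransactionScope(TransactionScopeOption.Required,
                                        TransactionScopeAsyncFlowOption.Enabled))
{
    // await UserManager calls and other DbContext operations here
    if (everythingIsOk) scope.Complete();
}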
Database transaction
public class SomeController : Controller
{
private MyDbContext DbContext { get; set; }
private UserManager<IdentityUser> UserManager { get; set; }
public SomeController()
{
DbContext = new MyDbContext();
var userStore = new UserStore<IdentityUser>(DbContext);
UserManager = new UserManager<IdentityUser>(userStore);
}
public async Task<ActionResult> SomeAction()
{
// UserManager uses the same db context, so they can share db transaction
using (var tran = DbContext.Database.BeginTransaction())
{
try
{
// Do what I need
if (everythingIsOk)
tran.Commit();
else
{
tran.Rollback();
}
}
catch (Exception)
{
tran.Rollback();
}
}
}
}
That seems to work, but how can I unit-test it?
UserManager<> constructor accepts IUserStore<>, so I can easily stub it.
UserStore<> constructor accepts DbContext, no idea how I can stub this.
You can implement your own test user store that can be stubbed out for your unit test.
If you want to use the actual EF UserStore in your tests, that also will work, but it will be creating a database using the DefaultConnection string by default. You could specify a DatabaseInitializer to always drop/recreate your tables in your tests if you wanted to ensure a clean db for every test.
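For the first approach, a minimal sketch of an in-memory test store implementing just the core IUserStore<TUser> members (UserManager only needs the additional optional store interfaces, such as IUserPasswordStore, if your test exercises those features):

public class InMemoryUserStore : IUserStore<IdentityUser>
{
    private readonly Dictionary<string, IdentityUser> _users = new Dictionary<string, IdentityUser>();

    public Task CreateAsync(IdentityUser user)
    {
        _users[user.Id] = user;
        return Task.FromResult(0);
    }

    public Task UpdateAsync(IdentityUser user)
    {
        _users[user.Id] = user;
        return Task.FromResult(0);
    }

    public Task DeleteAsync(IdentityUser user)
    {
        _users.Remove(user.Id);
        return Task.FromResult(0);
    }

    public Task<IdentityUser> FindByIdAsync(string userId)
    {
        IdentityUser user;
        _users.TryGetValue(userId, out user);
        return Task.FromResult(user);
    }

    public Task<IdentityUser> FindByNameAsync(string userName)
    {
        var user = _users.Values.FirstOrDefault(
            u => string.Equals(u.UserName, userName, StringComparison.OrdinalIgnoreCase));
        return Task.FromResult(user);
    }

    public void Dispose() { }
}

A test can then create the manager as var userManager = new UserManager<IdentityUser>(new InMemoryUserStore()); without touching a database.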

Using factory pattern for modeling similar subscriptions

I have the following question that's been nagging at me for quite some time.
I'd like to model the following domain entity "Contact":
public class Contact:IEntity<Contact>
{
private readonly ContactId _Id;
public ContactId Id
{
get { return this._Id; }
}
private CoreAddress _CoreAddress;
public CoreAddress CoreAddress
{
get { return this._CoreAddress; }
set
{
if (value == null)
throw new ArgumentNullException("CoreAddress");
this._CoreAddress = value;
}
}
private ExtendedAddress _ExtendedAddress;
public ExtendedAddress ExtendedAddress
{
get { return this._ExtendedAddress; }
set
{
if (value == null)
throw new ArgumentNullException("ExtendedAddress");
this._ExtendedAddress = value;
}
}
private readonly IList<ContactExchangeSubscription> _Subscriptions
= new List<ContactExchangeSubscription>();
public IEnumerable<ContactExchangeSubscription> Subscriptions
{
get { return this._Subscriptions; }
}
public Contact(ContactId Id, CoreAddress CoreAddress, ExtendedAddress ExtendedAddress)
{
Validations.Validate.NotNull(Id);
this._Id = Id;
this._CoreAddress = CoreAddress;
this._ExtendedAddress = ExtendedAddress;
}
}
As you can see it has a collection of subscriptions. A subscription is modeled like this:
public class ContactExchangeSubscription
{
private ContactId _AssignedContact;
public ContactId AssignedContact
{
get { return this._AssignedContact; }
set
{
if (value == null)
throw new ArgumentNullException("AssignedContact");
this._AssignedContact = value;
}
}
private User _User;
public User User
{
get { return this._User; }
set
{
Validations.Validate.NotNull(value, "User");
this._User = value;
}
}
private ExchangeEntryId _EntryId;
public ExchangeEntryId EntryId
{
get { return this._EntryId; }
set
{
if (value == null)
throw new ArgumentNullException("EntryId");
this._EntryId = value;
}
}
public ContactExchangeSubscription(ContactId AssignedContact, User User, ExchangeEntryId EntryId)
{
this._AssignedContact = AssignedContact;
this._User = User;
this._EntryId = EntryId;
}
}
Now I've been thinking that I shouldn't model a storage technology (Exchange) in my domain; after all, we might want to switch our application to other subscription providers. The property "EntryId" is specific to Exchange. A subscription would always need a User and a ContactId, though.
Is there a better way to model the Subscription? Should I use a factory or abstract factory for the Subscription type to cover other types of subscriptions, should the need arise?
EDIT: So let's toss an abstract factory into the ring and introduce some interfaces:
public interface IContactSubscriptionFactory
{
IContactSubscription Create();
}
public interface IContactSubscription
{
ContactId AssignedContact { get;}
User User { get; }
}
How would a concrete factory for a ContactExchangeSubscription be coded? Remember that this type will need the EntryId field, so it has to get an additional constructor parameter. How do you handle different constructor parameters on different subtypes in factories in general?
I think the answer is staring you in the face, in that you need to work against an interface, making it easier to introduce new subscription providers (if that's the right term) in the future. I think this is more of an OO design question than DDD.
public interface ISubscriptionProvider
{
ContactId AssignedContact { get; }
User User { get; }
}
And the code in your Contact becomes
private readonly IList<ISubscriptionProvider> _subscriptions
= new List<ISubscriptionProvider>();
public IEnumerable<ISubscriptionProvider> Subscriptions
{
get { return _subscriptions; }
}
With regard to using a factory: the purpose of a factory is to construct your domain objects when a creation strategy is required. For example, a SubscriptionProviderFactory could be used within your repository when you rehydrate your aggregate, and would make the decision to return the ContactExchangeSubscription (as an ISubscriptionProvider) or something else based on the data passed into it.
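A rough sketch of what that factory could look like, reusing the question's types (the non-Exchange subscription type, the ExchangeEntryId constructor, and the way the persisted data is passed in are assumptions for illustration):

public class SubscriptionProviderFactory
{
    // Used by the repository when rehydrating the aggregate, or when a new
    // subscription is registered from application code.
    public ISubscriptionProvider Create(ContactId contactId, User user, string exchangeEntryId)
    {
        // Decide which concrete subscription to return based on the data passed in.
        if (!string.IsNullOrEmpty(exchangeEntryId))
        {
            return new ContactExchangeSubscription(contactId, user, new ExchangeEntryId(exchangeEntryId));
        }

        // Hypothetical alternative provider for a non-Exchange subscription source.
        return new ContactInternalSubscription(contactId, user);
    }
}

This keeps the Exchange-specific knowledge (the EntryId) at the edge, while the Contact aggregate only ever sees ISubscriptionProvider.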
One final point, though perhaps this is just because of the way you have shown your example: I would say you're not really following DDD. The lack of behaviour, with all your properties having public getters and setters, suggests you're falling into the trap of building an Anemic Domain Model.
After some research I came up with this. Code first, explanation below:
public interface IContactFactory<TSubscription> where TSubscription : IContactSubscription
{
Contact Create(ContactId Id, CoreAddress CoreAddress, ExtendedAddress ExtendedAddress, TSubscription Subscription);
}
public class ContactFromExchangeFactory : IContactFactory<ContactExchangeSubscription>
{
public Contact Create(ContactId Id, CoreAddress CoreAddress, ExtendedAddress ExtendedAddress, ContactExchangeSubscription ExchangeSubscription)
{
Contact c = new Contact(Id, CoreAddress, ExtendedAddress);
c.AddSubscription(ExchangeSubscription);
return c;
}
}
I realized that I don't need a factory for the ContactSubscription but rather for the Contact itself.
I learned some things about factories along the way:
They are only to be used when creating (really) new entities, not when rebuilding them from a SQL DB for example
They live in the domain layer (see above!)
Factories are more suitable for similar objects that differ in behaviour rather than data
I welcome comments and better answers.
