App-level settings in DDD?

Just wanted to get the group's thoughts on how to handle configuration details of entities.
What I'm thinking of specifically is high-level settings that an admin might change: the sort of thing you might ultimately store in the app or web.config, but which from the DDD perspective should be set somewhere in the objects explicitly.
For sake of argument, let's take as an example a web-based CMS or blog app.
A given blog Entry entity has any number of instance settings like Author, Content, etc.
But you also might want to set (for example) default Description or Keywords that all entries in the site should start with if they're not changed by the author. Sure, you could just make those constants in the class, but then the site owner couldn't change the defaults.
So my thoughts are as follows:
1) use class-level (static) properties to represent those settings, and then set them when the app starts up, either setting them from the DB or from the web.config.
or
2) use a separate entity for holding the settings, possibly a dictionary, either use it directly or have it be a member of the Entry class
What strikes you all as the easiest / most flexible? My concern about the first one is that it doesn't strike me as very pluggable (if I end up wanting to add more features): changing an entity's class members would force me to change the app itself as well, which feels like an OCP violation. The second one feels heavier, though, especially if I then have to cast or parse values out of a dictionary.
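For concreteness, here is a minimal sketch of what the two options might look like (all names here are illustrative):
using System.Collections.Generic;

// Option 1: class-level (static) defaults, set once at application start-up.
public class Entry
{
    // set from the DB or web.config when the app starts
    public static string DefaultDescription { get; set; }
    public static string DefaultKeywords { get; set; }

    public string Description { get; set; }

    public Entry()
    {
        Description = DefaultDescription; // instances start from the site-wide default
    }
}

// Option 2: a separate settings holder, used directly or as a member of Entry.
public class SiteSettings
{
    private readonly Dictionary<string, string> values = new Dictionary<string, string>();

    public string Get(string key)
    {
        string value;
        return values.TryGetValue(key, out value) ? value : null;
    }

    public void Set(string key, string value)
    {
        values[key] = value;
    }
}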

I would say that whether a value is configurable or not is irrelevant from the Domain Model's perspective - what matters is that it is externally defined.
Let's say that you have a class that must have a Name. If the Name is always required, it must be encapsulated as an invariant irrespective of the source of the value. Here's a C# example:
public class MyClass
{
    private string name;

    public MyClass(string name)
    {
        if (name == null)
        {
            throw new ArgumentNullException("name");
        }
        this.name = name;
    }

    public string Name
    {
        get { return this.name; }
        set
        {
            if (value == null)
            {
                throw new ArgumentNullException("value");
            }
            this.name = value;
        }
    }
}
A class like this effectively protects the invariant: Name must not be null. Domain Models must encapsulate invariants like this without any regard to which consumer will be using them - otherwise, they would not meet the goal of Supple Design.
But you asked about default values. If you have a good default value for Name, how do you communicate that default value to MyClass?
This is where Factories come in handy. You simply separate the construction of your objects from their implementation. This is often a good idea in any case. Whether you choose an Abstract Factory or Builder implementation is less important, but Abstract Factory is a good default choice.
In the case of MyClass, we could define the IMyClassFactory interface:
public interface IMyClassFactory
{
    MyClass Create();
}
Now you can define an implementation that pulls the name from a config file:
// requires System.Configuration
public class ConfigurationBasedMyClassFactory : IMyClassFactory
{
    public MyClass Create()
    {
        var name = ConfigurationManager.AppSettings["MyName"];
        return new MyClass(name);
    }
}
Make sure that code that needs instances of MyClass uses IMyClassFactory to create them instead of newing them up manually.
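As a sketch, a consumer might take the factory as a constructor dependency (MyClassConsumer is a hypothetical name, not from the answer):
public class MyClassConsumer
{
    private readonly IMyClassFactory factory;

    public MyClassConsumer(IMyClassFactory factory)
    {
        this.factory = factory;
    }

    public void DoWork()
    {
        // The consumer never news up MyClass directly, so the default Name
        // can come from config, the DB, or anywhere else.
        MyClass instance = this.factory.Create();
        // ... use instance.Name ...
    }
}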

Related

Implementing user-defined business rules with DDD

Let's say I have an application which lets users create business rules to be applied to a domain entity. A rule can be a combination of a condition and multiple actions, where if the condition evaluates to true then the corresponding actions are executed. These rules are created by users in a free-form text format, which is then converted to a proprietary format that a rule engine can understand and execute.
E.g. for an employee management system, there might be a business rule to check whether an employee has been working in the current Role for more than a year and has performed better than expected; if so, the employee can be promoted to the next role with a 10% salary increment. This business rule can be entered by users as below.
Condition: Employee.CurrentRoleLength > 1 && Employee.ExceededExpectations()
Action: Employee.PromoteToNextRole() | Employee.GiveSalaryIncrement(10)
Note that multiple Actions are delimited with a |. Also, in order to execute this rule, the application uses a separate rule engine class library to parse the condition and both actions into a proprietary format, say ExecutableScript, also defined in the rule engine class library.
Now in order to model this requirement using DDD; I have come up with following Domain objects.
Rule (Entity)
Condition (Value Object)
Action (Value Object)
where Rule is an Entity which contains a Condition Value Object and a list of Action Value Objects as below.
public class Rule : Entity
{
    public Condition Condition { get; private set; }
    public IList<Action> Actions { get; private set; }

    public Rule(Condition condition, IList<Action> actions)
    {
        Condition = condition;
        Actions = actions;
    }
}

public sealed class Condition : ValueObject<Condition>
{
    public string ConditionText { get; private set; }
    public ExecutableScript ExecutableCondition { get; private set; }

    public Condition(string conditionText)
    {
        ConditionText = conditionText;
    }

    public void Parse()
    {
        ExecutableCondition = // How to parse using external rule engine ??
    }

    public void Execute()
    {
        // How to execute using external rule engine ??
    }
}

public sealed class Action : ValueObject<Action>
{
    public string ActionText { get; private set; }
    public ExecutableScript ExecutableAction { get; private set; }

    public Action(string actionText)
    {
        ActionText = actionText;
    }

    public void Parse()
    {
        ExecutableAction = // How to parse using external rule engine ??
    }

    public void Execute()
    {
        // How to execute using external rule engine ??
    }
}
Based on above domain model, I have following questions.
How can I parse and execute the Condition and Actions without having a dependency on the external rule engine? I understand the Domain layer should not have any dependency on outer layers and should be confined to its own.
Even if I parse the Condition and Actions outside their domain objects, their parsed ExecutableScript values still need to be present within them, which still requires a dependency on the external rule engine.
Or is it just that DDD is not the right approach for this scenario and I am going in the wrong direction?
Sorry for the long post. Any help would be highly appreciated.
Thanks.
Technical domains may benefit from DDD tactical patterns, but the cost of creating the right abstractions is usually higher than with other domains because it often requires abstracting away complex data structures.
A good way to start thinking about the required abstractions is to ask yourself what abstractions would be needed if you were to swap the underlying technologies.
Here you have a complex text-based expression from which an ExecutableScript is created by the rules engine.
If you think about it, there are three major elements here:
The text-based expression syntax which is proprietary.
The ExecutableScript which is proprietary; I will assume this is an Abstract Syntax Tree (AST) with an embedded interpreter.
The rule evaluation context which is probably proprietary.
If you were to swap the underlying technology to execute the rules then the expression syntax of the other rule engine may be different and it would certainly have an entirely different rule interpretation mechanism.
At this point we have identified what has to be abstracted, but not what the proper abstractions would be.
You could decide to implement your own expression syntax, your own parser, your own AST which would be a tree-based representation of the expression in memory and finally your own rule evaluation context. This set of abstractions would then be consumed by specific rule engines. For instance, your current rule engine would have to convert a domain.Expression AST to an ExecutableScript.
Something like this (I left out the evaluation context intentionally, as you did not provide any information on it):
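Here is a rough C# sketch of that set of abstractions (Expression, IExpressionParser, IRuleExecutor and the adapter are my assumed names, not from the original answer):
using System;

// Domain-owned abstractions: the expression AST and the services around it.
public abstract class Expression
{
    // tree-based representation of a parsed rule expression
}

public interface IExpressionParser
{
    Expression Parse(string expressionText);
}

public interface IRuleExecutor
{
    bool EvaluateCondition(Expression condition);  // evaluation context omitted
    void ExecuteAction(Expression action);
}

// Infrastructure adapter: converts the domain Expression AST into the current
// engine's proprietary ExecutableScript and delegates execution to it.
public class CurrentRuleEngineExecutor : IRuleExecutor
{
    public bool EvaluateCondition(Expression condition)
    {
        // ExecutableScript script = ConvertToExecutableScript(condition);
        // return engine.Evaluate(script);
        throw new NotImplementedException();
    }

    public void ExecuteAction(Expression action)
    {
        throw new NotImplementedException();
    }
}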
However, creating your own set of abstractions could be costly, especially if you do not anticipate swapping your rule engine. If the syntax of your current rule engine suits your needs, then you may use it as your abstraction for text-based expressions. You can do this because it doesn't require a proprietary data structure to represent text in memory; it's just a String. If you were to swap your rule engine in the future, you could still use the old engine to parse the expression and rely on the generated AST to generate the equivalent for the other rule engine, or you could go back to writing your own abstractions.
At this point, you may decide to simply hold that expression String in your domain and pass it to an Executor when it has to be evaluated. If you are concerned by the performance cost of re-generating the ExecutableScript each time then you should first make sure that is indeed an issue; premature optimization is not desirable.
If you find out that it is too much overhead then you could implement memoization in the infrastructure executor. The ExecutableScript could either be stored in memory or persisted to disk. You could potentially use a hash of the string-based expression to identify it (beware collisions), the entire string, an id assigned by the domain or any other strategy.
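As a sketch of that memoization idea (the RuleEngine facade and its Parse/Execute methods are stand-ins for the proprietary engine, not a real API; the cache key here is the raw expression string, but a hash or a domain-assigned id would work the same way):
using System.Collections.Concurrent;

// Stand-ins for the proprietary rule engine types from the question:
public class ExecutableScript { }
public class RuleEngine
{
    public ExecutableScript Parse(string expression) { return new ExecutableScript(); }
    public object Execute(ExecutableScript script) { return null; }
}

public class MemoizingExecutor
{
    private readonly ConcurrentDictionary<string, ExecutableScript> cache =
        new ConcurrentDictionary<string, ExecutableScript>();
    private readonly RuleEngine engine;

    public MemoizingExecutor(RuleEngine engine)
    {
        this.engine = engine;
    }

    public object Execute(string expression)
    {
        // Parse each distinct expression at most once; reuse the script afterwards.
        ExecutableScript script = cache.GetOrAdd(expression, e => engine.Parse(e));
        return engine.Execute(script);
    }
}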
Last but not least, keep in mind that if rule actions aren't processed by aggregates, or if the rule predicate spans multiple aggregates, then the data used to evaluate the expression may be stale. I'm not expanding on this because I have no idea how you plan to generate the rule evaluation context and process actions, but I thought it was still worth mentioning because invariant enforcement is an important aspect of every domain.
If you determine that all rules may be eventually consistent or that decisions made on stale data are acceptable then I'd also consider creating an entirely separate bounded context for that, perhaps called "Rule Management & Execution".
EDIT:
Here's an example that shows what creating a rule may look like from the application service perspective, given that expressions are stored as Strings in the domain.
//Domain
public interface RuleValidator {
    boolean isValid(Rule rule);
}

public class RuleFactory {
    private RuleValidator validator;
    //...
    public Rule create(RuleId id, Condition condition, List<Action> actions) {
        Rule rule = new Rule(id, condition, actions);
        if (!validator.isValid(rule)) {
            throw new InvalidRuleException();
        }
        return rule;
    }
}

//App
public class RuleApplicationService {
    private RuleFactory ruleFactory;
    private RuleRepository ruleRepository;
    //...
    public void createRule(String id, String conditionExpression, List<String> actionExpressions) {
        transaction {
            List<Action> actions = createActionsFromExpressions(actionExpressions);
            Rule rule = ruleFactory.create(new RuleId(id), new Condition(conditionExpression), actions);
            ruleRepository.add(rule); //this may also create and persist an `ExecutableScript` object transparently in the infrastructure, associated with the rule id
        }
    }
}
How can I parse and execute the Condition and Actions without having a dependency on the external rule engine? I understand the Domain layer should not have any dependency on outer layers and should be confined to its own.
This part is easy: dependency inversion. The domain defines a service provider interface that describes how it wants to talk to some external service. Typically, the domain will pass a copy of some of its internal state to the service, and get back an answer that it can then apply to itself.
So you might see something like this in your model
Supervisor.reviewSubordinates(EvaluationService es) {
    for (Employee e : this.subordinates) {
        // Note: state is an immutable value type; you can't
        // change the employee entity by mutating the state.
        Employee.State currentState = e.currentState;
        Actions<Employee.State> actions = es.evaluate(currentState);
        for (Action<Employee.State> a : actions) {
            currentState = a.apply(currentState);
        }
        // Replacing the state of the entity does change the
        // entity, but notice that the model didn't delegate that.
        e.currentState = currentState;
    }
}

Dapper does not warn or fail with missing data

Let's say I have a class (simplified for this example) and I want to ensure that the PersonId and Name fields are ALWAYS populated.
public class Person
{
    public int PersonId { get; set; }
    public string Name { get; set; }
    public string Address { get; set; }
}
Currently, my query would be:
Person p = conn.Query<Person>("SELECT * FROM People").First();
However, I may have changed my database schema from PersonId to PID and now the code is going to go through just fine.
What I'd like to do is one of the following:
Decorate the property PersonId with an attribute such as Required (that dapper can validate)
Tell dapper to figure out that the mappings are not getting filled out completely (i.e. throw an exception when not all the properties in the class are filled out by data from the query).
Is this possible currently? If not, can someone point me to how I could do this without affecting performance too badly?
IMHO, the second option would be the best because it won't break existing code for users and it doesn't require more attribute decoration on classes we may not have access to.
At the moment, no, this is not possible. And indeed, there are a lot of cases where it is actively useful to populate a partial model, so I wouldn't want to add anything implicit. In many cases, the domain model is an extended view on the data model, so I don't think option 2 can work - and I know it would break in a gazillion places in my code ;p If we restrict ourselves to the more explicit options...
So far, we have deliberately avoided things like attributes; the idea has been to keep it as lean and direct as possible. I'm not pathologically opposed to attributes - just: it can be problematic having to probe them. But maybe it is time... we could perhaps also allow simple column mapping at the same time, i.e.
[Map(Name = "Person Id", Required = true)]
int PersonId { get; set; }
where both Name and Required are optional. Thoughts? This is problematic in a few ways, though - in particular, at the moment we only probe for columns we can see, especially in the extensibility API.
The other possibility is an interface that we check for, allowing you to manually verify the data after loading; for example:
public class Person : IMapCallback {
    void IMapCallback.BeforePopulate() { }

    void IMapCallback.AfterPopulate() {
        if (PersonId == 0)
            throw new InvalidOperationException("PersonId not populated");
    }
}
The interface option makes me happier in many ways:
it avoids a lot of extra reflection probing (just one check to do)
it is more flexible - you can choose what is important to you
it doesn't impact the extensibility API
but: it is more manual.
I'm open to input, but I want to make sure we get it right rather than rush in all guns blazing.
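In the meantime, one manual workaround is possible with Dapper's existing untyped Query API: fetch the rows as dictionaries and compare the returned column names against the type's properties yourself. A sketch (StrictQuery and AssertColumnsCover are made-up names for this illustration):
using System;
using System.Collections.Generic;
using System.Linq;

public static class StrictQuery
{
    // Throws if the row is missing a column for any public property of T.
    public static void AssertColumnsCover<T>(IDictionary<string, object> row)
    {
        var missing = typeof(T).GetProperties()
            .Select(p => p.Name)
            .Where(name => !row.ContainsKey(name))
            .ToList();
        if (missing.Any())
            throw new InvalidOperationException(
                "Query did not return columns: " + string.Join(", ", missing));
    }
}

// Usage: Dapper's non-generic Query returns rows implementing IDictionary<string, object>.
// var rows = conn.Query("SELECT * FROM People");
// StrictQuery.AssertColumnsCover<Person>((IDictionary<string, object>)rows.First());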

Domain Driven Design - Access modifier for domain entities

I am just starting out with domain-driven design and have a project for my domain which is structured like this:
Domain
    /Entities
    /Boundaries
    /UserStories
As I understand DDD, apart from the boundaries with which the outside world communicates with the domain, everything in the domain should be invisible. All of the examples I have seen of entity classes within a domain have a public access modifier; for example, here I have an entity named Message:
public class Message
{
    private string _text;

    public string Text
    {
        get { return _text; }
        set { _text = value; }
    }

    public Message()
    {
    }

    public bool IsValid()
    {
        // Do some validation on text, e.g.:
        return !string.IsNullOrEmpty(_text);
    }
}
Would it not be more correct if the entity class and its members were marked as internal so it is only accessible within the domain project?
For example:
internal class Message
{
    private string _text;

    internal string Text
    {
        get { return _text; }
        set { _text = value; }
    }

    internal Message()
    {
    }

    internal bool IsValid()
    {
        // Do some validation on text, e.g.:
        return !string.IsNullOrEmpty(_text);
    }
}
I think there's a confusion here: the Bounded Context is a concept which defines the context in which a model is valid; there aren't classes actually named Boundary. Maybe those are objects for anti-corruption purposes, but really the Aggregate Root should deal with that, or some entry point into the Bounded Context.
I wouldn't structure a Domain like this; it's artificial. You should structure the Domain according to what makes sense in the real-world process. You're using DDD to model a real-world process in code, and I haven't heard anyone outside software development talking about Entities or Value Objects. They talk about Orders, Products, Prices, etc.
Btw, that Message is almost certainly a value object, unless the Domain really needs to uniquely identify each Message. Here the Message is a Domain concept; I hope you don't mean a command or an event. And you should put the validation in the constructor or in the method where the new value is given.
In fairness, this code is way too simplistic; perhaps you've picked the wrong example. As for the classes being internal or public, they might be one or the other; it isn't a rule, and it depends on many things. At one extreme you have the approach where almost every object is internal but implements a public interface common to the application; this can be highly inefficient.
A rule of thumb: if the class is used outside the Domain assembly, make it public; if it's something internally used by the Domain and/or implements a public interface, make it internal.
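As a sketch of that rule of thumb (the types are illustrative): the Domain assembly can expose a public interface while keeping the entity itself internal, with validation in the constructor as suggested above:
using System;

public interface IMessage
{
    string Text { get; }
}

internal class Message : IMessage
{
    public string Text { get; private set; }

    internal Message(string text)
    {
        // validate where the value is given, as suggested above
        if (string.IsNullOrEmpty(text))
            throw new ArgumentException("text must not be empty", "text");
        Text = text;
    }
}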

ServiceStack: RESTful Resource Versioning

I've read the Advantages of message-based web services article and am wondering whether there is a recommended style/practice for versioning RESTful resources in ServiceStack? The different versions could render different responses or have different input parameters in the Request DTO.
I'm leaning toward URL-based versioning (i.e. /v1/movies/{Id}), but I have seen other practices that set the version in the HTTP headers (i.e. Content-Type: application/vnd.company.myapp-v2).
I'm hoping for a way that works with the metadata page, but it's not so much a requirement, as I've noticed that simply using folder structure/namespacing works fine when rendering routes.
For example (this doesn't render right in the metadata page but performs properly if you know the direct route/url)
/v1/movies/{id}
/v1.1/movies/{id}
Code
namespace Samples.Movies.Operations.v1_1
{
    [Route("/v1.1/Movies", "GET")]
    public class Movies
    {
        ...
    }
}

namespace Samples.Movies.Operations.v1
{
    [Route("/v1/Movies", "GET")]
    public class Movies
    {
        ...
    }
}
and corresponding services...
public class MovieService : ServiceBase<Samples.Movies.Operations.v1.Movies>
{
    protected override object Run(Samples.Movies.Operations.v1.Movies request)
    {
        ...
    }
}

// in a separate namespace, so the two service class names don't collide
public class MovieService : ServiceBase<Samples.Movies.Operations.v1_1.Movies>
{
    protected override object Run(Samples.Movies.Operations.v1_1.Movies request)
    {
        ...
    }
}
Try to evolve (not re-implement) existing services
For versioning, you are going to be in for a world of hurt if you try to maintain different static types for different version endpoints. We initially started down this route, but as soon as you start supporting more than one version, the development effort to maintain multiple versions of the same service explodes: you need to maintain manual mapping between the different types, which easily leaks out into having to maintain multiple parallel implementations, each coupled to a different version's type - a massive violation of DRY. This is less of an issue for dynamic languages, where the same models can easily be re-used by different versions.
Take advantage of built-in versioning in serializers
My recommendation is not to explicitly version but take advantage of the versioning capabilities inside the serialization formats.
E.g. you generally don't need to worry about versioning with JSON clients, as the versioning capabilities of the JSON and JSV Serializers are much more resilient.
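For illustration, here's a small sketch using ServiceStack.Text (MyDto is a stand-in): extra fields in a JSON payload are ignored and missing fields simply keep their defaults, so old and new clients can interoperate:
using ServiceStack.Text;

public class MyDto
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public static class JsonResilienceDemo
{
    public static void Run()
    {
        // The unknown field is ignored; the missing Age stays at its default (0).
        var dto = "{\"Name\":\"Bar\",\"UnknownField\":42}".FromJson<MyDto>();
        // dto.Name == "Bar", dto.Age == 0
    }
}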
Enhance your existing services defensively
With XML and DataContracts you can freely add and remove fields without making a breaking change. If you add IExtensibleDataObject to your response DTOs you also have the potential to access data that's not defined on the DTO. My approach to versioning is to program defensively so as not to introduce a breaking change; you can verify this is the case with integration tests using old DTOs. Here are some tips I follow:
Never change the type of an existing property - if you need it to be a different type, add another property and use the old/existing one to determine the version (see the sketch after this list).
Program defensively: realize which properties don't exist for older clients, so don't make them mandatory.
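Here is a sketch of the first tip (OrderDto and its members are hypothetical): instead of changing an existing property's type, a new property is added and the old one is used to infer the client version:
public class OrderDto
{
    public int Code { get; set; }           // existing property: its type never changes
    public string CodeString { get; set; }  // new property for newer clients

    public bool IsFromOlderClient()
    {
        // old clients never populate the new property
        return this.CodeString == null;
    }
}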
Keep a single global namespace (only relevant for XML/SOAP endpoints)
I do this by using the [assembly] attribute in the AssemblyInfo.cs of each of your DTO projects:
[assembly: ContractNamespace("http://schemas.servicestack.net/types",
ClrNamespace = "MyServiceModel.DtoTypes")]
The assembly attribute saves you from manually specifying explicit namespaces on each DTO, i.e.:
namespace MyServiceModel.DtoTypes {
    [DataContract(Namespace = "http://schemas.servicestack.net/types")]
    public class Foo { .. }
}
If you want to use a different XML namespace than the default above you need to register it with:
SetConfig(new EndpointHostConfig {
    WsdlServiceNamespace = "http://schemas.my.org/types"
});
Embedding Versioning in DTOs
Most of the time, if you program defensively and evolve your services gracefully, you won't need to know exactly what version a specific client is using, as you can infer it from the data that is populated. But in the rare cases where your service needs to tweak its behavior based on the specific version of the client, you can embed version information in your DTOs.
With the first release of your DTOs you publish, you can happily create them without any thought of versioning.
class Foo {
    string Name;
}
But maybe for some reason the Form/UI was changed and you no longer wanted the Client to use the ambiguous Name variable and you also wanted to track the specific version the client was using:
class Foo {
    Foo() {
        Version = 1;
    }

    int Version;
    string Name;
    string DisplayName;
    int Age;
}
Later it was discussed in a team meeting that DisplayName wasn't good enough and you should split it out into different fields:
class Foo {
    Foo() {
        Version = 2;
    }

    int Version;
    string Name;
    string DisplayName;
    string FirstName;
    string LastName;
    DateTime? DateOfBirth;
}
So the current state is that you have 3 different client versions out, with existing calls that look like:
v1 Release:
client.Post(new Foo { Name = "Foo Bar" });
v2 Release:
client.Post(new Foo { Name="Bar", DisplayName="Foo Bar", Age=18 });
v3 Release:
client.Post(new Foo { FirstName = "Foo", LastName = "Bar",
DateOfBirth = new DateTime(1994, 01, 01) });
You can continue to handle these different versions in the same implementation (which will be using the latest v3 version of the DTOs) e.g:
class FooService : Service {
    public object Post(Foo request) {
        // v1 clients will arrive looking like:
        //   request.Version == 0, request.Name == "Foo Bar",
        //   request.DisplayName == null, request.Age == 0, request.DateOfBirth == null
        // v2 clients:
        //   request.Version == 2, request.Name == null, request.DisplayName == "Foo Bar",
        //   request.Age == 18, request.DateOfBirth == null
        // v3 clients:
        //   request.Version == 3, request.Name == null, request.DisplayName == null,
        //   request.FirstName == "Foo", request.LastName == "Bar", request.Age == 0,
        //   request.DateOfBirth == new DateTime(1994, 01, 01)
        ...
    }
}
Framing the Problem
The API is the part of your system that exposes its expression. It defines the concepts and the semantics of communicating in your domain. The problem comes when you want to change what can be expressed or how it can be expressed.
There can be differences in both the method of expression and what is being expressed. The first problem tends to be differences in tokens (first and last name instead of name). The second problem is expressing different things (the ability to rename oneself).
A long-term versioning solution will need to solve both of these challenges.
Evolving an API
Evolving a service by changing the resource types is a type of implicit versioning. It uses the construction of the object to determine behavior. It works best when there are only minor changes to the method of expression (like the names). It does not work well for more complex changes to the method of expression, or for changes to what can be expressed. Code tends to be scattered throughout.
Specific Versioning
When changes become more complex, it is important to keep the logic for each version separate. Even in mythz's example, he segregated the code for each version. However, the code is still mixed together in the same methods. It is very easy for code for the different versions to start collapsing into each other, and it is likely to spread out. Getting rid of support for a previous version can be difficult.
Additionally, you will need to keep your old code in sync to any changes in its dependencies. If a database changes, the code supporting the old model will also need to change.
A Better Way
The best way I've found is to tackle the expression problem directly. Each time a new version of the API is released, it will be implemented on top of the new layer. This is generally easy because changes are small.
It really shines in two ways: first, all the code to handle the mapping is in one spot, so it is easy to understand or remove later; second, it doesn't require maintenance as new APIs are developed (the Russian doll model).
The problem is when the new API is less expressive than the old API. This is a problem that will need to be solved no matter what the solution is for keeping the old version around. It just becomes clear that there is a problem and what the solution for that problem is.
mythz's example rewritten in this style is:
namespace APIv3 {
    class FooService : RestServiceBase<Foo> {
        public object OnPost(Foo request) {
            var data = repository.getData();
            request.FirstName = data.firstName;
            request.LastName = data.lastName;
            request.DateOfBirth = data.dateOfBirth;
            return request;
        }
    }
}

namespace APIv2 {
    class FooService : RestServiceBase<Foo> {
        public object OnPost(Foo request) {
            var v3Request = APIv3.FooService.OnPost(request);
            request.DisplayName = v3Request.FirstName + " " + v3Request.LastName;
            request.Age = (new DateTime() - v3Request.DateOfBirth).years; // pseudocode: derive age
            return request;
        }
    }
}

namespace APIv1 {
    class FooService : RestServiceBase<Foo> {
        public object OnPost(Foo request) {
            var v2Request = APIv2.FooService.OnPost(request);
            request.Name = v2Request.DisplayName;
            return request;
        }
    }
}
Each exposed object is clear. The same mapping code still needs to be written in both styles, but in the separated style, only the mapping relevant to a type needs to be written. There is no need to explicitly map code that doesn't apply (which is just another potential source of error). The dependencies of previous APIs stay static when you add future APIs or change what the API layer depends on. For example, if the data source changes, then only the most recent API (version 3) needs to change in this style. In the combined style, you would need to code the changes for each of the APIs supported.
One concern in the comments was the addition of types to the code base. This is not a problem because these types are exposed externally. Providing the types explicitly in the code base makes them easy to discover and isolate in testing. It is much better for maintainability to be clear. Another benefit is that this method does not produce additional logic, but only adds additional types.
I am also trying to come up with a solution for this and was thinking of doing something like the below. (Based on a lot of Googling and StackOverflow querying, so this is built on the shoulders of many others.)
First up, I don't want to debate whether the version should be in the URI or the Request Header. There are pros/cons to both approaches, so I think each of us needs to use whatever meets our requirements best.
This is about how to design/architect the Java Message Objects and the Resource Implementation classes.
So let's get to it.
I would approach this in two steps: Minor Changes (e.g. 1.0 to 1.1) and Major Changes (e.g. 1.1 to 2.0).
Approach for minor changes
So let's say we go by the same example classes used by @mythz.
Initially we have
class Foo { string Name; }
We provide access to this resource as /V1.0/fooresource/{id}
In my use case, I use JAX-RS,
@Path("/{versionid}/fooresource")
public class FooResource {

    @GET
    @Path("/{id}")
    public Foo getFoo(@PathParam("versionid") String versionid, @PathParam("id") String fooId)
    {
        Foo foo = new Foo();
        // setters, load data from persistence, handle business logic etc.
        return foo;
    }
}
Now let’s say we add 2 additional properties to Foo.
class Foo {
    string Name;
    string DisplayName;
    int Age;
}
What I do at this point is annotate the properties with a @Version annotation:
class Foo {
    @Version("V1.0") string Name;
    @Version("V1.1") string DisplayName;
    @Version("V1.1") int Age;
}
Then I have a response filter that will, based on the requested version, return to the user only the properties that match that version. Note that, for convenience, if there are properties that should be returned for all versions, you just don't annotate them, and the filter will return them irrespective of the requested version.
This is sort of like a mediation layer. What I have explained is a simplistic version and it can get very complicated, but hopefully you get the idea.
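To make the filtering idea concrete, here is a language-agnostic sketch written in C# (the VersionAttribute and all names are hypothetical, not an actual JAX-RS filter): properties introduced after the requested version are blanked out so the serializer omits them:
using System;
using System.Linq;

[AttributeUsage(AttributeTargets.Property)]
public class VersionAttribute : Attribute
{
    public string Value { get; private set; }
    public VersionAttribute(string value) { Value = value; }
}

public static class VersionFilter
{
    public static void Apply(object dto, string requestedVersion)
    {
        foreach (var prop in dto.GetType().GetProperties())
        {
            var attr = prop.GetCustomAttributes(typeof(VersionAttribute), false)
                           .Cast<VersionAttribute>()
                           .FirstOrDefault();
            // Unannotated properties are returned for every version.
            // Value-type properties would need to be declared nullable for this to work.
            if (attr != null && !prop.PropertyType.IsValueType &&
                string.Compare(attr.Value, requestedVersion, StringComparison.Ordinal) > 0)
            {
                prop.SetValue(dto, null, null);
            }
        }
    }
}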
Approach for Major Version
Now this can get quite complicated when there are a lot of changes from one version to another. That is when we need to move to the second option.
Option 2 is essentially to branch off the codebase, make the changes on that code base, and host both versions in different contexts. At this point we might have to refactor the code base a bit to remove the version mediation complexity introduced in the first approach (i.e. make the code cleaner). This would mainly be in the filters.
Note that this is just what I am thinking; I haven't implemented it yet, and I wonder if it is a good idea.
Also, I was wondering if there are good mediation engines/ESBs that could do this type of transformation without having to use filters, but I haven't seen any that are as simple as using a filter. Maybe I haven't searched enough.
Interested in knowing the thoughts of others and whether this solution will address the original question.

Dynamic Properties for object instances?

After the previous question "What are the important rules in Object Model Design", now I want to ask this:
Is there any way to have dynamic properties for class instances?
Suppose that we have this schematic object model:
So, each object could have lots of properties due to the set of implemented interfaces, and then become a relatively heavy object. Creating all the possible - and of course reasonable - objects can be a way of solving this problem (i.e. Pipe_Designed vs. Pipe_Designed_NeedInspection), but I have a large number of interfaces by now, which makes that difficult.
I wonder if there is a way to have dynamic properties, something like the following dialog, to allow the end user to select the available functionalities for his/her new object.
What you want is the Properties pattern. Check out the long and boring but clever article from Steve Yegge on this.
I think maybe you're putting too many roles into the "Road" and "Pipe" classes, because your need for dynamic properties seems to derive from the various states/phases of the artifacts in your model. I would consider making an explicit model using associations to different classes, instead of putting everything in the "Road" or "Pipe" class using interfaces.
If you mean the number of public properties, use explicit interface implementation.
If you mean fields (and object space for sparse objects): you can always use a property bag for the property implementation.
For a C# example:
string IDesigned.ApprovedBy {
    get { return GetValue<string>("ApprovedBy"); }
    set { SetValue("ApprovedBy", value); }
}
with a dictionary for the values:
readonly Dictionary<string, object> propValues =
    new Dictionary<string, object>();

protected T GetValue<T>(string name)
{
    object val;
    if (!propValues.TryGetValue(name, out val)) return default(T);
    return (T)val;
}

protected void SetValue<T>(string name, T value)
{
    propValues[name] = value;
}
Note that SetValue would also be a good place for any notifications - for example, INotifyPropertyChanged in .NET to implement the observer pattern. Many other architectures have something similar. You can do the same with object keys (like how EventHandlerList works), but string keys are simpler to understand ;-p
This only then takes as much space as the properties that are actively being used.
A final option is to encapsulate the various facets:
class Foo {
    public bool IsDesigned { get { return Design != null; } }
    public IDesigned Design { get; set; }
    // etc
}
Here Foo doesn't implement any of the interfaces, but provides access to them as properties.
