Should I use log4net directly in my domain model objects? - log4net

I'm wondering if it's bad practice to use log4net directly in my domain objects. I'll be using ELMAH for exceptions on the ASP.NET MVC application side, but for informational purposes I'd like to log some data about the domain model itself.
Given the following domain object:
public class Buyer
{
    private int _ID;
    public int ID
    {
        get { return _ID; }
        set { _ID = value; }
    }

    private IList<SupportTicket> _SupportTickets = new List<SupportTicket>();
    public IList<SupportTicket> SupportTickets
    {
        get
        {
            return _SupportTickets.ToList<SupportTicket>().AsReadOnly();
        }
    }

    public void AddSupportTicket(SupportTicket ticket)
    {
        if (!SupportTickets.Contains(ticket))
        {
            _SupportTickets.Add(ticket);
        }
    }
}
Is adding logging behavior in the AddSupportTicket method a bad idea? Essentially it'd look like this:
public class Buyer
{
    protected static readonly ILog log = LogManager.GetLogger(typeof(Buyer));

    public Buyer()
    {
        log4net.Config.XmlConfigurator.Configure();
    }

    private int _ID;
    public int ID
    {
        get { return _ID; }
        set { _ID = value; }
    }

    private IList<SupportTicket> _SupportTickets = new List<SupportTicket>();
    public IList<SupportTicket> SupportTickets
    {
        get
        {
            return _SupportTickets.ToList<SupportTicket>().AsReadOnly();
        }
    }

    public void AddSupportTicket(SupportTicket ticket)
    {
        if (!SupportTickets.Contains(ticket))
        {
            _SupportTickets.Add(ticket);
        }
        else
        {
            log.Warn("Duplicate Ticket Not Added.");
        }
    }
}

I have used log4net and log4j directly in domain objects. This has good consequences and bad ones.
+: Logging in the domain object is simple and straightforward to code, and you know you can take advantage of log4net features.
--: It means the program making use of the domain objects needs to pay attention to log4net configuration, which may or may not be a problem.
--: You cannot link your domain object against a different log4net version than the calling program uses. I've seen a lot of conflicts with one assembly linked against log4net 1.2.10 and another linked against an earlier release.
Not logging in your domain objects at all is a bad idea. The alternative is, as others have suggested, dependency injection, or an external facade (such as commons-logging for log4j) that allows plugging in different logging frameworks, or creating an interface that does the logging and logging against that interface. (The code using your domain object would then need to supply an appropriate instance of that interface for logging purposes.)

If you are going to log from your domain objects and you use an IoC container which you might want to swap out, I would recommend using the Service Locator pattern (you could look at the S#arp Architecture project for a nice implementation of a SafeServiceLocator that wraps Microsoft's ServiceLocator with more informative error messages).
I would also suggest that you consider whether you want to log the kind of error you show in your example. I would tend to have the domain object throw an exception in that case and let the caller decide whether that was something the application expected (and hence shouldn't be logged) or whether it represents a situation the caller wants to deal with in some way.

This is a classic question!
The good way of doing this is to introduce a class member of an ILogger type and abstract the logging behind that interface. Wherever your class needs to log something, do it through this interface. Then inject this dependency at run time with one of the implementations, using one of the available IoC containers or dependency injection frameworks. By default you can use a log4net-backed implementation of this interface.
Here is a long list of available dependency injection frameworks:
http://www.hanselman.com/blog/ListOfNETDependencyInjectionContainersIOC.aspx
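A minimal sketch of that approach (the ILogger interface and Log4NetLogger names below are illustrative, not from any particular framework):

using System;
using log4net;

// Domain-owned logging abstraction; the domain depends only on this interface.
public interface ILogger
{
    void Info(string message);
    void Warn(string message);
}

// log4net-backed implementation, kept outside the domain assembly.
public class Log4NetLogger : ILogger
{
    private readonly ILog _log;

    public Log4NetLogger(Type type)
    {
        _log = LogManager.GetLogger(type);
    }

    public void Info(string message) { _log.Info(message); }
    public void Warn(string message) { _log.Warn(message); }
}

// The domain object receives the abstraction and never references log4net.
public class Buyer
{
    private readonly ILogger _logger;

    public Buyer(ILogger logger)
    {
        _logger = logger;
    }
}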

I think logging is a cross-cutting concern, so it's best done in an aspect-oriented fashion. If you're using a framework like Spring.NET, it's available to you.
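For illustration, Spring.NET advice is written against the AopAlliance interceptor interface that ships with it; a logging around-advice might look roughly like this sketch (wiring it to your objects is then a matter of container configuration):

using AopAlliance.Intercept;
using log4net;

// Around advice: logs entry and exit of every intercepted method call.
public class LoggingAroundAdvice : IMethodInterceptor
{
    private static readonly ILog log = LogManager.GetLogger(typeof(LoggingAroundAdvice));

    public object Invoke(IMethodInvocation invocation)
    {
        log.InfoFormat("Entering {0}", invocation.Method.Name);
        object result = invocation.Proceed();
        log.InfoFormat("Leaving {0}", invocation.Method.Name);
        return result;
    }
}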

Related

Decorating Repositories with AutoFac

Hi, I have what may be a common problem that I think can't be entirely solved by Autofac or any IoC container. It may be a design problem that I need some fresh input on.
I have the classic MVC web solution with EF 6. It's been implemented in a true DDD style with an anti-corruption layer, three bounded contexts, and cross-cutting concerns moved out to infrastructure projects. It has been a real pleasure to see all the pieces fall into place in a good way. We also added commands for CUD operations in the domain.
Now here is the problem. The customer wants a change log that tracks every entity's properties, and when updates are done we need to save the values before and after the update into the change log. We have implemented that successfully in an ILoggerService that wraps a Microsoft test utility we use to detect changes. But I (my role is software architect) took the decision to decorate our generic repositories with a ChangeTrackerRepository that has a dependency on ILoggerService. This works fine. The decorator tracks the Add(…) and Modify(…) methods in our IRepository<TEntity>.
The problem is that we have custom repositories with custom queries, like this:
public class CounterPartRepository : Repository<CounterPart>, ICounterPartRepository
{
    public CounterPartRepository(ManagementDbContext unitOfWork)
        : base(unitOfWork)
    { }

    public CounterPart GetAggregate(Guid id)
    {
        return GetSet().CompleteAggregate().SingleOrDefault(s => s.Id == id);
    }

    public void DeleteCounterPartAddress(CounterPartAddress address)
    {
        RemoveChild(address);
    }

    public void DeleteCounterPartContact(CounterPartContact contact)
    {
        RemoveChild(contact);
    }
}
We also have simple repositories that just close the generic repository and get the proper EF bounded context injected into them (Unit of Work pattern):
public class AccrualPeriodTypeRepository : Repository<AccrualPeriodType>, IAccrualPeriodTypeRepository
{
    public AccrualPeriodTypeRepository(ManagementDbContext unitOfWork)
        : base(unitOfWork)
    { }
}
The problem is that when we decorate AccrualPeriodTypeRepository with Autofac through the generic decorator, we can easily inject that repo into a command handler actor like this:
public AddAccrualPeriodCommandHandler(IRepository<AccrualPeriod> accrualRepository)
This works fine.
But how do we also decorate CounterPartRepository?
I have gone through several solutions in my head and they all end up at a dead end.
1) Manually decorating every custom repository generates so many custom decorators that it would be nearly unmaintainable.
2) Decorate the closed generic Repository and move the extended custom queries out of it. This smells bad; shouldn't they be part of that repository?
3) If we consider 2, maybe skip our services and rely only on IRepository for operating on our aggregate roots, plus IQueryHandler (see the article https://cuttingedge.it/blogs/steven/pivot/entry.php?id=92).
I need some fresh input on what I think is a common problem: how to decorate your repositories when you have both custom closed repositories and simple closed repositories, with both inheriting from the same generic Repository.
Have you considered decorating command handlers instead of decorating repositories?
Repositories are too low-level; it is not their responsibility to know what should be logged and how.
What about the following:
1) You have your command handlers along these lines:
public class DeleteCounterPartAddressHandler : IHandle<DeleteCounterPartAddressCommand>
{
    // This might be set by a DI container, or passed to a constructor.
    public ICounterPartRepository Repository { get; set; }

    public void Handle(DeleteCounterPartAddressCommand command)
    {
        // In DDD you always want to read an aggregate
        // and save the aggregate as a whole.
        var counterpart = Repository.GetById(command.CounterPartId);
        counterpart.DeleteAddress(command.AddressId);
        Repository.Save(counterpart);
    }
}
2) Now you can simply use the Chain of Responsibility pattern to "decorate" your handlers with logging, transactions, whatever:
public class LoggingHandler<T> : IHandle<T>
{
    private static readonly ILog _log = LogManager.GetLogger(typeof(LoggingHandler<T>));
    private readonly IHandle<T> _innerHandler;

    public LoggingHandler(IHandle<T> innerHandler)
    {
        _innerHandler = innerHandler;
    }

    public void Handle(T command)
    {
        // Obviously you would do this properly, but you get the idea.
        _log.Info("Before");
        _innerHandler.Handle(command);
        _log.Info("After");
    }
}
Now you have just one piece of code responsible for logging, and you can compose it with any command handler. If you ever want to log a particular command, you just "wrap" its handler with the logging handler; it is still your IHandle<T>, so the rest of the system is not impacted.
And you can do the same with other concerns (threading, queueing, transactions, multiplexing, routing, etc.) without messing around and plumbing this stuff here and there.
Concerns are very well separated this way.
It is also much better (to me) because you log at the level of a real (business) operation, rather than at the low level of a repository.
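For example, composing the decorators by hand (a container such as Autofac can do this wiring for you through its decorator registrations):

// Hypothetical manual composition; TransactionHandler is an imagined second
// decorator following the same pattern as LoggingHandler.
IHandle<DeleteCounterPartAddressCommand> handler =
    new LoggingHandler<DeleteCounterPartAddressCommand>(
        new TransactionHandler<DeleteCounterPartAddressCommand>(
            new DeleteCounterPartAddressHandler()));

// Callers still see a plain IHandle<T>; the decorators are invisible to them.
var command = new DeleteCounterPartAddressCommand(); // assumes a parameterless constructor
handler.Handle(command);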
Hope it helps.
P.S. In DDD you really want your repositories to expose only aggregate-level methods, because aggregates are supposed to take care of their invariants (and nothing else: no services, no repositories), and because an aggregate represents a transaction boundary.
Really, it is up to the repository how to get the aggregate from persistent storage and how to persist it back; from the outside it should look like you ask someone for an object and are given an object you can call behaviors on.
So normally you would only get an aggregate from the repository, call its behavior(s), and then save it back. Which really means that your repositories would mostly have GetById and Save methods, not internals like "UpdateThatPartOfAnAggregate".
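Applied to the question's CounterPartRepository, that shape would shrink to something like this sketch:

// Aggregate-level surface only; deleting addresses and contacts becomes
// behavior on the CounterPart aggregate rather than repository methods.
public interface ICounterPartRepository
{
    CounterPart GetById(Guid id);
    void Save(CounterPart counterPart);
}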

Domain Driven Design - Access modifier for domain entities

I am just starting out with domain-driven design and have a project for my domain which is structured like this:
Domain
/Entities
/Boundaries
/UserStories
As I understand DDD, apart from the boundaries through which the outside world communicates with the domain, everything in the domain should be invisible. All of the examples I have seen of entity classes within a domain have a public access modifier; for example, here I have an entity named Message:
public class Message
{
    private string _text;
    public string Text
    {
        get { return _text; }
        set { _text = value; }
    }

    public Message()
    {
    }

    public bool IsValid()
    {
        // Do some validation on text
    }
}
Would it not be more correct if the entity class and its members were marked as internal, so they are only accessible within the domain project?
For example:
internal class Message
{
    private string _text;
    internal string Text
    {
        get { return _text; }
        set { _text = value; }
    }

    internal Message()
    {
    }

    internal bool IsValid()
    {
        // Do some validation on text
    }
}
I think there's a confusion here: the Bounded Context is a concept which defines the context in which a model is valid; there aren't classes actually named Boundary. Maybe those are objects for anti-corruption purposes, but really the Aggregate Root should deal with that, or some entry point into the Bounded Context.
I wouldn't structure a Domain like this; it's artificial. You should structure the Domain according to what makes sense in the real-world process. You're using DDD to model a real-world process in code, and I haven't heard anyone outside software development talk about Entities or Value Objects. They talk about Orders, Products, Prices, etc.
Btw, that Message is almost certainly a value object, unless the Domain really needs to identify each Message uniquely. Here the Message is a Domain concept; I hope you don't mean a command or an event. And you should put the validation in the constructor or in the method where the new value is given.
In fairness this code is way too simplistic; perhaps you've picked the wrong example. As for the classes being internal or public, they might be one or the other; it isn't a rule, it depends on many things. At one extreme you'd have the approach where almost every object is internal but implements a public interface common to the application, which can be highly inefficient.
A rule of thumb: if the class is used outside the Domain assembly, make it public; if it's something used internally by the Domain and/or implements a public interface, make it internal.
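As a sketch of that second case (the IMessage contract and its members here are illustrative):

// Public contract, visible to other assemblies.
public interface IMessage
{
    string Text { get; }
    bool IsValid();
}

// Implementation stays internal to the Domain assembly.
internal class Message : IMessage
{
    public string Text { get; private set; }

    internal Message(string text)
    {
        Text = text;
    }

    public bool IsValid()
    {
        // Illustrative rule: a message must carry some text.
        return !string.IsNullOrWhiteSpace(Text);
    }
}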

How do I define a dependency for use within an attribute using Autofac

I have an ASP.NET MVC application, and I am developing a custom attribute to secure some WCF endpoints, inheriting from CodeAccessSecurityAttribute.
I'm having difficulty finding out how I would use Autofac to inject a service dependency that I can use within this attribute.
[Serializable]
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = true, Inherited = true)]
public class SecuredResourceAttribute : CodeAccessSecurityAttribute
{
    public ISecurityService SecurityService { get; set; }

    public SecuredResourceAttribute(SecurityAction action) : base(action)
    {
    }

    public override IPermission CreatePermission()
    {
        // I need access to the SecurityService here
        // SecurityService == null :(
    }
}
I have tried registering for property autowiring from application start, but this is not working. What's the best way to inject a dependency into an attribute?
builder.RegisterType<SecuredResourceAttribute>().PropertiesAutowired();
Thanks
The way you are approaching this is not going to pan out, for a couple of reasons:
Registering an attribute in Autofac will do nothing, as you're not using Autofac to instantiate the attribute.
Attributes are applied before code execution, and thus rely on constant inputs.
You're going to have to use a service location pattern inside your CreatePermission() method to locate the SecurityService, as I am assuming the CreatePermission() call comes after the container is set up (and the attribute's constructor does not!).
Keep in mind service location will hinder your class's testability, as you will have to configure and set up the service locator for each test.
Please use with caution.
You should start your journey into service location here, but honestly this should make you question your design. Is an attribute best suited for the role you've tasked it with? Perhaps you should look into aspect-oriented programming, for example with PostSharp.
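If you do go down the service location route, a minimal sketch might look like this (the static ContainerProvider holder and the PrincipalPermission return value are assumptions for illustration):

using System;
using System.Security;
using System.Security.Permissions;
using Autofac;

// Hypothetical static holder, assigned once at application start-up.
public static class ContainerProvider
{
    public static IContainer Container { get; set; }
}

[Serializable]
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = true, Inherited = true)]
public class SecuredResourceAttribute : CodeAccessSecurityAttribute
{
    public SecuredResourceAttribute(SecurityAction action) : base(action) { }

    public override IPermission CreatePermission()
    {
        // Resolve at call time; the container exists by now, unlike
        // when the attribute itself was constructed.
        var securityService = ContainerProvider.Container.Resolve<ISecurityService>();

        // Placeholder: derive the real permission from securityService.
        return new PrincipalPermission(null, "Admin");
    }
}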

How do I implement repository pattern and unit of work when dealing with multiple data stores?

I have a unique situation where I am building a DDD-based system that needs to use both Active Directory and a SQL database for persistence. Initially this wasn't a problem, because our design was set up with a unit of work that looked like this:
public interface IUnitOfWork
{
    void BeginTransaction();
    void Commit();
}
and our repositories looked like this:
public interface IRepository<T>
{
    T GetByID();
    void Save(T entity);
    void Delete(T entity);
}
In this setup, our load and save would handle the mapping between both data stores, because we wrote it ourselves. The unit of work would handle transactions and would contain the LINQ to SQL data context that the repositories used for persistence. The Active Directory part was handled by a domain service implemented in the infrastructure layer and consumed by the repositories in each Save() method. Save() was responsible for interacting with the data context to do all the database operations.
Now we are trying to adapt it to Entity Framework and take advantage of POCOs. Ideally we would not need the Save() method on the repository, because the domain objects are tracked by the object context; we would just need a Save() method on the unit of work to have the object context save the changes, and a way to register new objects with the context. The new proposed design looks more like this:
public interface IUnitOfWork
{
    void BeginTransaction();
    void Save();
    void Commit();
}

public interface IRepository<T>
{
    T GetByID();
    void Add(T entity);
    void Delete(T entity);
}
This solves the data access problem with Entity Framework, but does not solve the problem of our Active Directory integration. Before, it lived in the Save() method on the repository, but now it has no home. The unit of work knows nothing other than the Entity Framework data context. Where should this logic go? I'd argue this design only works if you have a single data store using Entity Framework. Any ideas how best to approach this issue? Where should I put this logic?
I wanted to come back and follow up with what I have learned since I posted this. It seems that if you are going to stay true to the repository pattern, the data stores it persists to do not matter. If there are two data stores, write to them both in the same repository. What is important is to keep up the facade that the repository pattern represents: an in-memory collection. I would not create separate repositories, because that doesn't feel like a true abstraction to me; you are letting the technology under the hood dictate the design at that point. To quote from dddstepbystep.com:
"What Sits Behind A Repository? Pretty much anything you like. Yep, you heard it right. You could have a database, or you could have many different databases. You could use relational databases, or object databases. You could have an in-memory database, or a singleton containing a list of in-memory items. You could have a REST layer, or a set of SOA services, or a file system, or an in-memory cache… You can have pretty much anything – your only limitation is that the Repository should be able to act like a Collection to your domain. This flexibility is a key difference between Repository and traditional data access techniques."
http://thinkddd.com/assets/2/Domain_Driven_Design_-_Step_by_Step.pdf
First, I assume you are using an IoC container. I advocate that you make true repositories for each entity type. This means you will wrap each object context EntitySet in a class that implements something like:
public interface IRepository<TEntity>
{
    TEntity Get(int id);
    void Add(TEntity entity);
    void Save(TEntity entity);
    void Remove(TEntity entity);
    bool CanPersist<T>(T entity);
}
CanPersist merely returns whether that repository instance supports persisting the passed entity, and is used polymorphically by UnitOfWork.Save described below.
Each IRepository will also have a constructor that allows the IRepository to be constructed in "transactional" mode. So, for EF, we might have:
public partial class EFEntityARepository : IRepository<EntityA>
{
    private readonly EFContext _context;
    private readonly bool _transactional;

    public EFEntityARepository(EFContext context, bool transactional)
    {
        _context = context;
        _transactional = transactional;
    }

    public void Add(EntityA entity)
    {
        _context.EntityAs.Add(entity);
        if (!_transactional) _context.SaveChanges();
    }
}
UnitOfWork should look like this:
public interface IUnitOfWork
{
    void Add<TEntity>(TEntity entity);
    void Save<TEntity>(TEntity entity);
    void Remove<TEntity>(TEntity entity);
    void Complete();
}
The UnitOfWork implementation will use dependency injection to get instances of all the IRepository types. In UnitOfWork.Save/Add/Remove, the UoW will pass the entity argument into CanPersist on each IRepository. For every true return value, the UnitOfWork will store that entity in a private collection specific to that IRepository and to the intended operation. In Complete, the UnitOfWork will go through all the private entity collections and call the appropriate operation on the appropriate IRepository for each entity.
If you have an entity that needs to be partially persisted by EF and partially persisted by AD, you would have two IRepository classes for that entity type (they would both return true from CanPersist when passed an instance of that entity type).
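A sketch of that shape, simplified to the Add operation (the non-generic IRepositoryBase here is an assumption, so the unit of work can hold repositories for mixed entity types in one collection):

using System.Collections.Generic;

// Assumed non-generic base so heterogeneous repositories share one collection.
public interface IRepositoryBase
{
    bool CanPersist(object entity);
    void Add(object entity);
}

public class UnitOfWork
{
    private readonly IEnumerable<IRepositoryBase> _repositories; // injected by the container
    private readonly List<KeyValuePair<IRepositoryBase, object>> _pendingAdds =
        new List<KeyValuePair<IRepositoryBase, object>>();

    public UnitOfWork(IEnumerable<IRepositoryBase> repositories)
    {
        _repositories = repositories;
    }

    public void Add<TEntity>(TEntity entity)
    {
        // Queue the entity against every repository that claims it;
        // Save and Remove would keep their own queues in the same way.
        foreach (var repository in _repositories)
        {
            if (repository.CanPersist(entity))
            {
                _pendingAdds.Add(new KeyValuePair<IRepositoryBase, object>(repository, entity));
            }
        }
    }

    public void Complete()
    {
        // Flush every queued operation through the repository that claimed it.
        foreach (var pending in _pendingAdds)
        {
            pending.Key.Add(pending.Value);
        }
        _pendingAdds.Clear();
    }
}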
As for maintaining atomicity between EF and AD, that is a separate non-trivial problem.
IMO I would wrap the calls to both of these repos in a service type of class. Then I would use IoC/DI to inject the repo types into the service class. You would have two repos, one for Entity Framework and one that supports AD. This way each repo deals only with its underlying data store and doesn't have to cross over.
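A rough sketch of that service shape (the User entity and the IActiveDirectoryRepository interface are illustrative):

// Coordinates writes across the two stores; each repository only knows its own store.
public class AccountService
{
    private readonly IRepository<User> _sqlRepository;         // EF-backed
    private readonly IActiveDirectoryRepository _adRepository; // AD-backed, hypothetical

    public AccountService(IRepository<User> sqlRepository,
                          IActiveDirectoryRepository adRepository)
    {
        _sqlRepository = sqlRepository;
        _adRepository = adRepository;
    }

    public void Add(User user)
    {
        // Each store is written through its own repository; atomicity across
        // the two remains a separate problem.
        _sqlRepository.Add(user);
        _adRepository.Add(user);
    }
}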
What I have done to support multiple unit of work types is to make IUnitOfWork more of a factory. I created another type called IUnitOfWorkScope, which is the actual unit of work, and it has only a Commit method.
namespace Framework.Persistance.UnitOfWork
{
    public interface IUnitOfWork
    {
        IUnitOfWorkScope Get();
        IUnitOfWorkScope Get(bool shared);
    }

    public interface IUnitOfWorkScope : IDisposable
    {
        void Commit();
    }
}
This allows me to inject different implementations of the unit of work into a service and use them side by side.
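Consumption then looks something like this sketch (the service class is hypothetical):

public class InvoiceService
{
    private readonly IUnitOfWork _unitOfWork;

    public InvoiceService(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }

    public void DoWork()
    {
        using (IUnitOfWorkScope scope = _unitOfWork.Get())
        {
            // ... repository work enlisted in this scope goes here ...
            scope.Commit(); // disposing without Commit() is the rollback path
        }
    }
}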

NInject and thread-safety

I am having problems with the following class in a multi-threaded environment:
public class Foo
{
    [Inject]
    public IBar InjectedBar { get; set; }

    public bool NonInjectedProp { get; set; }

    public void DoSomething()
    {
        /* The following line is causing a null-reference exception */
        InjectedBar.DoSomething();
    }

    public Foo(bool nonInjectedProp)
    {
        /* This line should inject the InjectedBar property */
        KernelContainer.Inject(this);
        NonInjectedProp = nonInjectedProp;
    }
}
This is a legacy class, which is why I am using property rather than constructor injection.
Sometimes when DoSomething() is called, the InjectedBar property is null. In a single-threaded application, everything runs fine.
How can this be occurring, and how can I prevent it?
I am using NInject 2.0 without any extensions, although I have copied the KernelContainer from the NInject.Web project.
I have noticed a similar problem occurring in my web services. This problem is extremely intermittent and difficult to replicate.
First of all, let me say that this is wrong on so many levels; the KernelContainer was an infrastructure class kept specifically to work around certain limitations in the ASP.NET WebForms page lifecycle. It was never meant to be used in application code. Using the Ninject kernel (or any DI container) as a service locator is an anti-pattern.
That being said, Ninject itself is definitely thread-safe because it's used to service parallel requests in ASP.NET all the time. Wherever this NullReferenceException is coming from, it's got little if anything to do with Ninject.
I can think of two possibilities:
You have to initialize KernelContainer.Kernel somewhere, and that code might have a race condition. If something tries to use the KernelContainer before the kernel is fully initialized (possible if you use the IKernel.Bind methods instead of loading modules as per the guidance), you'll get errors like this. Or:
It's your IBar implementation itself that has problems, and the NullReferenceException is happening somewhere inside the DoSomething method. You don't actually specify that InjectedBar is null when you get the exception, so that's a legitimate possibility here.
Just to narrow the field of possibilities, I'd eliminate the KernelContainer first. If you absolutely must use Ninject as a service locator due to a poorly-designed legacy architecture, then at least allow it to create the dependencies instead of relying on Inject(this). That is to say, whichever class or classes need to create your Foo, have that class call kernel.Get<Foo>(), and set up your kernel to Bind<Foo>().ToSelf().
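If Foo is reworked so its constructor no longer calls KernelContainer.Inject(this), the set-up described above looks roughly like this (Bar is a stand-in IBar implementation):

using Ninject;
using Ninject.Parameters;

// Composition root, run once at application start-up.
var kernel = new StandardKernel();
kernel.Bind<IBar>().To<Bar>();   // Bar is a hypothetical IBar implementation
kernel.Bind<Foo>().ToSelf();

// Let Ninject construct Foo so the [Inject] property is populated;
// the plain constructor argument is supplied at resolution time.
var foo = kernel.Get<Foo>(new ConstructorArgument("nonInjectedProp", true));
foo.DoSomething();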
