OSGi application design and DDD

After reading "OSGi application design - am I abusing the service framework?" and "What is the best way of grouping OSGi bundles to make a coherent 'application'?", I'm now stuck with the burning question of how to apply the "don't program OSGi" mantra.
Assuming the bounded context is the entire application and not the individual bundles, I should be free to declare some aggregate root entity in an API bundle, and in the DDD style, a repository for working with said entity. Now, the crux of the matter: an explicit repository appears to be antithetical to the OSGi style because a repository is itself a registry (of domain objects), and this design sidesteps the OSGi service registry. But to do away with the repository would require consumers to program OSGi in order to perform lookups of an entity.
It is said that repositories are just a façade--does this imply that I should create a repository implementation which delegates to the service registry? This does not appear to be a coherent approach since consumers would have two entry points into the domain model: the repository, and the service registry itself. To forgo the repository would no longer be DDD because we're back to mixing domain logic with framework code in a spaghetti-like fashion.
So what's the take-away here? Is domain-driven design incompatible with the "OSGi way," or am I missing some critical concept?

The rationale for not depending on OSGi is that OSGi is middleware: letting it show up in your code makes that code less cohesive. Domain code and OSGi code should not mix (just as it should not mix with JMS, Java EE, or any other API). Less cohesive code is more complicated and therefore more error prone.
However, you always need some bridging code that links the infrastructure to your domain code. Since this bridging code is going to be coupled to some API anyway, leverage it to the hilt; there is absolutely no point in abstracting yourself from the environment in bridge code.
DS (and Blueprint, iPOJO, etc.) hides the service registry from the business logic; these frameworks are just very convenient ways to provide your domain registry. So with OSGi you hardly have to write any code to get a very powerful repository of domain objects, without the domain objects themselves being aware of OSGi.
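For illustration, a minimal sketch of that split using Declarative Services (all names here are hypothetical): the domain API bundle contains only plain Java, while a small bridge class carries the OSGi annotations and publishes the repository into the service registry.
// Domain API bundle: plain Java, no OSGi imports (hypothetical types).
public interface Account {
    String getId();
}

public interface AccountRepository {
    Account findById(String id);
    void store(Account account);
}

// Bridge bundle: the only code that sees OSGi. Declarative Services
// registers this implementation under the AccountRepository interface.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.osgi.service.component.annotations.Component;

@Component(service = AccountRepository.class)
public class InMemoryAccountRepository implements AccountRepository {

    private final Map<String, Account> accounts = new ConcurrentHashMap<>();

    public Account findById(String id) {
        return accounts.get(id);
    }

    public void store(Account account) {
        accounts.put(account.getId(), account);
    }
}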
Yes, if you move your code from OSGi to somewhere else you will have to rewrite the bridge code, and coming from an OSGi world, the amount of generic functionality OSGi provides that you would have to rewrite is surprisingly large. However, not using the OSGi constructs just because you would have to provide that functionality yourself in the event the app is ever ported is throwing away money.
Conclusion: domain code should not mix with OSGi; bridge code should leverage OSGi to the hilt.

I think you should simply register the repository as an OSGi service. The bundles that need the repository should reference the service and should only know about the repository interface. Of course this means you have to use either a ServiceTracker in an Activator or e.g. a Blueprint context. I prefer the Blueprint context, as this way you do not have a real Java-level dependency on OSGi.
Of course this creates a dependency on OSGi in some way, but that is not avoidable. The basic idea to follow is that you should keep the OSGi-specific code out of your business code and have it in separate classes or in Blueprint.
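A sketch of that separation (hypothetical names; Declarative Services annotations are shown here, but a Blueprint reference element expresses the same wiring in XML): the business class depends only on the repository interface, and the injection lives in a separate bridge class.
// Domain-level repository interface from an API bundle; no OSGi imports.
public interface CustomerRepository {
    String findNameById(String customerId);
}

// Business code: depends only on the interface above, still no OSGi imports.
public class WelcomeLetterService {

    private final CustomerRepository customers;

    public WelcomeLetterService(CustomerRepository customers) {
        this.customers = customers;
    }

    public String welcomeLetterFor(String customerId) {
        return "Dear " + customers.findNameById(customerId) + ", welcome!";
    }
}

// Separate bridge class: the only place OSGi shows up. DS injects whichever
// CustomerRepository implementation is registered in the service registry.
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component
public class WelcomeLetterComponent {

    @Reference
    private CustomerRepository customers;

    private WelcomeLetterService service;

    @Activate
    void activate() {
        // the business class is wired up without any OSGi types leaking into it
        service = new WelcomeLetterService(customers);
    }
}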
See one of my tutorials, which shows a simple application with a UI, a model, and a persistence implementation. While the application is too small to really do DDD, the approach should be fairly compatible. One thing you will notice is that the whole application does not import a single OSGi interface. One nice side effect is that you can easily reuse the code outside OSGi, but then of course you have to solve the DI differently.
http://www.liquid-reality.de/x/DIBZ

Related

How to read Config values in Cross Cutting project?

I have followed DDD guidelines to structure my project: I have Domain, Infrastructure, Application, and Presentation layers.
I have also defined a cross-cutting project called Common. All other projects have a dependency on Common.
One of the things that I need in the Common project is my config/setting values. I have defined all solution-wide settings in a DB table. The Common project reads the settings, and any other project can access these settings through the Common project...
How should the Common project access the DB? Everywhere else in the solution I use the Infrastructure layer to read from the DB, but if I reference Infrastructure in the Common project, I get a circular dependency.
Should the Common project have its own DB reader? Or was putting all the config in the Common project not the correct design in the first place?
The common package could be organized by feature. Here the IConfigProvider implementations would live in the same package as the interface.
You could also consider global configuration as a supporting BC and implement the appropriate anti-corruption layer in each downstream context, where every context has its own view and interpretation of such configuration.
Dependencies are always an interesting thing.
Since you haven't specified which languages/environments you are using, and I have experience with C#, I will use techniques and terminology related to strongly typed OO languages like it.
Let's say that we will separate interface from implementation. I will use the C# convention that an interface name begins with a capital 'I' to make it clearer what is an interface.
Your repositories are part of your domain. Let's say you have an Account entity and an AccountRepository for this entity.
What we will do is separate the interface for this repository from its implementation. We will have IAccountRepository and a concrete implementation (maybe more than one, but this is very rare) for it: AccountRepository.
If we want to use a SQL database we may have a SQLAccountRepository. If we want to use MongoDB we may have a MongoDBAccountRepository. Both of these concrete repositories implement the interface IAccountRepository.
IAccountRepository is part of your Domain, but the implementations (SQL, MongoDB etc.) are part of your Infrastructure layer as they will access external things (SQL server or MongoDB server in this example).
Your dependencies in this case will be 'Infrastructure -> Domain', not 'Domain -> Infrastructure'. This isolates your domain from the Infrastructure, as the Infrastructure has a reference to the Domain, not vice versa.
By using interfaces your Domain only specifies what it needs not how it needs it.
If you apply the same idea, you can define interfaces in your Common project for getting (and setting, if necessary) settings (ISettingsProvider, IApplicationSettings, etc.) and allow the Infrastructure that references Common to provide the implementations for these interfaces (SQLSettingsProvider, etc.).
You can use dependency injection, a service locator, or a similar technique to bind the implementations to the interfaces.

Using MEF as an IoC

After reading some stuff such as this: http://mikehadlow.blogspot.com/2008/09/managed-extensibility-framework-why.html
I understand that MEF has some features that I will not find in an IoC container, and that MEF has some IoC capabilities that are probably not as advanced as what some other IoC systems can offer.
I need the MEF stuff. Do I also need an IoC framework or would I be fine with what MEF has?
Asher
Depends on your requirements/existing code.
If you have an existing code infrastructure built on an IoC container, you can in fact combine these with MEF. Recently I've been building an ASP.NET MVC+MEF framework, and a couple of my readers have been asking how to integrate Unity with the MEF+MVC framework I have built. This turned out to be really easy, thanks to a project called the Common Service Locator.
The CSL project is designed to provide an abstraction over service location, so I can grab a CSL provider for Unity, wire it up with a custom ExportProvider and MEF automatically starts composing my IoC-driven parts.
This is one of the benefits of MEF's ExportProvider model: you can easily plug in additional providers to start pulling exports from a variety of sources.
Last week I blogged about combining MEF+Unity (and also MEF+Autofac as another example), and although my examples are geared up for ASP.NET MVC, the concept is the same for most other implementations.
If you have the option of building something fresh using MEF, you'll probably find that you won't need an IoC container: MEF can handle property injection, constructor injection, part lifetime management, and type resolution.
Let me know if you have any questions :)
An IoC container serves a specific purpose.
MEF in particular is well designed if you want some sort of plugin system, or code that you do not trust to run properly (don't forget to handle exceptions). But it therefore comes with some overhead.
If you just want to do IoC, as it is a good design pattern for extensible and testable software, then I would recommend AutoFac, as it is from the same guy, more or less :-)
And if you need both intentions, then use both. As Matthew pointed out, you can use the CSL to abstract over both, if wanted:
for plugins -> MEF
for your IoC -> a simple IoC container
We use MEF as an IoC container and it is working for us.
That being said, you should take a look at this blog post by Glenn Block where he lists the shortcomings you might encounter: Should I use MEF for my general IoC needs?
I am using MEF for both extensibility and IoC in SoapBox Core. There's also an article on CodeProject called Building an Extensible Application with MEF, WPF, and MVVM describing how it works.
I was using the Provider Model for essentially "IoC" in a web application before switching to AutoFac recently.
Basically my requirement is that I need to be able to "on-sell" applications, so any interfaces need to be abstracted. Using the Provider Model, things tend to get messy. Using an IoC container (if you are using one already) makes more sense, and I can still meet my requirement of "on-selling" because the IoC container can be "reconfigured" to allow different implementations.
I think MEF would be the best solution if you want a clean code base because it has more conventions, so essentially people could drop components in a folder and not change any configuration to instigate changes. This is nice but possibly overkill for my scenario where a simple config override is sufficient.
Read more on my blog post - hopefully the blog will get some good comments about it too: http://healthedev.blogspot.com/2011/12/making-custom-built-applications.html

How should I secure my webapp written using Wicket, Spring, and JPA?

So, I have a web-based application that is using the Wicket 1.4 framework, and it uses Spring beans, the Java Persistence API (JPA), and the OpenSessionInView pattern. I'm hoping to find a security model that is declarative, but doesn't require gobs of XML configuration -- I'd prefer annotations.
Here are the options so far:
Spring Security (guide) - looks complete, but every guide I find that combines it with Wicket still calls it Acegi Security, which makes me think it must be old.
Wicket-Auth-Roles (guide 1 and guide 2) - Most guides recommend mixing this with Spring Security, and I love the declarative style of @Authorize("ROLE1", "ROLE2", etc.). I'm concerned about having to extend AuthenticatedWebApplication, since I'm already extending org.apache.wicket.protocol.http.WebApplication, and Spring is already proxying that behind org.apache.wicket.spring.SpringWebApplicationFactory.
SWARM / WASP (guide) - This looks the newest (though the main contributor passed away years ago), but I hate all of the JAAS-styled text files that declare permissions for principals. I also don't like the idea of making an Action class for every single thing a user might want to do. Secure models also aren't immediately obvious to me. Plus, there isn't an Authn example.
Additionally, it looks like lots of folks recommend mixing the first and second options. I can't tell what the best practice is at all, though.
I don't know if you saw this blog post, so I'm adding it here as a reference, and I'll just quote the end:
Update 2009/03/12: those interested in securing Wicket applications should also be aware that there is an alternative to Wicket-Security, called wicket-auth-roles. This thread will give you a good overview of the status of the two frameworks. Integrating wicket-auth-roles with Spring Security is covered here. One compelling feature of wicket-auth-roles is the ability to configure authorizations with Java annotations. I find it somehow more elegant than a centralized configuration file. There is an example here.
Based on the information above and the information you provided, and because I prefer annotations too, I'd go for Wicket-Auth-Roles with Spring Security (i.e. guide 2). Extending AuthenticatedWebApplication shouldn't be a problem, as this class extends WebApplication. And pulling your application object out of the Spring context using SpringWebApplicationFactory should also just work.
And if your concerns are really big, this would be pretty easy and fast to confirm with a test IMO :)
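For reference, the declarative wicket-auth-roles style on a page looks roughly like this (page and role names are hypothetical; the package shown is the Wicket 1.4 location and moved in later versions):
// Only users granted the ADMIN role may instantiate this page.
import org.apache.wicket.authorization.strategies.role.annotations.AuthorizeInstantiation;
import org.apache.wicket.markup.html.WebPage;
import org.apache.wicket.markup.html.basic.Label;

@AuthorizeInstantiation("ADMIN")
public class AdminDashboardPage extends WebPage {

    public AdminDashboardPage() {
        add(new Label("title", "Administration"));
    }
}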
We've been using Wicket-Security for years now, and we have used it together with JAAS files and with annotations. Defining JAAS files is quite a hassle and maintaining them is near impossible...
With annotations one has to define actions and principals for every page. This is time-consuming; however, it does allow you to let the user define roles and authorizations dynamically. It is also possible to test all the principals using the WicketTester.
Each of the three packages has its advantages and disadvantages; it's a matter of taste, and it also depends on the size of the application.

Questions regarding Domain-Driven Design

After reading Eric Evans' Domain-Driven Design I have a few questions. I searched, but nowhere could I find satisfying answers. Please let me know if any of you have a clear understanding of the questions below.
My concerns are:
Is the Repository only for getting already existing aggregates from the DB, a web service, etc.?
If yes, can the Repository also have transactional calls on an entity (e.g. transfer amount, send account details, etc.)?
Can an Entity have methods containing business logic that call infrastructure-layer services directly, for example for sending emails, logging, etc.?
Repository implementations and Factory classes will reside in the Infrastructure layer. Is that a correct statement?
Can the UI layer (controller) call Repository methods directly, or should we call these from the Application layer?
There is still a lot of confusion in my mind ... please guide me ...
Books I am using: Eric Evans' Domain-Driven Design and .NET Domain-Driven Design with C#.
There is a lot of debate about whether Repositories should be read-only or allow transactions. DDD doesn't dictate either view; you can do both. Proponents of read-only Repositories prefer a Unit of Work for all CUD (create, update, delete) operations.
Most people (myself included) consider it good practice that Entities are persistence-ignorant. Extending that principle a bit would indicate that they should be self-contained and free of all infrastructure-layer services - even in abstract form. So I would say that calls to infrastructure services belong in Service classes that operate on Entities.
It sounds correct that Repository implementations and Factories (if any) should reside in the infrastructure layer. Their interfaces, however, must reside in the Domain Layer so that the domain services can interact with them without having dependencies on the infrastructure layer.
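As a rough sketch of that layering (hypothetical names; plain Java and JPA are used here only for illustration): the interface lives in the domain layer, the implementation in the infrastructure layer.
// Domain layer: a minimal entity. JPA annotations are used here for brevity;
// XML mappings would keep the class fully persistence-ignorant.
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Account {

    @Id
    private long id;

    protected Account() { }            // required by JPA

    public Account(long id) { this.id = id; }

    public long getId() { return id; }
}

// Domain layer: the repository interface the rest of the domain depends on.
public interface AccountRepository {
    Account findById(long id);
    void add(Account account);
}

// Infrastructure layer: the JPA-backed implementation of the domain interface.
import javax.persistence.EntityManager;

public class JpaAccountRepository implements AccountRepository {

    private final EntityManager entityManager;

    public JpaAccountRepository(EntityManager entityManager) {
        this.entityManager = entityManager;
    }

    public Account findById(long id) {
        return entityManager.find(Account.class, id);
    }

    public void add(Account account) {
        entityManager.persist(account);
    }
}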
DDD doesn't really dictate whether you can skip layers or not. Late in the book, Evans talks a bit about layering and calls it Relaxed Layering when you allow this, so I guess he just sees it as one option among several. Personally I'd prefer to prevent layer skipping, because it makes it easier to inject some behavior at a future time if calls already go through the correct layers.
Personally, in my latest DDD project, I use a Unit of Work that holds an NHibernate session. The UoW is constructor-injected into the repositories, giving them the single responsibility of Add, Remove, and Find.
Evans has stated that one piece of the puzzle that's missing in the DDD book is «Domain Events». Using something like Udi Dahan's DomainEvents will give you a totally decoupled architecture (the domain object simply raises an event). Personally, I use a modified version of Domain Events and StructureMap for the wiring. It works great for my needs.
I recommend, based on other recommendations, that the Repository interfaces be a part of the model, and their implementations be a part of the infrastructure.
Yes! I've personally worked on three DDD web projects where services and repositories were injected into the presenters/controllers (ASP.NET/ASP.NET MVC), and it made a lot of sense in our context.
The repository should only be for locating and saving entities; there should not be any business logic in that layer. For example:
repository.TransferAmount(amount, toAccount); // this is bad
Entities can do things like send emails as long as they depend on abstractions defined in your domain. The implementation should be in your infrastructure layer.
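For illustration, a small sketch of that dependency direction (hypothetical names): the abstraction is declared in the domain, and the actual email machinery sits behind it in the infrastructure layer.
// Domain layer: the abstraction the entity is allowed to know about.
public interface NotificationSender {
    void registrationCompleted(String customerId, String emailAddress);
}

// Domain layer: the entity depends only on the abstraction above.
public class Customer {

    private final String id;
    private final String emailAddress;

    public Customer(String id, String emailAddress) {
        this.id = id;
        this.emailAddress = emailAddress;
    }

    public void announceRegistration(NotificationSender notifications) {
        // no SMTP, mail API, or logging framework types appear here
        notifications.registrationCompleted(id, emailAddress);
    }
}

// Infrastructure layer: the real implementation (SMTP, a mail service, etc.)
// goes here; a console-backed stand-in keeps this sketch self-contained.
public class ConsoleNotificationSender implements NotificationSender {

    public void registrationCompleted(String customerId, String emailAddress) {
        System.out.println("Would email " + emailAddress + " about customer " + customerId);
    }
}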
Yes, you put your repository implementation in your infrastructure layer.
Can the UI layer (controller) call Repository methods directly, or should we call these from the Application layer?
Yes, I try to follow this pattern for the most part:
[UnitOfWork]  // the action runs inside a unit of work / transaction
public ActionResult MyControllerAction(int id)
{
    // the controller only talks to the repository interface and the entity
    var entity = repository.FindById(id);
    entity.DoSomeBusinessLogic();
    repository.Update(entity);
    return View(entity);  // return added so the action compiles
}

Using Java classes in Grails

I have a Java/Spring/Hibernate application, complete with domain classes which are basically Hibernate POJOs.
There is a piece of functionality that I think can be written well in Grails.
I wish to reuse the domain classes that I have created in the main Java app.
What is the best way to do so?
Should I write new domain classes extending the Java classes? This sounds tacky.
Or can I 'generate' controllers off the Java domain classes?
What are the best practices around reusing Java domain objects in Grails/Groovy?
I am sure there must be others writing some pieces in Grails/Groovy.
If you know about a tutorial which talks about such an integration, that would be awesome!
PS: I am quite a newbie in Grails/Groovy, so I may be missing the obvious. Thanks!
Knowing just how well Groovy and Grails excel at integrating with existing Java code, I think I might be a bit more optimistic than Michael about your options.
First thing is that you're already using Spring and Hibernate, and since your domain classes are already POJOs they should be easy to integrate with. Any Spring beans you might have can be specified in an XML file as usual (in grails-app/conf/spring/resources.xml) or much more simply using the Spring bean builder feature of Grails. They can then be accessed by name in any controller, view, service, etc. and worked with as usual.
Here are the options, as I see them, for integrating your domain classes and database schema:
Bypass GORM and load/save your domain objects exactly as you're already doing.
Grails doesn't force you to use GORM, so this should be quite straightforward: create a .jar of your Java code (if you haven't already) and drop it into the Grails app's lib directory. If your Java project is Mavenized, it's even easier: Grails 1.1 works with Maven, so you can create a pom.xml for your Grails app and add your Java project as a dependency as you would in any other (Java) project.
Either way you'll be able to import your classes (and any supporting classes) and proceed as usual. Because of Groovy's tight integration with Java, you'll be able to create objects, load them from the database, modify them, save them, validate them etc. exactly as you would in your Java project. You won't get all the conveniences of GORM this way, but you would have the advantage of working with your objects in a way that already makes sense to you (except maybe with a bit less code thanks to Groovy). You could always try this option first to get something working, then consider one of the other options later if it seems to make sense at that time.
One tip if you do try this option: abstract the actual persistence code into a Grails service (StorageService perhaps) and have your controllers call methods on it rather than handling persistence directly. This way you could replace that service with something else down the road if needed, and as long as you maintain the same interface your controllers won't be affected.
Create new Grails domain classes as subclasses of your existing Java classes.
This could be pretty straightforward if your classes are already written as proper beans, i.e. with getter/setter methods for all their properties. Grails will see these inherited properties as it would if they were written in the simpler Groovy style. You'll be able to specify how to validate each property, using either simple validation checks (not null, not blank, etc.) or with closures that do more complicated things, perhaps calling existing methods in their POJO superclasses.
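For example, an existing Java class written as a conventional bean (hypothetical) already exposes everything Grails needs to see its fields as domain properties in a Groovy subclass:
// Existing Java POJO: a plain bean with getters/setters, which a Grails
// domain class could extend and decorate with a constraints block.
public class Customer {

    private Long id;
    private String name;
    private String email;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}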
You'll almost certainly need to tweak the mappings via the GORM mapping DSL to fit the realities of your existing database schema. Relationships would be where it might get tricky. For example, you might have some other solution where GORM expects a join table, though there may even be a way to work around differences such as these. I'd suggest learning as much as you can about GORM and its mapping DSL and then experiment with a few of your classes to see if this is a viable option.
Have Grails use your existing POJOs and Hibernate mappings directly.
I haven't tried this myself, but according to Grails's Hibernate Integration page this is supposed to be possible: "Grails also allows you to write your domain model in Java or re-use an existing domain model that has been mapped using Hibernate. All you have to do is place the necessary 'hibernate.cfg.xml' file and corresponding mappings files in the '%PROJECT_HOME%/grails-app/conf/hibernate' directory. You will still be able to call all of the dynamic persistent and query methods allowed in GORM!"
Googling "gorm legacy" turns up a number of helpful discussions and examples, for example this blog post by Glen Smith (co-author of the soon-to-be-released Grails in Action) where he shows a Hibernate mapping file used to integrate with "the legacy DB from Hell". Grails in Action has a chapter titled "Advanced GORM Kungfu" which promises a detailed discussion of this topic. I have a pre-release PDF of the book, and while I haven't gotten to that chapter yet, what I've read so far is very good, and the book covers many topics that aren't adequately discussed in other Grails books.
Sorry I can't offer any personal experience on this last option, but it does sound doable (and quite promising). Whichever option you choose, let us know how it turns out!
Do you really want/need to use Grails rather than just Groovy?
Grails really isn't something you can use to add a part to an existing web app. The whole "convention over configuration" approach means that you pretty much have to play by Grails' rules; otherwise there is no point in using it. And one of those rules is that domain objects are Groovy classes that are heavily "enhanced" by the Grails runtime.
It might be possible to have them extend existing Java classes, but I wouldn't bet on it - and all the Spring and Hibernate parts of your existing app would have to be discarded, or at least you'd have to spend a lot of effort to make them work in Grails. You'll be fighting the framework rather than profiting from it.
IMO you have two options:
Rewrite your app from scratch in Grails while reusing as much of the existing code as possible.
Keep your app as it is and add new stuff in Groovy, without using Grails.
The latter is probably better in your situation. Grails is meant to create new web apps very quickly, that's where it shines. Adding stuff to an existing app just isn't what it was made for.
EDIT:
Concerning the clarification in the comments: if you're planning to write basically a data entry/maintenance frontend for data used by another app and have the DB as the only communication channel between them, that might actually work quite well with Grails; it can certainly be configured to use an existing DB schema rather than creating its own from the domain classes (though the latter is less work).
This post provides some suggestions for using Grails to wrap existing Java classes in a web framework.
