After reading some stuff such as this: http://mikehadlow.blogspot.com/2008/09/managed-extensibility-framework-why.html
I understand that MEF has some features that I will not find in an IoC container, and that MEF has some IoC functionality that is probably not as advanced as what some other IoC systems can offer.
I need the MEF stuff. Do I also need an IoC framework or would I be fine with what MEF has?
Asher
Depends on your requirements/existing code.
If you have an existing code infrastructure built on an IoC container, you can in fact combine it with MEF. Recently I've been building an ASP.NET MVC+MEF framework, and a couple of my readers have been asking how to integrate Unity with the MEF+MVC framework I have built. This turned out to be really easy, thanks to a project called the Common Service Locator.
The CSL project is designed to provide an abstraction over service location, so I can grab a CSL provider for Unity, wire it up with a custom ExportProvider and MEF automatically starts composing my IoC-driven parts.
This is one of the benefits of MEF's ExportProvider model: you can easily plug in additional providers to start pulling exports from a variety of sources.
Last week I blogged about combining MEF+Unity (and also MEF+Autofac as another example), and although my examples are geared up for ASP.NET MVC, the concept is the same for most other implementations.
If you have the option of building something fresh using MEF, you'll probably find that you won't need an IoC container; MEF can handle property injection, constructor injection, part lifetime management, and type resolution.
Let me know if you have any questions :)
An IoC container serves a particular purpose.
MEF is especially well designed for cases where you want some sort of plugin model in your system, or where you run code that you do not trust to behave properly (don't forget to handle exceptions).
But that flexibility comes with overhead.
If you just want IoC, as it is a good design pattern for extensible and testable software, then I would recommend Autofac, as it is from the same guy. More or less :-)
And if you need both intentions, then use both. As Matthew pointed out, you can use the CSL to abstract the two, if wanted.
for plugins -> MEF
for your IoC -> a simple IoC container
We use MEF as an IoC container and it is working for us.
That being said, you should take a look at this blog post by Glenn Block where he lists the shortcomings you might encounter: Should I use MEF for my general IoC needs?
I am using MEF for both extensibility and IoC in SoapBox Core. There's also an article on CodeProject called Building an Extensible Application with MEF, WPF, and MVVM describing how it works.
I was using the Provider Model for essentially "IoC" in a web application before switching to Autofac recently.
Basically my requirement is that I need to be able to "on-sell" applications, so any interfaces need to be abstracted. With the Provider Model, things tend to get messy. Using an IoC container (if you are already using one) makes more sense, and I can still meet my requirement of "on-selling", because the IoC container can be "reconfigured" to allow different implementations.
I think MEF would be the best solution if you want a clean code base, because it has more conventions: essentially, people could drop components in a folder and changes would take effect without touching any configuration. This is nice, but possibly overkill for my scenario, where a simple config override is sufficient.
Read more in my blog post - hopefully it will get some good comments too: http://healthedev.blogspot.com/2011/12/making-custom-built-applications.html
I have a Core project where I need to do some cryptographic operations, e.g. verification of SHA256. What can I do if it's the Core project, which shouldn't depend on anything? Do I have to write my own cryptographic functions that are resistant to e.g. side-channel attacks? That would cause security problems.
So what should I do? Can my Core project depend on a NuGet package if I use Clean Architecture?
The guideline regarding dependencies is to keep the core project as simple as possible so that most of its logic is about solving the business problem.
By keeping it simple, it's much easier to express which part of the business domain the classes solve. It's also easy to write focused tests that prove that the code can solve the correct part of the business problem.
To me, preventing attacks is not a part of that. It's something that should be done on inbound API calls before the domain is called. I would put that logic in application services. Those services can, of course, live in the Core project but not in any of the bounded contexts.
In Clean Architecture we try to keep the domain and application logic as independent from external libraries and frameworks as possible so that we do not depend on their future development.
Nevertheless the application logic will have to interact with external libraries, services and other IO which is achieved via "dependency inversion": the application logic defines an interface which is implemented by the outer layers (infrastructure).
This way the application logic remains "clean" and can focus on decision making, while you can still reuse external libraries and services.
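As a minimal sketch of that inversion (in Java purely for illustration; the interface and class names are invented, and the same shape works on .NET): the core owns the interface, and an outer-layer adapter implements it with a standard library.

    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    // Core layer: the application logic owns this "port" and knows nothing
    // about the crypto library behind it.
    interface SignatureVerifier {
        boolean verifySha256(byte[] data, byte[] expectedDigest);
    }

    // Infrastructure layer: an adapter implements the port using the JDK,
    // so only the outer layer depends on an external API.
    class JdkSignatureVerifier implements SignatureVerifier {
        public boolean verifySha256(byte[] data, byte[] expectedDigest) {
            try {
                byte[] actual = MessageDigest.getInstance("SHA-256").digest(data);
                // MessageDigest.isEqual compares in constant time, which helps
                // against timing side channels.
                return MessageDigest.isEqual(actual, expectedDigest);
            } catch (NoSuchAlgorithmException e) {
                throw new IllegalStateException("SHA-256 not available", e);
            }
        }
    }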
You can find a more detailed discussion of this topic here: http://www.plainionist.net/Implementing-Clean-Architecture-Frameworks/
After reading OSGi application design - am I abusing the service framework? and What is the best way of grouping OSGi bundles to make a coherent 'application', I'm now stuck with the burning question of how to apply the "don't program OSGi" mantra.
Assuming the bounded context is the entire application and not the individual bundles, I should be free to declare some aggregate root entity in an API bundle, and in the DDD style, a repository for working with said entity. Now, the crux of the matter: an explicit repository appears to be antithetical to the OSGi style because a repository is itself a registry (of domain objects), and this design sidesteps the OSGi service registry. But to do away with the repository would require consumers to program OSGi in order to perform lookups of an entity.
It is said that repositories are just a façade--does this imply that I should create a repository implementation which delegates to the service registry? This does not appear to be a coherent approach since consumers would have two entry points into the domain model: the repository, and the service registry itself. To forgo the repository would no longer be DDD because we're back to mixing domain logic with framework code in a spaghetti-like fashion.
So what's the take-away here? Is domain-driven design incompatible with the "OSGi way," or am I missing some critical concept?
The rationale for not depending on OSGi is that OSGi is middleware, and middleware visible in your domain code makes that code less cohesive. Domain code and OSGi code should not mix (just as domain code should not mix with JMS, Java EE, or any other API). Less cohesive code is more complicated and therefore error prone.
However, you always need some bridging code that links the infrastructure to your domain code. Since this bridging code is going to be coupled to some API anyway, leverage it to the hilt; there is absolutely no use in abstracting yourself from the environment in bridge code.
DS (and Blueprint, iPOJO, etc.) hide the registry from the business logic; they are just very convenient ways to provide your domain registry. So with OSGi you hardly have to write any code to get a very powerful repository of domain objects, without the domain objects themselves being aware of OSGi.
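As a rough sketch of that split using Declarative Services annotations (all names here are invented; in a real system the domain and bridge classes would live in separate bundles, each component class public in its own file):

    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Reference;

    // Domain code: plain Java, no OSGi imports at all.
    class Order { }

    interface OrderRepository {
        Order findById(String id);
    }

    // Bridge code: DS publishes the implementation in the service registry;
    // only this class is aware of the container.
    @Component(service = OrderRepository.class)
    class JpaOrderRepository implements OrderRepository {
        public Order findById(String id) {
            // ... load the aggregate from persistence ...
            return new Order();
        }
    }

    // A consumer gets the repository injected; it never queries the
    // registry itself.
    @Component
    class OrderProcessor {
        @Reference
        private OrderRepository orders;
    }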
Yes, if you move your code from OSGi to somewhere else you will have to rewrite the bridge code, and the amount of generic functionality OSGi provides that you would then have to write yourself is surprisingly large. However, not using the OSGi constructs just because you would have to replace that functionality in the event that the app is ported is throwing away money.
Conclusion: domain code should not mix with OSGi; bridge code should leverage OSGi to the hilt.
I think you should simply register the repository as an OSGi service. The bundles that need the repository should reference the service and should only know of the repository interface. Of course this means you have to use either a ServiceTracker in an Activator or e.g. a Blueprint context. I prefer the Blueprint context, as this way you do not have a real Java-level dependency on OSGi.
Of course this creates a dependency on OSGi in some way, but that is not avoidable. The basic idea to follow is that you should keep the OSGi-specific code out of your business code and have it in separate classes or in Blueprint.
See one of my tutorials, which shows a simple application with a UI, a model, and a persistence implementation. While the application is too small to really do DDD, the approach should be fairly compatible. One thing you will notice is that the whole application does not import a single OSGi interface. One nice side effect is that you can easily reuse the code outside OSGi, but then of course you have to solve the DI differently.
http://www.liquid-reality.de/x/DIBZ
So, I have a web-based application that is using the Wicket 1.4 framework, and it uses Spring beans, the Java Persistence API (JPA), and the OpenSessionInView pattern. I'm hoping to find a security model that is declarative, but doesn't require gobs of XML configuration -- I'd prefer annotations.
Here are the options so far:
Spring Security (guide) - looks complete, but every guide I find that combines it with Wicket still calls it Acegi Security, which makes me think those guides must be old.
Wicket-Auth-Roles (guide 1 and guide 2) - Most guides recommend mixing this with Spring Security, and I love the declarative style of @Authorize("ROLE1","ROLE2",etc). I'm concerned about having to extend AuthenticatedWebApplication, since I'm already extending org.apache.wicket.protocol.http.WebApplication, and Spring is already proxying that behind org.apache.wicket.spring.SpringWebApplicationFactory.
SWARM / WASP (guide) - This looks the newest (though the main contributor passed away years ago), but I hate all of the JAAS-styled text files that declare permissions for principals. I also don't like the idea of making an Action class for every single thing a user might want to do. Secure models also aren't immediately obvious to me. Plus, there isn't an authentication example.
Additionally, it looks like lots of folks recommend mixing the first and second options. I can't tell what the best practice is at all, though.
I don't know if you saw this blog post, so I'm adding it here as a reference, and I'll just quote the end:
Update 2009/03/12: those interested in securing Wicket applications should also be aware that there is an alternative to Wicket-Security, called wicket-auth-roles. This thread will give you a good overview of the status of the two frameworks. Integrating wicket-auth-roles with Spring Security is covered here. One compelling feature of wicket-auth-roles is the ability to configure authorizations with Java annotations. I find it somehow more elegant than a centralized configuration file. There is an example here.
Based on the information above and the information you provided, and because I prefer annotations too, I'd go for Wicket-Auth-Roles with Spring Security (i.e. guide 2). Extending AuthenticatedWebApplication shouldn't be a problem, as this class extends WebApplication. And pulling your application object out of the Spring context using SpringWebApplicationFactory should also just work.
And if your concerns are really big, this would be pretty easy and fast to confirm with a test IMO :)
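For a flavor of the declarative style, a page secured with wicket-auth-roles looks something like this (assuming the Wicket 1.4 package layout; AdminPage is a made-up name):

    import org.apache.wicket.authorization.strategies.role.annotations.AuthorizeInstantiation;
    import org.apache.wicket.markup.html.WebPage;

    // Only users carrying the ADMIN role may instantiate this page; the
    // configured authorization strategy intercepts everyone else.
    @AuthorizeInstantiation("ADMIN")
    public class AdminPage extends WebPage {
        public AdminPage() {
            // ... add admin-only components here ...
        }
    }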
We've been using Wicket-Security for years now, and we have used it together with JAAS files and with annotations. Defining JAAS files is quite a hassle and maintaining them is near impossible...
With annotations one has to define actions and principals for every page. This is time-consuming; however, it does allow you to let the user define roles and authorizations dynamically. It is also possible to test all the principals using the WicketTester.
Each of the 3 packages has its (dis)advantages; it's a matter of taste, and it also depends on the size of the application.
I have a Java/Spring/Hibernate application, complete with domain classes which are basically Hibernate POJOs.
There is a piece of functionality that I think can be written well in Grails.
I wish to reuse the domain classes that I have created in the main Java app.
What is the best way to do so?
Should I write new domain classes extending the Java classes? That sounds tacky.
Or can I 'generate' controllers off the Java domain classes?
What are the best practices around reusing Java domain objects in Grails/Groovy?
I am sure there must be others writing some pieces in Grails/Groovy.
If you know about a tutorial which talks about such an integration, that would be awesome!!!
PS: I am quite a newbie in Grails/Groovy, so I may be missing the obvious. Thanks!!!
Knowing just how well Groovy and Grails excel at integrating with existing Java code, I think I might be a bit more optimistic than Michael about your options.
First thing is that you're already using Spring and Hibernate, and since your domain classes are already POJOs they should be easy to integrate with. Any Spring beans you might have can be specified in an XML file as usual (in grails-app/conf/spring/resources.xml) or much more simply using the Spring bean builder feature of Grails. They can then be accessed by name in any controller, view, service, etc. and worked with as usual.
Here are the options, as I see them, for integrating your domain classes and database schema:
Bypass GORM and load/save your domain objects exactly as you're already doing.
Grails doesn't force you to use GORM, so this should be quite straightforward: create a .jar of your Java code (if you haven't already) and drop it into the Grails app's lib directory. If your Java project is Mavenized, it's even easier: Grails 1.1 works with Maven, so you can create a pom.xml for your Grails app and add your Java project as a dependency as you would in any other (Java) project.
Either way you'll be able to import your classes (and any supporting classes) and proceed as usual. Because of Groovy's tight integration with Java, you'll be able to create objects, load them from the database, modify them, save them, validate them etc. exactly as you would in your Java project. You won't get all the conveniences of GORM this way, but you would have the advantage of working with your objects in a way that already makes sense to you (except maybe with a bit less code thanks to Groovy). You could always try this option first to get something working, then consider one of the other options later if it seems to make sense at that time.
One tip if you do try this option: abstract the actual persistence code into a Grails service (StorageService perhaps) and have your controllers call methods on it rather than handling persistence directly. This way you could replace that service with something else down the road if needed, and as long as you maintain the same interface your controllers won't be affected.
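The Java side of that tip can stay a thin facade over your existing Hibernate code (class and entity names here are invented; Customer stands in for one of your mapped POJOs), which the Grails service then delegates to:

    import org.hibernate.SessionFactory;

    // A narrow persistence facade around the existing Hibernate setup; a
    // Grails service (e.g. StorageService) would simply delegate to it.
    public class CustomerStorage {
        private final SessionFactory sessionFactory;

        public CustomerStorage(SessionFactory sessionFactory) {
            this.sessionFactory = sessionFactory;
        }

        public Customer load(Long id) {
            return (Customer) sessionFactory.getCurrentSession().get(Customer.class, id);
        }

        public void save(Customer customer) {
            sessionFactory.getCurrentSession().saveOrUpdate(customer);
        }
    }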
Create new Grails domain classes as subclasses of your existing Java classes.
This could be pretty straightforward if your classes are already written as proper beans, i.e. with getter/setter methods for all their properties. Grails will see these inherited properties as it would if they were written in the simpler Groovy style. You'll be able to specify how to validate each property, using either simple validation checks (not null, not blank, etc.) or with closures that do more complicated things, perhaps calling existing methods in their POJO superclasses.
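"Proper beans" here just means the standard JavaBean shape; for instance, a hypothetical existing class like this would surface firstName and lastName to Grails as domain properties:

    // An existing Java domain class in plain JavaBean form.
    public class Person {
        private String firstName;
        private String lastName;

        public String getFirstName() { return firstName; }
        public void setFirstName(String firstName) { this.firstName = firstName; }

        public String getLastName() { return lastName; }
        public void setLastName(String lastName) { this.lastName = lastName; }
    }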
You'll almost certainly need to tweak the mappings via the GORM mapping DSL to fit the realities of your existing database schema. Relationships would be where it might get tricky. For example, you might have some other solution where GORM expects a join table, though there may even be a way to work around differences such as these. I'd suggest learning as much as you can about GORM and its mapping DSL and then experiment with a few of your classes to see if this is a viable option.
Have Grails use your existing POJOs and Hibernate mappings directly.
I haven't tried this myself, but according to Grails's Hibernate Integration page this is supposed to be possible: "Grails also allows you to write your domain model in Java or re-use an existing domain model that has been mapped using Hibernate. All you have to do is place the necessary 'hibernate.cfg.xml' file and corresponding mappings files in the '%PROJECT_HOME%/grails-app/conf/hibernate' directory. You will still be able to call all of the dynamic persistent and query methods allowed in GORM!"
Googling "gorm legacy" turns up a number of helpful discussions and examples, for example this blog post by Glen Smith (co-author of the soon-to-be-released Grails in Action) where he shows a Hibernate mapping file used to integrate with "the legacy DB from Hell". Grails in Action has a chapter titled "Advanced GORM Kungfu" which promises a detailed discussion of this topic. I have a pre-release PDF of the book, and while I haven't gotten to that chapter yet, what I've read so far is very good, and the book covers many topics that aren't adequately discussed in other Grails books.
Sorry I can't offer any personal experience on this last option, but it does sound doable (and quite promising). Whichever option you choose, let us know how it turns out!
Do you really want/need to use Grails rather than just Groovy?
Grails really isn't something you can use to add a part to an existing web app. The whole "convention over configuration" approach means that you pretty much have to play by Grails' rules, otherwise there is no point in using it. And one of those rules is that domain objects are Groovy classes that are heavily "enhanced" by the Grails runtime.
It might be possible to have them extend existing Java classes, but I wouldn't bet on it - and all the Spring and Hibernate parts of your existing app would have to be discarded, or at least you'd have to spend a lot of effort to make them work in Grails. You'll be fighting the framework rather than profiting from it.
IMO you have two options:
Rewrite your app from scratch in Grails while reusing as much of the existing code as possible.
Keep your app as it is and add new stuff in Groovy, without using Grails.
The latter is probably better in your situation. Grails is meant to create new web apps very quickly, that's where it shines. Adding stuff to an existing app just isn't what it was made for.
EDIT:
Concerning the clarification in the comments: if you're planning to write basically a data entry/maintenance frontend for data used by another app and have the DB as the only communication channel between them, that might actually work quite well with Grails; it can certainly be configured to use an existing DB schema rather than creating its own from the domain classes (though the latter is less work).
This post provides some suggestions for using grails for wrapping existing Java classes in a web framework.
I'm in the beginning phases of a BlackBerry/J2ME project, and along with the other limitations that come with this wonderful platform, the lack of support for reflection and the 1.3 language level mean that the vast majority of existing IoC containers are unusable. (Google has Guice for Android with no AOP, but even that requires support for annotations.)
So the space of IoC containers on J2ME is pretty limited. The one framework that has caught my attention is called the Signal Framework, and it looks pretty promising. It tries to stay conceptually close to the Spring Framework's IoC, implementing a small subset of its functionality, and does so without relying on bytecode modification or runtime XML parsing. Instead, it processes configuration XML at build time to generate Java code which implements this IoC functionality.
Generally speaking, code generation at build time seems like a very wise approach for mobile applications -- and if my app has to do less XML parsing on the user's device, that's great too!
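To make that concrete, here is a rough, hypothetical sketch of the kind of wiring a build-time generator might emit (all class names invented): plain constructor calls and lazy getters instead of reflection, which is all CLDC can handle anyway.

    // Hand-written stand-ins for application classes (illustrative only).
    interface Logger { void log(String msg); }

    class ConsoleLogger implements Logger {
        public void log(String msg) { System.out.println(msg); }
    }

    class OrderService {
        private final Logger logger;
        OrderService(Logger logger) { this.logger = logger; }
        void placeOrder() { logger.log("order placed"); }
    }

    // What the generator could produce from the XML at build time:
    // no reflection, no runtime XML parsing, just 'new'.
    final class GeneratedContainer {
        private Logger logger;
        private OrderService orderService;

        Logger logger() {
            if (logger == null) logger = new ConsoleLogger();
            return logger;
        }

        OrderService orderService() {
            if (orderService == null) orderService = new OrderService(logger());
            return orderService;
        }
    }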
So, what have your experiences been with implementing IoC on J2ME/CLDC, and how were you able to extinguish that bitter taste in your mouth?
We used Spring ME at TomTom. It worked out pretty well.
In J2ME you need to reduce the number of classes you use as much as possible to keep jar files small. This leads to many design compromises, not least of which is flexibility.
It is not easy to adjust to J2ME development when you have to throw most of what you have learnt (and come to value highly) about OO out the window. The truth is, if you want apps that can run on a large range of phones, you need to be very sensitive to the constraints of the devices.
As such, I do not think an IoC framework will match many people's needs for J2ME development.
You might be interested in checking out FallME. Even though I haven't used it personally, it seems like a no-nonsense framework built specifically for the J2ME platform.
I came across Spring ME during a Dutch JUG conference (though I have no experience with it whatsoever).
Signal Framework it is.
Update: unfortunately, Signal is very undercooked right now, so I'm going with Israfil IOC with custom additions.